Zelensky's Deepfake: A New Chapter in the Misinformation Era
Chapter 1: Introduction to Deepfakes
The intersection of global conflict and the difficulty of distinguishing reality from fabrication creates a perilous situation. Recently, Volodymyr Zelensky garnered attention for a video that was not actually of him. In the manipulated footage, the Ukrainian president purportedly announced his intention to "return Donbas" and urged his people to "lay down arms" before the advancing Russian forces and "return to [their] families," according to reports from Sky News.
The deceptive video amassed hundreds of thousands of views across platforms such as YouTube, Facebook, and Twitter before being removed for violating the platforms' policies on manipulated media. The hackers, whose identities remain unverified, even managed to air the deepfake on the Ukrainian TV channel Ukraine-24. Shortly after, the channel clarified on Facebook that the video was fake, and an easily detectable one: the proportions of the head and body were mismatched, and there were discrepancies in the audio. Zelensky himself addressed the issue through an Instagram post, confirming the footage was a deepfake.
This instance marks the first known use of deepfake technology in the ongoing conflict between Russia and Ukraine, and it could potentially be the first in any armed conflict.
While there have been various forms of propaganda and misinformation during this war—such as bot accounts flooding social media or dubious news articles—deepfakes represent a cutting-edge form of false information. They pose a threat to one of our most trusted sources of belief: moving faces and speaking voices.
Section 1.1: Understanding Deepfakes
For those unfamiliar with the term, a deepfake refers to a video or image that depicts a person doing or saying something they never did, created using AI models. Tech-savvy individuals can easily access deepfake software to either alter an existing video (for instance, swapping faces) or generate a completely fake one from scratch (like simulating mouth movements that match pre-recorded audio).
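To make the face-swap idea concrete, here is a minimal sketch of the shared-encoder, per-person-decoder autoencoder design that popularized this technique. It is written in PyTorch; the layer sizes, the 64x64 input, and the training procedure (omitted) are illustrative assumptions on my part, not any particular tool's architecture.

```python
# A minimal sketch of the shared-encoder / per-person-decoder autoencoder
# behind many face-swap deepfakes. Layer sizes and the 64x64 input are
# illustrative assumptions, not any particular tool's architecture.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code (pose, expression)."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Renders a face from the latent code in one specific person's likeness."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (omitted) would reconstruct person A via decoder_a and person B
# via decoder_b through the *shared* encoder. The swap then routes a face
# of A through B's decoder: A's pose and expression, B's likeness.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```

The key trick is the shared encoder: because it must describe both faces with a single latent code, that code ends up capturing pose and expression, while each decoder supplies its own person's likeness.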
Although manipulated media has existed for a long time, deep learning algorithms have ushered in a new level of realism that blurs the line between fake and real. Professor Sandra Wachter from the University of Oxford emphasizes that the current internet landscape enables misinformation to spread more widely than ever before, distinguishing it from historical examples.
Section 1.2: Identifying Deepfakes
Zelensky's deepfake was relatively simple to identify due to its poor quality. Digital forensics expert Hany Farid from UC Berkeley outlined several "obvious signs" that gave it away. It was a "low-quality, low-resolution recording," a common trick to conceal distortions. Additionally, the body and arms barely moved (high-quality deepfakes can convincingly mimic motion, but this one did not). Lastly, there were "visual inconsistencies" introduced during its creation. The first two of these cues can even be roughed out in code, as sketched below.
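For illustration, here is how low sharpness and a static body might be turned into automated checks with OpenCV. This is only a sketch under crude assumptions: treating the lower half of each frame as "the body" is my own simplification, the file name in the usage line is hypothetical, and real forensic analysis is far more sophisticated.

```python
# A rough sketch of two of Farid's cues as automated checks, using OpenCV.
# Treating the lower half of each frame as "the body" is a crude assumption
# made for illustration; real digital forensics is far more sophisticated.
import cv2
import numpy as np

def crude_deepfake_cues(video_path: str) -> dict:
    cap = cv2.VideoCapture(video_path)
    sharpness, body_motion = [], []
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Cue 1: low resolution / blur, measured as variance of the Laplacian.
        sharpness.append(cv2.Laplacian(gray, cv2.CV_64F).var())
        # Cue 2: a suspiciously static body, measured as the mean absolute
        # difference between the lower halves of consecutive frames.
        if prev_gray is not None:
            h = gray.shape[0]
            body_motion.append(np.mean(cv2.absdiff(gray[h // 2:], prev_gray[h // 2:])))
        prev_gray = gray
    cap.release()
    return {
        "mean_sharpness": float(np.mean(sharpness)) if sharpness else 0.0,       # low -> suspicious
        "mean_body_motion": float(np.mean(body_motion)) if body_motion else 0.0,  # near zero -> suspicious
    }

# Usage, with a hypothetical file name:
# print(crude_deepfake_cues("suspect_clip.mp4"))
```

Raw numbers like these only flag suspicion; they are meaningful mostly in comparison with authentic footage of the same speaker.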
Beyond technical indicators, there are other heuristics to identify deepfakes. Critical questioning, seeking reliable sources, and comparing the video with others through internet searches can help. Ultimately, the most potent tool at our disposal is common sense: Does it seem plausible, given what I know? However, this is not foolproof, as our beliefs may already be misled.
Chapter 2: The Broader Implications of Deepfakes
While advanced detection methods exist, they won't always be effective. Zelensky's video was easily identified not only because it was poorly made but also because of his high profile. Before the Russian invasion, few people outside Ukraine knew who Zelensky was; now he is recognized worldwide. People are less likely to be deceived when they have a clear mental image of the person in question. The ramifications could be dire if a lesser-known political leader were targeted: a single convincing fabrication could mislead millions.
Tom Simonite, a senior writer at Wired, notes that as deepfake technology becomes more accessible and convincing, Zelensky may not be the last political leader to fall victim to fabricated videos. With advancements in computing power and algorithms, it may soon be possible for anyone to create high-quality deepfakes that are difficult to debunk, even for experts. The technology evolves rapidly: whenever detection methods uncover a telltale flaw, generation techniques are often updated almost immediately to eliminate it.
Fortunately, the best tools for producing realistic deepfakes currently require significant computing resources, which are not widely available. However, technological advancements often lead to lower costs, suggesting that soon, creating indistinguishable deepfakes may be as simple as a few clicks, undermining the reliability of video evidence.
Final Reflections: The Quest for Truth
The immediate danger posed by deepfakes is that individuals may start to accept falsehoods as truths. However, the implications extend far beyond this apparent issue. If people can accept false information as true, they may also dismiss genuine content as false. The proliferation of deepfakes could lead to an increasing rate of false negatives, with individuals rejecting authentic videos or images.
This escalating problem could result in widespread misconceptions, where many beliefs accepted as true are actually false, and vice versa. Additionally, those unaware of the truth may share misinformation, perpetuating hoaxes. Some of these sharers may be viewed as credible sources, lending hoaxes an air of legitimacy that shields them from constructive scrutiny.
Another, less frequently discussed consequence of deepfakes is the potential for powerful individuals to evade accountability by claiming that incriminating footage is merely a deepfake. This phenomenon, known as the "liar's dividend," provides a shield for politicians and public figures, allowing them to protect their reputations despite evidence to the contrary.
René Descartes dedicated his life to discovering an undeniable truth—something he could trust beyond all else. He believed that doubt was the precursor to knowledge and the antidote to falsehood. In contrast, David Hume recognized that we inherently rely on others' testimonies to formulate our beliefs; much of what we accept as true is second-hand information at best. How can we ensure that our perception of reality aligns with the truths that lie beyond our grasp? Hume posited that we should place our trust in credible sources.
Yet in today's increasingly complex world, neither doubt nor trusted sources suffice. Both Descartes and Hume would recognize the profound challenges posed by misinformation today. Zelensky's deepfake is merely the tip of the iceberg, representing not only what lies ahead but also what may already be present and hidden from our view.
Doubt is a potent instrument, but it is only effective against what we can conceive of. Our trusted sources face their own challenges. The realities we cannot conceive, those "unknown unknowns," will remain shrouded in mystery, forever beyond the divide between our few undeniable truths and the myriad illusory certainties that surround us.