In October, a viral video showed model Bella Hadid, who is half-Palestinian, apologizing for past remarks and expressing support for Israel. At the time, the internet was flooded with videos of destroyed buildings and children crying in the rubble of Gaza. Journalists concluded that these videos, along with numerous others, were fabrications known as deepfakes.
According to Merriam-Webster Dictionary, a deepfake is “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” Deepfakes are so ubiquitous and convincing that the public needs to be equipped with critical media literacy skills to avoid being persuaded by these falsehoods.
It is true that there is a long history of manipulating images: Photographers staged pictures during the Civil War, news media such as Fox News have altered images to lampoon opponents, and political operatives have introduced and altered images in ways that engender racist sentiment among voters.
However, the advent of artificial intelligence, coupled with smart devices and platforms such as Instagram, has enabled users to manipulate images with ease.
Many in the tech industry saw the threats posed by deepfakes years in advance. In 2016, technologist Aviv Ovadya raised the alarm about an “Infocalypse.”
“We were utterly screwed a year and a half ago and we’re even more screwed now,” Ovadya told BuzzFeed News in 2018. “And depending how far you look into the future it just gets worse.”
AI is being used to make political ads
Less than a decade later, as advancements have been made in AI, Ovadya’s warning has come to fruition.
It is now possible to generate political advertisements with the click of a button, create lifelike renditions of deceased people as talking heads in documentaries, produce songs from deceased artists and construct videos of prominent figures such as former House Speaker Nancy Pelosi appearing drunk during a speech or former President Barack Obama speaking comedically on fake news.
These deepfakes get more convincing by the day.
The existence of these deepfakes should lead us to be skeptical, if not outright suspicious, of the veracity of any news-related video until it is confirmed by trusted sources or corroborating evidence. When any party releases a video, whether Israeli footage after a hospital raid or Palestinian videos showing the damage from Israeli attacks, audiences should demand transparently sourced, factual documentation.
This occurred shortly after the Oct. 7 Hamas attacks, when President Joe Biden confirmed, and later denied, seeing photos of Hamas beheading babies. This claim is suspect at best, given that murdering babies is an old fake news trope used historically to justify antisemitism and build support for multiple wars.
A media literate public would not only want evidence that Biden had seen such photos, but would also consider that any photo that does exist could be a deepfake. As the images have not been released, we may never know, but this needs to be a very real consideration for all future events and reporting.
Deepfakes can be used to ruin your reputation
The threats posed by deepfakes are just starting to be understood. Imagine if a video were posted of you saying something antisemitic, racist, homophobic or sexist and you did not say it. In this moment of so-called cancel culture, your livelihood could be threatened, even your physical safety.
Imagine someone pairs your likeness with a sex doll and uses it to embarrass you, or, worse, posts the fake images online. Imagine if a video online showed your child doing something embarrassing that they did not do, and they were bullied at school as a result.
Imagine if in every community across the nation, videos were created of police abusing a person of color, and in the same locations videos were created of people of color attacking white communities. Given America’s history of racism and violent clashes over racial and social justice, such deepfaked videos could result in violent street altercations or widespread unrest.
The public is largely ill-prepared to respond to these technological developments. That was made painfully obvious recently when actress Jamie Lee Curtis, in a post supporting Israel, shared an image of Palestinian children fleeing from bombs. The image depicted the violence thrust upon Palestinians by Israel, the exact opposite of what Curtis claimed it showed.
The incident demonstrated that Curtis, as well as the social media users who accepted her interpretation of the image, lacked critical media literacy skills. A critical media literate user would have looked for corroborating evidence, such as the accurately worded description that was readily available just below the image.
For their part, lawmakers have avoided efforts to curtail the creation and dissemination of these videos and have instead turned to the tech industry to manage the AI problem.
Senate Majority Leader Chuck Schumer, whose daughters work in the tech industry, has asked lawmakers to slow down on AI legislation as he brings industry leaders to Congress to discuss the emerging technology.
Similarly, the Federal Election Commission’s slow process to potentially regulate political deepfakes reveals that the government has largely avoided taking steps to seriously address the threats posed by AI. Meanwhile, lack of public trust in the establishment news media has prevented the press from being a leading voice on exposing deepfakes.
Given the waning faith in government and media, the challenges posed by deepfakes necessitate a critical media literacy education, in which citizens learn journalistic skills: how to evaluate and analyze sources, separate fact from opinion, interrogate the production process and investigate the politics of representation.
A critical approach to news encourages users to examine the power dynamics expressed in media and be skeptical of how those power dynamics may result in the creation of false or duplicitous content, such as deepfakes, to manipulate users’ behaviors and attitudes as a form of propaganda.
Critical media literacy offers the best promise for addressing the threats posed by deepfakes because it empowers the citizenry – rather than unaccountable governments or industries – to determine the veracity of content for themselves. Yes, this will be a lot of work for the public, but we owe it to those living through the damage, destruction, death and chaos in Gaza and Israel to get it right.
Nolan Higdon is a national judge for Project Censored and a frequent contributor to its yearly book, “State of the Free Press.” He is a lecturer at Merrill College and the Education Department at University of California, Santa Cruz.