In today’s digital world, deepfakes pose a serious challenge, and the Kamala Harris deepfake is the latest example. The video in question falsely portrays Harris making a series of nonsensical and confusing statements. The misleading content quickly spread on TikTok and X (formerly Twitter) before being flagged and removed. However, the problem doesn’t end with just one video.
First, a quick reminder: what is a deepfake? Deepfakes are fake videos or audio recordings created using AI. They can make it look and sound like someone said or did something they never actually did. This technology can produce very convincing but completely false content, making it hard to tell what’s real and what’s not.
The Kamala Harris deepfake wasn’t the first of its kind. Deepfakes have previously caused trouble for names like Taylor Swift, Gareth Southgate, Megan Thee Stallion, Bobbi Althoff, and Elon Musk. This time, the target was America’s sitting vice president and presumptive Democratic presidential nominee.
The Kamala Harris deepfake issue
The controversy started after President Joe Biden announced he wouldn’t run for reelection and endorsed Kamala Harris as the Democratic nominee. This led to various reactions online. Supporters of Harris shared funny and quirky moments from her past, but some took a different approach.
A video falsely attributed to Vice President Kamala Harris shows her making confusing statements, such as: “Today is today. And yesterday was today yesterday. Tomorrow will be today tomorrow. So live today, so the future today will be as the past today as it is tomorrow.”
In reality, Harris never said this. The video was manipulated from footage of a 2023 rally at Howard University, where the original speech did not contain these remarks. Critics had previously mocked her real speech, calling it a “word salad,” but this was based on genuine comments, not the deepfake video.
The Kamala Harris deepfake was altered with fake audio and visual effects, making it appear that Harris was delivering this bizarre, nonsensical speech and saying things she never actually said.
Spread and reactions
The fake video first went viral on TikTok, where it got millions of views before being flagged and removed. TikTok quickly took down the video and the fake audio because they violated its rules against harmful AI-generated content.
The video also appeared on X (formerly Twitter), where it continued to spread. Users added notes to the posts warning others that the video was fake, and X eventually removed the Kamala Harris deepfake.
Technical details
Experts noticed several signs that the video was fake. There was digital noise around Harris’ mouth, suggesting the video had been edited. Additionally, the audio lacked background noise you would normally hear at a live event. These clues helped reveal that the video was not genuine.
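One of those cues, the missing background noise, lends itself to a simple heuristic check. The sketch below is an illustrative heuristic only, not a production deepfake detector; the synthetic signals, frame length, and percentile are assumptions made for the example. It estimates an audio clip’s background-noise floor by measuring the energy of its quietest frames: a live-event recording has audible ambience even during pauses in speech, while a cleanly synthesized voice track can drop to near-total silence.

```python
import numpy as np

def noise_floor_db(signal, frame_len=1024):
    """Estimate the background-noise floor of an audio signal.

    Splits the signal into fixed-length frames, computes each frame's
    RMS energy, and takes a low percentile as a proxy for the ambient
    noise level during pauses. Returns the level in dBFS
    (0 dB = full scale; more negative = quieter).
    """
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    floor = np.percentile(rms, 10)  # quietest ~10% of frames
    return 20 * np.log10(max(floor, 1e-10))  # clamp to avoid log(0)

# Synthetic demo signals (assumptions, not real recordings):
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000)
gate = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)  # pauses in "speech"
speech = 0.5 * np.sin(2 * np.pi * 220 * t) * gate     # stand-in for a voice
live = speech + 0.02 * rng.standard_normal(len(t))    # room/crowd ambience
synthetic = speech                                     # no ambience at all

print(f"live recording floor:  {noise_floor_db(live):.1f} dBFS")
print(f"synthetic clip floor:  {noise_floor_db(synthetic):.1f} dBFS")
```

A suspiciously low noise floor is not proof of manipulation on its own — studio recordings are also clean — but combined with visual artifacts like the digital noise around the speaker’s mouth, it is exactly the kind of inconsistency analysts look for.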
In summary, deepfakes, which use AI to create fake but realistic content, pose a significant challenge by distorting reality and spreading misinformation. The recent deepfake of Kamala Harris, which falsely depicted her making confusing statements, illustrates the problem. The video, manipulated from footage of a 2023 rally, quickly went viral on TikTok and X before being flagged and removed; while it was still up on X, user-added notes corrected the misinformation. The incident highlights the need for media literacy and effective moderation to combat digital deception.
Featured image credit: White House