The 2024 U.S. presidential election is approaching, and generative artificial intelligence (AI) is shaping this political contest in unprecedented ways. From AI-generated political propaganda images to deepfake phone calls, AI technology is being used to sway voters' judgment and even manipulate election results. In this piece, the editor of Downcodes takes an in-depth look at how AI can be used maliciously in elections, and how we can respond to this emerging challenge.
As the 2024 U.S. presidential election approaches, we have entered a new era: generative artificial intelligence (AI) has begun to make its mark on the electoral stage. Just imagine, voters' decisions may be influenced by AI-generated images, videos and audio. This is no joke! Not long ago, former President Trump shared a set of AI-generated images showing Taylor Swift fans wearing T-shirts in support of him, images that had originally been flagged as satire.
Even more worryingly, in January some New Hampshire residents received deepfake phone calls attempting to discourage them from voting in the Democratic primary. With just months to go until voting day, experts say similar AI disinformation will only intensify, and the technology to identify it remains immature. Lance Hunter, a political science professor at the University of Georgia, said: "If some people don't realize that this is false, then this could have a material impact on the outcome of the election."
Generative AI reaches far beyond chatbots, with the ability to generate all kinds of images, videos and audio. The technology is spreading rapidly around the world and is easily accessible to anyone, including those who want to use it for malicious purposes. This has already happened in countries such as India, Indonesia and South Korea, although it is not clear whether that content actually affected voters' choices. But imagine the impact on voting if a fake video of Trump or Vice President Harris went viral!
The U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) has been on high alert for the threats generative AI may pose. "Foreign adversaries have targeted U.S. elections and election infrastructure in previous cycles, and we expect this threat to continue in 2024," said Kate Conley, a senior adviser at CISA. She emphasized that CISA provides state and local election officials with guidance on foreign influence operations and disinformation.
So how do we stop the chaos generative AI could cause before the election? The problem is that much generated content is hard to distinguish from the real thing. As the technology advances, AI-generated content has gone from telltale oddities like hands with 15 fingers to output that is strikingly lifelike today.
Last July, the Biden administration secured voluntary commitments from companies including Amazon, Anthropic, Google, Meta, Microsoft and OpenAI to address the potential risks posed by AI. However, these agreements are not legally binding. Professor Hunter believes there will eventually be bipartisan support for federal legislation specifically targeting false content in political campaigns.
Social media platforms such as Meta and TikTok have been rolling out their own countermeasures. However, existing detection tools are far from ideal. Some have even been derided as "snake oil": rather than a definitive answer, they often offer only a vague verdict such as "85% probability".
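To make that limitation concrete, here is a minimal sketch, in Python, of how the output of a typical AI-content detector might be interpreted. The function name and thresholds are hypothetical illustrations, not any real tool's API; the point is that such detectors emit a probability score, which leaves a wide inconclusive gray zone rather than a yes-or-no answer.

```python
def interpret_detection_score(score: float) -> str:
    """Map a detector's probability score to a human-readable verdict.

    `score` is the detector's estimated probability (0.0-1.0) that the
    content is AI-generated. The thresholds below are illustrative only:
    a score like 0.85 is suggestive, not conclusive.
    """
    if score >= 0.95:
        return "very likely AI-generated"
    if score >= 0.70:
        return "possibly AI-generated (inconclusive)"
    if score <= 0.05:
        return "very likely authentic"
    return "inconclusive"

# The "85% probability" case mentioned above lands in the gray zone:
print(interpret_detection_score(0.85))
```

A fact-checker or platform moderator receiving "possibly AI-generated (inconclusive)" still has to make a judgment call, which is exactly why critics call these tools unreliable.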
With election day approaching, generative AI is still developing rapidly, raising concerns that bad actors will use the technology to sow more online chaos before voting begins. As for the final outcome, everyone is waiting to see.
To confront the electoral risks posed by AI, governments, technology companies and social media platforms must work together to strengthen oversight and improve the public's ability to discern fakes, ensuring a fair and just election. This contest with AI technology has only just begun.