This article analyzes a recent report from Meta assessing the impact of generative artificial intelligence on elections across multiple countries and regions. The report concludes that while there were some cases of AI being used for political propaganda or spreading disinformation, the impact was far smaller than anticipated, and Meta's existing policies and measures were sufficient to meet the challenge. The report details the countermeasures Meta took and also examines the spread of disinformation on other platforms.
At the start of the year, concerns were widespread that generative AI would interfere with elections worldwide by spreading propaganda and disinformation. A recent report from Meta, however, indicates that these fears did not materialize on its platforms: according to Meta, generative AI had an extremely limited impact on election-related information across Facebook, Instagram, and Threads.
Image note: AI-generated image, licensed from the service provider Midjourney.
The report covers major elections in multiple countries and regions, including the United States, Bangladesh, Indonesia, India, Pakistan, the European Parliament, France, the United Kingdom, South Africa, Mexico, and Brazil. Meta noted that while there were confirmed or suspected uses of AI during these elections, the overall volume was low, and existing policies and processes proved sufficient to reduce the risks posed by generative AI content. During these election periods, AI-generated content related to elections, politics, and social topics accounted for less than 1% of fact-checked misinformation.
To prevent election-related deepfake images, Meta's Imagine AI image generator rejected nearly 590,000 requests to create images of Trump, Vice President Harris, Governor Walz, and President Biden in the month leading up to Election Day. In addition, Meta found that accounts attempting to spread propaganda or disinformation gained only minor improvements in productivity and content generation from using generative AI.
Meta emphasized that the use of AI has not hindered its ability to combat covert influence operations, because the company focuses on the behavior of these accounts rather than the content they publish, regardless of whether that content is AI-generated. Meta also announced that it has taken down approximately 20 new covert influence operations worldwide to prevent foreign interference. Most of these disrupted networks had no real audience, and some inflated their apparent popularity with fake likes and followers.
Meta also pointed a finger at other platforms, saying that fake videos related to the US election frequently appeared on services such as X. Meta said it will continue to review its policies as it takes stock of the year's lessons and will announce any changes in the coming months.
All in all, Meta's report offers valuable reference data on the impact of generative AI in elections, and its proactive countermeasures provide a useful model. Ongoing monitoring and improvement remain necessary, however, to address challenges that may emerge in the future. Technological progress must be matched by corresponding oversight and response strategies to maintain a healthy and stable online environment.