Recently, a high-profile controversy emerged in a federal lawsuit in Minnesota over the use of deepfakes to influence elections. Several research citations in evidence submitted on behalf of the state's attorney general were suspected of being generated by artificial intelligence, triggering broad discussion about the application of AI in the legal field and raising important questions about how to ensure the accuracy and reliability of information. The editor of Downcodes analyzes the incident in detail below.
Image note: The image was generated by AI; image licensing provider: Midjourney
According to the Minnesota Reformer, the state's Attorney General Keith Ellison asked Jeff Hancock, founding director of the Stanford Social Media Lab, to provide supporting evidence. However, several of the studies cited in Hancock's affidavit could not be verified and show signs of possible AI "hallucinations."
Hancock's affidavit cited a 2023 study, said to have been published in the Journal of Information Technology and Politics, titled "The Impact of Deepfake Videos on Political Attitudes and Behavior."
However, according to the reports, no record of the study exists in that journal or in any other publication. A second study cited in the affidavit, "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind the Acceptance of Misinformation," likewise lacks any verifiable basis.
In response, lawyers for Minnesota Rep. Mary Franson and conservative YouTuber Christopher Kohls stated in a filing: "These citations clearly bear the hallmarks of artificial intelligence (AI) 'hallucinations,' suggesting that at least part of the content was generated by large language models such as ChatGPT." They further argued that this calls the credibility of the entire affidavit into question, especially because many of its arguments lack methodological support and analytical logic.
Hancock has not responded to the matter. The episode has intensified debate over the use of artificial intelligence in legal proceedings, particularly in cases touching on the public interest and elections, where the accuracy and reliability of information are paramount.
Beyond drawing attention to the impact of deepfake technology, the incident also prompts the legal community to rethink how it handles evidence involving artificial intelligence. Effectively identifying and verifying information sources has become a significant challenge in legal practice.
The case exposes the risks of applying AI technology in the legal field and serves as a reminder to treat AI-generated content with caution, particularly when it is submitted as evidence; rigorous review and verification are needed to ensure that information is authentic and reliable. This is essential to preserving the fairness and authority of the law.