Stanford University professor Jeff Hancock has been accused of citing a non-existent study in his expert testimony in a case involving political deepfake videos, sparking widespread controversy and raising questions about academic integrity and the use of AI in the legal field. The case centers on a Minnesota ban on political deepfakes, defended by Minnesota Attorney General Keith Ellison, that critics say could threaten free speech. Professor Hancock's testimony supported the ban, but the plaintiffs' legal team found that the research he cited did not exist and argued that it may be false content generated by AI, seriously undermining the credibility of the testimony.
Recently, Stanford University communications professor Jeff Hancock attracted widespread attention after he was accused of using artificial intelligence to fabricate testimony in a case involving political deepfake videos. The case concerns Minnesota's recently passed law banning political deepfake videos, which Attorney General Keith Ellison is defending and which challengers regard as a potential threat to free speech.
Image note: the image is AI-generated; image licensing provider: Midjourney
In the case, Professor Hancock submitted an expert declaration supporting the law defended by the Attorney General. However, the plaintiffs' legal team discovered that a study Hancock cited, "The Impact of Deepfake Videos on Political Attitudes and Behavior," does not exist. In a 36-page memo, they pointed out that although the journal in question exists, no such study has ever been published in it.
In the memo, the attorneys detailed their attempts to locate the study, emphasizing that it could not be found on the Internet or in multiple academic search engines. As they put it, "A fragment of this title cannot be found anywhere, not even in the academic search engine Google Scholar." The lawyers questioned Hancock's evidence, arguing that it may be false content generated by artificial intelligence, which seriously undermines the credibility of the statement.
In addition, the lawyers pointed out that Hancock's statement lacked the necessary research methodology and analytical logic, calling the credibility of the entire statement into question. They argued that if parts of the statement were fabricated, the whole testimony should be deemed unreliable, and they urged the court to exclude it from consideration.
The incident has sparked widespread discussion about academic integrity and the use of artificial intelligence in the legal field. The case is still in progress, and the court has not yet issued a final ruling.
Highlights:
Professor Hancock is accused of citing a non-existent study, possibly generated by AI, in his expert testimony.
The testimony, submitted in support of Minnesota Attorney General Ellison's defense, has been called into question, undermining support for the political deepfake law.
Lawyers for the plaintiffs urged the court to exclude Hancock's testimony, arguing that its overall credibility has been seriously compromised.
This incident highlights the ethical and legal challenges posed by the rapid development of artificial intelligence and is a reminder to remain vigilant about the reliability of information sources. The case's further developments deserve continued attention, as its outcome will have a profound impact on the use of artificial intelligence in the legal field.