Stanford University communications professor Jeff Hancock has attracted widespread attention after being accused of using artificial intelligence to fabricate parts of his expert testimony in a case involving political deepfake videos. The case concerns a recently passed Minnesota law banning political deepfakes, whose legality and possible impact on free speech are being challenged in court, with the Minnesota Attorney General defending the law. The editor of Downcodes offers a detailed look at the incident and what it means for academic integrity and the use of artificial intelligence in legal proceedings.
Image note: The image was generated by AI; image licensing provider: Midjourney
Professor Hancock submitted an expert declaration in support of the law, but it cited research that could not be verified. In a 36-page memo, the opposing legal team argued that a research paper Hancock cited does not exist at all and that its content may have been generated by artificial intelligence. They described their search process in detail and were unable to find the study in any of the sources they checked.
In the memo, the lawyers detailed their attempts to locate the study, emphasizing that it could not be found on the Internet or in multiple academic search engines. "A snippet of this title cannot be found anywhere, not even in Google Scholar," they wrote. The lawyers challenged Hancock's testimony, arguing that content possibly generated by artificial intelligence seriously undermines the credibility of the declaration.
The lawyers further argued that Hancock's declaration lacked the necessary research methodology and analytical logic, casting doubt on its credibility as a whole. If part of the declaration is false, they contended, the entire testimony should be deemed unreliable, and they asked the court to exclude it from consideration.
The incident has sparked widespread discussion about academic integrity and the use of artificial intelligence in the legal field. The case is still in progress, and the court has not yet issued a final ruling.
The outcome of Professor Hancock’s case will have a lasting impact on how artificial intelligence is treated in legal proceedings, and it serves as a wake-up call to the academic community about the importance of academic integrity. How to effectively identify and prevent false information generated by artificial intelligence will become an increasingly important question. We await the court’s final ruling and will continue to follow developments in this case.