Recently, a striking controversy arose in a federal lawsuit over Minnesota's law on the use of deepfake technology to influence elections: a declaration submitted in support of the law was alleged to contain AI-generated text. At the heart of the incident, several research papers cited in evidence submitted by the state Attorney General's office were accused of not existing, raising concerns about the application of artificial intelligence in the legal field and the serious challenge of verifying the authenticity of information. The incident bears not only on the fairness of legal proceedings but also underscores the importance of maintaining information accuracy in the age of artificial intelligence.
In the latest court filings, the plaintiffs' legal team argued that the expert declaration supporting the law may include text generated by artificial intelligence.
Image note: image generated by AI; image licensing provider Midjourney
According to the Minnesota Reformer, the state's Attorney General, Keith Ellison, had asked Jeff Hancock, founding director of the Stanford Social Media Lab, to submit an expert declaration. However, several studies cited in Hancock's declaration could not be substantiated and showed signs of possible AI "hallucinations."
Hancock's declaration cited a 2023 study in the Journal of Information Technology & Politics titled "The Influence of Deepfake Videos on Political Attitudes and Behavior."
However, reports noted that no record of this study could be found in that journal or in any other publication. Another study cited in the declaration, "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation," likewise lacks any empirical basis.
In response, lawyers for Mary Franson and conservative YouTuber Christopher Kohls stated in a filing: "These citations bear the hallmarks of artificial intelligence (AI) 'hallucinations,' suggesting that at least part of the content was generated by large language models such as ChatGPT." They further argued that this calls the credibility of the entire declaration into question, particularly since many of its arguments lack methodological and analytical support.
Hancock has not responded to the matter. The incident has sparked discussion about the application of artificial intelligence in the legal field: especially in cases involving the public interest and election affairs, how to ensure the accuracy and reliability of information has become a pressing question.
The incident not only draws attention to the impact of deepfake technology but also gives the legal community new food for thought when handling evidence related to artificial intelligence. How to effectively identify and verify the sources of information has become a major challenge in legal practice.
Key points:
The declaration supporting Minnesota's deepfake law was alleged to contain AI-generated text.
The plaintiffs' legal team pointed out that the cited studies do not exist, suggesting AI "hallucinations" had occurred.
The incident has triggered widespread discussion about the use of artificial intelligence in legal documents and renewed attention to information accuracy.
The incident is a wake-up call for the legal community, suggesting that the use of artificial intelligence in legal evidence needs to be re-examined and that more robust verification mechanisms must be established to ensure the fairness and reliability of legal proceedings. Going forward, how to effectively identify and guard against false information generated by AI will be an important issue.