Former Trump lawyer Michael Cohen recently admitted to citing fake AI-generated cases in his court filings. Mistaking Google’s Bard for a “super search engine,” he used it for legal research and cited its output without verification. Cohen claimed he did not intentionally mislead the court but simply lacked an understanding of AI legal tools. The incident is not an isolated case: similar AI-generated false citations have appeared in court filings before, triggering widespread controversy and highlighting the risks and challenges of applying AI technology in the legal field.
The Cohen incident is a reminder that applying AI in the legal field requires careful evaluation of its reliability and accuracy, along with stronger ethical standards and oversight for AI tools. Reliance on AI must rest on a full understanding of its limitations if serious consequences from misuse are to be avoided. Going forward, how to use AI to assist legal research while guarding against its potential risks will remain an important question.