A recent legal dispute in British Columbia, Canada, arising from the use of ChatGPT to generate a fictitious case citation, has triggered widespread concern about the application of artificial intelligence in the legal field. The incident underscores the need for careful use of AI tools in legal practice and serves as a warning to lawyers to rigorously review AI-generated content to ensure its accuracy and authenticity. This article details the incident and its implications.
British Columbia lawyer Chong Ke caused chaos in court after citing a fictitious case generated by ChatGPT in divorce proceedings on behalf of millionaire Wei Chen. The judge noted that generative artificial intelligence cannot replace the professional expertise of lawyers and stressed that technological tools must be chosen with care. Ke was ordered to pay the opposing party's legal costs and to review her filings in other cases. The legal community has warned of the risks of using AI tools, emphasizing that materials submitted to the court must be accurate and truthful.

This incident not only exposed the limitations of artificial intelligence technology but also sounded an alarm for lawyers and related practitioners. Going forward, how to better use artificial intelligence to assist legal work while avoiding its potential risks will be an important question. We need to formulate more complete norms and guidelines to ensure the healthy development and application of AI technology in the legal field.