Recently, two papers on generative artificial intelligence (AI) have sparked heated discussion by arguing that text generated by AI is often best described as "bullshit." Drawing on the essential characteristics of the technology and the shortcomings of current laws and regulations, the papers analyze the potential social harm of AI-generated misinformation and call for more effective risk-mitigation measures. Their authors contend that simply attributing AI errors to "hallucinations" is misleading, and that more accurate terminology would improve public awareness and understanding of the technology and support the implementation of relevant laws and regulations.
The first paper, bluntly titled "ChatGPT is bullshit," argues that generative AI's disregard for accuracy when producing information creates many challenges for public servants, especially officials who are under a legal duty to tell the truth.
Authors Michael Townsen Hicks, James Humphries, and Joe Slater emphasize that the misinformation generated by AI cannot simply be described as "lies" or "hallucinations." Unlike a lie, which is intentionally deceptive, bullshit is speech that is indifferent to the truth, produced to create a particular impression. Calling AI errors "hallucinations," they argue, misleads the public into thinking that these machines are somehow trying to communicate what they "believe."
"Calling these errors 'bullshit' rather than 'illusions' would not only be more accurate, but would also help improve public understanding of the technology," they said. This passage highlights the importance of using more precise terms to describe AI errors. , especially in the current context where scientific and technological communication is in urgent need of improvement.
Meanwhile, the second paper, on large language models (LLMs), examines the EU's legal and ethical landscape on this issue. It concludes that current AI laws and regulations remain inadequate to effectively prevent the harms caused by AI-generated bullshit. Authors Sandra Wachter, Brent Mittelstadt, and Chris Russell propose regulations modeled on those governing publishing, with an emphasis on avoiding "careless speech" that may cause social harm.
They note that such a duty would ensure that no single entity, public or private, becomes the sole arbiter of truth. They also warn that the "careless speech" of generative AI could turn truth into a matter of frequency and majority opinion rather than verifiable fact.
Taken together, the two papers lay out the potential risks of generative AI and call on all sectors of society to take notice and act: improving relevant laws and regulations, setting standards for the technology's development, and ensuring that AI better serves human society.