Microsoft recently filed a patent application aimed at tackling the problem of artificial intelligence generating false information (AI hallucinations). The application, titled "Using external knowledge and feedback to interact with language models", seeks to improve the accuracy and reliability of AI models through a mechanism called a "response augmentation system" (RAS). The system can automatically pull in additional information from external sources, verify the answers an AI generates, and flag potential shortcomings to users, effectively reducing the false information AI produces. The publication of this patent application marks a notable step in Microsoft's effort against AI hallucinations and reflects the industry's urgent need to solve the problem.
Recently, Microsoft filed a patent application that takes a technical approach to reducing or eliminating false information generated by artificial intelligence. The patent, titled "Using external knowledge and feedback to interact with language models", was submitted to the U.S. Patent and Trademark Office (USPTO) last year and was published on October 31. At its heart, the proposal equips AI models with a "response augmentation system" (RAS) that automatically retrieves additional information based on the user's query and checks the "validity" of the model's answers.
Specifically, the response augmentation system can identify whether information from "external sources" would better answer a user's question. If the AI's answer does not incorporate that information, the system flags the answer as invalid. RAS can also alert the user to potential shortcomings in an answer, and the user can provide feedback in turn. The advantage of this approach is that developers and companies do not need to fine-tune their existing models.
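The full patent text is not reproduced here, but the described flow resembles a retrieval-plus-validation wrapper around an existing, unmodified model. The Python sketch below is only a conceptual illustration under that assumption; `search_external_sources`, `generate_answer`, and the simple string-overlap validity check are hypothetical placeholders, not Microsoft's implementation.

```python
from dataclasses import dataclass, field


@dataclass
class AugmentedResponse:
    answer: str
    valid: bool
    missing_facts: list[str] = field(default_factory=list)
    user_feedback: list[str] = field(default_factory=list)


def search_external_sources(query: str) -> list[str]:
    """Hypothetical retrieval step: pull facts relevant to the query from
    external sources (web search, knowledge base, documents)."""
    return ["Example fact retrieved for: " + query]  # stand-in retriever


def generate_answer(query: str, context: list[str]) -> str:
    """Hypothetical call to an existing, unmodified language model."""
    return "Model answer to: " + query  # stand-in model client


def augment_response(query: str) -> AugmentedResponse:
    """Conceptual RAS loop: retrieve external facts, generate an answer,
    then check whether the answer actually reflects those facts."""
    facts = search_external_sources(query)
    answer = generate_answer(query, facts)
    # Naive validity check: an answer that omits retrieved facts is
    # flagged as invalid, and the gaps are surfaced to the user.
    missing = [f for f in facts if f.lower() not in answer.lower()]
    return AugmentedResponse(answer=answer, valid=not missing, missing_facts=missing)


def record_feedback(response: AugmentedResponse, feedback: str) -> None:
    """Users can flag remaining shortcomings; the feedback is stored rather
    than used to retrain or fine-tune the underlying model."""
    response.user_feedback.append(feedback)
```

The key design point the patent appears to make is that all of this happens around the model: the wrapper retrieves, checks, and collects feedback, while the language model itself stays untouched.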
Currently, the USPTO website does not list a patent number for the application, which means it is still under review. We contacted Microsoft for more information, including whether the patent is related to Azure AI Content Safety, a previously announced tool for reducing AI hallucinations. That tool provides AI-driven verification for enterprise AI chatbots, running fact checks in the background to determine whether an AI's answers are "grounded" or "ungrounded", so that only answers backed by actual data are shown to users.
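Microsoft has not confirmed any connection between the patent and that tool, and the tool's actual API is not shown here. The sketch below merely illustrates the general idea of a background groundedness check, where an answer is surfaced only if it can be supported by source material; `is_supported` is a hypothetical stand-in for the AI-driven verification step.

```python
def is_supported(claim: str, sources: list[str]) -> bool:
    """Hypothetical stand-in for AI-driven verification: in a real system,
    an auxiliary model would judge whether the claim follows from the sources."""
    return any(claim.lower() in src.lower() for src in sources)


def groundedness_check(answer: str, sources: list[str]) -> str:
    """Label an answer 'grounded' only if every sentence can be backed by
    the provided source material; otherwise label it 'ungrounded'."""
    claims = [s.strip() for s in answer.split(".") if s.strip()]
    return "grounded" if all(is_supported(c, sources) for c in claims) else "ungrounded"


def respond(answer: str, sources: list[str]) -> str | None:
    """Only surface answers that pass the background check."""
    return answer if groundedness_check(answer, sources) == "grounded" else None
```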
The AI hallucination problem is one of the biggest challenges facing generative AI and seriously undermines the credibility of AI chatbots. Both Google's and Twitter's AI systems have produced high-profile mistakes, such as suggesting that users put glue on pizza or eat rocks, and have even spread election disinformation. Apple CEO Tim Cook has also admitted that Apple Intelligence cannot entirely avoid hallucinations. Recently, OpenAI's Whisper audio transcription tool was found to hallucinate frequently, raising concerns about its use in American hospitals.
Despite the prominence of the AI hallucination problem, tech giants' demand for AI data centers remains strong, and companies including Google, Microsoft, and Meta are considering nuclear power as a potential way to meet AI's enormous energy needs.
Key points:
Microsoft has filed a new patent application to reduce false information generated by AI.
The core of the patent is a response augmentation system that lets AI models automatically retrieve additional information.
Despite the serious AI hallucination problem, demand for AI data centers from tech companies remains strong.
The publication of Microsoft's patent application signals that technology companies will keep investing effort in tackling AI hallucinations. More reliable and trustworthy AI systems are expected to emerge, greatly enhancing the practical value and user experience of AI technology.