The editor of Downcodes has learned that Microsoft recently filed a patent application aimed at curbing false information generated by artificial intelligence. The application, titled "Interacting with Language Models Using External Knowledge and Feedback," was submitted to the United States Patent and Trademark Office (USPTO) last year and was made public on October 31. It proposes a "response augmentation system" (RAS) intended to improve the accuracy and reliability of an AI model's answers and reduce the occurrence of AI "hallucinations." The approach does not require elaborate adjustments to existing models: by combining external information sources with user feedback, it improves the quality of AI answers and thereby reduces the generation of false information.
According to the filing, Microsoft is seeking to reduce or eliminate AI-generated false information through a technical mechanism. The patent, titled "Interacting with Language Models Using External Knowledge and Feedback," was submitted to the USPTO last year and made public on October 31.
The core of the proposal is to equip AI models with a "response augmentation system" (RAS) that can automatically retrieve additional information based on the user's query and check the "usefulness" of the model's answers.
Specifically, the response augmentation system can identify whether information from an outside source would better answer the user's question. If the AI's answer does not incorporate that information, the system judges the answer insufficiently useful. RAS can also alert the user to deficiencies in an answer, and the user can provide feedback in return. The advantage of this approach is that it does not require developers or companies to make detailed adjustments to existing models.
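To make the described loop concrete, here is a minimal, self-contained sketch of how such a check-and-revise cycle might look. All function names below (ask_model, search_external_sources, answer_covers) are hypothetical stand-ins for illustration only; they do not come from Microsoft's patent or any real API.

```python
# Hypothetical sketch of a response-augmentation loop: retrieve outside
# information, check whether the model's answer reflects it, and re-prompt
# or surface the gap to the user. All names here are placeholders.

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return f"(model answer to: {prompt[:60]})"

def search_external_sources(query: str) -> str:
    """Stand-in for retrieving information from an outside source, e.g. a search index."""
    return f"(external facts relevant to: {query})"

def answer_covers(answer: str, evidence: str) -> bool:
    """Stand-in for the 'usefulness' check: does the answer reflect the evidence?"""
    return evidence in answer  # deliberately strict placeholder logic

def augment_response(user_query: str, max_rounds: int = 3) -> dict:
    """Check the model's answer against external knowledge and revise until supported."""
    answer = ask_model(user_query)
    for _ in range(max_rounds):
        evidence = search_external_sources(user_query)
        if answer_covers(answer, evidence):
            return {"answer": answer, "status": "supported"}
        # The answer misses the external information: flag the deficiency and re-prompt.
        answer = ask_model(
            f"{user_query}\n\nRevise the answer using this information:\n{evidence}"
        )
    # Still unsupported after several rounds: surface the gap so the user can give feedback.
    return {"answer": answer, "status": "needs user feedback"}

if __name__ == "__main__":
    print(augment_response("When was the patent application made public?"))
```

The key point of the design, as the filing describes it, is that the checking happens around the model rather than inside it, so no retraining or fine-tuning of the underlying model is needed.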
At present, the USPTO website does not list a patent number for the application, meaning it is still under review. We have reached out to Microsoft for more details, including whether the patent is related to Microsoft's previously announced tool for reducing AI hallucinations, the Azure AI Content Safety tool. That tool provides AI-driven verification for enterprise AI chatbots: it fact-checks answers in the background, determines whether they are "ungrounded" or "grounded," and only delivers answers that are supported by actual data to users.
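The gating behavior described for that tool can be sketched as a simple check before an answer is shown. The snippet below is a rough illustration of the idea only; check_groundedness is a hypothetical placeholder and is not the real Azure AI Content Safety API.

```python
# Illustrative only: gate an AI answer on a groundedness check against source data.
# The check below is a naive placeholder, not Microsoft's actual verification logic.

def check_groundedness(answer: str, source_documents: list[str]) -> bool:
    """Placeholder: treat the answer as grounded only if a source contains it."""
    return any(answer.lower() in doc.lower() for doc in source_documents)

def respond_if_grounded(answer: str, source_documents: list[str]) -> str:
    """Return the answer only when it is judged grounded; otherwise decline."""
    if check_groundedness(answer, source_documents):
        return answer
    return "I couldn't verify that answer against the available sources."

if __name__ == "__main__":
    docs = ["The patent application was made public on October 31."]
    print(respond_if_grounded("The patent application was made public on October 31.", docs))
    print(respond_if_grounded("The patent was granted in 2023.", docs))
```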
AI hallucination is one of the biggest challenges facing generative AI and seriously undermines the credibility of AI chatbots. Both Google's and Twitter's AI systems have made high-profile errors, such as suggesting that users put glue on pizza or eat rocks, and even spreading election disinformation. Apple CEO Tim Cook has likewise admitted that Apple Intelligence cannot avoid the problem of hallucinations. Recently, OpenAI's "Whisper" audio transcription tool was also found to hallucinate frequently, drawing scrutiny of its use in U.S. hospitals.
Despite the prominence of the hallucination problem, technology giants' demand for AI data centers remains strong, and companies including Google, Microsoft, and Meta are considering nuclear energy as a potential way to meet AI's heavy energy needs.
Microsoft's move is intended to improve the reliability of AI models and curb the spread of false information, which matters greatly for the development and application of AI technology. Looking ahead, we hope the patent passes review and ultimately makes its way into actual products, giving users a safer and more reliable AI experience.