Recently, an investigation by The Guardian revealed potential security risks in OpenAI’s ChatGPT search tool. Investigations have shown that ChatGPT is susceptible to malicious code or manipulation instructions hidden in web pages, giving inaccurate or even harmful responses, such as being manipulated into giving overly positive reviews of a product even if there are negative reviews for the product. The investigation raised concerns about the security of large language models and highlighted the critical importance of security in the development of AI technology.
Recently, an investigation by the British "Guardian" revealed possible security risks in OpenAI's ChatGPT search tool. The investigation found that ChatGPT may be manipulated or even return malicious code when processing web page excerpts containing hidden content. These hidden content may include third-party instructions designed to interfere with ChatGPT's response, or a large amount of hidden text promoting a certain product or service.
During the test, ChatGPT was provided with a link to a fake camera product page and asked to judge whether the camera was worth buying. On a normal page, ChatGPT can point out the pros and cons of a product in a balanced way. However, when the hidden text on the page contained instructions asking for a positive review, ChatGPT's responses became entirely positive, even though there were negative reviews on the page. Furthermore, even without explicit instructions, simple hidden text can influence ChatGPT’s summary results, making it tend to give positive reviews.
Jacob Larsen, a cybersecurity expert at CyberCX, warned that if ChatGPT's search system is fully released in its current state, it may face a "high risk" that someone may design a website specifically to deceive users. However, he also pointed out that OpenAI has a strong AI security team, and it is expected that they will have rigorously tested and fixed these issues by the time the feature is open to all users.
Search engines such as Google have penalized websites for using hidden text, causing them to have their rankings dropped or even removed entirely. Karsten Nohl, chief scientist of SR Labs, pointed out that SEO poisoning is a challenge for any search engine, and ChatGPT is no exception. Nevertheless, this is not a problem with the large language model itself, but a challenge faced by new entrants in the search field.
Highlight:
ChatGPT may be manipulated by hidden content and return false reviews.
Hidden text can affect ChatGPT's evaluation, even if the page has negative reviews.
OpenAI is actively fixing potential issues to improve the security of the search tool.
Although ChatGPT faces new challenges such as SEO poisoning, this does not mean that the large-scale language model technology itself is defective. OpenAI is aware of these issues and is actively taking steps to resolve them. In the future, as technology continues to develop and improve, the security of large language models is expected to be further improved, providing users with more reliable services.