A heated debate is under way inside OpenAI over whether to deploy its ChatGPT watermarking technology. The AI text watermark, developed to combat academic plagiarism, has brought controversy and practical challenges along with it. The technique, which adjusts word and phrase choices to create a detectable pattern, is said to be 99.9% accurate and resistant to simple paraphrasing. However, OpenAI also admits that more sophisticated rewriting can bypass the watermark, and that it may burden some users, particularly non-native English speakers, potentially hurting the user experience and adoption.
Recently, OpenAI has been embroiled in an internal debate over watermarking technology. According to the Wall Street Journal, the company has already developed a technique for watermarking text generated by ChatGPT, along with a detection tool. Internal opinion, however, is divided on whether to bring the technology to market.
From one perspective, launching the watermarking technology seems like the responsible thing to do. The technique creates a detectable pattern by adjusting which words and phrases the model predicts. That may sound complicated, but its purpose is clear: to help teachers catch students who submit assignments generated with AI. According to the report, the watermark does not affect the quality of the chatbot’s text. Moreover, an OpenAI survey found that, worldwide, supporters of AI detection tools outnumbered opponents four to one.
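OpenAI has not disclosed how its watermark actually works, so the snippet below is only a minimal sketch of the general "token-bias" idea described in academic research on LLM watermarking: generation slightly favors a pseudo-random subset of words, and a detector later checks whether a text contains more of those words than chance would allow. The secret key, function names, and 50/50 split are all hypothetical.

```python
import hashlib

# Illustrative sketch only; this is NOT OpenAI's published method.
# A shared secret deterministically assigns each (context, word) pair to a
# "green" set. A watermarking generator would nudge sampling toward green
# words; the detector below just measures how green a finished text is.

SECRET_KEY = "demo-key"  # hypothetical secret shared by generator and detector

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically decide whether `word` is 'green' given its context."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words land in the green set

def green_fraction(text: str) -> float:
    """Fraction of words in the green set, usable as a simple detection score."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(words[i - 1], words[i]) for i in range(1, len(words)))
    return hits / (len(words) - 1)

# Unwatermarked text should score near 0.5; text generated while favoring
# green words scores noticeably higher, which is what a detector would flag.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

This also hints at why paraphrasing matters: rewriting the text with another model reshuffles the word choices and washes the statistical bias back toward chance.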
However, implementing watermarking is not that simple. In a blog post update, OpenAI confirmed that it has developed the watermarking method and described it as highly accurate, claiming it is "99.9% effective." The watermark also has some resistance to tampering, such as simple paraphrasing. At the same time, OpenAI acknowledged that the watermark can easily be circumvented by having another model rewrite the text, which worries the company.
In addition, OpenAI worries that watermarks could stigmatize some users, especially non-native English speakers. Although some employees believe watermarking is effective, a survey found that nearly 30% of ChatGPT users said they would use the service less if watermarks were implemented. As a result, some employees have suggested exploring "less controversial" alternatives, even though their effectiveness has not yet been proven.
In today's blog update, OpenAI said it is exploring ways to embed metadata, an effort still in its "early stages," and that it is too early to tell how effective this will be. The company notes, however, that because the metadata would be cryptographically signed, false positives could be avoided.
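OpenAI has not described what that metadata would look like. As a rough, hypothetical sketch of why signing avoids false positives, the example below attaches an HMAC over the text and its provenance metadata: a signature either verifies exactly or it does not, so a detector never "accidentally" matches human-written text. The key, field names, and format here are assumptions for illustration only.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of cryptographically signed provenance metadata.
# Only the holder of the signing key can produce a valid signature, so
# verification cannot yield a false positive on unrelated content.

SIGNING_KEY = b"demo-provenance-key"  # assumed secret held by the content producer

def attach_metadata(text: str) -> dict:
    """Bundle text with metadata and an HMAC computed over both."""
    metadata = {"generator": "example-model", "version": "1"}
    payload = json.dumps({"text": text, "metadata": metadata}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "metadata": metadata, "signature": tag}

def verify(bundle: dict) -> bool:
    """Return True only if the signature matches the text and metadata exactly."""
    payload = json.dumps(
        {"text": bundle["text"], "metadata": bundle["metadata"]}, sort_keys=True
    )
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])

signed = attach_metadata("Example AI-generated paragraph.")
print(verify(signed))               # True: untouched content verifies
signed["text"] = "Edited by a human."
print(verify(signed))               # False: any change breaks the signature
```

The flip side, which the blog update does not resolve, is that metadata travels alongside the text and is trivially stripped by copy-pasting, whereas a statistical watermark lives inside the words themselves.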
Highlights:
✅ There are differences within OpenAI regarding the launch of watermarking technology, and whether it will be released is still under discussion.
✅ Surveys show that most people worldwide support AI detection tools, but users worry that watermarks will affect how they use ChatGPT.
✅ OpenAI is considering embedding metadata as a way to balance the technology against user experience.
OpenAI’s cautious approach to watermarking reflects its effort to balance technological progress against user experience. Going forward, how to reliably identify AI-generated content while protecting user privacy will be an important question for the field of artificial intelligence.