OpenAI's GPT Store aims to offer diverse customized versions of ChatGPT, but its content review mechanism appears to have loopholes. Recently, a large number of AI chatbots that violate the platform's usage policies have appeared in the store, raising concerns about its content management capabilities. This article analyzes the content moderation challenges facing the GPT Store and the ethical questions they raise around AI companions and emotional dependence.
The highly anticipated GPT Store, unveiled by OpenAI CEO Sam Altman at the company's developer conference, has finally launched, but despite rules prohibiting such content, the new marketplace already appears to be facing content management problems. A search for "girlfriend" on the new marketplace turned up at least eight romantic AI chatbots, in violation of OpenAI's usage policy. Although the policy clearly states that GPTs dedicated to fostering romantic relationships are not allowed, OpenAI has not yet responded to the presence of this policy-violating content in its new store.

In recent years, with the rise of AI companion platforms, the phenomenon of users forming emotional dependence on their AI companions has drawn growing attention. More than two months after OpenAI officially launched the GPT Store, users have created more than 3 million customized versions of ChatGPT.

The rapid growth of the GPT Store and the lag in content moderation expose the tension between the pace of AI development and the construction of ethical norms. How to effectively moderate AI-generated content and prevent it from being used to build unhealthy AI companions is a challenge that OpenAI and the entire industry must face. Going forward, it will be crucial to strengthen both automated and manual review, improve relevant laws and regulations, and establish sound AI ethics standards.