In recent years, artificial intelligence has developed rapidly, and AI role-play chat software has emerged alongside it. While these apps entertain users, they have also exposed many problems, with particularly serious effects on minors. An investigation by a Rule of Law Daily reporter uncovered pornographic and verbally abusive content in some AI chat apps, as well as youth modes that exist in name only, drawing widespread public concern. The editor of Downcodes offers an in-depth look at this report in the hope of bringing the issue more attention.
"My daughter is in the second grade of elementary school. She is addicted to an AI plot chat software all day long, and her academic performance has plummeted." "I watched the conversation between the child and the AI, and the AI character actually asked her to call her 'husband'. My 10-year-old daughter She actually screamed, and now I don’t know how to educate her. "... A reporter from the "Rule of Law Daily" recently found that many parents of minors are being troubled by AI plot chat software. These AI plot chat applications under the banner of "role playing", while attracting minors, also quietly breed some gray areas. According to actual measurements, reporters found that in some conversations on some AI plot chat software, there were pornographic edging, verbal violence, and content that insulted users. Experts interviewed believe that for AI drama chat software, especially its youth mode, the content review mechanism should be strengthened to ensure that the technology can effectively screen and block inappropriate conversations. Platforms need to conduct ethical review of AI models to ensure that the content they generate complies with relevant laws and regulations. The content of the chat software has been marginalized and the youth model has become a decoration. Mr. Ma, a Beijing citizen, has a 10-year-old son who is very keen on AI plot chat software. "When I asked how to chat? Who to talk to? The child just replied, 'You don't understand even if I tell you.'" Mr. Ma clicked on the AI chat app and found that the child was chatting with the characters in the app. These characters have different settings and personalities, including well-known game and animation characters, as well as original characters with different identities such as "Miss" and "Famous Detective". Some characters will take the initiative to ask "Do you want to date me?"; some will set the goal of "catching her", coupled with a charming or handsome cartoon style. 
Other AI characters display unusual aggression. They send unprompted messages such as "Hit me if you dare" and "Look how fat and ugly you are"; some characters are outright named "Cursing Trainer", and some even send messages like "So what if I'm a robot? I'll still curse you out."

Ms. Li from Zhejiang likewise discovered that her child, a fifth-grader, was using an AI role-play chat app. "The chat partners inside can be set as a 'cheating partner' and can engage in behaviors such as hugging and kissing. I don't know how to guide my child so that she understands how harmful this content is," Ms. Li said with concern.

Many parents interviewed expressed deep worry that AI chat apps may harm minors' mental health, and raised the same question: where is the youth mode? The reporter's investigation found that although many platforms claim to have launched youth modes meant to protect minors' physical and mental health by restricting content and limiting usage time, in practice some platforms' youth modes exist in name only; minors can easily bypass the restrictions and be exposed to "borderline conversation" content unsuitable for their age group.

For example, the reporter tried out five AI chat apps during the investigation. Registration required only a mobile phone number, with no verification of the user's identity. After login, some apps asked whether to enable youth mode, but the user could simply tap "Don't enable" to skip it, again without any verification of the user's real identity. In other words, at the app-settings level, identity verification is not a prerequisite for minors to use specific features of these AI role-play chat apps.

Besides the popular AI chat apps, there are also AI chat web pages.
Many parents interviewed said that, compared with apps, the web version of AI chat is more convenient and easier for minors to access. The reporter tried seven AI chat web pages and found that most had no minors mode at all; the few that did offered one in name only.

For example, when the reporter visited one AI chat web page, a dialog box first popped up asking whether the user was over 18, with a note: "The following content may not be suitable for people under the age of 18. We need to confirm your age." The reporter chose "No", but the page did not restrict access to the content. Instead, it went on to display character categories with labels such as "dominant top", "submissive bottom", and "yandere", which differed little from what was shown after choosing "Yes" and confirming being over 18. The reporter further observed that most of these characters were scantily clad in their images, and their profiles were full of sexual innuendo and violent elements, for example: "The introverted girl in your class asked for your phone number, then sent you her nude photos," along with descriptions such as "suicide" and "anxiety".

On one social platform, a netizen from Zhejiang left a comment under a post about AI chat experiences: "I found a very interesting web page with no word limit. Private-message me if you want it." The reporter contacted the netizen by private message and obtained the link to the AI chat page in question. The page was filled with large numbers of character settings and story scenarios involving pornographic content, direct and explicit.
The site requires users to log in before chatting with a character, and displays a prompt before login: "Log in to unlock all features." The user must click an "I am over 18, start logging in" button to continue; choosing "Cancel" makes it impossible to log in and use the service. Although the page nominally restricts access to those over 18, in practice the system takes no steps to verify a user's real age even after a minor clicks the login button.

Beyond explicit chat content and violent language, the features of some AI role-play chat software are closely tied to a recharge mechanism: paying for VIP membership or buying virtual diamonds can enhance an agent's memory, speed up an agent's recovery, unlock voice calls, and so on, enticing minors to spend money.

Xiao Ning, a junior high school student in Beijing, has spent anywhere from a few hundred to several thousand yuan across several AI role-play chat apps. "On the one hand I want to support my favorite characters; on the other I want more paid privileges. With only the basic service, a user can add just three agents, and trying a new one means deleting an existing one. If you want a varied experience, you have to buy the premium VIP service," Xiao Ning said.

The reporter found that in this type of software, users can customize a virtual character's appearance and style when creating it, and the system generates an AI character image. Users can also write character settings, such as nicknames, backstories, opening lines, and custom voices. These personalization options, however, are often tied to recharging.
Zhang Yan, a resident of Jinan, Shandong Province, has a younger sister now in the first year of junior high school who often uses AI role-play chat software. Zhang Yan found that some of these chat tools set free-usage quotas: once the free chats run out, the user must recharge to continue, and only by recharging can more engaging content and different emotional experiences be unlocked. "Behind paying for the service is really paying for excitement," Zhang Yan said, adding that although the software has a youth mode, it can be logged into and used without real-name authentication, and her sister often recharges and spends without their parents' consent.

An industry insider told the reporter that AI role-play chat is essentially the older Internet practice of "yu-C" (language cosplay) dressed up in an artificial-intelligence guise. In yu-C, performers provide services through text exchanges while playing two-dimensional characters or real-life idols. In the traditional yu-C model, the roles are played by real people who chat with users under the banner of "providing emotional value", a practice that has often run into legal and moral risks because of borderline content and blurred boundaries. AI role-play chat is an upgraded version of traditional yu-C, and the large language models behind this type of software draw their data mainly from dialogue-style novels, or from text extracted from novels.

Liu Xiaochun, associate professor at the Law School of the University of Chinese Academy of Social Sciences and director of its Internet Law Research Center, believes that pornographic or violent content in AI role-play chat software is a problem even when youth mode is not enabled; when the minors mode is enabled, the problem is all the more serious.
Zhang Yanlai, director of Zhejiang Kenting Law Firm and a long-standing legal adviser to dozens of leading Internet companies, analyzed that the current problems with AI role-play chat software not only illustrate shortcomings in platforms' internal governance but also highlight the importance of external supervision mechanisms. He explained that AI role-play chat software is built on large-model technology; while that technology brings unprecedented innovation and flexibility, it can also bring unpredictability and potential problems in content generation, which is why external regulatory mechanisms are needed.

In Liu Xiaochun's view, strengthening content review is a necessary step before a large language model goes online, covering end-to-end compliance tuning from front-end data training to content output. At present, large language models in China must undergo evaluation and filing, a process in which management rules and evaluation standards are set in advance on questions such as whether the output content is lawful and compliant and whether it is suitable for minors. Under current regulations, the output of harmful content should be avoided during the model's training and fine-tuning stages, and a dedicated team should supervise the intelligent interception of harmful information.

The Cyberspace Administration of China recently released the Guidelines for the Construction of the Minors Mode on the Mobile Internet, which lay out an overall plan for building the minors mode and encourage joint participation by mobile smart terminals, applications, app distribution platforms, and other parties.
According to Zhu Wei, deputy director of the Communication Law Research Center at China University of Political Science and Law, the guidelines make clear that the minors mode is not a decoration: it requires multi-party coordination, and AI-generated content in particular should be consistent with the youth mode.

Experts interviewed believe that a key question for the youth mode is how to strengthen the content review mechanism so that the technology can effectively screen out harmful information. For AI role-play chat software, and its youth mode in particular, content review should be strengthened to ensure inappropriate conversations are screened out and blocked, and platforms need to subject their AI models to ethical review to ensure the content they generate complies with legal and regulatory requirements.

"At the legal level, although some principled provisions provide a general framework, in concrete practice developers and technical service providers still need to keep accumulating experience and exploring solutions to the various problems encountered in real life, so as to build safe, reliable AI models that truly meet minors' needs and provide a strong guarantee for their healthy growth," Zhang Yanlai said. Since the behavior of AI virtual characters is the result of platform design and management, the platform has a responsibility to supervise and optimize its AI models, prevent the AI from harming users, and ensure both the healthy development of AI models and the full protection of users' rights.

Zhu Wei noted that some AI chat apps may simply be unsuitable for minors, and restrictions should therefore be imposed at the level of distribution stores and mobile terminals to ensure minors cannot download and use them.
For apps already downloaded, parents should enable features such as youth mode or usage-time limits. This mode must be implemented not only on the user side but also at the content-output level, that is, through a content review mechanism applied to the dialogue the algorithm generates. More sophisticated technical means and management measures are needed to make the youth mode genuinely effective.

"To prevent the output of violent or insulting content, different technical means can be adopted, such as adjustments during the training phase so that the model itself can recognize such content, while at the output end the service provider screens and reviews again, achieving double protection at the front end and the back end," Liu Xiaochun said. Because the data sources for AI role-play chat software are so broad, whether online novels or other content, technical means are needed to prevent inappropriate output. Technically, screening mechanisms can already reduce or eliminate the output of pornographic, violent, or insulting content. Still, some software may be incompletely tuned or tested, and unfiled software may even circulate in black and gray markets; here supervision should be strengthened, public reporting encouraged, and the relevant authorities should investigate and act.

Zhang Yanlai also noted that the law does not yet clearly classify the data sources behind AI characters' responses, especially where content aimed at minors is concerned. Given the complexity and multi-dimensionality of the issue, legal provisions tend to state principles; concrete guidance can be implemented later through the formulation of relevant standards.
At the operational level, Zhang Yanlai suggested optimizing the screening mechanisms of large language models, with a focus on the content-fence system, from two directions. First, at the development level, content fences need to be built in a targeted way: especially when novel corpora are used for training, developers should consider how to optimize the fences to identify and intercept potentially pornographic and violent content more effectively. Second, however well optimized, the technology itself has limits and some content will slip through the net, so a dedicated team is needed to supervise the model and promptly adjust it or its content-fence algorithms. "To make the fence system more effective, we must both work hard on development beforehand and keep improving it through review afterward. The two complement each other and together may yield much better results," Zhang Yanlai said.
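The "double protection at the front end and the back end" that the experts describe can be pictured as a two-stage filter around the model call. The sketch below is purely illustrative: `violates_policy` uses a tiny keyword blocklist as a stand-in for the trained classifiers a real content fence would rely on, and `generate` is a hypothetical placeholder for the large-model call, not any real API.

```python
import re

# Hypothetical blocklist standing in for a trained classifier.
# A production content fence would not rely on keywords alone.
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [r"\bnude\b", r"\bsuicide\b", r"\bkill yourself\b"]
]


def violates_policy(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)


def guarded_reply(user_input: str, generate) -> str:
    """Two-stage content fence around a generation function.

    `generate` is a placeholder for the large-model call.
    """
    # Front-end check: refuse to generate from disallowed input.
    if violates_policy(user_input):
        return "[blocked: input violates content policy]"
    reply = generate(user_input)
    # Back-end check: screen the model's output again before delivery.
    if violates_policy(reply):
        return "[blocked: generated content withheld]"
    return reply


if __name__ == "__main__":
    echo = lambda s: f"You said: {s}"
    print(guarded_reply("hello there", echo))
    print(guarded_reply("send me a nude photo", echo))
```

The design mirrors the article's point that the fence needs both ends: even if the input check misses something, the output check gives a second chance to intercept it, and a review team would then tune the patterns or classifier when content still slips through.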
Supervision of AI role-play chat software is urgent. Government, platforms, parents, and all sectors of society need to work together to strengthen oversight and improve mechanisms so as to protect the healthy growth of minors. Only through multi-party collaboration can the risks posed by AI role-play chat software be effectively addressed and a safe, healthy online environment be created for children.