Recently, a large number of bloggers suspected to be AI-generated have emerged on the Xiaohongshu platform, attracting widespread attention. Their highly uniform facial expressions and unnaturally smooth skin make them easy to identify as AI creations. More worryingly, these AI bloggers have begun taking on advertising work, mainly for health products and skin-care products. Much like the earlier "fake Jin Dong" face-swapping videos on Douyin, some middle-aged and elderly users struggle to tell real from fake and are at risk of being deceived, which could easily lead to fraud and broader social harm.

Meanwhile, the number of AI-generated bloggers is surging while platform oversight lags behind, exposing the shortcomings of the existing regulatory mechanism. Although Xiaohongshu and other platforms have issued preliminary management measures, enforcement remains difficult and stronger supervision is urgently needed. Xiaohongshu and other relevant platforms should tighten oversight of AI-generated content, improve review mechanisms, help users build the ability to recognize such content, and actively explore technical means to identify and combat AI-generated misinformation, so as to safeguard the healthy development of the platform ecosystem and the security of users' rights and interests. Meeting this emerging challenge will require platforms, regulators, and technology companies to work together.