One cent for seven articles, tens of thousands of yuan a day: be wary of the spread of AI-generated articles
Author: Eve Cole
Update Time: 2024-11-22 10:30:01
One cent buys seven AI-generated articles, and the operation can bring in tens of thousands of yuan a day. Media outlets recently exposed an AI article industry chain: according to the reports, some practitioners focus on fixed regions and use AI to churn out 4,000 to 7,000 marketing-account articles every day. These articles are a mixed bag, and it is hard to tell the true from the false; yet because they attract considerable traffic at almost no cost, they have made their operators a great deal of money, and the "career" only seems to keep growing.

This batch-generated, traffic-first model carries enormous hidden dangers. In January this year, a post about an "explosion in Xi'an" appeared on an online platform and was later found to be the work of an MCN agency. Investigators found that the person in charge ran 5 "MCN organizations" with 842 accounts in total, capable of generating 4,000 to 7,000 articles in a day with essentially no human involvement. The "Xi'an explosion" was only one of his "works", but reposting by those accounts turned it into a public incident and ultimately brought the operation down.

The reason this has grown into an industry chain is that AI generation costs almost nothing while the profits are huge. With the advance and spread of AI technology, the cost of generating an article has become nearly negligible: according to a media survey, using one domestic AI tool, a single article costs 0.00138 yuan, meaning seven articles can be produced for one cent. In the case of the organization under investigation, police estimated the average income per article at 1.43 yuan, and "based on preliminary estimates, the daily income exceeds 10,000 yuan." With revenue per article roughly a thousand times its cost, the profit margin is enormous, and the scheme has become a "good business".

Making money from the articles themselves is not the whole point. Media reports show that these MCN organizations go on to expand their business through buying and selling accounts, advertising and marketing, and the like, becoming a "force" on the internet that cannot be ignored. The AI article industry chain has become the starting point of this series of "commercial territories", and it deserves vigilance.

In fact, AI-produced fake news is on the rise. Sensational stories such as "an explosion occurred in a private house in a certain place", "a certain city restricted takeout delivery", and "a certain actress has sadly passed away" were later confirmed, one by one, to be AI fabrications.

The rules governing the use of AI are clear. The Provisions on the Administration of Deep Synthesis of Internet Information Services, in force since January 2023, state that when deep synthesis technology is used, prominent labels should be placed at reasonable positions in the generated or edited content to alert the public that deep synthesis is involved. In April this year, the Secretariat of the Central Cyberspace Affairs Commission issued the Notice on Carrying Out the Special Campaign to Rectify "Self-Media" Chasing Traffic Without Limits, requiring that information generated with technologies such as AI be clearly labeled as technologically generated.
Clearly, though, many practitioners in the AI article industry chain set out to avoid labeling precisely so they can deceive, and they ignore such rules as a matter of course. This poses new problems for internet governance. Compared with text typed by hand, AI generates words far faster and imitates human writing remarkably well; readers are easily taken in, and the output can even be used to disrupt the public-opinion ecosystem and manipulate public attention. How governance can keep pace with this technology is a major challenge.

Regulators and internet platforms should therefore strengthen coordination, improve their technical identification capabilities, and close the loopholes that allow AI to be abused. The relevant departments could, for example, establish supporting technical standards and keep the underlying databases dynamically updated, so that AI-generated false information is monitored in real time and identified and handled as early as possible. At the same time, detailed enforcement rules should be clarified, and those who violate laws and regulations should be dealt with severely.

Platforms, likewise, must shoulder their primary responsibility. This AI-generated fake news ultimately spreads on their services, and much of the money it earns comes through "platform subsidies"; platforms must not look the other way simply because they share in the dividends. They should act more proactively: refine their technical measures, promptly restrict the traffic of or ban offending accounts, and report relevant leads to the authorities so that information is shared.

Finally, every netizen in cyberspace should improve their internet literacy. People should recognize that today's technology is no longer what it once was and that AI-generated content is everywhere; toward overly exaggerated, sensational news, a little more caution and more cross-checking are in order. If netizens maintain a rational, prudent attitude, they will not be easily led astray, these flashy AI articles will lose their market, and this AI "business" will in time be weeded out, to the benefit of the online ecosystem.