Downcodes editor reports: A recent study reveals a surprising finding about AI-generated avatar accounts on the X platform. A German research team analyzed nearly 15 million X accounts and found that only a very small fraction (0.052%) used AI-generated avatars. These accounts show distinctive characteristics: they have few followers, follow few accounts, and were mostly created in 2023, some even in batches, which suggests they are most likely fake accounts set up to spread specific messages.
Image note: The image is AI-generated; image licensed from service provider Midjourney.
These accounts show clear patterns: they have few followers, follow few accounts, more than half were created in 2023, and some were created in batches within a few hours. Jonas Ricker, the study's lead author, pointed out that this strongly suggests these accounts are not real users. Over the nine-month observation period, X closed more than half of them.
In a manual content analysis of 1,000 accounts with AI-generated avatars, the researchers found that the content focused mainly on politics, especially topics related to Trump, COVID-19 vaccines, and the war in Ukraine. Lottery and financial topics such as cryptocurrencies also appeared frequently. The researchers speculate that these accounts may have been created specifically to spread disinformation and political propaganda: the accounts were created in large numbers and shared identical metadata, suggesting they may be part of an organized network.
Although the study did not examine how widely content from these AI-avatar accounts spread, their low average follower counts suggest their influence is limited. This is consistent with a study by OpenAI, which found that social media accounts spreading AI-generated political propaganda received few responses, limiting their effectiveness. The research team plans to further automate the detection of AI-generated fake avatars in future analyses and to incorporate newer models to better understand the impact of this technology on social media platforms.
This research offers valuable insight into how AI-generated avatars are being used on social media platforms, and it is a reminder to stay alert to the use of AI technology to spread malicious information. Going forward, more effective AI detection technology will be an important tool for keeping the online environment safe. The Downcodes editor will continue to follow developments and bring you more technology news.