A recent study of the X platform (formerly Twitter) has attracted attention. Researchers from Ruhr University Bochum, the GESIS Leibniz Institute, and the CISPA Helmholtz Center analyzed nearly 15 million X accounts to explore the characteristics and potential impact of accounts with AI-generated avatars. The results show that the proportion of accounts using AI-generated avatars is extremely low, and that these accounts exhibit distinctive behavior patterns, which raises concerns about the spread of disinformation.
The study found that accounts using AI-generated avatars make up only 0.052% of all X accounts: of the nearly 15 million accounts analyzed, 7,723 used an AI-generated avatar.
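As a quick sanity check on those figures (the article gives the sample size only as "nearly 15 million", so the exact denominator below is an assumption), the reported share works out as follows:

```python
# Back-of-the-envelope check of the reported share.
# Figures are from the article; the sample size is an assumption,
# since the article only says "nearly 15 million".
flagged = 7_723           # accounts with AI-generated avatars
sample = 14_900_000       # assumed total sample size

share = flagged / sample * 100
print(f"{share:.3f}%")    # -> 0.052%, matching the reported figure
```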
(Image note: the illustration was generated by AI; authorized image service provider: Midjourney.)
These accounts share telling characteristics: they have few followers, follow few accounts themselves, and more than half were created in 2023; some were even created in batches within a few hours. "This clearly shows that these accounts are not real users," noted the study's lead author, Jonas Ricker. Over the nine-month observation period, X closed more than half of these accounts.
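The article does not say how such creation bursts were identified, but a minimal sketch of the idea, bucketing accounts by creation hour and flagging crowded buckets, might look like this (the account data, field names, bucket size, and threshold are all illustrative assumptions):

```python
from collections import defaultdict
from datetime import datetime

# Illustrative account records; in practice these would come from the X API.
accounts = [
    {"id": "a1", "created_at": "2023-05-02T14:03:00"},
    {"id": "a2", "created_at": "2023-05-02T14:21:00"},
    {"id": "a3", "created_at": "2023-05-02T14:47:00"},
    {"id": "a4", "created_at": "2023-09-11T08:15:00"},
]

# Bucket accounts by the calendar hour in which they were created.
buckets = defaultdict(list)
for acct in accounts:
    ts = datetime.fromisoformat(acct["created_at"])
    buckets[ts.strftime("%Y-%m-%d %H:00")].append(acct["id"])

# Flag any hour in which suspiciously many accounts appeared
# (the threshold of 3 is arbitrary for this sketch).
for window, ids in sorted(buckets.items()):
    if len(ids) >= 3:
        print(f"possible batch creation in {window}: {ids}")
```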
In a manual content analysis of 1,000 accounts with AI-generated avatars, the researchers found that the accounts focused mainly on political themes, especially issues related to Trump, the COVID-19 vaccine, and the war in Ukraine. Lotteries and financial topics such as cryptocurrency also appeared frequently. The researchers speculated that these accounts may have been created specifically to spread disinformation and political propaganda, because the bulk creation of accounts and their identical metadata suggest that they belong to an organized network.
Although the study did not examine the reach of these accounts, their low average follower and following counts indicate limited influence. This is consistent with an OpenAI finding that social media accounts spreading AI-generated political propaganda received few responses, making such accounts less effective. The research team plans to automate the recognition of AI-generated avatars in future analyses and to incorporate newer generative models, in order to better understand the impact of this technology on social media platforms.
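The article gives no detail on how such recognition might be automated. One heuristic discussed in the deepfake-detection literature is that StyleGAN-family face generators place the eyes at nearly fixed image coordinates; a rough sketch of that check is below, with the landmark extraction stubbed out (any face-landmark library could fill it in) and all coordinates and thresholds being illustrative assumptions rather than values from the study:

```python
import math

# Canonical eye positions (normalized to [0, 1]) that StyleGAN-style
# generators tend to produce. These exact values are assumptions
# for illustration, not figures from the study.
CANONICAL_LEFT = (0.385, 0.462)
CANONICAL_RIGHT = (0.615, 0.462)

def detect_eye_centers(image_path):
    """Placeholder for a real face-landmark detector (e.g., dlib or MediaPipe).

    Returns dummy values so the sketch runs end to end.
    """
    return (0.387, 0.460), (0.614, 0.463)

def gan_alignment_score(image_path):
    # The further the eyes sit from the canonical positions,
    # the less the avatar looks like a StyleGAN output.
    left, right = detect_eye_centers(image_path)
    return max(math.dist(left, CANONICAL_LEFT),
               math.dist(right, CANONICAL_RIGHT))

score = gan_alignment_score("avatar.png")
print("suspiciously GAN-aligned" if score < 0.01 else "not GAN-aligned", score)
```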
Key points:
The research shows that only 0.052% of accounts on the X platform use AI-generated avatars, and most of them are newly created.
The content of accounts with AI-generated avatars mainly involves political, COVID-19, and financial topics.
The batch creation of these accounts and their similar metadata imply that they may belong to an organized network spreading disinformation.
This study provides an important reference for identifying and countering disinformation on social media. It also suggests the need for continued attention to the risks that AI poses in information dissemination and for more effective countermeasures. Future research will further explore how AI-generated avatars spread on social media and what impact they have, to help keep online spaces healthy.