A recent study identified subtle differences in how users perceive AI-generated fake news compared with human-generated fake news. Participants' willingness to share AI-generated fake news was not significantly different from their willingness to share human-generated fake news, raising concerns about how misinformation spreads and about users' ability to discern it. Socioeconomic status also influenced users' trust in information, underscoring the importance of information literacy education.
The study calls for stronger education, the introduction of new labeling schemes, and possibly regulatory measures to protect vulnerable groups. Its findings emphasize that technical means alone may not be enough to curb the spread of fake news: education, labeling, and potential regulation need to be combined to effectively protect the public from false information and to build a healthier information ecosystem. Future research should further explore how different types of fake news affect user cognition, providing a basis for more effective response strategies.