The rapid development of large language models (LLMs) has made information easier to obtain, but it has also introduced new challenges. Recent research shows that LLMs risk spreading false information when handling factual statements, conspiracy theories, and controversial topics. This article analyzes the potential risks of such models and their possible negative impacts, and explores directions for future improvement.
Recent research reveals that large language models can spread false information, particularly when responding to statements involving facts, conspiracy theories, and other contested topics. The study highlighted ChatGPT's frequent errors, self-contradictions, and repetition of harmful misinformation, and noted that context and the way a question is phrased can affect how strongly the model endorses a false statement. This raises concerns about the potential dangers of these models, since they may absorb incorrect information during training.

Advances in large language model technology need to be matched by risk assessment and mitigation measures. Future research should focus on improving models' ability to evaluate information and on reducing the probability that they spread falsehoods, so that they can be applied safely and reliably across domains. Only then can the advantages of LLMs be fully realized and their potential harms avoided.