In recent years, artificial intelligence has developed rapidly, and its applications have spread into nearly every aspect of academic research. A recent study examined the impact of AI on peer review, drawing widespread attention to AI-assisted writing and its implications for scientific quality control. Beyond raising concerns about academic integrity, the findings challenge the traditional peer review mechanism itself. The study's main findings are summarized below.
The researchers found that in peer reviews at top AI conferences in 2023-2024, text generated by models such as ChatGPT accounted for as much as 17% of review content. Reviews containing AI-generated text were more likely to be submitted close to the deadline, to lack scholarly citations, and to show less reviewer engagement. The study also raises open questions, such as whether reviewers should be required to disclose their use of AI when evaluating manuscripts. As AI-generated text becomes more prevalent, it has direct implications for scientific quality control and calls for a reconsideration of how hybrid human-AI knowledge work should be valued.
These results are a warning that the role of artificial intelligence in academic research needs to be re-examined, and that corresponding norms and standards must be developed to safeguard the quality and integrity of scholarship. Going forward, the academic community will need to adapt to the development of AI technology: exploring how to use AI tools effectively while guarding against their potential negative effects, so that academic research can develop in a healthy direction. Achieving this will require academics, technology developers, and policymakers to work together to build a fairer and more transparent academic ecosystem.