Recently, researchers from Microsoft Research and Carnegie Mellon University jointly published a study on how knowledge workers use generative AI tools such as Copilot and ChatGPT. By surveying 319 knowledge workers who use generative AI at least weekly, the team explored in depth how these tools affect users' critical thinking, and uncovered some worrying patterns. The results point to a delicate balance between the convenience of AI tools and users' critical thinking: excessive reliance on AI may erode critical thinking ability, a finding that deserves careful consideration.
The results show that workers who are confident in their own ability at a task tend to think critically about generative AI output. Those who lack confidence in a task, by contrast, tend to treat the AI's answer as sufficient and give it no further thought. This pattern caught the researchers' attention: over-reliance on AI tools, they note, may lead to a decline in critical thinking ability.
The study notes that "confidence in AI is associated with reduced critical thinking effort, while self-confidence is associated with enhanced critical thinking." This means that designers of enterprise AI tools should consider how to balance these two factors. The researchers suggest that AI tools include mechanisms that support long-term skill development and encourage users to engage reflectively with AI-generated output.
The researchers also note that merely explaining how the AI reached its conclusions is not enough. Good AI tools should actively foster users' critical thinking through deliberate design strategies and provide the necessary support. They stress that knowledge workers should apply critical thinking in their daily work to verify AI output rather than depending on it excessively.
The study's conclusions emphasize that as AI becomes integrated into the workplace, knowledge workers need to maintain foundational skills in information gathering and problem solving to avoid over-dependence on AI. They should be trained in skills such as information verification, answer integration, and task management.
The paper will be presented at the 2025 Human-Computer Interaction Conference, where the research team hopes it will draw wide attention to the impact of generative AI.
Key points:
Research shows that trusting generative AI output may reduce critical thinking among knowledge workers.
Self-confidence is positively correlated with critical thinking, and the design of enterprise AI tools needs to account for this balance.
Knowledge workers should be trained to maintain basic information-gathering and problem-solving capabilities to avoid over-reliance on AI.
This research offers valuable guidance on how to use AI tools effectively in the AI era while maintaining and improving one's own critical thinking. Future design and deployment of AI tools should place greater emphasis on cultivating users' critical thinking and guarding against the negative effects of over-dependence.