A new study from Ohio State University shows that ChatGPT performs well at writing psychotherapy responses, often scoring higher than professional therapists. The study, published in the journal PLOS Mental Health, tested more than 800 participants using simulated couple-therapy scenarios and found that participants had difficulty distinguishing ChatGPT's responses from those of human therapists. The researchers note that ChatGPT tends to write longer responses with more nouns and adjectives, which may be related to its higher scores, prompting broader discussion of artificial intelligence in psychotherapy.
According to a study published February 12, 2025 in the journal PLOS Mental Health, H. Dorian Hatch of Ohio State University and his team found that psychotherapy responses generated by ChatGPT were usually rated higher than those written by therapists. The finding has drawn widespread attention to whether machines can serve as psychotherapists, especially as the advantages of generative artificial intelligence become increasingly apparent.
In the study, the research team tested more than 800 participants, presenting them with 18 simulated couple-therapy scenarios. The results showed that although participants noticed differences in language patterns, they could rarely tell whether a response was written by ChatGPT or a psychotherapist. This echoes Alan Turing's prediction that humans would have difficulty distinguishing machine-written responses from human-written ones. Even more surprisingly, ChatGPT's responses generally received higher ratings on the core guiding principles of psychotherapy.
A deeper analysis revealed that ChatGPT typically generated longer responses than the therapists, and that even after controlling for response length, its responses contained more nouns and adjectives. Because nouns often describe people, places, and things, and adjectives supply additional detail, this pattern suggests that ChatGPT may be giving patients more context. That richer contextualization may explain why participants rated ChatGPT's responses higher on the common factors of psychotherapy.
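To make this kind of linguistic comparison concrete, here is a minimal, illustrative sketch, not the authors' actual analysis, of how one might measure noun and adjective density while normalizing for response length, using spaCy's part-of-speech tagger. The sample replies and the function name `pos_density` are invented for demonstration.

```python
# Illustrative sketch, not the study's code: compute the share of nouns
# and adjectives per token so responses of different lengths compare fairly.
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def pos_density(text: str) -> dict:
    """Return noun/adjective counts normalized by response length in tokens."""
    doc = nlp(text)
    tokens = [t for t in doc if not t.is_punct]
    n = len(tokens) or 1  # avoid division by zero on empty input
    return {
        "tokens": n,
        "noun_share": sum(t.pos_ in ("NOUN", "PROPN") for t in tokens) / n,
        "adj_share": sum(t.pos_ == "ADJ" for t in tokens) / n,
    }

# Hypothetical example replies, invented for demonstration only.
replies = {
    "therapist": "That sounds really hard. Tell me more about the argument.",
    "chatbot": ("It sounds like these recurring arguments about household "
                "responsibilities leave you feeling unheard, exhausted, and "
                "disconnected from your partner."),
}
for label, text in replies.items():
    print(label, pos_density(text))
```

Dividing the counts by token length is what "controlling for response length" amounts to in its simplest form: a longer reply cannot score higher merely by containing more words.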
The researchers believe these results may point to ChatGPT's potential to improve the psychotherapy process, and future research may lead to the development of new psychotherapy interventions. Given the growing potential of generative artificial intelligence in therapeutic settings, the authors call on mental health experts to improve their technical literacy, ensuring that AI models are trained and supervised by responsible professionals and thereby improving the quality and accessibility of mental health services.
"Since ELIZA was launched nearly 60 years ago, researchers have been discussing whether AI can act as a psychotherapist. While there are still many important questions to be solved, our findings suggest that the answer may be 'yes'. We Hopefully this study will prompt the public and psychological practitioners to think about the ethics, feasibility and practicality of integration of AI with mental health treatment.”
Key points:
ChatGPT's psychotherapy responses usually received higher ratings than those of professional psychotherapists.
Participants could barely distinguish machine-written from human-written therapy responses.
The study suggests AI may play an active role in psychotherapy, and that mental health experts should strengthen their technical literacy.
This study offers a new perspective on applying artificial intelligence in mental health while raising important questions of ethics, supervision, and technical literacy. Further in-depth research and discussion are needed to ensure that AI can be applied safely and effectively in psychological treatment and benefit more patients.