A recent study revealed the limitations of ChatGPT-4 in pediatric medical diagnosis. Out of 100 pediatric cases, the model diagnosed only 17 correctly, an accuracy of 17% that falls below its performance on the general medical cases evaluated a year earlier. Most of the incorrect diagnoses were concentrated in the same organ system as the correct answer.

The researchers suggested that more targeted training and access to accurate medical literature could improve the pediatric diagnostic accuracy of large language model chatbots. At the same time, the study highlights the irreplaceable clinical experience of human pediatricians: in fields such as medicine that demand a high degree of expertise and experienced judgment, artificial intelligence should be applied with caution, and human professionals must continue to play the central role. How to better combine artificial intelligence with human expertise will remain an important direction for exploration in the medical field.