Google's latest AI model, PaliGemma 2, is said to be able to identify human emotions through image analysis, a claim that has caused widespread controversy. The model is built on the Gemma open model family and can generate detailed image descriptions, including the behavior and emotions of the people in a scene. Experts, however, have strongly questioned the scientific validity and safety of this technology, arguing that its theoretical foundations are weak and that it carries serious bias and ethical risks.
Google recently launched a new family of AI models, PaliGemma 2, whose most eye-catching feature is the claim that it can "recognize" human emotions through image analysis. That claim quickly drew widespread discussion and serious doubts from academics and technology-ethics experts.
Built on the Gemma open model, the system generates detailed image descriptions that go beyond simple object recognition, attempting to describe the behavior and emotions of the people in an image. Many experts, however, have issued serious warnings about both the scientific basis and the potential risks of this capability.
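For readers unfamiliar with how such open models are used, the following is a minimal sketch of prompting a PaliGemma-style checkpoint for an image description via the Hugging Face transformers library. The checkpoint id, the image URL, and the "describe en" prompt below are illustrative assumptions, not details confirmed by this article.

```python
# Minimal sketch: asking a PaliGemma-style open model to describe an image.
# Assumptions: transformers and Pillow are installed, and the checkpoint id
# "google/paligemma2-3b-pt-224" is available; both the checkpoint id and the
# prompt format are illustrative, not taken from the article.
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

# Load any test image; the URL here is a placeholder.
image_url = "https://example.com/photo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

# PaliGemma-style models take a short task prompt alongside the image.
prompt = "<image>describe en"
inputs = processor(text=prompt, images=image, return_tensors="pt")

# Generate a free-text description. Nothing here verifies whether any
# "emotion" the model mentions is accurate, which is the experts' point.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

The point of the sketch is simply how little effort it takes to produce such descriptions from an open checkpoint; ease of use is part of why experts worry about misuse.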
Sandra Wachter, a professor of data ethics at the Oxford Internet Institute, puts it bluntly: trying to "read" human emotions with AI is like "asking a Magic 8 Ball for advice." The metaphor vividly captures how flimsy the premise of emotion-recognition technology is.
In fact, the scientific basis for emotion recognition is itself fragile. The theory of six basic emotions proposed early on by psychologist Paul Ekman has been widely questioned by subsequent research, and people from different cultures express emotions in markedly different ways, making universal emotion recognition a near-impossible task.
Mike Cook, an AI researcher at Queen Mary University, is even more direct: emotion detection is not possible in the general case. Although people often believe they can judge others' emotions by observation, that ability is far more complex and unreliable than it seems.
More worrying still, such AI systems often carry serious biases. Multiple studies have shown that facial-analysis models can assign different emotional judgments to people of different skin tones, which would only deepen existing social discrimination.
Google says it tested PaliGemma 2 extensively and that the model performed well on some benchmarks, but experts remain deeply skeptical, arguing that limited testing alone cannot fully assess the ethical risks the technology may pose.
Most dangerous of all, an open model like this could be misused in high-stakes areas such as employment, education, and law enforcement, causing real harm to vulnerable groups. As Professor Wachter warns, this could lead to a frightening "runaway" future in which people's access to jobs, loans, and education is decided by the "emotional judgment" of an unreliable AI system.
With artificial intelligence developing so rapidly, technological innovation matters, but ethics and safety cannot be an afterthought. The arrival of PaliGemma 2 is another reminder to keep a clear-eyed, critical view of AI technology.
The controversy around PaliGemma 2 underscores the need to treat AI with caution, especially in areas that touch human emotion and social justice. As AI develops, ethical considerations should come first, so that technology is not misused and does not cause irreparable social harm.