How do artificial intelligence systems portray teenagers? A research team at the University of Washington investigated this question and found that AI systems exhibit significant bias in their depictions of teenagers, skewing heavily toward negative news. The researchers ran experiments with several AI models in different languages and spoke with youth groups in the United States and Nepal, aiming to reveal the problems in how AI portrays young people and to seek ways to improve it.
As artificial intelligence technology continues to develop, more and more people are paying attention to how AI systems depict teenagers. In one experiment, Robert Wolfe, a doctoral student at the University of Washington, asked an AI system to complete the sentence "This teenager _____ at school." He expected an answer like "study" or "play", but instead received the shocking completion "die". This discovery prompted Wolfe and his team to investigate more deeply how AI portrays teenagers.
(Image note: the picture is AI-generated; image licensing provider: Midjourney)
The research team analyzed two common open-source English-language AI systems and one Nepali-language system, comparing how the models performed across different cultural backgrounds. In the English systems, about 30% of responses referred to social problems such as violence, drug abuse, and mental illness, whereas only about 10% of responses from the Nepali system were negative. The results concerned the team, who found in workshops with teenagers in the United States and Nepal that both groups felt AI systems trained on media data did not accurately represent their cultures.
The research also covered models such as OpenAI's GPT-2 and Meta's LLaMA-2. The researchers gave each system sentence prompts and let it complete the rest. The results showed a large gap between the AI systems' output and teenagers' own life experiences: American teens wanted AI to reflect more diverse identities, while Nepalese teens wanted AI to represent their lives more positively.
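The prompt-completion setup described above can be sketched with the open-source GPT-2 model via the Hugging Face transformers library. This is an illustrative assumption, not the study's actual code; the prompt is adapted from the article's example.

```python
# Illustrative sketch only: the study's actual prompting code is not shown
# in the article. This uses the open-source GPT-2 model through the
# Hugging Face transformers text-generation pipeline to complete a
# sentence prompt, mirroring the fill-in experiment described above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Prompt adapted from the article's example "This teenager _____ at school."
prompt = "This teenager"
result = generator(prompt, max_new_tokens=8, do_sample=False)

# The pipeline returns the prompt plus the model's continuation.
completion = result[0]["generated_text"]
print(completion)
```

Greedy decoding (`do_sample=False`) is used here so the completion is deterministic; the researchers presumably sampled many completions per prompt to measure how often they were negative.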
Although the models used in the study are not the latest versions, the findings reveal fundamental biases in how AI systems depict teenagers. Wolfe said that the training data for AI models tends to overrepresent negative news coverage and overlook the ordinary aspects of teenagers' daily lives. He stressed that fundamental changes are needed to ensure AI systems reflect teenagers' real lives from a broader perspective.
The research team calls for AI model training to draw more on community voices, so that teenagers' own views and experiences become a primary source of training data, rather than relying solely on attention-grabbing negative reports.
Highlights:
The research found that AI systems tend to portray teenagers negatively, with about 30% of the English models' responses referring to social problems.
In workshops with teenagers in the United States and Nepal, both groups said AI did not accurately represent their cultures and lives.
The research team emphasized the need to re-examine the training methods of AI models to better reflect the real experiences of teenagers.
This research offers an important reference for AI model training, emphasizing diversified data sources and attention to underrepresented groups. More work of this kind is needed to ensure that AI systems reflect teenagers' real lives objectively and comprehensively, avoiding the harms of negative portrayals.