A groundbreaking study uses generative AI, specifically large language models (LLMs), to build an architecture that can accurately simulate human behavior across a variety of situations. The work gives social science research an unprecedentedly powerful tool for understanding and predicting human behavior, which could in turn inform social policy and business strategy. The research team collected a large amount of data through in-depth interviews, used it to drive the model, and built over a thousand virtual "clones"; the clones' performance across a range of tests was highly consistent with that of the real participants.
A new study shows that generative AI models, in particular large language models (LLMs), can be used to build an architecture that accurately simulates human behavior in a variety of situations. The findings provide a powerful new tool for social science research.
The researchers first recruited more than 1,000 participants from diverse backgrounds across the United States and conducted two-hour in-depth interviews with each of them, collecting information about their life experiences, opinions, and values. They then combined these interview transcripts with a large language model to build a "generative agent architecture."
Based on the interviews, the architecture creates a virtual "clone" of each participant, each with its own personality and behavioral patterns. The researchers then assessed the clones' behavior through a series of standard social science instruments, such as the Big Five personality test and behavioral economics games.
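To make the idea concrete, here is a minimal sketch of how an interview transcript could be turned into a "clone" by conditioning an LLM on it and asking it to answer survey items in the participant's voice. This is not the authors' actual implementation; the prompt wording, the model name, and the file name are illustrative assumptions.

```python
# Minimal sketch of a "generative agent": condition an LLM on a participant's
# interview transcript, then ask it to answer survey items as that person would.
# NOT the paper's implementation; prompt wording and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_agent(interview_transcript: str, question: str, options: list[str]) -> str:
    """Ask the simulated participant a multiple-choice survey question."""
    system = (
        "You are role-playing a specific person. Below is a transcript of a "
        "two-hour interview with them about their life, opinions, and values. "
        "Answer every question exactly as this person would.\n\n"
        f"INTERVIEW TRANSCRIPT:\n{interview_transcript}"
    )
    user = (
        f"Question: {question}\n"
        f"Options: {', '.join(options)}\n"
        "Answer with one option only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper's choice may differ
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # Hypothetical file holding one participant's interview transcript.
    transcript = open("participant_0001_interview.txt").read()
    # A Big Five-style item posed to the simulated participant.
    answer = ask_agent(
        transcript,
        "I see myself as someone who is talkative.",
        ["Disagree strongly", "Disagree a little", "Neutral",
         "Agree a little", "Agree strongly"],
    )
    print(answer)
```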
Strikingly, the clones' test performance was highly consistent with that of the real participants. The clones not only reproduced their counterparts' questionnaire responses with high accuracy, but also predicted their behavior in experiments. In an experiment on how power affects trust, for example, the "clones" behaved like the real participants: the high-power group showed significantly less trust than the low-power group.
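As a rough illustration of what "highly consistent" means in practice, one simple approach is to compute the item-level agreement rate between a clone's answers and its real participant's answers. This is an illustrative metric only, not necessarily the exact measure reported in the paper.

```python
# Illustrative agreement metric: fraction of survey items on which the clone
# and the real participant gave the same answer. Not the paper's exact measure.
def agreement_rate(clone_answers: list[str], participant_answers: list[str]) -> float:
    assert len(clone_answers) == len(participant_answers)
    matches = sum(c == p for c, p in zip(clone_answers, participant_answers))
    return matches / len(participant_answers)


# Hypothetical example: 4 of 5 items match -> agreement of 0.8.
print(agreement_rate(
    ["Agree", "Neutral", "Agree", "Disagree", "Agree"],
    ["Agree", "Neutral", "Agree", "Disagree", "Neutral"],
))
```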
This research shows that generative AI models can be used to create highly realistic "virtual humans" that predict the behavior of real people. That opens up a completely new approach for social science research: for example, using these "virtual humans" to test the effects of new public health policies or marketing strategies without running large-scale experiments on real people.
The researchers also found that relying solely on demographic information to construct "virtual humans" is not enough; only when combined with in-depth interviews can an individual's behavior be simulated accurately. Each person carries unique experiences and perspectives, and that information is critical to understanding and predicting their behavior.
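For contrast with the interview-based sketch above, a demographics-only baseline might look like the following: the agent is conditioned on a short demographic profile instead of the full transcript. The field names and wording are illustrative assumptions, not taken from the paper.

```python
# Sketch of a demographics-only baseline: a short profile replaces the full
# interview transcript in the system prompt. Field names are illustrative.
def demographic_persona(age: int, gender: str, ethnicity: str,
                        income: str, politics: str) -> str:
    return (
        "You are role-playing a person with the following profile: "
        f"age {age}, {gender}, {ethnicity}, household income {income}, "
        f"politically {politics}. Answer every question as this person would."
    )


# This profile would stand in for the interview transcript when building the
# system prompt; the study found such agents track real behavior less closely.
print(demographic_persona(34, "female", "Hispanic", "$50k-$75k", "moderate"))
```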
To protect participants' privacy, the researchers plan to build an "agent library" and provide access in two ways: open access to aggregated responses for fixed tasks, and restricted access to individual-level data for open-ended tasks. This lets researchers use the "virtual humans" while minimizing the privacy risks associated with the interview content.
This result undoubtedly opens a new door for social science research; it remains to be seen what far-reaching impacts it will have.
Paper address: https://arxiv.org/pdf/2411.10109
Beyond providing new tools for social science, the research also opens new possibilities in other fields, such as public policy formulation, marketing, and behavioral prediction. As the technology continues to develop and mature, its impact is likely to deepen further.