A study from the University of Cambridge has highlighted the manipulative potential of artificial intelligence (AI): AI tools could be used to influence online users' decisions, from consumer purchases to political votes. The researchers predict the emergence of a new market, the "intention economy," in which AI assistants would forecast and manipulate user intentions and sell that information to businesses. The findings have prompted fresh debate over AI ethics and regulation, along with renewed concerns about personal privacy and data security.
According to the Cambridge research, AI tools have the potential to manipulate the decisions of online users, from what they buy to whom they vote for. The study points to the possible emergence of a new market, the "intention economy," in which AI assistants understand, predict and manipulate human intentions and sell that information to the companies that profit from it.
Image note: AI-generated image; image licensing provided by Midjourney.
The research team is based at the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge, and it regards the intention economy as the successor to the attention economy. In the attention economy, social networks sustain themselves by capturing users' attention and serving ads. In the intention economy, technology companies versed in AI would sell information about user motivations, such as plans to book a hotel or views on a political candidate, to the highest bidder.
Dr Jonnie Penn, a technology historian at LCFI, said: "Attention has been the currency of the internet for decades. By sharing their attention on social media platforms such as Facebook and Instagram, users have powered the online economy." He warned that, left unregulated, the intention economy would treat users' motivations as a new currency and would trigger a "gold rush" for human intentions.
The study states that the large language models (LLMs) underpinning AI tools such as ChatGPT will be used to "predict and guide" user behavior, drawing on users' "intent, behavioral and psychological data." Whereas the attention economy lets advertisers win users' attention through real-time bidding, in the intention economy LLMs could probe users' intentions in real time, for example by asking whether a user has considered seeing a particular movie and then offering to book the tickets.
In this emerging intention economy, advertisers would be able to use generative AI tools to create personalized online ads. The study also notes that Cicero, an AI model developed by Meta, Mark Zuckerberg's company, has achieved "human-level" play in the board game Diplomacy, a game that depends on inferring and anticipating opponents' intentions.
The research further explores future scenarios in which Meta might auction a user's intention, such as booking a restaurant, flight or hotel, to advertisers. While an industry dedicated to predicting and bidding on human behavior already exists, AI models would refine these practices into a form that is "highly quantified, dynamic and personalized."
Highlights:
AI tools may manipulate user decisions, giving rise to a new market, the "intention economy."
In the intention economy, users' motivations would become a new currency, and technology companies would sell this information.
By analyzing user data to predict intentions, AI models would help advertisers target ads precisely.
The study sounds an alarm, calling for ethical and regulatory safeguards in the development of AI to prevent malicious use, protect users' rights and data security, and ensure that technological progress benefits humanity.