AI ethicists at the University of Cambridge have warned that AI assistants could one day predict and influence our decisions without our awareness, and even sell these "intent signals" to companies. The researchers call this lucrative but troubling digital marketplace the "intention economy." This article examines how it would operate, the risks it poses, the moves technology giants are already making in the field, and its implications for society.
In the future, AI assistants may predict and influence our decisions at an early stage, selling these emerging "intentions" in real time to companies positioned to fill the need, before we are even aware of having made a choice. This is not science fiction but a warning from AI ethicists at the University of Cambridge, who argue that we are at the beginning of a "lucrative but troubling new market for digital intent signals," one affecting everything from buying movie tickets to voting for candidates. They call it the "intention economy."
Researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion of generative AI, together with our growing familiarity with chatbots, has opened a new frontier of "persuasive technology," one already hinted at in recent announcements from the tech giants.
Human-like AI agents, such as chatbot assistants, digital mentors, and even virtual romantic partners, will have access to vast amounts of personal psychological and behavioral data, often collected through casual conversation. Not only will these AIs know our online habits; they will also have an uncanny ability to connect with us in ways that feel comfortable, mimicking personalities and anticipating the responses we want to hear. Researchers warn that this level of trust and understanding will make social manipulation possible at scale.
Dr. Yaqub Chaudhary, a visiting scholar at LCFI, said: "The enormous resources being invested in positioning AI assistants across every area of life raise the question of whose interests these so-called assistants are really designed to serve." He emphasized that what people say in a conversation, how they say it, and the real-time inferences drawn from it are far more private than a mere record of an online interaction. "We warn that AI tools are already being developed to elicit, infer, collect, record, understand, predict, and ultimately manipulate and commercialize human plans and purposes."
Dr. Jonnie Penn, a historian of technology at the University of Cambridge's LCFI, points out: "Attention has been the currency of the internet for decades. Sharing your attention with social media platforms such as Facebook and Instagram drives the online economy." He warned: "If left unregulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who locate, channel, and sell human intention."
In a paper in the Harvard Data Science Review, Dr. Penn and Dr. Chaudhary argue that the intention economy will be a "temporalization" of the attention economy: it will profile how a user's attention and communication style connect to patterns of behavior and to the decisions they ultimately make. "While some intents are ephemeral, classifying and targeting those that are long-lasting will be very profitable for advertisers," Dr. Chaudhary explains.
In the intention economy, large language models (LLMs) could be used to cheaply target a user's tone of voice, political stance, vocabulary, age, gender, online history, and even susceptibility to flattery. That information would then be linked to a brokered bidding network to maximize a specific goal, such as selling movie tickets ("You mentioned feeling overworked; shall I book you that movie ticket we talked about?"). The researchers believe this could extend to steering conversations in the service of particular platforms, advertisers, businesses, or even political organizations.
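To make the mechanism concrete, the flow the researchers describe resembles today's real-time ad exchanges, with an inferred intention taking the place of an ad impression. The sketch below is purely illustrative: the names, fields, and pricing rule are all hypothetical, and it models only the auction step (a standard second-price auction, as used in ad exchanges), not any actual product.

```python
from dataclasses import dataclass

@dataclass
class IntentSignal:
    """A hypothetical packaged 'intent signal' inferred from conversation."""
    topic: str          # e.g. "cinema", inferred from the user's chat
    confidence: float   # model's estimate that the intent is real, 0-1
    durability: str     # "ephemeral" or "long-lasting"

def run_intent_auction(signal, bids):
    """Award the signal to the highest bidder at the second-highest price.

    bids: dict mapping advertiser name -> bid in dollars.
    Returns (winner, price_paid), or None if there are no bids.
    """
    if not bids:
        return None
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else top_bid
    # Hypothetical premium for durable intents, reflecting the researchers'
    # point that long-lasting intentions are the most profitable to target.
    if signal.durability == "long-lasting":
        price *= 1.5
    return winner, round(price, 2)

signal = IntentSignal(topic="cinema", confidence=0.8, durability="ephemeral")
print(run_intent_auction(signal, {"TicketCo": 0.40, "StreamCo": 0.25}))
# -> ('TicketCo', 0.25)
```

The unsettling part is not the auction itself, which is decades-old ad-tech machinery, but the provenance of the signal: here it would be inferred in real time from a private conversation rather than from a page visit.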
While the researchers believe the intention economy remains a "vision" for the tech industry for now, they are tracking early signs of the trend through published research and hints from several major tech companies. These include OpenAI's open call, in a 2023 blog post, for "data that expresses human intent in any language, topic, and format," and Shopify's product director speaking at a conference that same year about chatbots' ability to "explicitly capture users' intent."
Nvidia's CEO has spoken publicly about using LLMs to understand intentions and desires, and Meta released its "Intentonomy" research back in 2021, a dataset for understanding human intent. In 2024, Apple's new App Intents developer framework for connecting apps to Siri included protocols for "predicting actions someone might take in the future" and for recommending the app to someone "using predictions you [the developer] provide."
Dr. Chaudhary noted that Meta's AI agent CICERO is said to have achieved human-level performance in Diplomacy, a game that relies on inferring and anticipating intentions and on using persuasive dialogue to advance one's position. He warned: "These companies are already selling our attention. To gain a commercial edge, the logical next step is to use the technology they are developing to predict our intentions, and to sell our desires before we have even fully understood what they are."
Dr. Penn noted that these developments were not necessarily bad, but they could have devastating consequences. "Public awareness of what's coming is key to making sure we don't go down the wrong path," he said.
In sum, the rise of the "intention economy" promises convenience but carries real risks. Striking a balance between technological development and ethical norms will require attention and effort from society as a whole, and a clear understanding of the intention economy, paired with effective regulation, will be key to protecting personal privacy and social stability in the years ahead.