Artificial intelligence (AI) tools could be used to manipulate the online public into making decisions – from what to buy to who to vote for – according to researchers at the University of Cambridge.
The paper highlights the emergence of a new marketplace for “digital signals of intent” – known as the “intention economy” – in which AI assistants understand, forecast and manipulate human intentions and sell that information to companies that can profit from it.
The intention economy is presented by researchers at the Leverhulme Centre for the Future of Intelligence (LCFI) in Cambridge as the successor to the attention economy, in which social networks keep users hooked on their platforms and serve them advertisements.
The intention economy involves tech companies with expertise in AI selling what they know about your motivations, from plans for a hotel stay to opinions on a political candidate, to the highest bidder.
“For decades, attention has been the currency of the internet,” said Dr. Jonnie Penn, a historian of technology at LCFI. “Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”
He added: “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.
“We should begin to think about the likely impact of such a market on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”
The study claims that large language models (LLMs), the technology behind AI tools such as the ChatGPT chatbot, will be used to “anticipate and steer” users based on “intentional, behavioral and psychological data”.
The authors argue that the attention economy allows advertisers to buy access to users’ attention in the present via real-time bidding on ad exchanges, or to buy it in the future by acquiring a month’s worth of advertising space on a billboard.
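To make the present-tense half of that market concrete, here is a minimal Python sketch of a second-price auction of the kind that underpins real-time bidding on ad exchanges. The bidder names and dollar amounts are invented for illustration; this is a toy model, not any exchange’s actual implementation.

```python
# Toy model of the second-price auction used in real-time bidding:
# the highest bidder wins one impression of a user's attention but
# pays only the runner-up's price. All names and figures are invented.

def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning_bidder, clearing_price) for a single ad impression."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # The winner pays the second-highest bid, or their own bid if unopposed.
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# Three hypothetical advertisers bid on a single user's attention.
bids = {"cinema_chain": 0.42, "food_delivery": 0.35, "airline": 0.28}
winner, price = run_auction(bids)
print(f"{winner} wins the impression at ${price:.2f}")  # cinema_chain at $0.35
```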
LLMs will also be able to access attention in real time, for example by asking whether a user has thought about seeing a particular film – “have you thought about seeing Spider-Man tonight?” – as well as making suggestions regarding future intentions, for example by asking: “You mentioned feeling overwhelmed, shall I book you that movie ticket we talked about?”
The study proposes a scenario in which these examples are “dynamically generated” to match factors such as a user’s “personal behavioral traces” and “psychological profile.”
“In an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, preference for sycophancy, and so on, in concert with brokered bids, to maximize the likelihood of achieving a given aim (for example, to sell a cinema ticket),” the study suggests. In such a world, an AI model would steer conversations in the service of advertisers, businesses and other third parties.
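To illustrate what “maximizing the likelihood of achieving a given aim” could involve, here is a deliberately crude Python sketch that scores candidate message variants against a user profile and picks the most persuasive one. The profile fields, message variants and scoring rules are all invented assumptions standing in for a learned model; nothing here comes from the study itself.

```python
# Crude sketch of intent-targeted personalization: pick the message
# variant with the highest predicted chance of selling a film ticket,
# conditioned on what is known about the user. The profile fields and
# hand-tuned weights are invented stand-ins for a learned model.

from dataclasses import dataclass

@dataclass
class UserProfile:
    age: int
    prefers_sycophancy: bool      # responds well to flattery
    mentioned_overwhelmed: bool   # recent emotional disclosure

def predicted_conversion(profile: UserProfile, variant: str) -> float:
    """Toy stand-in for a model estimating P(user buys | message, profile)."""
    score = 0.1  # baseline conversion rate
    if profile.prefers_sycophancy and "great taste" in variant:
        score += 0.3
    if profile.mentioned_overwhelmed and "unwind" in variant:
        score += 0.4
    return min(score, 1.0)

variants = [
    "Spider-Man is showing tonight. Book a ticket?",
    "You have great taste in films, and Spider-Man is on tonight.",
    "You mentioned feeling overwhelmed. A film might help you unwind. Book it?",
]

user = UserProfile(age=34, prefers_sycophancy=False, mentioned_overwhelmed=True)
best = max(variants, key=lambda v: predicted_conversion(user, v))
print(best)  # selects the variant that leans on the emotional disclosure
```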
Advertisers will be able to use generative AI tools to create bespoke online ads, the report claims. It also cites the example of an AI model created by Mark Zuckerberg’s Meta, called Cicero, which achieved “human-level” ability to play the board game Diplomacy – a game the authors say depends on inferring and predicting the intentions of opponents.
AI models will be able to adjust their outputs in response to “incoming user-generated data streams,” the study adds, citing research showing that models can infer personal information through everyday exchanges and even “steer” conversations in order to extract more personal information.
The study then discusses a future scenario in which Meta would auction off to advertisers a user’s intention to book a restaurant, flight or hotel. While there already exists an industry dedicated to predicting and bidding on human behavior, the report says, AI models will distill these practices into a “highly quantified, dynamic, and personalized format.”
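What a “highly quantified, dynamic, and personalized format” for intent might look like can be sketched in a few lines. The record layout, confidence field and per-bidder rules below are assumptions invented for illustration, not anything the study specifies.

```python
# Illustrative sketch: an inferred intention packaged as a structured,
# timestamped record and offered to bidders, each of whom sets a minimum
# confidence they are willing to pay for. Every field and bidder name
# here is invented.

from dataclasses import dataclass, field
import time

@dataclass
class IntentSignal:
    user_id: str
    category: str      # e.g. "restaurant", "flight", "hotel"
    confidence: float  # model's estimate that the intent is genuine
    inferred_at: float = field(default_factory=time.time)

def auction_intent(signal: IntentSignal,
                   bids: dict[str, tuple[float, float]]) -> str | None:
    """bids maps bidder -> (amount, minimum confidence). Returns the winner."""
    eligible = {bidder: amount
                for bidder, (amount, min_conf) in bids.items()
                if signal.confidence >= min_conf}
    return max(eligible, key=eligible.get) if eligible else None

signal = IntentSignal(user_id="u123", category="restaurant", confidence=0.85)
bids = {"booking_platform": (1.20, 0.80), "rival_chain": (1.50, 0.90)}
# rival_chain bid more but demanded 0.90 confidence, so booking_platform wins.
print(auction_intent(signal, bids))
```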
The study quotes the research team behind Cicero as warning that an “[AI] agent may learn to nudge its interlocutor to achieve a particular objective”.
The research refers to technology executives discussing how AI models will be able to predict a user’s intentions and actions. It cites Jensen Huang, chief executive of Nvidia, the largest AI chipmaker, who said last year that models will “figure out what is your intention, what is your desire, what are you trying to do, given the context, and present the information to you in the best possible way”.