

The threat of persuasive AI

Advertising is already everywhere. But we are unprepared for the advent of persuasive AI, according to Mark Esposito et al. Photo by Joe Yates on Unsplash


February 1, 2024

Persuasive AI—whether generative AI or other applications of AI technology—is coming, and we lack a comprehensive risk-management framework to cope with it. An ethical and regulatory framework is needed to protect our institutions, write Mark Esposito, Josh Entsminger, and Terence Tse.

What does it take to change a person’s mind? As generative artificial intelligence becomes more embedded in customer-facing systems—think of human-like phone calls or online chatbots—this ethical question demands broad attention.

The capacity to change minds through reasoned discourse is at the heart of democracy. Clear and effective communication forms the foundation of deliberation and persuasion, which are essential to resolving competing interests. But there is a dark side to persuasion: false motives, lies, and cognitive manipulation—malicious behavior that AI could facilitate.

In the not-so-distant future, generative AI could enable the creation of new user interfaces that can persuade on behalf of any person or entity with the means to establish such a system. Leveraging private knowledge bases, these specialized models would offer different truths that compete based on their ability to generate convincing responses for a target group—an AI for each ideology. A wave of AI-assisted social engineering would surely follow, with escalating competition making it easier and cheaper for bad actors to spread disinformation and perpetrate scams.

The emergence of generative AI has thus fueled a crisis of epistemic insecurity. The initial policy response has been to ensure that humans know that they are engaging with an AI. In June 2023, the European Commission urged large tech companies to start labeling text, video, and audio created or manipulated by AI tools, while the European Parliament is pushing for a similar rule in the forthcoming AI Act. This awareness, the argument goes, will prevent us from being misled by an artificial agent, no matter how convincing.

But alerting people to the presence of AI would not necessarily safeguard them against manipulation. As far back as the 1960s, the ELIZA chatbot experiment at MIT demonstrated that people can form emotional connections with, have empathy for, and attribute human thought processes to a computer program with anthropomorphic characteristics—in this case, natural speech patterns—despite being told that it is a non-human entity.

We tend to develop a strong emotional attachment to our beliefs, which then hinders our ability to assess contradictory evidence objectively. Moreover, we often seek information that supports, rather than challenges, our views. Our goal should be to engage in reflective persuasion, whereby we present arguments and carefully consider our beliefs and values to reach well-founded agreement or disagreement.

But, crucially, forming emotional connections with others can increase our susceptibility to manipulation, and we know that humans can form such connections even with chatbots that are not designed to elicit them. Chatbots built to connect emotionally with humans would create a new dynamic rooted in two long-standing problems of human discourse: asymmetrical risk and reciprocity.

Imagine that a tech company creates a persuasive chatbot. Such an agent would take essentially zero risk—emotional or physical—in attempting to convince others. As for reciprocity, there is very little chance that the chatbot doing the persuading could itself be persuaded. At most, an individual might get the chatbot to concede a point within their limited interaction, a concession that would then be internalized for training. Active persuasion of the chatbot—inducing a change in belief, not reaching momentary agreement—would thus be largely infeasible.

In short, we are woefully unprepared for the dissemination of persuasive AI systems. Many industry leaders, including OpenAI, the company behind ChatGPT, have raised awareness of the threat such systems could pose. But awareness does not translate into a comprehensive risk-management framework.

A society cannot be effectively inoculated against persuasive AI, as that would require making each person immune to such agents—an impossible task. Moreover, any attempt to control and label AI interfaces would simply lead individuals to transfer outputs to new domains, not unlike copying text produced by ChatGPT and pasting it into an email. System owners would therefore be responsible for tracking user activity and evaluating conversions.

But persuasive AI need not be generative in nature. A wide range of organizations and individuals have already bolstered their persuasive capabilities to achieve their objectives. Consider state actors’ use of computational propaganda, which involves manipulating information and public opinion to further national interests and agendas.

Meanwhile, the evolution of computational persuasion has provided the ad-tech industry with a lucrative business model. This burgeoning field not only demonstrates the power of persuasive technologies to shape consumer behavior, but also underscores the significant role they can play in driving sales and achieving commercial objectives.

What unites these diverse actors is a desire to enhance their persuasive capacities. This mirrors the ever-expanding landscape of technology-driven influence, with all its known and unknown social, political, and economic implications. As persuasion is automated, a comprehensive ethical and regulatory framework becomes imperative.

Aurélie Jean also contributed to this commentary.

Copyright: Project Syndicate, 2024.

About Mark Esposito: Mark Esposito, a policy associate at University College London, is an adjunct professor at Georgetown University and a professor at Hult International Business School.
About Josh Entsminger: Josh Entsminger is a PhD student in innovation and public policy at the UCL Institute for Innovation and Public Purpose.
About Terence Tse: Terence Tse, Co-Founder and Executive Director of Nexus FrontierTech, is a professor at Hult International Business School.
The views presented in this article are the authors’ own and do not necessarily represent the views of any other organization.