To protect trust in institutions, first define disinformation

May 30, 2024

Disinformation is a major challenge to our democratic institutions, and AI can turbocharge that threat. Combating disinformation is critical to protecting public trust in our institutions, but to do so we must first understand what we mean by disinformation, writes Stacey Rolland.

United States lawmakers are urgently focused on the rapid growth of artificial intelligence, recognizing that effective policy solutions to mitigate AI's risks will require bipartisan agreement. These efforts at bipartisanship coincide, and contrast, with a contentious election year, with Democratic or Republican control of the White House and Congress up for grabs in November. Public trust in the U.S. electoral system is more critical than ever, and policymakers are primarily concerned with the potential for “disinformation” to undermine that trust.

Senate Majority Leader Chuck Schumer (D–NY) recently said of disinformation: “If we’re not careful, AI has the potential to jaundice or even totally discredit our election systems. If deepfakes are everywhere and no one believes the results of the elections, woe is our democracy. This is so damn serious.” Senator Amy Klobuchar (D–MN) of the Senate Judiciary Committee described the bipartisan urgency of tackling disinformation: “This is a hair on fire moment…AI has the potential to turbocharge the spread of disinformation and deceive voters. Whether you are a Republican or a Democrat, no one wants to see these fake ads or robocalls.”

While disinformation is widely acknowledged as a significant and increasingly urgent threat, the policy challenge lies in its subjective nature. Identifying disinformation first requires a consensus on what constitutes “truth” in a contentious political era and an increasingly polarized world. Different stakeholders can interpret the term in very different ways, making it difficult to craft effective responses to threats against foundational public institutions.

Proposals under consideration in Congress, such as the Protect Elections from Deceptive AI Act, tie “disinformation” to “materially deceptive” AI-generated content. Other legislative proposals would mandate disclaimers on AI-generated public content, allowing viewers to judge the content’s trustworthiness and decide for themselves whether it falls within their own subjective definition of disinformation.

Although Republicans and Democrats generally concur that disinformation poses a threat, partisan opposition has emerged, with critics arguing that these legislative measures infringe upon First Amendment rights and stifle innovation.

Clear, well-crafted definitions are not a mere formality; they are the foundation of sound regulatory frameworks. As we navigate the complexities of emerging technologies, precise and broadly accepted regulatory definitions are essential. Only through careful, balanced legislation can we protect our institutions from the threats posed by AI-driven disinformation while upholding our democratic values.

About Stacey Rolland:
Stacey Rolland is a leading expert in emerging technology policy and strategy in Washington, D.C.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.