
a global affairs media network

www.diplomaticourier.com

Don't just use AI to fact-check against mis- and disinformation

Photo by Kevin Ku on Unsplash.

May 29, 2024

To fight mis/disinformation, we must move from fact-based corrections to targeting amorphous narratives with the help of AI, writes Tom Plant.

When incorporating AI into the fight against mis/disinformation, the straightforward approach is tempting: use AI for instant fact-checking. But AI isn’t ready for that task yet, and polarization runs so deep that many people disregard human fact-checkers, let alone trust a complex technology to make the calls instead.

Understanding where AI fits in the puzzle starts by recognizing that mis/disinformation is rarely a clean matter of true versus false. What often deceives people is the way information is framed to push a misleading conclusion, even if all of that information is true. These framing stories are called narratives.

A brilliant example comes from the ongoing hip hop feud between Kendrick Lamar and Drake. To win the battle of perceptions, each rapper has exposed compromising information about the other. Yet each fanbase flips everything to its side’s benefit: Lamar’s photo purportedly showing Drake’s Ozempic prescription was a win among Lamar’s fans, while Drake’s fans countered that the items were planted as an elaborate joke, a theory Drake himself later endorsed but which remains unconfirmed. Regardless of the items’ authenticity, the prevailing narrative heavily influenced which rapper was seen to be winning the feud.

This is how information misleads. Narratives spread before facts are confirmed, leaving sticky impressions that resist fact-checking (for those who even pay attention to corrections).

AI’s role in combating disinformation must take a narrative-driven approach. Such an approach could involve a suite of tools that together compose “narrative anticipation engines.” These engines could (1) identify dominant narratives, (2) forecast how they might evolve to frame future information and events, and (3) craft counter-messaging to debunk the anticipated narratives before they emerge.
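To make the three stages concrete, here is a minimal, purely illustrative sketch of such a pipeline. Every function, data shape, and rule here is a hypothetical stand-in (dominant narratives are approximated by hashtag frequency, and the "counter-messaging" is just a brief for human review), not a description of any real system.

```python
from collections import Counter

# Toy sketch of a "narrative anticipation engine" pipeline.
# Stage names mirror the three steps in the text; all logic is placeholder logic.

def identify_narratives(posts, top_n=2):
    """Stage 1: surface dominant narratives, approximated by most frequent hashtags."""
    tags = Counter(tag for post in posts for tag in post["tags"])
    return [tag for tag, _ in tags.most_common(top_n)]

def forecast_evolution(narrative, upcoming_events):
    """Stage 2: pair each dominant narrative with events it could be used to frame."""
    return [(narrative, event) for event in upcoming_events]

def draft_counter_message(narrative, event):
    """Stage 3: produce a counter-messaging brief for human review."""
    return (f"Before '{event}', pre-bunk the '{narrative}' framing: "
            f"explain what the narrative omits and offer an alternative reading.")

# Hypothetical social-media posts tagged with narrative markers.
posts = [
    {"tags": ["planted_evidence"]},
    {"tags": ["planted_evidence", "coverup"]},
    {"tags": ["coverup"]},
    {"tags": ["planted_evidence"]},
]
dominant = identify_narratives(posts, top_n=1)
briefs = [draft_counter_message(n, e)
          for n in dominant
          for _, e in forecast_evolution(n, ["album release"])]
```

A production system would replace each stage with real models (topic clustering, forecasting, generation), but the pipeline shape — detect, forecast, pre-bunk — is the point.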

Doing so would require various deployments of AI. Machine learning models can assess trends in discourse and sentiment, then integrate them into probabilistic models of event occurrence and narrative adaptation.
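As a toy illustration of that idea — not a real forecasting model — one could turn a sentiment time series into a trend feature and map it through a logistic curve to a probability that a narrative gains traction. The weights and series below are invented for demonstration.

```python
import math

def trend_slope(series):
    """Least-squares slope of a daily sentiment series."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

def traction_probability(series, weight=8.0, bias=0.0):
    """Logistic model: P(narrative spreads) from the sentiment trend.
    The weight and bias are arbitrary here; a real system would fit them."""
    return 1 / (1 + math.exp(-(weight * trend_slope(series) + bias)))

rising = [0.1, 0.2, 0.4, 0.5, 0.7]  # daily sentiment toward a narrative
flat = [0.3, 0.3, 0.3, 0.3, 0.3]
```

A rising series yields a high traction probability while a flat one stays at the 0.5 baseline — the kind of signal that could trigger the counter-messaging stage.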

Generative AI could then craft targeted counter-narratives that explain why the anticipated narrative is misleading, using strategies like proposing plausible alternative explanations or recalibrating emotional responses, among others.
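One way this might look in practice is a prompt template that encodes the counter-narrative strategies named above and hands the draft to a generative model. The template, strategy names, and wording below are illustrative assumptions; no real model API is invoked.

```python
# Hypothetical strategies mirroring the text: alternative explanations
# and emotional recalibration.
STRATEGIES = {
    "alternative_explanation": (
        "Offer a plausible alternative explanation for the same facts."
    ),
    "emotional_recalibration": (
        "Acknowledge the emotion the narrative exploits, then lower its temperature."
    ),
}

def build_counter_prompt(narrative, strategy):
    """Assemble a prompt for a generative model to draft pre-emptive
    counter-messaging (output would go to human review, not straight to publication)."""
    instruction = STRATEGIES[strategy]
    return (
        "You are drafting pre-emptive counter-messaging for human review.\n"
        f"Anticipated misleading narrative: {narrative}\n"
        f"Strategy: {instruction}\n"
        "Explain why the framing is misleading without attacking the audience."
    )

prompt = build_counter_prompt(
    "the evidence was planted as a joke", "alternative_explanation"
)
```

Keeping a human in the loop matters here: automated counter-messaging that misfires would itself become fodder for new narratives.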

This approach moves beyond reactive fact-checking to preemptive narrative shaping, recognizing that effective counter-disinformation must attack narratives themselves, not just the information within them.

Moving from fact-based corrections to targeting amorphous narratives is complex. But doing so is not only necessary for tackling disinformation; it is now possible with new AI technologies.

About Thomas Plant:
Thomas Plant is an analyst at Valens Global and supports the organization’s work on domestic extremism. He is also an incoming Fulbright research scholar to Estonia and the co-founder of William & Mary’s DisinfoLab, the nation’s first undergraduate disinformation research lab.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.