
Defining Misinformation Is Critical to Combatting It


August 3, 2021

Online misinformation is disrupting stock markets, fueling political distrust, and suppressing vaccine uptake. Yet we cannot effectively combat it until we reach a common understanding of what misinformation is.

Online misinformation continues to wipe billions off the stock market, fuel political distrust and insurrection, and, as President Biden noted in his recent rebuke of social media companies, suppress vaccine uptake. Yet until we share a common understanding of what misinformation is, we will not be able to develop targeted countermeasures.

There are, however, three significant barriers to finding a common definition of misinformation and related terms.

The first issue is the difference between mis- and disinformation. According to Merriam-Webster, misinformation is “incorrect or misleading information,” and is distinct from disinformation, a term defined as “false information deliberately and often covertly spread…to influence public opinion or obscure the truth.” Both mis- and disinformation deceive audiences, but misinformation is agnostic as to motive. 

There is, for example, a real, albeit extremely low, risk of developing a blood clot after certain COVID-19 vaccines. Sharing this information on social media isn’t wrong in itself, but embellishing or exploiting it to discourage vaccination, as many anti-vaccine groups have done, is clearly damaging and could be seen as misinformation.


In practice, however, distinguishing between these terms is more difficult. Misinformation can be co-opted and shared on social media by organized groups with a disinformation agenda, and vice versa. Establishing the motive or intention behind false information is, furthermore, not always easy. And even when motive can be established, the current definitions of these terms don’t clearly distinguish between highly organized campaigns and individuals who share incorrect information online. Should we, for example, hold states and individuals to the same standard of accountability when they intentionally spread falsehoods about election results?

Developing more nuanced criteria for what qualifies as mis- or disinformation, as well as for buzzwords like “fake news,” would help prevent these terms from being used interchangeably, or as catchalls for very different types of activity.


The second challenge relates to the information environment itself. Some researchers characterize misinformation based on whether it contradicts expert consensus at the time. Yet for many issues, particularly those that are sociopolitical in nature, there is no expert consensus. Even in public health crises like the COVID-19 pandemic, scientific inquiry evolves, and what constitutes the “best evidence” is susceptible to change.

Early in the pandemic, for example, the former US Surgeon General, Jerome Adams, posted a tweet discouraging mask-wearing on the grounds that “They are NOT effective in preventing general public from catching #Coronavirus…” Adams, who later reversed his position and deleted the tweet, defended himself on the grounds that he was following the science at the time. Should the now-deleted tweet by Adams be classified as misinformation? Or did it become misinformation only once a strong international public health consensus had emerged? Different people may judge it erroneous, misleading, or misinformation, depending on how they define the term.

The third barrier comes down to whether we incorporate the impact, or harm done, into our definition of misinformation. In the UK, for example, the government’s Online Media Literacy Strategy defines misinformation as “the inadvertent sharing of false information.” This differs from the definition in the EU’s European Democracy Action Plan (EDAP), which adds that “the effects can still be harmful” even when false information is shared in good faith.

Some misinformation may not cause harm — and sometimes the truth itself can be damaging. For example, an online post about the relationship between blood clots and vaccines is not necessarily misinformation, but insofar as it prevents vaccine uptake, it may lead to a greater number of deaths.


While defining misinformation may seem like an academic problem, the lack of clear criteria carries real-world consequences. The assault on truth presents a major obstacle to public education on some of the most important issues of our age, including public health and climate change. These issues do not stop at borders and cannot be addressed as if they did.

Without consensus, we will not have the tailored policy responses and interventions required to effectively prioritize or counter the causes of mis- and disinformation. An international summit, bringing together interdisciplinary expertise across policymaking, technology, and academia, could be a first step towards establishing conceptual consensus and corresponding policy responses.

Until we come to a consensus on this increasingly important issue, people with harmful intent will continue to exploit our weaknesses, leading to further public health crises and delays in our attempts to control global issues like climate change. To address the enormous damage mis- and disinformation will continue to cause, we need to learn, together, how to identify them.

About Daniella Lebor:
Daniella Lebor is a director at APCO Worldwide.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.