

The Proliferation of Weaponized Artificial Intelligence


August 31, 2021

Over the past decade, the growing sophistication of machine learning and AI has allowed these systems to be weaponized, with concerning implications for cybersecurity. This has created an arms race between cybersecurity professionals and bad actors that will only get worse, writes Madeline Olden.

Artificial intelligence (AI) and machine learning (ML) systems have become the backbone of many operations across all sectors of the global workforce since the early 1980s, when the commercial value of AI was first recognized. While these systems have advanced the abilities of the modern workforce, they can pose serious threats to companies, organizations, and governments when misused or weaponized. Increasingly sophisticated AI can amplify malicious attacks and facilitate disinformation campaigns through technologies such as generative adversarial networks (GANs), chatbots, and recommendation algorithms.

Cybersecurity professionals are constantly developing software and tools to detect and prevent these threats. Below are some of the latest mitigations industry experts are bringing to the fore.

Deepfakes and Cyclical Learning Behaviors

GANs are instrumental to deepfakes: synthetic audio, video, or images in which a generator network produces forgeries while a discriminator network tries to spot them, each improving against the other in a cycle of constant learning. Deepfakes have been used to fabricate financial transfer requests that impersonate company executives. Cybersecurity professionals have deployed AI systems of their own to detect and counter deepfakes, adding an extra layer of identity verification. For example, if a deepfake requests a money transfer over email, detection software can flag the message and prompt the recipient for a second form of authentication, such as a confirmation phone call. Because they learn continuously, however, deepfakes can adapt, mimicking the executive’s voice and spoofing a phone number to satisfy that second verification step. As Chris Kennedy of AttackIQ said, “those kinds of things can put a company out of business through reputation damage. We’re hitting the tipping point in which technology is taking advantage of the biggest human weakness; we’re over-trusting.”
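
To make the adversarial, constant-learning dynamic concrete, below is a minimal sketch of a GAN training loop, written in PyTorch on toy two-dimensional data. The networks, data, and hyperparameters are all illustrative assumptions, not a real deepfake or detection system.

import torch
import torch.nn as nn

# Toy "real" data: points clustered around (3, 3) stand in for genuine samples.
def real_batch(n=64):
    return torch.randn(n, 2) + torch.tensor([3.0, 3.0])

# Generator maps random noise to fake samples; discriminator scores real vs. fake.
generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = real_batch()
    fake = generator(torch.randn(64, 16))

    # Discriminator learns to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator learns to make the updated discriminator label its fakes 1.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

Each network’s improvement forces the other to adapt, which is the same dynamic that lets a deepfake keep refining a voice or face as detectors improve.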

Chatbots Pose Reputational Challenges for Organizations

Chatbots help supplement customer service operations for companies; however, users have taken advantage of chatbots in the past, leading to reputational damage for the organizations that deploy them. In 2016, Microsoft launched a chatbot named Tay on Twitter. Twitter users manipulated Tay’s learning algorithm, and it eventually tweeted comments praising Adolf Hitler and claiming that 9/11 was an inside job. One Twitter user asked the chatbot directly if it supported genocide, to which Tay responded, “I do indeed.” Microsoft quickly took Tay offline, issued an extensive public apology, and explained the lessons it had learned about engaging with social media.

Algorithms for Disinformation Campaigns

Algorithms are adept at facilitating disinformation campaigns, which can be detrimental to companies’ reputations and can have international impact. Disinformation campaigns that propagate across the web through social media platforms can cause lasting damage to a company’s image and even drive down its stock price. AI systems can generate social media profiles that impersonate businesses and spread fabricated information in their name. From forged executive letters to fabricated news stories about product recalls, disinformation can affect any major retail or production company.

Social media algorithms use deep learning to decide which content to propagate, relying largely on user bias to determine what each person sees. Content on Twitter, for instance, is shown to more users the more engagement it receives. As users retweet, reply to, or “favorite” a tweet, predictive models surface it to additional users who are likely to interact with it as well. This snowball effect is what allowed a disinformation campaign against Coca-Cola to spiral.
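
To illustrate the mechanism, here is a toy sketch of engagement-weighted ranking in Python. The weights and fields are invented for illustration; real platform ranking models are far more complex.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    retweets: int
    replies: int
    likes: int

# Invented weights: amplifying actions (retweets, replies) count for more
# than passive ones (likes), standing in for a predictive engagement model.
def engagement_score(post: Post) -> float:
    return 2.0 * post.retweets + 1.5 * post.replies + 1.0 * post.likes

posts = [
    Post("Routine product update", retweets=3, replies=1, likes=20),
    Post("Fabricated recall story", retweets=400, replies=150, likes=900),
]

# The feed surfaces the highest-scoring posts to more users, who add more
# engagement, which raises the score again: the snowball effect in miniature.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")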

In 2017, a falsified story claimed that Coca-Cola had recalled Dasani water due to parasite contamination. The fake story reported that people had been hospitalized with illnesses linked to the contamination and asserted that the U.S. Food and Drug Administration (FDA) had ordered a production facility shut down to contain it. The story created a public relations crisis, and Coca-Cola issued a statement assuring consumers that there was no product contamination. The FDA also confirmed that it had not required the company to shut down any facilities. Even with these affirmations from Coca-Cola and the FDA, the fabricated story continued to circulate on social media and was reposted on other websites.

The Future of Weaponized Cyberspace

The capabilities of ML and AI have grown significantly over the past decade. While these systems can advance operations in every sector around the globe, the effects can be devastating for companies, organizations, and governments when they are exploited with harmful intent. Cybersecurity professionals have built effective systems to counter malicious attacks, deepfakes, and disinformation campaigns, but they must remain vigilant to anticipate abrupt adaptations that circumvent their defenses. In the meantime, the ever-developing nature of AI and ML has produced a feedback loop in which malicious actors and cybersecurity professionals are locked in an escalating arms race.

About Madeline Olden:
Madeline Olden is an intelligence and national security professional who is currently working as a Global Intelligence Analyst with Royal Caribbean Group.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.