

AI Weapons: International Regulations Are Needed Before It's Too Late

Staff Sgt. Trung, 432nd Aircraft Maintenance Squadron weapons load crew chief, reports a munitions load to the munitions operations center at Creech Air Force Base, Nev., May 12, 2014. The 432nd Maintenance Group ensures that Airmen, MQ-1 Predator and MQ-9 Reaper aircraft, ground control stations, Predator Primary Satellite Links, and a globally-integrated communications network are fully capable to support aircrew training, combat operations, operational test and evaluation, and natural disaster support. (U.S. Air Force photo by Senior Master Sgt. C.R./Released)
October 16, 2017

Self-targeting guns, autonomous fighter jets, and combat drones – the pairing of lethal systems with sophisticated artificial intelligence (AI) is poised to radically transform the battlefield, as well as the nature of war. Industry leaders and AI experts have voiced legitimate concerns about the proliferation of these weapons and the legal uncertainty surrounding them. International powers need to start regulating AI warfare now, before it begins to dominate the battlefield. The consequences of lethal AI's unrestrained use, in the absence of rules and norms governing it, would be highly destabilizing.

AI has, to varying degrees, existed in weapons systems for many years. The United States’ armed drones, a hallmark of the “War on Terror,” are a familiar example. A key characteristic of these systems, however, is their limited autonomy: humans remain deeply involved in the decision-making process, providing guidance on targeting and authorizing attacks. The problem begins when AI weapons systems are developed to operate with greater autonomy, making decisions independently. AI this advanced is still some way off, but the issues it could cause are already discernible. Given a mission and let loose, autonomous weapons would be suited to tasks such as assassinations, subduing populations or particular ethnic groups, and destabilizing nations.

Unlike nuclear, biological, or chemical weapons, AI systems are simple to store and maintain, and they do not require costly materials to produce. As with cyber weapons, the advanced code and computing hardware that go into lethal AI are readily available even to actors with limited economic means. These weapons could therefore be cheaply produced en masse, becoming ubiquitous among major military powers. Fearful of being militarily outmatched, states lacking these systems will likely endeavor to acquire their own – triggering an arms race. Judging from past examples of proliferation, there is legitimate concern that it may only be a matter of time before such weapons fall into the hands of actors with destabilizing aims, such as dictators, warlords, or terrorists. The consequences of their misuse could be devastating.

Equally troubling, the lack of clarity in international law regarding autonomous weapons creates a challenging environment for human rights and wartime accountability. Humanitarian law governing conduct in war is written on the assumption that human beings, not machines, make decisions. In the case of semi-autonomous weapons, such as armed drones, legal responsibility rests with the humans who sign off on each attack. But what about lethal machines acting independently? At present, no clear path or precedent exists to hold accountable those who use AI weapons for destabilizing or illegal purposes.

To resolve these legal uncertainties and restrain the use of lethal AI, tech experts have called for a treaty banning all AI weapons. A distinct challenge, however, will be arriving at an agreeable standard of what is and is not an autonomous weapon. The United States, China, Russia, and other powers have already made use of semi-autonomous systems that operate with varying degrees of independence. It would be exceedingly difficult to secure buy-in on an agreement banning all forms of AI weapon systems when states have already invested in them and incorporated them into their militaries.
Instead, as some legal experts have suggested, a series of agreements addressing particular aspects of AI weapon use would be more palatable to the countries leading their development, and more effective as tools of governance. Even if not every country signs on to each one, such agreements would create norms of unacceptable use and a “moral high ground” that would moderate the use of AI in war – akin to the treaty banning anti-personnel mines, which the United States has not signed but generally adheres to.

These agreements could include a treaty on the liability of AI weapons, affirming that actors are responsible for their drones just as they are for their military personnel. A treaty outlining acceptable and unacceptable targets for drones in wartime would be a step toward limiting their use against civilian populations and infrastructure. And, as with nuclear test treaties, an agreement on testing and operational standards for AI weapons would help ensure safe and reliable performance and limit the potential for accidents arising from improper or faulty programming.

These agreements would be just a start, but they would begin to address issues on which the international community can come together to find mutually acceptable solutions. It is far better to do so now, in the early stages of AI weapons’ development, than to deal with these issues after they arise – or, worse yet, not to deal with them at all.

About the author: Cody Knipfer is the Technology & Cybersecurity Fellow at Young Professionals in Foreign Policy (YPFP). He has experience working with space and aerospace trade associations, as well as a space policy consultancy. Cody expects to receive his MA in International Science and Technology Policy in 2018 from George Washington University's Space Policy Institute.

The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.