
a global affairs media network

www.diplomaticourier.com

AI’s Pugwash Moment

Image by Kohji Asakawa from Pixabay

July 31, 2023

Efforts are underway at the UN to develop governance institutions capable of ensuring the safe development and use of AI. But harmonizing regulation will require a broader, looser, and maximally inclusive structure, write Anne-Marie Slaughter and Fadi Chehadé.

Almost exactly 66 years ago, 22 preeminent scientists from ten countries, including the United States and the Soviet Union, gathered in Pugwash, Nova Scotia, to identify the dangers that nuclear weapons posed and devise peaceful ways of resolving conflicts among countries. With that, the international organization known as the Pugwash Conferences on Science and World Affairs, or the Pugwash Movement, was born. Though the world is hardly free of nuclear weapons, the Movement’s efforts to advance disarmament were powerful enough to win it the Nobel Peace Prize in 1995.

Today, the world needs a new Pugwash Movement, this time focused on artificial intelligence. Unlike nuclear weapons, AI holds as much promise as peril, and its destructive capacity is still more theoretical than real. Still, both technologies pose existential risks to humanity. Leading scientists, technologists, philosophers, ethicists, and humanitarians from every continent must therefore come together to secure broad agreement on a framework for governing AI that can win support at the local, national, and global levels.

Unlike the original Pugwash Movement, the AI version would not have to devise a framework from scratch. Scores of initiatives to govern and guide AI development and applications are already underway. Examples include the Blueprint for an AI Bill of Rights in the United States, the Ethics Guidelines for Trustworthy AI in the European Union, the OECD’s AI Principles, and UNESCO’s Recommendation on the Ethics of Artificial Intelligence.

Instead, the new Pugwash Movement would focus largely on connecting relevant actors, aligning on necessary measures, and ensuring that they are implemented broadly. Institutions will be vital to this effort. But what kind of institutions are needed and can realistically be established or empowered to meet the AI challenge quickly?

United Nations Secretary-General António Guterres has called for “networked multilateralism,” in which the UN, “international financial institutions, regional organizations, trading blocs, and others”—including many nongovernmental entities—“work together more closely and more effectively.” But to be effective, such multi-stakeholder networks would have to be designed to serve specific functions.

A paper released this month by a group of leading AI scholars and experts from universities and tech companies identifies four such functions with regard to AI: spreading beneficial technology, harmonizing regulation, ensuring safe development and use, and managing geopolitical risks.

Understandably, many consider “ensuring safe development and use” to be the top priority. So, efforts are underway to develop an institution that will identify and monitor actual and potential harms arising from AI applications, much as the Intergovernmental Panel on Climate Change monitors the existential risk of climate change. In fact, the Global AI Observatory, recently proposed by the Artificial Intelligence and Equality Initiative, would be explicitly modeled on the IPCC, which is essentially a network of networks that works very well for accumulating knowledge from many different sources.

Cumulation networks—from the U.S. Cybersecurity and Infrastructure Security Agency’s Incident Reporting System to Wikipedia—have a central authoritative hub that can pull together information and analysis from many different types of institutions, some of which are already networked. But such a hub cannot take swift action based on the information it gathers. To govern AI, a hierarchical multilateral institution with the power to make and implement decisions—such as a functioning UN Security Council—is still needed.

As for the function of spreading beneficial technology (which is just as important for most people as preventing harm), a combined innovation-collaboration network is likely to work best. Innovation networks typically include many far-flung nodes, to ensure access to as many sources of new ideas and practices as possible, and a limited number of hubs focused on transforming ideas into action, collaborating on best practices, and preventing exploitation. The hubs could be centered in specific regions or perhaps tied to specific UN Sustainable Development Goals.

Harmonizing regulation—including experimenting with different types of regulation—will require a broader, looser, and maximally inclusive structure. AI technologies are simply too broad and too fast-moving for one or even several centralized regulatory authorities to have any chance of channeling and guiding them alone. Instead, we propose a widely distributed multi-hub network that supports what we call “digital co-governance.”

Our model is based on the distributed architecture and co-governance system that are widely credited with maintaining the stability and resilience of the internet. Decades ago, technology researchers, supported by the U.S. government and early internet businesses, created several institutions in a loosely coordinated constellation, each with its own functional responsibilities.

The Internet Society promotes an open, globally connected internet. The World Wide Web Consortium develops web standards. The Internet Governance Forum brings stakeholders together to discuss relevant policy issues. And the Internet Corporation for Assigned Names and Numbers (ICANN) coordinates and safeguards the internet’s unique identifiers.

The key to these institutions’ success is that they are operated through distributed peer-to-peer, self-governing networks that bring together a wide range of stakeholders to co-design norms, rules, and implementation guidelines. ICANN, for example, has dozens of self-organized networks dealing with the Domain Name System—crucial to enable users to navigate the internet—and coordinates other self-governing networks, such as the five regional institutions that manage the allocation of IP addresses for the world.

These institutions are capable of handling a wide range of policy questions, from the technical to the political. When Russia invaded Ukraine in 2022, the Ukrainian authorities pressured ICANN to remove .ru from the Domain Name System’s master directory, known as the root zone, which is managed by 12 institutions across four countries, coordinated but not controlled by ICANN. Ultimately, the Internet Assigned Numbers Authority declined the request.

A Pugwash-like conference on AI would have no shortage of proposals to consider, or governmental, academic, corporate, and civic partners to engage. The original Pugwash participants were responding to a call from intellectual giants like the philosopher Bertrand Russell and the physicist Albert Einstein. Who will step up today?

Copyright: Project Syndicate, 2023.

About Anne-Marie Slaughter:
Anne-Marie Slaughter, a former director of policy planning in the U.S. State Department, is CEO of the think tank New America, Professor Emerita of Politics and International Affairs at Princeton University, and the author of Unfinished Business: Women Men Work Family.
About Fadi Chehadé:
Fadi Chehadé, a former president and CEO of ICANN (2012-16), is a member of the UN Secretary-General’s High-Level Panel on Digital Cooperation.
The views presented in this article are the authors’ own and do not necessarily represent the views of any other organization.