Presenter: Nicolas Economou, Chief Executive, H5

Moderator: Ana C. Rold, Publisher, Diplomatic Courier

As the Fourth Industrial Revolution continues to see technology advance at unprecedented rates, it can be argued that artificial intelligence is moving at the fastest pace—and perhaps with the most promise. Indeed, while artificial intelligence seems like a technology of the distant future, it is in fact already disrupting every facet of life, from law to warfare to the very concept of what it means to be human. At the recent IMF/World Bank Spring Meetings—specifically, the IMF’s New Economy Forum—Nicolas Economou, Chairman and Chief Executive of H5, argued that the rapid transformations artificial intelligence is beginning to create make it critical that we start discussing the governance of AI and the framework by which societies should delegate decisions to machines, so as to mitigate risks as we move forward—lest we see the beginnings of a dystopian future. Moderated by Diplomatic Courier’s own Ana C. Rold, here are the key takeaways.

There is a plethora of definitions of artificial intelligence. Artificial intelligence today is broadly defined as big data-driven, massively computerized, machine learning-centric algorithmic systems. However, such definitions fail to account for unexpected sources of innovation and remain inaccessible to the ordinary citizen. It may therefore be preferable to define artificial intelligence in simpler terms, as the science and engineering of intelligent systems. Despite these working definitions, however, it is unlikely that we will arrive at a universally agreed definition of AI anytime soon. After all, we still don’t have a settled definition of human intelligence after 3,000 years of scientific and philosophical debate.
“Artificial intelligence is many things to many people—but artificial intelligence is not artificial humanity.” —Nicolas Economou

The more important question is what artificial intelligence is NOT. Rather than focusing on the complexity of what AI is, it may be more useful to keep in mind what it is not. AI does not have empathy—it cannot mourn a deceased family member, for example. AI cannot feel nostalgia about its childhood, dream about its future, or feel any of the joys and sorrows that are so central to the human experience. It is therefore important to remember that while artificial intelligence may be many things, it is not artificial humanity—a crucial consideration when thinking about how to govern AI.

There are different types of artificial intelligence. While the AI of today can be extremely well suited to certain discrete tasks—such as playing chess—modern AI’s inability to intuitively solve a wide range of problems makes it unlikely that artificial intelligence will reach a level of cognition similar to human intelligence anytime soon. Artificial general intelligence (AGI) will most likely not be realized in the foreseeable future, but narrower forms of AI will continue to evolve at a rapid pace, with progressively less human supervision.

The Fourth Industrial Revolution differs in many ways from past industrial revolutions. While the Fourth Industrial Revolution is similar to past revolutions in its system-wide impact and universal disruption of power, it also presents new challenges:
  • There is a difference in magnitude and velocity. During the First Industrial Revolution, there were approximately one billion people alive; by the 1970s, that number had increased to around three billion. Today there are seven billion people worldwide, all of whom artificial intelligence is set to affect in immense ways. More importantly, the speed at which this transformation is happening is unprecedented, with McKinsey predicting that by 2030, 30% of the workforce in the developed world will need to adjust to new or different types of work as a result of the Fourth Industrial Revolution.
  • The tolerance for violence is different. During the First Industrial Revolution, the distress and disruption caused by dramatic changes in manufacturing brought about a great deal of suffering—most of which society was able to tolerate. Today, societies’ tolerance for violence is far lower, which raises considerable public policy and public order questions about how citizens may respond to radical change.
The Fourth Industrial Revolution will force us to reevaluate our very nature. While prior industrial revolutions forced us to reexamine our relationship to work and how society is organized, the advent of artificial intelligence will instead confront us with our very conception of what it means to be human.

The risks and opportunities associated with artificial intelligence are intertwined. While there is much fear over the inherent risks of artificial intelligence, these risks mirror groundbreaking opportunities for the future of humanity. It is this high-stakes intertwining of risks and opportunities that makes AI governance so important.

Already, AI is being used in the court system. In a recent case in Wisconsin, for example, a judge relied in part on a black-box algorithm to determine the length of the sentence for a person who committed a crime. The defendant was denied the right to examine the algorithm. An AI algorithm was thus used to determine the length of a sentence without any review of its decision-making pathways, any scientific evidence of its effectiveness, or any evidence that anyone in the courtroom was competent to understand it. This example illustrates the risks of AI adoption in the absence of norms. But one can also envision the sound deployment of AI in the legal system to facilitate access to justice and to produce more consistent system-wide outcomes.

“While the pace of change is rapid, we still have time to establish global, but culturally adaptive, norms for the beneficial governance of AI.” —Nicolas Economou

There are opportunities and risks in “social” artificial intelligence. Studies in psychology have revealed that humans interact with human-like machines in much the same way they interact with humans, which has numerous potential benefits for education and for the care of children, the elderly, and the socially isolated.
However, this same mechanism carries risks as well, and we should ask what kinds of values, perspectives, and political alignments such artificial intelligence would carry—affecting not only the people it interacts with but also our democratic institutions.

We need to begin discussing how we should approach the governance of artificial intelligence. An effective, adaptable, and legitimate framework for the governance of AI is indispensable. The goal of governing artificial intelligence, explained Economou, is precisely to prevent the possibility that AI governs us. AI must remain an instrument in the hands of humans, for the benefit of humans.

In order to build a framework, we need to address important questions first. To what extent should societies delegate to machines decisions that affect people? What central values should artificial intelligence be advancing? What principles, ethical values, public policy recommendations, technical standards, and codes of practice should a framework that governs AI entail? And what methodology should be used to create this framework?

Creating a consensus AI governance framework is challenging, but our experience governing human intelligence can help. AI raises some entirely new ethical challenges, in particular the surrender of human agency to non-ethical agents. Even so, in developing a governance framework for AI, we can learn a great deal from three thousand years of experience in governing human intelligence. Many of the principles, laws, norms, regulations, codes of practice, and even international agreements we have applied to “HI” can translate to the governance of AI.

There is a lack of international cooperation. While there is tremendously good research surrounding AI at both the national and international levels, international cooperation on the governance of AI is insufficient.
Some emerging endeavors in this respect are laudable, including the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the Dubai Global Governance of AI Roundtable, President Macron’s proposal for an “IPCC for AI,” and the emergence of international think tanks, such as The Future Society, that are exclusively focused on addressing the governance of AI.

About Winona Roylance:
Winona Roylance is Diplomatic Courier's Senior Editor and Writer.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.


www.diplomaticourier.com

Governing the Ascent of Artificial Intelligence

AI’s unrelenting ascent produces extraordinary innovation, systemic changes, hopes and fears in domains ranging from healthcare to warfare, and from the labor market to the administration of all institutions of society. Increasingly, AI affects the rights, quality of life, freedom of opportunity, agency and dignity of citizens. Tomorrow, it will challenge our conception of what it means to be human. How should policy makers and ordinary citizens think about the impact of artificial intelligence in the midst of great uncertainty? What unique ethical and societal challenges arise from AI’s ascent? What governance frameworks are available to the international community?
May 1, 2018
