

The Future of AI Governance

Image: World Communication Montage
October 9, 2018

Over the course of seven months, The Future Society collected data concerning the governance of AI from thousands of participants and contributors around the world. This work culminated in the recently launched 2018 report “A Global Civic Debate on Governing the Rise of Artificial Intelligence”. The report provides insights into current concerns about AI, as well as concrete steps that can be taken to ensure a secure future as the technology continues to evolve. There is a wealth of information, opinion, and ideas surrounding AI, ranging from frenzied media reports about self-driving vehicles to science fiction movies and television shows about robots taking over. It is easy to become lost in these conversations, and difficult to know which voices to follow. The Future Society’s report cuts through the many shifting stories about AI by distilling seven key insights on governing machine learning.

The Evolving Notion of AI

There is currently no consistent or straightforward definition of AI, and this needs to change. Because AI technology has evolved and expanded so quickly, it has been challenging to pin down a definition that is clear and consistent. An unambiguous definition is necessary to further progress, impose regulations, and mitigate risks. The Future Society offers a working definition that describes AI as “big data driven, machine learning algorithm-centric, complex socio-technical systems powered by supercomputing”. Although complex, an encompassing definition like this could ensure that all parties are on the same page when discussing and making decisions about AI.

So far, AI has mostly been developed and conceived as an imitation of humans. Anthropomorphized robots and smartphones that speak to us like people perpetuate the notion that AI should mimic human qualities. But what if we imagined AI beyond human imitation? How can AI complement human abilities and also act in ways that humans cannot? For example, AI can analyze many large, complex data sets simultaneously, something the human brain cannot do. As these technologies evolve, our conception of and relationship with AI will also shift. In the coming years we will see an ongoing process of transforming social norms and values alongside evolving technologies. As our notions of this technology change, it will become increasingly important to create a universal definition and criteria for categorizing AI.

Diverging Expectations of AI: A Call for Trust in Technology

Trust-building will be necessary for AI to succeed. People tend to have visceral reactions to AI, as the technology raises ethical issues surrounding labor replacement, personal data collection, and more. But when it comes down to it, AI will be impossible to govern without trust. One suggestion for building trust is a global ethics committee that would serve as a “watchdog” to ensure companies follow ethical procedures. Such a committee would require the cooperation of governments and corporations worldwide, but it would be a necessary step toward managing AI developments and allowing the public to trust new technologies.

Applying Blockchain for Ethical AI and Governance

As blockchain technology has evolved alongside AI, we are beginning to recognize that the two could support one another. A blockchain serves as a “digital ledger” that stores data and records transactions. The technology is decentralized and highly secure by design, features that could reduce certain AI risks such as bias in data collection and insecure data transfer. The transactional nature of blockchain could also incentivize people to share or exchange personal data safely. Developing machine learning systems requires a substantial amount of personal data, and people are rightly wary of sharing it. Blockchain offers a way to overcome this obstacle and allow data to be transmitted securely.
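To make the “digital ledger” idea concrete, here is a minimal sketch in Python of an append-only, hash-linked ledger. It illustrates the general technique only, not the report’s proposal or any production blockchain; the class name and record fields (for example, a data-sharing consent entry) are hypothetical.

import hashlib
import json
import time

class Ledger:
    """A minimal append-only ledger: each block stores the hash of the
    previous block, so tampering with any earlier block breaks the chain."""

    def __init__(self):
        # The "genesis" block anchors the chain.
        self.blocks = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
        self.blocks[0]["hash"] = self._hash(self.blocks[0])

    @staticmethod
    def _hash(block):
        # Hash the block's contents, excluding its own hash field.
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()

    def append(self, data):
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "data": data,                          # e.g., a record of data-sharing consent
            "prev_hash": self.blocks[-1]["hash"],  # link to the previous block
        }
        block["hash"] = self._hash(block)
        self.blocks.append(block)

    def verify(self):
        # Recompute every hash and check every back-link.
        for prev, cur in zip(self.blocks, self.blocks[1:]):
            if cur["prev_hash"] != prev["hash"] or cur["hash"] != self._hash(cur):
                return False
        return True

ledger = Ledger()
ledger.append({"user": "alice", "consent": "share anonymized health data"})
print(ledger.verify())  # True; editing any earlier block makes this False

Note that this sketch shows only the tamper-evident hash chain; a real blockchain adds the decentralization the article mentions through a consensus protocol run across many independent nodes.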
New Hopes and Utopian Futures

Participants in the study expressed optimism that AI could be used to tackle large social issues such as poverty, climate change, and disease. AI has the capacity to provide goods and services, including legal and medical services, at low cost, potentially increasing access for marginalized communities. Some argue that this could help society become more cooperative and egalitarian. The idea that AI could automate services and replace human jobs has been a major point of contention and fear throughout the debate. The counterargument is that AI will free resources for humans to pursue hobbies, creative endeavors, and scientific exploration.

Smart Governance Is Key to Mitigating Threats and Challenges

AI will not reach its full potential unless governments collaborate rather than compete with one another. Competition would create a “race to the bottom”, in which standards and ethical procedures fall short in the rush to get products to market. This scenario points to the growing need for standardized procedures that hold companies accountable. Another challenge is bias: as AI is used to collect, store, and share data, that data needs to be representative and as unbiased as possible. This simply will not happen without strategic and careful governance.

New Threats, Fears, and Risks Arise from Granting AI Algorithms Larger Roles in Society

Since the idea of AI was first conceived, people have feared a world where robots outsmart humans and take over humanity. While many of these fears are exaggerated by science fiction, there is merit to the concern that AI could gain too much power. People justifiably fear that as AI is implemented in healthcare, infrastructure, and the judicial system, it will erode human autonomy. The main way to prevent this is to hold people, rather than AI, accountable, and to avoid granting AI authority over major roles in society. For example, if AI is used for medical diagnostics, a team of physicians could use the information collected by the AI to reach their own conclusions and plan treatment. In a court of law, AI could expedite investigations, but humans would remain responsible for legal decisions. The bottom line is that technology should complement, rather than replace, human responsibility.
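The pattern just described, in which an algorithm informs but a person decides, is often called “human in the loop”. Below is a minimal sketch in Python of what that hand-off could look like for the diagnostics example; the function names, fields, and risk score are hypothetical illustrations, not taken from the report.

from dataclasses import dataclass

@dataclass
class Suggestion:
    patient_id: str
    condition: str
    confidence: float  # model's estimated probability, 0.0 to 1.0

def model_screen(patient_record: dict) -> Suggestion:
    # Stand-in for a real diagnostic model: here we simply surface a
    # hypothetical risk score already present in the record.
    risk = patient_record.get("risk_score", 0.0)
    return Suggestion(patient_record["id"], "condition X", risk)

def route_case(suggestion: Suggestion, physician_review) -> str:
    """The model never issues a diagnosis on its own. Every suggestion,
    confident or not, goes to a physician, who makes the final call."""
    return physician_review(suggestion)  # human decision point

# Usage: the physician_review callback is where human judgment enters.
decision = route_case(
    model_screen({"id": "p-001", "risk_score": 0.87}),
    physician_review=lambda s: f"Review of {s.patient_id}: order confirmatory test",
)
print(decision)

The design choice is that accountability stays with the reviewer: the model’s output is an input to a human decision, never the decision itself.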
Evolving Identity and Roles for Humans

What would the world look like without a traditional work week? Would people lead productive, fulfilling lives, or become idle and lawless? These questions arose as participants imagined a world where AI automates jobs and optimizes tasks to the extent that the entire workforce is transformed. If AI made goods and services cheaper and more accessible, people eventually might not have to work to meet their basic needs. A traditional labor force replaced by AI poses philosophical questions about the role of humans in society. Such a revolution would also call for restructuring the way our cities are built and our education systems are organized. Our lives are centered on working to meet our needs, and it is uncertain how society would change if that were no longer a factor. Answering these questions calls for anthropologists, behavioral scientists, and philosophers to come together.

Conversations like those facilitated in this report are a start toward governing AI, and they need to continue at both the local and the global level. Currently, the United States and China lead AI innovation and implementation, but if these governments treat innovation as a race, it will cause political instability and increase the risks associated with AI. One way to hold these conversations locally could be community-based education initiatives. Implemented well, AI could empower marginalized communities and promote equality; without careful consideration, it could deepen economic and social divisions. Education at the local level can foster inclusion and engagement by helping people know their rights regarding AI, understand its risks and benefits, and secure access to its goods and services. Through consistent, inclusive conversation and diverse participation at all levels, we can maximize AI’s potential.

The complex and evolving world of AI poses challenges to governments, corporations, developers, and consumers. As we identify these challenges and work together on concrete solutions, we can ensure a stable and thriving future in which AI is used to better humanity.

About Hannah Bergstrom: Hannah Bergstrom is a Diplomatic Courier Correspondent and Brand Ambassador for the Learning Economy.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.