
a global affairs media network

www.diplomaticourier.com

Keeping AIs in Their Box

Photo by Jouwen Wang on Unsplash

October 7, 2023

In his latest book, Mustafa Suleyman seeks to advance public debate on AI so that more informed consensus on how to regulate AI can be formed. While the book is interesting and actionable, the topic has been covered in more depth and more succinctly elsewhere, writes Joshua Huminski.

Democracies are rather bad at identifying and understanding long-term risks. They are certainly appalling at mobilizing resources to address what, in hindsight, are rather obvious challenges like climate change or, more recently, artificial intelligence (AI).

In a recent FT piece, Simon Kuper broke down the four stages of democratic problem management—a sort of stages of grief for inaction. First, “only a few experts even realize there is a problem,” and second, “ignorant public debate led by politicians, journalists and assorted under-informed noisemakers.” These map roughly onto denial and anger. The process then continues into “a long upskilling of the public debate,” something akin to bargaining and depression—the public just doesn’t want to accept things as they are, and helplessness ensues. Finally, there is acceptance or, for Kuper, the stage at which “a sophisticated majority agreement emerges on what should be done.” This, mind you, doesn’t mean action follows, merely that a consensus emerges that something should be done, almost certainly captured in a set of bullet points or memoranda of agreement. 

The Coming Wave | Mustafa Suleyman & Michael Bhaskar | Crown

Kuper finds the debate about AI at the second stage, the “ignorant public debate,” though Mustafa Suleyman, the co-founder and former head of applied AI at DeepMind, hopes to advance it through his new book “The Coming Wave.” Suleyman joins a growing chorus of AI experts and technologists warning about the titular coming wave of artificial intelligence, but offers a different approach to the conversation about how AI will change the way we live and work. He is a rare technologist who offers a somewhat pessimistic and certainly cautionary view of AI but stops short of saying “the sky is falling.” In the end, political inertia and geopolitics may well militate against the containment he advocates. 

For Suleyman, AI will markedly change the way we live and work, but it is less one single change and more a collection of changes enabled by and through AI. The coming changes are more akin to those enabled by steam power or electricity. Both inventions were revolutionary to be sure, but it was the broader system of changes those creations unleashed that proved transformative. Indeed, there is no single “AI,” and we are a long way off from an artificial general intelligence (AGI) that can do all things all the time (and could presage the downfall of humanity, some warn). 

It is the nexus of AI and other technologies, and the developments AI enables, that will have the most profound impact on society. Suleyman points to synthetic biology as an example. The declining price of DNA synthesizers and the increase in processing power enabled by AI mean that the ability to create and edit life is spreading at an alarming rate. These are no idle concerns. Suleyman’s company, DeepMind, created AlphaFold, an AI tasked with determining how proteins fold and interact. It was, as Suleyman writes, so quickly successful that it won, and effectively ended, a global contest to develop a tool to accurately determine this vital piece of knowledge. Could an AI develop a business plan to make money on Amazon? Suleyman thinks it is only a matter of time. Paired with robotics, drones, and greater autonomy, the opportunities for both good and ill through AI are nearly endless. 

In “The Coming Wave,” Suleyman tilts away from the techno-utopianism of Silicon Valley and its associated communities of innovation. He generally balances the promise and peril of AI, even if the enthusiasm is inescapable. His status as one of the leading drivers of AI development makes his warnings all the weightier.  

Suleyman’s overview of the development of AI and comparable technological revolutions is interesting, but the subject has been covered in greater depth and more succinctly in other books. This is less a reflection on “The Coming Wave” itself than on the sheer deluge of books being published on the topic that aim to address Kuper’s “ignorant public debate.” 

He does, however, raise some interesting points, such as the systemic schism that could well emerge between “fully fledged techno-dictatorships” such as China and democratic systems that are increasingly rendered hollow and ineffective in the face of technology’s spread. He rather creatively warns of a third way, a “Hezbollahization,” in which technology spreads, tribes develop, and no one really is in charge. A bit fanciful perhaps, but it does lead to curious thought exercises, not least of which is how these three systems would interact with one another. We may well soon find out. 

Reviewing the threats and opportunities of AI and what it enables is not where the value of Suleyman’s book is found. It lies less in his diagnosis and more in his prescription for what society needs to do in response. For Suleyman, society faces three courses of action: it can halt progress entirely, which is not feasible; it can allow unfettered technological progress, which invites unanticipated consequences; or it can seek to contain AI and other technologies through a series of policy actions. It should be no surprise that Suleyman advocates containment above the other two—technological progress is not only unstoppable but necessary for human development and growth, while unchecked technological progress will lead to the consequences about which he warns in the first half of the book. 

The 10 steps Suleyman outlines are generally sensible. There should be, for example, more AI researchers working on ethics and safety issues. AI systems should be audited to better understand what’s happening inside the “black box” of algorithms and neural networks. As Suleyman writes, “a paradox of the coming wave is that its technologies are largely beyond our ability to comprehend at a granular level yet still within our ability to create and use.” Changing business incentives to prioritize responsible AI development and shifting away from the first-to-market, consequences-be-damned model is necessary. Developing a culture of greater transparency, openness, and candor within the AI and technology community, one that discloses failures, would also improve the overall ecosystem through iterative learning. 

While Suleyman rightly advocates for policy solutions that could lead to the bounding, if not containment, of AI’s societal impact, in practice the only limitation that may well be possible is the containment of the damage caused by the technology in question. The delta between global conversations on AI regulation, the requirements and demands of strategic competition, and domestic politics is significant and growing. While there is ongoing regulatory activity in the EU, a growing awareness in Washington, and some comments coming from Beijing, the possibility of a global AI weapons convention is very slim. At a national level, there may be steps to offset the pain that results from AI-driven economic dislocation, but that is a post-impact solution. 

In America, Congress can barely pass a budget to keep the government open (and is now without a Speaker of the House), let alone grasp the complexity of AI and the social and economic dislocation it will generate or the opportunities it will create. The “ignorant public” is content to accept the benefits of AI, reveling in the novelty and usefulness of ChatGPT without giving a second thought to what’s under the hood (as it were) and what it means more broadly. 

More alarming still, the recent history of governing, containing, and regulating technology leaves much to be desired and is not cause for optimism about the future. Social media governance remains an open question. Progress on regulation and governance related to climate change proceeds in fits and starts, subject to the winds and vagaries of politics. Even on global warming, an open and vocal debate about humanity’s role in Anthropocene environmental alteration persists despite clear scientific data. 

Despite Suleyman’s noteworthy entry into the popular literature on AI, and broader efforts to bound the conversation, we’re sadly likely to find ourselves stuck in Kuper’s “ignorant public” stage for some time.

About Joshua Huminski:
Joshua C. Huminski is the Senior Vice President for National Security & Intelligence Programs and the Director of the Mike Rogers Center at the Center for the Study of the Presidency & Congress.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.