Humanity in the Age of AI

When Meta, née Facebook, announced that they were jettisoning billions of facial recognition scans, the response from the public was largely a shoulder shrug. Either they were too busy to notice or didn’t realize that every time they tagged a friend’s photo, they were helping Facebook’s artificial intelligence (AI) software improve. Undoubtedly the decision was part of the company’s effort to rebrand itself, redirect scandal, and present a more user-friendly, privacy-conscious image. This masks a much bigger theme—the ubiquity of AI in our daily lives. For as much as futurists pen books, essays, and tweets about how AI can or will change the way we live, work, and love, the reality is that it is already here, even if we don’t fully appreciate what that means.

The Age of AI and Our Human Future | Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher | Little, Brown and Company | November 2021.

The reality, alarming for some and exciting for others, is that we have only scratched the surface of AI and what it means for society, politics (in both its domestic and geopolitical flavors), and humanity writ large. Perhaps even more concerning is the simple fact that we have yet to even start the conversation on what AI means, what it can do, and how humanity will adjust accordingly. There are pockets of conversation, particularly in academia and the policy think tank world, as well as exchanges between doyens of their fields, one of which turned into a book, “The Age of AI and Our Human Future,” a joint work by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher.

The three are an interesting grouping and offer, at least at first glance, the promise of a fascinating dialogue: Kissinger, the high priest of geopolitics (to some); Schmidt, the former CEO of Google; and Huttenlocher, the inaugural dean of MIT’s Schwarzman College of Computing. The authors’ approach raises two parallel lines of inquiry, one more practical than the other. At a societal level, how will AI change the way we live, work, and fight? At a philosophical level, how will AI change how we view and understand our humanity?

“The Age of AI” attempts to provide a neutral look at AI, starting with what it is, how it came about, and what it can do at present. It then shifts to an exploration of how it could change society and security. It is a book with very few answers and a litany of questions, ones that we as a society should be asking ourselves. That paucity of answers is actually a good thing, as it forces the reader to grapple with transformative, if thorny, problems. At the same time, as the Financial Times noted, it could be alarming that these three experts do not have many, if any, answers.

For the authors, AI is a disruptive triple threat: it is easily developed and deployed; it is dual-use, meaning it has both civilian and military utility; and it has significant destructive potential. No previous technology has embodied all three. There is, however, a fourth, under-recognized issue: appreciation of the ethics surrounding AI is uneven. While there exists at least a strategic appreciation of nuclear weapons, if not an operational doctrine, nothing comparable exists for AI.

There is a nascent discussion on the international stage, but it appears limited to the utility and ethics of autonomous weapons systems. There is no formal arms control structure or discussion framework for cyber, let alone AI. The risk is that Russia, China, and the United States will operate with differing understandings of the ethics and morality of AI in security and defense, which is almost certainly the case at the moment.

That asymmetry extends to transparency. While the United States had, until October of this year, the National Security Commission on Artificial Intelligence, one suspects that the equivalent national-level dialogues in Beijing and Moscow are far less public and open. (Georgetown University’s Center for Security and Emerging Technology is doing notably good work on this subject and recently published a strong report on China’s military adoption of AI.)

It is interesting to note, then, that the book says relatively little about China’s understanding and application of AI, and next to nothing about Beijing’s use of these technologies at home, e.g., the surveillance of the Uyghurs, the social credit system, and the monitoring of Hong Kong. This is perhaps unsurprising given the authors’ commercial considerations, but it is a notable omission nonetheless.

The speed with which the effects of AI will be felt and the scope of its potential impact far surpass those of nuclear weapons or any other emergent technology. More than 70 years of arms control theory and practice produced the body of agreements constraining nuclear weapons and their delivery systems; that did not happen overnight. Yet AI will, in all likelihood, demand the far swifter development of an analogous framework constraining its use.

If, as the authors assert, AI will battle AI and AI will defend networks against AI, how can we establish norms of behavior to avoid missteps, miscalculations, and strategic errors? In 1983, a Soviet lieutenant colonel in the air defense forces, Stanislav Petrov, quite literally saved the world. Soviet early warning systems indicated that an American intercontinental ballistic missile attack was underway. Petrov, the duty officer, judged that the system was malfunctioning, and indeed it was. He convinced his superiors of this and thus prevented a nuclear response. In that case there was a “human in the loop,” but that may not hold in the future. If responses must come at machine speed and your adversary is using AI more freely, what incentive is there for restraint? Are you not likely to remove the human from the equation entirely?

This highlights part of the “black box” problem of AI: while AI is extremely powerful at identifying patterns and solutions to the questions asked of it, we often do not understand how a system arrived at its conclusion. In practice, it is impossible to check the math of a deep-learning neural network. What about when AIs make mistakes? Who is then responsible? Should there be an AI ethics clearinghouse? Again, a series of questions the authors rightly raise.
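A minimal sketch makes the opacity concrete. The model, data, and library below are illustrative choices of mine, not anything drawn from the book: a small neural network that classifies well, yet whose only “explanation” for any answer is a stack of raw weight matrices.

```python
# A toy "black box," assuming scikit-learn is installed: the network
# learns the synthetic pattern, but its weights are not a human-
# checkable argument for any individual prediction.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X, y)

print("accuracy:", clf.score(X, y))               # the answers look right...
print("weights:", [w.shape for w in clf.coefs_])  # ...the "why" is ~1,700 numbers
# There is no line of math here to audit the way one checks a proof;
# in practice, oversight means probing the model's outcomes.
```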

As anyone who has applied for a government job on USAJobs can attest, getting past the algorithm is perhaps the greatest obstacle. AIs screen candidates in advance, looking for relevant keywords or phrases and artificially filtering applicants. If the screens are biased, intentionally or otherwise, how can we know other than by testing the outcomes?
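What “testing the outcomes” can look like, in a deliberately simplified sketch: the keyword filter below is hypothetical (USAJobs does not publish its screening logic), but it shows why, when the filter itself is opaque, comparing pass rates across matched sets of test resumes is often the only practical audit.

```python
# Hypothetical keyword screen; the required terms are invented for
# illustration and do not describe any real hiring system.
REQUIRED = {"intelligence", "analysis", "clearance"}

def passes_screen(resume: str) -> bool:
    """Pass a resume only if it contains every required keyword."""
    return REQUIRED <= set(resume.lower().split())

# Outcome-level audit: submit comparable test resumes for two groups
# and compare pass rates, since the filter's internals are unseen.
test_groups = {
    "group_a": ["intelligence analysis clearance", "analysis clearance pending"],
    "group_b": ["intelligence analysis clearance", "intelligence analysis clearance phd"],
}
for name, resumes in test_groups.items():
    rate = sum(passes_screen(r) for r in resumes) / len(resumes)
    print(f"{name}: pass rate {rate:.0%}")
```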

The downside is that, in attempting to be so neutral, the authors err a bit too much on the side of AI’s positive aspects, glossing over some of its more dramatic potential effects, not least widespread economic dislocation. It is no longer just low-skilled labor that could be automated or affected by AI, but also high-skill, knowledge-based professions: lawyers, doctors, bankers, and more. True, new avenues of employment will open, much as bank teller employment actually grew after ATMs arrived, but there will still be economic pain and disruption, something the authors only briefly address.

While there are references to deepfakes, mis- and disinformation, and cyberattacks, these issues are largely glossed over. If we as a society are having a hard enough time dealing with these challenges at the current level of technology, how will we cope when they are fully automated and left to their own devices? If past is prologue, service providers are ill-equipped, and scarcely incentivized, to intervene if it means limiting or losing market share. The authors do spend a lengthy chapter exploring the role of social media companies and their transnational implications, but it falls a bit short. If companies are incentivized to rush products to market, AI or otherwise, pushing security and other ethical concerns to the back burner, what risks are we facing as a society when it comes to AI?

Running across all of these issues is a fundamental failing: policymakers, let alone institutions, lack the base level of knowledge and understanding needed to engage with and address AI. This is something the authors do not confront: the inability of legislative bodies to keep pace with emerging technologies. When you have congressional representatives confused by the basics of e-mail and social media, you have a body of legislators that cannot grasp what AI will mean for their constituents and the country writ large.

“The Age of AI” does not break new ground on AI; there are other, more thorough books that explain the underpinnings of the technology and the potential implications of these tools. What the book does quite well is raise key questions about how AI will change the way we live and work, and it makes clear that we are just barely scratching the surface of what AI means for society and humanity.

About Joshua Huminski:
Joshua C. Huminski is the Senior Vice President for National Security & Intelligence Programs and the Director of the Mike Rogers Center at the Center for the Study of the Presidency & Congress.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.
