

Deus Ex Machina: Exploring AI’s Impact on Human Potential


January 15, 2024

Like all novel technologies, AI presents both risks and opportunities. Whether AI empowers human flourishing or binds us is yet to be determined, but, used correctly, it can be a tool to unlock greater human potential, writes Circle’s Dante Disparte.

While many exponential technologies have vied for global attention, whether from investors, policymakers, regulators, or customers, none has enjoyed as rapid an ascent into public consciousness as AI. Like all novel technologies, it presents both risks and opportunities. On one hand, AI’s proponents argue that AI, like the internet before it, can take human potential beyond intellectual points of diminishing returns. On the other, opponents argue that AI, especially once it reaches a state of artificial general intelligence (AGI), raises the specter of an extinction-level event for everything that requires math, computational logic, or encryption (which by today’s standards is virtually everything). The result, they warn, would be humanity reduced to serfs at the hands of super-intelligent machines.

Somewhere between the glorified modernization of Clippy, Microsoft’s paperclip-shaped virtual assistant from 1996, and the end-of-days scenes of the “Terminator” movies lies a pragmatic middle ground. Indeed, more than 1,000 concerned scientists and technologists joined ranks in a rare show of unanimity, calling for responsible innovation and guardrails for AI. The world got a flavor of what these governance guardrails might look like with the controversial near-ouster of Sam Altman by OpenAI’s board at the end of last year. The event, for which scant details have been made public, shows how governance still very much depends on individual leaders rather than the kinds of collective defense advocated by concerned technologists in their global appeals.

This case was notable since OpenAI is the maker of ChatGPT, arguably the world’s most successful technology launch if measured by user growth against the limited marketing funds the firm spent. It raises some important societal questions. Namely, will generally available AI genuinely, meaningfully, and constructively augment human potential? Or will it trigger a deleterious slide in intellectual curiosity, independence, and human discovery?

The early breakthrough of calculators was received by hard-core pencil-and-paper mathematicians as a scourge, yet today calculators play a vital role as mathematical building blocks in an ambitious human learning journey. In the hands of thoughtful curators, AI is like a scientific calculator for complex intellectual abstractions and the occasional platitudinous ponderosity occupying our minds. Whether AI sets humans free or binds us is still to be determined, especially since the world has only witnessed the first population-scale versions of this impressive technology.

About Dante A. Disparte:
Dante A. Disparte serves as the Chief Strategy Officer & Head of Global Policy for Circle and is a member of Diplomatic Courier’s editorial advisory board.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.