Regulating AI: Guardrails Without Going Off the Rails


January 8, 2024

In this inaugural report from a World in 2050 Senior Fellows collective intelligence gathering, Fellows discussed the Biden executive order on regulating AI—what it did well and where it fell short…and what we need to do better to ensure AI helps humanity flourish in 2024 and beyond.

This report was compiled from a collective intelligence gathering of World in 2050’s (W2050) Senior Fellows (Exponential Technology Committee). The meeting took place under the Chatham House Rule, so ideas are not directly attributed to any individual Fellow. The W2050 Senior Fellows attending the committee meeting were Lisa Gable, Joseph Toscano, Mario Vasilescu, and Nikos Acuña.

On the morning of October 30, 2023, the Biden administration released an executive order setting out guiding regulatory principles and priorities for the development and use of AI. On the afternoon of the same day, members of W2050’s Senior Fellows committee on exponential technology met to discuss the executive order, as well as what best-practice regulation of AI could look like in 2024 and beyond. While the discussion began with the Biden administration’s executive order, the Fellows’ thinking on longer-term regulatory best practice was global in scope.

Where the Executive Order Falls Short

The Biden administration’s executive order calls for companies developing the largest, most capable AI models to disclose a wide swath of information to the federal government. Fellows agreed that while the intent of the executive order appears good, and it will help advance the legal side of regulating and de-risking the development of AI, what the order calls for is infeasible for two main reasons. First, the level of granularity the order demands in companies’ disclosures is technically unrealistic and would, if enforced, have a chilling effect on innovation. Second, many of the larger companies developing the most powerful AI models will resist disclosing information, even when doing so would be for the good of society.

The first problem is a matter of regulators asking for more than is necessary, which would create an undue burden on innovators. The second speaks to the resistance that large companies innovating in AI will bring to bear against any call to disclose their development practices. This resistance is exacerbated by powerful lobbying practices—which decry government interference and infringement on free will—and will make it very tricky to build actual regulatory law on top of this executive order.

However, the executive order itself has many positive aspects, according to the Fellows. Some of the programs it proposes are promising, and the order begins to tackle the legal ramifications of what healthy regulation can look like. Perhaps more importantly, it gives us a basis for thinking about what is possible and what can work.

Toward Healthy Regulatory Ecosystems

A regulatory ecosystem is not about specific regulations for specific problems. Instead, it is a set of guiding principles which not only inform how particular innovations will be regulated, but also help innovators understand what will and will not be acceptable behavior as they develop new technologies.

A big part of successful regulation must be disclosure, and despite pushback from innovating companies there is a great deal of precedent, from financial disclosure for the sake of taxation to disclosure of safety protocols across various industries. It is possible for companies to disclose where their training data comes from, how training is carried out, and what the intent of the AI innovation is. This can be done at a level of abstraction that prevents the algorithm from being reverse engineered—thus protecting the innovators’ intellectual property—while still giving regulators enough information to operate effectively. This will create extra work for both regulators and companies, but it can be done without harming the ability to innovate, and it is worthwhile in the name of both safety and national security.
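
To make the disclosure idea concrete, below is a minimal sketch of what a coarse-grained disclosure record might look like. The schema, field names, and values are hypothetical illustrations for this report, not any regulator’s actual format: detailed enough to audit data sources, training method, and intent, yet too coarse to reverse engineer the model.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDisclosure:
    """Hypothetical coarse-grained disclosure record: detailed enough
    for oversight, too coarse to reverse engineer the model itself."""
    model_name: str
    intended_use: str         # the stated intent of the AI innovation
    data_sources: list[str]   # categories of sources, not the raw datasets
    training_method: str      # high-level description of how training is done
    safety_evaluations: list[str]  # which checks were run before release

disclosure = TrainingDisclosure(
    model_name="example-model-v1",
    intended_use="general-purpose text assistant",
    data_sources=["licensed text corpora", "public web crawl", "synthetic data"],
    training_method="self-supervised pretraining, then human-feedback fine-tuning",
    safety_evaluations=["red-team review", "bias audit", "misuse testing"],
)

# The regulator receives structured, auditable facts; no field exposes
# model weights, hyperparameters, or the underlying data itself.
print(json.dumps(asdict(disclosure), indent=2))
```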

These same principles suggest what other digital platforms could do if they chose to. The failure of regulators to prompt this sort of disclosure, and of platforms to choose transparency, can be instructive in figuring out a way forward. The problems that arose from poor social media regulation—the proliferation of mis- and disinformation, the misuse of user data, and the non-transparent engineering of platforms to influence human behavior—will persist with AI, but supercharged. So, in better regulating AI we should also address these pre-existing issues in order to create a truly healthy regulatory ecosystem.

Priorities for Healthy Regulatory Ecosystems

Data Disclosure & Laws on Data Crime: Companies should not be required to disclose everything they’re doing, but disclosures need to be sufficient to check for data crimes. Unfortunately, we haven’t yet agreed on legislation defining what data crimes look like, which makes such legislation a necessary first step toward truly effective disclosure guidance.

Independent Regulators: We need regulatory agencies which are wholly independent of both governments and private enterprise, though these agencies must be funded at least in part by governments. Independent experts are needed to regulate effectively, and independence from political systems helps avoid regulatory overreach, while government funding helps avoid regulatory capture—when regulatory agencies come to serve the interests they regulate rather than the public interest.

Data Sovereignty: The sacrifice of privacy must be addressed—we need transparency about where our data goes and how algorithms use it, and this should be part of data disclosure. We also need to define and protect fundamental rights to our data: the rights to access, amend, and delete it at will. And we need to resist AI-empowered automated decision making based on data capture, as algorithmic risk assessment has proven prone to bias.
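
As an illustration of what those rights could look like at a technical level, here is a minimal sketch of a data-rights interface. The class and method names are hypothetical, derived from the rights named above rather than from any existing law or platform:

```python
from abc import ABC, abstractmethod

class DataRights(ABC):
    """Hypothetical interface for the three rights named above:
    access, amend, and delete one's own data at will."""

    @abstractmethod
    def access(self, user_id: str) -> dict:
        """Return every record held about the user, including which
        algorithms have consumed that data and for what purpose."""

    @abstractmethod
    def amend(self, user_id: str, record_id: str, correction: dict) -> None:
        """Correct an inaccurate record at the user's request."""

    @abstractmethod
    def delete(self, user_id: str, record_id: str) -> None:
        """Erase a record and propagate the erasure to any downstream
        system that trained or made decisions on it."""
```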

Digital Education: Media literacy and digital literacy should be just as core to education systems as mathematics and language. This can protect against mis- and disinformation, but it can also help us protect our data and be better prepared for evolving labor market needs.

Protecting the Information Commons: Today’s information commons incentivizes speed and noise—those who can garner the most attention the most quickly profit. We can instead reward meaningful and reliable information through concepts like transparent verification of provenance (for AI-generated content) and proof of effort for content creators. In this way we can slow the proliferation of content and ensure transparency about where it comes from, so consumers can make informed decisions about the information they consume.
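
The provenance idea can be illustrated with a toy signing scheme. The sketch below is deliberately simplified (production systems such as C2PA manifests use public-key signatures rather than a shared secret, and the names here are invented for illustration), but it shows the core mechanic: a record cryptographically bound to the content it describes.

```python
import hashlib, hmac, json, time

SECRET = b"hypothetical-publisher-key"  # stand-in for a real signing key pair

def attach_provenance(content: bytes, creator: str, ai_generated: bool) -> dict:
    """Bind a provenance record to content by signing it together with
    the content's hash (an HMAC here, for simplicity)."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "ai_generated": ai_generated,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """A consumer checks that the record is authentic and that it
    actually describes this piece of content."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    authentic = hmac.compare_digest(
        signature, hmac.new(SECRET, payload, hashlib.sha256).hexdigest())
    matches = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return authentic and matches

article = b"An AI-assisted news summary..."
record = attach_provenance(article, creator="newsroom-x", ai_generated=True)
print(verify_provenance(article, record))           # True: record matches content
print(verify_provenance(b"tampered text", record))  # False: hash mismatch
```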

About Shane Szarkowski:
Dr. Shane Szarkowski is Editor-in-Chief of Diplomatic Courier and the Executive Director of World in 2050.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.