www.diplomaticourier.com

Will 2024 be the year of responsible AI?

February 11, 2024

Harnessing AI to bring about an equitable and human-centered future requires new, inclusive forms of innovation. Three promising trends offer hope for the year ahead, write Yolanda Botti-Lodovico and Vilas Dhar.

The start of 2024 has been marked by a wave of predictions regarding the trajectory of artificial intelligence, ranging from optimistic to cautious. Nevertheless, a clear consensus has emerged: AI is already reshaping human experience. To keep up, humanity must evolve.

For anyone who has lived through the rise of the internet and social media, the AI revolution may evoke a sense of déjà vu—and raise two fundamental questions: Is it possible to maintain the current momentum without repeating the mistakes of the past? And can we create a world in which everyone, including the 2.6 billion people who remain offline, is able to thrive?

Harnessing AI to bring about an equitable and human-centered future requires new, inclusive forms of innovation. But three promising trends offer hope for the year ahead.

First, AI regulation remains a top global priority. From the European Union’s AI Act to U.S. President Joe Biden’s October 2023 executive order, proponents of responsible AI have responded to voluntary commitments from Big Tech firms with policy suggestions rooted in equity, justice, and democratic principles. The international community, led by the newly established United Nations High-Level Advisory Body on AI (one of us, Dhar, is a member), is poised to advance many of these initiatives over the coming year, starting with its interim report on Governing AI for Humanity.

Moreover, this could be the year to dismantle elite echo chambers and cultivate a global cadre of ethical AI professionals. By expanding the reach of initiatives like the National Artificial Intelligence Research Resource Task Force—established by the United States’ National AI Initiative Act of 2020—and localizing implementation strategies through tools such as the UNESCO Readiness Assessment methodology, globally inclusive governance frameworks could shape AI in 2024.

At the national level, the focus is expected to be on regulating AI-generated content and empowering policymakers and citizens to confront AI-powered threats to civic participation. As a multitude of countries, representing more than 40% of the world’s population, prepare to hold crucial elections this year, combating the imminent surge of mis- and disinformation will require proactive measures. These include initiatives to raise public awareness, promote broad-based media literacy across various age groups, and address polarization by emphasizing the importance of empathy and mutual learning.

As governments debate AI’s role in the public sphere, regulatory shifts will likely trigger renewed discussions about using emerging technologies to achieve important policy goals. India’s use of AI to enhance the efficiency of its railways and Brazil’s AI-powered digital-payment system are prime examples.

In 2024, entities like the UN Development Programme are expected to explore the integration of AI technologies into digital public infrastructure (DPI). Standard-setting initiatives, such as the upcoming UN Global Digital Compact, could serve as multi-stakeholder frameworks for designing inclusive DPI. These efforts should focus on building trust, prioritizing community needs and ownership over profits, and adhering to “shared principles for an open, free, and secure digital future for all.”

Civil-society groups are already building on this momentum and harnessing the power of AI for good. For example, the non-profit Population Services International and the London-based start-up Babylon Health are rolling out an AI-powered symptom checker and health-provider locator, showcasing AI’s ability to help users manage their health. Similarly, organizations like Polaris and Girl Effect are working to overcome the barriers to digital transformation within the non-profit sector, tackling issues like data privacy and user safety. By developing centralized financing mechanisms, establishing international expert networks, and embracing allyship, philanthropic foundations and public institutions could help scale such initiatives.

As nonprofits shift from integrating AI into their work to building new AI products, our understanding of leadership and representation in tech must also evolve. By challenging outdated perceptions of key players in today’s AI ecosystem, we have an opportunity to celebrate the true, diverse face of innovation and highlight trailblazers from a variety of genders, races, cultures, and geographies, while acknowledging the deliberate marginalization of minority voices in the AI sector.

Organizations like the Hidden Genius Project, Indigenous in AI, and Technovation are already building the “who’s who” of the future, with a particular focus on women and people of color. By collectively supporting their work, we can ensure that they take a leading role in shaping, deploying, and overseeing AI technologies in 2024 and beyond.

Debates over what it means to be “human-centered” and which values should guide our societies will shape our engagement with AI. Multi-stakeholder frameworks like UNESCO’s Recommendation on the Ethics of Artificial Intelligence could provide much-needed guidance. By focusing on shared values such as diversity, inclusiveness, and peace, policymakers and technologists could outline principles for designing, developing, and deploying inclusive AI tools. At the same time, integrating these values into our strategies requires engagement with communities and a steadfast commitment to equity and human rights.

Given that AI is well on its way to becoming as ubiquitous as the internet, we must learn from the successes and failures of the digital revolution. Staying on our current path risks perpetuating—or even exacerbating—the global wealth gap and further alienating vulnerable communities worldwide.

But by reaffirming our commitment to fairness, justice, and dignity, we could establish a new global framework that enables every individual to reap the rewards of technological innovation. We must use the coming year to cultivate multi-stakeholder partnerships and promote a future in which AI generates prosperity for all.

Copyright: Project Syndicate, 2024.

About Yolanda Botti-Lodovico: Yolanda Botti-Lodovico is Policy and Advocacy Lead for the Patrick J. McGovern Foundation.
About Vilas Dhar: Vilas Dhar is President of the Patrick J. McGovern Foundation.
The views presented in this article are the authors’ own and do not necessarily represent the views of any other organization.