
a global affairs media network

www.diplomaticourier.com

Revealing the human hands and faces behind AI

May 25, 2024

Amazon's "Just Walk Out" AI technology faced scrutiny when claims arose it was powered by humans based in India. Madhumita Murgia's "Code Dependent" explores AI's human impact, highlighting societal issues and serving as a cautionary tale for readers and policymakers alike, writes Joshua Huminski.

Amazon found itself in a rather odd position, having to defend its “Just Walk Out” artificial intelligence (AI) technology as not actually being run by humans in India. A Business Insider article suggested that Amazon’s technology—which allowed consumers to walk in, pick up a product, and “just walk out” without interacting with any humans—didn’t leverage AI at all, but was in fact run by 1,000 workers in India. Amazon responded by saying that the workers—who are real, by the way—manually checked a sample of the AI monitoring for accuracy.

This slightly surreal and dystopian news cycle illustrated a very common, if underappreciated, practice: using humans to code and tag data feeds to train AI algorithms. This very human aspect of AI is the subject of “Code Dependent.” Shortlisted for the Women’s Prize for Non-Fiction 2024, the book by Madhumita Murgia—the Financial Times’ AI editor—focuses not on ChatGPT or the wildest fantasies of what AI could one day do, but on the very real impacts on everyday people around the world. From data taggers to women affected by deepfake pornography, gig workers shortchanged by delivery services, and juvenile offenders targeted by predictive algorithms, Murgia puts a very human face on the impact of AI.

Code Dependent | Madhumita Murgia | Picador

These are stories and lived experiences that most students and users of AI are unlikely to ever encounter, and that is what makes this a rather powerful book. For as much as AI is automated and technical, it remains very much manual and human. Murgia pulls back the veil of the “black box” to show that much of both the work underpinning AI and its impact relies on people—people who are often from low-income countries, often with little insight into how the technology they train is used, and often deriving little benefit from the fruits of their labor. This is, of course, not massively different from the shifting of low-skill, repetitive tasks to the markets with the lowest wage rates.

What Murgia writes about is less the failings of artificial intelligence and more the failings of humanity. In nearly every story she recounts, one could remove artificial intelligence and see the core challenges that remain. These are failings of society and humanity, shortcomings of justice and equity, but not those of technology alone. Does the presence of artificial intelligence accelerate and exacerbate these social failings and create new pathways for exploitation? Most certainly. But those core elements were present before AI and will almost always, and inevitably, accompany transformative technologies. 

AI supercharges and accelerates the impact of those un- or under-addressed social and political issues. Photoshop was certainly used to create offensive fake images, but with AI the speed and variety of deepfake creation is limited only by creators’ horrid, twisted imaginations. There is little recourse for deepfake victims, and even where relevant legislation is emerging, there is no transnational consensus or regime for justice and compensation.

Indeed, a central theme throughout Murgia’s book, though one that is not explicitly articulated, is the simple fact that AI is a tool. The questions are how it is built and coded, whether pre-existing biases are baked into the system—consciously or otherwise—and against what problem sets it is tasked. Clearly, as she writes, it could be a powerful tool to help rural doctors receive a second opinion, but only so long as it is properly trained—if it cannot distinguish between tuberculosis and COVID-19, the tool needs improvement. If it is trained solely on the medical histories and experiences of white northern Europeans, it will be ill-equipped to accommodate the different pain experiences of, for example, African Americans—a case about which she writes.

What happens, however, when reliance on AI becomes a crutch for law enforcement and policymakers, displacing their previously established best practices? In the Netherlands, an AI-powered system sought to identify potential offenders and interdict crime preventatively. Instead, it trapped juveniles in an endless Kafka-esque loop of surveillance, monitoring, and rather lazy police and social work. Instead of one tool among many, it simply became the tool—offenders went on the list, and they stayed there, unable to leave. Rather than identifying potentially vulnerable youths and guiding appropriate social work their way, the AI became a modern scarlet letter. Facial recognition in the United States and elsewhere had an alarmingly high rate of false positives, leading to wrongful imprisonment—AI trumped common sense and good policing.

Murgia eloquently and gracefully recounts the experiences of her subjects—and these are stories that should be widely read and widely heard. She is right to highlight the experiences of those without a voice, most aggrieved and affected by artificial intelligence—the perpetual underclasses. Yet the inclusion of loaded phraseology such as “digital colonialism” serves as a dog whistle for some. While it undoubtedly engenders support from those already predisposed to see the legacies of imperialism and oppression manifested in digital datasets, it will almost certainly switch off those who need to hear these stories the most. This is a significant obstacle for the modern labor movement—addressing the impacts of AI without the baggage of the past or indeed the lexicon of past efforts. One suspects that most people would support efforts to make Uber and other app-based services more transparent and fairer for those in the gig economy, but when the language of unionization enters the discussion, those same people may switch off. This is as much a function of the failure of the labor movement and the growing disconnect, at least in the United States, between unions and the public as it is of the success of tech companies in vilifying such movements.

Murgia walks a rather fine line between seeing oppression by design and oppression by consequence. At times “Code Dependent” reads as though there is some nefarious conspiracy out of Silicon Valley—tech bros scheming to ensure perpetual oppression via algorithmic code. Is it possible? Sure, but it is far more likely that those very tech bros are unconsciously coding their pre-existing biases into the system—they simply don’t know that the algorithms are ill-equipped or ill-designed to accommodate things outside of their own lived experiences.

It is less bias by commission than unconscious bias by omission, the latter being decidedly less nefarious but no less consequential for those on the receiving end of the coded shortcomings. Omission can be corrected—hiring more diverse programmers and bringing in more diverse perspectives from ethicists, advocates, and minority communities, as Murgia suggests. Commission, however, requires firm regulation and punitive action either legally or in the public square. 

The international community is working to develop frameworks for AI governance. The United Kingdom recently held a summit at Bletchley Park—most famous for hosting the codebreaking enterprise of World War II and home to one of the world’s first computers—in an attempt to develop rules of the road for AI. It is a start, as are bilateral and multilateral efforts to reach similar frameworks.

While the effects of AI on the underprivileged and most vulnerable may be on the agenda, the real and potential downsides of AI development will almost certainly be overpowered and overshadowed by first-to-market incentives in the private sector and the demands of strategic competition. Murgia, to her credit, highlights a potential logical end of AI’s unchecked proliferation: China’s national (and increasingly exported) authoritarian surveillance state. While an interesting thought experiment and an alarmingly alluring argument, it is a stretch to suggest that Beijing’s panopticon will spread to the West, if for no other reason than democracies offer a mechanism for accountability and limitation—provided voters avail themselves of the opportunity and legislatures act.

“Code Dependent” is a unique entry into the rapidly expanding body of writing on AI and it is a welcome one, too. It is too easy to be swept into the hyperbole and futuristic excitement, or alternatively, dystopian doom and gloom of AI, and to forget the very human impacts of this technology. Murgia is a needed voice in the discussion on AI’s development and her book serves as a cautionary tale for readers and policymakers alike. 

About Joshua Huminski:
Joshua C. Huminski is the Senior Vice President for National Security & Intelligence Programs and the Director of the Mike Rogers Center at the Center for the Study of the Presidency & Congress.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.