
a global affairs media network

www.diplomaticourier.com

The future of work in the AI era

Men in line for unemployment aid, 1938. Image by Dorothea Lange courtesy of The New York Public Library from Unsplash.

April 23, 2024

AI raises questions about employment and the future of the labor force, with two commonly discussed paths: apocalypse or utopia. Regardless of the ultimate scenario, both involve a vast increase in productivity and unheard-of wealth—reducing AI to a political problem, writes Eric Posner.

Recent discussions about the implications of artificial intelligence for employment have veered between the poles of apocalypse and utopia. Under the apocalyptic scenario, AI will displace a large share of all jobs, vastly exacerbating inequality as a small capital-owning class acquires productive surpluses previously shared with human laborers.

The utopian scenario, curiously, is the same, except that the very rich will be forced to share their winnings with everyone else through a universal basic income or similar transfer program. Everyone will enjoy plenty of freedom, finally achieving Marx’s vision of communism, where it is “possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman, or critic.”

The common assumption in both scenarios is that AI will vastly increase productivity, forcing even highly paid doctors, software programmers, and airline pilots to go on the dole alongside truck drivers and cashiers. AI will not only code better than an experienced programmer; it will also be better at performing any other tasks that the coder might be retrained to do. But if all this is true, then AI will generate unheard-of wealth that even the most extraordinary sybarite would have trouble exhausting.

The dystopic and utopian outcomes both reduce AI to a political problem: whether the left-behind (who will have the advantage of numbers) will be able to compel the AI tycoons to share their wealth. There is reason for optimism. First, the gains from AI under this scenario are so extravagant that the super-rich might not mind giving up a few marginal dollars, whether to appease their consciences or to buy social peace. Second, the growing mass of the left-behind will include highly educated, politically engaged people who will join the traditionally left-behind in agitating for redistribution.

But there is also a deeper question. How will people respond, psychologically and politically, to the realization that they can no longer contribute to society by engaging in paid work? Labor-force participation has already declined significantly since the 1940s for men, and though women entered the workforce in large numbers only in the 1970s and 1980s, their participation rate has also begun to decline. This may well reflect a trend of people at the bottom losing the capacity to convert their labor into compensable value as technology advances. AI could accelerate this trend, defenestrating people at the middle and top as well.

If the social surplus is shared widely, one might ask, "Who cares?" In the past, members of the upper class avoided taking jobs and disdained those who did. They filled their time with hunting, literary pursuits, parties, political activities, hobbies, and so on—and they seem to have been rather pleased with their situation (at least if you ignore the bored gentry idling in summer dachas in Chekhov's stories).

Modern economists tend to think of work in the same way: as simply a cost ("c") that must be offset by a higher wage ("w") to induce people to work. Like Adam and Eve, they implicitly think of work as a pure bad. Social welfare is maximized through consumption, not through the acquisition of "good jobs." If this is right, we can compensate people who lose their jobs simply by giving them money.
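This textbook framing can be made concrete in a few lines. The sketch below is an illustrative stylization of the standard labor-supply condition, not a model from the article; the function names and the numbers are assumptions chosen for clarity.

```python
# Stylized labor-supply model: a worker takes a job only when the wage w
# exceeds the disutility of working c, and welfare depends on consumption.

def works(w: float, c: float) -> bool:
    """Participation condition: work iff the wage covers the cost of working."""
    return w > c

def welfare(consumption: float, working: bool, c: float) -> float:
    """Quasi-linear welfare: consumption minus the disutility of any work done."""
    return consumption - (c if working else 0.0)

# Illustrative numbers (assumptions): wage of 50, disutility of work of 20.
w, c = 50.0, 20.0
employed = welfare(consumption=w, working=True, c=c)
displaced_with_transfer = welfare(consumption=w, working=False, c=c)

# In this model, a cash transfer equal to the lost wage leaves a displaced
# worker strictly better off, since the cost of working disappears too.
assert displaced_with_transfer > employed
```

On this view, money fully compensates for job loss, and then some. The article's point is precisely that this model leaves out the non-pecuniary value of work.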

Maybe human psychology is flexible enough that a world of plenty and little or no work could be regarded as a boon rather than an apocalypse. If aristocrats of the past, retirees of today, and children of all eras can fill their time with play, hobbies, and parties, perhaps the rest of us can, too.

But research indicates that the psychological harms of unemployment are significant. Even after controlling for income, unemployment is associated with depression, alcoholism, anxiety, social withdrawal, disruption of family relations, worse outcomes for children, and even early mortality. The recent literature on "deaths of despair" provides evidence that unemployment is associated with elevated suicide and overdose risk. The mass unemployment linked to the "China shock" in some regions of the United States was associated with elevated mental-health risks among those affected. Loss of self-esteem and a sense of meaning and usefulness is inevitable in a society that valorizes work and scorns the unemployed and unemployable.

As such, the long-term challenge posed by AI may be less about how to redistribute wealth than about how to preserve jobs in a world in which human labor is no longer valued. One proposal is to tax AI more heavily relative to labor; another—recently advanced by MIT economist David Autor—is to use government resources to shape the development of AI so that it complements rather than substitutes for human labor.

Neither idea is promising. If the most optimistic predictions about AI’s future productivity benefits are accurate, a tax would have to be tremendously high to have any impact. Moreover, AI applications are likely to be both complements and substitutes. After all, technological innovations generally enhance some workers’ productivity, while eliminating others’ tasks. If the government steps in to subsidize complementary AI—say, algorithms that improve writing or coding—it could just as easily end up displacing jobs as preserving them.

Even if taxes or subsidies can keep alive jobs that produce less value than AI substitutes, they will merely be putting off the day of reckoning. People who derive self-esteem from their jobs do so in part because they believe that society values their work. Once it becomes clear that their work can be done better and more cheaply by a machine, they will no longer be able to maintain the illusion that their work matters. If the U.S. government had preserved the jobs of buggy-whip makers when automobiles displaced horse-drawn carriages, one doubts that those positions would still confer much self-esteem on anyone who took them today.

Even if humans are able to adjust to a life of leisure in the long term, the most optimistic projections of AI productivity portend massive short-run disruptions to labor markets, akin to the impact of the China shock. That means substantial—and for many people, permanent—unemployment. There is no social safety net generous enough to protect people from the mental-health effects, and society from the political turmoil, that would follow from such widespread disappointment and alienation.

Copyright: Project Syndicate, 2024.

About Eric Posner:
Eric Posner, a professor at the University of Chicago Law School, is the author of How Antitrust Failed Workers.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.