Yves here. Richard Wolff describes why AI does not have to wind up displacing workers; that outcome is the result of capitalist profit imperatives.
By Richard D. Wolff, professor of economics emeritus at the University of Massachusetts, Amherst, and a visiting professor in the Graduate Program in International Affairs of the New School University, in New York. Wolff’s weekly show, “Economic Update,” is syndicated by more than 100 radio stations and goes to 55 million TV receivers via Free Speech TV. His three recent books with Democracy at Work are The Sickness Is the System: When Capitalism Fails to Save Us From Pandemics or Itself, Understanding Socialism, and Understanding Marxism, the latter of which is now available in a newly released 2021 hardcover edition with a new introduction by the author. Produced by Economy for All (https://independentmediainstitute.org/economy-for-all/), a project of the Independent Media Institute.
Artificial Intelligence (AI) presents a profit opportunity for capitalists, but it presents a crucial choice for the working class. Because the working class is the majority, that crucial choice confronts society as a whole. It is the same profit opportunity/social choice that was presented by the introduction of robotics, computers, and indeed by most technological advances throughout capitalism’s history. In capitalism, employers decide when, where, and how to install new technologies; employees do not. Employers’ decisions are driven chiefly by whether and how new technologies affect their profits.
If new technologies enable employers to profitably replace paid workers with machines, they will implement the change. Employers have little or no responsibility to the displaced workers, their families, neighborhoods, communities, or governments for the many consequences of jobs lost. If the cost to society of joblessness is 100 whereas the gain to employers’ profits is 50, the new technology is implemented. Because the employers’ gain governs the decision, the new technology is introduced, no matter how small that gain is relative to society’s loss. That is how capitalism has always functioned.
A simple arithmetic example can illustrate the key point. Suppose AI doubles some employees’ productivity. During the same work time, they produce twice as much as before the use of AI. Employers who use AI will then fire half of their employees. Such employers will then receive the same output from the remaining 50 percent of their employees as before the introduction of AI. To keep our example simple, let’s assume those employers then sell that same output for the same price as before. Their resulting revenues will then likewise be the same. The use of AI will save the employers 50 percent of their former total wage bills (less the cost of implementing AI) and those savings will be kept by employers as added profit for them. That added profit was an effective incentive for the employer to implement AI.
If we imagine for a moment that the employees had the power that capitalism confers exclusively on employers, they would choose to use AI in an altogether different way. They would use AI, fire no one, but instead cut all employees’ working days by 50 percent while keeping their wages the same. Once again keeping our example simple, this would result in the same output as before the use of AI, and the same price for the goods or services and revenue inflow would follow. The profit margin would remain the same after the use of AI as before (minus the cost of implementing the technology). The 50 percent of employees’ previous workdays that are now available for their leisure would be the benefit they accrue. That leisure—freedom from work—is their incentive to use AI differently from how employers did.
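Wolff's two scenarios can be made concrete with a toy calculation. This is a minimal sketch of his arithmetic, using illustrative numbers that are not in the original (100 workers, a wage of 1 per period, revenue of 150, an AI cost of 10):

```python
# Toy illustration of Wolff's example: AI doubles productivity, and the gain
# can be taken either as profit (employers' use) or as leisure (the
# hypothetical employees' use). All numbers are invented for illustration.

workers = 100       # employees before AI
wage = 1.0          # wage per worker per period
revenue = 150.0     # revenue per period (output and price assumed unchanged)
ai_cost = 10.0      # cost of implementing AI, per period

profit_before = revenue - workers * wage

# Employers' use: fire half the workforce; output and revenue stay the same,
# so the saved wages (less the AI cost) become added profit.
workers_after = workers // 2
profit_employer = revenue - workers_after * wage - ai_cost

# Employees' use: keep everyone, halve the workday, keep wages the same.
# Output, revenue, and the wage bill are unchanged, so profit only falls
# by the AI cost; the gain is taken as leisure instead.
profit_coop = revenue - workers * wage - ai_cost
workday_cut = 0.5   # half of each previous workday freed for leisure

print(f"profit before AI: {profit_before}")
print(f"employers' use:   profit {profit_employer}, jobs lost {workers - workers_after}")
print(f"employees' use:   profit {profit_coop}, jobs lost 0, workday cut {workday_cut:.0%}")
```

The same productivity gain funds either the 40-point jump in profit or the halved workday; the technology itself does not pick between them.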
One way of using AI yields added profits for a few, while the other way yields added leisure/freedom to many. Capitalism rewards and thus encourages the employers’ way. Democracy points the other way. The technology itself is ambivalent. It can be used either way.
Thus, it is simply false to write or say—as so many do these days—that AI threatens millions of jobs or jobholders. Technology is not doing that. Rather the capitalist system organizes enterprises into employers versus employees and thereby uses technological progress to increase profit, not employees’ free time.
Throughout history, enthusiasts celebrated most major technological advances because of their “labor-saving” qualities. Introducing new technologies would deliver less work, less drudgery, and less demeaning labor. The implication was that “we”—all people—would benefit. Of course, capitalists’ added profits from technical advances no doubt brought them more leisure. However, the added leisure new technologies made possible for the employee majority was mostly denied to them. Capitalism—the profit-driven system—caused that denial.
Today, we face the same old capitalist story. The use of AI can ensure much more leisure for the working class, but capitalism instead subordinates AI to profiteering. Politicians shed crocodile tears over the scary vista of jobs lost to AI. Pundits exchange estimates of how many millions of jobs will be lost if AI is adopted. Gullible liberals invent new government programs aimed to lessen or soften AI’s impact on employment. Once again, the unspoken agreement is not to question whether and how the problem is capitalism nor to pursue the possibility of system change as that problem’s solution.
In an economy based on worker coops, employees would collectively be their own employers. Capitalism’s core structure of enterprises—the employer versus employee system—would no longer prevail. Implementing technology would then be a collective decision democratically arrived at. With the absence of capitalism’s employer versus employee division, the decision about when, where, and how to use AI, for example, would become the task and responsibility of the employees as a collective whole. They might consider profitability of the enterprise among their goals for using AI, but they would certainly also consider the gain in leisure that this makes possible. Worker coops make decisions that differ from those of capitalist enterprises. Different economic systems affect and shape the societies in which they operate differently.
Across capitalism’s history, employers and their ideologues learned how best to advocate for technological changes that could enhance profits. They celebrated those changes as breakthroughs in human ingenuity deserving everyone’s support. Individuals who suffered due to these technological advances were dismissed as “the price to pay for social progress.” If those who suffered fought back, they were denounced for what was seen as anti-social behavior and were often criminalized.
As with previous technological breakthroughs, AI places on society’s agenda both new issues and old contentious ones. AI’s importance is NOT limited to productivity gains it achieves and job losses it threatens. AI also challenges—yet again—the social decision to preserve the employer-employee division as the basic organization of enterprises. In capitalism’s past, only employers made the decisions whose results employees had to live with and accept. Maybe with AI, employees will demand to make those decisions via a system change beyond capitalism toward a worker-coop based alternative.
Discussing the UAW strike with a friend, we agreed that the capital/labor ratio seems to be the core of the fight. It has been grossly out of balance for decades and is perhaps mean reverting. Perhaps. AI coming to a head at the same time is interesting. Barring a severe economic dislocation that returns leverage to capital, I have presumed that UBI, with corporate profits and wealth taxed to pay for it (the AI profit winners), will be the answer. We know this has not been possible politically recently, but the political calculus may change with enough growth in the standing army of the un/under-employed due to AI.
Why would Universal Basic Income (UBI) be the outcome, rather than shorter work weeks for all? I think most workers would prefer a shorter work week rather than being on the dole (which is how UBI is perceived by many).
Companies are not going to pay employees 40 hours for 32 hours’ worth of work. But that income will need to be replaced. Not to mention the outright headcount reduction due to AI. Either corporations share a greater percentage of profits or the government taxes and redistributes.
Is it wrong to suggest that companies do not pay employees 40 hours pay for 40 hours work or 32 hours pay for 32 hours work? Neither will corporations share a greater percentage of profits nor will the government tax corporate profits and redistribute benefits to the public. Where have you been living for the last fifty+ years? Is there some plan ‘B’? I doubt a Universal Basic Income (UBI) could address the issues any more than a tax on corporate profits … under present laws … might.
By your definition, half of America is “on the dole” through Social Security, SSI and pensions. Except, we earned that. UBI has no stigma. Let’s not give it one with careless words.
As the late Mark Fisher was fond of saying–“It’s easier to imagine the end of the world than the end of capitalism”.
Observing affairs since his death, I’ve come to understand why he decided to commit suicide.
It appears to me that AI will demolish jobs with paper in and paper out. Trade jobs, where there is much material handling by people, will continue.
The jobs with paper in and out seem to be the middle-class jobs, and the jobs with much material handling the trade jobs.
The middle class is greatly reduced, and the trades less so.
The real question is what will happen to the money flow. It appears to be headed for a substantial collapse.
The presumption is that the upper class will also experience a financial collapse, because their income is supported by others’ spending.
The US’ success was driven by spreading the wealth, as opposed to the UK, where the wealth was much more concentrated, and certainly not in the working poor, as the rise of the UK Labour Party illustrated in the 20th century.
My one consolation is that, reportedly, management consultants may be some of the first to be hit. AI is supposedly already very good at drafting long screeds of b******t.
I did see a video from an experimental setup where a human operator would demonstrate to an “AI” how to do some physical tasks using a pair of robotic arms. Then supposedly the AI was left to train on the demonstration overnight before being given control of the arms directly.
Sadly the video was mostly talking heads with a few glory shots of the arms pouring, stirring and flipping what looked like very well done pancakes. So all in all it was hard to evaluate their claims based on what was shown.
That sounds more like standard adaptive machine learning, not AI as it is applied nowadays. ML optimises within prescribed parameters, while AI makes up stuff based on a large background of examples.
I wonder how self correcting AI can be. Can it be programmed to respond to its own inefficiencies. How will it “innovate?” I imagine a comedy nightmare where AI is impervious to its own failure to be flexible and respond in time. So far that has been my limited experience with telephone AI. It’s hopeless. When something mechanical is called intelligent as opposed to efficient it misleads. I’d be more comfortable with the term “artificial efficiency.” And to pin it down even more it would be good to include the entire chain of extraction involved in manufacturing artificial efficiency, like how costly to the environment and society is it really? How frequently does it need to be replaced, etc. If those costs are high then it is not productive. We should double down on meta and do an AI analysis of AI. I remember Michio Kaku’s enthusiasm about quantum AI – he speculated that quantum computing could analyze all the consequences of some decision so that we could decide not to do something. I like that idea. I think that “productivity” and “profit” are archaic terms that also need to be adjusted. It would be kinda bad to go bravely into our new AI world using our “barbaric relic” ideas of prosperity in a world of abundance when it looks more like extinction.
It’s really the proponents of this saying, “Dammit! If you won’t believe in TINA, we’ll convince ourselves that the machines will!”
My personal experience is that the original generation is often well executed, and any subsequent changes reduce the quality of the work, just as making a photocopy of a photocopy reduces the clarity of the image and inserts artifacts. AI seems to lose track of where it’s at and what it’s trying to achieve in the moment. It seems, oftentimes, more lucky than smart.
Guess who shows some agreement with the professor:
https://www.theatlantic.com/ideas/archive/2023/09/benjamin-netanyahu-elon-musk-ai-pessimism/675406/
“…This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead…”
I read the questions B.N. posed about the algorithms and had a hell-freezes-over moment agreeing with him. Then the best propaganda the Atlantic can muster is that he’s just a “pessimist” only concerned with his own power.
The writer and his ilk refuse to acknowledge the hard power being sought by SillyCon Valley and their finance bros.
It ends with:
“…But Netanyahu believes that all of these technological advances are only as good as the humans who operate them—and humans, he knows, don’t have the best track record…”
He knows? The writer should know it too.
In Professor Wolff’s “simple arithmetic example” he leaves out that it is the capitalists that are paying for and taking the risks in investing in AI technology and, under the rules of the capitalist system, they are allowed to reap the benefits of that investment. So under the system’s rules, it is not possible for workers to realize 100% of the benefit of increased productivity.
Don’t get me wrong: I’m not a libertarian capitalist. And the professor is correct in pointing out that there will be social costs from AI that will be paid by workers. What we want is an economic system that more equitably distributes the benefits of AI investment.
Also, a company that can produce 2x as much for the same amount of input will not fire half its employees. This is simply false.
If they’re small enough that they don’t affect the price in their market, they’ll produce twice as much and make twice the money.
If they’re large enough then they’ll produce more. The price of the good will go down, but this will increase demand. Greater economies of scale may further increase production efficiency. So several effects will interact. It may lead to lower employment but it may lead to higher employment, or substitution into alternate roles.
I think the only case where the author’s example makes any sense is if the quantity of product demanded is constant and perfectly inelastic.
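The commenter's elasticity point can be sketched numerically. This is a minimal sketch assuming a hypothetical constant-elasticity demand curve and a labor requirement that halves with AI; the demand function, scale, prices, and elasticities are all invented for illustration:

```python
# Sketch of the elasticity argument: whether AI cuts employment depends on
# how demand responds when the firm passes productivity savings into a
# lower price. Constant-elasticity demand: Q = scale * P**(-e).
# All numbers are hypothetical.

def employment(price, elasticity, productivity, scale=1000.0):
    quantity = scale * price ** (-elasticity)  # quantity demanded at this price
    return quantity / productivity             # workers needed to produce it

# Before AI: productivity 1 at a price of 1.0 (elasticity is irrelevant
# at P = 1, since 1**(-e) == 1 for any e).
base = employment(1.0, elasticity=1.0, productivity=1.0)

# After AI: each worker produces twice as much; suppose the price falls 30%.
inelastic = employment(0.7, elasticity=0.5, productivity=2.0)  # demand barely responds
elastic   = employment(0.7, elasticity=2.5, productivity=2.0)  # demand responds strongly

print(f"baseline employment: {base:.0f}")
print(f"inelastic demand:    {inelastic:.0f}  (employment falls)")
print(f"elastic demand:      {elastic:.0f}  (employment can rise)")
```

With inelastic demand the fired-half-the-workforce outcome roughly holds; with sufficiently elastic demand, cheaper output expands sales enough that employment can rise, which is the ambiguity the comment describes.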
I suppose everything works for the best in all possible Neoliberal Corporate Cartels. Profits reward risk, effort, and quality, and that is why Microsoft, Google, Apple, et al. are so much a part of the economic landscape. So work hard, keep your nose clean, and someday you too may be rich and famous … and if I can just believe enough, I can grow wings and fly.
How much of that technology was developed using public resources? How much of the operational systems rely on public resources? What are the public costs of operation, e.g. emissions, garbage, etc.?
It seems that in general most of the risk of developing technology is public, while the profit is private. Funny how the public components of the cost:benefit calculation are rarely acknowledged.
I am extremely skeptical that present versions of AI truly replace any employees. However, I can grasp — all too easily — how our Corporate overlords might embrace AI — such as it is.
I believe in the potentials of AI … as much as I believe in how much present AI has been over-sold, over-hyped, and misrepresented. My real fear is that we might become so jaded we miss the important discoveries that future AI might [I greatly hope] provide. I believe AI could become an important augment to the human mind. That of course depends on being able to understand what AI can and does do with data. I like to believe this is not a bridge too far.
I hold high hopes for future versions of the craft, although I confess vast discomfort with the tendency I feel to push AI ‘solutions’ before AI has found ‘solutions’ to deeper issues of cognition and learning, and while the greatest mystery, consciousness, remains largely untouched. I believe this last mystery underlies the dangers AI supposedly poses at the time of singularity and transcendence.
A lot of people here are picturing an AI as a giant brain a la the great Spencer Tracy/Katharine Hepburn comedy from the 1950s, “Desk Set.”
In fact, AI/ML has been used for at least the last 20 years in voice recognition and now image recognition. In the former, AI has already replaced thousands of call center jobs, perhaps many more. With image recognition coming online, the next jobs to be replaced will be security guards, traffic monitors, and eventually package handlers in shipping departments. Most assembly lines don’t rely on people recognizing items but rather have limited degrees of freedom for where an item might be and how it’s oriented.
BTW, people may have noticed that the Captcha human-challenge tests no longer present a mix of letters for you to type in but have switched to presenting a dozen photos and asking you to identify those that have, for example, traffic lights in them. What you are doing is providing classification data for some AI vision recognition system. All machine learning systems (including AI that relies on ML) require these training sets. Pretty sneaky of the Captcha folks; I actually wasn’t aware of the ulterior purpose behind the switch until I read of it recently.
RE: “Yves here. Richard Wolff describes why AI does not have to wind up displacing workers; that outcome is the result of capitalist profit imperatives”
Wolff apparently has not figured out yet that it isn’t a matter of a hypothetical “not have to” but of what reality IS.
So here is what IS… the actual reality with AI:
The FAKE narrative (i.e., propaganda) nearly everyone, including “alternative news” sources, has been spreading is that the TRULY big threat is that AI just creates utter chaos in society and that it might achieve control over humans. Therefore it must be regulated.
The TRUE narrative (ie empirical reality) virtually no one talks about or spreads is that the TRULY big threat with AI is that AI allows the governing psychopaths-in-power to materialize their ultimate wet dream to control and enslave everyone and everything on the whole planet, a process that’s long been ongoing in front of everyone’s “awake” (=sleeping, dumb) nose …. The 2 Married Pink Elephants In The Historical Room: Humans’ Invisibilized Soullessness Spectrum Disorder
Like with every criminal, inhumane, self-concerned agenda of theirs, the psychopaths-in-control sell and propagandize AI to the timelessly foolish (=”awake”) public with total lies, such as AI being the benign means to connect, unite, transform, benefit, and save humanity.
The official narrative is… “trust official science” and “trust the authorities” but as with these and all other “official narratives” they want you to trust and believe …
“We’ll know our Disinformation Program is complete when everything the American public believes is false.” —William Casey, a former CIA director=a leading psychopathic criminal of the genocidal US regime
The proof is in the pudding… ask yourself, “how is the hacking of the planet going so far? Has it increased or crushed personal freedom?”
Since many of the same criminal establishment “expert” psychopaths, such as Musk and Harari (Harari is the psychopath affiliated with Schwab’s WEF) or Geoffrey Hinton, the “godfather of AI,” who have for many years helped develop, promote, and invest in AI have now suddenly, supposedly, had a change of heart (they grew a conscience overnight) and warn the public about AI, it’s clear their current call for a temporary AI ban and/or its regulation is just a manipulative tactic to misdirect and deceive the public, once again.
This scheme is part of the Hegelian Dialectic in action: problem-reaction-solution.
This “warning about AI” campaign is meant to raise public fear/hype panic about an alleged big “PROBLEM” (these psychopaths helped to create in the first place!) so the public demands (REACTION) the governments regulate and control this technology =they provide the “SOLUTION’ FOR THEIR OWN INTERESTS AND AGENDAS… because… all governments are owned and controlled by the leading psychopaths-in-power (see first linked url above).
What a convenient self-serving trickery … of the ever foolish public.
“AI responds according to the “rules” created by the programmers who are in turn owned by the people who pay their salaries. This is precisely why Globalists want an AI controlled society- rules for serfs, exceptions for the aristocracy.” —Unknown
“Almost all AI systems today learn when and what their human designers or users want.” —Ali Minai, Ph.D., American Professor of Computer Science, 2023
“Who masters those technologies [=artificial intelligence (AI), chatbots, and digital identities] —in some way— will be the master of the world.” — Klaus Schwab, at the World Government Summit in Dubai, 2023
“COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance.” — Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum
“The whole idea that humans have this soul, or spirit, or free will … that’s over.” — Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum
I hate to tell you, but your hyperventilating comment is music to the ears of tech giants.
They are seeking to regulate AI (did you manage to miss that many of them are sounding alarms?) because AI is not protectable from a commercial standpoint. Anyone with not too much computing power could use AI to simplify parts of their operation, like a law firm generating cookie-cutter filings or routine letters to clients.
They want AI regulations to preserve THEIR profits, not for your good.