Yves here. While it’s good to see discussion of the employment effects of greater use of AI and what might be done to mitigate them, I am bothered by Richard Murphy making unironic use of a chat with ChatGPT to get proposals. In a variant on Lenin’s line about capitalists selling the rope with which they will be hanged, any worker, but above all any knowledge worker, who interacts with AI is helping it weave the rope with which it will hang them. Why are they so busily training their replacements? If you have any interest in self-preservation, you should stay well away from AI interactions unless you can find a way to corrupt their training sets.
It is unsurprising that the bot served up hackneyed ideas like “Let them eat training.”
By Richard Murphy, a chartered accountant and a political economist. He has been described by the Guardian newspaper as an “anti-poverty campaigner and tax expert”. He is Professor of Practice in International Political Economy at City University, London and Director of Tax Research UK. He is a non-executive director of Cambridge Econometrics. He is a member of the Progressive Economy Forum. Originally published at Tax Research UK
There is much discussion in the media this morning of the possibility that AI will cost tens, if not hundreds, of millions of jobs in the world economy, with the FT carrying articles on the issue today, none of which adds much to the debate beyond sending out warning signals.
The obvious question to ask is, will AI create unemployment and what can be done about it? So I asked ChatGPT, an AI tool, and this was its best response:
AI has the potential to create unemployment in certain industries and job roles, particularly those that involve repetitive tasks or can be easily automated. However, it’s important to note that AI can also create new job opportunities and help individuals to be more productive and efficient in their work.
To mitigate the potential negative effects of AI on employment, there are several things that can be done:
- Invest in education and retraining programs: Governments, educational institutions, and businesses can invest in education and retraining programs to help workers develop the skills needed for new jobs that are emerging as a result of AI.
- Encourage entrepreneurship: Encouraging and supporting entrepreneurship can create new job opportunities and help individuals to start their own businesses.
- Implement policies to support workers: Governments can implement policies such as unemployment insurance, healthcare, and retirement benefits to support workers who may be displaced by AI.
- Foster a culture of lifelong learning: Encouraging and supporting lifelong learning can help workers to adapt to new technologies and job roles as they emerge.
Overall, it’s important to take a proactive approach to address the potential impact of AI on employment, and to focus on creating new job opportunities and supporting workers through education and retraining programs.
As far as it goes, that’s not bad. A good A level answer, maybe.
But the fact is, it does not go nearly far enough because what this answer does not do is address the fundamental issues that flow from the suggestion it makes. That suggestion is that people engaged in processes where employment is replaced by AI technology should be retrained to work in sectors where there could be growth. In another iteration of the answer (I asked several times) it was suggested that new employment might come in renewable energy and, most especially, in healthcare and education where, it said, there is ‘likely to be significant growth in coming years’.
That may well be true, but the missing piece in all this is how this transformation is going to be paid for alongside all the other changes that are going to necessarily happen over the years to come.
We can, of course, have significant retraining programmes. And we can employ more people in renewables, healthcare and education, but all of these are likely to require significant government support and right now we have a prevailing attitude that, firstly, there is no more money and, secondly, there must be no more tax.
Neither of those attitudes is sustainable in the face of AI. What AI does is increase the return to capital in the economy by reducing the return to labour. That is the whole basis of its appeal. I suggest it is pointless for anyone to argue otherwise: this is what will happen in the absence of intervention from governments.
Historically, when such transitions took place, three things happened. First, there were major economic downturns (this happened almost continually in the last three decades of the 19th century). Second, there was poverty; I would note that the same era predates the creation of trade unions, which played a vital role in addressing it. And third, new sectors did develop, eventually, to employ the displaced labour.
The first two are real possibilities with the widespread use of AI. After all, unemployed people cannot spend much: the risk of real economic decline, even as the return to capital increases, is significant.
What worries me most, though, is that I see almost no chance of new private sector activity that seeks to employ those people being made redundant by AI. It is true that we do need more education, healthcare and social care, as well as better pensions. That is true irrespective of AI. AI makes all of them possible by making the necessary resources available, but only if government can employ the people involved.
Of course, a government like that in the UK can create the funds required to do this. We know that is possible. But to control inflation it will also need to increase tax, and there is only one obvious sector on which that additional tax should be charged – which is on the owners of the benefits arising from AI. That, inevitably, means that taxes on capital will have to increase very significantly.
The question is, will that be possible? If it is, the transition to AI could be beneficial. If it is not – because tax havens hold out and resist this change in a way that makes it impossible for this shift in reward towards capital to be properly addressed and corrected through the tax system – then we could be in deep trouble.
I think this is the first time I have addressed this issue. I see it as a new frontier in tax justice. Either we work out how to tax capital effectively (and not by taxing machines, as Bill Gates once suggested, but by taxing the ownership of capital itself and the rewards flowing from it), or AI creates massive economic risk.
I suspect this is going to be a recurring theme here in the future.
This article goes well with Ted Chiang’s recent New Yorker piece, “Will A.I. Become the New McKinsey?”.
Yves S.: If you have any interest in self-preservation, you should stay well away from AI interactions unless you can find a way to corrupt their training sets.
Think that through. Those most at risk, for instance, are coders and developers —
https://levelup.gitconnected.com/chatgpt-will-replace-programmers-within-10-years-91e5b3bd3676
Are they going to write bad code that doesn’t work? If so, they’ll be fired much sooner. How about bad code that somehow works, but does so only temporarily and with built-in backdoor malware, etc.? Maybe more difficult than you think, and exactly the sort of thing AI can learn to detect.
On the upside regarding AI, four points:
[1] ChatGPT4 is garbage in/garbage out. On its site, the very first demo example to show what it can do is “Explain quantum computing in simple terms.” Its essential answer is: What makes qubits special is that they can be both 0 and 1 simultaneously. This is called superposition. It’s like having a light switch that can be both on and off at the same time.
This is the simplified ‘explanation’ of QC that journalists give, which is where ChatGPT got it from. Besides being no explanation at all, it’s so simplified that it’s essentially garbage — just wrong.
How QC actually works is that down on the level of fundamental reality — of subatomic particles like electrons and photons — different rules of probability obtain, involving numbers—amplitudes—that may be positive, negative, or even complex (involving the square root of -1, forex). If a photon hitting a certain spot on a screen, say, could happen one way with positive amplitude and another with negative amplitude, the two possibilities can even cancel each other out, so the total amplitude is zero and the event never happens at all.
This interaction of amplitudes, called “quantum interference,” is what QC exploits. In a quantum computer, a qubit is — yes — a bit with some amplitude for being 0 and some other amplitude for being 1. If you then “measure” the qubit, you force it to decide, randomly, whether to “collapse” out of superposition to 0 or 1. But till that qubit is measured, it’s in superposition and its amplitudes undergo quantum interference, producing effects derived from both amplitudes but very much not explained by the qubit’s being simply 0 or 1.
If your quantum computer then has, say, a thousand qubits, and they interact (forming entangled states), there’s an amplitude for every possible configuration of all thousand bits. That’s 2 to the 1,000th power amplitudes, far more than the number of atoms in the observable universe, and it’s this that gives QC its enormous potential power.
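For what it’s worth, the interference described above can be reproduced in a few lines of ordinary linear algebra. Here is a minimal sketch (plain numpy, purely illustrative): applying a Hadamard gate twice makes the two paths to the “1” outcome cancel, which the “light switch that is both on and off” picture cannot capture.

```python
# Minimal sketch of quantum interference with state vectors (illustrative only).
import numpy as np

ket0 = np.array([1.0, 0.0])            # |0>: amplitude 1 for outcome 0, 0 for outcome 1
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

superposed = H @ ket0                  # equal amplitudes for 0 and 1
print(np.abs(superposed) ** 2)         # [0.5 0.5] -- measurement probabilities

# Applying H again: the positive and negative amplitudes leading to "1"
# cancel, and the qubit returns to 0 with certainty. That cancellation is
# interference, and it is exactly what the light-switch picture leaves out.
print(np.abs(H @ superposed) ** 2)     # [1. 0.]
```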
So the very first demo answer ChatGPT4 offers is effectively wrong. Garbage in, garbage out.
[2] Likewise, instruct ChatGPT4 to ‘say something original and meaningful.’ All it can produce is meaningless, unoriginal boilerplate. Try it and see.
[3] Conversely, however, turn a large language model (LLM) loose directly on a real genomic sequence (as opposed to human disinformation) and it can see things humans probably never would. Most inherited characteristics are multigenic, and one gene’s action can be modified by another gene two thousand genes downstream, and then by yet another. This is difficult for our human brains to figure out.

An LLM, however, can inspect an entire genome to catch and model individual mutations at the nucleotide scale, and to account for how various traits have emerged. LLMs can also serve as generative models for genomics. Forex —
GenSLMs: Genome-scale language models reveal SARS-CoV-2 evolutionary dynamics
https://www.biorxiv.org/content/10.1101/2022.10.10.511571v2
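To make concrete what feeding a genome to a language model involves at the input stage: the GenSLMs work above tokenizes sequences at the codon (3-nucleotide) level rather than as words. A minimal, hypothetical sketch of that preprocessing step (the function name and toy fragment are made up for illustration):

```python
# Hypothetical sketch: splitting a nucleotide sequence into codon tokens,
# the kind of token stream a genome-scale language model consumes.
def tokenize_codons(seq: str) -> list[str]:
    """Split a nucleotide sequence into non-overlapping 3-mers (codons)."""
    seq = seq.upper()
    usable = len(seq) - len(seq) % 3      # drop any trailing partial codon
    return [seq[i:i + 3] for i in range(0, usable, 3)]

fragment = "ATGGTTTGTTTAGAGAAC"           # toy fragment, not real viral data
print(tokenize_codons(fragment))          # ['ATG', 'GTT', 'TGT', 'TTA', 'GAG', 'AAC']
```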
[4] As regards the future in the big global picture, societies where elites refuse to row back on ‘free markets’ — and especially unrestrained ‘free markets’ in their state-enforced neoliberal form — in the face of AI are going to come asunder far sooner than societies where the state will more willingly enforce socialist measures for the general good. AI is likely to enforce a kind of social Darwinism on a global, interstate level over the next few decades, therefore.
Bad for the US as currently constituted, maybe. But maybe the US as currently constituted needs to go into the garbage can of history.
If you ask Marx (and the others after), this is even precisely how socialism happens: society and its economy get to the point where capitalism is no longer able to fulfill its role of organizing production effectively and is forced into a new mode against its will, as previous modes of production were forced in the past.
If you ask Marx (and the others after), this is even precisely how socialism happens.
Indeed.
Perhaps “corrupt the trained-on data sets” isn’t addressed to coders etc. Perhaps it is addressed to every single person who uses a computer without being paid to use it: bloggers and commenters, say, randomising their spelling of words and introducing strange grammar here and there to corrupt the AI-brains training on it, while still being understood by human writers and readers.
Learning to switch from English to Anguish Languish and back again at random intervals for random stretches of time. Or using combinations of words and emojis for punctuation marks . . . . like instead of ending a sentence with a period, end it with the word “full” followed by a picture of a stop sign. If there is a stop-sign emoji, use that. Or if one needs to provide a link to an image of a stop sign, one could do that.
Using that method, a sentence might finish with Full http://cliparts.co/clipart/2376951
(Full http://cliparts.co/clipart/2376951 being read by the human reader as “full stop”, meaning the little black dot at the end of a sentence which we call a “period”.)
If a hundred million people, or better a billion people, all did a little of that sometimes in their personal computer or smartphone usage, and the AIs trained themselves on that, that would fix the AIs’ wagon. Or maybe break it.
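For the fun of it, here is a whimsical sketch of the scheme in code: random letter swaps that human readers shrug off, plus the proposed “full 🛑” sentence ending. Everything here (function name, rate, rules) is invented for illustration.

```python
# Whimsical, illustrative sketch of the text-corruption idea above.
import random

def anguish(text: str, rate: float = 0.15) -> str:
    """Randomly swap adjacent letters inside some words, and replace each
    period with the word 'full' plus a stop-sign emoji."""
    words = []
    for w in text.split():
        if len(w) > 3 and random.random() < rate:
            i = random.randrange(1, len(w) - 2)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]   # swap two inner letters
        words.append(w)
    return " ".join(words).replace(".", " full 🛑")

random.seed(0)
print(anguish("Corrupt the training sets. Humans can still read this."))
```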
I think Caitlin Johnstone had a great take too: “Tech Would Be Fine If We Weren’t Ruled By Monsters”.
Or as I’ve been putting it, AI is simply a force multiplier for our corporations and government. Think of the victories that our defense department will have on the battlefield by leveraging AI and just translate that to the victory that corporations and government will have when they are unleashed and fully realized through the benefits of AI.
Thinking more broadly about tech, I’ve been thinking about it from a gift giving perspective, a la Marcel Mauss’s “The Gift”. I truly believe that tech entrepreneurs perceive this as their gift to us. We just need to conceive of a gift for them in return.
P.S. highly recommend Ted Chiang’s (the author’s) short stories collection, “Stories of Your Life and Others”
The Neoliberal version of AI is: “job loss” is your problem, not ours. It’s unlikely the donor class will allow increased taxes on Capital. More likely is the public debt will increase to $100 trillion.
“More likely is the public debt will increase to $100 trillion”.
Yes. And there is no reason that won’t work. Not that I’m a fan of going on welfare.
The myth of Neoliberal power makes little distinction between $30 trillion and $100 trillion. Unfortunately for society Neoliberalism is NOT a Great Noble Lie. It’s just a lie.
So this guy asked ChatGPT for solutions to the mass unemployment that will result from ChatGPT’s uptake, and it suggested the following:
1. Invest in education and retraining programs
2. Encourage entrepreneurship
3. Implement policies to support workers
4. Foster a culture of lifelong learning
Yeah, this sounds identical to what was suggested a generation ago for workers who had been replaced by computers or whose workplaces had been shipped overseas. The shorthand for this was “learn to code.” What happened with government support for those workers was that lucrative training programs were occasionally set up, which meant little unless those workers were prepared to move to places like California. And that promised support? From what I read at the time, it never happened and those workers were thrown to the wolves. And it was bipartisan. I would expect the same to happen here, only “learn to code” will not be an option, as an AI will be doing that.
That “encourage entrepreneurship” answer really chaps my nether regions. It’s just a thoughtless answer that sounds good because capitalism. But what is the end result if everyone is an entrepreneur?!?
The old saw is you can’t have a viable economy where everybody just cuts each other’s hair, for example. The techbros seem intent on creating an economy where we just sell data back and forth for advertising purposes. “I’ll sell you my data if you sell me yours” won’t create a viable economy either.
Yeah. I bet that the works of Karl Marx do not feature in any US AI’s database. It would possibly give rise to some interesting answers if they did.
The techbros seem intent on creating an economy where we just sell data back and forth for advertising purposes.
Untenable on the face of it, but an idea that reverse-alienated genius billionaires and their wannabes can easily get their heads around. AI provides all symbolic value, from coding to entertainment to advertising to education to design. Real-world commodities like food, housing and medicine probably require some combination of machines and debt/wage slaves, like now. A mechanized police state can augment or even replace the wetware with convicts as required.
The only thing us plebes have left to offer is a record of our habits — economically useless unless we’re plugged into some kind of consumer matrix. Ergo basic income, hello Andy Yang and (finally) the dis-utopia we always deserved! Seems like a colossal waste though, when you could just hunker down, corner all the resources and leave us to die instead.
100%. Lambert really nailed it with his characterization of ChatGPT etc. as BS generators. It’s really that simple. They distill prevailing prejudices and social/commercial forces and make them appear—if you’re not looking too closely—objective.
I have a career in digital e-commerce. Half of my success is based on regulation being 10 years behind commercial realities. If I were 20 years old now, I would place myself in AI. It will take a decade for governments to adjust to the new realities AI defines. This shapes career opportunities for lifetimes.
Career opportunities, or is it really grifting?
The ChatGPT suggestions are identical to those lauded by the PMC in the last four decades. These “masters of the universe”, secure in their “Bullshit Jobs (Graeber, 2018)”, sent real production overseas, without really understanding the implications for Society. And now they have no 155 mm shells, and the barbarians are prevailing; somehow powerpoint bullets did not perform as expected.
Alison Wolf, in “Does Education Matter? Myths About Education and Economic Growth” (Penguin, 2003), chronicled the well-meaning attempts at implementing these very same proposals in the UK, and their abysmal failure. As the great Yogi informs us: “In theory, there is no difference between practice and theory. In practice, there is”.
The ChatGPT suggestions are identical to those lauded by the PMC in the last four decades.
Yes. Garbage in, garbage out.
Unless we are calling multi-million-dollar executives and billion-dollar hedge funders and whatnot “PMC”, I don’t think it was the PMCs who did that. Unless we were to regard Clinton and his Free Trade co-conspirators in and out of the House and the Senate and the White House as PMCs. But then we might be overworking the PMC concept until we wear it all the way out.
“What AI does is increase the return to capital in the economy by reducing the return to labour”
Ahem
In many cases what AI does is unknown.
Consider
GPT-4 Is a Giant Black Box and Its Training Data Remains a Mystery
‘Black box’ ‘AI’ is widely seen as problematic, e.g. by the Royal Society:
Explainable AI: the basics (policy briefing)
“There has, for some time, been growing discussion in research and policy communities about the extent to which individuals developing AI, or subject to an AI-enabled decision, are able to understand how AI works, and why a particular decision was reached. These discussions were brought into sharp relief following adoption of the European General Data Protection Regulation, which prompted debate about whether or not individuals had a ‘right to an explanation’.”
The ‘black box’ in policy and research debates
“Some of today’s AI tools are able to produce highly-accurate results, but are also highly complex if not outright opaque, rendering their workings difficult to interpret. These so-called ‘black box’ models can be too complicated for even expert users to fully understand. As these systems are deployed at scale, researchers and policymakers are questioning whether accuracy at a specific task outweighs other criteria that are important in decision-making.”
An important goal of explainable AI (XAI) is to provide algorithmic accountability. This requires an open approach to building AI systems that goes beyond the closed black-box methodology, where the inputs and outputs are known but the algorithms that are used to arrive at a decision are not available and may not be understood by the people that developed the algorithms and the software that is used to build the black-box AI system.
In practice, “I don’t know how I got the answer” isn’t a satisfactory response to a question about an answer.
NIST has proposed four principles for explainable artificial intelligence (XAI) systems.
NIST defines four principles of explainable artificial intelligence
The principles are:
1. An AI system should supply “evidence, support, or reasoning for each output.”
2. An AI system should provide explanations that its users can understand.
3. An explanation should accurately reflect the process the AI system used to arrive at the output.
4. An AI system should operate only under the conditions it was designed for, and not provide output when it lacks sufficient confidence in the result.
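Of the four, principle 4 (knowledge limits) is the easiest to picture in code. A minimal sketch of what it implies in practice: a wrapper that abstains rather than answering when the model’s confidence is below a threshold. The names, the toy model and the threshold are all invented for illustration.

```python
# Illustrative sketch of NIST principle 4: abstain when confidence is low.
import numpy as np

def predict_with_abstention(predict_proba, x, threshold=0.8):
    """Return (label, confidence), or (None, confidence) if the model is unsure."""
    probs = predict_proba(x)
    label = int(np.argmax(probs))
    confidence = float(probs[label])
    if confidence < threshold:
        return None, confidence          # refuse to answer rather than guess
    return label, confidence

# Toy stand-in for a trained classifier's predict_proba:
toy_model = lambda x: np.array([0.55, 0.45]) if x < 0 else np.array([0.05, 0.95])

print(predict_with_abstention(toy_model, -1.0))  # (None, 0.55) -- abstains
print(predict_with_abstention(toy_model, 2.0))   # (1, 0.95)    -- answers
```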
In addition, we know that algorithms are not necessarily a strong point for the ‘tech sector’
“Microsoft simply used bad algorithms to begin with, and it never bothered to replace them with good algorithms”
The danger with these systems is that flaky half baked tech will be introduced in ways that have a pernicious and destructive impact on people’s rights.
From a corporate perspective this is great. They don’t have to understand how the AI system that they have created works – $$$ – kerching.
The foundations for resolving this have already been laid out by taxpayer-funded organisations like NIST and the Royal Society, which have outlined some key issues and some options for using this technology.
A key point to note is that there is a big difference between attempts at transparent AI, such as ‘explainable AI’ (XAI), and ‘black box’ approaches, which may well be ‘BS-AI’.
It is possible that labor savings from AI use will result in lower prices. Likely? I don’t know, but business is competitive. And AI is a ‘capital light’ innovation, which might reduce the demand for capital and therefore capital returns. Until this year, bond yields went down for almost 30 years. And even today, real interest rates are likely negative.
However, the largest transformation in automation was the automation and productivity gains in cereal grain agriculture, which employed half or more of the population until the 20th century. A bushel of grain during WWI was between $5 and $10, and still is over 100 years later. Capital employed in farming has not produced good returns. A labor saving of between 10x and 100x did not turn farming into Silicon Valley.
One of the examples of using AI is in the handling of health insurance claims. Does anyone in the field enjoy the work? Administrative costs are largely passed through, so I wouldn’t be surprised to see that.
Tax policy is as much about where and how tax is collected as anything. I’d bet on a VAT in the US.
Overmechanizing, overchemicalizing, and understaffing farming have resulted in chronically poisoned farmland (especially with Roundup residues) all over America, a dead zone in the Gulf of Mexico, a reduction of the topsoil depth of Iowa to half its pre-settlement level, etc. So what was the big achievement of automation in farming? In the long run?
It’s simply a major example of automation, and of how the cost reductions might filter down to land, labor and capital. I don’t find it appealing that the poorest Americans aren’t hungry, but are obese.
But people aren’t clamoring to perform the backbreaking labor of traditional farming.
But it’s a good point to note the negative unintended consequences of automation. It’s certainly not unequivocally beneficial.
Two things: (1) US tax policy encourages AI and automation b/c it allows immediate expensing for capital goods (which in turn exempts normal returns from taxation – a point made by E. Cary Brown circa 1948). This benefit is a subsidy; a back-of-envelope sketch of it follows below. GAAP is better on this than US tax b/c GAAP at least tries to amortize the capital expense over the time period that the capital good produces income.
(2) As more and more people are thrown out of work, the pressure for either UBI or a universal jobs scheme will increase.
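To put rough numbers on point (1), here is back-of-envelope arithmetic (all figures assumed for illustration) comparing the present value of the tax deduction under immediate expensing with a five-year straight-line write-off:

```python
# Illustrative arithmetic: PV of tax deductions, expensing vs. amortization.
cost, tax_rate, discount = 1_000_000, 0.21, 0.06   # assumed figures

pv_expensing = cost * tax_rate                     # full deduction in year 0
pv_amortized = sum((cost / 5) * tax_rate / (1 + discount) ** t
                   for t in range(1, 6))           # equal deductions, years 1-5

print(f"immediate expensing: ${pv_expensing:,.0f}")  # $210,000
print(f"5-year amortization: ${pv_amortized:,.0f}")  # ~$176,920; the gap is the subsidy
```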
Finally, it would be nice to learn of a mirabile dictu, the news of late is pretty depressing!