Labor Economist: AI May Bring a Boom in Horrible Jobs

Yves here. Any job that AI could change is not “horrible” by the standards of working in a coal mine or meatpacking factory. However, some workers who thought they had skills that insulated them from displacement, pay cuts, and extreme routinization and loss of substantive input look to be set for a rude awakening.

I don’t mean to sound critical, but the interviewee in this post does not seem to be current on the degree of surveillance and tracking already in place in lower-level jobs. For instance, USPS delivery drivers must scan a barcode at every address on their route, which logs the time they made that stop.

By Lynn Parramore, Senior Research Analyst at the Institute for New Economic Thinking. Originally published at the Institute for New Economic Thinking website

There’s a high likelihood that developments in artificial intelligence (AI) are already affecting your work. ChatGPT attracted 100 million users within two months (it took Netflix 18 years to reach that milestone). As of May 2023, one survey found that 85% of American workers had used AI tools to perform tasks on the job, and a fifth reported “high exposure.” A recent report found a similar share of European workers highly exposed. Many eyes are watching the regulatory framework developing in the European Union and how it will impact workplace use of new technologies.

Some hail the coming of AI as the “end of boring work” and claim it is “empowering” employees to achieve “maximum productivity.” But who does productivity really benefit? What kinds of jobs can we actually expect? Nadia Garbellini of the University of Modena in Italy has interviewed workers concerning their experience of AI. She explains to the Institute for New Economic Thinking why we should be skeptical of claims that AI will improve conditions at work for most people.


Lynn Parramore: How do you think AI will impact workers?

Nadia Garbellini: In 2020, the European Commission categorized critical AI applications based on three “strategic value chains.” These value chains are IIoT (the industrial internet of things); Mobility (AI-enabled transportation and movement); and Smart health (AI for health environments).

All three are capable of strongly impacting workers, but let’s focus on IIoT. In the report I mentioned, the European Commission identified 24 relevant AI applications in the IIoT value chain. The AI capabilities used are: insight generation from complex data; language processing, text and audio analytics; image recognition and video analytics; automated decision-making; and machine learning. These applications, in turn, perform four main functions for companies: R&D; supply chain and production planning; core production; and after-sales support.

Interviews conducted with Italian metalworkers in various industries found that the main consequences of these technologies on working conditions are concerning. Workers experienced a lowering of the knowledge required to perform their assigned tasks: with AI, operating complex machines requires less and less knowledge. For the previous generation of metalworkers, numerical control machines were programmed directly by the worker operating them. Even the detection of minor problems and discrepancies was the responsibility of the operator, who intervened when he deemed it necessary. Today, machines are programmed by computer scientists and engineers who are often not even employees of the company but of the machine suppliers. In other words, workers enjoy an ever-decreasing degree of autonomy and feel deprived of the possibility of using their own intelligence in their daily tasks.

Another issue voiced by the metalworkers was the intensification of the pace of work. Since operating machines requires less effort, it is now common for a single worker to operate more than one machine — maybe 2, or even 3 or 4 — at the same time. After all, workers are told, the machine only has to be started (and in some cases unloaded when the cycle is complete); during the cycle, the worker only has to wait. So in order not to waste these precious minutes, he is given other machines to start in succession. But during the cycle, the worker must pay attention to any problems, jams, or blockages on all the machines he operates. This intensified pace increases fatigue, not only physical but above all mental.

The workers also experienced a loss of control over the production process and thus a weakening of the trade union’s ability to make demands. There are two causes of this loss of control. First of all, cycle times are presented as the objective outcome of machine learning/big data processes (whereas the algorithms are in fact fed by human beings, according to parameters determined by human beings) and therefore as outside the realm of bargaining. Secondly, many corporate functions are relocated outside the production unit, and even outside the company or the country. Workers can’t reconstruct the supply chain in which they are engaged, and so they are unable to organize themselves effectively as their horizon becomes increasingly narrow.

Finally, monitoring was a concern of the workers. The company can track the individual worker and his movements in real time without any need for video surveillance. Each component employed in production is assigned a unique identifier, normally associated with a barcode, which is then associated with the different production stages. A worker operating a machine logs in at the beginning of the shift, so it is always possible to know, for each worker, which machine(s) he has been operating, how many cycles have been started, which components have been employed, and which products have been produced. In other words, for each non-compliant output, it is possible to identify the stage at which the problem arose and the identity of the worker who performed it.
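To make concrete how little infrastructure this kind of traceability requires, here is a minimal sketch (my own illustration, not drawn from the interview; all names, tables, and records are invented) of how routine machine log-ins and component barcodes are enough to tie a defective part back to a specific worker and machine:

```python
# Hypothetical sketch: tracing a defect from ordinary production logs,
# with no video surveillance at all. All identifiers and records are invented.

import pandas as pd

# Machine log-ins: which worker was operating which machine during the shift.
logins = pd.DataFrame([
    {"worker": "W-017", "machine": "CNC-3", "shift_start": "2024-03-01 06:00"},
    {"worker": "W-042", "machine": "CNC-7", "shift_start": "2024-03-01 06:00"},
])

# Production cycles: each cycle records the machine and the component barcode.
cycles = pd.DataFrame([
    {"machine": "CNC-3", "component_id": "BC-90210", "cycle_end": "2024-03-01 07:12"},
    {"machine": "CNC-7", "component_id": "BC-90211", "cycle_end": "2024-03-01 07:15"},
])

# A non-compliant part is flagged downstream by its barcode alone...
defect = "BC-90210"

# ...and a simple join reconstructs who produced it, where, and when.
trace = cycles.merge(logins, on="machine")
print(trace.loc[trace["component_id"] == defect, ["worker", "machine", "cycle_end"]])
```

Nothing beyond the logs that production systems already collect for quality control is needed; the worker-level monitoring the interviewee describes arrives as a by-product of ordinary traceability.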

LP: Are you concerned that AI will take away jobs?

NG: The substitution of capital for labor is one of the main features of capitalism; technological unemployment has always been a concern of labor movements (think of the Luddites in 19th-century England). AI and its industrial applications are also labor-saving, so it will surely be possible to expand production with a less than proportional expansion of employment.

However, AI technologies must also be produced; as brilliantly explained in an article by Josh Dzieza in New York Magazine, training AI is very labor-intensive. I wouldn’t be able to say whether the net effect on jobs is going to be negative or positive. What concerns me, more than the disappearance of jobs, is the quality of the new ones in terms of working conditions, wages, autonomy, alienation, etc. What I fear is a world with millions of underpaid, ignorant, politically naive, isolated workers, stuck at home in front of their computers in both work and leisure time, producing goods and services they cannot afford to buy.

LP: Yet there are enthusiastic predictions about how AI may benefit people in the workplace from economists such as David Autor. What do you say to such predictions?

NG: In a recent interview, David Autor claimed that AI could help rebuild the middle class. He also stated that what he is most worried about is the devaluation of expertise. Two studies have been mentioned in support: one by Erik Brynjolfsson, Danielle Li, and Lindsey Raymond on workers at a software company adopting an earlier version of ChatGPT, and one by Shakked Noy and Whitney Zhang on an experiment with college-educated people doing writing tasks. In both cases, the authors concluded that AI narrows the productivity gap between lower-skilled workers and workers with more skills. But in both cases, the sample is not representative: the authors focus on technologically advanced tertiary sectors, which cannot be taken to stand for the entire labor market.

What we actually found in our interviews is that the introduction of AI technologies is increasingly polarizing the workforce between higher- and lower-skilled workers. This holds not only for factory workers but also for white-collar workers — take the examples of industrial design and CAD, or software production and Scrum/DevOps. It seems to me that this is going to make the middle class smaller and smaller, while the lower class, and possibly the number of people left entirely outside the productive economy, grows larger and larger.

LP: What does history tell us about increased worker productivity and rewards for workers such as higher wages? Who typically benefits from higher productivity?

NG: Productivity is a famously controversial notion in economics. It is often taken, non-technically, to mean something like value added per worker, and treated as an indicator of what Marxian economists describe as the capitalist’s ability to extract relative surplus value. Looking at data on functional income distribution for recent decades, it is very easy to see that productivity increases have regularly been associated with reductions in the wage share. After all, applied research is carried out by, or on behalf of, big business. The goal is to develop technologies that can be incorporated into industrial processes, improving their efficiency, where efficiency means only economic efficiency, that is, the minimization of production costs.
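A compact way to see the arithmetic behind this claim (the notation below is mine, not the interviewee’s): write productivity as value added per worker and the wage share as the wage bill over value added; the share then falls whenever real wages grow more slowly than productivity.

```latex
% Illustrative notation, not taken from the interview.
% Y = real value added, L = employment, w = real wage per worker.
\[
\pi \;=\; \frac{Y}{L} \quad\text{(productivity)}, \qquad
\omega \;=\; \frac{wL}{Y} \;=\; \frac{w}{\pi} \quad\text{(wage share)}.
\]
% Taking growth rates:
\[
\hat{\omega} \;=\; \hat{w} - \hat{\pi},
\]
% so the wage share declines ($\hat{\omega} < 0$) whenever wage growth lags
% productivity growth ($\hat{w} < \hat{\pi}$), which is the pattern the
% functional income distribution data show for recent decades.
```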

Automation changed significantly between the late 1970s and the early 1980s, with the introduction of information and communication technologies (ICT). The objective of R&D investment was to replace human activity by generating a growing amount of information about the production process. Before the ICT revolution, machines were equipped with an unalterable mechanical memory: no real-time re-programming was possible. Then flexible automation was introduced. Technological developments from the 1980s to the present day have allowed companies to push ICT integration through the entire production chain. This was accompanied by advances in organizational science, which at the same time devised, implemented, and refined new business models suited to large multinational companies committed to maximum rationalization of resources.

In other words, these technologies have been developed precisely in order to allow for productivity maximization, and therefore one should not be surprised to find that their application benefits companies.

LP: How can we ensure that AI is not used against workers?

NG: First of all, we should stop thinking of productivity gains as synonymous with technical progress, and vice versa. We are used to thinking that technical progress cannot but be labor-saving. In reality, there could be labor-consuming technical progress, aimed at preventing worker fatigue, saving energy, minimizing pollution, and so on. Of course, this kind of technical progress means that production costs increase, and hence it is not likely to be in the interest of big companies.

The prerequisite for technology not to be used against workers is that research cease to be controlled by the private sector and return fully to public control, directed toward the development of technologies that achieve social and environmental goals. Today we see the opposite trend: research is targeted at producing patents attractive to private capital; even the criteria for funding public universities are based on such assessments.

It would help to give union representatives not only greater rights to information and consultation but also supervisory and control duties and decision-making power in guiding key strategic choices. These issues, of course, are wholly political.
