Yves here. Any job that AI could change is not “horrible” by the standards of working in a coal mine or meatpacking plant. However, some workers who thought their skills insulated them from displacement, pay cuts, extreme routinization, and loss of substantive input look set for a rude awakening.
I don’t mean to sound critical, but the interviewee in this post does not seem to be current on the degree of surveillance and tracking already in place in lower-level jobs. For instance, USPS delivery drivers must scan a bar code at every address on their route, which logs the time they made that stop.
By Lynn Parramore, Senior Research Analyst at the Institute for New Economic Thinking. Originally published at the Institute for New Economic Thinking website
There’s a high likelihood that developments in artificial intelligence (AI) are already affecting your work. ChatGPT attracted 100 million users in just two months (it took Netflix 18 years to reach that milestone). As of May 2023, one survey found that 85% of American workers had used AI tools to perform tasks on the job, and a fifth report “high exposure.” A recent report found a similar share of workers in Europe highly exposed. Many eyes are watching the regulatory framework developing in the European Union and how it will affect workplace use of new technologies.
Some hail the coming of AI as the “end of boring work” and claim it is “empowering” employees to achieve “maximum productivity.” But who does productivity really benefit? What kinds of jobs can we actually expect? Nadia Garbellini of the University of Modena in Italy has interviewed workers concerning their experience of AI. She explains to the Institute for New Economic Thinking why we should be skeptical of claims that AI will improve conditions at work for most people.
Lynn Parramore: How do you think AI will impact workers?
Nadia Garbellini: In 2020, the European Commission categorized critical AI applications based on three “strategic value chains.” These value chains are IIoT (industrial internet of things); Mobility (AI-enabled transportation and movement); and Smart health (AI for health environments).
All three are capable of strongly impacting workers, but let’s focus on IIoT. In the report I mentioned, the European Commission identified 24 relevant AI applications in the IIoT value chain. The AI capabilities used are: insight generation from complex data; language processing, text and audio analytics; image recognition and video analytics; automated decision-making; and machine learning. These applications, in turn, perform four main functions for companies: R&D; supply chain and production planning; core production; and after-sales support.
From interviews conducted with Italian metalworkers from various industries, the report found that the application of these technologies has troubling consequences for working conditions. Workers experienced an impoverishment of the job in the sense of the knowledge required to perform assigned tasks: with AI, operating complex machines requires less and less knowledge. For the previous generation of metalworkers, numerical control machines were programmed directly by the worker operating them. Even the detection of minor problems and discrepancies was the responsibility of the operator, who intervened when he deemed it necessary. Today, machines are programmed by computer scientists and engineers who are often not even employees of the company but of the machine suppliers. In other words, workers enjoy an ever-decreasing degree of autonomy and feel deprived of the possibility of using their own intelligence in their daily tasks.
Another issue voiced by the metalworkers was intensification of the pace of work. Since operating machines requires less effort, it is now common for a single worker to operate more than one machine — maybe two, three, or even four — at the same time. After all, workers are told, the machine only has to be started (and in some cases unloaded when the cycle is complete); during the cycle, the worker only has to wait. So in order not to waste these precious minutes, he is given other machines to start in succession. But during the cycle, the worker must pay attention to any problems, jams, or blockages on all the machines he operates. This intensified pace increases fatigue, not only physical but above all mental.
The workers also experienced a loss of control over the production process and thus a weakening of the trade union’s ability to make demands. There are two causes of this loss of control. First of all, cycle times are presented as the objective outcome of machine learning/big data processes (whereas in fact the algorithms are fed by human beings, according to parameters determined by human beings) and therefore as outside the realm of bargaining. Secondly, many corporate functions are relocated outside the production unit, and even outside the company or the country. Workers can’t reconstruct the supply chain in which they are engaged, and so they are unable to organize themselves effectively as their horizon becomes increasingly narrow.
Finally, monitoring was a concern of the workers. The company can control the individual worker and track his movements in real time without any need for video surveillance. Each component employed in production is assigned a unique identifier, normally associated with a barcode, which is then associated with the different production stages. A worker operating a machine logs in at the beginning of the shift, so it’s always possible to know, for each worker, which machine(s) she has been operating, how many cycles have been started, which components have been employed, and which products have been produced. In other words, for each non-compliant output, it is possible to identify the stage at which the problem arose and the identity of the worker performing that stage.
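To make the traceability concrete, here is a minimal sketch in Python of the kind of barcode-keyed production log described above. The record layout, names, and IDs are all invented for illustration; real manufacturing execution systems are far more elaborate.

```python
# Toy production log: every component barcode is recorded at each stage
# together with the logged-in worker and machine. All IDs are invented.
from datetime import datetime

production_log = [
    # (component_barcode, stage, worker_id, machine_id, timestamp)
    ("C-1001", "milling",  "W-17", "M-03", datetime(2023, 9, 1, 8, 12)),
    ("C-1001", "welding",  "W-22", "M-07", datetime(2023, 9, 1, 9, 40)),
    ("C-1001", "painting", "W-17", "M-11", datetime(2023, 9, 1, 11, 5)),
]

def trace(barcode: str, stage: str):
    """Return (worker, machine, time) for a component at a given stage."""
    for b, s, worker, machine, ts in production_log:
        if b == barcode and s == stage:
            return worker, machine, ts

# A non-compliant weld on component C-1001 points straight at one worker:
print(trace("C-1001", "welding"))  # ('W-22', 'M-07', datetime(2023, 9, 1, 9, 40))
```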
LP: Are you concerned that AI will take away jobs?
NG: The substitution of capital for labor is one of the main features of capitalism; technological unemployment has always been a concern of labor movements (think of the Luddites in 19th-century England). AI and its industrial applications are also labor-saving, so surely it will be possible to expand production with a less-than-proportional expansion of employment.
However, AI technologies must also be produced; as brilliantly explained in an article by Josh Dzieza in New York Magazine, training AI is very labor intensive. I wouldn’t be able to say whether the net effect on jobs will be negative or positive. What concerns me, more than the disappearance of jobs, is the quality of the new ones in terms of working conditions, wages, autonomy, alienation, etc. What I fear is a world with millions of underpaid, ignorant, politically naive, isolated workers, stuck at home in front of their computers in both work and leisure time, producing goods and services they cannot afford to buy.
LP: Yet there are enthusiastic predictions from economists such as David Autor about how AI may benefit people in the workplace. What do you say to such predictions?
NG: In a recent interview, David Autor claimed that AI could help rebuild the middle class. He also stated that what he is mostly worried about is the devaluation of expertise. He cited two studies: one by Erik Brynjolfsson, Danielle Li, and Lindsey Raymond on workers at a software company adopting an early version of ChatGPT, and one by Shakked Noy and Whitney Zhang on an experiment with college-educated people doing writing tasks. In both cases, the authors concluded that AI narrows the productivity gap between lower-skilled and higher-skilled workers. But in both cases the sample is not representative – the authors focus on technologically advanced tertiary sectors, which cannot be taken to stand for the entire labor market.
What we actually found in our interviews is that the introduction of AI technologies is increasingly polarizing the workforce between higher- and lower-skilled workers. This does not hold for factory workers only, but also for white-collar workers — take the examples of industrial design and CAD, or software production and Scrum/DevOps. It seems to me that this is going to make the middle class smaller and smaller, and correspondingly the lower class, and possibly the numbers of people completely out of the productive economy, larger and larger.
LP: What does history tell us about increased worker productivity and rewards for workers such as higher wages? Who typically benefits from higher productivity?
NG: Productivity is a famously controversial notion in economics. It is often taken, non-technically, to mean something like value added per worker, an indicator of what Marxian economists would read as the capitalist’s ability to extract relative surplus value. Looking at data on functional income distribution for recent decades, it is very easy to see that productivity increases have regularly been associated with reductions in the wage share. After all, applied research is carried out by, or on behalf of, big business. The goal is to develop technologies that can be incorporated into industrial processes, improving their efficiency, where efficiency means only economic efficiency, that is, the minimization of production costs.
Automation changed significantly between the late 1970s and the early 1980s, with the introduction of information and communication technologies (ICT). The objective of R&D investments was to replace human activity by generating a growing amount of information about the production process. Before the ICT revolution, machines were equipped with an unalterable mechanical memory: no real-time re-programming was possible. Then flexible automation was introduced. Technological developments from the 1980s to the present day allowed companies to push ICT integration through the entire production chain. This was accompanied by advances in organizational science, which developed, implemented, and refined new business models suitable for large multinational companies committed to maximum rationalization of resources.
In other words, these technologies have been developed precisely in order to allow for productivity maximization, and therefore one should not be surprised to find that their application benefits companies.
LP: How can we ensure that AI is not used against workers?
NG: First of all, we should stop thinking of productivity gains as synonymous with technical progress, and vice versa. We are used to thinking that technical progress cannot but be labor-saving. In reality, there could be labor-consuming technical progress, aimed at reducing worker fatigue, saving energy, minimizing pollution, and so on. Of course, this kind of technical progress means that production costs increase, and hence it is not likely to be in the interest of big companies.
The prerequisite for technology not to be used against workers is that research cease to be controlled by the private sector and return fully to public control, directed toward the development of technologies that achieve social and environmental goals. Today we see the opposite trend: research is targeted at producing patents attractive to private capital; even the criteria for funding public universities are based on such assessments.
It would help to give union representatives not only greater rights to information and consultation but also supervisory and control duties and decision-making power in guiding key strategic choices. These issues, of course, are wholly political.
My thoughts chime with those of Yves: a lot of these “negative phenomena” were well underway already, and AI is just ramping them up to the next level. My best friend is a dean of a department at a university in Tokyo. AI was merely the “straw that broke the camel’s back” in terms of coming to terms with student cheating. All evaluation is now by paper and pencil in examination rooms under the “old system.” As he put it so piquantly, “AI has eaten itself.”
Plus, in terms of my own thoughts, I latched onto the bit about human expertise. This also has been devalued, with AI being merely the final step in the crapification and devaluation of human wisdom. I’ve seen what I predicted in terms of econometrics “colonising” other fields in choice modelling: exotic models understood by only a few people on the planet, when I can still eyeball datasets, do a few exploratory analyses, and, from 20 years of experience, know what segments are there in terms of human preferences.
Finally, this wisdom is most useful, and shows up the limitations of AI, in terms of extrapolation and thinking outside the box. We have already seen examples on NC of AI “making stuff up.” Even when it doesn’t do this, it can only really move “along the production possibilities curve” (apologies to those who haven’t done Econ 101, but it’s easily researched and intuitive). Designs I used to use in understanding “products/medical interventions not yet made” (NOT on the PPC) were heavily based on orthogonality, Latin squares, etc. I immediately saw the links when I watched this fascinating, very accessible visual illustration of how highly “non-solid” 3D shapes can produce “solid” shadows in all three dimensions. (This latter bit is just a tangent, but I found it fascinating, as it is linked to so many other things in life, not just survey research.)
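For readers who have not met these designs, below is a toy construction of a Latin square (a standard textbook example, not Terry’s actual designs): each treatment level appears exactly once in every row and every column, so row, column, and treatment effects can be separated without running every possible combination.

```python
# Cyclic construction of an n x n Latin square: level (row + col) mod n.
# Each of the n levels appears exactly once per row and once per column.
n = 4
square = [[(row + col) % n for col in range(n)] for row in range(n)]
for row in square:
    print(row)
# [0, 1, 2, 3]
# [1, 2, 3, 0]
# [2, 3, 0, 1]
# [3, 0, 1, 2]
```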
Terry Flynn: ...the limitations of AI, in terms of extrapolation and thinking outside the box …. Even when it doesn’t do this, it can only really move “along the production possibilities curve”
Arguably true only in certain contexts. And AI, conversely, can recognize patterns in nature that human brains never could.
If you talk to people in the (serious) VC world, for the last two or three years many have been inundated with pitches from companies applying large language models to molecular biology, computational genomics, and drug discovery. The fuss about LLMs is old news there. Forex, from last year —
https://blogs.nvidia.com/blog/2022/09/20/bionemo-large-language-models-drug-discovery/
LLMs can look at enormous sweeps of genomic data, and see the pattern in how some gene upregulates expression of another a thousand genes downstream, which a human brain would almost certainly never pick out of the data in a thousand years.
OP: Nadia Garbellini: It seems to me that this is going to make the middle class smaller and smaller, and correspondingly the lower class, and possibly the numbers of people completely out of the productive economy, larger and larger.
This, I’m afraid.
If we proceed on our current trajectory, certainly.
Frankly, it may be that the best we can hope for is general acceptance and institutionalization of a UBI and a large class of UBI recipients, with a concomitant reduction in such social mobility as is now theoretically enabled by — please don’t laugh — capitalism. Furthermore, the USA and individual Americans will of course handle the idea of a UBI and its necessity as badly as possible.
Doubtless some here will claim that, in order to sustain the existing employment status quo, the US can somehow impose a stasis on the implementation of AI. But what if the US could do that? The Chinese see themselves as already having an advantage over the US in introducing AI applications into general life and industry, and they’re intent on widening that lead.
See, forex, this transcript of a panel discussion on a Chinese TV show titled, “West, Face Reality” where Zhang Weiwei, co-host Wu Xinwen, and the moderator deconstruct an article, “The West Must Prepare for a Long Overdue Reckoning,” written for the US publication National Interest by Chandran Nair.
https://karlof1.substack.com/p/zhang-weiwei-and-chinese-language-d16
Host: In the area of basic research on artificial intelligence, the United States is still very strong, but when it comes to applications, our application scenarios are extremely rich, our industries are very broad, and all kinds of AI products can find a variety of ports to combine with industry in China.
Eventually, therefore, even if the US could carry out an AI freeze and perpetuate its existing status quo, doing so would likely place it in the position Qing dynasty China was in when the British and the Europeans rocked up.
“And AI, conversely, can recognize patterns in nature that human brains never could.”
Is it really “never,” or just a matter of speed?
Human brains also come with a body and all its bodily functions, among other complexities.
Why would it just go randomly searching for patterns? Even a scientist whose job is searching for patterns in nature has a brain doing so much more each fraction of a second, even as it searches for patterns.
Do people really know ALL the human brain is capable of? I don’t think anyone or thing does.
Thanks to you both for the comments. But I perhaps didn’t make my point clearly enough: HOW can AI give you an answer when the EXPERIMENTAL data (e.g., is a combination of x and y simply additive, or are there interactions, etc.) ARE NOT THERE? AI MUST by definition “make this up” or admit defeat. The former seems to be the default. An old uni friend of mine works for Google in AI. He asked a lot about what I did… then never followed up… which to me spoke loads…
If A plus B has not been done in the real world, then AI CANNOT definitively tell you that there are no interaction terms. If it does, it is simply “making stuff up.” You need REAL data: either actual products or the kind of experiments that people like me did. Please quit quoting “pattern recognition.” This is necessarily based on EXISTING data, not HYPOTHETICAL data. This is why, for all its faults, the iPhone was a game changer. NOBODY was ready to challenge it using existing data on consumer preferences… except one US company, who I can’t name, but who was our client and who (during a proactive phase) actually got my colleagues to do choice experiments that mocked up real stuff on a hypothetical smartphone remarkably similar to what the iPhone turned out to be. The client decided to pass. Now it is a pathetic subsidiary of a larger company.
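Terry’s point about missing combinations can be made concrete with a toy regression, using invented numbers and only numpy: if A and B never appear together in the data, the interaction column of the design matrix is identically zero, the matrix is rank-deficient, and no fitting procedure, however clever, can recover the interaction effect.

```python
# Toy illustration (invented data): the A*B interaction is unidentifiable
# when the combination A=1, B=1 never occurs in the observed data.
import numpy as np

rng = np.random.default_rng(0)

# Observed conditions: baseline, A alone, B alone -- never A and B together.
A = np.array([0, 1, 0] * 20)
B = np.array([0, 0, 1] * 20)
y = 1.0 + 2.0 * A + 3.0 * B + rng.normal(0, 0.1, A.size)

# Design matrix with an interaction column A*B, which is identically zero here.
X = np.column_stack([np.ones_like(A), A, B, A * B])

print(np.linalg.matrix_rank(X))  # 3, not 4: the interaction coefficient
                                 # could be literally anything.
```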
“Plus, in terms of my own thoughts, I latched onto the bit about human expertise. This also has been devalued, with AI being merely the final step in crapification and devaluation of human wisdom…”
I’m on the same page with you.
@Mikel —
The immense scale of the data sets in genomic data is what LLMs are better able to address.
Terry Flynn: You need REAL DATA.
And that’s the point. When you turn LLMs loose on real data from nature, as w. molecular biology, you can get valuable results sometimes.
Conversely, when you turn them loose on human bloviation or more verbiage extruded from other LLMs, you get the equivalent of xeroxes of xeroxes — in Ted Chiang’s formulation, ChatGPT is a blurry JPEG of the web.
That use/aspect of AI seems pretty much the highway to hell, as you suggest.
Nadia Garbellini: It seems to me that this is going to make the middle class smaller and smaller, and correspondingly the lower class, and possibly the numbers of people completely out of the productive economy, larger and larger.
Yes. Desk jobs will be decimated. Any job which is paper-in, paper-out is very vulnerable. These include most middle-class jobs (doctors, lawyers, and engineers).
By a large stroke of irony, that includes politicians, doctors, and engineers, especially coders.
Stupid question: What do all the people who kept warning us Wikipedia was terrible have to say about AI?
Wikipedia is not hard to fact-check; its consistent layout and presentation make it easy to sort through the information. AI, from what I’ve read, seems to englobulate facts with a thicket of other words used to create the illusion of a news article, when all I’m really seeing is some facts swaddled in extra words which, if left to its own devices, AI will swaddle and reswaddle again. (OK, my exposure has been fake news stories, and I’m sure they’re programmed for length, which is why we’re all seeing weird stories that repeat all the main facts three times.)
Seems to me like AI is just a self-propelled version of Wikipedia on steroids, and one that would benefit from having an orange cone dropped on its hood.
Did anyone see the Futurama episode about Momazon? Where do you suppose the idea of maturity was replaced by aging, i.e., pay your debts, let your emotions rule your thinking, don’t rock the boat…
Most people don’t grow up. Most people age. They find parking spaces, honor their credit cards, get married, have children, and call that maturity.
What that is, is aging.
~Maya Angelou
Isn’t that effectively a variant of the paperclip maximizer?
Technology does not necessarily improve productivity, flexibility, or service. Sometimes it improves speed of service and sometimes delays it. What it does reliably do is change distribution and ownership of the transactional money flow whatever the business. That is its chief attraction to grasping capitalists IMO; it cuts new owners in by undermining the older capitalists and/or the older share belonging to labor. It is not designed to gratify consumers.
I can easily see a situation where the output of AIs ends up choking the internet with stuff that may or may not be true. There is an old computer adage, “Garbage In, Garbage Out,” and it describes AI pretty well, as who knows what data sets were used to train some of these AIs. Reddit? 4chan maybe? Terry in a comment above has noted AI’s propensity for making stuff up. So how many news articles on the internet right now were mostly the work of an AI? How many images are actually the work of an AI as well? I see images of young girls in bikinis on YouTube videos and you can tell that they are not real girls but AI constructs – for the moment. The sheer amount of AI-generated stuff could flood the internet with everything from articles to reviews to images, crowding out the work of real writers and real artists. Certainly it will make the work of propaganda outfits like the UK’s 77th Brigade much easier, with AI writing their stuff. Oh brave new world…
My personal interactions with AI have generally been disappointing. Interesting, certainly, but not as useful as I had hoped, imagined, or been led to believe.
Quantity must increase if costs go down, at least until effectiveness can be documented. What doesn’t perform is usually self-limiting to a degree. The more garbage that exists, the less I will interact with the sources of said garbage.
Is it possible there will be new labels or certifications like with foods to denote “made by a human alone” or “made by a human with AI assist” which may allow for segmentation by creator type? I expect there would be no benefit to a “made by AI” label, so would not be utilized.
The internet is already suffering from decay based on recency. How many locations on Google Maps, for instance, are closed, moved, or otherwise different from what is presented? Where street views are four years old? Decaying informational tools are like physical tools that aren’t well maintained.
TL;DR: The public library I worked for did not need AI to make working there a hellscape for its workers.
I worked for a public library system that was actively de-skilling its workers by the time I retired in 2014. They were getting rid of specialists, and if they replaced them at all, they replaced them with generalists with lower skill sets and lower pay. My job was in a local history room, and serving its patrons required a great deal of content knowledge and specialisation, which I had. I did not have a master’s degree in library science, but I did have a BA in a different field and 34 years of experience working in libraries. For a long time, I did the job of a Librarian I without the MLS or the pay, about which a former business manager told me he “laughed all the way to the bank” whenever he thought about what I was paid.
Eventually, I became a Library Associate, which is as high as you could go without the MLS. After 28 years working for this library, I’d had enough, and I managed to hang on long enough to retire, which was not easy because they were actively shoving “old” people out the door by making them miserable enough to leave through a variety of tactics. So I did. The person who replaced me had my duties and all of his own to do, for less pay. Unlike me, he went on to library school for the MLS, only to have his job, my former one, deskilled further: a Library Associate no longer required a college degree, and a Librarian I no longer required an MLS. He became a Librarian I just as they stopped requiring the hard-won MLS degree he is now in debt for.
Before I left, this library system went all in on the “retail model,” with all the bad things you would expect when the corporate mindset is applied to a civic institution, and all of the “move fast and break things” decline that goes with it.
Man, I can only imagine what it is like at your library after nine years have gone by. Do they still have books on shelves or are they trying to replace them as much as possible with digital stuff? If you were working in the local history room you have my respect. That is a job where you need to have on tap a lot of esoteric knowledge.
They were already decimating collections before I left. Because I had also worked in periodicals, I had to fight to keep them from wiping that collection out almost completely, which they did almost immediately after I left. The people who “weeded” books had no content knowledge of the books they were discarding, because anyone with content knowledge would not have pitched the things that were discarded. They based the discarding on circulation stats with no regard to the value of the book, and I don’t mean monetary value, but the value the book represented in terms of knowledge. Now if a book has not circulated in 18-24 months, it’s gone. The digital books are mostly the potboiler of the month, mostly the kind that would be a “beach read,” and not a whole lot of more meaningful things to read.
The circulating collection has largely been reduced to one floor, all of nonfiction and fiction. That puts them back to probably less than the entire collection of the old Carnegie library building where I started out a long time ago. They also used to brag that one floor of the “new” building had the same square footage as the entire Carnegie building, which is only partially true because of an enormous hole in each floor for the skylight.
When it came to buying books for the library, there used to be a committee that selected them, and there used to be a time when staff with specialty subject knowledge had their input valued. Now one person is in charge, and they mostly just take whatever comes from one of the big book vendors.
Fortunately, the local history room is mostly safe from the depredations of the rest of the library, because local history and genealogy are still quite popular. It has lots of rare books that are not easily found elsewhere. It was also the last place that still felt like a library, where you could feel you were doing real good. The first staff development day after I retired had a group of authors and poets who talked about how much the library meant to them and how they could not have been the people they were without the books they loved. Immediately after that, the then-director stood up in front of those people and, with great enthusiasm, talked about her five-year plan for getting rid of those books. I was glad not to see that.
There are a few people I know who still work there, and the things I hear are sometimes incredibly disheartening. I sometimes have survivor’s guilt for retiring when I did, which also makes me a bit angry, because I earned every penny of that pittance.
Thanks for that valuable comment. I believe that what you say is accurate. Over the past decade or so, through my son’s employment and that of some friends, I’ve been able to observe the workings of a large metropolitan (NYC) library system. Their experience doesn’t contradict your conclusions, but it does offer a somewhat different perspective.
Yes, the original structure and purpose of the library system has changed (to a greater or lesser degree, depending on location). Much of this change, though, is in response to the needs of the surrounding community it serves. And it is a fact that, in the absence of funding for community centers, this particular library system has taken on burdens that might be better served elsewhere. The key word is “might.” In fact, there is an acute and active need for literacy classes, assistance with job applications, and a long list of services that are scarce or lacking elsewhere. Libraries may not have been designed to fill those needs, but it’s not hard to see why they are attempting to.
That does to an extent devalue traditional, specialized skills and training. There should, of course, be room and funds for both.
I agree that libraries should change in response to the needs of the communities they serve, but I don’t see that it was necessary to decimate collections. It was as though the library wanted to be rid of our older traditional patrons and replace them with young hip patrons, who never showed up to be the replacements. The older patrons never came back, except for the researchers in the local history room, because that collection so far has not been subjected to what happened to the circulating nonfiction and fiction collections.
Some of the services you mention, my former library had been doing for about twenty years, since shortly after it got public-access internet. I could not count how many times I helped people who did not own a computer try to fill out an online job application so complicated that anyone who successfully completed the form should have been hired on the spot. Or helped the same people set up an email account they barely understood, writing out instructions so they could check whether they got a response from the employer with the too-complicated application form, all for a custodial position that would not have required on-the-job computer use at the time.
We public service desk workers had to be part social worker, part security, and still juggle the traditional library duty of meeting the information needs of those who came in. We were never trained on the social worker and security parts of the job. Eventually the library hired security, but they were stretched really thin and the turnover rate was really high. My location was downtown, and it is like a lot of other downtowns in mid- to large-size American cities, with all the problems that come with that. It was never about funding; it was about priorities. By the time I left, and much more so afterward, fewer people were taking on more and more, expected to do it all without any help, each person less skilled and less paid than the person before.

If someone outside the library confronted administration about that, they would unsurprisingly deny it, but reality was something else altogether. I’m glad I wasn’t there when the technical services department had what amounted to a “St. Valentine’s Day massacre,” with firings and demotions out of the blue, or the day a four-thirty e-mail let go the entire part-time staff during the pandemic; none of them were rehired after things got better. Some of those part-timers were en route to the library, unaware they no longer had jobs; others, trying to check their email, discovered their accounts had already been deleted. Many of those people had been at the public library as much as twenty years, because the university library paid so little they had to work multiple jobs. The full-time workers had to pick up the duties of the former part-timers on top of their own work.
The only consolation I feel in all of this is that not all libraries are as terrible as the system I used to work for. I can easily imagine what my former workplace will do with AI when the opportunity presents itself, because much of it they were already doing.
Ouch. My sympathies. Beyond the general deterioration, which sounds as if it was driven by management, what you say about the firings is disheartening. Sounds as if they were emulating long-established corporate tactics.
Fortunately, the system my son works for has a relatively strong and wily union. That doesn’t mean no one is ever fired during rounds of budget cuts, but there is some level of protection.
I believe you’re right about the junking of “outdated” or “unpopular” books, though I don’t know enough to assess the state of the collections. I have gotten used to the fact that most of my interests are so esoteric that only specialized libraries will satisfy them. Beyond that, I mostly read detective stories, which remain truly popular, though on Libby I have to hit the “random” button to avoid being inundated by endless volumes of James Patterson.
Management loved being corporate so much that they hired someone who had previously worked in Amazon’s HR department, and we got a real taste of the way Amazon likes to run its warehouses. Everything was designed to punish the rank and file.
Your son is very lucky to have a strong and wily union looking out for him.
I was a library kid growing up; I never in my wildest imagination thought working in a library, one of my favourite places to be, would become such a nightmare. I don’t visit them anymore, but I do have a very large personal library, which will take me beyond the rest of my life to finish reading. Once upon a time, working in a library was a good day job for an artist: you could have a career and still pay the rent. It was also good for the musicians, writers, and fellow visual artists I have had the pleasure to work with as we were trying to make a go of things.
Hello Patrick Lynch,
Funny, I went through the same story in the libraries of the University of Geneva in Switzerland. I worked through all the department libraries of the human sciences, each with its own classification system that made a lot of sense for its discipline. In the seven years I was there, each time a librarian retired they did not replace him; they just added his job to someone staying on. At the same time, they kept adding people to the offices who weren’t producing or doing anything, except eating, having fun, making a lot of noise, and producing posters that were mostly illegible. Then the administration started something they called “rationalisation”: all the libraries had to move to the same classification, which they called “normalisation,” because the poor students supposedly wouldn’t bother to understand the old systems (I thought universities were for intelligent folks). They split the work into “back office” and “front office” jobs. When I started, in a student job, we did everything: incoming mail, lending books to other libraries, inventory, preparing new books, serving and informing our readers, keeping the library in order, finding lost books, creating new reader accounts. By the time I was leaving, all we had the right to do was turn the light on in the morning and off at night; everything else was automated, we couldn’t touch the system, and we had to hurry out at night because the door locked itself! There you go for the crapification… I told my boss I had to go because I didn’t come to Switzerland from Czechoslovakia to live in a Communist dictatorship… I did not know how right I was until reading Michael Hudson’s posts…
Now I drive handicapped children to school with my master’s degree in human sciences, and I’m not well paid but happy.
As a retired tech nerd, I have done some simple hacking with things I have never used before. ChatGPT has been a great help in creating these things. But you have to understand how ChatGPT’s creations work. In the real world, you can’t just say “give me this” and plug it into a complex system. What happens when your plugin is messing things up? You are on the hook. You’d better understand what ChatGPT creates. It’s not going to replace everybody.
Anybody else wondering if there is a connection between all the hopes for image recognition and surveillance and the apparent heavy lobbying against mask wearing for one’s safety?
Yes. I never understood why the Dems weren’t all-in on masking since the MAGA crowd was so dead-set against it; I would have thought masking would be a tribal identifier for them. But it also makes sense that the decision not to promote masking may have been made at a much higher (or perhaps deeper) level of government than the Biden administration or CDC.
Long after work is automated, will we eventually rediscover the value, the reward, of doing work that contributes to society, for the sake of contributing to society, that feeling of making a difference, of being proud of our skills? Or will that be forever lost, our existence meaningless, with no further avenues toward contributing to the world, the alienation complete?
>He also stated that what he is mostly worried about is the devaluation of expertise.
To pop the bubble in expertise created by the Cult of Credibility would be a crowning achievement for AI.
Yeah, this can only end really badly. The uses it’s already being put to would shock the majority of the public if they knew. We have AI deciding the shifts people will work where I am, for one example. It doesn’t matter that it’s not intelligent, isn’t using accurate, up-to-date information, and isn’t really making choices, just predicting the future with basic pattern matching.
Any use a company thinks it can put this shit to, it will. Humans will lose any and all control and freedom they thought they had in their work, and probably their home lives. You will become a perfect drone, entirely managed by stupid MLs.