The Bogus Justification for AI Uptake and the Real Reason for the Scam

Your humble blogger has been reluctant to dignify AI, even in the face of technologists we know and respect saying that it is truly revolutionary. But then the question becomes “Revolutionary for what?”

The enthusiasm for AI, aside from investors in its realm and various professional hangers-on, comes from businesses out of the prospect of cost savings due to productivity increases. And most are unabashed in saying that this means replacing workers.

But as we will soon show, AI mainly decreases rather than increases productivity. So if that is the case, why has the fanfare continued at a fever pitch?

It is not hard to discern that, irrespective of actual performance, AI is yet another tool to discipline labor, here the sort of white collar and professional laborers that management would tend to view as uppity, particularly those that push back over corner-cutting and rule-breaking.

In this it falls in the proud tradition of other labor-bargaining-power-reducing yet overhyped gimmicks like outsourcing and offshoring.

Let me quote IT expert Robert Cringely from an important 2015 article on the use of H1-B visas and offshoring. Cringely said it was an open secret that offshoring was not working, but in a modern analogue to footbinding, no one dared stop because investors would punish the company based on wrong-headed assumptions. From Cringely:

Now let’s look at what this has meant for the U.S. computer industry.

First is the lemming effect where several businesses in an industry all follow the same bad management plan and collectively kill themselves…

This mad rush to send more work offshore (to get costs better aligned) is an act of desperation. Everyone knows it isn’t working well. Everyone knows doing it is just going to make the service quality a lot worse. If you annoy your customer enough they will decide to leave.

The second issue is you can’t fix a problem by throwing more bodies at it. USA IT workers make about 10 times the pay and benefits that their counterparts make in India. I won’t suggest USA workers are 10 times better than anyone, they aren’t. However they are generally much more experienced and can often do important work much better and faster (and in the same time zone). The most effective organizations have a diverse workforce with a mix of people, skills, experience, etc. By working side by side these people learn from each other. They develop team building skills. In time the less experienced workers become highly effective experienced workers. The more layoffs, the more jobs sent off shore, the more these companies erode the effectiveness of their service. An IT services business is worthless if it does not have the skills and experience to do the job.

The third problem is how you treat people does matter. In high performing firms the work force is vested in the success of the business. They are prepared to put in the extra effort and extra hours needed to help the business — and they are compensated for the results. They produce value for the business. When you treat and pay people poorly you lose their ambition and desire to excel, you lose the performance of your work force. It can now be argued many workers in IT services are no longer providing any value to the business. This is not because they are bad workers. It is because they are being treated poorly.

Let’s turn more briefly to offshoring, which America is now amusingly trying to reverse. Through the McKinsey mafia and other contacts, I have heard quite a few tales about decisions to move manufacturing abroad. In a substantial majority, the business case was not compelling and/or the company could have achieved similar results through just-in-time and other improvements. But they went ahead because management wanted to look like it was keeping up with the Joneses and/or they knew it was what investors wanted to see.

Moreover, from what I could tell, no one risk-adjusted the alleged improvement in results. What about the cost of contracting? Of disputes and finger-pointing about goods quality and delivery times? Of all of the extra coordination and supervision? Of catastrophic events at the vendor’s plant? And, as Cringely alluded to, the loss of basic know-how?

Keep in mind that for most manufactured goods, direct factory labor is 3% to 7% of total product cost, typically 3% to 5%. Offshoring is best thought of not as a cost savings but as a transfer from direct factory labor to higher executives, middle managers, and to a lesser degree, various outside parties (lawyers, outsourcing consultants), all of whom have more to do in devising and minding a more complex and fragile business system.
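To put rough numbers on the point, here is a minimal sketch for a hypothetical product, using figures consistent with the 3% to 7% range above: even eliminating most of the direct labor cost saves only a few percent of total product cost, before any of the new overhead is counted.

    # Hypothetical product: direct factory labor is 5% of total cost,
    # and offshore labor is assumed to cost one fifth as much per unit of work.
    total_cost = 100.0
    labor_share = 0.05
    offshore_wage_ratio = 0.20

    labor_cost = total_cost * labor_share             # 5.0
    savings = labor_cost * (1 - offshore_wage_ratio)  # 4.0
    print(savings)  # at most ~4% of product cost, before contracting,
                    # coordination, quality disputes, and other overhead eat into it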

Now back to AI.

I am not saying that there are no implementations where AI would be a net plus, even allowing for increased risk. But there’s way way way too much treating AI output as if it were an oracle when it’s often wrong (and I see this well over 50% of the time when readers quote AI results in comments on topics in which I have expertise). And I’ve been shown cases of literally dangerous output in high-stakes environments, like medicine.

One of our normally skeptical tech experts who was impressed by AI made clear its limits: it was like having a 3.9 GPA freshman as your assistant. It provides a very good first pass, but its results still have to be reviewed and revised. But how often is that happening in practice?

And surveys so far have been finding that AI is a productivity dampener. For instance, from Inc Magazine last July:

When corporate executives look at AI, many of them see a means of boosting productivity. But ask employees how they view the tech and you get a much more pessimistic perspective.

That’s the big takeaway from a survey that freelance job platform Upwork just published. According to the firm’s research arm, 96 percent of C-suite executives “expect the use of AI tools to increase their company’s overall productivity levels.” Yet at the same time, 77 percent of employees who use AI tools have found that the technology has “actually decreased their productivity and added to their workload.”

That’s for a variety of reasons, the survey indicates, including the time employees now have to spend learning how to use AI, double-checking its work, or keeping up with the expectations of managers who think AI means they can take on a bigger workload.

And from Vimoh’s Ideas in December:

…when AI tools came over the horizon, we were hearing a lot about how they’re going to make people more productive. And there were studies to this effect. There is at least one study by McKinsey, which predicted a productivity growth of 0.1 to 0.6% by 2040 from AI use. But 2040 is far away, and until now, we haven’t seen that. In fact, we may actually be seeing the opposite because a recent study done by Intel says that productivity is actually down.

They followed 6,000 employees in Germany, France, and the UK and found that AI PC owners were spending longer on digital chores than using traditional PCs. The reason behind this, of course, is that you cannot hold AI tools accountable. If you are someone who has AI tools, who has a workplace where AI tools are being used to achieve something, you can’t fire an AI tool. In fact, you’re paying money to use the AI tool; you’re paying money to the company that made the AI tool. At the end of the day, the person you can hold responsible, the person you can hold accountable, is your employee. You can tell them that if this job does not get done, your job is on the line. You can’t say that to an AI tool.

So, the work at the end of the day is still being done by someone who’s using the AI tool. And now, while earlier they just had to do the job, now they have to train the AI to do the job, make the AI do the job, and check what the AI has done. And in some cases, probably many cases, fix the mistakes being made by the AI.

I myself have tried to use AI tools to do some jobs that I don’t like to do. And in every single case, it has to be checked. In every single case, it has to be fixed. So, Intel thinks this is a problem of employees not knowing how to use the tools. I think that short of AI becoming agents in their own right, where they take decisions and perform tasks and are proactive, we’re not going to solve this problem. And that the buck will continue to stop at humans and the humans who hire them to do work.

The way Vimoh breaks his argument down suggests he’s had to say this sort of thing to resistant higher ups before.

The AI optimists among you might contend that surely the employees or the companies will get better at AI implementation. Erm, bad tools are bad tools. But even if workers get better at finessing them, there will still have been the phase of productivity losses. And that’s front loaded, which makes it more expensive in net-present-value terms. Is there any reason to think that this will eventually be recovered?
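To see why front-loading matters, here is a minimal sketch; the numbers are made up for illustration (a 10% discount rate and a five-year horizon are assumptions, not figures from any survey quoted here). A rollout that loses productivity for two years and gains it back later can be ahead in nominal terms yet still under water once the cash flows are discounted.

    # Hypothetical net productivity effect of an AI rollout, by year.
    # Years 1-2: losses from training, double-checking, rework; years 3-5: hoped-for gains.
    flows = [-100, -100, 60, 80, 80]
    rate = 0.10  # assumed discount rate

    nominal = sum(flows)                                               # +20
    npv = sum(f / (1 + rate) ** (t + 1) for t, f in enumerate(flows))  # about -24
    print(nominal, round(npv, 1))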

Again, before you try saying yes, consider the counter-evidence, which is the low level of tech competence generally. From I Will Fucking Piledrive You If You Mention AI Again:

Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain…

Consider the fact that most companies are unable to successfully develop and deploy the simplest of CRUD applications on time and under budget. This is a solved problem – with smart people who can collaborate and provide reasonable requirements, a competent team will knock this out of the park every single time, admittedly with some amount of frustration….But most companies can’t do this, because they are operationally and culturally crippled….

Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course fucking catastrophe.

Mind you, this generally sorry picture does not seem likely to be improved by DeepSeek or similarly more efficient models with different underlying paradigms. We are at least relieved that DeepSeek and its ilk may and should derail OpenAI, ChatGPT, and other US AI flagships that are monster energy hogs. At least the level of planetary destruction will be reduced.

Or perhaps not. AI hype has made a lot of people obscenely rich, so the incentive to keep the grift going is very large. So expect more strained justifications about necessity and improved competitiveness, when the evidence of either among users remains thin.


101 comments

  1. VTDigger

    TLDR from healthcare: “AI” produces false information. “AI” by definition can’t fact check itself after it has made a pronouncement, because if it can then the output is unpredictable and therefore useless. Ergo, humans have to fact check everything the bot spits out.

    Labor savings erased.

    It’s the same everywhere, there’s always, and I mean always, a team in the Philippines doing manual data labeling and corrections. If there’s labor arbitrage at all it’s going to be more people in the 3rd world doing data entry.

    Reply
    1. Joe Well

      Healthcare is where lumping together Large Language Models (LLMs) with other kinds of machine learning as “AI” may be deadly. There are interesting developments in terms of using machine learning to flag medical images or test results for review (though they can also be misused), but that has nothing to do with using ChatGPT to write visit notes.

      Reply
      1. Dr. Nod

        A point well taken. It seems to me that many spout off the term AI without having any idea of the various sorts of artificial intelligence.

        Reply
        1. LY

          Data analytics, machine learning, etc. have been around for a while and have basically taken over computer science and engineering. However, it seems like they’re all trying to ride the coat tails of the recent hype based on Large Language Models (LLM).

          Right before the pandemic, an audio processing friend (electrical engineer) and I were talking about machine learning used in signal processing and telecommunications, such as for noise cancellation, picture filters, scheduling data in a wireless network, etc. Same issues then: quality of training data, handling of corner cases, and the blackbox nature…

          Reply
        2. fjallstrom

          This is by design.

          Large language models (LLMs) are hyped as Artificial Intelligence, in effect an intelligent being, when they are just machines to produce statistically likely text (and in some cases images). If they were not touted as intelligent beings, the limitations of producing statistically likely text would be evident.

          LLMs are in turn a subset of machine learning (ML), a way to produce statistically likely correlations. These also count as AI today. There are actually useful statistically likely correlations, but to use them properly one has to do away with the notion of an intelligent being.

          Expert systems (a designed set of algorithms) were called AI in the ’80s and were claimed to be able to replace skilled workers. They could not, and are mostly no longer called AI, though sometimes they are. They are also useful, in their niches.

          Without the hype, and indeed without hyping technologies which are not intelligent as AI, the limitations would be clear. And without the hype there would not be a bubble, and without the bubble and the need to keep the bubble going by making the line go up, there would be no need for spouting off about it.

          Reply
    2. ArvidMartensen

      Businesses have been trying forever to get skilled labour as slaves or to work for peanuts, or buy a machine that means they don’t need people at all.

      When the knowledge management craze took off in the late ’90s, I was asked to write a “Knowledge Plan” for a medium-sized org of around 120,000 ‘human resources’. Researching from scratch made for a really interesting task, and the results were duly incorporated into the org’s Strategic Plan.

      A couple of years later I realised what the real goal of this was. It was how to suck up the knowledge of the white collar people who worked there, put it in a database system, and then sack them. And all the while telling staff that people were the org’s most important asset.

      So AI is just a new and shiny attack on the rights and salaries of working people. Owners are gagging to sack white collar workers and replace them with AI.

      Do employers ever revisit the cost/benefit of bad decisions? Idk, jobs have been offshored for decades. It’s only because of geopolitical reasons that onshoring is being considered now.
      So it probably doesn’t matter if AI performs like the proverbial crock, the ones making a fortune from it will convince CEOs that their jobs are on the line if they don’t subscribe.

      Reply
  2. Socal Rhino

    Worker discipline and the pump to keep the market inflating when it had started to sag.

    DeepSeek was just a psyop, they are lying, it’s no big deal and nothing we didn’t already know how to do, if you were a tech insider you’d know this is old news and the stuff we’re working on will be world changing and we are way ahead of the rest of the world (the last via Reid Hoffman on CNBC this morning).

    The impact of DeepSeek, I think, is analogous and adds to the impact of the US experience in Ukraine. The US hollowed out its industrial base but thought it would retain a permanent technology edge. If we lose the technology edge, well, that’s unthinkable. Hypersonic missiles are hype, and anyway you can’t hit a moving aircraft carrier with a hypersonic missile (said with confidence despite our failure to date to produce one ourselves).

    Genuine artificial general intelligence would be game changing (with real risks), but hyper-scaled LLMs seem very, very unlikely to be a path to AGI.

    Reply
  3. Matthew

    Excellent, excellent piece; will share. Love the opening bars. So delighted that they’ve been cut down to size here (though I see the ruthless attacks as a sign that this stumbling Western machine is increasingly and openly willing now to play the bull in a china shop when crossed).

    The angle I’d like to see some left theorist explore is the way that this is obviously a kind of primitive accumulation, predicated on the earlier, late-80s appropriation of catalogs, data, art, etc. by the entertainment, book, and other industries as everyone digitized and sought “content” (what all knowledge became). What it might not be terribly exaggerated to call the cumulative knowledge and patrimony of mankind has been wrested from people who will see no recompense and tossed into the maw of what is, in the end, an increasingly sophisticated but often (for now) ham-handed mechanical factotum, for which a bunch of boring guys in suits should not extract bazillions in order to forge a clearly feudalist relationship with us FOR OUR OWN AND OUR ANCESTORS’ hard work. In a crucial way, we need to continue to make our disgust with them paramount. Lead with it and do not forget it.

    Reply
    1. Alena Shahadat

      Thank you, you are speaking from my heart.
      As someone helping to make a large music album project a reality, with 10 years of work on arrangements and scraping money together to pay our old musicians properly, most of whom are living in poverty because of the disregard for artists’ rights in their country, I am still in shock after reading about the Suno CEO who “trained” his music-generating software on “heaps” of stolen artistic creation.

      He says in his interview :
      “Every single person at Suno has, like, an incredible deep love and respect for music”, Shulman says.

      https://completemusicupdate.com/suno-ceo-mikey-shulman-says-making-music-sucks-skill-doesnt-matter-and-everyone-building-ai-products-infringes-copyright/

      And he “would rather not be sued”? I cannot stomach his pretending to be naïvely disappointed by the music industry’s reaction while he just wants to “help people”.

      I see there is a mounting backlash since his interview. But will it be enough? It would be less revolting if the AI technological firms were not swimming in billions of investment money.

      I am thankful for the above article. Thank you so much Yves Smith.

      Reply
    2. Redolent

      yes….content as knowledge…business evolution as scripture…american exceptionalism…c-suites on the take…feudalism ad infinitum

      Reply
    3. hazelbee

      I can… read 10 articles and summarize them into a new article.
      I can read 10 books and write a new one in a similar style.
      I can be inspired by, say, 17th-century painters and do a new work in a similar style.

      If I can do this as an individual, why can’t I do it with a machine ?
      How is that different?

      Now how is it different if I do this based on 10 billion documents?

      How does copyright (either fair usage or derivative work) play into this discussion?

      Reply
      1. lyman alpha blob

        Those sound like distinctions an “AI” would not understand, but a human being would. There’s a certain ineffable qualitative difference.

        How can we be sure you aren’t an “AI” trying to defend itself?

        Reply
  4. ambrit

    Having spent much of my working life in construction, I can see where the combination of “AI” and mobile robots will tell the tale concerning the “efficiency” of “AI” in general. Getting an autonomous robot to carry out tasks on a construction site is the true test. Can a self-directing robot replace a Terran human worker in carrying out physical tasks? Can a robot “scrape” the myriad intellectual and physical skills needed to function in a crowded environment from “wet wired” Terran humans? How would such “scraping” be carried out? Many of the ‘skills’ and techniques needed to so function are not visible nor obvious from logical deduction using direct observation in the phenomenal world.
    The bottom line here is, can “AI” think, or does it just ‘parrot’ other thinking beings?
    I’ll go out on a limb and say that the situation described in the post above gives me doubt as to whether or not our “C” Suite managers can think also.
    Stay safe.

    Reply
    1. ChrisFromGA

      My understanding based on talking to several experts in the field is that this current iteration of AI is merely statistical. It can only predict the next word in a sequence of tokens. It has no logic-based capability.

      Earlier research (think the 80’s) focused on logic-based AI but it didn’t pan out.

      The statistical basis for current AI is why it requires such huge training datasets. And it can never do anything innovative. It can only solve problems for which the solution is already known (like really tough math challenges.)

      Another friend of mine described AI as “the loudest kid in the room” not the smartest one. I think there is some truth to that. It has to be constantly checked and reviewed for inaccurate outputs. In the legal field, it fails miserably to even cite the basic rules correctly, as Yves pointed out to me when I made the mistake of using ChatGPT to summarize the rule for fraud.

      As such it is never going to “think” or do the sort of tasks that you describe on a construction site. At least, not the current version being pushed on us by our techlords. Of course, some future breakthrough can’t be ruled out.

      Reply
      1. Reader Keith

        Unfortunately when you combine AI with the PMC (tech mgmt) you get both the “loudest kid in the room” and the “dumbest kid in the room”.

        Reply
      2. XXYY

        It can only predict the next word in a sequence of tokens. It has no logic-based capability.

        If you remember nothing else about AI, remember the above. There is no thinking going on whatsoever. It is just mining its training data set for the most likely sequence of tokens or words. (Some random perturbation is thrown in to ensure that the response to a particular prompt varies somewhat over time, which makes the AI seem more “human-like”.)

        From this perspective, it’s trivial to see that nothing original is going to happen in an AI. The machine has no understanding of the prompt at any kind of conceptual level, in fact the prompt is reduced to a series of numeric values before the output is generated.
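        A toy sketch of what that means in practice (the word counts are invented; only the temperature-scaled sampling mechanism is the point): the model picks a statistically likely next word, and the “random perturbation” is just sampling at a nonzero temperature.

        import math, random

        # Toy "model": how often each word followed "the cat" in some training text.
        next_counts = {"sat": 7, "ran": 2, "is": 1}

        def sample_next(counts, temperature=1.0):
            # Counts -> logits -> temperature scaling -> softmax weights -> weighted draw.
            words = list(counts)
            logits = [math.log(counts[w]) / temperature for w in words]
            m = max(logits)
            weights = [math.exp(l - m) for l in logits]
            return random.choices(words, weights=weights, k=1)[0]

        print(sample_next(next_counts, temperature=0.1))  # almost always "sat"
        print(sample_next(next_counts, temperature=1.5))  # the "human-like" variation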

        Reply
  5. Joe Well

    They’re forcing “AI” (really large language models, LLMs) into workplaces before the technology is polished enough to be useful for the average worker, thereby getting a ton of test users so they can get it into a polished state.

    Kind of like how PCs were forced into offices in the late 1980s-1990s. People just printed out stuff they would have typed or handwritten because the screens were so awful and the systems for sharing files in those days were so cumbersome. Sometimes middle managers had to learn to type rather than just dictate, and there were a lot of other examples of questionable reallocations of workers’ time.

    Maybe eventually LLMs will get to the point where the average workflow benefits from them and all this waste will be memory-holed just like the transition to computers was.

    Reply
    1. Yves Smith Post author

      I don’t agree with your take on PCs at all. I started using one in 1989 and could not have run my own business without one. And I am a terrible typist and hate learning software.

      Reply
      1. Lefty Godot

        But many people were given PCs because they were a status symbol in the company. They didn’t know how to use them and were constantly sucking up IT resources for support. When they did use them properly (from a technical standpoint), they still struggled with cumbersome software applications and entered bad data that someone would eventually have to ferret out and correct.

        PCs were a joy for the hobbyist and a wonder for the tech savvy small business owner. In many other cases, they added steps to your work and (especially after connection to the internet) exposed your company to security threats, as well as tempting your employees to waste time doing non-work related web surfing or playing solitaire. Like the internet, PCs should have been rolled out more deliberately and with more cognizance of the downside risks. Hard to fight fads though, especially among C-suite denizens.

        Reply
        1. Jokerstein

          When I took my first job (Kodak in Wealdstone, 1986), all research staff got a DEC Rainbow 300, plus a VT240 hooked to our VAX. Depending on our role, we got either three days or two weeks of computer orientation. It was extremely useful for us in communicating with the mothership in Rochester, NY, both for reports and more generally. It also helped get us off the terrible IBM-hosted PROFS suite.

          Reply
        2. heresey101

          Speaking as one of the first spreadsheet users: you sound like the mainframe people of previous eras. We developed rates and forecasts for a small utility that had 20 rural electric, gas, water, and telecom utilities in four states. One of my co-workers’ roommates worked at Apple and said there was a program that we had to try – VisiCalc. We borrowed a couple of Apple IIs and VisiCalc and tested it for a couple of months in 1979/80. We found it so useful that we convinced our boss to spend $10,000 on two copies of VisiCalc, two 16K Apple IIs with green screens, and a daisywheel printer. The mainframe people were furious and thought we were nuts because they could do it cheaper. A couple of years later, the California Public Utilities Commission came to see why we weren’t making the same errors and mistakes PG&E and SCE were making with their hand calculators and mainframes.

          After the IBM PC came out, we went to Lotus 1-2-3, which functioned like VisiCalc. A few years later, I was put in charge of a $40 million capital budget for the 20 utilities. I sent, via USPS, 8.5 inch floppies of Lotus 1-2-3 budgets to the utilities and was able to meet each annual budget within 1%.

          PCs do real work, especially when using word processing and spreadsheets.

          Reply
          1. hazelbee

            My father was creating spreadsheet software in the US at that time – he has an MSc in business and worked as what you’d now call the product manager for a company called Access Technology, creators of a spreadsheet called Supercomp. Second person in.

            The programmers who wrote it are the ones… that later went on to write Lotus 1-2-3. The owner of Access Technology thought there was no future for the IBM PC and that it was a toy… the developers thought otherwise and parted company. Massive missed opportunity for us.

            We’d moved from the UK and spent 4 years there, from ’79 to ’83. Fascinating period. They all used to go and buy copies of the competition’s latest software to see what new features were there, and copy them if good.

            Reply
        3. Yves Smith Post author

          Not true. Execs and senior managers may have had them on their desks but the great majority regarded typing as secretarial and did not use them. I was working with some heavyweight law firms and came across only one partner (at Covington) who used them, and he was seen as a deviant borderline geek.

          Reply
          1. Mark Gisleson

            Just verifying what Yves said: before 2000 most men couldn’t type, but for some in the late ’90s, and then for almost everyone in the 2000s, MS Office familiarity became mandatory. It had been a job-related skill only; then suddenly every job required PC literacy.

            Men not in IT were slow to pick up computer skills because pre-PC jobs involving typing paid less (way less)(“women’s jobs”). Coincidentally, as soon as men accepted keyboarding as an essential skill the internet blew up into the big thing it is today. Strangely the wages for keyboarding did not go up : (

            Reply
    2. Socal Rhino

      It was the opposite in my experience in financial services. My first unit had one PC needed for a specific task, I briefly reviewed a Lotus 1-2-3 tutorial on it and started finding all sorts of uses for it. A couple of years later, every person in the department had a PC on their desk. Word processing and spreadsheets were magical even before the internet was rolled out.

      I had already used other computers at home, to play games and to dial in to message boards.

      A comment I made to my wife this morning was that current “AI” is like the internet if, when you sent a message, only 80 percent of your intended recipients got it, several people you didn’t intend got it anyway, and the message might contain random words you didn’t write.

      We had a joke at work that the only applications we really needed for disaster recovery were spreadsheets, word processors and email.

      To date, AI strikes me more like Excel macros. Very useful in specific cases for certain users, but prone to bugs and risky to use without thorough review.

      Reply
      1. t

        AI strikes me more like Excel macros. Very useful in specific cases for certain users, but prone to bugs and risky to use without thorough review.

        Excellent analogy.

        Reply
      2. Soceital Ilusions

        This for sure. What I notice is that it is good at summarizing and synthesizing data, and using what it’s been fed to build on that.

        And for now, what it has been fed has come mostly from humans.

        As the cost of content plummets, I might assume good content will become more valuable. There will be an adjustment period when there is overwhelm.

        But eventually, LLMs will surely train more and more on their own output. This feels very much like making a copy of a copy of a copy, and watching the image quality degrade with each pass.

        This seems something to be on the lookout for.

        Reply
        1. fjallstrom

          An LLM is only good at summarising and synthesizing data if you don’t care whether the result is correct. If you do care, you need to check it, and then in most cases you can just do it yourself.

          Reply
    3. Skk

      This is a real trip down memory lane. I remember when PCs were bought on a team budget and used almost samizdat style. Until, of course, IT management could not bear the loss of control, and consultants were eager to help with their organisational creations for budget, use, and supervision, creating this thing called End User Computing (EUC).

      I called it the End OF User Computing. I wasn’t too popular with management.

      Reply
  6. lyman alpha blob

    “And now, while earlier they just had to do the job, now they have to train the AI to do the job, make the AI do the job, and check what the AI has done. And in some cases, probably many cases, fix the mistakes being made by the AI.”

    This has been my experience exactly. When I started my current job well over a decade ago, on the first day I was introduced to what was then called “automation” which was supposed to read various documents and input the info from them into our system. I used it once, realized that I had to double check everything because it made mistakes, and decided to get things right the first time by doing them myself. Then we recently got a new system which uses “AI” to read documents and input data. It works as well as the “automation” did over a decade ago – it still routinely makes mistakes on what you’d think would be the most basic data such as document dates, so from the perspective of this end user, pretty much zero improvement over all that time. But it’s called something different. And I’m now required to spend time reporting all the mistakes it makes to help “train” it, while we also pay for the service.

    Also, I belong to an astronomy group and recently one member who is a university professor decided to use “AI” to solve an n-body problem, and he wanted to solve for n=1000. From what I can tell, he wrote some sort of code and fed it into an AI which then produced a graphic representation of the solution, with 1,000 little dots orbiting around each other. He was very pleased with the result and shared it with the group. One person responded with what I’d been thinking – “That’s a nice graphic, but how would we know that it’s correct?”. So far I haven’t seen an actual answer from anyone in the group to that impertinent question, although one other member who is a computer science professor involved with AI development did respond with a rather lengthy post where she admitted that developers don’t really have an understanding about how these “AIs” work, and then proceeded to talk about “internal architecture”, “optimization targets” and “stochastic gradients”, most of which I could not grok.

    From my admittedly very limited experience and understanding, it sure seems like “AI” proponents are taking the Thomas Dolby approach – blind them with science.

    Reply
    1. Craig H.

      Excellent example Mr. blob!

      The most informed and enthusiastic endorsers of the new LLMs all seem to have included the information that their search engine is broken. None of them seem to include the implications of this datum.

      Google search was an AI system from day 1. It is engineered to find for you what you are looking for. Everybody’s search engine is a custom personal search engine. And every time you search for garbage (foremost example is porn) you are degrading the power of your search engine.

      Google search works (almost) perfectly fine for me. I can scan through the first page of search results more quickly than I can read through the paragraphs output of an LLM.

      Reply
    2. Acacia

      Then we recently got a new system which uses “AI” to read documents and input data. It works as well as the “automation” did over a decade ago – it still routinely makes mistakes on what you’d think would be the most basic data such as document dates, so from the perspective of this end user, pretty much zero improvement over all that time.

      Yep, and for a well-known example of exactly this, look no further than the ubiquitous Adobe Acrobat, which can OCR documents.

      For my regular work, I’ve been using Acrobat Pro for 25+ years now, and what strikes me is that over this time its OCR abilities have actually gotten worse. It constantly makes stupid errors, like reading the year 2025 as 2O25, or randomly switching typefaces in the middle of a word, etc. Every sentence it OCRs needs to be double-checked by a human. I have tried various other OCR apps, including open-source Tesseract, even trying to train it, etc., and none of them are much better. As of today, Google Vision is maybe somewhat better, but the user experience is a nightmare.
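      For what it’s worth, only part of that double-checking can even be mechanized. A hedged illustration (a hypothetical helper, not a feature of Acrobat or Tesseract): a script can flag the classic 2O25-for-2025 confusions, but a human still has to decide what the page actually said.

      import re

      def flag_suspect_years(text):
          # Flag four-character year-like tokens that mix digits with O/l/I,
          # e.g. "2O25" or "199l" -- typical OCR confusions a human must resolve.
          tokens = re.findall(r"\b[0-9OlI]{4}\b", text)
          return [t for t in tokens
                  if any(c in "OlI" for c in t) and any(c.isdigit() for c in t)]

      print(flag_suspect_years("Invoice dated 2O25, contract signed 1998."))  # ['2O25']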

      Upshot: if the IT industry can’t build a decent OCR app that doesn’t suck, how can anybody expect a decent “AI” app?

      Reply
  7. noonespecial

    Commenting on Yves’ statement from above, “It is not hard to discern that, irrespective of actual performance, AI is yet another tool to discipline labor.”

    Posting the following since the author of the Baffler article outlines one AI tool that complements Yves’ declaration. The AI tool described, Talent Signal, sounds like an evolution of the already available tools used by management to monitor employee output and worthiness to be kept on the job. I am aware, via NC’s “AI = BS” and other essays, of AI’s propensity for mistakes. However, companies that choose to integrate this scorecard system for employees may just find it much easier to let a machine justify layoffs and exempt HR from those difficult decisions, regardless of the AI systems’ programmed biases.

    https://thebaffler.com/outbursts/hiring-squad-marz

    Over three days last September, more than seven thousand HR professionals and vendors gathered at the Mandalay Bay Convention Center in Las Vegas for the annual HR Technology Conference & Exposition…the conference was oversaturated with more than forty AI sessions…Talent Signal “the first product to use AI to leverage work-data to show employees and managers who is performing at the highest levels, stack rank employees, and give workers and managers detailed development feedback.” What this means is, having trained on the performance data of similar employees, it evaluates all of a customer service rep’s calls, or all of an engineer’s code, from their first three months of work. Talent Signal accomplishes this through integration with the platforms where that work is done: Salesforce, Zendesk, and GitHub. Each employee is ranked “high potential,” “typical,” or “pay attention,” according to how promising they are determined to be.

    Reply
  8. alrhundi

    I’m of the belief that AI has vast potential. There’s a lot of focus on generative applications, but you have to remember that at the core they are a way to translate data from one type to another. It’s really interesting to think you could have something like whale noises being translated into human speech. I currently think that in business environments agents will dominate. Agents will be trained to perform certain specific tasks for automation. We all joke about how bad customer service bots are, but in a few years they could be indistinguishable from humans. Interactive agents are beginning to show up in various settings like the crypto space, where you can have them generate and deploy tokens, solve your intent, and interact with smart contracts, etc. by asking them with text instead of using a UI, for example.

    IMO within a few years Siri, Gemini, etc. will all be personal AI agents in our smartphones that will act as personal assistants, creating calendar events and reminders, even scheduling appointments for us. Hell at some point our AI assistants will be contacting our doctor’s office’s agent to schedule things with no human interaction.

    Hallucinations and false data are a known issue and will continue to be worked on. I do see some level of human validation required in the meantime until things work better, especially in high risk settings like healthcare. It’ll likely be like any technology where you have a specialized person providing oversight to the tech or operating it. From what I have read it’s a common theme with technological innovations. But I’m of the belief that AI can be just as competent as an average person eventually, especially in specific tasks.

    I really liked the article that was posted a week or two ago from, I think, KFF Health, talking about who is liable when something goes wrong from AI use. I have a friend who studies this, actually, and it’s an important thing to work out.

    Reply
    1. lyman alpha blob

      “…in a few years they can be indistinguishable from humans.”

      I hate to break it to you, but human customer service has also been atrocious for decades now, because these low paid workers aren’t supposed to think for themselves to provide assistance to customers, they are there to read from a script handed down by C-suite types and designed to maximize corporate profits, customer satisfaction be damned.

      As to your second paragraph, it sounds like you might like to live in a world where fat and lazy people who don’t know how to actually do anything ride around in their hover carts talking to screens. I don’t want to live in Wall-E world myself, and don’t appreciate it being forced on me by a bunch of clowns in Silicon Valley trying to get rich quick.

      Reply
      1. alrhundi

        That’s not the world I aspire to or imagine. I look at it as more of a digital personal assistant. Not to say I don’t think there are risks involved when it comes to privacy or to learning executive function.

        My phone already suggests calendar events and such based on my text messages or emails and I find that really useful because I store/track those things there anyway. I wouldn’t mind that being automated or streamlined.

        Reply
        1. lyman alpha blob

          I use a pencil to write messages in the appropriate square on my calendar made of paper. No privacy concerns at all, and this year it’s Edward Gorey-themed, making me smile every day. To each their own I suppose…

          Reply
        2. ChrisFromGA

          The big problem I have with AI enthusiasts is that they fail to recognize that these sort of use cases have already been solved. Software agents have been around since the early 2000’s, at least. Agents are autonomous programs that do specific tasks: listen on a TCP port, respond to incoming requests, run business logic, and produce outputs.

          (Actually, that kind of sounds like a basic web app.)

          Similarly, Amazon Alexa has been around for at least 5 years and has “skills” to be a sort of digital assistant, reminding me to take the trash out or feed the bunny.

          Automation has been around for a while, and while AI may make it even better and more useful, that’s hardly revolutionary. More like incremental progress – solid, but not worthy of the hype from the Cramers and Sam Altmans out there.

          Reply
          1. HH

            Current large software applications are characterized by labyrinthine menu structures that only the most adept practitioners can put into cognitive maps. AI interfaces will permit conversational learning and use of even the most complex software tools. This revolution in software interfaces will have profound consequences. “Same as it ever was” is not going to age well as a motto for the AI era.

            Reply
      2. deplorado

        As someone on Twitter said, I don’t want AI to write poetry in Latin for me, I want to be able to do that myself.

        No Wall-E world for me either please.

        Reply
    2. Cat Burglar

      A world of AI personal assistants looks like what Bartolomeo Vanzetti called “increased perfectioned exploitation.” Can Siri get us lives without smart contracts or enforced tight scheduling? That would be real assistance.

      Reply
    3. juno mas

      Don’t count on Siri to get it right. In a conversation with the Chair of the Phys. Ed. Dept. at my community college, I mentioned that Billie Jean King’s (née Moffitt) father played college basketball with ______.
      I couldn’t remember the name, but said “he was the first African-American major league baseball player”. She asked Siri “who was the first Black baseball player in New York”. Siri answered: “Larry Doby”. WRONG! It was Jackie Robinson, who played for the then-Brooklyn Dodgers in 1947.

      Don’t trust any AI.

      Reply
  9. TG

    Ah, but think of the AI potential for crapification. User help lines that run people around in endless circles. Bills sent out that become impossible to contest. Bots snatching up credit card info and signing people up for all sorts of ‘services’ automatically and nobody is responsible. Insurance claims being denied and it’s not really clear how or why. Propaganda automatically being generated at huge volumes and driving everything else out. Phishing attacks of a volume and apparent legitimacy like you can’t imagine. Really, what’s not to like?

    One thing the current generation of AI could be very good at, though, is warfare. If you send 100 AI equipped drones at the enemy, and 50 of them hallucinate and hit empty fields or trees, and 50 of them get it right and hit the enemy, well, that’s a win. Think of fields where a high error rate is acceptable – attack drones, phishing attacks, making cover art for books (you can keep at it until you get one you like), etc. – current LLM AI is perfect.

    Reply
    1. deplorado

      Agree with you.

      I recently watched “Deal of the Century”, 1984, with Chevy Chase, where he is Eddie Muntz, a small-time arms dealer that stumbles upon a chance to sell military drones (although remotely operated, not AI) – the satiric parallels with today leave you gasping.

      No doubt today’s Eddie Muntzes are salivating.

      Hearty recommend.

      Reply
    2. samm

      “One thing the current generation of AI could be very good at, though, is warfare. If you send 100 AI equipped drones at the enemy, and 50 of them hallucinate and hit empty fields or trees, and 50 of them get it right and hit the enemy, well, that’s a win.”

      It seems to have worked for Israel — AI chose all those Hamas targets, and now Gaza is leveled:

      https://www.972mag.com/lavender-ai-israeli-army-gaza/

      Reply
    3. ChrisFromGA

      So, that rules out law, medicine, and finance (things where accuracy kind of matters.)

      Yet, the bastards are pushing AI into all of those fields with no ethical or moral guidelines.

      Can the AMA and the ABA save themselves?

      Reply
  10. PanDuh

    AI, specifically resource-intense and inefficient AI, seems to be what ‘the market’ (a term that needs refinement) wants. I’ve seen enough stories about AI inaccuracy here on NC, and had enough casual conversations within my PMC bubble, that I’m confident the average PMC doesn’t trust AI to be a replacement for any purpose. I’ve seen enough articles over the past year claiming that some new code from a university makes an AI model 100x faster that I was not surprised by DeepSeek kicking other models’ butts. I am not sure why researchers who introduce new, more efficient code don’t become the new rock stars of Silicon Valley. My cynical self thinks the money tied to AI is more about betting on its energy consumption needs than anything it produces. In fact, the less efficient AI is and the less we trust it, the only solution must be more AI (.?!)

    Reply
  11. hazelbee

    To provide a counterpoint to the negativity around AI.

    We use it at work and approach it with curiosity and skeptical optimism.

    Certain use cases are solid, others not so good.

    Anything with a testable output is a good use.

    e.g. It makes a mid or senior level developer faster at their existing work. The output is testable – it either works and passes the tests or it doesn’t. There are many technical disciplines where the output is verifiable in that way.
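    A minimal sketch of what “testable” buys you (slugify() here is a hypothetical function a model might be asked to produce, not anything from a real project): whatever the assistant writes only survives if the human-written tests pass.

    # Candidate implementation an assistant might propose.
    def slugify(title):
        return "-".join(title.lower().split())

    # Human-written tests are the gate, not the model's confidence.
    def test_slugify():
        assert slugify("Hello World") == "hello-world"
        assert slugify("  AI   Hype  ") == "ai-hype"

    test_slugify()
    print("candidate passes")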

    And I’ve seen the DeepSeek open-source model running on a new laptop. To have that level of capability running locally is extraordinary. And keep in mind, this is the worst it is going to be; further refinement and enhancement will see this whole tech improve over the years to come.

    So I agree there is a lot of hype right now. $500 billion for Stargate is bonkers.

    But please look beyond the hype. The field is moving very quickly, there is genuinely huge value in certain specific use cases, and it will only continue to get better.

    Adoption, and awareness of the traps and how to get the value out, will improve. E.g., compare this to early PC usage or early www usage: early adopters first, lots of hype, then mass-market adoption later.

    Reply
      1. NotThePilot

        Not the person you’re replying to, but I think there’s definitely a case for machine-learning models proposing solutions to NP problems (e.g. logistics problems like the traveling salesman, scheduling, etc.) One useful quality of NP problems, though they’re defined in other terms, is that while no known method can efficiently solve them, you can easily check a proposed solution.
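        A minimal sketch of that “hard to find, easy to check” asymmetry, using the travelling salesman example (the distances are invented): pricing a proposed tour takes a few lines, even though finding the best tour is the hard part.

        # Toy symmetric distance matrix for four cities.
        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 3],
                [10, 4, 3, 0]]

        def tour_length(tour):
            # Check the proposal visits each city exactly once, then price it.
            assert sorted(tour) == list(range(len(dist))), "not a valid tour"
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        # A model (or anyone) proposes; verification is cheap and unambiguous.
        print(tour_length([0, 1, 3, 2]))  # 2 + 4 + 3 + 9 = 18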

        And ML models are really good at clever guesses. I think that’s why things like DeepFold (protein folding prediction is NP complete I believe) and the latest AI go/chess engines seem to have genuinely exceeded everyone’s expectations. I can’t think of a specific situation like that that would use an LLM off the top of my head, but there are some relatively mechanical clerical tasks and text-analysis problems out there.

        In a similar vein, ML models can be really good at capturing non-linear effects or residual variances, which IIUC non-Bayesian statistical methods can’t even account for by definition. I don’t know if they market it this way, but I’m pretty sure even the Instant Pot has a small neural network that it uses to fine-tune its own control loop. I think the same goes for most recent cars and maybe even the next-gen aircraft engines.

        And it’s a genuine quality improvement. As long as you still engineer hard limits in the control laws, even if the AI whiffs in the worst possible way, you’re just temporarily reverting to a slightly suboptimal output still within tolerance. Of course, those are the sort of things that never come up in the current AI boom because the AI boom ultimately seems more like a Bizarro-world, prosperity gospel racket than anything.
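        A toy sketch of the “hard limits” point (the numbers are invented; the clamp is the control law, the correction stands in for the learned model): even a wildly wrong suggestion only moves the output a bounded, in-tolerance amount.

        def heater_duty(base_duty, ml_correction, limit=0.05):
            # The model may suggest anything; the engineered clamp keeps the
            # worst hallucination merely suboptimal, never out of tolerance.
            clamped = max(-limit, min(limit, ml_correction))
            return base_duty + clamped

        print(round(heater_duty(0.60, 0.02), 2))  # sensible suggestion -> 0.62
        print(round(heater_duty(0.60, 9.99), 2))  # wild suggestion, clamped -> 0.65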

        Reply
        1. NotThePilot

          Just correcting myself: Google’s protein structure predictor is AlphaFold, and the Google lab that trained it is called DeepMind.

          I’m clearly struggling to keep all the marketing names straight.

          Reply
      2. hazelbee

        I have already stated that in the original comment.

        Whole swathes of software development are vastly accelerated by the tech. That has been true for years; it’s faster and better now. And unsurprisingly, technologists are early adopters of… technology.

        For example:

        rapid prototyping of proofs of concept for user testing, rather than creating designs in Figma (or other tools) and then later moving to an active prototype. Faster iteration like this shortens delivery time and removes risk from the work.

        bug finding – analyse the code for potential bugs and identify them.

        rapid test creation – writing code where the code being written is the tests.

        learning new programming languages – sometimes one doesn’t get the choice… but if you are competent in one language then it is fairly trivial for an LLM to convert from your language of choice to the target.

        learning new APIs (application programming interfaces, the way a system expresses what you can do with it) – this is a huge time sink on many projects. It is much, much faster to learn and work on a new API with AI assistance.

        Use your imagination – these all have in common that there is a human in the loop verifying the output, AND the output is actually verifiable, testable. It’s fundamentally not the same as writing, say, a LinkedIn post, whose success is subjective.

        Other use cases? Anything as an aid to creativity.

        Using AI as a third person, an intelligent rubber duck – it’s a way to explore your own ideas with some active feedback on them.

        There are many, many of them.

        Reply
        1. Yves Smith Post author

          With all due respect, you are blind to what is lost by reducing real world phenomena to models.

          I have a post in process that DIRECTLY connects the terrible state of the performance of our elites to technology and the loss of manufacturing in the US and UK, which has produced a bizarre detachment from physical world operations and an increasing belief in the ability to fix things through non-physical activities like narrative control. I suggest you read Michael Schrage’s Serious Play, on how the selection of modeling media has a direct impact on the design process and quality of output.

          For starters, all of this better coding has not translated into a lower failure rate of large IT projects in corporate settings.

          You also incorrectly depict software as getting better over time when it routinely gets worse due to vendors engaging in perpetual feature bloat to force hardware and software upgrades.

          WordPerfect circa 1994 is vastly superior to Word, which only gets worse all the time. For instance, it won’t let users control outline formats and then the ones it imposes generate unfixable errors.

          Excel is vastly inferior to Lotus Improv circa the mid 1990s.

          Apple keeps crapifying the desktop OS.

          The NeXT computer (proprietary Unix OS) if it could have been souped up with enough memory, would be superior to any OS experience now.

          Don’t get IM Doc started on the utter abortion called electronic health records, which have made the practice of medicine worse.

          Reply
          1. hazelbee

            You are making a gross assumption by saying I am blind to what is lost by reducing real-world phenomena to models. You don’t know who I am or my background well enough to state that.

            on: ” all of this better coding has not translated into a lower failure rate of large IT projects in corporate settings.”

            Define failure. Over what time period? I’d argue the impact of the current AI tech has not been felt yet. And I’d agree that failure rates are high, independent of technology and decade.

            I have not claimed that AI and LLM is some magic bullet to wash away large IT project failure. Large IT projects in corporate settings are primarily messy people related change programmes. A lot of the cost is not in the engineering at all. It is in the cost of change, the inertia of large bodies of people, the complexity of opinion, the many times conflicting objectives, the creeping scope from the lack of initial understanding of the problem, or the impossibility of understanding the initial problem owing to the complexity in the system.

            And there are vast numbers of ill conceived, badly funded, poorly scoped projects and programmes annually. I know . I’ve had to run them. It’s painful.

            But helping engineers handle complexity and get systems into the hands of those commissioning them faster?

            That is at the very heart of good software engineering practice – iterative development, faster feedback cycles – these are ways of removing risk from a change programme by speeding up the learning cycle.

            Now… I do agree that there are a lot of other issues that current AI doesn’t address. Assume the cost of software creation goes to zero. What then? Then we’re in a similar position to the one we’re in now. It’s possible to go very fast in the wrong direction. At that point user research, usability, user experience and service design, business analysis – all those disciplines assume even more importance, as they set the scope for the change.

            They have always been the important part.
            There is a saying around this –
            are we “building things right or building the right things” –
            AI helps with the first part – building things right and quickly.
            The human experience around user experience, research, etc is needed to do the second – build the right things.

            And at the macro level of the corporation? You can spend a lot of money REALLY quickly by building the wrong things.

            That’s why this is a frustrating topic here – both are needed, both are valuable. I am not trying to create an either / or situation, the answer is BOTH are needed.
            AI is super useful to build things right and quickly. That must sit alongside the other disciplines to build the right things.

            Lastly – I look forward to reading the post about elites. And the Michael Schrage link is good, thank you.

            Reply
        2. SocalJimObjects

          If you are writing to a layman, you might just come across as someone who works in the software development world. Almost…
          – Most projects are not agile despite the whole two-week iteration thingy. Management usually wants to know how much money a particular project is going to cost in advance, which violates the Agile Manifesto. The only time I’ve been involved in a truly Agile project with fast iterations was in a startup, and the thing did not make it.
          – Testing setters and getters (non-tech people are not going to get this) is not testing. Sure it’s nice not to have to write those “tests”, but you don’t need AI for this. Heck, automated test generation has been around for a while. See Parasoft for an example; they’ve been hawking their Enterprise testing tool for more than 10 years.
          – “It is fairly trivial for an LLM to translate one language to another.” Now I know for sure you’ve never been involved in any serious development projects. It is particularly difficult, for example, to convert something like Java to C++, because of the different idioms. Among others, Java has automatic garbage collection while programmers have to manually allocate and deallocate memory in C++. If the translator gets this wrong, your C++ program will just crash when it gets to a certain point. Good luck finding the “bug” when it’s a 100,000+ line project. How about translating Java/C++ to something like Haskell, where the latter TOTALLY LACKS support for Object Oriented concepts? At one time I actually had to write a SAS to Java translator from scratch for a given project, and it’s probably one of the hardest things I’ve had to do given how totally different the two languages are.
          – Learning new API? What does this mean? Is there a tool somewhere that will read an API’s documentation and spit out working code? What happens if there’s no/incomplete/wrong documentation, which is par for the course for most projects?

          Your third point alone disqualifies you from making further comments. I haven’t even gotten to deployments, because software means s*** until it gets deployed to a server or two or three. AWS IAM (permissions), among other AWS technologies, is so convoluted that an AI which could actually make heads or tails of it would be REALLY useful. Yet AWS still has a big service organization with a ton of Solution Architects (which usually means: our software is so crappy, we need to help you use it), so why are those people still employed if AI is so great?
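          Back to the translation point above: purely as an illustration for non-programmers, here is a minimal C++ sketch of the kind of bug a naive, line-by-line translation from Java tends to introduce. The `Order` class and function names are invented for the example.

          ```cpp
          #include <string>

          // In Java, an object created with `new` is reclaimed automatically by the
          // garbage collector once nothing references it. A literal translation to
          // C++ keeps the `new` but has nowhere obvious to put the matching `delete`.
          struct Order {
              std::string id;
              explicit Order(std::string id) : id(std::move(id)) {}
          };

          Order* makeOrder() {
              return new Order("A-100");   // translated verbatim from the Java
          }

          int main() {
              Order* o = makeOrder();
              // ... use o ...
              // Java: nothing more to do. C++: without the line below this leaks,
              // and if some other translated path has already deleted it, touching
              // `o` again is undefined behaviour: the crash "somewhere" in a
              // 100,000-line program described above.
              delete o;                    // the line the translator has to invent
              return 0;
          }
          ```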

          Reply
          1. hazelbee

            You are correct that the post is worded for the layperson.

            Neat ad hominem attack. I am not sure what your point is.

            You know nothing about me.

            To your comments on my professional experience:
            I wrote my first program aged 11 on an Atari 800 and have been involved ever since. I’ve professionally used everything from C, C++, Java, JavaScript and Perl (yuk) to ancient Intel assembler (once; never again).

            Is writing peer-to-peer networking software 20 years ago to work on Mac, Linux and Windows “serious” enough work for you? Or a neural networks dissertation back in ’95 at uni? How about writing code-generation tooling to take CASE tool output and produce hundreds of thousands of lines of C++ for a distributed objects framework working against CORBA? Or real-time banking message-broking software supporting capital markets systems? Or being an Apache committer on a tool to support TDD?
            Or translating a system from C++ to Java back when we thought Java was good in the browser (remember applets?). I had to put a sticky note on the C++ developers’ monitors. One word only – “new” :D :D (work it out)

            I agree with you on agile – the way it has been co-opted by, say, the Agile Alliance, SAFe, etc. is a shame. I was there in the early stages back in ’98–’02, e.g. I used to go to the Object Technology conferences in the UK with the likes of Kent Beck of XP fame, Jim Coplien and others. I still work with one of the original signatories of the Agile Manifesto. That promise has unfortunately not manifested in how software commonly gets developed.

            Reply
    1. samm

      It seems to me Yves did say there are specific use cases for AI; that isn’t the problem. The problem is that the companies flogging AI aren’t focusing much on those specific use cases. Instead they have raised several hundred billion dollars and are using that cash to very forcibly shove it down everyone’s throats as hard and as fast as they can. Ignore the hype? With so much silly money choking the very air like so much smog, how can it be ignored?

      Every day when I come into the office, before I log into my workstation, Microsoft has deemed it necessary that I see a message like, “A pharmacy in the Swiss Canton of Bern is using AI to streamline its workflow.” While that’s very nice for the pharmacy, I’ve never actually been to Switzerland, so I don’t see how the info is supposed to be helpful to me.

      There is so much money, so many mouths watering at the success to come, so many economists hoping AI will save capitalism:

      https://thenextrecession.wordpress.com/2025/01/06/assa-2025-part-one-ai-ai-ai/

      So rather than a tool for specific use cases, AI seems to excel at creating the mother of all perverse incentives. Where’s the use case where that turns into a good for society?

      Reply
      1. hazelbee

        Thanks for your considered comment.

        I agree the hype train is loud and hard to ignore. Maybe I should say please try to look beyond the hype? Or peer behind it? Or some variation? And yes, everyone is trying to put this in front of you – I am on Google’s suite, so I get Gemini pushed at me – which is problematic from many perspectives (GDPR, privacy?).

        But I see many negative or poorly informed comments on any AI post and very few with deep knowledge, like the one above from NotThePilot, so I’m trying to redress the balance.

        There are multiple aspects to this. There is at least the “how useful is it really” angle, the “is it a grift” angle, the “but what about the consequences, the climate” angle, and probably others.

        If I compare the current fervour to other hype trains like… crypto, NFTs or the World Wide Web… this is much closer to the latter. Yes, a lot of hype, but genuinely massive utility. I think this is an example of people overestimating the short-term impact and underestimating the long-term one.

        I’d love to see some writing here on “assume it lives up to the hype, what next?”

        Reply
  12. Es s Ce Tera

    My own experience from the fintech world has been that AI has been a topic at every corporate town hall I’ve attended for the last few years; every leader from VP on up to CEO routinely receives questions from the floor about what they are doing about AI, how it’s intended to be used, etc. Without exception, every leader has said AI is simply not ready for anything critical and has an unacceptably high inaccuracy rate, often validated, tested and quantified (e.g. 60%). Such leaders usually say that of course there’s a project or team exploring it, it would be irresponsible not to, and perhaps in future the accuracy will improve, but it seems years or decades from being useful in the here and now. The general sentiment is that this is the latest fad – before that it was crypto, and before that blockchain – and we’re obviously not going to put all our eggs in that one basket; we will only dip toes and be cautious. I don’t know anyone in the sector who is promoting it as an actual workable solution to anything (although there are very many ideas and much feasibility work).

    Compare and contrast with the massive hype I experience outside of this fintech bubble. One thing which probably contributes to the hype is that the average non-technical layperson (I suspect many journalists meet this definition) often uses “AI” interchangeably with “software” or “algorithm”. Apparently everything is AI now: online forms, web pages, speech recognition, your tax or accounting software, email, one’s calendar, non-player characters in games, etc. (Gamers know the difference; they just use “AI” as a term of convenience.)

    And perhaps another factor may be that tech companies are exploiting layperson ignorance to make sales – they simply rebrand a product or feature as “AI” when it is basically an improved Clippy now called “Copilot” or “Office Assistant”. After all, if that’s the language of the customer, you need to speak that language.

    Reply
  13. Adam1

    There is a lot to unpack here and it’s a great piece. In the ’90s, when I was in my late 20s working for a consulting firm, we did a ton of work with telecommunications companies – everyone from start-ups to AT&T. These companies had money to blow, including the start-ups, who were being drowned in money from investors as fast as they could spend it. Well, we all know how that bubble went… So, during that bubble’s deflation I recall being out with some colleagues who loved to reminisce about the heyday. As we were trading stories (and forgive me if I’m off on the specifics, as my memory is trying to go back 20+ years), it became a group joke that someone recalled seeing a business case in about 1994 that was supported by a data point saying data bandwidth needs were doubling roughly every quarter… AND that they had seen that same reference still being used in a different business case for a different company in 1998 or ’99.

    In 1998/99 nobody wanted to upset the apple cart and question the investment wisdom of funding more buried fiber optic cable or more switching gear or anything. There was just too much money to be made on the inflating bubble. Damn anyone who would question what the current rate of bandwidth growth was.

    I think a lot of the same logic is being applied to AI investment today.

    I am very skeptical of any notion that AI is the new panacea, but I do see where it can very readily be deployed in confined situations. What my gut tells me, though, is that even with successful deployments – and I’m not talking about cases where call centers of humans are replaced – it is going to be hard to justify the mounting AI costs, and they are massive… Just a few years ago we were mothballing nuclear plants, and now people are trying to fund new plants and restart old ones. All well and good, but NONE of those endeavors are cheap, and AI is supposed to be the driver of that demand. At some point some lowly accountant is going to remind someone that negative profits can’t be sustained forever, and that’s usually when a bubble pops.

    Reply
  14. deplorado

    Well, what do you know: as of today, DeepSeek access is blocked at my workplace. Justification is privacy, compliance, censorship, etc.
    I don’t doubt this will become the standard. Probably soon to be followed by an act of Congress.
    I can see immediately where this is going: we in the West will close ourselves off further from the more advanced outside world, and will gradually become what the Qing dynasty once was.
    You can put today on the calendar as the inflection point.
    Irony so rich you can spread it on toast.

    Reply
    1. mrsyk

      Interesting. This would seem to be a signal that AI is mostly about its investment bubbleliciousness, rather than being beneficial across society.
      As to replacing members of the administration class, two things. One, before relishing the thought too much, remember that these will be members of the lowest order of admin, the mopes of the PMC class. Two, is the threat of replacement enough to keep these mopes in line, given that AI might cock it up royally and there might be consequences? Which leads us to…
      What of liability? This has been discussed here before in the context of AI in medicine. Upper management is always looking to offshore liability. Will the courts buy it?

      Reply
      1. deplorado

        >>“This would seem to be a signal that AI is mostly about its investment bubbleliciousness, rather than being beneficial across society.”

        Bingo. Nvidia went down 19% after DeepSeek’s announcement last week. It has now recovered somewhat, and the indications, in my view, are that DeepSeek will receive the TikTok treatment – although the case is more complex: there is the matter of Nvidia and other chip sales in China. No doubt Silicon Valley moguls have been on the phone all weekend trying to figure out what to do to protect the AI play.
        Trump’s $500 bln initiative would be all but dead if something is not done ASAP…

        Reply
        1. NYMutza

          Trump has announced that Nvidia will be banned from selling any and all GPUs to China. For now, Nvidia has been allowed to sell degraded chips to China, but that soon will change as the Trump administration attempts to cut off China’s AI air supply entirely. So expect China to develop their own AI chips. And so the game continues.

          Reply
        2. NotThePilot

          I don’t think you’re wrong, and a part of me wonders if the DeepSeek developers (or someone higher up in the Chinese power structure) decided to open-source it with exactly that in mind.

          It’s one thing to block a web domain and tell everyone it’s “evil” or something. It’s another to stop millions of nerds with a BitTorrent link and a Raspberry Pi from telling everyone they know that paying for ChatGPT is a rip-off.

          Reply
    2. Hombre

      This.
      Just like blocking automobiles from China.
      Circle the wagons, retreat into your bubble, make yourself believe you’ll beat them someday.
      In the meantime, move on people, nothing to see here.

      Reply
  15. Chris Cosmos

    AI is going to be hard to integrate into our economic system initially, because the world-view of “the economy” and, subsequently, of the society is very narrow. AI, as it is now, is amazing and mind-blowing as far as I’m concerned, but I recognize that it is hard to make it work until we get into the culture of AI as it is emerging. My guess is that it will take at least four or five years to integrate these systems so as to maximize their effectiveness. I’m not surprised by the fact that it is not productive at present. It would usually take me some time to digest a new programming environment before it could reach its potential and begin to show its virtues.

    Reply
  16. Reader Keith

    Most companies fail or have failed when implementing machine learning projects (an 80% failure rate is an oft-cited statistic), and AI will most likely suffer the same fate unless there is some sort of seismic shift in the potential “use cases”. All of the examples in the article and comments above for the application of AI have been tried with SAS, data warehouses, analytics, etc., and have failed. The ones who succeed do simple things and iterate over time, but may not get a quick quarterly bump because of it. So the bean counters get bored and chase after the next shiny object.

    My fear is that AI becomes a brute-force military tool of drones, etc. Doomer talk, but those seem to be the “killer app”, no pun intended.

    Reply
    1. ISL

      It is a powerful use case – although human soldiers may be better than an AI drone, few are willing to engage in suicide missions. Not an ideal use case to transfer to businesses.

      Another (actually the same) is pattern recognition: pre-screening massive data sets for human review. Which is of most use to government security agencies spying on their citizens.

      Reply
  17. KLG

    AI in medical education, an anecdote: I attended an AI session at a recent international meeting. The most “memorable” presentation in a group of four or five was on using video plus AI to grade medical students on their hand washing skills in preparation for surgery. Basically, AI was completely incompetent at doing what a human being could do correctly 100% of the time no matter the training set. Small example but telling. It reminded me of scientists of my acquaintance who automate everything possible in their research. But this generally only separates them from their data and prevents them from noticing the important discrepancies that lead to real discoveries. And unless the repetitive scale of the task requires a robot, “productivity” is also decreased.

    Reply
  18. JMH

    Hear! Hear! I am an unreliable typist and find learning new software frustrating and intimidating. I was also around for the unveiling of ENIAC. I have a female friend of like vintage who as a young woman refused to learn how to type as she found the idea of being a secretary abhorrent.

    My son has been in IT for nearly 30 years. He refers to AI as machine learning. John Scalzi wrote a nifty Science Fiction story that features an “intelligent agent.” Said agent is created by the kind of hand wave by which science fiction dismisses all those pesky technical details. I detect considerable hand waving among the AI enthusiasts.

    Reply
  19. Froghole

    Is it a coincidence that the mania for AI gathered pace in the wake of the pandemic? The boss class were aghast at the fact that workers were demanding higher wages (and, what was worse, securing them) whilst they were also deciding to quit in large numbers, forcing wages up for the first time in decades, or deciding to work from home (where they could be less easily disciplined). The hype surrounding AI has therefore been partly performative insofar as it is the boss class signalling to their workers – not very subtly – that all the nonsense about diverting corporate profits from dividends and buybacks to workers was a temporary aberration Which Must Now Stop – indeed, must be reversed.

    For much of the history of capitalism the boss class has valued ‘discipline’ (especially the discipline of the sack) even more than it has valued the survival of its own enterprises. For AI, by degrading the labour share of the economy, will, in turn, depress demand and, with it, the corporate profits of many enterprises. The nefarious Altman (whom the Chinese so pleasingly humiliated earlier this week) has spoken about the need for a ‘new social contract’, but if AI eviscerates the profit share of many firms, UBI will presumably become unaffordable, so AI will perforce be used to surveil and coerce the populace in order to prevent them rising up and giving the tech magnates the French Revolution treatment.

    Reply
  20. deplorado

    “It is not hard to discern that, irrespective of actual performance, AI is yet another tool to discipline labor, here the sort of white collar and professional laborers that management would tend to view as uppity, particularly those that push back over corners-cutting and rules-breaking.”

    Love this insight. So, AI is basically a tool to go deeper with bad management and bad social policy. This suggests that its long-term effect may be deeply harmful to competitiveness, at the micro and macro levels, for a society that embraces it uncritically.

    Reply
  21. Societal Illusions

    My company works with AI to, among other things, reactivate dormant leads using SMS. It has proven to be functional and efficient and follows direction well in this use case. We charge based on performance. And just last week I built a website ROI calculator with custom JavaScript, HTML and CSS code. I had no right to be doing this, although I have sufficient writing skill and enough technical acumen to follow instructions, share error messages back and retest the code, and in the end we got there. Together. I could have hired someone to make it for me, but I got to do it myself, learn from it, and immediately make adjustments to design or functionality during the process. But it was like managing a child sometimes.

    This is only a tool. It has the ability to amplify skills and creativity. I find it rarely has deep insights or novel solutions or light-bulb moments that are truly unique or could be called “creativity.” It easily misses the forest for the trees and can get stuck in loops of its own making. But if someone else has solved a problem, it is likely able to share the solution and save time reinventing the wheel.

    Sure, it can save time and effort doing specific tasks it is good at, like reformatting or following explicit instructions for manipulating data or words. Those that use it to allow staff to be more productive – to do more – will see the most benefit. But this has been true of every technological leap.

    I am excited by it and see how it can help with lower-value tasks, which will allow those who use their brains to make great leaps in productivity and output. Is that something to be feared? Or revered?

    Reply
  22. HH

    There is a profound question about the future of labor. What happens to our cherished isms when both proletarian labor and middle class management labor are displaced by AI/robotic systems? The AIs will never be less capable, and there is a ceiling on maximum human ability. Tiny elites pursuing personal causes, hobbies, and vices have been supported by toiling masses since the dawn of civilization. Why can’t the human masses become comfortably supported by the increasingly capable and numerous AIs?

    Reply
  23. aj

    First off, none of this is AI. AI is a branding term applied to what was formerly and rightfully called “machine learning.” The I in AI, for intelligence, is a misnomer, as machines are not intelligent as we define the term; they are just really fast at doing calculations.

    All of the new “AI” models are Large Language Models. They are trained to mimic human speech and writing patterns, and they are very good at it. So good that they trick people into thinking they are actually smart. LLMs are very good at language tasks. For example, you can feed one a large quantity of text and ask it to summarize the text. They are, in fact, really good at this task. They are also really good at coming up with BS internet listicle articles.

    What they are not good at is telling truth from fiction or performing tasks outside of their training sets. Most of these models have been trained on publicly available internet data, and we all know the internet is full of factually incorrect information. We’ve all seen the pictures of hands with random numbers of fingers, or complete hallucinations of made-up data.

    Furthermore, no system is 100% correct all the time. In fact, you don’t want it to be. If I’m trying to identify fraudulent activity, I set the parameters in such a way that the system flags the cases most probable to be fraud. I then need an actual human to perform further investigation to determine whether fraud actually occurred. The problem we see is that several companies (e.g. UnitedHealthcare) treat anything the AI flags as 100% false, and no human is ever involved.
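    As a minimal sketch of that workflow (the struct, field names and threshold below are invented for illustration): the model only ranks cases, and the decision stays with a person.

    ```cpp
    #include <vector>

    // A case scored by some fraud model; the fields are illustrative only.
    struct Claim {
        long   id;
        double fraud_score;   // model output in [0, 1]
    };

    // Flag only the most probable cases for *human* review. Nothing here denies
    // a claim automatically; that final human step is exactly what gets skipped
    // when a company treats the flag itself as the verdict.
    std::vector<Claim> flagForReview(const std::vector<Claim>& claims,
                                     double threshold = 0.8) {
        std::vector<Claim> flagged;
        for (const auto& c : claims) {
            if (c.fraud_score >= threshold) {
                flagged.push_back(c);
            }
        }
        return flagged;
    }
    ```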

    Reply
    1. hazelbee

      This is wrong on two counts.

      Artificial intelligence as a term has been around since before I was born (in the early ’70s). At university in the ’90s we studied expert systems as a type of artificial intelligence, natural language processing, and some very early work on neural networks. Saying that machine learning is the genesis is wrong.

      And no, not all the new AI models are large language models. The latest DeepSeek release is for images, e.g. like DALL-E. This tech uses an LLM for the language processing, but that is only part of the solution. There is an image component as well (clearly, otherwise there would be no image output).

      Reply
  24. samm

    “Why can’t the human masses become comfortably supported by the increasingly capable and numerous AIs?”

    That certainly seems to be the question for the next several decades, but I think it’s roughly the same as, “why couldn’t all those serfs in medieval times be comfortably supported on those large tracts of land?”

    Reply
  25. Watt4Bob

    AI is a perfect stand-in for the sorry state of contemporary “Western Culture”.

    AI = automatically instantiated mediocrity.

    The LLMs AI has been trained on have been the mediocre work product of mediocre teams of mediocre thinkers, led by guess who?

    The poor performance of AI trained by these LLMs should come as no surprise, and doesn’t, to those of us who have a basic understanding of what “Move Fast and Break Things” means in practice …

    … and what the release of DeepSeek has revealed about our commitment to mediocrity, evidenced in Trump’s attempt to help his Tech-Bros build a $Trillion money-moat around their almost sure-fire, pending failure (already backed up by ‘Plan B’: use all that $Trillion in infrastructure to mine crypto).

    I predict they will attempt to ban DeepSeek, and/or Trump will announce the sale of a $Trillion worth of F-35s to Taiwan.

    Reply
    1. Skk

      It’s worth reading Ed Zitron’s take on this:

      “Fat, happy and lazy, and most of all, oblivious, America’s most powerful tech companies sat back and built bigger, messier models powered by sprawling data centers and billions of dollars… it’s about how the American tech industry is incurious, lazy, entitled, directionless and irresponsible. OpenAI and Anthropic are the antithesis of Silicon Valley. They are incumbents, public companies wearing startup suits, unwilling to take on real challenges, more focused on optics and marketing than they are on solving problems, even the problems that they themselves created with their large language models.”

      I read Karpathy’s tweet on DeepSeek just after Christmas and, being retired, I treat this as a hobby, so I was going to download the models whenever.
      Then this blew up on Twitter last Friday, so I downloaded the code and the research paper and was going to read, understand and use them, mixing it in with my quantum computing code and study… all very nice and easy…

      But now… the number of tweets with research-level links, code links, variants, the Alibaba competitor to DeepSeek, and papers in just 4 days is overwhelming. I’ve got 31 entries bookmarked to study…
      Oh, to be young again… Maybe not. If someone retired, now just a hobbyist, finds this overwhelming, I wonder how anxious and stressed those in the field are. And the corporatists and financiers of the “old, obsolete” AI paradigm must be cr*pping in their pants.
      How useful this is going to be, and where… I have no idea yet. Let me study those papers and see what I can see.

      Reply
      1. Watt4Bob

        Thanks for the reply, and introduction to EZ’s writings.

        I retired last year and want nothing to do with computers other than using them for music production, which was the original reason I started working with them in 1983 or thereabouts.

        I started working in network administration in the early 90s and had extensive experience cleaning up databases as prep prior to analysis.

        I was never, in 30 years, able to convince bosses that clean data was necessary to ensure honest, useful answers. (My queries made $Money.)

        But they were always happy with answers they ‘liked’, and would gladly base action on BS results that harmonized with their biases, rather than put any effort into understanding results they didn’t like.

        Over the years I was tasked with giving out-side vendors access to our customer database on dozens of occasions. I was never able to get the boss/owners to reconsider these decisions.

        If the vendor promised they could sell even 1% more next quarter, then “Give them what they want.”

        What they wanted was “All of it”, and I could tell they put no effort into cleaning it up prior to doing their ‘magic’, because their output was always sketchy and full of useless garbage that usually resulted in thousands of undeliverable mailings being returned every other month.

        Fat, happy, lazy, and, oblivious.

        Exactly!

        Reply
        1. lyman alpha blob

          A math teacher friend of mine was really taken aback when a supervisor who was not a math person told her that “bad data was better than no data.” Truly a facepalm moment.

          Reply
  26. Ram

    The original ChatGPT produced rubbish code with no reasoning as to why it did what it did. DeepSeek produced code with very few logic errors, and I could see why those logic errors were made. A few more prompts clearing up the logic confusion and I got working code. Probably saved 2 hours of work. The problems I see are:
    1. It works great for small tools, but will it understand a million lines of code and then produce working code on top of that?
    2. OK, it wrote 10,000 lines on top of the million; now how will I incorporate this code into the LLM’s “brain”?
    3. Reading other people’s code and understanding it is a pain. Keep using AI and no human will have knowledge of the code. If the AI screws up, it will be very tough to fix.

    Point 3 is why code gets crappified as people move around. Now it will get worse, faster, with LLMs.

    Reply
  27. esop

    I read on Bloomberg that the importance of AI was to help develop a tool to crack encryption codes everywhere. Can that happen?

    Reply
  28. SocalJimObjects

    The technology behind AI, the so-called Transformer architecture (https://research.google/blog/transformer-a-novel-neural-network-architecture-for-language-understanding/), was originally intended to improve machine translation. After 8 years, I have not noticed any significant improvement in Google Translate, so why should I believe that the result would be different for LLMs? In order to keep the grift going, we’ll soon hear about the mrNAI platform, guaranteed to snow crash even the hardiest of skeptics. Heck, might as well put Anthony Fauci at the head of the program.

    Reply
  29. ISL

    I am not convinced at all of the general-case benefit of email (versus a quick call). Most of the time it seems a productivity de-enhancer. Same for my cell phone. If I want to get work done, I leave it in the truck for the morning.

    Reply
  30. hazelbee

    OK, let me try something new. A thought experiment for the commentariat here.

    Assume that:
    – AI only gets more competent from here.
    – the cost falls, as with every tech.
    – the cost of software creation trends to zero.
    – this becomes ubiquitous, whether as open-source, self-hosted models or by other means.

    What could that mean?
    Well…
    How many charities could benefit from a massive reduction in technology costs? Or an increase in capability?
    Perhaps, say, cloning marketplaces but for sharing, or writing software to support right to repair, or keeping a digital eye on the digital activities of offending companies? Or easier processing of sensor data to support environmental charities?

    What about using the vast quantity of content here to create a digital Yves or Lambert, or a combination of all the authors… to be able to do first-pass moderation of comments without a human in the loop? You could prompt it to look for the various fallacies and things against site policy – e.g. ad hominem attacks, making shit up. It’s possible to turn a comment into a verification plan, and then use a different verifier to check it. Anything doesn’t look right? Flag it for review.
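    Purely to make that two-pass idea concrete, here is a hedged sketch: `modelComplete`, the prompts and the function names are all invented placeholders rather than any real API, and a production version would need far more care.

    ```cpp
    #include <string>

    // Hypothetical stand-in for a call to a self-hosted model; it returns a
    // canned string so this sketch compiles without any real service behind it.
    std::string modelComplete(const std::string& prompt) {
        (void)prompt;
        return "[model output would appear here]";
    }

    // Pass 1: ask the model to turn a comment into a checklist of factual
    // claims and possible site-policy issues (ad hominem attacks, made-up facts).
    std::string draftVerificationPlan(const std::string& comment) {
        return modelComplete(
            "List the factual claims and any site-policy issues in:\n" + comment);
    }

    // Pass 2: run the plan past a second prompt (or a different model); anything
    // it cannot verify sends the comment to a human moderator instead of being
    // acted on automatically.
    bool needsHumanReview(const std::string& comment) {
        const std::string verdict = modelComplete(
            "Check each item and answer OK or FLAG:\n" + draftVerificationPlan(comment));
        return verdict.find("FLAG") != std::string::npos;
    }
    ```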

    Or use the same content to create a digital research assistant – process the daily output from the various news outlets that typically surface in Links or in articles, but with a “Naked Capitalism” point of view. Use that to flag interesting articles, blogs, etc. for follow-up research.

    I put all of those forward as positive enhancements, not negative ones.

    Reply
    1. Yves Smith Post author

      First, you ignore the point of this post, that AI is intended to and likely will increase income inequality due to its use to reduce the number of employed white collar and tech-skilled blue collar workers (think also “pink collar” workers like nurses). So how many charities will there be in a world of plutocrats and serfs?

      Second, AI will be used as a justification to greatly reduce money spent on education. Why bother creating intellectually skilled people if you have AI? This will above all lead to the discouragement of critical thinking, since critical thinkers are uppity.

      Third, AI will make it easier to create intellectual frauds in bulk and can readily pollute what passes for knowledge.

      Reply
      1. hazelbee

        Or, alternatively, I don’t ignore the point of the post, but I don’t agree with your conclusions?

        That is part of what comments are for, is it not? To correct, challenge, and take issue with what is posted, and thereby strengthen the whole?

        To your second point – my son has been encouraged to use ChatGPT, Claude or the like to help explain certain concepts in Maths if the teacher is not present. This is for Further Maths concepts, pre-university. He is learning to use it and learning to learn with it. It is speeding up his learning. The machine is a patient tutor. And with the right skills you don’t just get taken to the answer, you get to navigate through to it, exactly as a tutor might do. In that worldview, machines can encourage and build our critical thinking skills.

        So AI can be a powerful tool for education. Or it can be a justification to greatly reduce money spent on education.

        To your third point – I’m not entirely sure what you mean by intellectual fraud, but technology already has the ability to do this in bulk.

        Reply
          1. hazelbee

            Because my son can answer the questions correctly when it comes to a test in class, i.e. maths is verifiable, and his passing tests later verifies the learning.

            And, more simply, because, bizarrely, UK course books have the answers in the back of them!

            I was surprised at that. They are not always correct – we ended up proving an error in one of the answers. The teacher was very confused at the level of effort from my son – she had the later version, with the error corrected…!

            Reply
