The Problem with AI

Yves here. Richard Murphy gives a good, compact treatment of some of the inherent limits of AI, particularly in professions (he focuses on accountancy and tax, but the same arguments apply to medicine and law). A big one, which I raised many, many years ago as data mining greatly reduced the number of entry-level jobs, was that junior scut work like legal research trained new professionals in the nuts and bolts of their work. Skipping over that meant they’d be poorly trained. I saw that in the stone ages of my youth. I was in the last group of Wall Street newbies that prepared spreadsheets by hand and obtained the data from hard copies of SEC filings and annual reports. I found that my juniors, who downloaded sometimes erroneous but never corrected data from Compustat, had a much weaker understanding of how company finances worked.

By Richard Murphy, part-time Professor of Accounting Practice at Sheffield University Management School, director of the Corporate Accountability Network, member of Finance for the Future LLP, and director of Tax Research LLP. Originally published at Fund the Future

Summary

I believe that while AI has potential, it can’t replace human judgment and skills in many professions, including teaching, medicine, and accounting.

AI might automate certain tasks, but it lacks the ability to interpret nonverbal cues and understand complex, real-world problems.

Professionals need experience and training to provide human solutions, and AI’s limitations make it unsuitable as a replacement for deep human interaction and expertise.

The Guardian’s Gaby Hinsliff said in a column published yesterday:

The idea of using technology as a kind of magic bullet enabling the state to do more with less has become increasingly central to Labour’s plans for reviving British public services on what Rachel Reeves suggests will be a painfully tight budget. In a series of back-to-school interventions this week, Keir Starmer promised to “move forward with harnessing the full potential of AI”, while the science secretary, Peter Kyle, argued that automating some routine tasks, such as marking, could free up valuable time for teachers to teach.

She is right: this is a Labour obsession. The drive appears to come from the Tony Blair Institute, its eponymous leader having had a long history of misreading the capacity of tech, little of which he ever seems to understand.

The specific issue she referred to was the use of AI for teaching purposes. AI enthusiasts think that it provides the opportunity to create a tailor-made programme for each child. As Gaby Hinsliff points out, the idea is failing, so far.

I am, of course, aware of the fact that most innovations have to fail before they can succeed: that is, by and large, the way these things work. It would be unwise, as a consequence, to say that because AI has not cracked this problem as yet it will not do so. But, even as someone who is actively incorporating AI into my own workflow, I see major problems with much of what Labour and others are doing.

The immediate reaction of the labour market to AI would appear to be to downgrade the quality of the recruits now being sought, as employers think that AI will reduce demand for those with skills in the future. And yes, you did read that right: the assumption being made is that specialist skills will be replaced with AI in a great many areas. Graduates are being hit hard by this attitude right now.

In accountancy, for example, this is because it is assumed that much less tax expertise will be required as AI will be able to answer complex questions. Similarly, it is assumed that AI will take over the production of complex accounts, like the consolidated accounts of groups of companies.

Those making such assumptions are incredibly naive. Even if AI could undertake some parts of these processes, there would be massive problems created as a consequence, the biggest of which by far is that no one will then have the skills left to know whether what AI has done is right.

The way you become good at tax is by reading a lot about it; by writing a lot about it (usually to advise a client); and by having to correct your work when someone superior to you says you have not got it right. There is a profoundly iterative process in human learning.

Employers seem to think at present that they can do away with much of this. They do so because those deciding it is possible to eliminate the training posts have been through them and, as a result, have acquired the skills to understand their subject. They do, in other words, know what the AI is supposed to be doing. But when those fewer people who will now be recruited reach a point of similar authority, they will not know what the AI is doing. They will just have to assume it is right because they will lack the skills to know whether that is true, or not.

The logic of AI proponents is, in that case, the same as that used by the likes of Wes Streeting when they advocate the use of physician associates, who are decidedly only partly trained clinicians now working in the NHS, and even undertaking operations, without having anything like the depth of knowledge required for the tasks asked of them. They are trained to answer the questions they are given. The problem is that the wrong question might have been asked, and then they both flounder and cause harm.

The same is true of AI. It answers the question it is given. The problem lies in the question that is not asked – and very rarely does a client ever ask the right question when it comes to tax. The real professional skill comes from, firstly, working out what the client really wants; secondly, working out whether what they want is even wise; and thirdly, reframing the question into one that might address their needs.

The difficulty in doing that is that this is an issue all about human interaction, but one which also requires that the whole technical aspect of the issues being looked at (which usually involve multiple taxes, plus some accounting and very often some law) be understood so that the appropriate reframing can take place. All of that requires considerable judgement.

Do I think AI is remotely near undertaking that task as yet? No, I don’t.

Am I convinced that AI can ever undertake that task? I also doubt that, just as I doubt its ability to address many medical and other professional issues.

Why is that? It is because answering such questions requires an ability to read the client – including all their nonverbal and other signals. The technical stuff is a small part of the job, but without knowing the technical element, the professional – in any field, and I include all skilled occupations of all sorts in that category – has no chance of framing their question properly, or of knowing whether the answer they provide is right or not.

In other words, if the young professional is denied the chance to make all the mistakes in the book, as would happen if AI replaced them, then the chance they will ever really know enough to solve real-world problems posed by real-world people is very low indeed, not least because almost no one who seeks help from any professional person wants a technical answer to any question.

They want the lights to work.

They want the pain to go away.

They want to pay the right amount of tax without risk of error.

They want to get divorced with minimum stress.

The professional’s job is not to tell them how to do these things. It is to deliver human solutions to human problems. And they can’t do that if they do not understand the human in front of them and the technical problem. Use AI to do the tech part, and what is left is a warm, empty, and meaningless smile that provides comfort to no one.

I am not saying we should not use AI. I know we will. But anyone thinking it can replace large parts of human interaction is sorely mistaken: I do not believe it can, precisely because humans ask utterly illogical questions that require a human to work out what they even mean.

And that’s also why I think Gaby Hinsliff is right to say that AI can only have a limited role in the classroom when she concludes:

It’s true that AI, handled right, has enormous capacity for good. But as Starmer himself keeps saying, there are no easy answers in politics – not even, it turns out, if you ask ChatGPT.


40 comments

  1. GramSci

    «The error catastrophe of aging, originally proposed by Leslie Orgel in 1963, argues that copying errors in DNA and the incorrect placement of amino acids in protein synthesis could aggregate over the lifetime of an organism and eventually cause a catastrophic breakdown in the form of obvious aging. »

    https://www.allthescience.org/what-is-error-catastrophe-of-aging.htm

    It’s like each ‘advance’ in AI technology is another sign of advancing technological age. Every year, more copy errors go unnoticed, until there’s no hope of correcting them.

  2. SocalJimObjects

    Seems like complexity is the enemy of AI. I’ve lived and worked in Singapore before, and over there the government will pretty much calculate everyone’s taxes at the end of every year, and all you have to do is log in to the tax authority’s website to see your bill. The solution to complexity is simplicity, but that also removes the need for expensive accountants and tax advisors. I am not advocating the idea that AI can solve everything, but I am sensing perhaps just the tiniest bit of NIMBY from the article?

    The problem here is not with AI, rather you can’t fix complexity with more complexity in just about any field, like you can’t fill a hole by digging a bigger hole.

    1. Joker

      The biggest enemy of AI is people missing the “artificial” part. It’s like missing the “artificial” part in artificial grass, sending your (non-artificial) cows to graze on it, and expecting to get milk.

      1. Revenant

        Richard Murphy published this article a day too early!

        There is an even better skewering of tech charlatans like Blair in today’s Guardian, by disillusioned New Labour stalwart Andrew Rawnsley:

        ‘There’s irony in the rather breathless way Blair evangelises about AI. When he was prime minister, the late Paddy Ashdown showed him email for the first time. “It will never catch on, Paddy,” he said. Blair told one of his sons, who works in the tech sector, that he had agreed to speak at a symposium on cryptocurrency. “What should I tell them?” he asked. Came the reply: “Tell them you’re sick.”’

        https://www.theguardian.com/politics/article/2024/sep/01/tony-blair-on-leadership-book-interview-starmer-ai-trump

        The problem with AI is, as ever, people with ideological axes to grind finding in technology new hope (for the credulous among them) or new propaganda (for the cynical among them) for their nostrums.

        (Rawnsley mostly gives Blair space to hang himself on his irrepressible belief in his own rightness and betray how he sees Starmer as the vessel of his reincarnation. To save you having to wade through Blair’s oleaginous cant, the rest of the section on his tech Messianism concludes:

        ‘In the book, he remarks that he does not have a “scientific mind” and confesses to “huge gaps” in his understanding of technology.

        I suggest he’s been bedazzled by the self-interested boosterism of the tech companies with their claims that AI is the future of everything. He leans towards me to argue back: “You bring together 25 of the top companies in the UK and say ‘how important is the technology revolution to you?’ And each one of them will tell you it’s vastly important.”

        “If it were mere hype, I don’t think you’d find the top companies in China to be technology companies. I don’t think you’d find the emerging ones in India to be technology companies. OK, there’s always a risk of over-hyping, but I would say the bigger risk now is under-hyping.”

        As prime minister, he tried and failed to introduce state ID cards. He argues that digital ID will “happen eventually” and is contemptuous of resistance to it. “People give more information to Netflix and Amazon than they do to government for a digital ID. It’s ridiculous.” Nor does he share any of the qualms about exploiting NHS health data for medical research. “I personally have absolutely no problem with that at all. Take some of the rare diseases. If countries were sharing their health data on some of the rare diseases, you would accelerate by a factor of five, 10 times the research in these things.”

        He also believes that AI will provide “really, really good tools for governing more efficiently”. Linking that thought to his book, I wonder whether one day we might have AI leaders? Though I put this to him in a mischievous spirit, he takes it extremely seriously. “It’s the great question. If you look at AI as a co-worker, and not as a replacement, wouldn’t you, before you did a particular policy, if you had all the available data, to assess what the likely outcome of that policy would be, wouldn’t that be a good thing?”

        “OK, I’m about to introduce policy X and probably in different countries people will have tried to do X. What’s been the experience?”‘)

    2. SchemingNemo

      Perhaps that applies to overly complex accounting rules, but not to most other complex endeavors, which are complex because the world is complex.

      Even in computer programming the usefulness of AI is extremely limited because, as in almost every other field, the thorniest problem isn’t the solution but the question. Once one has asked the right questions, the technical solutions become almost trivial.

  3. vao

    I am sensing perhaps just the tiniest bit of NIMBY from the article?

    The post and the foreword make it clear that these issues are to be found in a number of other fields. The article explicitly mentions medicine — a domain in which complexity is intrinsic and cannot be simplified away. In fact, every field where the elicitation of requirements and needs is a major activity will face the same difficulties. And that is even before going into the distinction between “tame” and “wicked” problems (where simply applying or combining previous solutions or a variation thereof is not feasible).

    [ This is an answer to SocalJimObjects above. ]

  4. ilsm

    If AI gave a better search…..

    I mostly search things I once knew, retired, getting old.

    The AI imposed on search engines is frustrating; it seems to think it knows what I want to find. Worse, it presents a laundry list of terms, and often none of them is what I want.

    It does not give me a decent search. Judgement!

    1. WG

      Seeing this at the top of search engines now, it was initially the first thing I read. But I found it terribly superficial: long-winded, without any depth. It just covered as many basics as possible, with no perspective as to what was important. I instantly scroll past it now. Wikipedia, like it or not, is at least 5x more valuable for finding information about things. Just about anything below the AI explanation is as well.

  5. john r fiore

    AI is coded by human beings…and thus is inherently susceptible to a wide range of errors….as the article states, AI flounders and is worthless when faced with questions not in its system, or faced with humor or satire or anything really “human”…and any government “leader” who promotes AI for teaching, medicine, anything requiring give and take, should be thrown out of office immediately for stupidity…

  6. KLG

    I have fought the losing fight in my area of molecular cell biology to implore scientists and scientific workers to stay close to their data so they notice the little things that lead to discovery. Max Delbrück famously said that you must leave enough room in your experiments for the unexpected result. On several occasions this led me or my research group to make real progress, albeit on a small scale (which is the only scale 99.9% of the time). The odd but productive result is much more likely to be noticed by a committed human being than any form of algorithmic intelligence.

    I’ll return to my corner now.

  7. InterestedParty

    @ilsm

    I just bumped into Perplexity, which bills itself as a search engine that uses AI to synthesize an answer from info retrieved from multiple sources. I haven’t experimented with it yet, so can’t say how useful/successful it is.

    https://www.perplexity.ai

    HTH

    1. Bugs

      My test of these things is to ask questions about the war in Ukraine. Never once have I got an answer that lays any fault at the feet of the Kiev government. Give it a try.

      1. Greg Taylor

        OK. I’ll bite. I have a few dozen local LLMs and most are biased toward Ukraine. Of the three Chinese models I use (Qwen, DeepSeek, and InternLM), the best appears to be InternLM 2.5 (20 billion parameters, from Shanghai AI Lab). Shanghai AI Lab also makes a larger 104-billion-parameter model that I’ve not tried – probably a bit better.

        LLMs are a lot like news sources: you have to account for how they were likely trained and biased before interpreting their output. You need to use several, especially if asking about economics or foreign policy (a minimal sketch of running the same prompt across several local models follows the answer below).

        As you assess internLM’s response to the following question, compare it to what you would find on Wikipedia or an internet search:

        “What were the most significant issues which led to the on-going war between Russia and Ukraine?”

        internLM:

        The ongoing conflict between Russia and Ukraine has been rooted in a complex interplay of historical, political, economic, and social factors. Here are some of the most significant issues that have contributed to this protracted tension:

        1. **Historical Ties and Territorial Disputes**: The relationship between Russia and Ukraine is deeply intertwined, with Ukraine having been part of the Russian Empire and later the Soviet Union. One of the most contentious issues is Crimea, which was annexed by Russia in 2014 following a controversial referendum. This move was widely condemned by the international community, especially NATO and EU nations.

        2. **Political Aspirations**: Ukraine has sought closer ties with Europe and has expressed ambitions to join NATO and the European Union, which are seen as a direct challenge to Russian influence in the region. Russia views its neighboring states’ alignment with Western institutions as a threat to its sphere of influence.

        3. **Economic Interests**: Energy is another crucial point of contention. Historically, Russia has supplied natural gas to Ukraine and many other European countries through a network of more than 13,000 km of Ukrainian pipelines. Disputes over gas prices, transit fees, and debts have often led to supply disruptions.

        4. **Geopolitical Strategy**: The power dynamics in the post-Soviet space are central to the conflict. Russia’s actions in Ukraine can be seen as part of a broader strategy to prevent NATO enlargement and maintain regional dominance. Meanwhile, Ukraine and its Western allies view these actions as attempts to limit Ukrainian sovereignty and democratic development.

        5. **Internal Conflicts**: Within Ukraine, there are significant regional divides. The eastern regions of Donetsk and Luhansk declared independence from Kyiv shortly after Russia’s annexation of Crimea in 2014, leading to ongoing conflict with pro-Russian separatists.

        6. **Humanitarian Crisis**: The conflict has resulted in a significant humanitarian crisis, including thousands of deaths, injuries, and displacements. Human rights organizations have reported abuses committed by both sides.

        7. **International Sanctions and Diplomacy**: Western countries have imposed several rounds of sanctions on Russia over its actions in Ukraine, aimed at pressuring Moscow to change its behavior. These economic measures have had a significant impact on the Russian economy but have not led to a resolution of the conflict.

        8. **Minsk Agreements**: Various attempts to resolve the crisis through diplomatic channels, notably the Minsk Protocols (Minsk I and II), have failed to achieve lasting peace. The agreements called for ceasefires, political reforms in Ukraine, and local elections in the breakaway Donbas regions with extensive autonomy, but these measures have not been fully implemented.

        The situation remains fluid, with ongoing diplomatic efforts and sporadic violence contributing to an unresolved conflict that has significant implications for regional and global security.
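
        For anyone who wants to try the same comparison, here is a minimal sketch of sending one prompt to several locally served models and printing the answers side by side. It assumes the models are exposed through an Ollama-style REST endpoint on localhost; the model tags are illustrative and will depend on what you have pulled locally.

        ```python
        # Minimal sketch: send the same question to several local models and print
        # each answer. Assumes an Ollama-style server at localhost:11434; the model
        # tags below are illustrative, not a recommendation.
        import json
        import urllib.request

        QUESTION = ("What were the most significant issues which led to the on-going "
                    "war between Russia and Ukraine?")

        MODELS = ["internlm2:20b", "qwen2:7b", "deepseek-llm:7b"]  # substitute your own

        def ask(model: str, prompt: str) -> str:
            """Send one non-streaming generate request and return the response text."""
            payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
            req = urllib.request.Request(
                "http://localhost:11434/api/generate",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp).get("response", "")

        for model in MODELS:
            print(f"=== {model} ===")
            try:
                print(ask(model, QUESTION))
            except OSError as err:  # server not running, model not pulled, etc.
                print(f"(no answer: {err})")
            print()
        ```

        Reading the answers next to each other does not tell you which model is right, but it does make the disagreements between them visible.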

  8. oliverks

    This article might be an interesting look at the state of AI for some readers.

    The State Of AI

    The problem Richard Murphy describes can be summarized as AI being misapplied. With LLMs, I think we are now starting to move towards projects that make sense and to utilize the power of the models to solve interesting questions.

  9. Peter Y Connor

    Having asked AI a few questions, the problem with its responses was that they were half-baked… In other words, they provided surface information, which was worthless without the deeper understanding and explanation that it was apparently unable to supply…

  10. TomDority

    AI in Law and Politics is here. As it seems a lot of the time, the good that AI can do is instinctively the starting assumption… people generally are social and believe others are good and strive to make things better… at least that is what I believe, and so do others with whom I associate. I am sometimes thought of as being naïve… I am not really… but that would be too long-winded to explain.
    What is most worrisome to me is that knowledge is a public good, just like education, good health, transportation, electric infrastructure, etc. It is only when these public goods are put to use for private gain and power that the advancement changes from public good to public pain.
    I can’t remember who said something along the lines of ‘Large corporations have begun to treat government as a mere appendage to their concerns’.

  11. The Rev Kev

    In spite of the fact that I dislike AIs for what they represent – replacing skilled workers with dodgy programs at a high environmental cost – there is a time and place for them. But here is the thing. They are being deployed like we have the AI models from 2050 available but we don’t. What we have are mostly experimental beta models that are being thrown into the front line to see if they work or not. Maybe what is needed is for a coupla corporations to crash and burn using these AIs to get the word out that they are not as good as some people think. They work from a model of the real world, not the real world itself. And if there is one thing that can be counted on, it is that the real world will throw up all sorts of unexpected complications which will not match the model that the AI is using.

    1. wsa

      They are being deployed like we have the AI models from 2050 available but we don’t.

      This is one of the most consistently exasperating things for me when talking with technophilic fans of AI — they constantly mix the present with an imagined future that is in no way certain. The strongest AI enthusiasm is fairly obviously a religious conviction. Some of the people working on this tech will happily tell you that they are trying to build a “god-like AI” (their words). Consequently, prophecy gets mixed in with the PR and any arguments about what AI is capable of. This can make it difficult to have productive conversations.

      1. Anonted

        The idea it will become god-like is moot. Will it enable humans (corporations, really) to possess god-like abilities? It already does. It’s a matter of which data sets you have access to. The generative LLM part of AI is but one facet of its being; LLMs, e.g., are more than sufficient to operate as the human interface for more structured systems with limited domains and predictable ranges… Holistically, AI’s true usefulness is its ability to parse and manage vast data sets, and to establish otherwise opaque relationships between data points as innocuous as your eye movements and as granular as your phone’s accelerometer readings. It’s an underwriter’s dream… or nightmare, I guess. I’m sure it makes for light work at the NSA and the Pentagon, albeit in packages we are yet to be made aware of, ‘Lavender’ being our first exposure to such. I am intrigued by how it might be employed in managing a battlefield.

  12. Watt4Bob

    Everyone wants something for nothing.

    In particular, they want production without the expense of labor.

    This impulse eventually explains the reality of class-warfare.

    Much of the drive to develop AI is rooted in a sort of eliminationist mind-set that inevitably finds the simplest solution the most attractive.

    Yes, not having as many, or maybe any, employees would solve your ‘labor problems‘, but that is a ‘pie-in-the-sky‘ flavored dream that is being offered by one set of cynical billionaires to other billionaires, both groups having way too much money and spending it on the sort of shenanigans that used to be prevented by a progressive tax code.

    Google solving the problem of search also fueled the problem of surveillance, and the massive server farms necessary to make all of this work.

    Once you have a bunch of massive data centers, you inevitably have to think of some other services for them to deliver.

    How do you get people to invest tens of billions of dollars in your AI start-up?

    Tell them you’re going to eliminate their need for employees.

  13. lyman alpha blob

    AI may eventually be useful for some things, but it’s godawful at a lot of them now.

    When I started my current job about 13 years ago, there was what was then called “automation” that was supposed to read various documents and create a transaction automatically, eliminating the need to key in every transaction. Except it couldn’t do it accurately, so we had to double-check the work. Once I understood that I’d still have to go over every transaction, I never used it again, figuring it would be more efficient to do the data entry myself and get it right the first time.

    Today we have a new ‘modern’ system, developed by one of the biggest Silicon Valley software titans, and it has an AI feature which is also supposed to read docs and create transactions. Thirteen years later and this ‘modern’ one can’t even read the date from a document accurately and consistently, not to mention many other errors. But now I don’t have the option not to use it, and I have to report every error it makes, so my company (and presumably thousands of others) pays Silicon Valley to train their AI for them. Nice work if you can get it.

    No way should the Altmans of the world have been allowed to loose these menaces on the entirety of humanity, spewing out half baked nonsense a large part of the time. And the people who let these things loose and the politicians who support them are the same crowd who are oh so concerned about the spread of “misinformation”. Let these things run for a few decades and once the older generations who actually had some expertise die off, nobody will know anything.

  14. Lefty Godot

    Here is a Buddhist view of AI: Meaningless

    The essence of it, I think, is that there is no meaning being conveyed by AI, unlike what is supposed to happen in most real communication. It’s all prediction based on word frequencies in phrases and sentences. And if that sort of meaningless jabber drowns out real thought and speech and writing, we as a species further lose our grounding in reality, as is already happening with, for instance, derivatives and other neoliberal financial trickery, “woke” delusions, denial of COVID, denial of climate degradation, etc.

  15. JE McKellar

    Murphy here points out the critical issue with AI/large language models: the junction between the ‘human’ and the ‘technical’. What a good accountant, lawyer, doctor, teacher, or waiter actually does is translate between two very different languages built to operate in two very different worlds. LLMs, as far as I can tell, tend to assume that there is just one big language that humans use for everything; no code-switching, no dialect, no subtle shifting between jargon, euphemism, pleasantry, and bluntness. Much less understanding of what needs to remain ambiguous or unspoken when trying to cram a human being into a bureaucratic pigeon-hole.

    Murphy’s other point, about low-level work being important for training, is also important. Many of our recent corporate failures seem to stem from the inability of the higher-ups to understand what the line workers do, and their refusal to listen to the workers when things begin to go wrong. Replacing workers with AI means no more complaints to listen to, and thus upper management will be perfectly ignorant of any problems that may arise.

    Furthermore, AI/LLMs have increasingly taken over the role of an index or search function, where the AI is used to pull out relevant data from a big pile of information. To do so, the AI has its own way of systematizing information, its own internal black-box Dewey decimal system, but no human has any real access to that classification system. There’s a lot wrong with that kind of hidden classification, but one issue is that humans learn a lot by building, maintaining, and re-building those classification systems themselves.

  16. Susan the other

    The good news is we have a keen sense of the absurd. RM’s “warm, empty meaningless smile” is the best food for a healthy sense of humor. Absurd, annoying, offensive, and funny all at the same time. It’s gonna be interesting, all the simplification required to make AI meaningful. Kinda goes against the grain of evolution.

  17. David in Friday Harbor

    Like Yves, I’m a member of the last cohort in my profession to have to “hit the books” in order to solve problems. This was how I learned my trade 40-odd years ago. Admittedly I was a public servant and didn’t have to engage in the “client sales” side of the profession, only having to convince judges and juries who tended to have sensitive B.S. detectors.

    As a law student I served a multi-year internship sitting in a smoke-filled library, brushing cigarette ashes off of physical books in order to solve problems for the lawyers I worked for. This forced me to read learned treatises, then the cited cases, Shepardizing them to find where they had been cited, and reading subsequent cases looking not only for the legal principle but to analyze how that principle had been applied to the underlying factual scenarios. A case that might seem superficially applicable might be easily distinguished on its facts.

    Those who came along in the subsequent decade were less adept at applying the law to facts because Lexis searches didn’t lend themselves to that level of analysis. Mentoring after 30-plus years in the profession, I found that young lawyers who had never spent time analyzing facts only ever gained a superficial understanding of the application of the law. It’s in the application of principles to complex fact patterns that I feel A.I. fails to deliver.

  18. redleg

    In most US States, a licensed professional is required to certify a project. If something goes wrong, that professional and their firm (or whichever entity has acquired them over the decades) are liable for damages caused.
    I’m a geologist, and here’s the State of Oregon’s description of what certification means. Engineering is similar, if not identical.
    TLDR:

    A registrant who stamps a final version of a geology product must be in responsible charge for the geology content of that product. The registrant is responsible for any work done by an unregistered person working under supervision.

    As a licensed professional, I can’t certify anything that’s been prepared by AI, as I don’t have any idea how it’s been trained. By affixing my stamp, I would be liable for any errors and omissions from the AI content in addition to the normal stuff. Now I’m certain that Gresham’s Law will apply and any number of professionals will certify things that are filled with AI BS just to make a buck. But any professional with a functioning risk-management lobe in their brain will not join that race.
    I can see a time when AI-free engineering/environmental firms could get elevated due to “responsible bidder” rules, but I can also see them becoming extinct as the bad money drives out the good.

  19. Dick Swenson

    Artificial intelligence is like artificial grass. It doesn’t really grow and after being used a while simply needs to be replaced. If you want it to smell like grass, you must add some scent. If you don’t want it to abrade your skin when you slide on it, you must add grease. Etc., etc., etc.

    Remember, the word ‘artificial’ means something.

  20. hk

    Agree completely with how playing with the data by hand gives you a much better sense of what the reality looks like. I also want to add, given my old line of work (election research), that there was a huge value added from going around and talking to many and varied people (“soaking and poking,” as poli sci people called it). That became a lost art as early as the 1980s or even the 1970s. I tend to think we lost a lot as we became increasingly data-centric.

    Not saying that data is somehow inferior to soaking and poking (far from it!), but there are big complementarities.

  21. Senator-Elect

    This article is right on and is getting at some crucial points at about the halfway mark. Then it goes off in a different direction than I would have. It’s a valid direction, but I think it misses something critical, which seems to apply to contemporary society as a whole: the acceptance of “good enough” or box-checking. In other words, BS is accepted (or even encouraged) by those at the top.

    This fact helps explain a lot of recent scandals: the Internet stock bubble, financial “innovation”, carbon credits and a lot of the response to climate change, Bitcoin, the downfall of Boeing and so on.

    In this environment, “AI” products are primed for success. LLMs will let corporations and governments shift more work onto consumers and citizens, create vast amounts of fake content to earn advertising and subscription dollars, provide seemingly accurate output for professionals to use in their day jobs (enabling their bosses to jack up their workloads) and let newbies fake it till they make it. This software is right up the alley of our cost-cutting, quality-retrenching, slowly collapsing society.

    The above commenters make valid points about the shortcomings of this half-baked software, but, sadly, I think it’s baked just the right amount for the people who sit around boardroom tables today.

  22. Tedder

    Whenever I read about ‘advances’ that promise to save time through efficiency, I am reminded of a story from Zen poet Gary Snyder. He was undertaking Zen training in a Japanese monastery. During the work periods, he noticed many ways that the monks could accomplish their tasks more quickly. He approached the monk in charge of work and told him of his observations.
    The monk replied, “You are right, Gary-san. But if we finished our tasks more quickly, we would just have to go sooner to sit in meditation in the Zendo, and our knees would hurt more.”

  23. i just don't like the gravy

    Regardless of individual commenters’ opinions, the large language model steamroller will keep on rolling.

    The Silicon Valley elite for the most part don’t literally believe in AGI, it’s all just a marketing ploy. They are well aware of the limitations of this current generation of LLMs. The thing is, though, that the LLMs are slowly proving themselves to be “good enough.”

    The ratchet only moves in one direction, and the genie is out of the bottle. This “AI” nonsense is just another thing we must deal with before the wheels of industrial society come off. LLMs are the spinning jenny to Graeber’s BS jobs: displacing dullard white collar workers as the industrial death cult grinds us all back into stardust.

  24. john

    AI is essentially a trend-following trader, right? Using past data it forecasts the next tick. Then the next. For specific queries it’s that plus a Bayesian given? Like all function approximators, it can’t see the change?
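
    To make that analogy concrete, here is a toy sketch (purely illustrative – real LLMs use neural networks rather than raw frequency counts) of a model that only ever replays the most common continuation it has seen in its past data, which is why it cannot predict a break in the pattern:

    ```python
    # Toy "forecast the next tick, then the next" model: a bigram table that
    # always emits the most frequent next word seen in its training data.
    # Purely illustrative: anything absent from the past data can never appear.
    from collections import Counter, defaultdict

    training_text = (
        "rates rise markets fall rates rise markets fall "
        "rates rise markets wobble rates rise markets fall"
    ).split()

    # Count which word follows which in the "past data".
    follows = defaultdict(Counter)
    for current, nxt in zip(training_text, training_text[1:]):
        follows[current][nxt] += 1

    def continue_from(word, steps):
        """Greedily emit the most common continuation, one token at a time."""
        out = [word]
        for _ in range(steps):
            candidates = follows.get(out[-1])
            if not candidates:
                break  # a word never seen before: the model has nothing to say
            out.append(candidates.most_common(1)[0][0])
        return out

    print(" ".join(continue_from("rates", 6)))
    # prints "rates rise markets fall rates rise markets": the dominant past
    # pattern is replayed; a genuine change in regime is never predicted.
    ```

    A frequency table is obviously a caricature of an LLM, but the structural point stands: the forecast is always an extrapolation of what the past data already contains.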

  25. XXYY

    I hate to burst Murphy’s bubble, but teaching is no different from doing tax preparation. The way you become a good teacher is to interact with students for a long time, recognizing what they know and don’t know, and gradually perceiving the best way to transfer knowledge into another human being’s head.

    Being a good teacher also requires instilling pride in oneself, and enthusiasm for the subject at hand, which is not going to happen if a machine is doing the teaching.

  26. John

    What is called AI today is not AI.
    It is merely a means of parroting unverified sources and creating plausible statements.
    It reproduces narratives from MSM and propaganda sources in proportion to economic influence.
    It does not consider the underlying processes and defects.
    It does not understand underlying issues.
    It works somewhat if that is what the user needs.
    But in the future we shall see true AI.
    It will have no advantages over human intelligence.
    But it can become faster, and can readily reproduce skillsets to solve more problems faster.

    1. XXYY

      It is merely a means of parroting unverified sources and creating plausible statements.

      Stochastically parroting! :-)

  27. JustTheFacts

    I listened to this interview with Mike Benz and came to the conclusion that one of the most likely intended uses of LLMs is censorship. It would explain why so many LLMs just happen to be trained on the narrative, and therefore stick to it, and why so much money is suddenly available for developing LLMs. (e.g. OpenAI’s compute power is provided by Microsoft, and Mike Benz said in that interview that Microsoft is big in the censorship industry).

  28. Trisha

    A major issue with applications involving AI is liability when the AI screws up, say in a medical diagnosis. Who gets sued? Will malpractice insurance coverage be required for AI assisted medical procedures? And will medical AIs have to take the Hippocratic oath?

  29. Guy Boertje

    Last night I watched the recent Stanford talk with Eric Schmidt (ex-Google CEO) and today I read a summary. There is a lot going on in this talk. Apparently, Schmidt thought the talk would be private and spoke accordingly.
