The Problem with AI

Yves here. Richard Murphy gives a good, compact treatment of some of the inherent limits of AI, particularly in the professions (he focuses on accountancy and tax, but the same arguments apply to medicine and law). A big one, which I raised many years ago as data mining greatly reduced the number of entry-level jobs, was that junior scut work like legal research trained new professionals in the nuts and bolts of their work. Skipping over that meant they would be poorly trained. I saw this in the stone ages of my youth: I was in the last cohort of Wall Street newbies that prepared spreadsheets by hand and obtained the data from hard copies of SEC filings and annual reports. I found that my juniors, who downloaded sometimes erroneous but never-corrected data from Compustat, had a much weaker understanding of how company finances worked.

By Richard Murphy, part-time Professor of Accounting Practice at Sheffield University Management School, director of the Corporate Accountability Network, member of Finance for the Future LLP, and director of Tax Research LLP. Originally published at Fund the Future

Summary

I believe that while AI has potential, it can’t replace human judgment and skills in many professions, including teaching, medicine, and accounting.

AI might automate certain tasks, but it lacks the ability to interpret nonverbal cues and understand complex, real-world problems.

Professionals need experience and training to provide human solutions, and AI’s limitations make it unsuitable as a replacement for deep human interaction and expertise.

The Guardian’s Gaby Hinsliff said in a column published yesterday:

The idea of using technology as a kind of magic bullet enabling the state to do more with less has become increasingly central to Labour’s plans for reviving British public services on what Rachel Reeves suggests will be a painfully tight budget. In a series of back-to-school interventions this week, Keir Starmer promised to “move forward with harnessing the full potential of AI”, while the science secretary, Peter Kyle, argued that automating some routine tasks, such as marking, could free up valuable time for teachers to teach.

She is right: this is a Labour obsession. The drive appears to come from the Tony Blair Institute, whose eponymous leader has a long history of misreading the capacity of tech, little of which he ever seems to understand.

The specific issue she referred to was the use of AI for teaching purposes. AI enthusiasts think that it provides the opportunity to create a tailor-made programme for each child. As Gaby Hinsliff points out, the idea is failing, so far.

I am, of course, aware that most innovations have to fail before they can succeed: that is, by and large, the way these things work. It would be unwise, as a consequence, to say that because AI has not cracked this problem yet it never will. But even as someone who is actively bringing AI into my own workflow, I see major problems with much of what Labour and others are doing.

The immediate reaction of the labour market to AI appears to be to downgrade the quality of the recruits now being sought, because employers think that AI will reduce the demand for skills in the future. And yes, you did hear that right: the assumption being made is that specialist skills will be replaced by AI in a great many areas. Graduates are being hit hard by this attitude right now.

In accountancy, for example, this is because it is assumed that much less tax expertise will be required as AI will be able to answer complex questions. Similarly, it is assumed that AI will take over the production of complex accounts, like the consolidated accounts of groups of companies.

Those making such assumptions are incredibly naive. Even if AI could undertake some parts of these processes, there would be massive problems created as a consequence, the biggest of which by far is that no one will then have the skills left to know whether what AI has done is right.

The way you become good at tax is by reading a lot about it; by writing a lot about it (usually to advise a client); and by having to correct your work when someone superior to you says you have not got it right. There is a profoundly iterative process in human learning.

Employers seem to think at present that they can do away with much of this. They think so because those now deciding that the training posts can be eliminated have themselves been through those posts and, as a result, have acquired the skills to understand their subject. They do, in other words, know what the AI is supposed to be doing. But when the fewer people now being recruited reach a similar position of authority, they will not know what the AI is doing. They will simply have to assume it is right, because they will lack the skills to know whether that is true or not.

The logic of AI proponents is, in that case, the same as that used by the likes of Wes Streeting when they advocate the use of physician associates: only partly trained clinicians now working in the NHS, and even undertaking operations, without anything like the depth of knowledge required for the tasks asked of them. They are trained to answer the questions they are given. The problem is that the wrong question might have been asked, and then they both flounder and cause harm.

The same is true of AI. It answers the question it is given. The problem is what happens when the right question has not been asked – and very rarely does a client ask the right question when it comes to tax. The real professional skill comes from, first, working out what the client really wants; second, working out whether what they want is even wise; and third, reframing the question into one that might actually address their needs.

The difficulty in doing that is that this is an exercise in human interaction, but one that also requires the whole technical dimension of the issue (which usually involves multiple taxes, plus some accounting and very often some law) to be understood so that the appropriate reframing can take place. All of that requires considerable judgement.

Do I think AI is remotely near undertaking that task as yet? No, I don’t.

Am I convinced that AI can ever undertake that task? I also doubt that, just as I doubt its ability to address many medical and other professional issues.

Why is that? Because answering such questions requires an ability to read the client, including all their nonverbal and other signals. The technical material is a small part of the job, but without knowing it the professional in any field (and I include all skilled occupations in that category) has no chance of framing the question properly, or of knowing whether the answer they provide is right.

In other words, if the young professional is denied the chance to make all the mistakes in the book, as would happen if AI replaced them, then the chance they will ever really know enough to solve real-world problems posed by real-world people is very low indeed, not least because almost no one who seeks help from any professional person wants a technical answer to any question.

They want the lights to work.

They want the pain to go away.

They want to pay the right amount of tax without risk of error.

They want to get divorced with minimum stress.

The professional’s job is not to tell them how to do these things. It is to deliver human solutions to human problems. And they can’t do that if they do not understand the human in front of them and the technical problem. Use AI to do the tech part, and what is left is a warm, empty, and meaningless smile that provides comfort to no one.

I am not saying we should not use AI. I know we will. But anyone thinking it can replace large parts of human interaction is sorely mistaken: I do not believe it can, precisely because humans ask utterly illogical questions that require a human to work out what they even mean.

And that’s also why I think Gaby Hinsliff is right to say that AI can only have a limited role in the classroom when she concludes:

It’s true that AI, handled right, has enormous capacity for good. But as Starmer himself keeps saying, there are no easy answers in politics – not even, it turns out, if you ask ChatGPT.


4 comments

  1. GramSci

    «The error catastrophe of aging, originally proposed by Leslie Orgel in 1963, argues that copying errors in DNA and the incorrect placement of amino acids in protein synthesis could aggregate over the lifetime of an organism and eventually cause a catastrophic breakdown in the form of obvious aging.»

    https://www.allthescience.org/what-is-error-catastrophe-of-aging.htm

    It’s like each ‘advance’ in AI technology is another sign of advancing technological age. Every year, more copy errors go unnoticed, until there’s no hope of correcting them.

  2. SocalJimObjects

    Seems like complexity is the enemy of AI. I’ve lived and worked in Singapore before, and over there the government will pretty much calculate everyone’s taxes at the end of every year; all you have to do is log into the tax authority’s website to see your bill. The solution to complexity is simplicity, but that also removes the need for expensive accountants and tax advisors. I am not advocating the idea that AI can solve everything, but I am sensing perhaps just the tiniest bit of NIMBY from the article?

    The problem here is not with AI; rather, you can’t fix complexity with more complexity in just about any field, just as you can’t fill a hole by digging a bigger hole.

  3. vao

    “I am sensing perhaps just the tiniest bit of NIMBY from the article?”

    The post and the foreword make it clear that these issues are to be found in a number of other fields. The article explicitly mentions medicine — a domain in which complexity is intrinsic and cannot be simplified away. In fact, every field where the elicitation of requirements and needs is a major activity will face the same difficulties. And that is even before going into the distinction between “tame” and “wicked” problems (where simply applying or combining previous solutions or a variation thereof is not feasible).

    [This is an answer to SocalJimObjects above.]

  4. ilsm

    If AI gave a better search…

    I mostly search things I once knew, retired, getting old.

    The AI imposed on search engines is frustrating: it seems to think it knows what I want to find. Worse, it presents a laundry list of terms, and often none of them is what I want to find.

    It does not give me a decent search. Judgement!

