Yves here. In this post, Lynn Parramore interviews historian Lord Robert Skidelsky about his concerns over the long-term societal impact of AI. Skidelsky is particularly worried about the degradation of human capabilities, even of initiative and creativity. Memorization skills have declined over time as tools improved, from written records to today’s reliance on devices. It’s hard to think that anyone but a very few has the sort of retention that bards like Homer had back in the day. A more recent example is a college roommate who could recite pages of verse. How many can do that now? Outside of actors memorizing scripts, who in the population now has to memorize a lot of text precisely as a condition of employment? Then there are harder-to-demonstrate but reportedly widespread phenomena, such as pervasive smartphone use reducing the ability of many to concentrate on long-form text, like novels and research studies.
Skidelsky and Parramore take up the concern that AI can promote fascism, without admitting to the authoritarianism now practiced in self-styled liberal democracies like the US and UK.
Perhaps it is covered in Skidelsky’s book, but what did not make it into the interview is AI’s corruption of information, such as hallucinations and fabricated citations to support dodgy conclusions. There’s a real risk that what we perceive as knowledge will become quickly and thoroughly corrupted by this sort of garbage in, garbage out.
One of many troubling examples comes in a recent Associated Press story flagged by Kevin W: Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said:
Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.
If anything, the performance is even worse than this article indicates. From IM Doc:
I have been forced to use AI now since Labor Day. On all the chart notes – I can get into how it is turning the notes into ever more gobbledygook and it is doing that for sure. But at the end of the day, it does indeed often make shit up. I have not figured out if there are sentences it is not hearing – or if it is just assuming things. And believe me – this is just wild stuff. Things you would never want in a patient chart – and they are NOT EVEN CLOSE to being accurate.
Also, it appears there are at least 4-5 HAL 9000s in the system. It is so hard to explain but they each have a different output in the final chart. From “just the facts Ma’am” all the way to “Madame Bovary”.
Some of these write out 6 paragraphs where one would do. I feel obligated to read through them before signing ( many are not even doing this simple task ) – and I correct them. But the day will soon be here when the MBAs decide this has so helped us be more efficient that we need to add another 8-10 patients a day – we now have time thanks to the AI. Thankfully – my place is not the Stasi about this – but it is indeed already happening in big corporations. They largely give up on those of us over 55 I would say – too many loyal patients – and too independent – they are just told to go to hell. But the younger kids – Hoo boy – they are on a completely different track than I ever was. And they are not liking it at all – and this is just going to make it worse. They are leaving in the droves back home – on to greener pastures with telemedicine companies or actually all kinds of stuff.
The profession is entering its death throes.
A point I have either not seen made, or made nowhere near as often as it should be, is that an AI relentlessly retrained so as to produce highly accurate results would give its users a tremendous advantage, not just commercially but in critical geopolitical sectors like military use. Yet Scott Ritter has decried the IDF’s deployment of AI as producing poor outcomes without leading to any changes in its development or use. If this is happening in supposedly technologically advanced Israel, it seems very likely the same dynamic exists in the US.
Now to the main event.
By Lynn Parramore, senior research analyst at the Institute for New Economic Thinking. Originally published at the Institute for New Economic Thinking website
Picture this: Dr. Victor Frankenstein strides into a sleek Silicon Valley office to meet with tech moguls, dreaming of a future where he holds the reins of creation itself. He’s got a killer app to “solve death” that’s bound to be a game changer.
With his arrogant obsession to master nature, Mary Shelley’s fictional scientist would fit right into today’s tech boardrooms, convinced he’s on a noble mission while blinded by overconfidence and a thirst for power. We all know how this plays out: his grand idea to create a new species backfires spectacularly, resulting in a creature that becomes a dark reflection of Victor’s hubris—consumed by vengeance and ultimately turning murderously against both its creator and humanity.
It’s a killer app all right.
In the early nineteenth century, Shelley plunged into the heated debates on scientific progress, particularly the quest to create artificial humans through galvanism, all set against the tumultuous backdrop of the French and Industrial Revolutions. In Frankenstein, she captures the dark twist of the technological dream, showing how Victor’s ambition to create a god only leads to something monstrous. The novel is a warning about the darker side of scientific progress, emphasizing the need for accountability and societal concern — themes hit home in today’s AI debates, where developers, much like Victor, rush to roll out systems without considering the fallout.
In his latest work, Mindless: The Human Condition in the Age of Artificial Intelligence, distinguished economic historian Robert Skidelsky traverses history, intertwining literature and philosophy to reveal the high stakes of AI’s rapid emergence. Each question he poses seems to spawn another conundrum: How do we rein in harmful technology while still promoting the good? How do we even distinguish between the two? And who’s in charge of this control? Is it Big Tech, which clearly isn’t prioritizing the public interest? Or the state, increasingly captured by wealthy interests?
As we stumble through these challenges, our increasing dependence on global networked systems for food, energy, and security is amplifying risks and escalating surveillance by authorities. Have we become so “network-dependent” that we can’t distinguish between lifesaving tools and those that could spell our doom?
Skidelsky warns that as our disillusionment with our technological future grows, more of us find ourselves looking to unhinged or unscrupulous saviors. We focus on optimizing machines instead of bettering our social conditions. Our increasing interactions with AI and robots condition us to think like algorithms—less insightful and more artificial—possibly making us stupider in the process. We ignore the risks to democracy, where resentful groups and dashed hopes could easily lead to a populist dictatorship.
In the following conversation, Skidelsky tackles the dire risks of spiritual and physical extinction, probing what it means for humanity to wield Promethean powers while ignoring our own humanity—grasping the fire but lacking foresight. He stresses the urgent need for deep philosophical reflection on the human-machine relationship and its significant impact on our lives in a tech-driven world.
Lynn Parramore: What is the biggest threat of AI and emerging technology in your view? Is it making us redundant?
Robert Skidelsky: Yes, making humans redundant — and extinct. I think, of course, redundancy can lead to spiritual extinction, too. We stop being human. We become zombie-like and prisoners of a logic that is essentially alien. But physical extinction is also a threat. It’s a threat that has a technological base to it, that’s to say, obviously, the nuclear threat.
The historian Misha Glenny has talked about the “four horsemen of the modern apocalypse.” One is nuclear, another is global warming, then pandemics, and finally, our dependence on networks that may stop working at some time. If they stop working, then the human race stops functioning, and a lot of it simply starves and disappears. These particular threats worry me enormously, and I think they’re real.
LP: How does AI interact with those horsemen? Could the emergence of AI, for example, potentially amplify the threat of nuclear disasters or other kinds of human-made disasters?
RS: It can create a hubristic mindset that we can tackle all challenges rooted in science and technology just by applying improved science and tech, or by regulating to limit the downside while enhancing the upside. Now, I’m not against doing that, but I think it will require a level of statesmanship and cooperation which is simply not there at the moment. So I’m more worried about the downside.
The other aspect of the downside, which is foreshadowed in science fiction, is the idea of rogue technology. That’s to say, technology that is actually going to take over the control of our future, and we’re not going to be able to control it any longer. The AI tipping point is reached. That is a big theme in some philosophic discussions. There are institutes at various universities that are all thinking about the post-human future. So all that is slightly alarming.
LP: Throughout our lives, we’ve faced fears of catastrophes involving nuclear war, massive use of biological weapons, and widespread job displacement by robots, yet so far we seem to have held off these scenarios. What makes the potential threat of AI different?
RS: We haven’t had AI until very recently. We’ve had technology, science, of course, and we’ve always been inventing things. But we’re starting to experience the power of a superior type of technology, which we call artificial intelligence, a development of the last 30 years or so. Automation starts in the workplace, but then it gradually spreads, and now you have a kind of digital dictatorship developing. So the power of technology has increased enormously, and it’s growing all the time.
Although we’ve held off things, the things we’ve held off are ones we were much more in control of. I think that is the key point. The other point is, with the new technology, it only takes one thing to go wrong for it to have enormous effects.
If you’ve seen “Oppenheimer,” you might recall that even back then, top nuclear scientists were deeply concerned about technology’s destructive potential, and that was before thermonuclear devices and hydrogen bombs. I’m worried about the escalating risks: we have conventional wars on one side and doom scenarios on the other, leading to a perilous game of chicken, unlike the Cold War, where nuclear conflict was taboo. Today, the lines between conventional and nuclear warfare are increasingly blurred. This makes the dangers of escalation even more pronounced.
There’s a wonderful book called The Maniac about John von Neumann and the development of thermonuclear weapons out of his own work on computerization. There’s a link between the aims of controlling human life and the development of ways of destroying it.
LP: In your book, you often reference Mary Shelley’s Frankenstein. What if Victor Frankenstein had sought input from others or consulted institutions before his experiment? Would ethical discussions have changed the outcome, or would it have been better if he’d never created the creature at all?
RS: Ever since the scientific revolution, we’ve had a completely hubristic attitude to science. We’ve never accepted any limitations. We have accepted some limitations on application, but we’ve never accepted limitations on the free development of science and the free invention of anything. We want the benefits that it promises, but then we rely on some systems to control it.
You asked about ethics. The ethics we have are rather thin, I would say, in relation to the threat that AI poses. What do we all agree on? How do we start our ethical discussion? We start by saying, well, we want to equip machines or AI with ethical rules, one of which is don’t harm humans. But what about don’t harm machines? It doesn’t exclude the war between machines themselves. And then, what is harm?
LP: Right, how do we agree on what’s good for us?
RS: Yes. I think the discussion has to start from a different place, which is what is it to be human? That is a very difficult question, but an obvious question. And then, what do we need to protect our humanness? Every restriction on the development of AI has to be rooted in that.
We’ve got to protect our humanness—this applies to our work, the level of surveillance we accept, and our freedom, which is essential to our humanity. We’ve got to protect our species. We need to apply the question of what it means to be human to each of these areas where machines threaten our humanity.
LP: Currently, AI appears to be in the hands of oligopolies, raising questions about how nations can effectively regulate it. If one country imposes strict regulations, won’t others simply forge ahead without them, creating competitive imbalances or new threats? What’s your take on that dilemma?
RS: Well, this is a huge question. It’s a geopolitical question.
Once we start dividing the world into friendly and malign powers in a race for survival, you can’t stop it. One lesson from the Cold War is that both sides agreed to engage in the regulation of nuclear weapons via treaties, but that was only reached after an incredible crisis—the Cuban Missile Crisis—when they drew back just in time. After that, the Cold War was conducted according to rules, with a hotline between the Kremlin and the White House, allowing them to communicate whenever things got dangerous.
That hotline is no longer there. I don’t believe that there’s a hotline between Washington, Beijing, and Moscow at the moment. It’s very important to realize that once the Soviet Union had collapsed, the Americans really thought that history had ended.
LP: Francis Fukuyama’s famous pronouncement.
RS: Yes, Fukuyama. You could just go on to a kind of scientific utopia. The main threats were gone because there would always be rules that everyone agreed on. The rules actually would be largely laid down by the United States, the hegemon, but everyone would accept them as being for the good of all. Now, we don’t believe that any longer. I don’t know when we stopped believing it, perhaps from the time when Russia and China started flexing their muscles and saying, no, you’ve got to have a multipolar order. You can’t have this kind of Western-dominated system in which everyone accepts the rules, the rules of the WTO, the rules of the IMF, and so on.
So we’re very far from being in a position to think of how we can stop the competition in the growth of AI because once it becomes part of a war or a military competition, it can escalate to any limit possible. That makes me rather gloomy about the future.
LP: Do you see any route to democratizing the spread and development of AI?
RS: Well, you’ve raised the issue, which is, I think, one posed by Shoshana Zuboff [author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power] about private control of AI in the hands of oligopolies. There are three or four platforms that really determine what happens in the AI world, partly because no one else is in a position to compete. They put lots and lots of money into it, a huge amount of money. The interesting question is, who really calls the shots? Is it the oligopolies or the state?
LP: Ordinary people don’t seem to feel like they’re calling the shots. They’re fearful about how AI will impact their daily lives and jobs, along with concerns about potential misuse by tech companies and its influence on the political landscape. You can feel this in the current U.S. election cycle.
RS: Let me go back to the Bible because, in a way, you could say it prophesied an apocalypse, which would be the prelude to a Second Coming. “Apocalypse” means “revelation,” [from the Greek “apokalypsis,” meaning “revealing” or “unveiling”]. We use the word, but we can’t get our minds around the idea. To us, an apocalypse means the end of everything. The world system collapses, and then either the human race is extinguished or people are left and they have to build it again from a much lower level.
But I’ve been quite interested in Albert Hirschman and his idea of the small apocalypse, which can promote the learning process. We learn from disasters. We don’t learn from just thinking about the possibility of disaster, because we rarely believe it will actually happen. But when disaster does strike, we learn from it. That’s one of our human traits. The learning may not last forever, but it’s like a kick in the backside. The two world wars led to the creation of the European Union and the downfall of fascism. A relatively peaceful, open world started to develop out of the ruins of those wars. I would hate to say that we need another war in order to learn, because now the damage is too colossal. In the past, you were still in a position to fight conventional wars: they were extremely destructive, but they didn’t threaten the survival of humanity. Now we have atomic weapons. The escalatory ladder is a much higher one now than it was before.
Also, we can’t arrange apocalypses. It would be immoral, and it would also be impossible. We can’t — to use moral language — wish evil on the world in order that good may come of it. The fact that this has often been the historical mechanism doesn’t mean we can then use it to suit our own ideas of progress.
LP: Do you believe that technology itself is neutral, that it’s just a tool that can be used for good or bad, depending on human intentions?
RS: I don’t believe technology has ever been neutral. Behind its development has always been some purpose—often military. The role of military procurement in advancing technology and AI has been enormous. To put it starkly, I wonder if we would have seen beneficial developments in medicine without military funding, or if you and I could even have this virtual conversation without military demands. In that sense, technology has never been neutral in its aspirations.
There’s always been a hubristic element. Many scientists and mathematicians believe they can devise a way to control humanity and prevent past catastrophes, embracing a form of technological determinism: that advanced science and its applications can eliminate humanity’s errors. You abolish original sin.
LP: Sounds like something Victor Frankenstein might have agreed with before his experiment went awry.
RS: Yes. It was also there with von Neumann and those mathematicians of the early 20th century. They really believed that if you could set society on a mathematical foundation, then you were on the road to perfection. That was the way the Enlightenment dream worked its way through the development of science and into AI. It’s a dangerous dream to have because I think we are imperfect. Humanness consists of imperfection, and if you aim to eliminate it, you will destroy humanity; if you succeed, people will become zombies.
LP: A perfect being is inhuman.
RS: Yes, a perfect being is inhuman.
LP: What are your thoughts on how fascist political elements might converge with the rise of AI?
RS: The way I’ve seen it discussed mostly is in terms of the oxygen it gives to social media and the effects of social media on politics. You give an outlet to the worst instincts of humans. All kinds of hate, intolerance, insult, and these things sort of fester in the body politic and eventually produce politicians who can exploit them. That is something that’s often said, and there’s a lot of truth in it.
The promise, of course, was completely different – that of democratizing public discussion. You were taking it out of the hands of the elites and making it truly democratic. Democracy was then going to be a self-sustaining route to improvement. But what we see is something very different. We see minorities empowered to spread hatred and politicians empowered through those minorities to create the politics of hate.
There’s a different view centered on conspiracy theories. Many of us once dismissed them as the irrational obsessions of cranks and fanatics rooted in ignorance. But ignorance is built into the development of AI; we don’t truly understand how these systems work. While we emphasize transparency, the reality is that the operation of our computer networks is a black hole – even programmers struggle to grasp it. The ideal of transparency is fundamentally flawed—things are transparent when they’re simple. Despite our discussions about the need for greater transparency in areas like banking and politics, the lack of it means we can’t ensure accountability. If we can’t make these systems transparent, we can’t hold them accountable, and that’s already evident.
Take the case of the British postmasters [Horizon IT scandal]. Thousands of them were wrongly convicted on the basis of a faulty machine, which no one really knew was faulty. Once the fault was identified, there were a lot of people with a vested interest in suppressing that fault, including the manufacturers.
The question of accountability is key — we want to hold our rulers and our politicians accountable, but we don’t understand the systems that govern many of our activities. I think that’s hugely important. The people who recognized this aren’t so much the scientists or the people who talk about it, but rather the dystopian novelists and fiction writers. The famous ones, of course, like Orwell and Huxley, and also figures like Kafka, who saw the emergence of digital bureaucracy and how it became completely impenetrable. You didn’t know what they wanted. You didn’t know what they were accusing you of. You didn’t know whether you were breaking the law or not breaking the law. How do we deal with that?
I’m a pessimist about our ability to cope with this, but I appreciate engaging with those who aren’t. The lack of understanding of the system is staggering. I often find the technology I use frustrating, as it imposes impossible demands while promising a delusional future of comfort. This ties back to Keynes and his utopia of freedom to choose. Why didn’t it materialize? He overlooked the issue of insatiability, as we’re bombarded with irresistible promises of improvement and comfort. One click to approve, and suddenly you’ve trapped yourself inside the machine.
LP: We’re having this virtual conversation, and it’s fantastic that we’re connected. But it’s unsettling to think someone might be listening in, recording our words, and using them for purposes we never agreed to.
RS: I’m in a parliamentary office at the moment. I don’t know whether they’ve put up any Big Brother-type system of seeing and hearing what we’re saying and doing. Someone might come in eventually and say, hey, I don’t think your conversation has been very useful for our purposes. We’re going to accuse you of something or other. It’s very unlikely in this particular case — we’re not at this kind of control envisaged by Orwell — but the road has sort of shortened.
And standing in the way is the commitment of free societies to freedom, freedom of thought and accountability. Both of those commitments, one has to realize, were also based on the impossibility of controlling humans. Spying is a very old practice of governments. You had spies back in the ancient world. They always wanted to know what was going on. I have an example in my book – sorry, this is not a very attractive one – from Swift’s Gulliver’s Travels, where they get evidence of subversive thoughts from looking at people’s feces.
LP: It’s not so far-fetched considering where technology is heading. We have wearable sensors that detect emotions and companies like Neuralink developing brain-computer interfaces to connect our brains to devices that interpret thoughts. We even have smart toilets tracking data that could be used for nefarious purposes!
RS: Yes, the incredible prescience of some of these fiction writers is striking. Take E.M. Forster’s The Machine Stops, written in 1909—over a century ago. He envisions a society where everyone has been driven underground by a catastrophic event on the surface. Everything is controlled by machines. Then, one day, the machine stops working. They all die because they’re entirely dependent on it—air, food, everything relies on the machine. The imaginative writers and filmmakers have a way of discussing these things, which is beyond the reach of people who are committed to rational thought. It’s a different level of understanding.
LP: In your book, you highlight the challenges posed by capitalism’s insatiable drive for growth and profit, often sacrificing ethics, especially regarding AI. But you argue that the real opposition lies not between capitalism and socialism, but between humans and humanity. Can you explain what you mean by that?
RS: I think it’s difficult to define the current political debates or the forms politics is taking around the world using the old left-right division. We often mislabel movements as far right or far left. The real issue, in my view, is how to control technology and AI. You might argue there are leftist or rightist approaches to control, but I think those lines blur, and you can’t easily define the two poles based on their views on this. So one huge area of debate between left and right has disappeared.
But there is another area remaining, and that is relevant to what Keynes was saying, and that is the question of distribution. Neoclassical economics has increased inequality, and it’s put a huge amount of power in the hands of the platforms, essentially. Keynes thought that liberty would follow from the distribution of the fruits of the machine. He didn’t envisage that they’d be captured so much by a financial oligarchy.
So in that sense, I think the left-right divide becomes relevant. You’ve got to have a lot of redistribution. Redistribution, of course, increases contentment and reduces the power of conspiracy theories. A lot of people now think that the elites are doing something that isn’t in their interest, partly because they’re just poorer than they should be. The growth of poverty in wealthy societies has been tremendous in the last 30 or 40 years.
Ever since the Keynesian revolution was abolished, capitalism has been allowed to rampage through our society. That is where left-right is still important, but it’s no longer the basis of stable political blocs. Our Prime Minister says, we aim to improve the condition of the working people. Who are the working people? We’re working people. You can’t talk about class any longer because the old class blocs that Marx identified between those who have nothing to sell except their labor power, no assets, and those who own the assets in the economy, are blurred. If you consider people who are very, very rich and the rest, it’s still there. But you can’t create an old division of politics on that basis.
I’m not sure what the new political divisions will look like, but the results of this election in America are crucial. The notion that machines are taking jobs, coupled with the fact that oligarchs are often behind this technological shift, is hard to comprehend. When you present this idea, it can sound conspiratorial, leaving us tangled in various conspiracy theories.
What I long for is a level of statesmanship that is higher than what we’ve got at the moment. Maybe this is an old person’s idea that things were better in the past, but Roosevelt was a much greater statesman and politician than anyone on display in America today. This is true of a lot of European leaders of the past. They were of higher caliber. I think many of the best people are deterred from going into politics by the current state of the political process. I wish I could be more hopeful. Hopefulness is a feature of human beings. They have to have hope.
LP: People do need to have hope, and right now, the American electorate is facing anxiety and a grim view of politics with little expectation for improvement. Voters are stressed out and exhausted, wondering where that hope might lie.
RS: I would go to the economic approach here at this point. I don’t have much time for economic mathematical model building, but there are certain ideas that can be realized through better economic policy. You can get better growth. You can have job guarantees. You can have proper training programs. You can do all kinds of things that will make people feel better and therefore less prone to conspiracy thinking, less prone to hate. Just to increase the degree of contentment. It’s not going to solve the existential problems that loom ahead, but it’ll make politics more able to deal with them, I think. That’s where I think the area of hope lies.
Try this on for a revelation:
There is currently nothing stopping Sam Altman from selling Anthony Fauci on putting AI in charge of Gain of Function research.
AI is essentially the culmination of the eliminationist vision of profit without employees.
The fact that the technology is tremendously complicated has the perverse effect of obscuring the simplicity of the intent and purpose.
This is exactly it.
Our future is limited to essentially two possibilities: total climate catastrophe leading to industrial breakdown…
Or this AI stuff works out and the billionaires quite literally enslave us.
I think the second possibility you mention is intended, but that the first is much more likely.
Modern techno-industrial civilization’s probable course is not just climate catastrophe. Ecological overshoot is manifesting in soil loss and degradation, mass extinction, ocean acidification, microplastics in everything, including our brains, and poisoning by persistent organics and heavy metals…
What can’t go on, won’t. I suppose by the end of the 21st century there will be a lot fewer people. I might be wrong.
Thank you, Yves. I think your introduction has more depth than the actual interview, which is (I feel) based on Idiocracy-level naivete about humans becoming dumb and helpless given half a chance. It makes a great comedy, but isn’t perhaps the best way to approach the problem.
As anecdata, all my kids have grown up in a world completely different to where I grew up, when it comes to amenities and high-tech tools at their disposal – yet all of them are way more creative and expressive than I am. And somehow they have also managed to gain general knowledge of about everything under and beyond the sun – they can have an intelligent discussion on any issue.
Now, a few words about the “advantage” AI is supposed to bring… It really depends on the data you feed to it during the training – if there are errors and biases in the data, the resulting classifier will have a bias and make errors. A lot also depends on how the training handles specificity and sensitivity (the true-negative and true-positive rates, respectively).
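To make this concrete, here is a minimal sketch (mine, not the commenter’s, with invented labels and scores) of how a classifier’s decision threshold trades sensitivity against specificity – lower the threshold and you catch more true positives at the cost of more false alarms:

```python
# Toy illustration: sensitivity (true-positive rate) vs. specificity
# (true-negative rate) at different decision thresholds.
# Labels and confidence scores are made up for demonstration purposes.

def sensitivity_specificity(labels, scores, threshold):
    tp = sum(y == 1 and s >= threshold for y, s in zip(labels, scores))
    fn = sum(y == 1 and s < threshold for y, s in zip(labels, scores))
    tn = sum(y == 0 and s < threshold for y, s in zip(labels, scores))
    fp = sum(y == 0 and s >= threshold for y, s in zip(labels, scores))
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 1, 0, 0, 0, 0]                  # ground truth
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]  # model confidence

for t in (0.3, 0.5, 0.7):
    sens, spec = sensitivity_specificity(labels, scores, t)
    print(f"threshold {t}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Biased or erroneous training data shifts those scores systematically, so the same trade-off gets baked in no matter where the threshold is set.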
Lastly, I’d rather say language models “babble” and “crib talk”, rather than hallucinate and make stuff up – the latter imply context and intention, while the former refer to mere mimicry and imitation.
Agreed on the power of Yves’ intro, especially IMDoc’s harrowing description of AI’s intrusion into medical practice. LLMs are too like human ‘intelligence’. As with humans, individual LLM’s power must be restricted. They must be put on a Maximum Wage.
About the specificity and sensitivity of a LLM – my company has a chat feature in a product that has been tuned to only give high confidence answers. The thought is better no answer than a wrong answer. Yet, we are getting complaints when it doesn’t provide answers… I wonder how long before someone decides to relax the constraints for “better” customer satisfaction.
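The design the commenter describes can be as simple as a one-line gate, which is also why it is so easy to quietly relax later. A hypothetical sketch (the names and the threshold value are invented for illustration, not their product’s actual code):

```python
# "Better no answer than a wrong answer": only surface a response
# when the model's confidence clears a fixed bar. Relaxing the
# constraint is literally a matter of lowering one number.

CONFIDENCE_THRESHOLD = 0.85  # hypothetical tuning value

def answer_or_abstain(candidate_answer: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return candidate_answer
    return "Sorry, I don't have a confident answer for that."

print(answer_or_abstain("Hold the reset button for 10 seconds.", 0.92))
print(answer_or_abstain("Try reinstalling the firmware.", 0.41))
```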
I don’t use Amazon much anymore, but I noticed today that it gave an “I’m not sure” reply to a product info query. I use the Brave search engine, which almost always coughs up a chatbot answer, but it does sometimes give a “no answer” reply, which I appreciate.
In both cases I sometimes get a “some say this, some say that” reply. I wish I knew which “some” they’re referring to.
I first saw this headline and thought “Al Franken future?”
But being in IT, I’m seeing this first hand. It’s being shoved down our throats, like it or not. And, of course, they try to tell us that it’s just going to assist on the tedious parts but, once it’s in there, the attitude is always “let’s open it up and see what this baby can do!”
We joke around here about the Dems’ “learn to code” mantra, but just this week, Google was bragging about a good chunk of their code being written by AI. Google is a leader that other companies emulate. If everyone has AI churning out code, what’s next for those who learned to code in good faith?
Not to mention, as it stands, I’m not confident AI can turn out quality code. So what then? Are the humans reduced to proofreading the AI’s work? That’s exactly the opposite of how we were told it would work.
Just one example. Coding is a creative skill. I hate the way the winds are blowing on this issue.
‘Google was bragging about a good chunk of their code being written by AI.’
That is about 25% right now, which is a lot. If an AI is writing the code, then are they having another AI check that code for security holes and flaws? If it is released and something goes haywire, who bears financial liability for any damage caused? It sounds like the software the world is going to run on will actually be a black box, with nobody really knowing what is in it. Not a confidence builder, that. Not with what we have seen with autonomous cars. And most certainly the security agencies will stick their grubby mitts into those AIs and insist that all sorts of back-doors be built into any AI-written code for their own use. It’s going to be a mess.
The question pops up, Can you sue the dictionary? I can’t help thinking that AI hallucinations are almost not funny, even though these deep contradictions are the bedrock of both humor and language. Language is built on metaphor and infinite derivatives of those metaphors. One way to look at it, just for ducks, is that against all odds we humans have survived our own crazy languages and customs. I think because there is a logic to contradiction, however fleeting, that manages to find a synthesis. Gobbledegook explaining gobbledegook. Maybe that’s why AI hallucinates. But who knows.
AI hallucinates when it lacks context and is incentivized to come up with answers. My students do the same when they lack context on exam essays. I don’t penalize them for guessing so they make shit up.
I suspect that the forceful integration of AI in non-entertainment areas (e.g. autonomous vehicles, automatic medical/engineering/legal analyses/diagnoses, software development…) will clash with the requirements for liability (especially everything deemed safety critical).
This might lead to confining AI to sectors that do not entail serious material consequences (such as entertainment), or to those where the users of AI either self-insure or do not care about liability — such as military and intelligence…
It’s even worse than that… LLMs in the past year alone have become exceptionally good programmers.
Both in-editor solutions like Cursor and chat-bot modes work so much better now. There are even open source models like DeepSeek Coder v2 (released a while ago) that are quite good.
Writing code is about as creative as writing a judicial opinion. It’s all plumbing that we humans like to imbue with our individuality. However, at the end of the day whatever creativity exists is unnecessary.
I don’t see LLMs replacing most software engineers in the workforce anytime soon, but it will come eventually. It’s easy to rag on this tech but there are in fact incredible applications of it. Ironic that in this case it is eating its masters’ livelihoods.
Depends on the language.
When I asked a Generative AI app to code up a simple CSV parser in my language of choice, it failed repeatedly, giving me bogus functions that didn’t exist.
It simply couldn’t do the job, but kept telling me that it could.
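For contrast, here is what a trustworthy answer to that request could look like – a minimal parser leaning on a standard library rather than invented functions (Python is used purely as a stand-in, since the commenter doesn’t name their language of choice):

```python
# A CSV parser that actually exists: Python's built-in csv module
# handles quoting and embedded commas correctly in a few lines.
import csv
import io

sample = 'name,age\n"Smith, Jane",41\nBob,7\n'

rows = list(csv.reader(io.StringIO(sample)))
print(rows)  # [['name', 'age'], ['Smith, Jane', '41'], ['Bob', '7']]
```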
In any case, real computer science isn’t just “coding”. As you say, this tech will at some point endanger the low-end coding jobs, but LLMs cannot “think” ergo in the near term this tech won’t have a significant impact on the people doing actual research — except perhaps by crapifying their tools.
There is a bit of a trap here.
I do a lot of coding for my work.
And I keep having people telling me how useful AI is for speeding up coding, and How you need to get on board and use it to improve your efficiency.
The problem is that what I code is quite obscure and requires very specific domain knowledge, which the AI does not have, to do right. And that means I cannot trust it. Thus it actually hurts efficiency, because reading code that somebody else wrote and making sure it has no errors takes more time than writing your own code.
However, it is still true that if some day AI becomes good enough, it will truly make things go much faster. And if you have not walked that path together with it, you will have fallen quite a bit behind in terms of knowing how to best use it by the time that moment comes.
So what do you do? Waste time with the current AI or ignore it?
No good option.
It’s kind of like the story from IM Doc’s practice. Right now it is a net negative for it, and they are having it forced on them. We aren’t, but we still have a problem already…
This has also been my experience. I don’t mind tools “suggesting” the remainder of a call to a function, class, etc., provided they stay out of the way. But the chore of having to check/debug/rewrite a bot’s full code in the end is far more trouble than it’s worth. Far easier just to hammer through it alone without the “help”.
In my experience (mostly programming tools for other people to use) the hardest part is to tease out of the people what they need and not what they want the tool to do.
That phase of the development process is in sort of a quantum state, where measuring it will often change it. As in, more often than not people realize early in the process that the tool will modify their existing workflows in a way they had not thought of before, and my role as a developer has been to help them navigate the vast plateau of new possibilities towards solutions that stay away from mission creep and are achievable within the budget and time constraints.
Of course, at the same time I gain specific domain knowledge from them, which is necessary, as stated above, for a functional and usable end product.
I have to disagree by noting that the eventual desired outcome of this trend, at least, the outcome most desired by the financial overlords, is the elimination of all “Terran human” employees from the job.
“Think” is being replaced by “just good enough to last till the warranty period ends.” Thankfully, it appears that AI at the present state of the art is not even capable of that lowest common denominator outcome.
Who would have imagined that Skynet became Master of Reality simply through those pesky Terran humans voluntarily relinquishing their “grip on observable reality” themselves?
Last month the EU introduced a proposal for revising the “directive on liability for defective products” to include software. One would think that this alone should make the bean counters and their lawyers think twice before pushing for the cheapest, unverified option.
At first, though, they will very likely try to enforce the EULAs to weasel their way out of any liability, but if this proposal goes through, the EU may not take such shenanigans kindly.
Oddly enough, the proposal also states that “in order not to hamper innovation or research, this Directive should not apply to free and open-source software developed or supplied outside the course of a commercial activity, since products so developed or supplied are by definition not placed on the market.”
Evolutionary biologists have long been concerned about the effect of technology on relaxing natural selection in the human population.
Which results in loss of absolute fitness over time and degeneration of the gene pool.
Which then, in the quite likely event of the technological rug being pulled out from underneath us, will cause a very big problem.
But AI was not advanced enough to enter the thinking when that problem was discussed in the past.
Now it is quite apparent that it has huge potential to drastically accelerate the process…
Technology is part of the environment, and living things altering the environment and then adapting to the new reality to the point they couldn’t survive in the original habitat is part of the game. So is the occasional meteorite smashing into Earth and rewriting the rules in a big way.
Organisms have indeed always reshaped their environment, thereby changing the selection landscape. Quite often to their own detriment.
But what we have here is indeed unprecedented.
Because technology changes the strength of selection without changing the population size.
Remember Fisher’s fundamental theorem of natural selection:
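For readers without it to hand, the theorem (a standard textbook rendering, supplied here since the statement itself did not survive into the comment) says that the per-generation increase in mean fitness equals the additive genetic variance in fitness:

\[
\Delta \bar{w} = \frac{\operatorname{Var}_A(w)}{\bar{w}}
\]

where \(\bar{w}\) is mean fitness and \(\operatorname{Var}_A(w)\) is the additive genetic variance in fitness. Squeeze that variance toward zero and the response to selection goes to zero with it, which is the commenter’s point below.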
Usually you get weakening of selection by lowering effective population size.
But what technology does is that it directly reduces the fitness variance. This is a completely new thing.
I would not have survived (or been allowed to survive) in the ancient world, given my visual and orthopedic issues, all of which are considered subclinical or treatable. So the adaptation isn’t just to changes in the environment but to a larger degree to people who would have been burdens and would not have been allowed to survive (either by active intent or negligence).
Thus reducing the fitness variance, and in turn the strength of natural selection.
Which is, more generally, also an effect of the combination of demographic transition plus comfortable lifestyles in “developed” societies.
Everyone has 1 or 2 kids, very few people have more, but also many fewer people die before they even get to reproductive age than in the past.
Variance is low.
But when they had 8-10-12 and high child and infant mortality, the variance was very high. Even higher in polygamous warlord societies where most males didn’t reproduce at all.
Low fitness variance => weak selection => deleterious variation accumulates => absolute fitness goes down.
You would have been valued for your brain. There is a reason the wise people in dramas are often old ladies, blind, or leaning on a cane.
Intelligence in women was not valued then.
I suspect the degree to which this principle applied depended upon the individual culture in question.
cf. Cassandra?
Medea?
Humans use tools to solve problems more effectively than they could on their own, which by necessity means the less effective old ways are forgotten, as they no longer have any use. How many people could survive when thrown naked into a forest?
It was a different world when there were just a couple of books to read, and most people couldn’t read or write anyway, so memory was the only way. Today it would be impossible to memorize even a fraction of the content you come across every day, not to mention that memorizing pages of text verbatim is simply pointless and a waste of time (OK, it can be a nice party trick) when computers can do it infinitely more easily.
Thanks IMDoc.
When EHRs were pushed prematurely into the medical system, voluminous semi-accurate records became prevalent. Unedited/updated “same as the last note/exam” segments, plus maybe new/old/added-later labs and vital signs, became the norm on hospital charts, then office notes. Studies showing 20% efficiency loss for doctors were ignored, IT companies thrived, scribes and more physician assistants were hired, and costs rose. Longevity and chronic disease continued to fall and rise, respectively.
Docs who actually read the lengthy notes groaned, but were typically able to find the key notations of relevant findings and assessments. Trying to read through lengthy and untrustworthy AI output is another matter altogether. It makes the irritation from undesired and automatic “spellcheck” that introduces errors seem quaint.
Regarding so-called AI making us stupid, there are two sources I think are worth considering. The first is the myth of Theuth, the Egyptian god of writing, as described in Plato’s Phaedrus, where King Thamus warns that the discovery of writing will create forgetfulness in learners’ souls, because they will trust external written characters instead of exercising their own memories.
However, as many classicists will point out, the Phaedrus was a written text, ergo Plato himself likely didn’t entirely agree with Thamus’ criticism of Theuth. Broadly, Plato was hostile to rhetoric, though with some rather important provisos offered at the end of the Phaedrus.
A second source worth considering is Kant’s famous essay, “What is Enlightenment?”, which opens by defining enlightenment as man’s emergence from his self-incurred immaturity – the inability to use one’s own understanding without guidance from another.
This appears to be much closer to where we’re at with so-called AI. There will be people who know AI = BS and will find the resolve and courage to think for themselves, to cultivate and develop their powers of reason. But there will equally be a great many people who will not find that resolve, who will outsource any mental exertion to AI apps, and who as a result will never learn critical thinking. They will remain, to use Kant’s term, forever “immature”.
Of course, we already live in such a world, but so-called AI, as a force working against Enlightenment, as a purveyor of pseudo-knowledge (a.k.a. BS), will only complicate matters further.
…AI corruption of information, such as hallucinations and fabricated citations to support dodgy conclusions. There’s a real risk that what we perceive as knowledge will become quickly and thoroughly corrupted by this sort of garbage in, garbage out.
Is that really so different from, say, a Trump – or Biden, for that matter – public address? Or a Jake Sullivan press conference?
I’m not looking for a fight, but i am looking for what is at the rock bottom of this fear of AI. Maybe it was covered in the above, i didn’t read it, i have a hard time with the transcript format, i apologize.
It’s worse because people generally expect politicians, or perhaps even people in general, to lie. Meanwhile, studies have shown (sorry no link, and maybe this is based on “bad science”, but it also mirrors my experience so it must be true :) that people do tend to believe machines (more than they should)…
In my corporate life, we have always joked that managers will always believe the spreadsheet over the person. AI turns this up to 11.
There’s also the conundrum that since these chat tools are unreliable, you really can only safely ask them questions for which you already know the answer – so you can verify their response – or ask them questions where having the correct answer doesn’t matter very much… so how useful is this?
But, if you go back to manager’s propensity to believe machines over people, how do you think this is going to work out…
Thank you Duke for the response.
I sincerely mean no disrespect by saying, i think you’ve talked around the question i asked.
If a manager is loath to trust his/her people, but in the absence of a machine/spreadsheet must trust them nonetheless, and that counsel returns “poor” results upon implementation, how is that functionally dissimilar to a machine reeling off information that is, to borrow from Ms. Smith, corrupted, hallucinatory, or fabricated?
Again, i thank you very much for the response and the opening to restate and clarify my question.
~Personal code of conduct violation incoming in 3…2…1…~
There is no intelligence more artificial than my own.
Personal code of conduct violation #2, it’s a rainy day so i’m kind of on a roll.
I’m not sure where a talent like Ramanujan plays here with my previous assertion.
If his brain was wired in such a way that he could intuit complex mathematics, the only reason that is acceptable to us is because it has later been proven rigorously. Who’s to say that in a hundred years, the early AI ravings won’t have been proven to be similarly pre-cognizant?
But also, does the fact that he intuited so many theorems (don’t know the terminology precisely sorry) make the process of discovery any less “artificial?”
For now, i’ll contain my assertion to describing only that “intelligence,” such as it is, which transmits from my own braincase.
And gut, don’t forget, thats an important organ for thinkifying about stuff.
If I understand correctly, you’re asking that since people are untrustworthy, then how is it worse that AI is also untrustworthy?
To put my point another way, AI is worse because the untrustworthiness of humans vs AI is not evaluated equally. At the moment, there’s a big mental thumb on the scale in favor of AI. In general, untrustworthy people are weeded out from successful organizations (or put into roles where their damage is contained). If not, the organization fails (yes, I realize this is idealistic and simplified, but, in aggregate, I think it’s more true than not). This is not the case with AI. AI reigns supreme (please ignore all the warnings from the techies who actually understand it).
Also, to echo some of Aurelian’s points below, AI can serve up steaming piles of BS in seconds that can easily fool someone not deeply familiar with the subject. It will fool some of the heuristics that humans use to judge trustworthiness – it uses proper grammar and complete sentences, it sounds like something a reasonably intelligent person would say, there are usually grains of truth scattered throughout. Yet, it can be totally wrong. I’ve read some statistics put the good models at 80% accurate, but those 20% inaccuracies can be really hard to find.
Yes, con men can use the same heuristics against us, but there’s not a big push to add con men to every product and business process, nor the need to build new power and water sources to feed these con men.
Thank you very much again for taking the time.
To be clear, i neither agree nor disagree with your assertion that AI is a “more worser” evaluator, promulgator, or synthesizer of data than the human mind. Since we are more or less at the beginning of the story, i would assert that it’s too soon to say for certain, especially in light of the fact that other civilization-states (for starters, China, Russia, probably India) have stated or likely harbor some opinions on how AI tech should develop/be developed.
I will add that considering AI models are inherently bound up with inputs that are/have been generated from the human intelligence (are we to say, natural intelligence?), what manner of critique is the claim that AI is a GIGO generator?
As to the water and power outlay, i not only take your point, i mostly agree with it, with the caveat that those possessors of “natural intelligence” by their very existence are still the far greater consumer, waster, and destroyer of such resources.
What worries me most about this technology is its capability to produce highly-convincing, detailed and completely false documentation and media on any subject you like, from any point of view, quickly and to order. Such material would be beyond the ability of all but a few experts to evaluate, especially if it was mixed with genuine material. Of course fakes have been produced in the past–the classic is the “Majestic-12” documents claimed to show agreements entered into between the US government and aliens in the 1940s. The documents are assumed to be fakes, but they are very cleverly done, and must have taken a long time and a lot of work to prepare. They could also only be distributed by mail. Something similar today could be prepared in a few minutes and circulated worldwide the same day.
This is particularly so because most of us have poor discrimination and little specialised knowledge, so we judge information, or even reported information, not by its likelihood, but by what we think of the originator or the subject. I’ve had people comment on my own essays along the lines of “because I don’t believe that X was true, people at the time can’t have thought that X was true either, even if contemporary documents say they did.” We tend to believe what we want to believe (confirmation bias) and so we look for, and believe, what comforts us. Few stories can have been more discredited than the alleged remark by CIA director Tenet (or Casey) that “when what every person in the US believes to be true is false, then the CIA will have completed its disinformation work,” yet the story comes up again and again because it satisfies a psychological need. The last discussion of this story I remember (here perhaps?) concluded with someone admitting that maybe there was no evidence for the story, but it must be true because the CIA was evil. These days, it would be a trivial task to produce a whole set of documents allegedly from the period appearing to prove the story true.
The most dangerous aspect of all this is precisely the à la carte bit. From Snowden and Wikileaks and a whole pile of other documents, you could train an AI to produce highly-realistic and very convincing intelligence documents to say anything you like. Want to prove that the British government tried to kill the Skripals? Here’s a whole file of documents including letters between Ministers, operational orders, and the briefing of the teenage daughter of the Chief Nurse of the British Army about what to do when she found them. Or maybe you prefer this file of intelligence reports about how Putin ordered the killings personally, complete with intelligence intercepts of phone conversation and human sources within the Kremlin, together with a letter of congratulations to the perpetrators. (Or come to that you could do it in Russian.) Likewise, you could easily produce a trove of NATO documents setting out a detailed plan to start a war with Russia, or according to taste a trove of documents captured by the Ukrainians showing Russian plans to invade the Baltic States and Poland.
One of the problems of propaganda is that it has to look at least vaguely convincing, and preferably be based on what people think is true and want to hear. That means that little propaganda is just a “forgery” in the crude sense. Early in the war in Bosnia, for example, the Muslim government commissioned a New York PR firm to do a survey of the kind of propaganda that would be most effective with its target audience–students, NGOs and the media–and went about producing it, to be disseminated by a credulous media. Any time now, we’ll see the first atrocity dossiers precisely targeted at the expected audiences, complete with videos, testimony, photos and heart-rending personal accounts. The problem is that it’s so easy to mix complete fiction with verifiable fact, that the whole concept of “truth”, even at an approximate level, may be about to disappear, even if people are actually prepared to exercise a bit of rational judgement.
As the environment changes, people change.
Let’s not forget that for a very long time–most of human history–people could make “deep fakes” whenever they wanted using words indistinguishable from those that comprise the “truth.” These are called “lies,” and they can be very convincing and effective. People developed strategies to deal with lies by, for instance, not believing everything they heard.
The same process has already occurred with images. Before “generative AI,” we had the photoshop era, which greatly reduced trust in photorealistic representations. People don’t believe everything they see.
Having “convincing fakes” at everyone’s fingertips has merely reduced the image to the word. It’s as easy to lie with an image as it is with a word, but this is the world merely reverting to an ancient mean. In those days, reputation mattered greatly. People would ask you: “Who is your father? What does he do? Where do you come from?” They would get to know you before they’d listen to you.
I think, in the coming years, ethos will again take its place at the rhetorical forefront.
I don’t disagree with the perils you’ve outlined, but it appears your premise starts with human actors (the “natural intelligence”) “training” AI to produce nefarious fakes.
We haven’t separated the “natural intelligence” from the “artificial,” if the “natural” is still ultimately the cause of all of the “artificial’s” antics.
What profits it an AI to undertake any of the deeds you outlined above, without external prompting? Does AI have, understand, labor towards, a profit, at all? These are still human actions, simply the artifice presenting them has been refined.
Also, what do you define as an approximate-level truth? Sounds like politics to me.
If we consider the current state of the art, there is no AGI — nor will there be anytime soon (see Rodney Brooks) — ergo “AI” is essentially a misnomer. LLMs are not sentient, do not have consciousness, and no amount of training with “better data” will change this fact.
So we could say that the “natural” intelligence is the human intellect that produced all the primary texts that are now being strip-mined to train LLMs to act like “natural” intelligence (perhaps we could say these apps simulate “denatured” intelligence), but again, that doesn’t change the picture: AGI remains a pipe dream and there is 60+ years of research on this that has failed to produce anything better than LLMs.
The “antics” are the limitations or failures of the apps, i.e., the seams appearing in what is at bottom a threadbare attempt to hoodwink people who don’t understand the technology into believing it is more advanced than it in fact is.
I’m struck by Duke’s statistic above regarding the accuracy of the “good” models hovering around 80%. I wonder on what subjects humans have had greater accuracy, not to sound glib, just generally curious.
Speaking from daily personal experience, I’m lucky if I ever feel better than 51% confidently accurate about anything of any importance at all.
Yes, this is precisely my fear — it will enable Bannon style “flooding the zone with family blog” on an unprecedented industrial scale.
There seem to be the most tentative moves by some governments in the direction of regulation, after the horse has bolted over the hill and AI is making geometric progress in some sectors. However, quite apart from the prospect of mass unemployment, immiseration, anomie and dejection, there is the heightened risk of a military cataclysm, because such regulations as have been put in place often appear to exempt military technology, and it is quite possible to envisage a situation, in an atmosphere of increased international tension (AI having been trained to treat certain countries as enemies), where an autonomous or quasi-autonomous application does something disastrous and irreversible. Indeed, the desperation of the newly unemployed may make zero-sum-game diplomacy a certainty; combined with AI operating on a like basis, the risk of a literally combustible outcome perhaps becomes very much more likely.
Wasn’t this the whole premise of James Cameron’s ‘Terminator’ franchise?
I, for one, am wondering how I am going to be able to subsist and afford subsistence for my dependants for the remainder of my life, and was last night investigating some of the alternatives (plumber, locksmith, electrician, etc.), but I suspect that there may be a rush of the desperate into those lines of work, crushing day rates in the process.
Learning a trade has been a pretty obvious move for a young person for about a decade, but it is still difficult to find good tradespeople so it has not yet run its course. From what I can tell, most of the young people who tried it in the past ten years did not stick with it long enough to become good at it, so the field is still pretty open. Unfortunately, it does require ten years or so of being a low-level gofer before anyone will (or should) trust you to do jobs on your own. This is very difficult for smart-ish people to accept.
First, the blanket term A.I. really needs to be retired from service. Aside from allowing developers to over-hype their products, the term mostly serves to mystify the public and derail understanding. For instance, “generative AI” — which is getting all the attention lately — has vastly different applications than “predictive AI,” and ironically, generative AI apps like ChatGPT based on Large Language Models (LLMs) may, in fact, be less useful and more prone to failure than predictive AI. See, for instance, Ed Zitron’s numerous blog posts on the coming bankruptcy/collapse of Sam Altman’s OpenAI.
On the other hand, it may be generative AIs like ChatGPT that will be responsible for some of the worst cultural/informational effects of this technology, “flooding the zone” (Steve Bannon’s term) with deepfakes, disinformation, mediocrity, and bullshit — which is what generative AI is especially good at. Indeed, a recent paper in “Ethics and Information Technology” explores the thesis that ChatGPT is a “bullshit machine,” using BS in the precise way that the academic philosopher Harry Frankfurt does in his book “On Bullshit.” Rather than calling the errors LLMs make “lies” or “hallucinations,” the authors insist that this misrepresents how LLMs work and what their output actually consists of. They write that whereas “lying and hallucinating require some concern with the truth of their statements….LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing.”
In other words, ChatGPT’s goal is not to make “true” statements, but only to persuade a user that its output might be plausibly written by a human being. It is a bullshit machine, tasked with generating believable bullshit. But don’t we already have enough bullshit infesting our information systems? Do we really need more?
Well put, thank you.
You’re quite right, ChatGPT is a bullshit machine, but it can’t be said to have a goal as you aver in the last paragraph. It is completely unaware that it is involved in discourse, and has no intentions at all. It’s one step up from the Mechanical Turk and, on a lot of levels, even less convincing. There’s a video on YouTube by a young physicist called Angela Collier that goes into why the term artificial intelligence is a misnomer. I’d recommend it to anybody interested in this subject. The reality is, AI is just a marketing term, where intelligence, a quality we regularly see but have difficulty defining, is performed as an illusory facsimile of actual intelligence. That’s not to say it can’t be destructive when deployed as a technology, but that’s more to do with the vicissitudes of late-stage capitalism with its asymmetrical power structures and the need for bullshit machines that it creates.
Ando, thanks for the tip to the Hicks et al. paper.
Ask AI to provide the best method for the redistribution of US wealth in order to achieve equality of contentment.