Yves here. I found this to be a truly depressing article about AI receptivity, since it ignored, as most discussions do, serious issues about the limits of AI and how it is being implemented without taking those limits into account. One tech industry veteran who regards AI as truly revolutionary (and he is a hype skeptic) points out that it’s like having a 3.9 GPA college freshman working for you: it regularly makes a very good first pass, but you still need to review results. We’ve even seen in comments readers posting AI results that are demonstrably wrong, particularly on things that ought to be basic and not hard to get right, like what certain legal terms mean. So please do NOT post AI output, since it will be assumed to be accurate when it often is not.
In keeping, even though some analyses report well above human accuracy levels of AI readings of medical images, there are studies that beg to differ:
Can we really trust AI in critical areas like medical image diagnosis? No, and they are even worse than random. Our latest study, “Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA,” uncovers the stark limitations of… pic.twitter.com/pt3d02RZcM
— Xin Eric Wang (@xwang_lk) June 3, 2024
We have a more deliciously skeptical take here, I Will Fucking Piledrive You If You Mention AI Again (hat tip Micael T). A nugget from early in the piece:
And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper’s robes, stitched from corpulent greed and breathless credulity, spending half of the planet’s engineering efforts to add chatbot support to every application under the sun when half of the industry hasn’t worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business – not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.
The second issue is that the capitalist classes are using AI to eliminate yet more employment and also to reduce accountability. How do you penetrate increasingly AI-managed systems to get to a real human and dispute an arguable (or worse, deliberately contrary to contract terms) decision? Who is liable if AI in a medical system makes a bad call and the damage to the patient is serious? AI is being rolled out well in advance of having answers to critical questions about how to protect public and customer rights.
By Chiara Longoni, Associate Professor, Marketing and Social Science, Bocconi University, Gil Appel, Assistant Professor of Marketing, School of Business, George Washington University, and Stephanie Tully, Associate Professor of Marketing, USC Marshall School of Business, University of Southern California. Originally published at The Conversation
The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.
Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
This link shows up across different groups, settings and even countries. For instance, our analysis of data from market research company Ipsos spanning 27 countries reveals that people in nations with lower average AI literacy are more receptive towards AI adoption than those in nations with higher literacy.
Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to indicate using it for tasks like academic assignments.
The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response or plays a musical instrument, it can feel almost magical – like it’s crossing into human territory.
Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.
They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works) and computational models operate. This makes the technology less mysterious.
On the other hand, those with less understanding may see AI as magical and awe inspiring. We suggest this sense of magic makes them more open to using AI tools.
Our studies show this lower literacy-higher receptivity link is strongest for using AI tools in areas people associate with human traits, like providing emotional support or counselling. When it comes to tasks that don’t evoke the same sense of human-like qualities – such as analysing test results – the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI’s efficiency, rather than any “magical” qualities.
It’s Not About Capability, Fear or Ethics
Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.
This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favour new tech, a phenomenon called “algorithm appreciation”, while others show scepticism, or “algorithm aversion”. Our research points to perceptions of AI’s “magicalness” as a key factor shaping these reactions.
These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.
To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance. By understanding how perceptions of “magicalness” shape people’s openness to AI, we can help develop and deploy new AI-based products and services that take the way people view AI into account, and help them understand the benefits and risks of AI.
And ideally, this will happen without causing a loss of the awe that inspires many people to embrace this new technology.
I see the “readers posting AI results” (not to mention postings by writers) becoming more frequent, though not prevalent, in other contexts as well.
I believe this increasing recourse to AI is caused by two major factors:
1) Convenience: instead of gathering, filtering, perusing, and synthesizing information, one can just let an AI tool do all the work and present the final results. The only effort required is drafting the “prompt”.
2) Search difficulties: Google search is nowadays so thoroughly crapified that one must not only sift through masses of advertisements and junk SEO-driven postings to find information, but also show enough tenacity to keep looking, since relevant websites are frequently demoted, algorithmically or manually, to the nth results page. This makes resorting to AI tools all the more appealing.
Of course, the whole hype about Artificial Intelligence prepares the ground for blissful acceptance of whatever ChatGPT and co. produce.
Perhaps we should use “Intelligence” in the same way spies do: information obtained by doubtful means, from suspect sources, that must be considered untrustworthy as long as it has not been thoroughly cross-checked and corroborated.
this is the way to go.
And this summarizes part of the guidance in the way we’re told – at least once a week – to use AI because it’s great and works great without expertise… unless we’re doing something that’s going out the door to a client or other use in the big wide world, in which case we will likely be fired.
Dr. Angela Collier, theoretical physicist, presents (July 2023): “AI does not exist but it will ruin everything anyway”:
https://www.youtube.com/watch?v=EUrOxh_0leE&t=1s&ab_channel=AngelaCollier
and a bit of a sweetmeat “billionaires want you to know they could have done physics”:
https://www.youtube.com/watch?v=GmJI6qIqURA&ab_channel=AngelaCollier
This article makes liberal use of the phrase “AI literacy” without explaining what it means. By now, anyone not living under a rock has been pummeled non-stop with AI hype waxing lyrical about the capabilities about to be unlocked by this technology in education, medicine, robotics, biology, law, creative fields, etc. Are these folks AI literate as per the article? Or is a working understanding of neural networks and the transformer architecture underpinning most of what we call AI today required to qualify? Entering a prompt into a chat interface and getting a response requires only knowing how to type on a digital device with a graphical user interface; there’s no steep learning curve standing between you and that interface. It’s the hype that drives people to want to try out this technology; the idea that some nebulous AI literacy is the dividing line between the skeptics and the early adopters is frankly not very convincing.
This.
Knowing less about AI makes people more open to having it in their lives. And makes them think that “Stargate” is a great idea. What could go wrong giving Sam Altman, Larry Ellison, and Masayoshi Son a giant pile of $$ to build a datacenter bridge to nowhere?
Trump tech agenda begins with $500B private AI plan and cuts to regulation -> https://archive.ph/ZTQbQ
Beat me to it about the half a trillion dollar giveaway.
Apparently the Chicago school released a report a few years ago suggesting that closing the rural gap for high-speed Internet would boost GDP by $160 billion. The former Biden administration tried to capitalize on it by introducing BEAD, a ~$40 billion program to do exactly that, but of course, like other acts that would actually benefit Americans, such as pharmaceutical price controls, deployment wasn’t scheduled to start until 2026 (giving the states basically until now to submit their proposals for broadband deployment, and then have those go through a review process before shovels start digging), putting those programs into jeopardy if one of Biden’s political opponents won the election instead of him or an ally.
My guess is that Trump diverts that money to subsidizing Musk’s satellite broadband network Starlink instead of building more resilient fiber networks, as was originally intended.
I lead a development team integrating “AI” LLMs into business systems. We use models from a variety of sources, both those we host and those available only by API. The main trend I see over the last year is that the models are becoming more consistent, in that they mostly do what you ask instead of making random excursions into nonsense. In this there has been major progress. However, the quality of the deeper semantics has been growing more slowly and less consistently. The distortions have become more subtle and harder to detect. We are accustomed to judging the importance and truth of a statement in part by the quality of its language. A modern LLM writes quite well, but that guarantees nothing about its underlying veracity or utility. Critical thinking has never been more important.
I haven’t seen an AI hype article that mentions the importance of critical thinking. There will probably be an attack on that as an idea.
They claim skepticism comes from lack of AI literacy. Skepticism comes from having actual literacy or skill.
It is stochastic. You can’t guarantee quality.
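To make the “stochastic” point concrete: an LLM decodes by sampling each next token from a probability distribution, so the same input can produce different outputs on different runs. A toy sketch, with an entirely hypothetical token distribution rather than any real model’s API:

```typescript
// Toy next-token distribution for one fixed prompt (hypothetical numbers).
const nextToken: Array<[string, number]> = [
  ["approved", 0.55],
  ["denied", 0.3],
  ["pending", 0.15],
];

// Sample one token: the core step of LLM decoding at temperature > 0.
function sample(dist: Array<[string, number]>): string {
  let r = Math.random();
  for (const [token, p] of dist) {
    r -= p;
    if (r <= 0) return token;
  }
  return dist[dist.length - 1][0]; // guard against floating-point leftovers
}

// Ten runs of the same "prompt": the output varies, and any single run
// can land on the 15% tail. There is no per-query guarantee.
for (let i = 0; i < 10; i++) {
  console.log(sample(nextToken));
}
```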
These things should not be deployed, never see daylight in any capacity that might cause any harm. (Basically no use case is valid.)
Kind of hilarious that, in software engineering, reliable, consistent output for any input is so prized and dealing with precision is crucial; imagine banking software that randomly drops digits. For example, in JavaScript, merely representing money requires jumping through hoops because of how JavaScript understands real numbers, oops. And guess what sits atop the frontend of any Web-based ecommerce? JavaScript.
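For anyone who hasn’t bumped into this: JavaScript stores every number as an IEEE 754 double, so most decimal fractions are only approximations, and the usual hoop is to do money arithmetic in integer cents. A minimal sketch (written as TypeScript, which inherits JavaScript’s number type; the prices are made up):

```typescript
// IEEE 754 doubles cannot represent most decimal fractions exactly.
console.log(0.1 + 0.2);          // prints 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// So naive money math drifts, and the drift compounds across operations.
// The standard workaround: keep amounts in integer cents, and only
// convert to a decimal string for display.
const priceCents = 1999;                     // $19.99, stored exactly
const totalCents = priceCents * 3;           // 5997 (integer math is exact)
console.log((totalCents / 100).toFixed(2));  // "59.97"
```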
So now we’re throwing all that to the winds, and interfacing with a system that by definition is probabilistic in its responses.
This is, in a word.
Madness.
And this does not even touch on the irreconcilable costs of running LLM queries. It cannot be done profitably. This garbage is going to burn up the entire planet.
I guess someone said, wait, you think BTC mining is gonna burn up the planet? You ain’t seen nothing yet!
It seems like the 21st century will be humans’ most destructive yet.
The link in the article to the World Economic Forum article on AI literacy only weakly defines AI literacy, but the links provided to training resources are a pretty good start — I’ll accept “to know when AI is being used and evaluate the benefits and limitations of it in a particular use case that might impact us” as a reasonable starting point for literacy, as it doesn’t automatically require deep technical knowledge, but provides a basis for assessing AI’s effects.
But, this article’s authors want to preserve the public’s view that AI contains a fair amount of ‘magic’, which is antithetical to effective literacy. AI proselytizers have stepped back from the concept of ‘explainability’ — what were the defined factors and methods used that led an AI to present a particular answer — as they are reliant on a general public not understanding the ‘magic’ in order to command resources and wield power. So it has been with religion, so it is with AI.
The new Trinity: The Invisible Hand; Elon Musk; and AI. Father, Son and Holy Spirit.
Knowing Less About Foreign Countries Makes US People More Open to Having Them Bombed – Old Research
There is still nothing new under the sun. It’s the same trust-the-government, “scientists”, “experts”, priests, celebrities, etc.
In Frank Herbert’s Dune there are many references to the “Orange Catholic Bible”, in which there are these statements:
Thou shalt not make a machine in the likeness of a human mind. Thou shalt not disfigure the soul.
There are many cautionary tales in Science Fiction that revolve around the “thinking machine” that escapes the control of the oh-so-clever humans who constructed it with the best, or was it the worst, of intentions. “AI” is a venture into making a machine in the likeness of a human mind, bad enough, but doing so for profit, and to reduce the employment of humans to further increase profit, compounds the offense. There is much more to it than that, and as I look at it, nothing good. I would not touch it with a barge pole.
This has always seemed especially prescient to me:
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them,” Reverend Mother Gaius Helen Mohiam, from the first book.
As always it’s the power relationships that matter
I’m not afraid of technology but I am afraid of the sociopaths who wield said technology
If it worked as advertised, those adopting it would quickly become “the people who no longer know how to do anything” due to skill destruction.
If it doesn’t work as advertised, those adopting it would quickly become “the people who wasted resources on chatbots” – resources that could have gone elsewhere.
Clearly a modern marvel…
AI is gawdawful on any of the bill paying sites (Synchrony, Xfinity, Verizon). Not helpful at all. I am not impressed. I would rather deal with the overseas call center than AI.
AI’s popularity depends entirely on its novelty, and in a few years, for better or worse, it will be part of our everyday lives, and no one will care.
I can’t help feeling that Idiocracy is the end state of relying on AI.
First off, I’m really glad you & Micael T included a link to that one rant. I don’t remember where I first saw it (maybe here), but it’s a funny one. Sometimes, all the rational arguments in the world can’t match a well-delivered suplex. Cue the Macho Man Randy Savage voice: “Oh, yeah!”
Overall, I think the thesis of this article checks out. Most of the people I know who are AI-skeptics work with computers or have a strong background with them. The point about people who know more about AI preferring it in relatively low-stakes or very bounded problems also checks out. I really do believe the LLM fetish in particular among American tech companies is going to turn out to be a world-historical waste; meanwhile, there probably isn’t a controls theory problem on earth that couldn’t benefit from a tiny ML optimizing phase.
To JMH’s comment, while Dune’s always a fun book to find interesting social commentary in, I’ve told other people before that I think it’s weirdly prescient for things like AI and deepfakes. More than the “thinking machines” themselves though, I’ve always thought the idea of lasers & energy shields ironically leading people to revert to swords was insightful.
Beyond all the other issues Yves mentioned, there’s a huge element of “why would I even want on this ride?” If corrupted AI slop becomes enough of the digital content in the world, not necessarily even a plurality, people will eventually just revert to trusting analog communication more (and in the case of deepfakes, maybe only news delivered in-person through reliable friends).
Even in more focused business applications though, I’ve started interpreting “AI improved my productivity” as “much of my job consists of tasks that probably shouldn’t exist”. Like transcribing meeting minutes seems to be a recurring theme, and it always makes me think, “Enabling more meetings is your idea of progress? Bruh.”
Talk about missing the elephant in the room. The authors don’t even consider the possibility that perhaps the ones with more understanding are the ones who know what they’re doing, and the ones with less understanding are making a mistake. They instead conclude that we should all aspire to be like the ignorant cohort, deliberately avoiding any deeper understanding so that we don’t lose our sense of childlike wonder.
This is also the strategy of scammers and fraudsters and the kind of person they target, a point you’d think would be worth making.
Sorry to bring it up, but have you forgotten the grotesque proportion of Americans who truly, truly, truly believe in the existence of angels? Thinking about it, probably the same proportion of Americans who have utterly wretched skills in reading, writing and spelling. So, wave the flag and consult your “phone” for everything else. Oh it’s grim, indeed it is. Or should I have written: Oh it’s, yuh know, like, grim…
The Director of the place I work for pulled out an AI Marketing Plan she had concocted. Honestly, it was perfect. Completely meaningless words in English stuck together to seem like enlightening info but just the same old marketing-speak word-salad.
It’s PEAK AI; it’s never going to do anything better than that. I’d sign Larry Ellison up for personalized AI mRNA vaccines, all he can get.
All-Source Intelligence – Substack
The next year of AI target analysis
The premier nonprofit partner of US satellite imagery analysis and military targeting hosted the US Gov. product manager of Anthropic on Wednesday for a discussion on the future of generative AI.
by Jack Poulson
https://jackpoulson.substack.com/p/the-next-year-of-ai-target-analysis