Knowing Less About AI Makes People More Open to Having It in Their Lives – New Research

Yves here. I found this to be a truly depressing article about AI receptivity, since it ignores, as most discussions do, serious issues about the limits of AI and how it is being implemented without taking those limits into account. One tech industry veteran who regards AI as truly revolutionary (and he is a hype skeptic) points out that it’s like having a college freshman with a 3.9 GPA working for you: it regularly makes a very good first pass, but you still need to review the results. We’ve even seen readers post AI results in comments that are demonstrably wrong, particularly on things that ought to be basic and not hard to get right, like what certain legal terms mean. So please do NOT post AI output, since it will be assumed to be accurate when it often is not.

In keeping with this, even though some analyses report that AI readings of medical images achieve well above human accuracy levels, there are studies that beg to differ.

We have a more deliciously skeptical take here, I Will Fucking Piledrive You If You Mention AI Again (hat tip Micael T). A nugget from early in the piece:

And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper’s robes, stitched from corpulent greed and breathless credulity, spending half of the planet’s engineering efforts to add chatbot support to every application under the sun when half of the industry hasn’t worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business – not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.

The second issue is that the capitalist classes are using AI to eliminate yet more employment and also to reduce accountability. How do you penetrate increasingly AI-managed systems to get to a real human and dispute an arguable decision (or worse, one deliberately contrary to contract terms)? Who is liable if AI in a medical system makes a bad call and the damage to the patient is serious? AI is being rolled out well in advance of having answers to critical questions about how to protect public and customer rights.

By Chiara Longoni, Associate Professor, Marketing and Social Science, Bocconi University, Gil Appel, Assistant Professor of Marketing, School of Business, George Washington University, and Stephanie Tully, Associate Professor of Marketing, USC Marshall School of Business, University of Southern California. Originally published at The Conversation

The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.

Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.

This link shows up across different groups, settings and even countries. For instance, our analysis of data from market research company Ipsos spanning 27 countries reveals that people in nations with lower average AI literacy are more receptive towards AI adoption than those in nations with higher literacy.

Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to report using it for tasks like academic assignments.

The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response or plays a musical instrument, it can feel almost magical – like it’s crossing into human territory.

Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.

They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works) and computational models operate. This makes the technology less mysterious.
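For readers curious about that demystification, here is a minimal, purely illustrative sketch in Python (our own example, not from the study) of a toy bigram text generator: it “learns” by counting which word follows which in its training text, and “writes” by sampling from those counts. An empathetic-sounding output can emerge from nothing but arithmetic over data.

```python
# A toy "language model": a bigram generator that learns word-to-word
# transitions by counting, then produces text by sampling those counts.
# Purely illustrative -- real chatbots are far more complex, but equally
# devoid of feeling. (This example is ours, not from the study.)
import random
from collections import defaultdict

# Tiny "training data" of empathetic-sounding phrases.
training_text = "i hear you . i am here for you . you are not alone ."
words = training_text.split()

# "Training": record which word follows which.
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 10) -> str:
    """Produce text by repeatedly sampling an observed next word."""
    out = [start]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("i"))  # e.g. "i am here for you . i hear you"
```

Modern chatbots are incomparably more sophisticated, but the underlying point stands: the output is statistical pattern-matching over training data, not felt empathy.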

On the other hand, those with less understanding may see AI as magical and awe-inspiring. We suggest this sense of magic makes them more open to using AI tools.

Our studies show this lower literacy-higher receptivity link is strongest for using AI tools in areas people associate with human traits, like providing emotional support or counselling. When it comes to tasks that don’t evoke the same sense of human-like qualities – such as analysing test results – the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI’s efficiency, rather than any “magical” qualities.

It’s Not About Capability, Fear or Ethics

Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.

This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favour new tech, a phenomenon called “algorithm appreciation”, while others show scepticism, or “algorithm aversion”. Our research points to perceptions of AI’s “magicalness” as a key factor shaping these reactions.

These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.

To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance. By understanding how perceptions of “magicalness” shape people’s openness to AI, we can help develop and deploy new AI-based products and services that take the way people view AI into account, and help them understand the benefits and risks of AI.

And ideally, this will happen without causing a loss of the awe that inspires many people to embrace this new technology.


One comment

  1. vao

    I see the “readers posting AI results” (not to mention postings by writers) becoming more frequent, though not prevalent, in other contexts as well.

    I believe this increasing recourse to AI is caused by two major factors:

    1) Convenience: instead of gathering, filtering, perusing, and synthesizing information, one can just let an AI tool do all the work and present the final results. The only effort required is drafting the “prompt”.

    2) Search difficulties: Google search is nowadays so thoroughly crapified that one must not only sift through masses of advertisements and junk SEO-driven postings to find information, but also exhibit enough tenacity to keep looking, since relevant websites are frequently demoted, algorithmically or manually, so that they only appear on the nth results page. This makes resorting to AI tools all the more appealing.

    Of course, the whole hype about Artificial Intelligence prepares the ground for blissful acceptance of whatever ChatGPT and co. produce.

    Perhaps we should use “Intelligence” in the same way spies do: information obtained by doubtful means, from suspect sources, that must be considered untrustworthy until it has been thoroughly cross-checked and corroborated.

