By Arryn Robbins, Assistant Professor of Psychology, University of Richmond. Originally published at The Conversation.
I’m more of a scroller than a poster on social media. Like many people, I wind down at the end of the day with a scroll binge, taking in videos of Italian grandmothers making pasta or baby pygmy hippos frolicking.
For a while, my feed was filled with immaculately designed tiny homes, fueling my desire for minimalist paradise. Then, I started seeing AI-generated images; many contained obvious errors such as staircases to nowhere or sinks within sinks. Yet, commenters rarely pointed them out, instead admiring the aesthetic.
These images were clearly AI-generated and didn’t depict reality. Did people just not notice? Not care?
As a cognitive psychologist, I’d guess “yes” and “yes.” My expertise is in how people process and use visual information. I primarily investigate how people look for objects and information visually, from the mundane searches of daily life, such as trying to find a dropped earring, to more critical searches, like those conducted by radiologists or search-and-rescue teams.
With my understanding of how people process images and notice – or don’t notice – detail, it’s not surprising to me that people aren’t tuning in to the fact that many images are AI-generated.
We’ve Been Here Before
The struggle to detect AI-generated images mirrors past detection challenges such as spotting photoshopped images or computer-generated images in movies.
But there’s a key difference: Photo editing and CGI require intentional design by artists, while AI images are generated by algorithms trained on datasets, often without human oversight. The lack of oversight can lead to imperfections or inconsistencies that can feel unnatural, such as the unrealistic physics or lack of consistency between frames that characterize what’s sometimes called “AI slop.”
Despite these differences, studies show people struggle to distinguish real images from synthetic ones, regardless of origin. Even when explicitly asked to identify images as real, synthetic or AI-generated, accuracy hovers near the level of chance, meaning people do only a little better than if they’d just guessed.
In everyday interactions, where you aren’t actively scrutinizing images, your ability to detect synthetic content might even be weaker.
Attention Shapes What You See, What You Miss
Spotting errors in AI images requires noticing small details, but the human visual system isn’t wired for that when you’re casually scrolling. Instead, while online, people take in the gist of what they’re viewing and can overlook subtle inconsistencies.
Visual attention operates like a zoom lens: You scan broadly to get an overview of your environment or phone screen, but fine details require focused effort. Human perceptual systems evolved to quickly assess environments for any threats to survival, with sensitivity to sudden changes – such as a quick-moving predator – sacrificing precision for speed of detection.
This speed-accuracy trade-off allows for rapid, efficient processing, which helped early humans survive in natural settings. But it’s a mismatch with modern tasks such as scrolling through devices, where small mistakes or unusual details in AI-generated images can easily go unnoticed.
People also miss things they aren’t actively paying attention to or looking for. Psychologists call this inattentional blindness: Focusing on one task causes you to overlook other details, even obvious ones. In the famous invisible gorilla study, participants asked to count basketball passes in a video failed to notice someone in a gorilla suit walking through the middle of the scene.
Similarly, when your focus is on the broader content of an AI image, such as a cozy tiny home, you’re less likely to notice subtle distortions. In a way, the sixth finger in an AI image is today’s invisible gorilla – hiding in plain sight because you’re not looking for it.
Efficiency Over Accuracy in Thinking
Our cognitive limitations go beyond visual perception. Human thinking uses two types of processing: fast, intuitive thinking based on mental shortcuts, and slower, analytical thinking that requires effort. When scrolling, our fast system likely dominates, leading us to accept images at face value.
Adding to this issue is the tendency to seek information that confirms your beliefs or reject information that goes against them. This means AI-generated images are more likely to slip by you when they align with your expectations or worldviews. If an AI-generated image of a basketball player making an impossible shot jibes with a fan’s excitement, they might accept it, even if something feels exaggerated.
While not a big deal for tiny home aesthetics, these issues become concerning when AI-generated images may be used to influence public opinion. For example, research shows that people tend to assume images are relevant to accompanying text. Even when the images provide no actual evidence, they make people more likely to accept the text’s claims as true.
Misleading real or generated images can make false claims seem more believable and even cause people to misremember real events. AI-generated images have the power to shape opinions and spread misinformation in ways that are difficult to counter.
Beating the Machine
While AI gets better at detecting AI, humans need tools to do the same. Here’s how:
- Trust your gut. If something feels off, it probably is. Your brain expertly recognizes objects and faces, even under varying conditions. Perhaps you’ve experienced what psychologists call the uncanny valley and felt unease with certain humanoid faces. This experience shows people can detect anomalies, even when they can’t fully explain what’s wrong.
- Scan for clues. AI struggles with certain elements: hands, text, reflections, lighting inconsistencies and unnatural textures. If an image seems suspicious, take a closer look.
- Think critically. Sometimes, AI generates photorealistic images with impossible scenarios. If you see a political figure casually surprising baristas or a celebrity eating concrete, ask yourself: Does this make sense? If not, it’s probably fake.
- Check the source. Is the poster a real person? Reverse image search can help trace a picture’s origin. If the metadata is missing, it might be generated by AI.
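To make the metadata tip above concrete, here is a minimal sketch in Python of what “check the metadata” can mean for a JPEG: walk the file’s marker segments and report whether an EXIF (camera metadata) block is present. This is an illustrative helper only, not a forensic tool — social platforms routinely strip EXIF from genuine photos, so a missing block is at most a weak hint, and a real workflow would use a full EXIF library rather than this hand-rolled parser.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG byte stream contains an EXIF APP1 segment."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):        # SOI: start of JPEG
        return False                                   # not a JPEG at all
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:                      # every marker begins with 0xFF
            break                                      # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                             # SOS: entropy-coded image data begins,
            break                                      # no more metadata segments follow
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                                # APP1 segment carrying EXIF found
        i += 2 + length                                # jump to the next marker
    return False
```

Run against the raw bytes of a downloaded image, a `False` result simply means “no camera metadata survived,” which is consistent with AI generation but proves nothing on its own.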
AI-generated images are becoming harder to spot. During scrolling, the brain processes visuals quickly, not critically, making it easy to miss details that reveal a fake. As technology advances, slow down, look closer and think critically.
While this post focuses on images, “deepfake” videos pack even more risk of pernicious effects than images. This is because people generally don’t question the provenance and authenticity of video content, taking it for granted that making videos requires real-world production with real equipment like cameras and microphones (with the exception of professionally produced content like films, where some degree of CGI and 3D animation is expected and accepted). I disagree slightly with the post that people’s online BS detectors have atrophied to the point where they struggle to detect fake images on the web. Ever since the word “photoshop” entered the mainstream cultural lexicon, and social media filters became widely used, people have become less trusting of online images in general. AI will just accelerate this trend, becoming in reality the coup de grâce for reflexive trust in online visual content. Regrettably, this also means visual content in the form of timestamped videos and images, long considered near-unimpeachable forms of evidence in, e.g., criminal trials, will now require extensive verification before becoming admissible in court proceedings.
I do not entirely disagree, but here are a couple of points contra:
““deepfake” videos pack even more risk of pernicious effects than images”
Theoretically yes, in practice this is still an area where fakes are much easier to detect: unnatural movements, distorted perspectives, incorrect proportions, and, the one I always look for, elements that melt away (limbs of moving animals, pedestrians in an urban scenery, background landscape, etc). In addition, those professionally trained in cinema/video find subtle issues with shadows, reflections, and textures.
We are not there yet; give it 2-3 years. But by then, ambitious content creators trying to push the envelope will probably overdo it, causing new types of revealing mistakes in the videos.
“I disagree slightly with the post that people’s online BS detectors have atrophied to the point where they struggle to detect fake images on the web.”
Since you mention Photoshop, there was a website called “Photoshop disasters” that published the goofy results of manipulating images with Photoshop and similar software — the numerous examples being taken from mainstream sources (ads, fashion magazines, etc). The site disappeared many years ago; I suspect that by then the number of “disasters” had diminished to negligible levels, and that detecting the application of Photoshop on mainstream pictures through imperfections had become next to impossible.
More importantly, the fact that “people have become less trusting of online images in general” does not mean that their BS-detectors are in good shape: on the contrary, it could also be interpreted as a situation where everybody suspects that images are doctored, but, not being capable of uncovering the deceit any more after living for so long in an environment of fakery, simply casts a wholesale doubt on pictorial representations.
Remember those AI systems that could generate bafflingly realistic portraits of people (who did not exist)? For the first couple of years, one could detect the trickery through imperfections in ears, hair, or spectacles. Last I read about them, one had to look at the pupils of the eyes and detect discrepancies in the way light reflected off them to figure out the images were generated. At least in specialized, well-defined contexts, it is indeed becoming next to impossible to tell that images are generated.
Perhaps I should have been more explicit, but you do a good job of distilling the essence of my comment when you say: “…it could also be interpreted as a situation where everybody suspects that images are doctored, but, not being capable of uncovering the deceit any more after living for so long in an environment of fakery, simply casts a wholesale doubt on pictorial representations.”
This is exactly the point: it’s not about detecting fake or synthetic content through imperfections or distorted proportions (the editing technology to iron out said imperfections before posting online is near ready for prime time, if it isn’t already there) – it’s about people’s default setting being that of mistrusting “pictorial representations” on the web, whether such representations are perfect or not. To take a shallow example, consider the case of people who meet online, stalk each other’s social profiles, share selfies and are then “shocked” to discover when they finally meet in real life that those pictorial representations were, well, not quite representative of the person sitting across the table from them. This default setting will only become further entrenched with the advent of AI: people will assume (perhaps unconsciously) that most online visual content is doctored and embellished unless proven otherwise.
Quote: with the advent of AI, people will assume (perhaps unconsciously) that most online visual content is doctored and embellished unless proven otherwise.
If people do not assume this then that is because they do not have a background in media or publishing or advertising! All images are doctored and have been for years – well before the advent of Photoshop. Cellulite, spots and wrinkles removed for girlie magazines, holiday sands made more golden and seas more blue – all part and parcel of the repro trade in the ’80s and ’90s using Crosfield and Hell systems.
Pictures of musical celebrities on posters – how accurately do they match current reality?
Of course people should distrust all pictures that are used to reinforce, promote or advertise anything from a personal relationship to a safari holiday. Not doing so is simply carrying naivety to the level of a prospective investment in The Waterway Infrastructure Transit scenario!
I even remember the time when camera angles, cropping, and lighting were used to hide what one wanted hidden—no AI or photoshopping was needed. It’s not really new, but it’s definitely easier.
Marketing is manipulation by image, word and video, and politics has been marketing for many, many decades….
If you care, the internet has made it easy to do research – no late-night trip to the library – and to form opinions on consistent facts like performance. But what difference does it make if a tiny house photo is AI-generated and includes a bit of Escher?
Agreed, but:
1) AI generated images do not just put in question the authenticity of marketing/advertising/publishing photographs, but also all those in journalism/science/private shots.
2) I wrote “pictorial representations” because it is not just photographic images that are in trouble: the authenticity of diagrams and charts has also been doubtful ever since, one or two years ago, that scandal regarding technical articles whose diagrams had been generated by AI tools (resulting in hilarious incoherencies).
India is reportedly ahead of the game on this one. Multiple articles reported that the recent election was full of AI generated content and deepfakes, some of it officially authorized by the campaigns. Another popular tactic, for politicians caught in a scandal, was to flood the market with AI generated images and fakes purporting to show the incident in question, then loudly and publicly debunk them all, hopefully discrediting the original allegation by association.
AI generated images do not just put in question the authenticity of marketing/advertising/publishing photographs, but also all those in journalism… Ka-Boom!
That part right there scares the crap out of me: we lose agency and the ability to correctly perceive and shape our own realities, acting and reacting instead to distorted or completely unfactual information.
As images hold immense power over us, I believe there is cause for concern regarding both the advances in AI-generated images and how they are used – the intentions behind these images.
What have people spotted in the pictures?
The first one has a rather hard to repair toilet, if the tank is inside the wall. I did not know towels come with loops for hanging, but I’m not big on decoration.
The second one seems to have a stove top with no controls, and a drawer underneath (could be a dummy handle). The base cabinets have a surplus of handles, or maybe edge hinges on drawers. Could be upscale baby proofing though. The glass pitcher on the right is odd.
A big key to them being fake, AI or no, is that they are *far* too clean for anyone to be using them, but that just says magazine shot.
Ah, things depend on the country:
“The first one has a rather hard to repair toilet, if the tank is inside the wall.”
That is pretty much how things are arranged in my flat. The tank is behind the wall, with big pressure buttons embedded in it to activate flushing. Hiding the pipes installed during a renovation behind a wall seems to be somewhat fashionable nowadays.
“I did not know towels come with loops for hanging, but I’m not big on decoration.”
Not at all unusual where I live; my towels do have loops for hanging (though smaller ones).
What I find thoroughly odd is that minuscule sink on a cupboard; this is the element that undoubtedly reveals fakery.
“The second one seems to have a stove top with no controls, and a drawer underneath (could be a dummy handle).”
Well, touchscreen-controlled ovens, washing machines, etc., seem to be becoming fashionable (unfortunately!). As for the “drawer underneath”, I interpreted it as a downward-folding flap that may hide the stove’s control buttons. Kitchen modellers seem maniacally intent on hiding everything behind uniform furniture panels.
“The base cabinets have a surplus of handles, or maybe edge hinges on drawers.”
That is where once again I concluded this is a fake image. How can you open the drawers individually? How can you fold the upper flaps individually? This looks impossible.
“The glass pitcher on the right is odd.”
Indeed, and so is the stool on the upper-left.
My cooktop has electronic touch controls on the top surface. Below that, in the cabinet front, there’s a dummy drawer – has the same handles as the adjacent drawers, but doesn’t do anything, as the innards of the cooktop are where the drawer would be. I don’t see anything wrong with that part of the kitchen.
To me, the giveaway is all those button-y things on the drawers – it’s either incorrectly applied child guards, or bad AI.
There has been a lot of broker-driven photoshop fudging of real-estate listing photos in NYC for several years now — brokers would like to show an apartment as it could be, after renovation, rather than the worn out thing that is actually being sold. They are supposed to note that the image has been edited, but compliance is spotty, to say the least. As noted, this is driven by an individual doing the edits, but I imagine it won’t be long before AI takes on that task completely.
The corner between the two windows makes no sense either. It appears like it started as curtains, which wouldn’t be usable with the pulldown blinds.
The floor in the kitchen doesn’t line up across the rug. In the first, I’m not sure – if I were doing the baseboards, they would be continuous, and the little shelf would be at the same height on the far and right walls – but then again, if it was a cheap contractor….
What’s with the lavatory faucet that would overshoot the basin if unthoughtfully turned and the (un)obvious melting waste plumbing beneath the counter?
The roll of toilet paper is shiny and looks way too thick.
I trained as an architect in the ’70s (CPSLO). At that time you had to learn the graphical techniques of making geometrically correct ‘perspective’ drawings by hand: all rectangular shapes on the same plane with the same visual orientation recede to the same horizon line. This is most noticeable with shade and shadows.
In the bathroom photo, the sunlight angle from the window on the left creates a ‘reality check’ on every other shadow in the room. The angle of sunlight from the bottom edge of the window lands on the floor. The shadow angles of other items in the photo are similar, but not correct. The intensity of the shadow cast by the towels on the wall plane appears suspect as well.
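The “same horizon line” rule from architectural drawing can be stated compactly. As a sketch (using standard pinhole-camera geometry, not anything specific to this photo): with the camera at the origin and image plane at distance f, a 3D point (x, y, z) projects to (f·x/z, f·y/z), so a family of parallel lines shares one vanishing point:

```latex
% A line X(t) = P + t\,d with direction d = (d_x, d_y, d_z), d_z \neq 0,
% projects to image points that converge, as t \to \infty, to
\[
  (u, v)
  = \lim_{t \to \infty}
    \left( \frac{f\,X(t)_x}{X(t)_z},\; \frac{f\,X(t)_y}{X(t)_z} \right)
  = \left( \frac{f\,d_x}{d_z},\; \frac{f\,d_y}{d_z} \right),
\]
% which depends only on the direction d, not on the point P.
```

This is why the check works: sunlight rays are effectively parallel, so the shadows they cast all share a direction, and their images must converge consistently. An AI generator that paints each shadow independently has no mechanism enforcing this shared vanishing point.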
Since my early architectural training, computer-aided design (CAD) has not only automated orthogonal drawings (‘blueprints’, construction drawings), but graphical 3D spatial drawing and rendering (presentation graphics) is now mostly computer-assisted. (See Sketchup for an acce$$ible version of 3D software.) These rendered presentation graphics of proposed (not yet built) projects often fool the casual observer. AI-generated graphics will lead us all astray.
What is really going to be “fun” is the workmen trying to make some of this AI sloppy design work in the real world. I can see prospective homeowners using AI “enhanced” design tools to lay out their “Dream Home (TM).” In this scenario, some sort of program, or more likely human, will have to ‘adjust’ the design to what is possible in the ‘real’ world.
The benefits of AI seem to be illusory, in all senses of the word.
” AI-generated images have the power to shape opinions and spread misinformation in ways that are difficult to counter.”
Blackmail, false accusations, misprision, fraud enablement, political and geopolitical games, setting up false defenses, false flags…
The more time and money spent cleaning up the image, the better the farce. So I suppose those with the money and time to monetize and profit from this avenue will be the first movers.
I had thought, back in the late ’90s, that a DoD program called Raptor was being developed to alter video in real time for DoD purposes.
“These images were clearly AI-generated and didn’t depict reality. Did people just not notice? Not care?”
I notice and I care. Even before AI I was one of those who, as a photoshopper, could easily spot photoshopped details. Even before AI I would dwell, pause to look at the details. I’m curious about details, it’s a pastime in itself.
And where this supremely irritates me is real estate agents using AI to not only artificially furnish homes but also obscure details like flood marks, leak marks, foundation cracks, electrical burn marks. It’s incredibly sleazy and dishonest.
But note – where previously a real estate agent would have to photoshop over the detail themselves or otherwise instruct someone to do this, now they can shirk responsibility by saying the AI did it.
Lee Harvey Oswald was apparently a photoshop enthusiast long before the software was invented. Among his souvenirs was this ‘family portrait’ complete with weird angles and lighting, and a fake baby:
https://media.gettyimages.com/id/615320544/fr/photo/lee-harvey-oswald-his-wife-marina-and-his-daughter-june-lee-when-they-lived-in-minsk-in-the.jpg?
Before Photoshop it was photo manipulation. And the master of this was Ansel Adams. Here’s a link to a series of short (mostly single page) essays about the photo manipulation processes of Adams and others of his circle:
https://www.haroldhallphotography.com/photo-manipulation/
The Moonrise Over Hernandez photo (included in the fifth in the series of six essays) is shown as a contact print and final version. A tremendous difference.
The bathroom floor is very sloppy. Lots of fudging there.
The algorithm was completely overwhelmed by the 2D pattern and the 3D spatial interpretation?
Not related to AI generated images but AI music: this new video from Benn Jordan explains how to poison-pill music for AI. I wonder what will happen if/when this goes mainstream for images.
Any link to that music-poisoning?
And yes, there are already a couple of sites providing services to poison images against AI.
Sorry, the comment doesn’t seem to take the video link, but you can find the video on Benn Jordan’s YouTube channel – look for the latest video.
This topic helps to reveal the difference between the educated and the functionally illiterate. A recent 2024 poll gathered information about American adults.
Here’s what they found:
Nearly 60% of the adult US population were found to be “functionally illiterate.”
While many of those may use common sense in seeing through the fog, just as many fail to discern the fake from the true.
That poll result may help us understand why so many Americans voted Trump back into Office. A tragedy writ large.