The Trampoline Bunnies That Broke My Heart
A few weeks ago, I fell for trampoline bunnies.
Not a metaphor. An online clip: rabbits bounding joyfully on a backyard trampoline, ears flopping in slow motion, fur catching golden light. Pure delight. I watched it three times, sent it to my daughters with “I love this!!!” and briefly considered buying a trampoline for rabbits I don’t even own.
Then I found out: completely fake. AI-generated.
My heart broke a little.
In May 2023, a single AI-generated photo of an explosion near the Pentagon erased hundreds of billions of dollars in market value within minutes. Traders panicked. The S&P 500 briefly dipped before the image was debunked.
Same technology that fooled me with bunnies. Vastly higher stakes.
Ancient Hardware, Modern Chaos
The truth: our brains aren’t broken. They’re just outdated.
We’re all running a 300,000-year-old operating system, tuned for survival on the savannah, now trying to process TikTok, detect deepfakes, and fact-check viral tweets. It’s like using a paper map to navigate a city that rebuilds itself every night.
Our ancestors’ brains were shaped by a single brutal rule: if you saw it or heard it, it was real. If a face looked angry, you stayed alert. If you heard a child cry, you responded. If food looked rotten, you didn’t eat it. There was no such thing as “synthetic,” only authentic. Dangerously, brutally authentic.
We even evolved specialised regions like the fusiform face area (FFA), a kind of always-on facial recognition “security” system. It’s why babies stare at faces, why you spot your friend in a stadium crowd, why you see faces in clouds or toast. That false-positive bias, pareidolia, was a survival feature. Better to imagine a face that wasn’t there than miss one that was.
But those security guards are still using a Stone Age manual to check IDs in a 2025 nightclub filled with synthetic guests.
The Hacks That Get Us
Faces. A 2022 UC Berkeley study tested 400 AI-generated faces against real ones. Participants rated the fakes as 7.7% more trustworthy. Asked to identify which were real, they scored 48%, worse than a coin flip. Why? AI “hyperfaces” are smoother, more symmetrical, hitting all the ratios our brains evolved to trust.
Voices. Familiar voices are processed in about 250 milliseconds, before conscious thought can catch up. Deep learning can now clone a voice from three seconds of audio. In July 2025, a scammer impersonated US Secretary of State Marco Rubio, contacting foreign ministers and a US governor via Signal. The voice was convincing enough that senior officials nearly engaged with what authorities suspect was an attempt to harvest sensitive information.
The Smoothness Trap. Psychologists call it the fluency heuristic: if something feels smooth and easy to process, we believe it. That’s why a polished deepfake often seems more real than shaky authentic footage. Production value becomes truth value.
Your Glitchy Brain
Want proof of how unreliable we already are? Phantom vibration syndrome.
One study found that 78% of medical interns felt their phones vibrate when they didn’t. By the end of internship, the figure was 95%. Nearly every brain in the study hallucinated vibrations that never happened.
If we can invent false vibrations, how can we trust ourselves to sort synthetic from real?
And there’s a cost. Each time you stop to ask “Is this real?” you’re activating what Nobel laureate Daniel Kahneman called System 2 thinking—the slow, deliberate mode we use for math problems, legal documents, or parallel parking. It’s effortful. It burns energy.
Now we’re using it constantly, just to maintain baseline reality. That’s a Reality Tax. And we’re all paying it.
When Training Makes Things Worse
The common response? Train people to spot deepfakes.
A 2023 Royal Society study tested this. Without warnings, 33% of participants correctly identified synthetic videos. With warnings that some clips might be fake? Accuracy dropped to 21.6%. People became hypersensitive, seeing fakes everywhere—marking real videos as fake more often than they caught the actual fakes.
The cure made the disease worse. Doubt spread faster than detection.
Crossing the Uncanny Valley
In 1970, roboticist Masahiro Mori coined the “uncanny valley”—that eerie feeling when something looks almost but not quite human. For decades, it was our safeguard. CGI faces felt “off.” We rejected them.
But AI has erased the valley. Synthetic faces no longer creep us out. In fact, studies show we often rate them as more trustworthy than real ones.
Evolution didn’t prepare us for humans that never existed but look more human than we do.
The Bottom Line
Our Stone Age brains are magnificent. They got us from the savannah to the stars. But they’re not built for a synthetic world.
The trampoline bunnies broke my heart a little. The technology behind them? That’s breaking trust at every level: elections, reputations, even retirement accounts.
Here’s what gives me hope: understanding why we’re vulnerable is the first step to adapting. Not through paranoia, not through endless vigilance, but by recognising that falling for synthetic media isn’t a personal failure.
It’s a species-wide software update. And we’re all installing it together.