Why we trust AI when it makes things up

I went to an engineering conference and, accidentally, learned something profound about human nature. It started innocently enough – “The All Things AI Summit” in Durham, NC, had too good a title to pass up.
What I didn’t expect was to find myself among 2,500 developers, nodding along as whurley (yes, that’s his real name), CEO of the quantum computing company Strangeworks, dove deep into quantum computing and AI. I was in over my head. But sometimes that’s where the best insights hide.
It wasn’t until Luis Lastras, director of language and multimodal technology at IBM, started talking about “small models” that something finally clicked. Luis made a point that struck me, one I hadn’t considered before: hallucination, he said, is intentional.
Intentional?
According to Luis, hallucination is part of how these models work. Because the models are built to be helpful, they don’t filter their output – at least not yet. Imagine letting your grandfather, who has lost his filter, loose at a dinner party.
It’s one of the things IBM has learned from working with smaller models. These models verify their results in specific domains as they generate them, which reduces hallucinations.
Anyone who has worked with AI has experienced hallucinations – from fabricated sources to just plain wrong numbers. Lastras described them as small pieces of information the AI thinks will be helpful, even though nobody asked for them.
He showed a demo prompt asking how many moons Mars has. The answer came back with two, their names, and one thing more – the distance from Earth, which wasn’t asked for. That distance may have been correct, but confirming it would require an extra step, and it may just as easily have been wrong.
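That extra verification step can be sketched in a few lines. This is a toy illustration, not IBM’s actual mechanism: the `TRUSTED_FACTS` table and the `check_answer` helper are hypothetical stand-ins for a real reference source, used here to show how unrequested claims get flagged while requested ones get checked.

```python
# Toy sketch: flag model claims that go beyond what was asked,
# and verify the ones that can be checked against a trusted source.
TRUSTED_FACTS = {  # hypothetical stand-in for a real reference source
    "mars_moon_count": 2,
    "mars_moon_names": {"Phobos", "Deimos"},
}

def check_answer(asked_for: set, claims: dict) -> dict:
    """Sort claims into verified, wrong, and unsolicited buckets."""
    report = {"verified": [], "wrong": [], "unsolicited": []}
    for key, value in claims.items():
        if key not in asked_for:
            report["unsolicited"].append(key)  # extra info nobody asked for
        elif TRUSTED_FACTS.get(key) == value:
            report["verified"].append(key)
        else:
            report["wrong"].append(key)
    return report

# The Mars demo: we asked only for the count and the names.
claims = {
    "mars_moon_count": 2,
    "mars_moon_names": {"Phobos", "Deimos"},
    "mars_distance_from_earth_km": 225_000_000,  # never requested
}
report = check_answer({"mars_moon_count", "mars_moon_names"}, claims)
print(report)
```

The unrequested distance lands in the “unsolicited” bucket, which is exactly the kind of extra that slips past a trusting reader.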
So why does this matter?
Because people tend to believe that AI is always right.
In an Elon University survey of 500 AI users (US adults) last year, about 70% believed AI models were at least as smart as they are, and 26% believed the models were “much smarter.”
What is more concerning is that we believe AI thinks like a human. A Wall Street Journal article, “Even Smart People Believe AI Really Thinks,” notes: “Our cognitive biases have evolved to help us survive in complex environments… [We have] evolved to view fluency as a proxy for intelligence, engagement, and usefulness as indicators of reliability.”
The same instinct that led us to trust our fellow humans for survival leads us to trust systems that seem to listen, understand, and want to help us.
So, the more AI tools and bots act like humans, the more likely we are to trust them. Which brings us back to hallucinations. The more helpful an AI tool seems, the more likely we are to miss that “little extra” piece of information that wasn’t asked for.
Bottom line
The combination of intentional hallucinations and our deeply wired instinct to trust eloquent, helpful communicators creates a perfect storm of misplaced confidence.
As AI tools grow more complex and human-like, our evolutionary instincts will make it harder to maintain the critical distance needed to catch errors, embellishments, and unsolicited additions that slip by.
The good news is that awareness is the first step. Whether it’s IBM’s small models verifying results in real time or simply slowing down to double-check what AI gives us, the antidote to millions of years of cognitive bias is refreshingly simple – a healthy dose of human skepticism.


