You’ve probably heard the term “AI hallucination.” It describes what happens when data is missing: the AI makes something up that makes sense to itself. There’s a huge problem here: I don’t need a machine that thinks like I do.

We keep hearing that Artificial Intelligence is smarter and more reliable than the human brain for a lot of tasks. It often is, and like most of us I could use all the help I can get. But what if that AI bot has the same bad habit I do when it comes to processing information?

The goal of AI is to augment the human brain and help us process information. It holds millions of bits of data we can’t possibly retain, and it processes them in ways that examine, choose, and then follow the models that get us the best outcomes. So far so good.

But when there are gaps in the information pool, or something doesn’t make sense, AI will fill in the gaps to the best of its ability using the most likely explanation. The current term for this is “hallucinating.” That’s cute, but it’s incorrect. AI doesn’t suddenly see pink elephants or my dead aunt in the corner of the room. In fact, it’s really good at screening out things that make no sense and seem completely incongruous. What it does is more insidious: it just makes up stuff that makes sense and passes it off as believable data.

That’s actually my own worst human trait. I have a hard time admitting I don’t know something, so I make statements that sound perfectly sensible in context, and because I say them with confidence, I usually get away with it. That’s what AI does, and just like working with me, accepting what it says at face value can lead to problems.

A CASE STUDY IN AI BS

Let me give you an example (and you’re about to learn more about me than you want to). A while ago, rather than struggle over writing a bio for my writing work, I asked ChatGPT for “a two-paragraph bio of Author Wayne Turmel.”

This is the kind of task we’re told AI is particularly good at. It came back with two paragraphs of well-written information that made me look great and contained far fewer grammatical errors than anything I would write. It also flat-out lied.

Besides extolling my actual work, it gave me a college degree I don’t have from a university I didn’t attend and credited me with a book title I didn’t write. How does that happen?

I am one of the few people in my industry (especially among those who speak and write frequently) who don’t have at least a four-year college degree. I attended a tech school in British Columbia, Canada, and have a two-year associate’s degree in Broadcast Journalism. Other than one line in my Facebook profile, that’s not listed in any of the easily searched information about me. Since my industry usually requires a degree, and it couldn’t find evidence of one, the AI took disjointed information and made what my father would call “a wild-a** guess.”

Looking at this forensically, here’s my best stab at the reasoning: Since I’m a recognized expert in the field of training, remote work, and communication, and since the majority of people in this field attended university, I must have a college degree, likely in Communication. Most of what appears online about my early career dates from when I was working and living in Chicago, so that’s probably where I went to school. I have spoken a couple of times at events at Loyola University. Thus, ChatGPT confidently proclaimed, “Wayne Turmel has a degree in communications from Loyola University in Chicago.”

Except I don’t. AI found a gap in its data, made a reasonable guess based on available information, and passed it off as fact. It didn’t say, “Based on what we know, it’s likely that…” It made a blanket statement that almost anyone would accept without question.

Of course, since I know my own history, I caught this mistake easily before I cut and pasted it into a larger document. But what if I didn’t already have that context? Or worse, what if I wanted to puff myself up to look more qualified or better educated than I really am? This could easily be used to commit fraud or for other malicious purposes.

Now comes the embarrassing admission: when faced with situations where I lack hard evidence, or I’m in a meeting and it’s clear I don’t have all the facts, I make stuff up. I argue facts not in evidence. It’s been my worst habit my entire life and I’m working on it, I swear.

AI does the same thing. And it’s better at it than I am.

The ease of using and relying on AI is one of its biggest dangers. Accepting what we are told at face value is easier than applying critical thinking. The machine MUST know something I don’t, so I accept what I’m told. In the most likely scenario, I’m in a hurry and fact-checking takes time I don’t have in my busy day. The next thing you know, the world thinks I’m walking around with a degree from a prestigious college. Because that’s now in the model the AI uses, it becomes a fact and gets built on. Someday I could get a Nobel Prize.

Please note: I am not against AI any more than I’m against hammers, guns, or Twitter. They all have their uses and their inherent problems. Uncritical use of any of them can lead to very bad things. Here’s what worries me: I am self-aware enough to know when I’m acting in bad faith or passing on incorrect information.

AI isn’t. At least not yet.


Wayne Turmel has been writing about how to develop communication and leadership skills for almost 26 years. He has taught and consulted at Fortune 500 companies and startups around the world. For the last 18 years, he’s focused on the growing need to communicate effectively in remote and virtual environments.
