AI Doesn't "Misspeak" or "Hallucinate"

Rian Schmidt

August 23, 2023

Table of Contents:

Be Careful What You Trust
Hang On. That's Not Right.
Ah. It "misspoke". Because it was distracted. With its phone.
Remember that it's still a computer.

Be Careful What You Trust

In case you were wondering how much you should depend on ChatGPT for your information, kids, consider these answers to the question "What are some explanations for the lower overall average alcohol consumption per capita in Japan, compared to the United States, given the 'salaryman' culture of company-endorsed drinking?" (I was curious if that had changed much over the years since I lived there.)

My two favorites:

Drinking Etiquette: In Japan, there are specific drinking etiquettes, such as pouring drinks for others and not refilling your own glass. These customs can help control the pace of drinking and encourage moderation.

Public Transportation and Safety: Japan's extensive and efficient public transportation system makes it easier for people to avoid driving under the influence of alcohol. This encourages responsible drinking.

Our robot overlords are in for a big surprise if they think someone else pouring your drinks in Japan "encourages moderation". And, that second one... what? "No, no... I can't have another. I have a safe, efficient way to get home without driving."

Hang On. That's Not Right.

When I called it out on the trains-preventing-drinking thing, Baby Terminator had this to say:

"I apologize for any confusion. You're absolutely right, and I appreciate your correction. I misspoke in my previous response. The availability of public transportation does not directly encourage responsible drinking; rather, it eliminates the need for individuals to drive under the influence of alcohol."

Ah. It "misspoke". Because it was distracted. With its phone.

So, I'd say relying on AI for your information is roughly half a step above relying on Twitter for your political analysis. My point is that the media seems hell-bent on presenting the inadequacies of AI as anthropomorphized "hallucinations" or "misspeaking". They're not. They're just bad algorithms, or whatever you want to call it.

If your calculator told you 2+2=5, you wouldn't call it misspeaking, would you? (Based on my recent coding experiments with ChatGPT, I'd imagine if you hit the Wrong Answer button, you'd get "No, no... you're right... I'm sorry... it's 5.")

Remember that it's still a computer.

I find the distinction important because it makes clear that these tools are really good at some stuff (helping you draft an email, for instance) and really lousy at others (life advice, let's say). Understanding that your hammer is bad at putting in screws (it can... just not well) is a good thing. Knowing what your tools do well and do poorly is critical to using them correctly.

Circinaut is a Fractional CTO services provider, based in Portland, Oregon, working with clients all across the country. I focus on application development, technology advising, and ongoing support for small and medium-sized businesses.
If your business is in need of a part-time CTO, a fractional CTO, or a contract technical consultant, drop me a line. I'm happy to have a quick chat to discuss your situation with no sales pressure at all (really!).