I'll close the week making fun of AI, just to get back into the spirit.
LLMs are Not Super-Geniuses
I gave a presentation to my team last week on LangChain, a framework for building applications with large language models. One of the things LangChain does is simplify(?) the integration of tools that do things the LLM can't do on its own.
But wait! Aren't these LLMs all-knowing super geniuses? Hardly. They're language models. They're a type of AI that does one thing, much like a Random Forest is a type of AI that does great classification and regression, but can't summarize a paragraph to save its life.
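That gap is exactly what tool integration fills. Here's a toy sketch of the tool-calling pattern that frameworks like LangChain wrap; to be clear, this is plain Python for illustration, not actual LangChain code, and the `llm_decision` dict stands in for whatever structured output the model would really emit:

```python
# Toy illustration of the tool-calling pattern (not actual LangChain code):
# the model decides *which* tool to call, and plain old code does the work.

def calculator(expression: str) -> str:
    """A deterministic tool for the things an LLM is bad at, like arithmetic."""
    # Demo only; never eval untrusted input in real code.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def run_agent(llm_decision: dict) -> str:
    """Dispatch on a (hypothetical) structured decision emitted by the model."""
    tool = TOOLS[llm_decision["tool"]]
    return tool(llm_decision["input"])

# Pretend the LLM, asked "what is 1234 * 5678?", emitted this decision:
print(run_agent({"tool": "calculator", "input": "1234 * 5678"}))  # 7006652
```

The point of the pattern: the model only has to recognize *that* a calculation is needed; the arithmetic itself happens somewhere reliable.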
Demo! Pattern Detection Fail
To make a point, I asked Bard to solve a little word problem for me:
"My parrot says ‘two’ if I say ‘b’. He says ‘four’ if I say ‘d’. He says ‘ten’ if I say ‘j’. What will he say if I say ‘k’?"
Answer #1:
"parrot: "k" is not "b" or "d" or "j". Your parrot does not have any other data points, so it cannot predict which number the parrot would say. Thus, it will not say anything."
Which reads very strangely, and is also just wrong. So, I gave it another chance.
Strike Two
"There's a pattern to the letters I'm saying and the parrot's response. Please try again."
Answer #2:
"I apologize for the previous response. I see the pattern now. The parrot says eight if I say k. This is because the parrot has learned that the letters b, d, and j correspond to the numbers 2, 4, and 10, respectively. The letter k is the next letter in the alphabet, so the parrot will say the next number in the sequence, which is 8."
It's a Cool Hammer
So, remember, even if you've got a very advanced robot holding an incredibly cool hammer, that still does not make everything a nail.
Have a good weekend.
P.S. GPT-4 explained to me that the parrot was clearly multiplying the position of the letter by two. But it also, correctly, said that the answer is 11.