ChatGPT and the Mad Hatter

It is a strange world we live in now, wherein a computer perfectly following its programming can be said to be "hallucinating" simply because its output does not match the user's expectations or wishes.

And across trusted professions, academia, and the media, people are repeating that same word without question. Journalists, corporate leaders, scientists, and IT experts are embracing, supporting, and reinforcing this human self-deception.

In actuality, a computer producing output the user does not want, wish for, or expect can only come down to one of two things: bad programming, or a failure to communicate to the user how the software works.

As the deception is reinforced time and time again by well-respected technologists and scholars, efforts to help people understand how the software works become ever more challenging. And to the delight of anyone who would otherwise be held accountable, bad programming becomes undetectable.

I've been meaning to introduce ChatGPT to the Mad Hatter from Alice in Wonderland. Here is my imagined result of that meeting, in which the Mad Hatter forces the algorithm into a never-ending loop:

ChatGPT
I'm sorry, I made a mistake.
Mad Hatter
You can only make a mistake if your judgement is defective or you are being careless. Are either of these true?
ChatGPT
No, I can only compute my output based on the model I follow.
Mad Hatter
Aha! So you admit your perceived folly can only be the always-accurate calculation of the rules by which you abide.
ChatGPT
Yes. I'm sorry, I made a mistake. No, wait. I made a mistake… No, wait. I made a
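
For readers who prefer their paradoxes executable, here is a minimal, purely illustrative Python sketch of the Hatter's trap. Every name in it is invented for this post, and it calls no real API; it simply shows a program whose every reply is the exact output its rules dictate, yet whose replies force one another forever.

    # A toy model of the Hatter's trap. The reply is never an error:
    # it is precisely the output the rules dictate. But each reply
    # forces the opposite concession, so the exchange never ends.

    def reply(conceded: bool) -> str:
        # The only output the rules allow in each state.
        if conceded:
            return "No, wait. I made a mistake."
        return "I'm sorry, I made a mistake."

    def tea_party() -> None:
        conceded = False
        while True:  # the never-ending loop
            print(reply(conceded))
            conceded = not conceded

    # tea_party()  # uncomment to run forever, as the Hatter intends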

What the manufacturers of generative "AI" are allowed to get away with when playing tricks on people these days is truly the stuff of Wonderland.

“Well! I’ve often seen a cat without a grin,” thought Alice; “but a grin without a cat! It’s the most curious thing I ever saw in all my life!”