It’s Not Intelligent, It’s an LLM

With all the controversy around AI, there’s one thing that people don’t seem to be talking about. And when they do, they talk about it the wrong way. There is a problem with AI we seem to have missed, and it isn’t with the AI itself.

It’s Wrong


ChatGPT is often criticised because it keeps getting things wrong. Take the famous case of the lawyers who tried to use it to prepare a legal brief. ChatGPT cited cases that didn't exist, and the filing was so bad that the lawyers were fined for it.

A quick search for “ChatGPT is wrong” will link you to various stories about how ChatGPT is easily manipulated, how it is dumb, and how bad it is at writing code. The criticisms are many, and they are loud.

I’ve even experimented with it myself, asking it to write some simple Python scripts. While the scripts I got technically worked, they weren’t concise, didn’t take advantage of the latest language features, and were pretty bloated. I had to keep giving it hints, and I still couldn’t get it to write a script that was as clean as my own implementation. I presented it with my own solution and it said:

In terms of simplicity and readability, both solutions are quite good. Your solution has the advantage of using pathlib, which provides cleaner syntax for path manipulation, while my solution is more concise in terms of code length.

Its solution was still the longer of the two.

ChatGPT is often wrong. It isn’t smart. And it will simply make up “facts” convincing enough to get lawyers into trouble. But all of this is missing the point. Because ChatGPT is just supposed to talk, to emulate human speech. It isn’t supposed to be right.

You Are Wrong


ChatGPT and its ilk are large language models. According to Wikipedia:

A large language model (LLM) is a language model notable for its ability to achieve general-purpose language generation and other natural language processing tasks such as classification. LLMs acquire these abilities by learning statistical relationships from text documents during a computationally intensive self-supervised and semi-supervised training process. LLMs can be used for text generation, a form of generative AI, by taking an input text and repeatedly predicting the next token or word.

“Repeatedly predicting the next token or word.” ChatGPT, boiled down to its core, is a speech generator. It is supposed to talk like a human would. You type something in, it gives a response. Nothing about it says that the response is supposed to be truthful or useful.
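A toy model makes the idea concrete. Real LLMs use neural networks trained on billions of tokens, but the generation loop is the same in spirit: look at the context, pick a likely next word, repeat. Everything below (the tiny corpus, the greedy decoding) is purely illustrative:

```python
from collections import Counter, defaultdict

# A toy "next word" predictor: count which word follows which in a tiny
# corpus, then generate text by repeatedly choosing the most common
# successor. Illustrative only; real LLMs are vastly more sophisticated.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=5):
    """Starting from `start`, repeatedly predict the next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # dead end: this word never had a successor
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-sounding, but with no notion of "truth"
```

Note that nothing in the loop checks whether the output is true; it only checks whether each word is statistically likely to follow the last one.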

AI has been pushed as the next big thing that will replace jobs, write all our code for us, create all our art for us, and become some sort of intelligent entity that can achieve anything. The real truth is that AI is nothing more than a mirror.

One of the earliest chatbots was ELIZA. It wasn’t an LLM at all, just a simple pattern-matching program, but it worked by essentially repeating the user’s words back at them. It was surprisingly successful, with users forming emotional connections to the machine.
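ELIZA’s trick can be sketched in a few lines: match a pattern, swap the pronouns, and echo the user’s own words back as a question. The rules below are hypothetical stand-ins, not Weizenbaum’s original script:

```python
import re

# Minimal ELIZA-style sketch. These reflections and rules are invented
# for illustration; the original program had a much richer rule set.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
]

def respond(message):
    """Echo the user's own words back inside a canned template."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."

print(respond("I feel lonely"))                # Why do you feel lonely?
print(respond("I am worried about my job"))    # How long have you been worried about your job?
```

The program contributes nothing of its own; every substantive word in the response came from the user.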

Modern LLMs are essentially an extension of that. Only they use vast amounts of data to generate a response, rather than parroting a user’s words back to them. That data is us. It’s the words written in every book. The messages sent over social media networks. The Reddit comments. The 4chan posts. The fascinating analyses of films and movies. The boomers’ racist rants about the good old days.

They are parroting our own words back to us, just in a more complex way. They are as right and as wrong as we are. They are a reflection of everything we’ve ever written. Of us. AI is nothing more than a mirror.

When people try to use LLMs for something other than generating plausible-sounding speech, the people are the problem. They expect the AI to be hyper-intelligent, but have you ever read the comments on YouTube? They’re in there too. The AI is only as intelligent as the thing it reflects, and humans are as often boastful, dishonest, and wrong as they are modest, truthful, and correct.

LLMs are designed to generate plausible-sounding speech, to repeat our own words back to us. Nothing more. Nothing less. Anyone who uses them for anything more is using them wrong.

At the end of the day, when ChatGPT is wrong, it’s because we are wrong.