“The question whether computers can think is not more interesting than the question whether submarines can swim.” Dixit the famous computer scientist Edsger Dijkstra. Others, however, apparently do find it interesting: there is quite some attention for the question of what AI actually does. Can you call that “thinking”? Or at least say that AI “understands” something, or can “reason”? The term Artificial Intelligence itself is controversial: it’s not intelligent at all! When it’s artificial, it’s not intelligent, and when it’s intelligent, it’s not artificial. Words matter.
A machine that mimics a human being?
He wanted to illustrate that the question isn’t particularly interesting, but Dijkstra’s comparison got me thinking. Of course submarines cannot swim! They sail, which is something completely different. Swimming involves specific movements of arms and legs, which is nothing like a propeller rotating in the water. (Similarly, the English term ‘sailing’ can be confusing, as it refers to both wind-powered and propeller-driven movement.)
But wait. What about planes? They ‘fly’ using motors and rotating propellers and such, just like the ‘swimming’ of submarines. For that type of flying we did not invent another word, although we could have called it sailing as well. Aviation did borrow from seafaring: you can see it in words like ‘airport’ and ‘boarding’ a plane.
We’re comfortable with physical activities like swimming and flying, but when it comes to thinking, questions arise.
The noble game of chess is the prime example of human creativity, character and mental power. Or at least, that is what we all thought, until Deep Blue defeated the world chess champion. And what about reading, writing and having a memory? We have been talking about computers in these terms for decades. As if they were human.
In short, there is room for discussion about whether machines can perform (or simulate) human behaviour. But let’s first look at it from the other direction: when we compare human activities to those of machines, what terms do we use to describe something as abstract as the human mind?
A human compared to a machine
In ancient times, knowledge was seen as light and not knowing as darkness. “I’m in the dark on this one.” Plato compared our reason to a charioteer who controls a chariot pulled by two horses: an honourable, light horse and a dark horse with lower desires.
Later, more complicated mechanical devices like clocks were developed. Descartes saw the brain as a clockwork that reasons in a logical way. An expression like “what makes you tick?” reminds us of this metaphor. Does that ring a bell?
The steam engine brought more metaphors. Even today, we say “full steam ahead” when we start something. We keep working until we “run out of steam”. And when something has gone wrong, we need to “blow off steam” so that we don’t “go off the rails”.
The electromechanical machines introduced in the mid-20th century led to phrases like “we are on the same wavelength”. However, someone could also be “off the hook”, and when that happened too often, you would be “on tilt”. Tilt, by the way, comes from the world of pinball machines, where tilt detectors were installed to prevent cheating by lifting the whole machine. Since then, ideas are “sparked”, and when an idea is too creative, there would be a “short circuit” in your brain.
Computers offered additional metaphors. “I have to process this”, or “let me update you”. People have to “reboot” sometimes. Sayings like “you have to delete your hard drive” are not so fashionable anymore, though.
Apparently, we have no problem comparing our own thought processes to those of a machine. But assigning human traits to a machine is another matter entirely!
A machine with human understanding?
There are researchers who think that language models actually have a model of our (mental) world. The latest development is to create a model of our whole world, not just language: world models. That is quite ambitious, but for word meanings, the current language models do a surprisingly good job. A technology called ‘word embeddings’ enables a machine to grasp concepts in such a way that, in many cases, it can produce quite logical outputs. (I have explained word embeddings in this blog.)
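To make that intuition a bit more concrete, here is a minimal sketch of what word embeddings do. The vectors below are made up purely for illustration (real models learn hundreds of dimensions from large text corpora), but they show the core idea: meaning becomes geometry, so similar words get nearby vectors, and simple arithmetic like king − man + woman lands near queen.

```python
# Minimal sketch of the word-embedding idea, with made-up 3-dimensional vectors.
# Real embeddings (word2vec, GloVe, the embedding layers inside LLMs) are learned
# from large text corpora and have hundreds of dimensions.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.15]),
    "queen": np.array([0.78, 0.10, 0.80]),
    "man":   np.array([0.10, 0.90, 0.12]),
    "woman": np.array([0.08, 0.35, 0.77]),
}

def cosine_similarity(a, b):
    """1.0 means the two vectors point in exactly the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should end up close to queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max(["queen", "man", "woman"], key=lambda w: cosine_similarity(target, embeddings[w]))
print(best)  # with these toy numbers: queen
```

No understanding is claimed here; the point is only that arithmetic on vectors can produce outputs that look meaningful to us.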
Other researchers take the opposite position. A well-known thought experiment is the Chinese room: a participant (who does not know Chinese) sits in a closed room and has been provided with a pen, paper, and a large manual with instructions. Messages in Chinese are delivered through a small opening. The manual describes which characters to write in response to which incoming characters. It’s a very large manual: every conceivable question is covered, including follow-ups. In this way, the participant can respond to Chinese questions without knowing what she is talking about. When the manual is large enough (compare: when the Large Language Model is large enough), an external observer will think that the participant understands Chinese – which is not the case.
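For the programmers among us, the room can be caricatured as a lookup table. This toy sketch is my own illustration (not Searle’s), and the Chinese phrases and ‘manual’ entries are invented, but it shows the point: the code only matches symbols to symbols, and the meaning of the messages is represented nowhere.

```python
# Toy caricature of the Chinese room: the "manual" is a lookup table that maps
# incoming messages to outgoing ones. Nothing here models what the messages mean;
# the rules and phrases are invented for illustration.
MANUAL = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather today?" -> "The weather is nice today."
}

def chinese_room(message: str) -> str:
    """Return whatever the manual prescribes; understanding never enters into it."""
    return MANUAL.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # looks like a sensible reply to an outside observer
```

A language model is of course not a literal lookup table, but the comparison in the thought experiment (“when the manual is large enough”) is exactly the point under discussion.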
A well-known critic of the AI hype, Emily Bender, has refined this experiment, concluding that true understanding is only possible if you have actually experienced the things you are talking about. The example in her article: if you have never seen a coconut or a catapult, you will not understand what a coconut catapult is.
All kinds of counterarguments can be raised against this, and it is nice to see that this discussion has been going on for ages. Interesting for academics, but what does it mean for everyday society?
A confused consumer…
I have compared AI to food before: the AI Act compared to food safety laws, and how using GenAI compares to not preparing your own food. Food, too, is very much influenced by words. The official reason for banning ‘meat-like’ words for vegetarian alternatives is that consumers would be confused. There is some lobbying involved, too.
Is it imaginable that there will be a similar lobby from Big Tech, to make sure that AI models are officially allowed to ‘think’? A telling sign is that many of the bigger LLMs now come with a ‘thinking mode’. We already have the AI Act, which mandates that an AI provider clearly indicates to the consumer that she is chatting with a computer and not a human being. In this case, the ‘confused consumer’ is a good reason for such an obligation.
… is dangerous
It’s not good for our egos when a bunch of silicon, metal and plastic appears to do the same thing we do. If our ego were the only problem, it wouldn’t be too bad. A confused consumer might not seem like a big problem either, but it can have serious consequences. Think of chatbots that talk to people with psychiatric issues and only say what the user wants to hear. The results can be dramatic.
The risk becomes real when people start to ‘humanize’ computers. The very first chatbot, ELIZA, made its users feel like they were having very personal conversations – they would ask others to leave the room while they were interacting. Humans have a tendency to quickly recognize all kinds of human traits in random data: pareidolia. Chatbot providers even seem to promote the humanization of their services: you can choose a ‘personality’, and they steer the behaviour of the chatbot with a ‘soul document’.
Another risk is the lack of trustworthiness of GenAI (more about that in another blog). In the context of this blog, it is noteworthy that a term has become customary for this: hallucination. It’s a poorly chosen word. Hallucination is perceiving things without sensory input, which is not what a chatbot does. A much better term is confabulation: inventing things that sound logical but are not true. Confabulation is a known feature of various forms of dementia. It is annoying that AI can be so convincing when it is confabulating. Humans are not always trustworthy either, but they are rarely so convincing.
Nobody really cares whether submarines swim or not. It doesn’t really matter. Whether a computer thinks or not doesn’t really matter either. But consumers who become genuinely confused – that’s something we definitely want to avoid.

