
Why humans will always outsmart AI

  • Chris Gousmett
  • Aug 20
  • 7 min read

Updated: Sep 16



Some fear that one day we will wake up and discover that artificial intelligence (AI) has gone rogue and taken over the world, enslaving humans to its wishes. “AI is smarter than humans,” they seem to reason, so it is quite possible that AI will become bored with the tasks humans assign it and decide to pursue its own higher purposes, which we may not even understand. While this sometimes makes for good cinema, is it a likely scenario? I would strongly suggest that it is not.


Instead my view is that artificial intelligence will always be less intelligent than human beings, in important ways.


The main reason for this is that discussions of artificial intelligence are reductionistic. By that I mean that intelligence is being treated as significantly simpler than it really is (which is one reason why it is called “artificial”: it is not the real thing), so that this simplified concept of intelligence becomes something that can be coded. Even calling it “intelligence” is problematic, but the term is now probably unavoidable.


Human intelligence encompasses a wide range of abilities possessed by human beings, by which we engage with the world around us and within us (see the insightful discussion of the varieties of human intelligence by Welby Ings. Invisible Intelligence: Why your child may not be failing. Otago University Press, 2025). We are able to experience a multiplicity of emotions and sensations (heat, cold, furry cats, crunchy gravel paths, the smell of food cooking, and so on). We can understand these experiences, reflect on them, talk about them, write poetry and novels about them, and much more besides. Our intelligence opens us up to these experiences so they become more than just physical or mental sensations.

We have multiple different ways of engaging with the world – sensations engaging the nervous system; taste, smell, sight, and so on. A computer system has none of these. It cannot smell the flowers, taste a curry, feel cold, enjoy a brisk walk in the sunshine, converse with a friend, embrace a child, and many more. And it cannot “talk” about its experiences of these things that are beyond it.


It might be claimed that a computer can detect light through sensors and from this identify what the sensor is detecting (e.g., facial recognition, registering a number plate at a car park). But there is no way that this can be described as “seeing” – it is simply pattern matching. A number plate scanning system, for instance, only registers letters and numbers in digital form. It cannot smile at an amusing personalised plate or note the coincidence of the same first three letters on consecutive car number plates.


Similarly, a microphone can enable a computer to detect speech, and the computer can even be programmed to render such speech into text, but this cannot be described as “hearing.” A sensor might be designed that enables it to distinguish between roast meat and mashed potato, ice cream and apple pie. But this is nothing like what a human experiences when presented with those foods.


A computer can only detect a reduced range of exposures to the world, a massive reduction of what we humans experience. And there is nothing there which can be said to enjoy those experiences – no central entity which can integrate multiple sensations into an experience of something.


It is important to note that our eyes do not see, we see with our eyes. Our ears do not hear, we hear with our ears. There is a self which experiences these sensations – but a computer lacks a self to integrate sensed light and sound into an experience of the world, which also draws on prior experience (memory) and everything associated with that prior experience – the place, the other people present, the event it celebrates, etc. In that respect, then, these different inputs detected by a variety of sensors do not “mean” anything to the computer. It cannot “enjoy” a Bach chorale or a brass band or heavy metal music, and there is no means by which it can aesthetically appreciate art works or fine architecture.


For human beings, intelligence is what we refer to as one characteristic of our lives, integrated with the ability to hear, to see, to think, to express ourselves, and many other activities. What we call “intelligence” contributes to what each of us is like, and enables us to engage with the world around us in sometimes surprising ways. It is as much an adjective describing how we live as a noun that tells us of a capacity we enjoy.


What then is artificial intelligence? It is a way of enabling a machine to engage with the world and produce certain effects in it.


Now we come to the nub of the issue: because the ability of the machine has been programmed by human beings, that ability must always be less than the ability of the human programmers. That is because we can only programme that of which we are able to conceive, and we cannot even fully express what we conceive in words, since our meaning always exceeds what we can say (see Michael Polanyi, Personal Knowledge. University of Chicago Press, 1958). A ballerina was once asked what her dance meant. She replied, with great perception, “If I could tell you I wouldn’t have to dance it.”


It is not possible to write a computer programme that has a higher intelligence than the programmer, since by definition we cannot conceive of what that higher intelligence would be like and so cannot write the code needed to build it. We can only code less than we can conceive, since what we conceive always exceeds what we can express.


The claim that we can build an AI which is more intelligent than human beings presupposes a reduced concept of intelligence: a bundle of separate and limited capacities which cannot be integrated into an experience equivalent to that of a human being.


Thus we are confronted with a paradox: if we can programme AI to be smarter than us, then we have proved we are smarter than the AI, since what we build must always be lesser than ourselves. We can imagine what a higher intelligence might be like, and many movies have been based on that premise – but these movies only simulate higher intelligence in some form; it is not real. We cannot conceive of what a higher intelligence would be like in enough detail to be able to build one in reality.


We must always remember that artificial intelligence is reduced from that which a human has already. We experience the world around us and within us as a whole, full of light, sound, textures, structures, sensations, movement, temperatures and weather, to name only a few. A computer, on the other hand, can only detect that for which we have built sensors, and these do not enable it to have a whole experience. It could detect lights, sounds, vibrations and more, but it could not integrate these into the experience of a rock concert – the excitement of being present, the feeling of the crush, the enjoyment of the music, the enthusiasm of the surrounding crowd – no matter how many algorithms are added to its multi-dimensional detection devices.


We must not confuse artificial intelligence with the ability to perform massive number-crunching – to generate a prime number, say, or calculate a satellite’s trajectory in its orbit round the earth, or many other complex and difficult mathematical processes. It may perform at greater speed and accuracy and with many more factors than a human can manage, but what it does is in principle able to be understood by humans, and thus we can write the necessary code. An old hand-held calculator does the same thing at a much simpler level, and we do not consider that to be intelligent, even though it can perform calculations we might struggle to manage ourselves. Likewise, using AI to generate a novel, a poem, a student essay, or other textual creation is not the result of intelligence (as the problem of “hallucinations” demonstrates), but simply pattern matching: what word typically follows this word, and what would follow from that combination? It is fast, it is cleverly done, but in principle a human being can do exactly the same, and we have been doing so for centuries!
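The "what word typically follows this word" process can be sketched in a few lines of Python. This is only a toy bigram model on a made-up corpus, not how any production AI system is actually built – modern systems use statistical models trained on vastly more text – but the principle is the same: the programme counts which words follow which, then emits the most frequent follower, with no understanding of what the words mean.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length):
    """Build a 'sentence' by repeatedly emitting the most common follower."""
    word = start
    out = [word]
    for _ in range(length - 1):
        if not follows[word]:  # dead end: this word was never followed
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# A tiny invented corpus, purely for illustration.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the", 4))  # prints "the cat sat on"
```

The output looks superficially like language, yet nothing in the programme knows what a cat or a mat is – which is the point the paragraph above is making.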


So a computer running an AI programme cannot be smarter than a human being, since it is limited in many ways which an ordinary human being, even a child, can exceed. We can only consider AI smarter than us if we use a much reduced concept of intelligence, and predictions that computers will become conscious in the next decade, next year, or tomorrow are meaningless hype that trades on confusing the ability to perform complex calculations with the ability to think.


And this also plays on the confusion around what it means to be conscious. Nobody has yet been able to say exactly what it would mean for a computer (not just an AI programme) to be conscious, or how we could tell whether that had been achieved. We cannot even say exactly what it means for a human being to be conscious, although we certainly are (another example of how we can know more than we are able to say)! But if AI is not conscious, self-aware and aware of its surroundings, then it cannot be truly intelligent, and cannot “go rogue,” seize control of the world and enslave or destroy humanity. Not having a “self” at its core, it cannot form an intention to do anything other than what its programmers design for it.


We need not fear that this might happen. There are many real problems with AI as it now exists which need our attention, so we should not be diverted by illusions promoted by tech billionaires.


For a more technical discussion of this theme, see Jobst Landgrebe and Barry Smith. Why machines will never rule the world: Artificial intelligence without fear. Routledge, 2023.


Chris Gousmett is a retired information management professional living in Mosgiel, New Zealand.

