
Coding a super-intelligence

  • Chris Gousmett
  • Sep 8
  • 8 min read

Updated: Sep 16



Imagine, if you will, a mouse with sophisticated computer skills, writing code for an artificial intelligence programme with abilities which are intended to be at least equivalent to those of an average human being. How could a mouse conceive of such an intelligence with abilities which are far above those it possesses itself, and a world-experience it cannot even begin to grasp?


If this seems improbable, it is because it is.


Consider the reverse scenario: a human being writing code for an artificial intelligence programme with abilities which are intended to be at least equivalent to those of a mouse (a humble and modest aspiration, unlike many of the claims made for AI). The difficulty is not that we lack the skill to write such a programme. It is that we cannot conceive of what that programme entails, that is, how to code for an experience of the world as a mouse experiences it, rather than as a human would imagine that experience to be if a human became a mouse. How does a mouse experience the world, and what mental abilities does it have which enable it to seek out food and mates, to evade predators, and to navigate its way through a world full of incomprehensible objects and machines? How would we even begin to determine that?


But it is suggested that human beings can perform a feat equivalent to the first example above, namely, coding a computer which has intelligence far greater than that of an average human being. We are supposedly then on the path to developing super-intelligence, which will be able to solve all the problems of the world and enable us to enter into an age of peace and prosperity for all, with medical problems a distant memory, and ageing and physical decline overcome. We will no longer need to work since machines will do everything that is needed. Immortality in paradise awaits.


If this seems improbable, it is because it is.


Note that the purported age of universal peace and prosperity arising from new technology has often been promised in the past – with the development of steam power on land and at sea, of electricity, of universal education, of the growth of science, of synthetic materials and the mass production of consumer goods, and many other advances. Recall the famous quote attributed to William Gibson: “The future is already here – it's just not evenly distributed.” The current pattern is for any increased prosperity arising from technology and rising productivity to accumulate upwards, not outwards, so that a few people end up owning obscene amounts of wealth and controlling the lives and livelihoods of millions of others, who must survive on whatever salaries the billionaires deign to allocate.


So regarding any claim that technological advance will mean prosperity for all, we can ask sceptically, what is going to change from our current situation so that the already obscenely wealthy tech billionaires will suddenly decide to distribute that prosperity? They could do that now, but they don’t. So why would we expect that to change?


So much for the expectation of a social paradise to come.


Secondly, we need to look at the basis of this promised paradise, that is, the arrival of super-intelligence. There’s lots of debate about what that looks like, but there is enough agreement that it is possible for it to be held out as an achievable goal. Is this scenario likely?


Let’s look at how the creation of this super-intelligence is going to be achieved.


It is an incremental process which develops quickly into exponential growth. Firstly, we need to build a computer which is intelligent enough to achieve consciousness. This is assumed to be quite easy – we just need to build faster and more sophisticated computers than the ones we already have, computers which run the latest version of AI (artificial intelligence). Once we have designed and built such a computer system, it can then build for us the next level of computer intelligence: that is, we code the first computer so that it will design and build one with an even greater level of intelligence, that one can then design and build an even greater computer, and so on, each step taking a shorter and shorter time, so that exponential growth is achieved and we quickly arrive at a computer which is vastly more intelligent than us – super intelligence – which will solve all the problems of the world for us.
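
To make the claimed dynamic concrete, here is a minimal toy sketch in Python of the recursive self-improvement scenario as its proponents describe it. Everything in it is an illustrative assumption rather than a description of any real system: the starting "capability" of 1.0 standing in for a human-level machine, and a fixed improvement factor applied at each generation.

```python
# Toy model of the recursive self-improvement scenario described above.
# Assumption (not a measurement): each generation designs a successor whose
# "capability" is its own capability multiplied by a fixed factor.

def run_chain(start: float, factor: float, generations: int) -> list[float]:
    """Return the capability of each generation in the chain."""
    capabilities = [start]
    for _ in range(generations):
        capabilities.append(capabilities[-1] * factor)
    return capabilities

# The optimistic scenario: every generation improves on its builder (factor > 1),
# so capability grows exponentially.
for i, c in enumerate(run_chain(start=1.0, factor=1.5, generations=10)):
    print(f"generation {i}: capability {c:.2f}")
```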


There’s only one problem, one really big one, that is. There are lots of lesser problems, but it’s the big one that concerns us, since if that is unsolvable, all the other problems fade into irrelevance.


Firstly, is it possible to build a computer system (the necessary hardware plus software enabling the generation of conscious AI) which is equal to human capabilities? Progress so far is less than impressive. We have not yet achieved a computer which is undoubtedly conscious, that is, one which everyone can agree has achieved consciousness (in itself no small feat – both achieving it and gaining general agreement that it has been achieved), so we are not even on the path towards super intelligence. It is possible that we are aiming at a goal that is impossible to achieve, despite the billions of dollars being spent on it. So, is it likely that the new age of prosperity will arrive as a result of this enormous expenditure? If not, are we actually spending money on a mirage, money which would be better spent on providing housing, medical care, food, employment and so on for those who are going without? What is the moral imperative here?


But let’s grant that it might be possible for us to build a computer AI which is conscious. Is it going to be as intelligent as an average human being? There are problems with defining intelligence and how it is measured, but let’s hold, for the sake of argument, that it can be defined and measured. So, is this computer as intelligent as an average human being?


If it is true, as philosophers have argued, that we cannot express everything that we know, then there is more to our thinking than can be captured in computer code attempting to replicate it. That presents us with a problem: how can we code a computer to know all that a human being knows? And if we cannot do that, then no computer system we build will have greater intelligence than we do.


The consequence of that then is rather perplexing.


If no computer we build can ever reach average human-level intelligence, then neither can we design a computer with the ability to design and build a computer which exceeds the intelligence of that first computer. In fact, any computer we build must be of lesser intelligence than us, and any computer which that one builds must be of lesser intelligence still, and so on, until we quickly arrive not at super intelligence but super stupidity. At some point the level of intelligence is insufficient for the computer to build the next generation at all, and we have reached not super intelligence but stalemate. Each level of computer must be of lesser ability than the one which built it. Just consider the angst among ChatGPT users who rejected the GPT-5 model and demanded to go back to an earlier model which they considered superior. Why then would we expect increasing intelligence at each level? It is impossible to achieve if we cannot express all that we know; nor then could an AI express all that it “knows.” An ever-decreasing spiral is inevitable.
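
The same kind of toy sketch, with a per-generation factor below one (again a purely illustrative assumption, including the arbitrary threshold), shows the ever-decreasing spiral described above: capability shrinks geometrically, and the chain stalls once it falls below whatever minimum is needed to design a successor at all.

```python
# Toy model of the "ever-decreasing spiral": if each generation can only build
# a successor somewhat less capable than itself (factor < 1), capability decays
# geometrically, and the chain stalls once it drops below the (assumed, arbitrary)
# minimum capability needed to design the next generation at all.

MIN_CAPABILITY_TO_BUILD = 0.3  # illustrative threshold, not a real measurement

def run_decaying_chain(start: float, factor: float, max_generations: int) -> list[float]:
    capabilities = [start]
    for _ in range(max_generations):
        current = capabilities[-1]
        if current < MIN_CAPABILITY_TO_BUILD:
            break  # stalemate: too limited to design a successor
        capabilities.append(current * factor)
    return capabilities

for i, c in enumerate(run_decaying_chain(start=1.0, factor=0.8, max_generations=20)):
    print(f"generation {i}: capability {c:.2f}")
```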


If we assume that we can learn to build a computer intelligence that exceeds our own, what would that look like? We have to be able to comprehend it, to form detailed concepts of it, in order to know what it is we are seeking to code for. But if we can conceptualise such a computer intelligence, have we not then already reached the level of intelligence and consciousness of such a computer? If the scenario before us is real, then we must be able to design and build a computer with the intelligence and consciousness necessary for it in turn to design and build a still more intelligent computer, which in turn has the intelligence and consciousness necessary to build one more intelligent again, and so we will have achieved exponential growth in computer intelligence and consciousness. Paradise awaits!


But consider this: if we can design and build a computer with intelligence and consciousness at least equal to our own, which in turn can do the same, and that one can do the same, then what we have is not increasing intelligence at all.


That is, if it is possible for each one of a series of computers to design and build others greater than itself, then it must be possible, indeed necessary, for us to design a computer that can design a computer that can design a computer, and so on. But if that is possible, it means we already know how to build the final computer in the series ourselves, without having to set up a scenario in which increasingly capable computers build their even greater successors, because we have to be able to design the code which designs the code which designs the code. If we can design the code to design the code, and so on, then we already know what that ultimate code is, since we have designed the design for the design.


To put it another way, in order to instruct the computer to design and build a more intelligent successor, we have to know what to tell it to do, and what to tell it to tell its own successor, and so on.


Why not skip the intermediate steps and build the super intelligence directly?


Except that this is not possible, since we cannot even replicate the intelligence and consciousness of a mouse. And I contend that is something we will never be able to do, since we have no way of coding a world-experience as it is experienced by a creature whose inner mental life is opaque to us, let alone coding a world-experience of our own.


One final point. And it’s a biggie.


We have no conclusive evidence that human intelligence (with all the problems of defining and measuring it) is increasing. Is this because there is a functional limit to the increase in human intelligence? To clarify: why is it that the descendants of extraordinary geniuses do not have at least the same, if not greater, levels of genius themselves? Often the children of extraordinary people are fairly ordinary themselves. Intelligence does not increase automatically with each generation. Why is that? Is it because the gifts of God are distributed as he wills, rather than on the basis of parentage?


We have different gifts, according to the grace given to each of us. [Romans 12:6]


There are different kinds of gifts, but the same Spirit distributes them. There are different kinds of service, but the same Lord. There are different kinds of working, but in all of them and in everyone it is the same God at work. [1 Corinthians 12:4-6]


Now to each one the manifestation of the Spirit is given for the common good... All these are the work of one and the same Spirit, and he distributes them to each one, just as he determines. [1 Corinthians 12:7, 11]


If it is the case that human intelligence cannot be replicated in code, then we do not need to attempt to build computers which can develop super intelligence. And the money saved, which would otherwise be expended on chasing an illusion (or delusion, perhaps), can be put to better use in a world desperately needing improvements in health, housing, fresh water and sanitation, medical care, democratic institutions and education, plus much more. That is more likely to result in an increase in average human intelligence, and an increased ability to address the many problems which face us.


Rather than seeking to resolve an artificial problem (how to build a super-intelligence) which is unnecessary and if I am right, impossible, we should focus on resolving real-world issues now.


The tech billionaires have the money to make much of this possible.


Let’s ask them to help us build a better world now rather than for them to continue chasing a dream at our expense.

