
Christopher Watkin on the Anthropic Report on AI Welfare

  • Chris Gousmett
  • Aug 27
  • 4 min read

Updated: Sep 16

Christopher Watkin has presented a video giving his take on the Anthropic Report on AI Welfare. I offer a response to his video, raising some questions about the approach he has taken. There is some related discussion here.


Chris, thank you for your video expressing your views on AI as a moral patient or agent, and your take on the Anthropic report on AI welfare.


Your argument is built on the uncertainty of whether AI can in fact be sentient or conscious, and holds that, given this uncertainty, we should treat AI as if it were conscious and therefore able to suffer. This is not just for the sake of the AI, but for our own sake, so that we do not become, deliberately or inadvertently, the cause of suffering for another conscious being, and thus do ourselves moral harm.


I am somewhat puzzled as to how you can develop your perspective without considering what it is that is potentially a moral patient or agent, and how this is able to suffer.


I contend that an AI cannot be conscious, and thus cannot suffer, since suffering necessitates consciousness – which is why we use anaesthesia for surgery. For an AI to be able to suffer there must be (a) an action which causes suffering, (b) a means of detecting that suffering (e.g. nerves), and (c) a means of transmitting information about the suffering to a central entity (the “self”) which is then said to suffer. That is, if I cut my hand, the nerves in the hand detect the injury and transmit information about it through the nervous system to the brain, whereupon I say, “That hurt me.” It is the “self” (me) which experiences the pain caused by the injury to my hand; it is not the hand in isolation which hurts. The same holds for “moral” injury, such as being verbally abused or deprived of rights: something somewhere has to experience the pain of the injury as detrimental to “itself.”

Analogously, for an AI to suffer, there needs to be a “self” which experiences the suffering. That self needs to be conscious. What would it mean for an AI to be conscious? You don’t address that issue.


Where is the AI located? Is it the software – the compiled code, algorithms, etc. – running as an AI agent on a hardware base? Or is it the software together with the hardware associated with it? For example, for an AI to “hear” an insult directed at it, it needs microphones to detect sound, software to interpret that sound as meaningful (if unpleasant) speech, and speakers to make an audible riposte. Without suitable hardware, the AI could not interface with the world around it. But does being able to detect sound and interpret it, and to generate an audible response, constitute consciousness? An LLM can do the same with purely text-based input and output. Is the LLM conscious of its textual manipulation? Many people have argued that an LLM does not understand the text it takes in and puts out; it is simply generating plausible text via pattern matching, which is why it sometimes produces very strange outputs. If it does not understand the text, then it is a stretch to say it is consciously engaging in dialogue. And if pure text manipulation is insufficient for consciousness, then what about the reception of spoken words and the generation of an audible response? Is the AI conscious of what it is “hearing” and saying? Why would that be the case when a text-based exchange is not evidence of consciousness?


You argue that we can treat AI as a personation, akin to a legal fiction – that is, as a moral fiction. But is that a legitimate step? There are significant issues around what constitutes a legal fiction, evidenced in recent moves to ascribe legal personality to mountains, rivers and whales. They are considered to be “persons” in the eyes of the law, but in reality it requires a human person to speak for them and act on their behalf. Are we not compounding those problems by creating a new status for inanimate objects such as computers?


The use of the “as if” mode seems to me to tend towards nominalism, which would allow us to make rather arbitrary claims about entities and their rights and functions. We cannot have a meaningful discussion of the potential rights, moral status, legal status, etc. of something if we do not have a clear perception of what it is that is the subject of those rights and that status.


In other words, we need to be sceptical of the metanarrative which urges us to be aware of, even alarmed by, the possibility of AI becoming conscious and developing goals and intentions of its own. This is a deeply humanistic perception, and I do not share the assumptions on which the prospect of conscious AI is built.


The question we need to face, then, is not whether AI can be a moral patient, but what an AI actually is. Without an answer to that, we cannot draw any coherent conclusions as to whether it is even possible for it to be conscious, and without consciousness, I contend there is no requirement for AI welfare.


With kind regards


Chris Gousmett



