This article also appears on my Patreon blog, which you can find here. Everything I put on Patreon is free, although donations in support of my writing are always appreciated 🙂
In 2018, the American neuroscientist and philosopher Sam Harris and the Canadian psychologist Paul Bloom wrote an essay for the New York Times entitled “It’s Westworld. What’s Wrong with Cruelty to Robots?”
This is a blog post about its title.
The answer to the question it poses is in fact staring us in the face before we even start. The phrase “cruelty to robots” is a contradiction: the question is meaningless because it isn’t really about robots and cruelty at all; it’s about cultural heritage and harm. Cruelty to robots is harm to cultural heritage. It is the spectacle, or ritual, of the destruction of something we care about: our quest to become gods and remake existence in our image.
It’s the same thing as tearing down statues, as demolishing tower blocks, as letting neo-classical façades crumble.
What is a robot, and why would we care about cruelty to it?
We’re not talking about just any robot here; we’re talking about humanoid, human-like robots, and therein lies an uncertainty. When we see one, we recognise ourselves, but we also recognise a collection of mechanical parts, and this introduces dissonance. That uncertainty is the basis of our whole fascination with robots in art, literature, film and music. We don’t know whether it’s “human”. Not human, but human enough.
This is, of course, an indictment of the limited (i.e. specialised) ways our brains work. The possibility that this thing could be conscious – and conscious at our level, or greater, rather than at the level of an ant, say – is lent greater weight by its being humanoid, by its having a recognisable face.
And not only that: the frisson we feel at considering the human-like or human-plus consciousness of a Westworld-style humanoid robot is heightened by our frequent cultural references to such things. In fact, Westworld is precisely to blame. We can imagine shaking the hand of such a robot, sharing a car ride, making love with it, getting into a fight… even killing it. Possibly all without knowing it was artificial. And for that matter, what does “artificial” even mean? It’s a very blurry boundary, and this culturally exciting ambiguity, which has provided the dramatic tension in many a Westworld, has ingrained itself in our culture.
We’re ready to believe that a humanoid robot shares exactly the same fear and love that we do.
But what this really means is that we’re not in the realm of robots at all. A robot is mechanical, yes, but that’s all it is, or principally what it is: being mechanical is its defining trait, so we need a higher-order category. When you are talking about an entity indistinguishable from a human (without resorting to cutting it open), it should probably be treated as a type of human first and a mechanical assemblage second. “Robot” should at that point be a subcategory. This superordinate category also depends on a second ingredient, which I’ve already touched upon. We’ll come back to it, however.
Let’s instead imagine this from the other way round, starting with your granny’s pacemaker and replacement hip. These haven’t turned her into some bionic changeling who can no longer be recognised as human. She’s the same cuddly rogue she was before. But what about a person whose arms and legs are replaced with bionic ones that do the same job? Imagine we’re years in the future and these limbs are sufficiently advanced not just to be undetectable to bystanders, but to provide an identical sensory experience to their owner too.
Alright, let’s go further. A manufactured kidney? A new heart? What about the brain? Ah, that’s a bit more difficult. Since it’s not possible to replace a person’s brain with a manufactured one, and still less plausible to imagine the change being undetectable to others (especially given that a person’s personality can change radically after even a knock on the head), we’re going to have to use our imaginations.
So imagine a theoretical world in which you had a brain injury, let’s say, and had your entire brain replaced with a lab-grown copy, and in which the people around you could perceive no difference in you. Would we still say you were human? I think we surely would. On what grounds could anyone say otherwise?
If we now go the whole hog and imagine a person who lives alone, with no immediate family, having every part of their physical being exchanged for a copy made from metal and plastic – one step at a time, a limb here, an internal organ there, each operation conducted in secret, and the patient carrying on with their life afterwards without a soul around them realising anything has changed – well, we would find we had circled right back to the Westworld humanoid robot. Like the Westworld robots, this person would be composed entirely of manufactured parts, yet would pass undetected in human society.
And although they would have memories of life before the operations, of a life embodied in flesh and blood, and would have memories of discussions preceding the operations, and of the hospital stays, and of the adaptation to the new parts – and although those memories would be “real” – they would have arrived in the brain by being copied from the old one. There is nowhere else these memories could come from and nowhere else they could go. The fact that the events which caused those memories to be written “really happened” is of fairly little importance. No one’s memory is fully accurate.
The things you remember didn’t happen exactly as you recall, or as someone else may recall.
I mentioned above that a second ingredient is required beyond being externally indistinguishable from a human. Here, we arrive at the doorstep of the much-discussed “hard problem of consciousness”. And in the hard problem lies the frisson that drives our cultural fascination with humanoid robots.
The hard problem was formulated by the Australian philosopher David Chalmers in a 1995 paper, “Facing Up to the Problem of Consciousness”, published in the Journal of Consciousness Studies. It goes as follows:
…even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?
In other words, why some organisms don’t merely carry out the complex but philosophically “easy” activities of perception, filing, reporting and so on, but also accompany those tasks with a subjective experience, is a mystery. For example, why does being hungry feel like what it feels like to be hungry, as opposed to what it feels like to be thirsty?
Is it because being hungry involves one neuro-physiological system and being thirsty involves another, and there has to be some kind of subjective experience in order to motivate behaviour? Well, no. The second part is spurious, and the first part is consequently irrelevant.
Subjective experience is not, in fact, necessary to motivate behaviour. A tree grows upwards without (as far as we can tell) finding growing upwards pleasurable or staying small frightening. A refrigerator is able to “decide” when to turn on and off independent of any subjective feeling of being too hot or too cold. Even you: your heart is beating, your body carrying out the complex business of keeping you alive, without any subjective experience of those systems.
Your heart doesn’t beat because you like the way it feels. So why are these necessary systems devoid of subjective content, when others are not?
Probably because those bodily systems do not directly need you to take action in the world. Being hungry motivates you to find food, but your heart needs to beat irrespective of how you’re acting in the world, so it doesn’t need a subjective experience.
But ultimately, we cannot say. And therein lies the problem. Not only can we not know whether a given robot is experiencing something subjective – a thing it feels like to be that robot – but we have no way of knowing whether any such experience might be anything like a human’s. All we can do is assume, and the more humanoid a robot appears, the more we humans will presuppose there is a human consciousness in there.
Which is strange in a way. I mean, you don’t suppose the image of a human on a television screen has any subjective experience, even if it looks very much like a person. Yet if you scrape your car, you feel sorry for it, like you’ve let it down.
The reason for this is the same reason we have the concept of cultural heritage. The car is a constituent of your culture, of the life you craft and convey, and it has a more general importance in Western culture because of the changes it wrought as a technology, an industry and a re-configurer of places. When you damage your car, you are damaging the part of yourself that you project onto it. It is the same with humanoid robots.
So, what’s wrong with cruelty to robots? The lack of analysis of robots in terms of cultural heritage, that’s what.
If a humanoid robot has a human-like subjective experience, and if we can know that it does, it ceases to be (just) a robot and becomes a human, or a human-plus.
If we are, however, talking about robots that are robots because they’re definitionally distinct from consciousness-having entities – and are not, therefore, those things first and robots second – we must group them with microwave ovens, paintings, cars, books and clothing.
Tell me this: Have you ever torn a book in half? Not ripped an old phonebook in two in a fit of rage, I mean, but a favourite childhood book? Have you ever torn the cover off a much-loved novel and tossed it in the bin? I would guess not. But you’ve probably torn up thousands of flyers from the front door mat. What’s the difference? You weren’t going to read that novel again either.
What about a teddy bear? When your mother died, and after the funeral you found her faded old teddy from the 1930s at the top of a cupboard, did you bin it? Or did you view it as something like an eternal and living vessel that housed and channelled the thing that made her her? And after your father passed, leaving that MG he’d owned since the 70s and had been meaning to restore – the car for which he’d had so many plans, the friends he was going to make on the trips that never happened – did you sell it to a scrapper? Or did you make sure it went to a good home, to an enthusiast who would protect its legacy?
It’s all cultural heritage: the treasured classic, the old teddy, the humanoid robot. All meanings within a web of meanings.
The microwave? Well, there’s a reason there’s no such thing as cruelty to microwaves.