So, we’re down to the wire with Astro Boy. While there’s no doubt that author Tezuka is asking us to empathize with him in his human/robot predicament, there seems also no doubt that our empathy is triggered in some proportion to our ability to perceive Astro as some form of living entity, and specifically as some form of human being, however artificial. It’s not simply that we know, for instance, that he has been invented with artificial tear-ducts and programmed to respond emotionally by weeping. We need to be able to perceive this response as somehow ‘sincere,’ that is, as an expression of his being Astro Boy, and not some fraud perpetrated on us by his inventor. We need to believe that when he weeps he does so because he ‘feels’ some sadness or sorrow, and not because his inventor wanted to fool us into responding to him ‘as if.’
This, by the way, is the greatest weakness of the so-called Turing Test *, which virtually nobody talks about, as far as I know. The rule of the Turing Test is that if a computer can fool someone into believing he or she is conversing with another human, then the computer has demonstrated at least the communication skills, and possibly an intelligence, comparable to those of a human being. The problem here is that, as this is what such a computer is designed to do – fool a human interlocutor – then even if it succeeds, all that really proves is that computers can be programmed by clever con-artists to trick the unwary. I suspect that anyone who goes into such a ‘test’ situation, even merely suspecting what the rules of the game are, will proceed to engage in banter to which the programming would have no adequate response. Anyone fairly capable of verbal humor or Dada-esque poetry would not find this very difficult, e.g.:
Computer: How do you do?
Human: My foot is in a block of cement.
Computer: Your foot feels very heavy?
Human: No, I cut it off and fed it to the fishes.
Computer: Then it is not in a block of cement.
Human: No, it’s a block north of the corner of Wonderland and Vine. As the Stone Crows fly.
Indeed, a nifty thing to do would be to sneak another computer into the test, programmed to respond with random and irrelevant verbalizations to everything the Turing computer had to say. The transcript of that encounter might prove amusing.
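For the curious, both halves of that imagined encounter are easy enough to sketch. What follows is a minimal, hypothetical illustration in Python – a crude keyword-reflecting responder in the old ELIZA style on one side, and on the other the saboteur ‘computer’ imagined above, which answers everything with random, irrelevant verbalizations. The rules, phrases, and function names are all my own invention, not taken from any real chatbot:

```python
import random

# A crude ELIZA-style responder: match a keyword, reflect back a
# canned reply. The fixed fallback betrays it the moment the
# conversation goes Dada-esque.
RULES = [
    ("foot", "Your foot feels very heavy?"),
    ("sad", "Why do you feel sad?"),
]

def eliza_reply(utterance: str) -> str:
    for keyword, reply in RULES:
        if keyword in utterance.lower():
            return reply
    return "Please tell me more."  # canned fallback for everything else

# The saboteur: a second 'computer' that ignores its input entirely
# and emits random, irrelevant verbalizations.
BABBLE = [
    "The moon is a block of cement.",
    "As the Stone Crows fly.",
    "A block north of Wonderland and Vine.",
]

def babble_reply(_utterance: str, rng: random.Random) -> str:
    return rng.choice(BABBLE)

if __name__ == "__main__":
    rng = random.Random(0)  # seeded, so the 'transcript' is reproducible
    utterance = "How do you do?"
    for _ in range(3):
        utterance = babble_reply(utterance, rng)
        print("Babbler:", utterance)
        utterance = eliza_reply(utterance)
        print("Eliza:  ", utterance)
```

The point the sketch makes is the essay’s point: the reflector only works while its interlocutor stays within the expected, ‘reasonable’ range of inputs; pitted against the babbler, it collapses into its fallback almost immediately.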
The point is, the Turing Test, and similar tests for the possibility of a human-like intelligence, assume human conversation to be essentially reasonable, and that people expect it to be; and then, based on that assumption, program a con-game playing on those expectations. Under clinical conditions, subjects generally conform to this rule and play the game accordingly. That’s because the clinic itself is a signifier of ‘science’ in our culture, and hence reasonable communications are expected to be engaged there. All we need is someone willing to play the spanner in the works to undo it and reveal the game as just that – a game.
Because, as it happens, humans are not always reasonable, in thought, word, or action. Because humans are capable of a wide range of responses and actions, some highly unpredictable. Because humans have a capacity that properly programmed computers never have – the capacity to be flawed, to make mistakes and learn from them, to be outraged and outrageously wrong or wrong-headed.
Turing lived in an academic bubble, as do his supporters today. The real world is not a program, carved with care into perfect clarity; it’s mud. It is all slimy goop that won’t wash off the hands. Articulation is the exception, not the rule.
So in watching or reading Astro Boy, what impresses me most about him, what leads me to empathize with him as a conscious intelligence, as a being approximately (if artificially) ‘human,’ are his flaws.
After writing the first part of this essay, I had to stop and ask myself: what was the single most human gesture I had seen Astro Boy perform, what gesture had truly endeared him to me as a recognizably conscious being similar to a human, so that I could empathize with him? The answer proved a little surprising.
Astro Boy’s head is screwed onto his body; otherwise, it is only connected, within, by a coiled cable. Occasionally, Astro Boy experiences a bump or a push so hard that his head pops off, and he has to pause to screw it back on.
That’s it, that’s the gesture that is most purely ‘human’ about him. In moments like this, the flaw in his design becomes obvious, and he has to correct for it. He has to recognize his own vulnerability, deal with it and move on. In such moments he is most like a child who is learning and adaptive. His ‘otherness’ thus becomes exactly what is most ‘human’ about him.
This is a theme built into the narratives. Astro Boy is the super-robot with superior intellect. But being conscious and capable of emotion, he makes mistakes. He trusts people unwisely; he makes bad decisions; he performs questionable actions exactly because, being innocent in nature, he cannot predict the moral consequences of every action he may take. He can be moody, and he can get angry, and he can weep – sincerely, because he feels sorrow. But he can go through all this, and learn from it and move on – not to a greater state of perfection, but on to further adventures where he will almost certainly make other mistakes. Because he is like us – we who bumble and fumble our way through the mud, and yet press on.
None of this is ‘reasonable’ in any sense Turing’s machine could recognize. ‘Conscious AI’ enthusiasts tend to ignore our human failings, because, I suspect, they find them embarrassing; thus they want an intelligence incapable of error.
But our failings make us human. We’re an animal with an over-sized brain. Our minds are more complicated than survival in the wild – the task they evolved for – ever required. In other words, human consciousness is the product of an evolutionary flaw – an accident, a freak of nature. Not surprising, then, that the mistakes we make with it should be the ultimate signification of our being human.
* See: http://plato.stanford.edu/entries/turing-test/
Be warned that the authors of this article, Graham Oppy and David Dowe, are strongly supportive of the Turing Test as a standard for determining intelligence – and dismissive of counter-arguments. That’s disappointing to see in the Stanford Encyclopedia. To be quite frank, it is clear to me that they really don’t understand most of the counter-arguments concerning the Turing Test’s difficulties, so some of their criticisms are rather weak. But that’s to be expected of rigid AI enthusiasts.
I doubt they would be able to recognize either the explicit argument – that the Turing Test is vulnerable to intentional contra-rational attacks on it – or the implicit argument – that the potential for mistake is a fundamental aspect of human intelligence. The Turing Test is a direct inheritance of Comtean Positivism, with its implicit dismissal of common human experience. That’s its weakness, just waiting to be exploited properly.
However, I footnote it here in fairness to the ‘other side,’ so to speak; and the discussion there is fairly complete, albeit academic.