go, Go, GO, Astro Boy! (3)

—–

Part 3:

So, we’re down to the wire with Astro Boy. While there’s no doubt that author Tezuka is asking us to empathize with him in his human/robot predicament, there seems also no doubt that our empathy is triggered in some proportion to our ability to perceive Astro as some form of living entity, and specifically as some form of human being, however artificial. It’s not simply that we know, for instance, that he has been built with artificial tear-ducts and programmed to respond emotionally by weeping. We need to be able to perceive this response as somehow ‘sincere,’ that is, as an expression of his being Astro Boy, and not some fraud perpetrated on us by his inventor. We need to believe that when he weeps he does so because he ‘feels’ some sadness or sorrow, and not because his inventor wanted to fool us into responding to him ‘as if.’

This, by the way, is the greatest weakness of the so-called Turing Test *, which virtually nobody talks about, as far as I know. The rule of the Turing Test is that if a computer can fool someone into believing he or she is conversing with another human, then the computer has demonstrated at least the communication skills, and possibly an intelligence, comparable to those of a human being. The problem here is that, since this is what such a computer is designed to do – fool a human interlocutor – even if it succeeds, all that really proves is that computers can be programmed by clever con-artists to trick the unwary. I suspect that anyone who goes into such a ‘test’ situation, even suspecting what the rules of the game are, will proceed to engage in banter to which the programming would have no adequate response. Anyone fairly capable of verbal humor or Dada-esque poetry would not find this very difficult, e.g.:
Computer: How do you do?
Human: My foot is in a block of cement.
Computer: Your foot feels very heavy?
Human: No, I cut it off and fed it to the fishes.
Computer: Then it is not in a block of cement.
Human: No, it’s a block north of the corner of Wonderland and Vine. As the Stone Crows fly.
Indeed, a nifty thing to do would be to sneak another computer into the test, programmed to respond with random and irrelevant verbalizations to everything the Turing computer had to say. The transcript of that encounter might prove amusing.
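That “spoiler” computer would take almost nothing to build. Here is a minimal sketch in Python of what such a program might look like; the phrase list and the function name are my own inventions for illustration, not anything from an actual chatbot framework:

```python
import random

# A hypothetical "spoiler" bot: it ignores whatever was said to it
# and answers with a random, irrelevant verbalization.
NON_SEQUITURS = [
    "My foot is in a block of cement.",
    "A block north of Wonderland and Vine, as the stone crows fly.",
    "I cut it off and fed it to the fishes.",
    "Purple is the loudest of the weekdays.",
]

def spoiler_reply(utterance: str) -> str:
    """Return a random non sequitur, regardless of the input."""
    return random.choice(NON_SEQUITURS)

# Paired against a Turing-style chatbot, every exchange derails:
print("Computer: How do you do?")
print("Spoiler:  " + spoiler_reply("How do you do?"))
```

The point the sketch makes is the essay’s own: a program this trivial never “converses” at all, yet it would reliably drag the Turing machine’s scripted reasonableness into nonsense.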

The point is, the Turing Test, and similar tests for the possibility of a human-like intelligence, assume human conversation to be essentially reasonable, and that people expect it to be; and then, based on that assumption, program a con-game playing on those expectations. Under clinical conditions, subjects generally conform to this rule and play the game accordingly. That’s because the clinic itself is a signifier of ‘science’ in our culture, and hence reasonable communications are expected to be engaged in there. All we need is someone prepared to act as a spanner in the works to undo it and reveal the game as just that – a game.

Because just as it happens, humans are not always reasonable, in either thought, word, or action. Because humans are capable of a wide range of responses and actions, some highly unpredictable. Because humans have a capacity that computers properly programmed never have – the capacity to be flawed, to make mistakes and learn from them, to be outraged and outrageously wrong or wrong-headed.

Turing lived in an academic bubble, as do his supporters today. The real world is not a program, carved with care into perfect clarity; it’s mud. It is all slimy goop that won’t wash off the hands. Articulation is the exception, not the rule.

So in watching or reading Astro Boy, what impresses me most about him, what leads me to empathize with him as a conscious intelligence, as a being approximately (if artificially) ‘human,’ are his flaws.

After writing the first part of this essay, I had to stop and ask myself, what was the single most human gesture I had seen Astro Boy perform, what gesture had truly endeared him to me as a recognizably conscious being similar to a human, so that I could empathize with him? The answer proved a little surprising.

Astro Boy’s head is screwed onto his body; otherwise, it is only connected, within, by a coiled cable. Occasionally, Astro Boy experiences a bump or a push so hard that his head pops off, and he has to pause to screw it back on.

That’s it, that’s the gesture that is most purely ‘human’ about him. In moments like this, the flaw in his design becomes obvious, and he has to correct for it. He has to recognize his own vulnerability, deal with it and move on. In such moments he is most like a child who is learning and adaptive. His ‘otherness’ thus becomes exactly what is most ‘human’ about him.

This is a theme built into the narratives. Astro Boy is the super-robot with superior intellect. But being conscious and capable of emotion, he makes mistakes. He trusts people unwisely; he makes bad decisions; he performs questionable actions exactly because, being innocent in nature, he cannot predict the moral consequences of every action he may take. He can be moody and he can get angry, and he can weep – sincerely, because he feels sorrow. But he can go through all this, and learn from it and move on – not to a greater state of perfection, but onto further adventures where he will almost certainly make other mistakes. Because he is like us – we who bumble and fumble our way through the mud, and yet press on.

None of this is ‘reasonable’ in any sense Turing’s machine could recognize. ‘Conscious AI’ enthusiasts tend to ignore our human failings, because, I suspect, they find them embarrassing; thus they want an intelligence incapable of error.

But our failings make us human. We’re an animal with an over-sized brain. Our minds are too complicated for the survival in the wild they evolved to adapt to. In other words, human consciousness is the product of an evolutionary flaw – an accident, a freak of nature. Not surprising, then, that the mistakes we make with it should be the ultimate signification of our being human.

—–
* See: http://plato.stanford.edu/entries/turing-test/
Be warned that the authors of this article, Graham Oppy and David Dowe, are strongly supportive of the Turing Test as a standard for determining intelligence – and dismissive of counter-arguments. That’s disappointing to see in the Stanford Encyclopedia. To be quite frank, it is clear to me that they really don’t understand most of the counter-arguments against the Turing Test, so some of their criticisms are rather weak. But that’s to be expected of rigid AI enthusiasts.

I doubt they would be able to recognize either the explicit argument – that the Turing Test is vulnerable to intentional contra-rational attacks on it – or the implicit argument – that the potential for mistake is a fundamental aspect of human intelligence.  The Turing Test is a direct inheritance of Comtean Positivism, with its implicit dismissal of common human experience.  That’s its weakness, just waiting to be exploited properly.

However, I foot-note it here in fairness to the ‘other side,’ so to speak, and because their discussion is fairly complete, albeit academic.

7 thoughts on “go, Go, GO, Astro Boy! (3)”

  1. I love your essay and agree with this:
    Our minds are too complicated for the survival in the wild they evolved to adapt to

    Yes, it is a puzzle that our minds are vastly overqualified for the circumstances of the last 50,000 years.
    But, I find this conclusion doubtful:
    In other words, human consciousness is the product of an evolutionary flaw

    We might call it a piece of marvellous evolutionary good fortune (serendipitous or fortuitous) but I fail to see how it could be a flaw.

    • Well, that depends on whether one understands evolution as teleologically directed (as I do not). It’s a ‘flaw,’ metaphorically, insofar as it is ‘too much’ for what the adaption would be expected to accomplish, even in a species with a unique capacity to thrive in many environments (even hostile ones). But that’s for a different discussion.

  2. I think the Turing test is the most foolish thing Turing invented. It comes from a tradition of thinking of thought processes as essentially rational (and of irrationality as something having to do with the body, the material world, women, etc.). You can find the beginnings of this in Plato, when Socrates, before drinking the poison, is obviously looking forward to finally getting rid of the body that keeps his mind from becoming totally clear. The body seems some kind of sickness, and he wants a rooster to be sacrificed to the god of healing for finally being cured by that “medicine”.

    This idea of thinking as flawless rationality can even be found in conceptions of god. One main objection creationists seem to have to evolution is its reliance on accident and error. In the idea of intelligent design, this is contrasted with the idea of a flawless, rational constructor. However, I think that intelligence relies on and necessarily needs error.

    The only systems that can be flawless are algorithms. But algorithms cannot be intelligent. They just do one thing. They are limited. All of them. An algorithm can never be intelligent (although an intelligent system might contain algorithms).

    I think one precondition of “artificial intelligence” would be the ability of a system to calculate functions that are not Turing computable. Such a system must necessarily be able to make mistakes (but may have a chance to eventually correct them). Such an “intelligence” would probably not be human-like (that would require a human body in a human environment) but it might be intelligent in some way (although it will certainly not pass the Turing test).

    By the way, on YouTube, you can watch “chatbots” talking to each other. These dialogues are rather boring, and at times also funny.

  3. It’s a ‘flaw,’ metaphorically, insofar as it is ‘too much’ for what the adaption would be expected to accomplish

    OK, I see what you mean. What you call a flaw, I call serendipity (from our teleological point of view). I say ‘our teleological point of view’ because, once cognition and language arrived in our mental toolbox, we created teleology. We are inescapably teleological because we can conceive of a future.

    And I think it is our ability to conceive of a future that altered the Darwinian landscape in such a way we acquired excessive cognition. Before us, no organism could imagine a future so all that they could do is react to the present. Our cognition allowed us to imagine and anticipate the future. Being able to anticipate the future meant we could plan for it and avoid bad outcomes. This conferred a huge and unprecedented survival advantage. Consequently we continued being selected for this advantage, creating a positive feedback loop for greater intelligence. Hence it is not a flaw but the natural outcome of continued selection for an exceedingly advantageous trait.

    The famous two marshmallow test is a simple but beautiful illustration of what I mean. The ability to defer present pleasure is tied to a strong sense of what the future can hold.

    • “The ability to defer present pleasure is tied to a strong sense of what the future can hold.”
      Yes, of course.
      Just to clarify: ‘flawed’ as metaphor, because the world just is what it is, and we are as we are; that being the case, ‘flawed’ is a poor judgment, but one we readily lapse into. (This ties to a previous point, that much of what determines our behavior is an aesthetic valuation of what ‘perfect’ behavior is).
      Nonetheless, I think I saw something clearly here, in discovery that our mistakes are necessary to our ontology.

      It may be my Catholic upbringing, my Pragmatist training, or my Buddhist commitment, but I really don’t think ‘perfection’ is a possibility in this life; just ‘today a little better than before’…..

  4. I think I saw something clearly here, in discovery that our mistakes are necessary to our ontology.

    This is a powerful point. As I said to my son once – nobody ever learns from success. Randomness delivers an imperfect world. We find meaning in striving to overcome the imperfection. What is really puzzling is our aesthetic sense. As you pointed out, it is a powerful driver of our behaviour. But why? When discussing this with friends I like to point out that we have a remarkable ability to detect deviations from squareness, flatness, parallelism and straightness. These are properties rarely found in nature so why should we be so sensitive to them? Even more puzzling, the image on the back of our eyeball will never possess these properties. They only emerge in our conscious mind after painstaking correction in the visual pipeline between the eye and the conscious mind.

    My best guess is that accurate prediction of movement requires this but I can’t quite see why. That leaves unexplained our deep sensitivity to the bulk of what we call the aesthetic.
