“(…) (Tezuka) generally thought that an emotional robot like Atom would not be possible until the end of the 21st century, if ever. (…) (B)ecause a robot can only be an approximation of a human.” *
I am finally getting ready to write an essay on Astro Boy – or as he’s better known in Japan, the Mighty Atom – the conscious robot hero admired by millions. But Astro Boy was never intended by its creator, Osamu Tezuka, to be a simple depiction of conscious AI in a robot body, righting wrongs and saving lives, etc. On the contrary, Tezuka understood, however intuitively, that he was opening a veritable Pandora’s box of problematics concerning what it means to be human, and how it is that two equally intelligent species of consciousness could share the same environment in a peaceful and equitable way.
As entry into this topic, I here present a general discussion concerning the conscious AI issue, drawn from comments I made at another website.
Scientia Salon recently ran a very good article on the problem of trusting too much in the possibility of conscious artificial intelligence, especially when delegating military decisions to AI choices – which our military does today, consciousness be damned. ** This came at a most opportune moment for me, as I prepare an essay on the phenomenon of Astro Boy, the conscious robot hero of Japanese cartoons; one of the questions I am wrestling with is how we come to perceive robots as responding to stimuli in a ‘conscious’ manner. Notably, when this issue is addressed in science fiction, the robot is heavily anthropomorphized physically, often in the form of what we know as androids – robots with synthetic skin, etc. (Astro Boy has the form of a twelve-year-old boy.) Obviously, we are looking for something quite human in our supposedly superior AI robots….
One has to stop and wonder why futurists, transhumanists, singularists, etc., all want to replace human beings with another species of conscious intelligence? Is being human really all that horrible that we might hope for its extinction?
Of course there are those who argue that such replacement somehow articulates a stage in ‘our’ evolution; but if our evolution implies the passing on of genetic material, that is of course ridiculous. Conscious robots could conceivably exist alongside us, but they would manifestly not be us.
The concerns troubling the likes of Stephen Hawking, who has recently voiced fears about conscious AI becoming our masters (or destroyers), have to do, I think, with the problem of relating two distinctly different forms of intelligence – one alien (or AI), the other our own – in the same environment. What if these conscious robots don’t recognize us as another ‘life-form’ worth preserving? ‘Well, we will program them to.’ Ah, but then they are not fully conscious, if they are completely programmed. Their consciousness would need the capacity to evolve on its own, and in the process they may reach the decision that their forebears are a little embarrassing to the history of the great robot race….
In my opinion, the computational theory of mind, on which ‘conscious AI’ hopes rest, is mistaken in the first instance – it began as a metaphor; we built computers, and then began asking how computer functions might mimic human thought. Now it seems taken by some as fact, with almost religious faith. Yet it is still metaphor. We built computers; computers didn’t build us.
As I always like to note, I will admit a robot can be conscious when it has sex organs, hormones, urges to fornicate, and the ability to do so. I think it also needs to eat, belch, and defecate. It has to get enough sleep, or exhibit signs of exhaustion when it doesn’t. A robot experiencing hot flashes during menopause might convince me. Because consciousness arises from the body of which it is part and parcel. Besides mistaking trope for reality, the computational theory of mind sees the mind as a mere thinking machine; that isn’t really what the mind is there to do. Thinking is just a happy (or unhappy) accident; the function of the mind is to find ways to satisfy the body.
But perhaps that’s what ‘strong AI’ futurists most wish to replace.
But even at the level of intellection, I see a problem here. Not only can an AI not experience a biological body; it cannot reach intellectively beyond the immediate moment to realize the meaning (in a holistic, non-linguistic sense) of having a particular body with a particular consciousness, filled with all the monuments and rubble of personal and social knowledge acquired through experience.
Let us here distinguish between the simple understanding of sentences – which no one doubts computers can accomplish today, indeed must accomplish for their programming to translate into activity – and what we mean by ‘understanding’ as the grasp of deeper, broader insights into our existential experience.
It is said that on having accomplished victory at Waterloo, standing above the corpse-littered battlefield, Wellington remarked, “Next to a battle lost, the saddest thing is a battle won.”
It is not the understanding of a sentence that computers are incapable of, but the understanding expressed in such a statement. No computer can understand what it means to start a nuclear war – the lives lost, the horror of facing life afterwards for the survivors, the damages done to human economics, culture and social fabric, the great weight of efforts at recovery – no computer could understand any of this, and I suggest that not only could no computer ever understand this, but no computer would ever need to understand this, since, assuming it were conscious, it would recognize human life and human values as fundamentally alien to it.
The discussion concerning the possibility of a computer achieving consciousness is impoverished by the assumption that the experience of a living human consciousness can be reduced to computation realized in algorithmic language. That’s absurd.
Consciousness may be an illusion, but even those who claim so are clearly not willing to let go of our values, and of the emotions responding to values realized or denied. And no computer can ever share these. Which is exactly why we have science fiction to wrestle with such questions, because no theory can elaborate this question properly.
No computer will ever weep over a dead son, or rejoice in the successful life of a daughter. No computer will ever suffer disappointment or need to find ways to live with it and carry on.
No computer will ever have to determine what is mere lust or truly love, control its anger and laugh at its own flaws. No computer will ever confront its own mortality.
All this is what makes us human, not the algorithms of computational thought.
Without this, the dangers of handing over to computers control of political policy, or economics, or nuclear weapons, are manifest.
I hate to say it, and I hope no one takes offense, but it must be said: an obsession with producing conscious AI may very well border on the pathological. The implicit disdain for the body, and suspicion of social connectivity, are evident. For the question remains: why would we want to do this? What is so wrong with being human?
Humans are not machines. The machinery only gets us to the point of experiencing life, it doesn’t experience it for us.
Frankly, ‘conscious AI’ advocates don’t have much of an argument. Under analysis, beneath their technical terminology and logical back-flips, what we find is a bunch of assertions: “it’s possible,” “we can do this,” “humans are machines,” etc.
To even begin an argument here, they would need to produce a viable interpretation of human behavior and responses in purely mechanistic terms, including the phenomena I have previously described.
Alternatively, they could go ahead and build the conscious computer of their dreams within our lifetimes (which, due to my health, doesn’t give them much time).
To say, ‘no computer will ever do this,’ is admittedly simply an assertion. To couple this with an empirical reality – “weeping for a dead son,” etc. – forms an enthymeme. An enthymeme is an abbreviated argument deployed rhetorically. The hidden (though not deeply hidden) premises are that (1) any full experience of the reality of human consciousness arises from the human experience of being human; and (2) conscious AI, whatever its ontological status, is categorically not human; therefore (3) conscious AI cannot replicate the full experience of this reality.
It’s basically a problem similar to the question of whether an alien intelligence could recognize us as an intelligent life form. A computer could be ‘intelligent’ by some definition, and highly responsive; yet if it had a consciousness, that consciousness would not be anything like our own.
The empirically verifiable human reality informing the abbreviated claim in my enthymeme – the weeping, the rejoicing, familial relationships, satisfaction with success, disappointment with failure, and all the values that ground these – is what a counter-argument must account for. To do this, one would first need to re-describe this reality in a purely digital, mechanistic schema.
Some may think my responses are emotionally motivated (say by fear, or nostalgia), but to be frank, I’m too old to be much concerned with how our species wastes its time in the future – I won’t be here.
My personal emotions are irrelevant. But I do suspect that there are many people who might be offended to find their personal values and experiences reduced to disembodied, unsociable machinery. That matters politically, in terms of getting funding for AI projects. So it’s unsurprising that public ‘conscious AI’ advocates sugar-coat their message with promises of kinder, gentler robots befriending us and ‘making our lives easier’ (i.e., making us unemployed). To be sure, the military wants unfeeling weapons programs, and businesses want uncomplaining robots. But I don’t know if one can get voters on board with funding based on the promise ‘we will replace you with a better machine.’
For many reasons then, the question of perfection of ‘conscious AI’ is nearly as void and irrelevant to our real daily lives as Medieval theological discussions of an immortal, immaterial soul. It can’t produce the empirical evidence needed, it can’t produce arguments grounded in human experience, it is filled with empty promises of an uploaded ‘afterlife’ utterly detached from that experience. Its perpetrators turn a deaf ear to any talk of possible difficulties or dangers. It survives largely because militarists and businessmen feed off it like cockroaches off confectionery sugar. Much like popes and priests once fed off theology.
That doesn’t mean the research isn’t useful; but the research doesn’t need the ideology. Frankly, I don’t think any of us do.
Teacher Mustachio: “I dunno how to put this, Astro, but you are different…. ‘Course, I ‘spose it’s because you’re a robot….”
Astro: “You mean I’ll never be able to become human?”
Mustachio: “You need to think more about how to become a great robot, instead of forcing yourself to become human….”
Astro: “I think I get it now….” ***
* The Astro Boy Essays; Frederik Schodt; Stone Bridge Press; 2007; p. 118.
*** Astro Boy, vol. 8; Osamu Tezuka; trans. Frederik J. Schodt; Dark Horse Comics; 2002; p. 99.