No theory of conscious AI (prelude to an Astro Boy essay)


—–
“(…) (Tezuka) generally thought that an emotional robot like Atom would not be possible until the end of the 21st century, if ever. (…) (B)ecause a robot can only be an approximation of a human.” *
—–

I am finally getting ready to write an essay on Astro Boy – or as he’s better known in Japan, the Mighty Atom – the conscious robot hero admired by millions. But Astro Boy was never intended by its creator, Osamu Tezuka, to be a simple depiction of conscious AI in a robot body, righting wrongs and saving lives, etc. On the contrary, Tezuka understood, however intuitively, that he was opening a veritable Pandora’s box of problematics concerning what it means to be human, and how it is that two equally intelligent species of consciousness could share the same environment in a peaceful and equitable way.

As entry into this topic, I here present a general discussion concerning the conscious AI issue, drawn from comments I made at another website.

Scientia Salon recently had a very good article on the problem of trusting too much in the possibility of conscious artificial intelligence, especially when delegating military decisions to AI, which our military does today, consciousness be damned. ** This came at a most opportune moment for me, as I prepare an essay on the phenomenon of Astro Boy, the conscious robot hero of Japanese cartoons; one of the questions I am wrestling with is how we come to perceive robots as responding to stimuli in a ‘conscious’ manner. Notably, when this issue is addressed in science fiction, the robot is heavily anthropomorphized physically, often in the form of what we know as androids, robots with synthetic skin, etc. (Astro Boy has the form of a twelve-year-old boy). Obviously, we are looking for something quite human in our supposedly superior AI robots….

One has to stop and wonder why futurists, transhumanists, singularists, etc., all want to replace human beings with another species of conscious intelligence. Is being human really all that horrible that we might hope for its extinction?

Of course there are those who argue that such replacement somehow articulates a stage in ‘our’ evolution; but if our evolution implies the passing on of genetic material, that is of course ridiculous. Conscious robots could conceivably exist alongside us, but they would manifestly not be us.

The concerns bothering the likes of Stephen Hawking, who has recently voiced fears about the dangers of conscious AI becoming our masters (or destroyers), have, I think, to do with the problem of relating two distinctly different forms of intelligence, one alien (the AI) and one our own, within the same environment. What if these conscious robots don’t recognize us as another ‘life-form’ worth preserving? ‘Well, we will program them to.’ Ah, but then they are not fully conscious, if they are completely programmed. Their consciousness would need the capacity to evolve on its own, and they may in the process decide that their forebears are a little embarrassing to the history of the great robot race….

In my opinion, the computational theory of mind on which ‘conscious AI’ hopes rest is mistaken in the first instance: it began as a metaphor; we built computers, and then began asking how computer functions might mimic human thought. Now it seems taken by some as fact, with almost religious faith. Yet it is still metaphor. We built computers; computers didn’t build us.

As I always like to note, I will admit a robot can be conscious when it has sex organs, hormones, urges to fornicate, and the ability to act on them. I think it also needs to eat, belch, and defecate. It has to get enough sleep, or exhibit signs of exhaustion when it doesn’t. A robot experiencing hot flashes during menopause might convince me. Because consciousness arises from the body, of which it is part and parcel. Besides mistaking trope for reality, the computational theory of mind sees the mind as a mere thinking machine; but that isn’t really what the mind is there to do. Thinking is just a happy (or unhappy) accident; the function of the mind is to find ways to satisfy the body.

But perhaps that’s what ‘strong AI’ futurists most wish to replace.

But even at the level of intellection, I see a problem here. Not only can an AI not experience a biological body, it cannot reach intellectively beyond the immediate moment to realize the meaning (in a holistic, non-linguistic sense) of having a particular body with a particular consciousness, filled with all the monuments and rubble of personal and social knowledge acquired through experience.

Let us here distinguish between the simple understanding of sentences (which no one doubts computers can accomplish today, and indeed must accomplish for their programming to translate into activity) and what we mean by ‘understanding’ as the grasping of deeper, broader insights into our existential experience.

It is said that after the victory at Waterloo, standing above the corpse-littered battlefield, Wellington remarked, “Next to a battle lost, the saddest thing is a battle won.”

It is not the understanding of a sentence that computers are incapable of, but the understanding expressed in such a statement. No computer can understand what it means to start a nuclear war: the lives lost, the horror of facing life afterwards for the survivors, the damage done to economies, culture, and the social fabric, the great weight of the efforts at recovery. No computer could understand any of this; and I suggest that not only could no computer ever understand it, but no computer would ever need to understand it, since, assuming it were conscious, it would recognize human life and human values as fundamentally alien to it.

The discussion concerning the possibility of a computer achieving consciousness is impoverished by the assumption that the experience of a living human consciousness can be reduced to computation realized in algorithmic language. That’s absurd.

Consciousness may be an illusion, but what even those who claim so are clearly unwilling to let go of are our values, and the emotions that respond to values realized or denied. And no computer can ever share these. Which is exactly why we have science fiction to wrestle with such questions: no theory can elaborate this question properly.

No computer will ever weep over a dead son, or rejoice in the successful life of a daughter. No computer will ever suffer disappointment or need to find ways to live with it and carry on.
No computer will ever have to determine what is mere lust and what true love, control its anger, or laugh at its own flaws. No computer will ever confront its own mortality.

All this is what makes us human, not the algorithms of computational thought.

Without this, the dangers of handing over to computers control of political policy, or economics, or nuclear weapons, are manifest.

I hate to say it, and I hope no one takes offense, but it must be said: an obsession with producing conscious AI may very well border on the pathological. The implicit disdain for the body, and the suspicion of social connectivity, are evident. For the question remains: why would we want to do this? What is so wrong with being human?

Humans are not machines. The machinery only gets us to the point of experiencing life, it doesn’t experience it for us.

Frankly, ‘conscious AI’ advocates don’t have much of an argument. Under analysis, beneath their technical terminology and logical back-flips, what we find is a bunch of assertions: “it’s possible,” “we can do this,” “humans are machines,” etc.

To even begin an argument here, they would need to produce a viable interpretation of human behavior and responses in purely mechanistic terms, including the phenomena I have previously described.

Alternatively, they could go ahead and build the conscious computer of their dreams within our lifetimes (which, due to my health, doesn’t give them much time).

To say, ‘no computer will ever do this,’ is admittedly simply an assertion. But to couple it with an empirical reality (“weeping for a dead son,” etc.) forms an enthymeme. An enthymeme is an abbreviated argument deployed rhetorically. The hidden (but not deeply hidden) premises are that (1) any full experience of the reality of human consciousness arises from the human experience of being human; and (2) conscious AI, whatever its ontological status, is categorically not human; therefore (3) conscious AI cannot replicate the full experience of this reality.
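For readers who like such things, the bare skeleton of that inference can be written out formally. The following is only a minimal sketch in the Lean proof language; the predicate names (Human, FullHumanExperience, ConsciousAI) are merely my own illustrative labels for the premises above, not anything found in the original argument:

    -- Hypothetical vocabulary for the sketch.
    axiom Being : Type
    axiom Human : Being → Prop
    axiom FullHumanExperience : Being → Prop
    axiom ConsciousAI : Being → Prop

    -- (1) Only a human being can have the full experience of human consciousness.
    axiom premise1 : ∀ b, FullHumanExperience b → Human b
    -- (2) Conscious AI, whatever its ontological status, is categorically not human.
    axiom premise2 : ∀ b, ConsciousAI b → ¬ Human b

    -- (3) Therefore conscious AI cannot replicate that full experience.
    theorem conclusion : ∀ b, ConsciousAI b → ¬ FullHumanExperience b :=
      fun b hAI hExp => premise2 b hAI (premise1 b hExp)

The formalism does no philosophical work on its own, of course; everything hangs on defending premise (1), which is what the appeal to weeping, rejoicing, and the rest is meant to do.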

It’s basically a problem similar to the question of whether an alien intelligence could recognize us as an intelligent life form. A computer could be ‘intelligent’ by some definition, and highly responsive, yet if it had a consciousness, that consciousness would not be anything like our own.

The empirically verifiable human reality informing the abbreviated claim in my enthymeme (this weeping, rejoicing, familial relationship, satisfaction with success, disappointment with failure, and all the values that ground these) is what a counter-argument must account for. In order to do this one would first need to re-describe this reality in a purely digital, mechanistic schema.

Some may think my responses are emotionally motivated (say by fear, or nostalgia), but to be frank, I’m too old to be much concerned with how our species wastes its time in the future – I won’t be here.

My personal emotions are irrelevant. But I do suspect that there are many people who might be offended to find their personal values and experiences reduced to disembodied, unsociable machinery. That matters politically, in terms of getting funding for AI projects. So it’s unsurprising that public ‘conscious AI’ advocates sugar-coat their message with promises of kinder, gentler robots befriending us and ‘making our lives easier’ (i.e., leaving us unemployed). To be sure, the military wants unfeeling weapons programs, and businesses want uncomplaining robots. But I don’t know that one can get voters on board with funding on the promise that ‘we will replace you with a better machine.’

For many reasons, then, the question of perfecting ‘conscious AI’ is nearly as void and irrelevant to our real daily lives as medieval theological discussions of an immortal, immaterial soul. It can’t produce the empirical evidence needed, it can’t produce arguments grounded in human experience, and it is filled with empty promises of an uploaded ‘afterlife’ utterly detached from that experience. Its perpetrators turn a deaf ear to any talk of possible difficulties or dangers. It survives largely because militarists and businessmen feed off it like cockroaches off confectioners’ sugar. Much as popes and priests once fed off theology.

That doesn’t mean the research isn’t useful; but the research doesn’t need the ideology. Frankly, I don’t think any of us do.

—–
Teacher Mustachio: “I dunno how to put this, Astro, but you are different…. ‘Course, I ‘spose it’s because you’re a robot….”
Astro: “You mean I’ll never be able to become human?”
(…)
Mustachio: “You need to think more about how to become a great robot, instead of forcing yourself to become human….”
Astro: “I think I get it now….” ***
—–

* The Astro Boy Essays; Frederik L. Schodt; Stone Bridge Press; 2007; p. 118.

** https://scientiasalon.wordpress.com/2015/02/27/the-danger-of-artificial-stupidity/

*** Astro Boy, vol. 8; Osamu Tezuka; trans. Frederik L. Schodt; Dark Horse Comics; 2002; p. 99.


10 thoughts on “No theory of conscious AI (prelude to an Astro Boy essay)”

  1. I am not really convinced by your argumentation here. I am not a transhumanist, singularist, etc., and sooner or later I will probably tune my as-if-o-scope on these ideologies. I also think that AI is flawed, but for different reasons.

    However, it looks to me as though you are equating consciousness with the human experience (maybe I am misunderstanding you here). I don’t think this is correct.

    I also find it unlikely that there will be machines that have a human-like consciousness (although I am afraid there may be the possibility of a severely crippled one; see http://asifoscope.org/2012/11/20/an-open-letter-to-the-human-brain-project/).

    However, as long as we do not really understand how consciousness, i.e. the ability to have any form of subjective experience (of course a circular pseudo-definition, but if you are conscious I think you know what I mean 😉 ), comes about, we cannot say that it is limited to humans.

    What we know, I think, is that somehow our consciousness depends on the structure and workings of our brain. If this is altered, by drugs, by injuries, or during certain phases of sleep, the experience is changed. People whose brains work differently also have a different experience (e.g. the experienced world of autists, if they have one, is probably quite different from what you and I experience; read Axel Brauns’ autobiographical “Buntschatten und Fledermäuse”, for example, though it looks like it has not been translated into English, unfortunately). I cannot even exclude the possibility that some parts of my brain or my body have a subjective side of their own, as inaccessible to me as that of another human being, although I do not believe this.

    I also have no problem thinking that some other animals, like, for example, orcas, crows, or even squid, have some subjective side. I don’t know if they do (strictly speaking, I know it only about myself). However, if they do experience anything, their experienced world would, of course, be entirely different from our own. A conscious deep-sea fish (if such a beast exists) would indeed have difficulties understanding what it means to lose a son, but then I have just as much difficulty understanding what such an existence would feel like.

    If you are a dualist of the Cartesian brand, you might (or might not) limit consciousness to humans. If you do (as I think Descartes did) you might throw a dog into a pot of boiling water without any bad feeling because although the dog will behave as if it feels pain, you would assume it does not.
    If, on the other hand, you are a monist of the materialist brand, you would conclude from your own consciousness that material systems with consciousness are possible. As long as we do not have a theory of consciousness that makes clear why this is something only humans can have, you would have to concede the possibility of non-human conscious beings. You would be reluctant to boil a living dog (well, at least I would not do that). But then there is, without such a theory, no reason to exclude the possibility of some kind of conscious artificial system (and I don’t care if that would be a “machine” or not, that is a matter of definition). I do not see why the possibility of consciousness should be limited to non-artificial things.

    Just like the consciousness of a fish or an orca, if it exists, would be radically different from that of a human, such a consciousness of an artificial system would probably be very different from our own. But the missing ability of such a system to understand human experience as well as the missing ability of humans to understand the experienced world of such a system would not mean that it is not conscious.

    So I would take into consideration the possibility that consciousness as experienced by humans is only one possibility out of a wider spectrum (and even the human part of that spectrum is not so narrow; I have difficulties understanding some experiences of others and I am sure some people have difficulties understanding some of my own experiences as well).

    I also find the attempt to understand human cognitive processes in terms of information processing legitimate as long as it has not been demonstrated that this cannot work. And attempts to develop a theory of consciousness on such a basis are legitimate as long as there is no good argument showing that this could not work. Even if it does not, trying this line of research could at least provide some insights into why that is so and thus maybe provide some insight into or hints at how consciousness is possible.


    • Very good comment, and I actually agree with most of it. Indeed, a problem with the ‘conscious AI’ argument is that it presumes human consciousness to be the model of any consciousness (and there are some biologists and neurologists pursuing their work under the same presumption). That’s one of the reasons I raise the point that, were AI to achieve some form of consciousness (which I think unlikely), it simply would not be ‘our’ consciousness, and might not be able to recognize ours as a consciousness (in the same way that there are those who won’t allow that other forms of life have some form of consciousness).

      In a previous post, discussing the question of whether fish feel pain, I remarked that it is probably true that fish don’t experience ‘pain’ as we understand it, but as living things, they continue to strive to live, and respond with obvious signs of stress to perceived threats to this striving. Such stress may not be ‘pain,’ but it is clearly some form of suffering, and may be common to all life, certainly among higher-order forms.

      I tend to use the word ‘consciousness’ in reference to human experience, because that is simply common to discussions on the topic. However, my own preference is the term ‘sentience’ (as I learned to use the term as cognate for terms from Eastern philosophy); this is frequently equated with ‘consciousness,’ but its etymology makes it clearly derived from experiencing the senses, ‘feelings,’ rather than from capacity for some sort of ‘knowing awareness,’ which obviously implicates mind.

      That’s raises a curious problem – even if AI could achieve any form of consciousness, could it ever achieve sentience? I think not, and this grounds much of my argument here.

      I don’t have any problem with the research; but when I hear AI enthusiasts promising us that “robots will be our friends and companions” (yes, I actually heard that on the radio, just yesterday!), I admit cringe a little.


      • With sentience my argument would be the same: if we are material systems (i.e. there is no special “soul-stuff”), and I don’t know whether this is so, but if we are, then we are an example of a material system having sentience, i.e. having some kind of feeling and/or emotion. If sentient material systems are possible, I see no reason why we should be the only ones, and I also see no reason why artificial systems like that would not be possible as well. As long as we don’t know how subjective experience comes about, I would not exclude this possibility.

        The robot as your “friend and companion” thing is probably marketing hype. It is easy to project such feelings into things (think of children with their dolls or teddy bears, think of Tamagotchis), and one can make money out of that. Building a robot for such purposes is probably easier than building a robot that is useful, and people will buy such stuff and make its producers rich (and deplete the resources of the planet a little bit more by doing so).


  2. Hi EJ,
    ignoring your gratuitous kicks at your favourite prejudices, I think that was a good essay. As an aside, I note that your rants are hard to suppress; they tend to come out sotto voce 🙂

    But back to business. Like you, I believe that conscious AI is exceedingly unlikely. My belief is based on some arguments by Edward Feser that seem to me particularly compelling (a subject for discussion another time). But we will have what I consider to be ‘near intelligence’, where we construct advanced computer-assisted machinery and tools. This possibility will be very seductive to business as it tries to reduce its operational costs. Imagine that: no pension costs, no medical insurance, no vacation time, and 24-hour operation. Undoubtedly this will happen, and it will precipitate a profound economic crisis.

    That is because every worker is also a consumer. Every displaced worker is a lost customer. Creating unemployed workers is a form of economic suicide, since it destroys the consumer base. Already we can see this process playing out in slow motion in Japan, except that there the consumer base is not (yet) being destroyed by automation; it is being destroyed by a declining birth rate and an aging population.

    The decline in birth rate means fewer younger workers. These younger workers were an avid market for new products. Pensioners consume far less as they adjust to static and reduced income. The result is depressed consumption in Japan and this reduced demand is the true cause of deflation. This problem can only get worse as the population profile changes. This same scenario is beginning to play out across much of Europe.

    Now add increased factory automation into this mix. There will be more redundancies and the consumer base will start to shrink considerably as this adds to the problems of lower birth rate and aging population. This will precipitate the end of economic growth and indeed the beginning of economic decline. There will simply not be enough consumers.

    The increased concentration of wealth in the hands of the one percent will not help because, despite their conspicuous consumption, they still do not spend proportionately more. The power to change this lies in the hands of the one percent. But wealth insulates them from the problems of the 99%, so their response will be a case of too little, too late. Therefore we can expect a major social crisis.

    Ideally, the only solution I can envisage is not one of laying off workers but of making them work shorter hours for the same pay. This is a workable solution because workers with more leisure time spend more on leisure activities. This will rebalance the economy and return it to growth.

    But, don’t hold your breath. The corporate management I know are especially short-changed when it comes to wisdom.


  3. Now we come to my greatest fear. Remote warfare liberates us from moral restraints. And ‘near intelligence’ will greatly enable remote warfare, as we can already see in the case of drone attacks.

    Why that should be so can be understood in terms of circles of empathy.

    Imagine this scenario. I am hunting and see a distant wildebeest. I take careful aim, squeeze the trigger. A second or so later there is a satisfying thud and the wildebeest falls to the ground. I am exhilarated. We walk up to the wildebeest and see it is still alive. There is agonised pleading in its eyes as it struggles futilely against the severe wound. And now I begin to feel sickened. I pull out my knife to cut its throat and my courage nearly fails me. I am nauseated. The immediacy of death has pierced my moral armour. In the same way a soldier will find it easy to squeeze off a shot at a remote figure but find it hard to plunge in the bayonet.

    We show the strongest empathy for those closest to us and consequently show the greatest moral concern for them. The further the other is from the center of the circle of empathy, the less moral concern we show for them. Many things conspire to create greater moral distance. Physical distance, language, culture, history, anger, fear, hatred, race are all examples and this is why we are able to engage in war despite our moral scruples.

    And this brings us to the main problem. Remote warfare creates the greatest moral distance. Consequently we will find it easier to go to war, as Obama so happily demonstrates. There will be fewer body bags, fewer grieving relatives and fewer demonstrating students since we no longer need to conscript them.

    Not only will we find it easier to go to war, we will also take greater risks. When we see the enemy on a computer screen, free from risk to oneself, he is dehumanised and falls off the outer edge of our moral concerns. We will kill more readily and more indiscriminately.

    This is the future I fear. I already see it in the face of the legions of fanatics playing violent war games.


    • This is all too true. Indeed, embarrassingly, proponents of remote-controlled warfare have no argument against such a point beyond an appeal to self-interest: ‘this reduces *American* casualties.’ True, but it seems to increase what they call ‘collateral damage’, i.e., civilian deaths, an issue they really prefer to leave unaddressed.

      So, to go further, what happens were we to hand control of remote-warfare weapons over to yet another machine, however intelligent? Conceivably, human casualties, on either side, would mean nothing to it at all.


      • proponents of remote controlled warfare have no arguments against such a point beyond appeal to self interest

        Yes, indeed. This is the deadly trap created by today’s prevalent framework of moral consequentialism.

        There have been some debates about writing morality into machines; this is the wrong debate. We should be debating how to inculcate a suitable moral framework in the minds of their controllers.

