The trolley problem and the complexities of history

This was originally a response to a discussion of the so-called trolley problem – a supposed ethical dilemma in which one must choose whether to allow a trolley to speed toward five innocent people, hit a switch that may redirect it toward another innocent person on a second track, or simply throw a person in front of the trolley in order to save the lives of the other five. Basically, a choice between deontological and utilitarian ethics. I can’t remember whether it was devised by psychologists but is used by some philosophers as a thought experiment, or the other way around. It is, from my perspective, utterly useless.

Ethics can get very complicated. Or rather, it always is complicated; but when we make our actual decisions, we do so by focusing on specific details of the context in which the decisions are made.

Do we begin an understanding of ethics in Germany, by studying the behavior of the Germans and the Nazis in the ’30s and ’40s? Of course, but how could it be otherwise? And in such study our purpose is not to justify that behavior, but to understand it, and to derive principles, both positive and negative, according to which we have greater purchase over our own behavior in the future.

Having written a study on Hitler, I had to confront a wide range of behaviors in Germany in that era. In that confrontation, I had to ask some painful questions. What made highly intelligent and otherwise ethical doctors engage in crude and cruel ‘experiments’? Why did supposedly decent truck drivers willingly deliver Zyklon B to the death camps, knowing what it was intended for? If one had asked a young soldier whether it was right to beat an infant to death, he would not only have rejected the suggestion, he would have been appalled. Yet the next day he would beat an infant to death, persuaded that the infant’s Jewish descent, or the presumed wisdom of the officer ordering him to do it, effectively excused him from responsibility.

After ordering the police to form what were effectively death squads, to ‘clean up’ Jewish villages in Poland in the wake of the invasion, Himmler decided it was his duty to witness one of these mass executions. He came, he saw, he promptly threw up, sickened with horror. Then he just as promptly reassured the men involved that they were engaging in terrible acts for the greater glory of Germany, and that they would be well remembered for their ‘moral’ sacrifice. (By the way, the notion that these special police had to follow orders in performing mass murders happens to be a lie. If any of them felt they could not in good conscience participate, they were re-assigned to desk jobs back in Germany. Partly for this reason they were eventually replaced by the more dedicated SS.)

It is little known, but the Supreme Court of Germany, at least up to the time of my study, never ruled Hitler’s dictatorship or the laws made under it illegitimate; it held that they were entirely constitutional for their time, merely superseded by the post-war constitution. That should give us pause.

Other odd facts raise troubling questions. Himmler, trained as an agronomist, believed the stars were ice crystals. But the Nazis condemned contemporary physics as “Jewish science,” except of course when it could be used to build weapons. Goebbels held a doctorate in German literature – one of some 40,000 Nazis holding graduate degrees in various fields, including half the medical doctors in Germany.

A right-wing influence on the young in the ’20s and ’30s was a major folk music revival. One of the most popular poets in this era was Walt Whitman in translation. Germany was peppered with pagan-revival religious cults, a movement dating back a century previous. The concentration camps were modeled in part on relocation camps for American Indians in the previous century.

Although homosexuals were oppressed and sent to camps in the later ’30s, the leadership of the Nazi SA (Brownshirts) was notorious for its homosexual orgies (which led the Army General Staff to demand the SA leaders’ execution, carried out in the Night of the Long Knives).

The Marxists in the Reichstag voted for Hitler’s chancellorship, thinking that would position them to better negotiate with the Nazis.

Sociological analysis indicates that a third of Germany’s population actively supported Hitler, another third decided to go along with him, because what the heck, what did they have to lose? The final third were opposed to Hitler, but after all, they were Germans, and respected his legitimate election. Given the brutal totalitarianism of the Nazis, by the time they thought to resist, they were stuck.

Hitler himself was a vegetarian, something of an ascetic who only indulged by pouring sugar in his wine; he ended up addicted to pain pills. He banned modern artists, but in his youth had hoped to become one. He was fond of Mickey Mouse cartoons. Once the war started he found himself losing interest in Wagner’s operas. He told his architect Speer that he wanted buildings that would make ‘beautiful ruins.’ He refused to marry his lover Eva Braun until the moment he determined that they both needed to die. In the bunker he admitted bitterly that Schopenhauer had been right that the way of ‘Will’ was an exercise in futility, and that the Germans had proven the weaker race after all.

Historical facts like these present a wide array of ethical and political problems that aren’t going to be solved by simplistic reduction to binary choices, readily determined by psychologists or moral absolutists.

What next, the ‘five-year old Hitler dilemma’? – ‘if you could go back in time and shoot Hitler at age five, would you do so?’ Yes; double tap – and always put one in the brain.

Who are those five people the trolley is racing towards? Answer that question and the problem might be easier to solve.


Violence and identity

“I wouldn’t have it any other way”

The Wild Bunch is a 1969 film directed by Sam Peckinpah (written by Peckinpah and Walon Green) [1]. Nominally a Western, it tells the story of a gang of aging outlaws in the days leading up to their last gun battle.

After a failed payroll robbery, in which more innocents are killed than combatants, five surviving outlaws make their way into Mexico, broke and dispirited. The lead outlaw, Pike Bishop, remarks to his colleague Dutch that he wants to make one last big haul and then “back off.” “Back off to what?” Dutch asks, for which there is no answer. Finally Dutch reminds Bishop “they’ll be waiting for us,” and Bishop, the eternal adventurer, replies “I wouldn’t have it any other way.”

In Mexico, the Bunch, including the two Gorch brothers, Lyle and Tector, and Sykes, an old man who rides with them, visit the home town of their youngest member, Angel, which has recently suffered a visit by Federal troops under General Mapache, during which anti-Huerta rebel sympathizers were rooted out and murdered. The Bunch forms an odd bond with the townsfolk, but they’re outlaws and they’re broke. Eventually they make a deal with Mapache (who is advised by Germans eager to see Mexico allied with them in the impending war in Europe) to rob a US arms train across the border. The robbery is successful, and they return to Mexico with the stolen arms (including a machine gun), pursued, however, by a group of bounty hunters led by Deke Thornton, a former outlaw whom Bishop once abandoned during a police raid on a bordello. Later, the bounty hunters will wound Sykes, whom the Bunch will abandon to his fate.

Along the trail, Angel, a rebel sympathizer himself, has some Indian friends carry away a case of guns and another of ammunition. Angel, however, has been betrayed by the mother of a young woman he killed in a fit of anger for having run off to join Mapache’s camp followers. The outlaws complete their deal with Mapache, but surrender Angel over to Mapache.  Deciding to let Mapache deal with the bounty hunters, they return to the Army headquarters in the ruins of an old winery. However, their betrayal of Angel haunts them. After a brief period of whoring and drinking, they decide to confront Mapache and demand the return of their colleague. Mapache cuts Angel’s throat, and without hesitation Pike and Dutch shoot him down. At this point, the Bunch probably could take hostages and back off, but to what? Instead they throw themselves gleefully into a gun battle with some 200 Federales, and by taking control of the machine gun do quite a bit of damage. Eventually, however, the inevitable happens, and they end up dead, Pike shot by a young boy with a rifle.

As the surviving Federales limp out from the Army HQ, Thornton shows up. He sends the bounty hunters home with the outlaws’ bodies, but remains to mourn the loss of his former friends. Sykes rides up with the rebel Indians who have saved him, and suggests Thornton join them. “It ain’t like it used to be, but it’ll do.” Laughing in the face of fate, they ride off to join the revolution.

The thematic power of the film hinges on two apposite recognitions. The first is that the outlaws are bad men. They rob, they cheat, they lie, they kill without compunction. They seem to hold nothing sacred and have no respect for any ethical code.

The second recognition is that this judgment is not entirely complete or correct. They have a sense of humor and an undeniable intelligence. They are able to sympathize with the oppressed villagers in Mexico. They have a sense of being bound together, and this is what leads them to their final gun battle.

The Bunch have lived largely wretched lives. As professional outlaws, they are dedicated to acquiring wealth by criminal means, but throughout the film, it is clear that wealth offered only two things for them: prostitutes and liquor. Although Pike was once in love and thinking of settling down, and (the asexual) Dutch speaks wistfully of buying a small ranch, they are just as committed to the outlaw lifestyle as the unrepentant Gorches; they just would rather believe otherwise.

This is because they are committed to a life of violence, to the thrills of dangerous heists, of chases across the landscape of the Southwest, and of gun fights. They rob largely to support that lifestyle, not the other way around.

The finale of the film has two major points of decision, the first determining the second. The first is when Pike, dressing after sex with a prostitute, sits on the bed finishing off a bottle of tequila.  That’s his life; and with the wealth gotten from the Mapache deal, he could continue it indefinitely. In the next room, the Gorch brothers, also drunk, argue with another prostitute over the price of her services. That’s their life, too. Meanwhile, Angel is getting tortured to death for being an outlaw with a conscience. Pike slams the empty bottle to the floor, and the march into battle begins.

The second point of decision has already been remarked on.  The moment after shooting Mapache, when they might have escaped, the Bunch choose to fight instead. Why do they do it? It’s not for the money, the drinking or the prostitutes.  Is it for revenge?  No, it’s because they live for the violence, and they do so as a team, and they have reached the moment at which they can live it to its logical conclusion.

Peckinpah remarked that, for that moment to carry any weight, the outlaws needed to be humanized to the extent that the audience could sympathize with them. He was, I think, largely successful. But the film has been controversial, not only because of its portrayal of violence, but because in the climactic battle Peckinpah pushes our sympathies for the Bunch beyond mere recognition of their humanity. They become heroic, larger-than-life, almost epic figures, challenging fate itself in order to realize themselves, like Achilles on the field before Troy. And oddly, while not really acting heroically, they become heroes nonetheless, remembered by the revolutionaries who benefit from their sacrifice.

As a side remark, let’s note that Peckinpah was raised in a conservative Calvinist, Presbyterian household. But, like Herman Melville a century before, he was a Calvinist who could not believe in God.  In such a universe, some are damned, but no one is saved. We only realize our destiny by not having any. The Bunch destroy any future for themselves and thus, paradoxically, achieve their destiny. The fault is not in our stars, but in ourselves.

A Soldier’s Story

The Wild Bunch is set in the last months of the Huerta dictatorship (spring of 1914), a phase of the series of rebellions, coups d’état, and civil wars known collectively as the Mexican Revolution. [2] Officially, this revolution began with the fall of the Diaz regime and ended with the success of the Institutional Revolutionary Party (PRI), but rebellions and bloodshed had already permeated the Diaz regime and continued for a few years after the PRI came to power. In the official period of the revolution, casualties numbered approximately 1,000,000. When one discovers that the Federal Army had only about 200,000 men at any time, and that rebel armies counted their soldiers in the hundreds, one realizes that the majority of these casualties had to be non-combatants. Not surprisingly, the Federal Army, and some of the rebels, pursued a policy (advocated by our current US president) of family reprisal – once a rebel or a terrorist is identified but cannot be captured or killed, his family is wiped out instead. Whole villages were massacred. Dozens of bodies would be tossed into a ditch and left to rot.

As I’ve said elsewhere, I’ve nothing against thought-experiments that raise ethical questions, only those that limit the possible answers unjustifiably. So let us now imagine ourselves in the mind of a young Federal soldier, whose commandant has ordered him to shoot a family composed of a grandmother, a sister, a brother – the latter having atrophied legs due to polio – and the sister’s six-year-old daughter. The relevant question here is not whether or not he will do this. He will. The question is why.

This is a kind of question that rarely, if ever, appears in ethical philosophy in the Analytic tradition. It is, however, taken quite seriously in Continental philosophy. There’s a good, if uncomfortable, reason for this: Continental thinkers write in a Europe that survived the devastation of World War II, living among both the survivors of the Holocaust and its perpetrators. Analytic philosophers decided not to bother raising too many questions concerning Nazism or the Holocaust. Indeed, in the US, the general academic approach to events in Germany in the 1930s and ’40s has been that they constituted an aberration. Thus, even in studies of social psychology, the Nazi participants in the Holocaust are treated as examples of some sort of abnormality, or as test cases in extremities of assumed psychological, social, or moral norms. This is utter nonsense. If that were true, such slaughters would have been confined to Europe. And yet very similar things went on in the Pacific Theater: during the Japanese invasion of China, the number of casualties is estimated as running into the tens of millions.

There were a million casualties resulting from the Turkish mass killing of the Armenians, long before the Holocaust. There were several million victims of the Khmer Rouge in Cambodia, decades after the Holocaust. Far from being some psycho-social aberration, human beings have a facility for organized cruelty and mass slaughter.

At any rate, assuming that our young Mexican soldier is not suffering from some abnormal psychology, what normative thoughts might be going through his mind as he is about to pull the trigger on the family lined up before him?

For the sake of argument, we’ll allow that he has moral intuitions, however he got them, that tell him that killing innocent people is simply wrong. But some process of thought leads him to judge otherwise; to act despite his intuition. Note that we are not engaging in psychology here and need not reflect on motivations beyond the ethical explanations he gives for his own behavior.

While not a complete listing, here are some probable thoughts he might be able to relay to us in such an explanation:

For the good of the country I joined the Army, and must obey the orders of my commanding officer.

I would be broke without the Army, and they pay me to obey such orders.

These people are Yaqui Indians, and as such are sub-human, so strictures against killing innocents do not apply.

I enjoy killing, and the current insurrection gives me a chance to do so legally.

So far, all that is explained is why the soldier either thinks personal circumstances impel him to commit the massacre or believes doing so is allowable within the context. But here are some judgments that make the matter a bit more complicated:

This is the family of a rebel, who must be taught a lesson.

Anyone contemplating rebellion must be shown where it will lead.

This family could become rebels later on. They must be stopped before that can happen.

All enemies of General Huerta/ the State/ Mexico (etc.) must be killed.

Must, must, must. One of the ethical problems of violence is that there exist a great many reasons for it within certain circumstances, although precisely which circumstances differs considerably from culture to culture, social group to social group, and generation to generation. In fact, there has never been a politically developed society for which this has not been the case. Most obviously, we find discussions among Christians and the inheritors of Christian culture concerning what would constitute a “just war” (the nearest analogue in Islamic cultures being the debate over “jihad”). But we need not get into the specifics of that. All states, regardless of religion, hold to two basic principles concerning the use of violence in the interests of the State: first, obviously, the right to maintain the State against external opposition; but also, second, the right of the State to use lethal force against perceived internal threats to the peace and stability of the community. We would like to believe that our liberal heritage has reduced or eliminated adherence to the latter principle, but we are lying to ourselves. Capital punishment is legal in the United States, and 31 states still employ it. The basic theory underlying it is quite clear: forget revenge, or protection of the community, or questions of the convicted person’s responsibility – the State reserves the right to end a life deemed too troublesome to continue.

But any conception of necessary violence seriously complicates ethical consideration of violence per se. Because such conceptions are found in every culture and permeate every society – by way of teaching, the arts, laws, political debates, propaganda during wartime, etc. – it is likely that each of us has, somewhere in the back of our minds, some idea, some species of reasoning, some set of acceptable responses, cued to the notion that some circumstance, somewhere, at some time, justifies the use of force, even lethal force. Indeed, even committed pacifists have to undertake a great deal of soul-searching and study to recognize these reasons and uproot them, and they are unlikely ever to get them all.

Many more simply will never bother to make the effort. They are either persuaded by the arguments for necessary force, or they have been so indoctrinated into such an idea that they simply take it for granted.

Because there are several and diverse conceptions and principles of necessary violence floating around in different cultures, one can expect that this indoctrination occurs to various degrees and by various means. One problem this creates is that regardless of its origin, a given conception or principle can be extended by any given individual. So today I might believe violence is only necessary when someone attempts to rape my spouse, but tomorrow I might think it necessary if someone looks at my spouse the wrong way.

The wide variance in possible indoctrination also means a wide variety in the way such a principle can be recognized or articulated. This is especially problematic given differences in education among those of differing social classes. So among some, the indoctrination occurs largely through friends and families, and may be articulated only in the crude assertion of right – “I just had to beat her!” “I couldn’t let him disrespect me!” – while those who go through schools may express this indoctrination through well thought-out, one might say philosophical, reasoning: “Of a just war, Aquinas says…” or “Nietzsche remarks of the Ubermensch…” and so on. But we need to avoid letting such expressions, either crude or sophisticated, distract us from what is really going on here. The idea that some violence is necessary has become part of the thought process of the individual. Consequently, when the relevant presumed – and prepared-for – circumstances arise, not only will violence be enacted, but the perpetrator will have no sense of transgression in doing so. As far as he is concerned, he is not doing anything wrong, even should the violent act appear to contradict some other moral interdiction. The necessary violence has become a moral intuition and overrides other concerns. “I shouldn’t kill an innocent, but in this case, I must.”

Again, this is not psychology. After more than a century of pacifist rhetoric and institutionalized efforts to find non-violent means of “conflict resolution,” we want to say that we can take this soldier and “cure” him of his violent instincts. But what general wants us to do that? What prosecutor, seeking the death penalty, wishes that of a juror?

The rhetoric of pacifism and the institutionalization of reasoning for non-violence is a good thing, don’t misunderstand me. But don’t let it lead us to misunderstand ourselves. There is nothing psychologically aberrant in the reasoning that leads people to justify violence, and in all societies such reasoning is inevitable. It’s part of our cultural identity.  Strangely enough, it actually strengthens our social ties, as yet another deep point of agreement between us.

Being Violent

I’m certain that, given the present intellectual climate, some readers will insist that what we have been discussing is psychology; that Evolutionary Psychology or genetics can explain this; that neuroscience can pin-point the exact location in the brain for it; that some form of psychiatry can cure us. All of which may be true (assuming that our current culture holds values closer to “the truth” than other cultures, which I doubt), but is nonetheless irrelevant. It should be clear that I’m trying to engage in a form of social ontology or what might be called historically-contingent ontology. And ethics really begins in ontology, as Aristotle understood.  We are social animals, not simply by some ethnological observation, but in the very core of our being. We just have a difficult time getting along with each other.

It’s possible to change. Beating other people up is just another way to bang our own heads against the wall; this can be recognized, and changed, so the situation isn’t hopeless. As a Buddhist, I accept the violence of my nature, but have certain means of reducing it, limiting it, and letting it go. There are other paths to that. But they can only be followed by individuals. And only individuals can effect change in their communities.

This means we have to accept the possibility that human ontology is not an a-temporal absolute. I know there is a long bias against that; but if we are stuck with what we have always been, we are doomed.

Nonetheless, the struggle to change a society takes many years, even generations, and it is never complete. Humans are an indefinitely diverse species, with a remarkable capacity to find excuses for the most execrable and self-destructive behavior. There may come a time that humans no longer have or seek justifications for killing each other; but historically, the only universal claim we can make about violence is that we are violent by virtue of being human, and because we live in human society.



Simulation argument as gambling logic

I have submitted an essay to the Electric Agora, in which I critique the infamous Simulation Argument – that we are actually simulations running in a program designed by post-humans in the future – made in its strictest form by Nick Bostrom of Oxford University. Since Bostrom’s argument deploys probability logic, and my argument rests on traditional logic, I admitted to the editors that I could be on shaky ground. However, I point out in the essay that if we adopt the probability logic of the claims Bostrom makes, we are left with certain absurdities; therefore, Bostrom’s argument collapses into universal claims that can be criticized in traditional logic. At any rate, if the Electric Agora doesn’t post the essay, I’ll put it up here; if they do, I’ll try to reblog it (although reblogging has been a chancy effort ever since WordPress updated its systems last year).


Towards the end of that essay, I considered how the Simulation Argument is used rhetorically to advocate continuing advanced research in computer technology, in hope that we will someday achieve a post-human evolution. The choice with which we are presented is pretty strict, and a little threatening: either we continue such research, advancing toward post-humanity, or we are doomed. This sounded to me an awful lot like Pascal’s Gambit – believe in God and live a good life, even if there is no God; or do otherwise, live miserably, and burn in hell if there is a God. After submitting the essay I continued to think on that resemblance, and concluded that the Simulation Argument is very much like Pascal’s Gambit, and that its rhetorical use in support of advancing computer research – much like Pascal’s use of his Gambit to persuade non-believers to religion – was actually functioning as a kind of gambling. This is all the more true of the Simulation Argument, since continued research into computer technology involves considerable expenditure of monies in both the private and the public sector, with post-human evolution being the offered pay-off to be won.


I then realized that there is a kind of reasoning that has not been fully described with any precision (although there have been efforts of a kind moving in this direction) which we will here call Gambling Logic. (There is such a field as Gambling Mathematics, but this is simply a mathematical niche in game theory.)


Gambling Logic can be found in the intersection of probability theory, game theory, decision theory, and psychology. The psychology component is the most problematic, and perhaps the reason why Gambling Logic has not received proper study. While psychology as a field has developed certain statistical models to predict what percentage of a given population will make certain decisions given certain choices (say, in marketing research), the full import of psychology in the practice of gambling is difficult to measure accurately, since it is multifaceted. Psychology in Gambling Logic must account not only for the psychology of the other players in the game besides the target subject, but for the psychology of the target subject him/herself, and for the way the target subject reads the psychology of the other players and responds to her/his own responses in order to adapt to winning or losing. That is because a gamble is not simply an investment risked on a possible/probable outcome: the outcome either rewards the investment with additional wealth, or punishes it by taking it away without reward. But we are not merely accountants; the profit or loss in a true gamble is responded to emotionally, not mathematically. Further, knowing this ahead of the gamble, the hopeful expectation of reward, and anxiety over the possibility of loss, color our choices. In a game with more than one player, the successful gambler knows this about the other players, and knows how to play on their emotions; and knows it about him/her self, and knows when to quit.


Pascal’s Gambit is considered an important moment in the development of decision theory. But Pascal understood that he wasn’t simply addressing our understanding of the probability of success or failure in making the decision between the two offered choices. He well understood that in the post-Reformation era in which he (a Catholic) was writing – an era which saw the rise of personality-less Deism, and some suggestion of atheism as well – many in his audience could be torn with anxiety over the possibility that Christianity was groundless, that there was no ground for any belief or for any moral behavior. He thus reduced the possible choices his audience confronted to the two, and suggested that one choice provided a less anxious life, even should it prove there were no God (but, hey, if there is and you believe, you get to Paradise!).


In other words, any argument like Pascal’s Gambit functions rhetorically as Gambling Logic, because it operates on the psychology of its audience, promising them a stress-free future with one choice (reward), or fearful doom with the other (punishment).
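The skeleton of such a gamble can be put in miniature as an expected-value computation. Here is a toy sketch in Python; the probability and payoff numbers are my own illustrative assumptions, not anything found in Pascal:

```python
# Toy expected-value sketch of a Pascal-style gamble.
# All numbers here are illustrative assumptions.

def expected_value(p_god, payoff_if_god, payoff_if_not):
    """Expected payoff of a choice, given the probability that God exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

p = 0.001  # even a tiny probability will do when one payoff is unbounded

# Believing: infinite reward if God exists, a minor cost of piety if not.
believe = expected_value(p, float("inf"), -1)

# Disbelieving: infinite punishment if God exists, a minor worldly gain if not.
disbelieve = expected_value(p, float("-inf"), 1)

# The unbounded payoff swamps every finite consideration.
assert believe > disbelieve
```

The point of the sketch is that once one payoff is made unbounded – Paradise, or post-human utopia – the probabilities hardly matter; the arithmetic is rigged in advance, which is what makes such a gamble rhetoric rather than analysis.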


So recognizing the Simulation Argument as a gamble, let’s look at the Gambling Logic at work in it.


Bostrom himself introduces it as resolving the following proposed trilemma:


1. “The fraction of human-level civilizations that reach a post-human stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or

2. “The fraction of post-human civilizations that are interested in running ancestor-simulations is very close to zero”, or

3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”


According to Bostrom himself, at least one of these claims must be true.

It should be noted that this trilemma actually collapses into a simple dilemma, since the second proposition is so obviously untrue: in order to reach post-human status, our descendants will have to run ancestor-simulations simply in the course of developing the capacity for them.


Further, the first proposition is actually considered so unlikely, it converts to its opposite in this manner (from my essay): “However, given the rapid advances in computer technology continuing unabated in the future, the probability of ‘the probability of humans surviving to evolve into a post-human civilization with world-simulating capabilities is quite low’ is itself low. The probability of humans evolving into a post-human civilization with world-simulating capabilities is thus high.”
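The bookkeeping behind that step can be sketched from the fraction formula in Bostrom’s paper – as I paraphrase it, the fraction of observers with human-type experiences who are simulated is f_sim = (f_p × N) / (f_p × N + 1), where f_p is the fraction of civilizations reaching simulation-capable post-humanity and N is the average number of ancestor-simulations such a civilization runs. The sample numbers below are my own assumptions for illustration:

```python
# Sketch of the arithmetic behind Bostrom's trilemma (my paraphrase of
# the fraction formula in his paper; the sample numbers are assumptions).

def fraction_simulated(f_p, n_sims):
    """Fraction of human-type observers who are simulated."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# If post-human capability is likely (f_p not tiny) and simulations are
# cheap to run in bulk (N large), the fraction approaches one:
assert fraction_simulated(0.5, 1_000_000) > 0.999

# Only if f_p is vanishingly small (the first horn) does it stay low:
assert fraction_simulated(1e-9, 1_000_000) < 0.001
```

This is why converting the first proposition does all the work: once f_p is taken to be high and N large, the formula leaves almost no room for unsimulated observers.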


Now at this point, we merely have the probabilistic argument that we are currently living as simulations. However, once the argument gets deployed rhetorically, what really happens to the first proposition is this:


If you bet on the first proposition (presumably by diverting funds from computer research into other causes with little hope of post-human evolution), your only pay-off will be extinction.


If you bet against the first proposition (convert it to its opposite and bet on that), you may or may not be betting on the third proposition, but the pay-off will be the same whether we are simulations or not, namely evolution into post-humanity.


If you bet on the third proposition, then you stand at least a 50% chance of earning that same pay-off, but only by placing your bet by financing further computer research that could lead to evolution into post-humanity.


So even though the argument seems to be using the conversion of the first proposition in support of a gamble on the third proposition, in fact the third proposition supports betting against the first proposition (and on its conversion instead).


What is the psychology this gamble plays on? I’ll just mention the two most obvious sources of anxiety and hope. The anxiety of course concerns the possibility of human extinction: most people who have children would certainly find that their anxiety over the possible future they leave their children can be allayed somewhat by betting on computer research and evolution to post-humanity. And all who share a faith in a possible technological utopia in the future will readily be persuaded to take the same gamble.


There is a more popular recent variation on the Simulation Gamble we should note – namely that the programmers of the simulation we are living in are not our future post-human descendants, but super-intelligent aliens living on another world, possibly in another universe. But while this is rhetorically deployed for the same purpose as the original argument, to further funding (and faith) in technological research, it should be noted that the gamble is actually rather weaker. The ultimate pay-off is not the same, but rather appears to be communion with our programmers. Well, not so enticing as a post-human utopia, surely! Further, that there may be such super-intelligent aliens in our universe is not much of a probability; that they exist in a separate universe is not even a probability, it is mere possibility, suggested by certain mathematical models. The reason for the popularity of this gamble seems to arise from an ancient desire to believe in gods or angels, or just some Higher Intelligence capable of ordering our own existence (and redeeming all of our mistakes).


It might sound as if, in critiquing the Simulation Gamble, I am attacking research into advances in computer and related technology. Not only is that not the case, but it would be irrelevant. In the current economic situation, we are certainly going to continue such research, regardless of any possible post-human evolution or super-aliens. Indeed, we will continue such research even if it never contributes to post-human evolution, and post-human evolution never happens. Which means of course that the Simulation Gamble is itself utterly irrelevant to the choice of whether to finance such research or not. I’m sure that some, perhaps many, engaged in such research see themselves as contributing to post-human evolution, but that certainly isn’t what wins grants for research. People want new toys; that is a stronger motivation than any hope for utopia.


So the real function of the Simulation Gamble appears to be ideological: it’s but one more reason to have faith in a technological utopia in the future; one more reason to believe that science is about ‘changing our lives’ (indeed, changing ourselves) for the better. It is a kind of pep-talk for the true believers in a certain perspective on the sciences. But perhaps not a healthy perspective; after all, it includes a fear that, should science or technology cease to advance, the world crumbles and extinction waits.


I believe in science and technology pragmatically – when it works it works, when it doesn’t, it don’t. It’s not simply that I don’t buy the possibility of a post-human evolution (evolution takes millions of years, remember), but I don’t buy our imminent extinction either. The human species will continue to bumble along as it has for the past million years. If things get worse – and I do believe they will – this won’t end the species, but only put to rest its arrogant claim to be master of the world. We’re just another animal species after all. Perhaps the cleverest of the lot, but also frequently the most foolish. We are likely to cut off our nose to spite our face – but the odd thing is our resilience in the face of our own mistakes. Noseless, we will still continue breathing, for better or worse.



Bostrom’s original argument:

The legacy of Hegel

I found this essay on my computer, written some time ago, and decided – since I haven’t been posting here for a while – that I would go ahead and put it up, although it is not completely polished. Yes, it’s about Hegel again – don’t get too annoyed! – I hope not to write on this topic again for some time. But I do consider here some issues that extend beyond the immediate topic. So –


I like to describe Hegel as the cranky uncle one invites to Thanksgiving dinner, suffering through his endless ramblings because there is an inheritance worth suffering for.


Hegel’s language is well nigh impossible. He understands the way grammar shapes our thinking before any training in logic, and uses – often abuses – grammar, not only to persuade or convince, but to shape his readers’ responses, not only to his text, but to the world. After studying the Phenomenology of Mind, one can’t help but think dialectically for some time, whether one approves of Hegel or not. One actually has to find a way to ‘decompress’ and slowly withdraw, as from a drug. (Generally by reading completely dissimilar texts, like a good comic novel, or raunchy verses about sex.)


How did Hegel become so popular, given his difficulty? First of all, he answered certain problems raised in the wake of, first, Fichte’s near-solipsistic (but highly convincing) epistemology, and then Schelling’s “philosophy of nature” (which had achieved considerable popularity among intellectuals by the time Hegel started getting noticed). But there was also the fact that he appears to have been an excellent and fascinating teacher at the University of Berlin. And we can see in his later lectures, which come to us largely through student notes, or student editing of Hegel’s notes, that, while the language remains difficult, there is an undeniable charm in his presentation. This raises questions about how important teachers are in philosophy – do we forget that Plato was Socrates’ student, and what that must have meant to him?


Finally: Hegel is the first major philosopher who believed that knowledge, being partly the result of history and partly the result of social conditioning *, was in fact not dependent on individual will or insight, so much as on being in the right place at the right time – the Idea, remember, is the protagonist of the Dialectic’s narrative. The importance of the individual is that there is no narrative without the individual’s experience, no realization of the Idea without the individual’s achievement of knowledge.


However, despite this insistence on individual experience, Hegel is a recognizably ‘totalistic’ thinker: everything will be brought together eventually – our philosophy, our science, our religion, our politics, etc., will ultimately be found to be variant expressions of the same inner logic of human reasoning and human aspiration.


Even after Pragmatists abandoned Hegel – exactly because of this totalistic reading of history and experience – most of them recognized that Hegel had raised an important issue in this insistence – namely that there is a tendency for us to understand our cultures in a fashion that seemingly connects the various differences in experiences and ways of knowing, so that we feel, to speak metaphorically, that we are swimming in the same stream as other members of our communities, largely in the same direction. Even the later John Dewey, who was perhaps the most directly critical of Hegel’s totalism, still strongly believed that philosophy can tell the story of how culture comes together – why, e.g., there can be a place for both science and the arts as variant explorations of the world around us. We see this culminate, somewhat, in Quine’s Web of Belief: some nodes in the web can change rapidly, others only gradually; but the web as a whole remains intact, so that what we believe not only has logical and evidentiary support, but also ‘hangs together’ – any one belief ‘makes sense’ in relation to our other beliefs.


(Notably, when British Idealism fell apart, its rebellious inheritors, e.g., Russell and Ayer, went in the other direction, declaring that philosophy really had no need to explain anything in our culture other than itself and scientific theory.)


If we accept that knowledge forms a totalistic whole, we really are locked into Hegel’s dialectic, no matter how we argue otherwise.


Please note the opening clause “If we accept that knowledge forms a totalistic whole” – what follows should be the question: is that what we are still doing, not only in philosophy but in other fields of research? I would suggest that while some of us have learned to do without it, all too many are still trying to find the magic key that opens all doors; and when they attempt that, or argue for it, Hegel’s net closes over them – whether they’ve read Hegel or not. And that’s what makes him still worth engaging: while he is largely forgotten, the mode of thought he recognizes and describes is still very much among us.


And this is precisely why I think writing about him and engaging his thought is so important. The hope that some philosophical system, or some science, or some political system will explain all and cure all is a failed hope, and there is no greater exposition of such hope than in the text of Hegel. The Dialectic is one of the great narrative structures of thought, and may indeed be a pretty good analog to the way in which we think our way through to knowledge, especially in the social sphere; it really is rather a persuasive reading of history, or at least of the history of ideas. But it cannot accommodate differences that resist resolution because they do not share the same idea: for instance, the differing assumptions underlying physics as opposed to those of biology; or the differing strategies involved in writing different styles of novel or poetry; or the political problems of quite different, even oppositional, cultures having to learn to live in the same space, even within the same city.


If Hegel is used to address possible futures, then of course such opposed cultures need to negate each other to find the appropriate resolution of their Dialectic. That seemed to work with the Civil War; but maybe not really. It certainly didn’t work in WWI – which is what led to Dewey finally rejecting Hegel, proposing instead that only a democratic society willing to engage in open-ended social experimentation and self-realization could really flourish, allowing difference itself to flourish.


Finally, a totalistic narrative of one’s life will seem to make sense, and the Dialectic can be used to help it make sense. And when we tell our life-stories, whether aware of the Dialectic or not, this is to some extent what we are doing.


But the fact is, we must remember that – as Hume noted, and as reinforced in the Eastern traditions – the ‘self’ is a convenient fiction; which means the story we tell about it is also fiction. On close examination, things don’t add up, they don’t hang together. One does everything one is supposed to do to get a professional degree, and then the economy takes a downturn, and there are no jobs. One does everything expected of a good son or daughter, only to be abused. One cares for one’s health and lives a good life – and some unpredictable illness strikes one down at an early age. I could go on – and not all of it is disappointment – but the point is that, while I know people who have exactly perfect stories to tell about successful lives, I also know others for whom living has proven so disjointed, it’s impossible to find the Idea that the Dialectic is supposed to reveal.


Yet the effort continues. We want to be whole as persons, we want to belong to a whole society. We want to know the story, of how we got here, why we belong here, and where all this is going to.


So in a previous essay **, I have given (I hope) a pretty accurate sketch of the Dialectic in outline – and why it might be useful, at least in the social sciences (it is really in Hegel that we first get a strong explication of the manner in which knowledge is socially conditioned). And the notion that stories have a logical structure – and thus effectively form arguments – I think intriguing and important. ***


But ultimately the Dialectic cannot explain us. The mind is too full of jumble, and our lives too full of missteps on what might better be considered a ‘drunken walk’ than a march toward inevitable progress.


So why write about it? Because although in America Hegel is now largely forgotten, the Dialectic keeps coming back; all too many still want it – and I don’t mean just the Continental tradition. I mean we are surrounded by those who wish for some Theory of Everything, not only in physics, but in economics and politics, social theory, etc. And when we try to get that, we end up engaging the dialectical mode of thought, even if we have never read Hegel. He just happened to be able to see it in the thinkers of Modernity, beginning with Descartes and Luther. But we are still Moderns. And when we want to make the big break with the past and still read it as a story of progress leading to us; or when we think we’ve gotten ‘beyond’ the arguments of the day to achieve resolution of differences, and attain certain knowledge – then we will inevitably engage the Dialectic. Because as soon as one wants to know everything, explain everything, finally succeed in the ‘quest for certainty’ (that Dewey finally dismissed as a pipe-dream), the Dialectic raises its enchanting head, replacing the Will of God that was lost with the arrival of Modernity.


That is why (regardless of his beliefs, which are by no means certain) Hegel’s having earned his doctorate in theology becomes important. Because as a prophet of Modernity, he recognized that the old religious narratives could only be preserved by way of sublation into a new narrative of the arrival of human mind replacing that divine will.


In a sense that is beautiful – the Phenomenology is in some way the story of humankind achieving divinity in and through itself. But in another way, it is fraught with dangers – have we Moderns freed ourselves from the tyranny of Heaven only to surrender ourselves to the tyranny of our own arrogance? Only time will tell.




* Much of what Hegel writes of social conditioning is actually implicit in Hume’s Conventionalism; Hegel systematizes it and makes it a cornerstone of his philosophy. (Kant, to the contrary, always assumes a purely rational individual ego; which is exactly the problem that Fichte had latched onto and reduced to ashes by trying to get to the root of human knowledge in desire.)



** Full version:


*** I’ll emphasize this, because it is the single most important lesson I learned from Hegel – narrative is a logical structure, a story forms a logical argument, a kind of induction of particularities leading into thematic conclusions. I will hopefully return to this in a later essay.


Misadventures in the dialectic

Or, a nasty thing happened on the way to the forum

Originally published at:

Thus precisely in labour where there seemed to be merely some outsider’s mind and ideas involved, the bondsman becomes aware, through this re-discovery of himself by himself, of having and being a ‘mind of his own’. [1]

When Hegel, in the Phenomenology of Mind, makes an abrupt transition from epistemology per se (how we know about anything at all) into an historicized social epistemology (how knowledge is socially and historically conditioned), he begins at an odd point in history, with an analysis of the relationship between lords and bondsmen; or, as it is better known, the Master-Slave dialectic.  What the Master learns in this dialectic is that he not only commands things, but does so through the mediation of commanding his slaves.  It is the Slave, however, who turns out to be the real protagonist in this narrative – what he learns is the necessity of living for others, and through that, his own independence from “things”; that is, from the material.

In a series of important lectures in the 1930’s, the Master-Slave Dialectic received an interpretation by the Russian emigre to France, Alexandre Kojeve, which had enormous impact on French intellectual history, especially on Existentialist thinkers like Sartre, as well as on the development of Lacanian psychoanalysis.  [2]  Although written more than ten years after Kojeve’s lectures, Albert Camus’ The Rebel (1951), a text widely popular among those who have never even heard of Kojeve, is in fact a response to Kojeve.

Within Existentialism itself (and in French philosophy generally), an ongoing debate over the Marxist implications of Kojeve’s lectures emerged.  Indeed, the Marxian narrative of the historical development of a Materialist Dialectic arriving at Modern capitalism (in preparation of a future communism),  depends on the Master-Slave Dialectic, because it assumes that the economy of the Roman Empire was principally a “slave economy” [3]; that is, slaves provided the primary means of production, as well as the central market (in the exchange of slaves) and the essential social structure, of the empire – there were slave owners, there were slaves, and there were cast-off slaves who, scrounging for work where they could find it, formed a nascent proletariat.

A reasonable interpretation of the Phenomenology (given Hegel’s own historical interests and biases) suggests that Hegel’s writing here arose as a meditation on the introduction of Christianity into the culture of Rome [4].  When Hegel wrote this, scholars believed – as they did until quite recently – that Christianity spread through the Empire by appealing to the poor; i.e., to slaves and former slaves [5].  Recent scholarship, however, has proven this untrue, and it appears that Christianity’s greatest appeal in Rome was to the middle classes – businessmen, lawyers, tradesmen [6].  (Only a middle class could afford the charitable social work that Christians engaged in.)  This does not really threaten Hegel (who, after all, is talking about ideas, and in a most general way), but it doomed Marxist historiography.

Evidence has been piling up that the economy of the Roman Empire was not primarily a slave economy, but a sophisticated capitalist one, based on international trade [7].  Even without the accumulating evidence, one realizes that it couldn’t have been otherwise.  The Roman Empire not only conducted trade with client states in the Mediterranean, but with co-existing empires over which they had no direct control, including those in India and China, as well as with cultures in Africa, which they had no desire to control.  Such trade could not be centered around a market for slaves – beyond precious metals or mere commodity exchange, there had to be negotiable systems of exchange of wealth with symbolic representation of equivalent value, namely money.  And where there is money, there is capitalism [8].

However, it is with some degree of irony that we can see that long before the archaeological evidence was unearthed and pieced together showing that Rome was in fact a capitalist society, there actually existed documentary evidence of this (dating from the time of Nero), which has been available to literati since the 17th century.  I don’t mean accounting records, some Roman economist’s commentary or remarks made by some court historian.  I’m referring to a work of prose fiction; indeed, one of the funniest, most incisive, and, surprisingly, most realistic texts ever written:  The Satyricon, attributed to Petronius Arbiter [9].

We don’t really know who wrote Satyricon.  We don’t even know the original shape of it.  All we have are fragments, preserved in monastic libraries, until the 17th century, when secular book collectors got their hands on it thanks to Protestant looting of those libraries [10].   Some evidence suggests that the fragments are mere slivers from a much longer work, but internal evidence from the text itself shows a remarkable thematic consistency, suggesting that the fragments we do have at least form a narrative sequence within any larger whole. [11]

Satyricon is a wild ride through the underbelly of Roman society of its time.  The narrative is what later would be called a picaresque, a disjointed series of adventures of social outcasts, whose main interests in life are sex (primarily homoerotic) and food and finding some way to acquire the capital with which to procure them.   The narrator and protagonist of the story, Encolpius, has just dropped out of the Roman equivalent of an undergraduate course in literature, in order to compete with a former lover (Ascyltus) for the affections of a young boy, Giton. [12]  Being an educated lowlife, Encolpius isn’t interested in finding suitable employment, but instead tries attaching himself to well-to-do patrons.  This leads to bizarre sexual experiences, meetings with failed poets, tasteless feasts put on by Roman tradesmen, fake religious rites (always good for initiating orgies), and capture by pirates at sea.  As the fragments close, the story doesn’t appear to be going well, as Eumolpus, an aging poet and tutor to whom Encolpius has attached himself, fails to realize an inheritance, which effectively condemns him to death among those who had been supporting him.

The most famous sequence of the narrative is Encolpius’ attendance at a banquet thrown by a successful tradesman, Trimalchio.  The sequence is a fairly complete, unified set-piece.  We first find Trimalchio at a recreation center, playing ball.  When he has to urinate, a slave rushes up with a bucket so that Trimalchio can relieve himself while still playing.  Meanwhile, another slave counts the balls that Trimalchio recurrently loses in play (to recover later), so that his master can toss out a new ball with every flub, as if he hadn’t lost any.   The tone is thus set for one of the most outrageous displays of conspicuous consumption – and conspicuous waste – in the history of Western literature.

At length some slaves came in who spread upon the couches some coverlets upon which were embroidered nets and hunters stalking their game with boar-spears, and all the paraphernalia of the chase.  We knew not what to look for next, until a hideous uproar commenced, just outside the dining-room door, and some Spartan hounds commenced to run around the table all of a sudden.  A tray followed them, upon which was served a wild boar of immense size, wearing a liberty cap upon its head, and from its tusks hung two little baskets of woven palm fibre, one of which contained Syrian dates, the other, Theban.  Around it hung little suckling pigs made from pastry, signifying that this was a brood-sow with her pigs at suck.  It turned out that these were souvenirs intended to be taken home.  When it came to carving the boar, our old friend Carver, who had carved the capons, did not appear, but in his place a great bearded giant, with bands around his legs, and wearing a short hunting cape in which a design was woven.  Drawing his hunting-knife, he plunged it fiercely into the boar’s side, and some thrushes flew out of the gash.  Fowlers, ready with their rods, caught them in a moment, as they fluttered around the room and Trimalchio ordered one to each guest, remarking, “Notice what fine acorns this forest-bred boar fed on,” and as he spoke, some slaves removed the little baskets from the tusks and divided the Syrian and Theban dates equally among the diners. [13]

This would seem to support Marxian analysis of the culture of a slave-based economy; but there’s a problem with this.  Trimalchio’s biography has to be pieced together from his own remarks, those of his guests, as well as portraiture found on the walls of the hall leading to the banquet room.   But it amounts to this:  Trimalchio had been born a slave to a wealthy merchant.  He had proven so good at his chores that he rose to the position of steward of the estate of the merchant, who provided him with an allowance.  This he saved and invested until he could buy his freedom and position himself as inheritor of the merchant’s business [14].  Trimalchio has since spent his life acquiring greater wealth and rubbing it in the noses of failed businessmen whom he turns into his personal court of sycophants.

The banquet seems to be winding down, probably intended to end at dawn [15] (like Plato’s Symposium, which it somewhat parodies), when Trimalchio (always one to sing his own praises) reveals the intended epitaph on his tomb:

Here Rests G Pompeius Trimalchio

Freedman Of Maecenas

Decreed Augustal, Sevir In His Absence

He Could Have Been A Member Of

 Every Decuria Of Rome But Would Not

Conscientious Brave Loyal

He Grew Rich From Little And Left

Thirty Million Sesterces Behind

He Never Heard A Philosopher

 Farewell Trimalchio

 Farewell Passerby [16]

Well, that’s his story, and he’s sticking with it, even after death: a dash of truth in a swill of self-admiration.

After a violent argument with his wife (formerly a prostitute) over his bisexual promiscuity, Trimalchio then returns to this theme, by effectively staging his own funeral; whereat he eulogizes himself in the crudest manner possible, boasting of his use of sex, investments, and shady business practices to build a financial empire.  “So your humble servant, who was a frog, is now a king.”  [17]

So much for the slave coming to self-consciousness by realizing the importance of working for others!

The Satyricon is the rotten apple in the bushel, not only of literary history, but of the literature of history.  Besides being unabashedly pornographic, unrepentantly cynical in the nastiest way, and thoroughly disrespectful of social manners while dismissive of any aspiration toward decency and good fellowship, the Satyricon paints an unnervingly realistic portrait of the people of ancient Rome and of their social environment.   It’s not a pretty picture, and it fails to conform to any of the expectations into which we have been long indoctrinated, by traditional historical narratives or the works of art that disseminated these.  Rome was not just monumental architecture and statues in the forum.  It was an ugly, over-populated metropolis, with tenement slums, a criminal underworld, thriving markets riddled with unethical business practices.  Alcoholism and drug abuse were rampant, and the working classes found their greatest distraction in public displays of cruelty, in the arena.   But more importantly, the people, as we find them in the Satyricon, are completely like ourselves.  We’ve met these people, we see them all around us.  Donald Trump is just a variant Trimalchio.  And who hasn’t encountered a pedantic professor pummeling students with bloated jargon that even he doesn’t understand?   I myself knew someone rather like a straight Encolpius in college; a bright mathematics student, he went through seven different sexual relationships in one semester (his general attitude toward women was best expressed in his parody of a classic song: “nothing could be finah than to wake in some vagina in the mo-o-orning…”).  There was never a day I met him when he wasn’t drunk or hung-over.

Moral improvement, political progress, aspirations toward a greater enlightenment and a brighter future; fables we tell ourselves to bring order to our lives and provide our children with hope.  To all such pretense the Satyricon raises a middle finger (as occasionally do its characters in the text).

What has really changed in human nature since Petronius?  We claim to know more about the world, but apparently we still do not know ourselves.  For two thousand years, Europe was able to mask this lack of self-recognition with a powerful ideological machine, supported by a monumental institutional structure with intimidating influence among political leaders.  As this began to fall apart, scientists, philosophers, poets and political revolutionaries sought to develop a similarly powerful ideology with an equal ability to suppress self-recognition.  But these are only stories, after all – told in mathematics sometimes, more often in heated rhetoric, but all just fables that we hope are true.  The only real change Modernity brought us has been new technology.   And all the new technology has accomplished is providing new commodities for thriving markets riddled with unethical business practices and war-mongers.

Marx is dead, but Hegel survives, as one of the grand fables of Modernity’s explanations for why we have any ideology at all and why we feel satisfied with our supposed progress [18].  Reading Hegel helps us to understand how we wish to think of ourselves, and of the history that we believe created us.  But the Satyricon shows us people as they are, at least in any complex, mercantile culture that we care to call a civilization.  Not all people, but enough that we should be more aware of – see with greater clarity – our own social environment, which hasn’t really improved so much in three thousand years.


[1] Hegel, The Phenomenology of Mind; B. Self-consciousness, IV. The true nature of Self-Certainty, A. Independence and Dependence of Self-Consciousness: Lordship and Bondage.  J. B. Baillie translation, 1910.

[2] Kojeve, who served in the French government after WWII, always claimed to be a Marxist, even a Stalinist, but slathered insults on the Soviet Union, and remained friends to the end with conservative political philosopher (and former student of Heidegger’s) Leo Strauss, whose best known student is Allan Bloom.  Bloom was the editor of the English translation of Kojeve’s lectures, 1969:   (Camus’ response to Kojeve, The Rebel, is also online:   Bloom’s best known student is Francis Fukuyama, who acted as de-facto philosophic counsel to the George W. Bush administration; his best known text: The End of History and the Last Man, 1989; essay prospectus:

[3] See:

[4] The Master-Slave Dialectic actually precedes a discussion of the Roman philosophies of Stoicism and Skepticism.  For Hegel, Christianity found its natural intellectual home in Rome, because Rome had produced the individualization of consciousness that Christianity requires, while exhausting all the reasonable expression of it possible within Roman culture itself.  (Per Hegel, Jewish culture, wherein Christianity originated, had found itself in a cul-de-sac of rigid, written “divine” law and inherited custom.)  By now, it should be obvious that we see in Hegel, not a theological explanation of history, but an historical explanation of theology, at least given the assumptions and accepted scholarly knowledge available to Hegel.

[5] Thus, for instance, Nietzsche’s claim that Christianity represented a “slave morality.”

[6] See: the review of scholarly opinion at:, especially section 13, Christianity was mostly made up of ‘middling-plus’ class folks: merchants, tradesmen, craftsmen.

[7] See:

[8] Even Marx understood this, which is one reason he hated the very idea of money. See: He just hoped that money had been a recent invention.  Nope; it’s been here throughout most of recorded history.  See:  I warn the reader that in this instance, the Wiki article is flawed, since it concentrates entirely on the history of money in the West.  In fact, there is evidence that the Chinese developed money at roughly the same time as the West, but paper currency much earlier.  See:

[9] Our translation is that of W. C. Firebaugh (1922), which includes fascinating, if dated, scholarly notes:

[10] See:  My suspicion – but this is only a guess – is that clergy believed the text worthy of preservation, despite its scandalous material, because it included necessary keys to colloquial Latin.  Some Roman slang is only preserved in the Satyricon.  Besides, as Augustine argued in Civitas Dei, not only was the Roman Empire a dung heap, but secular history, as opposed to Sacred History – i.e., the relationship between Man and God – was entirely a waste of time.  See:

[11] For instance:  Early in the text we get a discussion of the cannibalism performed on their children by mothers in besieged cities; and the existing text ends with Eumolpius demanding that his executioners eat his body.

[12] A requirement in the study of rhetoric, which tells us that Encolpius – like Augustine, two centuries later – was intended by his family for a career in law.

[13] Satyricon, Chapter Fortieth.

[14] And it certainly helped that he was the merchant’s lover, or “mistress,” as he remarks with drunken pride.

[15] It should end at dawn, but when Trimalchio hears the cock crow, he immediately orders it caught and cooked.

[16] Satyricon, Chapter Seventy-First.  “Decreed Augustal, Sevir In His Absence/ He Could Have Been A Member Of/ Every Decuria Of Rome But Would Not” – Trimalchio claims that he was appointed to the Priests of Augustus, and would have been welcome in any of the officially recognized cults of Rome; but (he implies) his modesty prohibited acceptance of such honors.

[17] Satyricon, Chapter Seventy-Seventh.

[18] In order to have an ideology, we must confront external disagreements with and internal contradictions to our beliefs, which are then resolved and appropriated, negated and cancelled, or marginalized and ignored.  We thus arrive at generalities that we comfortably assume are necessary and superior to those that came before.  Hegel’s is not the only description of this process, but it is in many ways the most powerful.  My argument here has been that the evidence of the Satyricon is that the margins keep coming back, the contradictions are rarely resolved, and it is an inevitable human trait to be thoroughly disagreeable.

Problems with Utilitarianism

Reading about Utilitarianism recently, I first asked myself what I knew about it. It is now recognizably a form of moral realism, positing a standard of moral conduct separable from personal experience or belief – the greatest good for the greatest number. It’s been many decades since I’ve read Bentham, but I seemed to recall there was at least a suggestion, at the beginning of Utilitarianism, that its basic principles were already implicit in actual practice, and that Utilitarianism merely promised clarification and perfection by application of ‘scientific’ methodology. If so, then originally Utilitarianism would not be a moral realism but a scientistic justification for, and institutionalization of, existing practices. However, such a Utilitarianism would be unsustainable due to objections from any number of positions taken by those who felt the then current practices somehow disenfranchised them, or injured them, or oppressed them. (Malthus’ argument that the poor should be allowed to die off is this kind of Utilitarianism, and one can imagine the poor and their advocates not being too happy with it.) If I were remembering the matter aright, it should be clear why Utilitarianism would mutate into a claim of a ‘good’ as an identifiable value separate from what any one individual or group would wish it to be.

In America, most political arguments are in fact Utilitarian in one sense or another – and really can’t be otherwise. A politician is always arguing that he or she represents the most important interests of the greater number of the electorate – how could they not?

My general point is that it’s easy to see why understanding Utilitarianism might be somewhat difficult for some (including myself). I don’t say that to defend it, but because I find it somewhat confused, with a checkered history, even though politically inevitable in a diverse population with democratic aspirations.

I was never very impressed with the philosophy of Utilitarianism, so I didn’t keep up with it much. Kant’s deontology may be just as wrong, but it is far more interesting, because it raises the question of just how far we can extend rationality into the realm of morals before we bump into the fundamental problem of any moral realism, (or meta-ethical analysis, for that matter), cultural differences.

At any rate, reviewing some background material today, I find that I was wrong about Bentham (he was in fact attempting reformation of existing practices), but right about the essentially confused nature of Utilitarianism. Higher level utilitarian arguments can be convincing (and the crude utilitarianism we find in politics can be persuasive); but the ground is very shaky.

Here is an interpretation of Bentham‘s general premise, from The SEP: “We are to promote pleasure and act to reduce pain. When called upon to make a moral decision one measures an action’s value with respect to pleasure and pain according to the following: intensity (how strong the pleasure or pain is), duration (how long it lasts), certainty (how likely the pleasure or pain is to be the result of the action), proximity (how close the sensation will be to performance of the action), fecundity (how likely it is to lead to further pleasures or pains), purity (how much intermixture there is with the other sensation). One also considers extent — the number of people affected by the action.”

To assume that “we are to promote” – that is, that we are obligated to promote – “pleasure and act to reduce pain,” is to commit ourselves to a standard separable from any particular instance of pleasure and pain. And this makes absolutely no sense. The First Noble Truth of Buddhism, that life is suffering, was derived – and remains derivable – from personal experience. (And if one hasn’t experienced it, then the way of the Buddha offers no solution.) But apparently Bentham distrusted experience as a guide, since it tends to generate morals based on personal prejudice; so where is this obligation to promote happiness coming from?

Secondly, Bentham is suggesting a calculus of pleasure and pain, when such are without any essential measure. Psychologists have tried for years to provide such measurement, with success limited to purely physical stimulation. But how much pain is experienced by a parent upon the loss of a child? How much pleasure in a wedding ceremony? What kind of pleasure do I feel when I learn a hated enemy is dead, such that I can measure it? What kind of sorrow and anger am I feeling in support of the African American community’s response to the alarming number of police shootings of unarmed men and women? On what scale should I rate it?

So, how generalizable is this presumed promotion of pleasure and reduction of pain? The last paragraph of my previous comment raises the inevitable cultural problem – pleasure and pain are not reducible to physical sensations; indeed, physical sensations are frequently responses to social events. But different cultures realize socialization in many different ways. Recently, I read someone remarking that god hates homosexuals. I have heard Protestant ministers make this claim, but Catholic clergy have long followed the principle ‘hate the sin, but love the sinner,’ presuming this to be true of god. We know the ancient Greeks and Romans were quite tolerant of homosexuality; and the cultures of ancient India and Japan had ornate rules for ‘proper’ satisfaction of homosexual desires.

The SEP article quotes Bentham’s rejection of laws against homosexuality as an unnecessary impingement of personal sentiment on the general welfare thus:

“The circumstances from which this antipathy may have taken its rise may be worth enquiring to…. One is the physical antipathy to the offence…. The act is to the highest degree odious and disgusting, that is, not to the man who does it, for he does it only because it gives him pleasure, but to one who thinks [?] of it. Be it so, but what is that to him?”

One can sympathize with Bentham and still see that he has somewhat missed the point. People often feel greater security and greater pleasure in socialization when they have a sense that the culture they live in is homogeneous enough that they share values with the greater number of their fellow community members. The cultural differences concerning homosexuality indicate much wider cultural assumptions about the shared values of the differing communities – and not just about homosexuality, but about to what degree individual behavior may vary from community norms, about the appropriate means of tolerating such variance, about the ground and harshness of sanction concerning unacceptable variance. Once we begin studying cultural difference along such general lines, we begin to see in the details just how different cultures can get. Utilitarianism soon stands revealed as a set of assumptions and arguments within a *given* culture, and can no longer be universalized on a founding principle to which we all agree.

Beyond Bentham we come to the classical Utilitarian identification of ‘pleasure’ with ‘happiness,’ and this is not sustainable. It is a torture of reason to suggest that ascetics must be feeling some physical pleasure in their denial of physical pleasure; yet they may certainly be very happy. And yes, they may be feeling a psychological pleasure, but this may yet not be the source of their happiness, so much as their self-identification with their ascetic ideal, to which their psychological pleasure is mere response.

Which of course raises the apparently long-recognized critique of Utilitarianism’s insistence that ‘happiness’ is the ultimate goal of our moral decisions (whether we wish to admit it or not) – namely, that it is simply not at all clear that all moral or ethical choices do, or ought to, move in the direction of increasing happiness. It is demonstrable that many ethical decisions we make do not lead to the greater happiness of one’s self or one’s community. My loss of faith did not bring happiness to me nor to the Catholic community in which I was raised. Commitment to civil rights in the 1960s meant recognizing that years of contention and further reformation and occasional strife would follow, as efforts to redress discrimination and increase acceptance of all races as fellow humans would need to continue indefinitely.

As I’ve noted before, where general ethics within a diverse community are concerned, I tend to think eclectically. There are some issues I would argue along deontological lines; others I think are better addressed by cultivating personal virtuousness (virtue ethics); on other issues I can be a ruthlessly legalistic pragmatist or Hobbesian contract theorist; so of course there are issues I wouldn’t hesitate to address on Utilitarian grounds, especially in political matters.

But as a complete normative theory of ethical behavior, Utilitarianism still seems confused – and, frankly, an artifact of a given culture at a given time, which has largely passed into history.

A problem with eugenics

According to Wikipedia, “Eugenics (/juːˈdʒɛnɪks/; from Greek εὐγενής eugenes “well-born” from εὖ eu, “good, well” and γένος genos, “race, stock, kin”) is a set of beliefs and practices that aims at improving the genetic quality of the human population.”


Here’s the problem with eugenics: it is built on an assumption that is grounded in a presumption concerning the values of the researchers involved.

The assumption is that the human species needs to be improved genetically; but this is grounded on the presumption that such improvement can be determined according to values upon which we should all agree. In fact of course, all such values are culturally bound – completely and inextricably. Thus the ‘improvement’ offered will always imply the hopes and prejudices of a given group within a given culture. There is no way to realize eugenics that is not inherently ethno-centric or ethno-phobic.

I’m sure some here hope that eugenics can be used to discover and eliminate genetic predispositions to religious belief; but surely, a religious eugenicist has every right to hope that such can be done to eliminate predispositions toward atheism. After all, technology plays no favorites.

Further, the very assumption that the human species needs to be improved in this manner is itself highly questionable, since it implies the de-valuation of the species just as it is – it implies that there is something wrong about being human, that humans are inherently flawed – a residue of Abrahamic ‘fallen man’ mythology.

As an illuminating side-topic, consider: practitioners of ‘bio-criminology’ (which I would argue is a pseudo-science) target for genetic study criminal populations that are overwhelmingly African in descent. They seem to hope that genetics will reveal a genetic disposition to ‘violent’ behavior, such as, say, mugging. And the argument for targeting more African Americans than European Americans would be that there just are more African Americans incarcerated for such behavior. The argument is clearly flawed, since it completely disregards sociological knowledge about the conditions with which African Americans must deal in communities where crime rates are fairly high.

But consider: The practices of vulture capitalists playing the stock market, or collapsing viable companies into bankruptcy, have clearly devastated far more lives than all the muggers in America. Yet there is never any suggestion from ‘bio-criminologists’ that geneticists should find the genes responsible for predispositions toward greed and callousness, dishonesty on the stock exchange, or ruthless exploitation of employees. And there never will be, because white collar criminals contribute to college funds, establish foundations that offer grants, hire bio-criminologists into right-wing think tanks, etc.

Personally, I won’t consider any arguments for eugenics until I get a promise that we will target the behaviors of the real criminals in this society – like the ones who work on Wall Street.



As we read through the Wiki article, we find that there is a recent trend among some geneticists to use the term ‘eugenics’ to apply to any effort to use genetics to address certain health conditions, such as inheritable diseases like Huntington’s, or to provide parents with the opportunity to decide whether to abort a fetus with such diseases. This is just a mistake. First, no one opposed to classical eugenics has ever argued that we shouldn’t use genetics to address health conditions or diseases – because we can do this without attempting to improve the species genetically, which is the ultimate goal of eugenics. Secondly, resurrecting the term eugenics for what is pretty standard genetics seems to bury history, or at least confuse our understanding of it. Third, the choice of whether to have a child, given the potential for heritable diseases, has long been available through understanding family histories – and it has not dissuaded a large number of people from having children despite family histories of such illnesses, because the choice to have a child or not is rarely restricted by purely rational consideration. Perhaps it should be, but it’s not. For such restrictions to have a large enough impact on the population to affect genetic improvement of it, they would have to be impelled from outside the family, perhaps by law, and then we would find ourselves directly in the arguments concerning classical eugenics, like the one I make above.

Finally, there’s the question of whether we really want to use genetics to improve the species at all. It’s quite possible that naturally occurring reproduction actually contributes to the survival of the species: we don’t know what environmental challenges the species will face in the future, and what may appear to be a weakness now may prove to be a strength in another era.

I would say, let’s stop calling any serious genetics a form of eugenics, and let’s stop pretending that we are wise enough to direct the course of human evolution.