Simulation argument as gambling logic

I have submitted an essay to the Electric Agora, in which I critique the infamous Simulation Argument – that we are actually simulations running in a program designed by post-humans in the future – made in its strictest form by Nick Bostrom of Oxford University. Since Bostrom’s argument deploys probability logic, and my argument rests on traditional logic, I admitted to the editors that I could be on shaky ground. However, I point out in the essay that if we adopt the probability logic of the claims Bostrom makes, we are left with certain absurdities; therefore, Bostrom’s argument collapses into universal claims that can be criticized in traditional logic. At any rate, if the Electric Agora doesn’t post the essay, I’ll put it up here; if they do, I’ll try to reblog it (although reblogging has been a chancy effort ever since WordPress updated its systems last year).


Towards the end of that essay, I considered how the Simulation Argument is used rhetorically to advocate for continuing advanced research in computer technology, in hope that we will someday achieve a post-human evolution. The choice with which we are presented is pretty strict, and a little threatening – either we continue such research, advancing toward post-humanity – or we are doomed. This sounded to me an awful lot like Pascal’s Gambit – believe in god and live a good life, even if there is no god; or do otherwise, live miserably, and burn in hell if there is a god. After submitting the essay I continued to think on that resemblance, and concluded that the Simulation Argument is very much like Pascal’s Gambit: its rhetorical use in support of advancing computer research, much like Pascal’s use of his Gambit to persuade non-believers to religion, actually functions as a kind of gambling. This is even more true of the Simulation Argument, since continued research into computer technology involves considerable expenditure of monies in both the private and the public sector, with post-human evolution being the offered pay-off to be won.


I then realized that there is a kind of reasoning that has not been fully described with any precision (although there have been efforts of a kind moving in this direction) which we will here call Gambling Logic. (There is such a field as Gambling Mathematics, but this is simply a mathematical niche in game theory.)


Gambling Logic can be found in the intersection of probability theory, game theory, decision theory and psychology. The psychology component is the most problematic, and perhaps the reason why Gambling Logic has not received proper study. While psychology as a field has developed certain statistical models to predict what percentage of a given population will make certain decisions given certain choices (say, in marketing research), the full import of psychology in the practice of gambling is difficult to measure accurately, since it is multifaceted. Psychology in Gambling Logic must account not only for the psychology of the other players in the game besides the target subject, but for the psychology of the target subject him/herself, and for the way the target subject reads the psychology of the other players and responds to her/his own responses in order to adapt to winning or losing. That’s because a gamble is not simply an investment risked on a possible/probable outcome: the outcome either rewards the investment with additional wealth, or punishes it by taking it away without reward. But we are not merely accountants; the profit or loss in a true gamble is responded to emotionally, not mathematically. Further, knowing this ahead of the gamble, the hopeful expectation of reward, and anxiety over the possibility of loss, color our choices. In a game with more than one player, the successful gambler knows this about the other players, and knows how to play on their emotions; and knows it about him/her self, and knows when to quit.
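
The difference between the accountant’s arithmetic and the gambler’s response can be made concrete. Here is a minimal sketch (the stake, the payoffs, and the logarithmic utility curve are my own illustrative assumptions, not anything claimed above): a bet can carry a positive expected monetary value and still be rationally declined, because what the player weighs is the felt value – the utility – of outcomes, not their cash value.

```python
import math

def expected_value(gamble):
    """Expected monetary value: sum of probability * payoff."""
    return sum(p * payoff for p, payoff in gamble)

def expected_utility(wealth, gamble, utility=math.log):
    """Expected utility of final wealth under a concave (here logarithmic) curve."""
    return sum(p * utility(wealth + payoff) for p, payoff in gamble)

wealth = 100.0
# A coin flip: win 110 or lose 90 with equal odds -- positive expected value.
gamble = [(0.5, 110.0), (0.5, -90.0)]

ev = expected_value(gamble)                 # +10: the accountant says take the bet
eu_take = expected_utility(wealth, gamble)  # utility of taking the gamble
eu_pass = math.log(wealth)                  # utility of standing pat

print(ev)                 # 10.0
print(eu_take < eu_pass)  # True: the risk-averse player declines anyway
```

The concave curve is what encodes the asymmetry the paragraph describes: losing ninety stings more than winning a hundred and ten pleases.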


Pascal’s Gambit is considered an important moment in the development of Decision Theory. But Pascal understood that he wasn’t simply addressing our understanding of the probability of success or failure in deciding between the two offered choices. He well understood that the post-Reformation era in which he (a Catholic) was writing saw the rise of personality-less Deism, and some suggestion of atheism as well; many in his audience could be torn with anxiety over the possibility that Christianity was groundless – that there was no ground for any belief or for any moral behavior. He thus reduces the possible choices his audience confronted to the two, and suggests one choice as providing a less anxious life, even should it prove there were no god (but, hey, if there is and you believe, you get to Paradise!).


In other words, any argument like Pascal’s Gambit functions rhetorically as Gambling Logic, because it operates on the psychology of its audience, promising them a stress-free future with one choice (reward), or fearful doom with the other (punishment).
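
Stripped of its psychology, Pascal’s reasoning is a simple dominance argument, and it can be sketched in a few lines. The payoff values below are hypothetical placeholders of my own (Pascal himself assigns infinite value only to Paradise); all that matters for the argument is their ordering:

```python
# A decision matrix for Pascal's Gambit. The numeric payoffs are illustrative
# placeholders; only their ordering matters for the dominance argument.
payoffs = {
    "believe":     {"god exists": float("inf"),  "no god": 1},  # Paradise, or at least a less anxious life
    "not believe": {"god exists": float("-inf"), "no god": 0},  # damnation, or a merely ordinary life
}
states = ["god exists", "no god"]

def dominates(a, b):
    """True if choice a does at least as well as b in every state, strictly better in some."""
    return (all(payoffs[a][s] >= payoffs[b][s] for s in states)
            and any(payoffs[a][s] > payoffs[b][s] for s in states))

print(dominates("believe", "not believe"))  # True: belief is the safer bet in every state
```

The rhetorical force, of course, lies not in the table but in the anxiety and hope attached to its corners – which is exactly the Gambling Logic point.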


So recognizing the Simulation Argument as a gamble, let’s look at the Gambling Logic at work in it.


Bostrom himself introduces it as resolving the following proposed trilemma:


1. “The fraction of human-level civilizations that reach a post-human stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or

2. “The fraction of post-human civilizations that are interested in running ancestor-simulations is very close to zero”, or

3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”


According to Bostrom himself, at least one of these claims must be true.

It should be noted that this trilemma actually collapses into a simple dilemma, since the second proposition is so obviously untrue: in order to reach post-human status, our descendants will have to run such simulations merely to develop the capacity for running them.


Further, the first proposition is actually considered so unlikely, it converts to its opposite in this manner (from my essay): “However, given the rapid advances in computer technology continuing unabated in the future, the probability of ‘the probability of humans surviving to evolve into a post-human civilization with world-simulating capabilities is quite low’ is itself low. The probability of humans evolving into a post-human civilization with world-simulating capabilities is thus high.”


Now at this point, we merely have the probabilistic argument that we are currently living as simulations. However, once the argument gets deployed rhetorically, what really happens to the first proposition is this:


If you bet on the first proposition (presumably by diverting funds from computer research into other causes with little hope of post-human evolution), your only pay-off will be extinction.


If you bet against the first proposition (convert it to its opposite and bet on that), you may or may not be betting on the third proposition, but the pay-off will be the same whether we are simulations or not, namely evolution into post-humanity.


If you bet on the third proposition, then you stand at least a 50% chance of earning that same pay-off, but only by placing your bet by financing further computer research that could lead to evolution into post-humanity.


So even though the argument seems to be using the conversion of the first proposition in support of a gamble on the third proposition, in fact the third proposition supports betting against the first proposition (and on its conversion instead).
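
The structure of the three bets laid out above can be made explicit in a small sketch. The state labels and payoff strings are my own shorthand, not Bostrom’s; the table simply shows that betting against the first proposition and betting on the third are the same wager in disguise, which is why the rhetoric collapses into “fund the research or face extinction”:

```python
# Payoffs for each bet in each possible state of the world, following the
# essay's rhetorical reading. Labels and values are illustrative shorthand.
payoffs = {
    "bet on P1 (divert funds)":       {"we are sims": "extinction",
                                       "we are not":  "extinction"},
    "bet against P1 (fund research)": {"we are sims": "post-humanity",
                                       "we are not":  "post-humanity"},
    "bet on P3 (fund research)":      {"we are sims": "post-humanity",
                                       "we are not":  "post-humanity"},
}

# Betting against P1 and betting on P3 pay off identically in every state:
same_wager = (payoffs["bet against P1 (fund research)"]
              == payoffs["bet on P3 (fund research)"])
print(same_wager)  # True
```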


What is the psychology this gamble plays on? I’ll just mention the two most obvious sources of anxiety and hope. The anxiety of course concerns the possibility of human extinction: most people who have children would certainly be persuaded that their anxiety concerning the possible future they leave their children to can be allayed somewhat by betting on computer research and evolution to post-humanity. And all who share a faith in a possible technological utopia in the future will be readily persuaded to take the same gamble.


There is a more popular recent variation on the Simulation Gamble we should note – namely that the programmers of the simulation we are living in are not our future post-human descendants, but super-intelligent aliens living on another world, possibly in another universe. But while this is rhetorically deployed for the same purpose as the original argument, to further funding (and faith) in technological research, it should be noted that the gamble is actually rather weaker. The ultimate pay-off is not the same, but rather appears to be communion with our programmers. Well, not so enticing as a post-human utopia, surely! Further, that there may be such super-intelligent aliens in our universe is not much of a probability; that they exist in a separate universe is not even a probability, it is mere possibility, suggested by certain mathematical modellings. The reason for the popularity of this gamble seems to arise from an ancient desire to believe in gods or angels, or just some Higher Intelligence capable of ordering our own existence (and redeeming all of our mistakes).


It might sound as if, in critiquing the Simulation Gamble, I am attacking research into advances in computer and related technology. Not only is that not the case, but it would be irrelevant. In the current economic situation, we are certainly going to continue such research, regardless of any possible post-human evolution or super-aliens. Indeed, we will continue such research even if it never contributes to post-human evolution, and post-human evolution never happens. Which means of course that the Simulation Gamble is itself utterly irrelevant to the choice of whether to finance such research or not. I’m sure that some, perhaps many, engaged in such research see themselves as contributing to post-human evolution, but that certainly isn’t what wins grants for research. People want new toys; that is a stronger motivation than any hope for utopia.


So the real function of the Simulation Gamble appears to be ideological: it’s but one more reason to have faith in a technological utopia in the future; one more reason to believe that science is about ‘changing our lives’ (indeed, changing ourselves) for the better. It is a kind of pep-talk for the true believers in a certain perspective on the sciences. But perhaps not a healthy perspective; after all, it includes a fear that, should science or technology cease to advance, the world crumbles and extinction waits.


I believe in science and technology pragmatically – when it works it works, when it doesn’t, it don’t. It’s not simply that I don’t buy the possibility of a post-human evolution (evolution takes millions of years, remember), but I don’t buy our imminent extinction either. The human species will continue to bumble along as it has for the past million years. If things get worse – and I do believe they will – this won’t end the species, but only set to rest certain claims for a right to arrogantly proclaim itself master of the world. We’re just another animal species after all. Perhaps the cleverest of the lot, but also frequently the most foolish. We are likely to cut off our nose to spite our face – but the odd thing is our resilience in the face of our own mistakes. Noseless, we will still continue breathing, for better or worse.


—–

Bostrom’s original argument: http://www.simulation-argument.com/simulation.html.

The election’s over; what now?

Among the many gaffes, groundless accusations, false flags, insults and general whining these past couple weeks, Donald Trump assured his followers that he couldn’t possibly lose in Pennsylvania unless the election were rigged. Let’s stop and consider the logic of that. Trump was not relying on any polls (indeed he has taken to denying that they matter). He was not referring to a tsunami of letters to the editor of various news organizations, or some set of petitions. His reference point seems to be entirely his own ‘gut,’ his confidence that everyone recognizes him as the ‘smartest guy in the room,’ whom so many people love and admire.

Actually, my suspicion is that his true reference point is simply and only the applause he hears from fans at rallies.  If true, that tells us a lot about the man, first of all that he really doesn’t get the difference between fans applauding and an electorate voting.  But I think it is becoming more and more obvious that this is exactly the case.

But the logic of his assertion that he can only lose if the election’s rigged extends beyond the rallies. Basically, what he’s saying is that, since it’s so obvious that he’s so smart, and would do such wonderful things, and is so beloved for this, the election is now immaterial. Indeed, if Trump’s gut were a true measure of reality, then we shouldn’t hold the election at all. Hillary should simply throw in the towel, and the House of Representatives appoint him to office.

The irony is that Trump is making his gut known on this matter at exactly the moment when it is now possible to admit that the next President of the United States will be Hillary Clinton.

Hillary Clinton is some not-so-nice things for a progressive – or even a liberal. She does lie, she is dishonest, she is conniving and manipulative. She’s also a neocon on foreign policy, and a neo-liberal on economics. The judges she appoints to the Supreme Court will be steadfast moderates – meaning that while the train-wreck that was the Roberts court is now over, its legacy will not be undone by any major reversals. On top of that, she now has a small constituency of anti-Trump Republicans that she will have to accommodate after the election. In short, Clinton offers to head the most conservative Democratic administration since Woodrow Wilson.

However there is one thing Clinton is not, that Trump now obviously is – She is not mentally ill.

Call it sociopathy, or narcissism, or delusions of grandeur, or some other out-of-touch egomania, what you will. Donald Trump has not the slightest clue as to the nature of the political process, the nature of government, or what it means to be a political leader of the most powerful nation in a very complicated world order that is unraveling at the seams in response to years of finance-capital-elite driven globalization. (In fact, by some reports, he wouldn’t even know what to do in day-to-day administrative tasks, and is not entirely enthusiastic about becoming President for that reason.)

However – the good news is, that the election is all but over.  Whatever the final numbers prove, this is why Donald Trump has lost the election:

Demographics: Besides loyal Democrats, Trump has alienated the majority of each of the following voting blocs:

African Americans, Asian Americans, Hispanic Americans, Muslim Americans, non-Muslim Americans from predominantly Muslim nations, Mormons, Jews, Americans with disabilities, LGBT Americans, atheist Americans, scientists, veterans, parents of veterans, attractive women, not so attractive women, mothers, women who menstruate (I think that’s quite a number), Republican women, Republican moderates, Republican politicians struggling to retain their seats in the next congress, Republicans concerned for national security, Republican business people, the college educated (left and right), Americans who don’t like Putin, Americans who like babies – and the list goes on. Trump has failed to alienate angry uneducated white men, but recent polls indicate that he’s no longer doing so well among them. (And yes, he has alienated the Christian Right, but then hired Mike Pence for VP to make amends.)

But that’s not all. Trump has utterly failed to understand the post-nomination campaign process. He has no ground game, few storefronts with door-to-door campaigners, few liaisons with local Republican politicians. (It’s not even clear he understands why that’s needed.) He expected the RNC to fund his campaign, when part of the responsibility of the Presidential nominee is to raise funds for the Party. He has isolated himself from the national press, failing to realize that he is expected, in part, to speak through them, especially were he to become President.

It’s clear now that Trump has no strategy. His pet boy Manafort may be able to guide him to battleground states, but in as lop-sided an election as this, he can’t just ignore previously safe ‘Red’ states – even Arizona, probably the most right-wing Republican state in the West, and one suffering severe tensions between dominant Anglos and a Mexican American underclass, is now in play.

But Trump’s biggest problem, of course, is his own mouth. He can’t stop it. That’s why he is clearly pathological. He sounds like a robot when he reads a written speech, but when he goes off-text, he’s an uncontrollable, foam-at-the-mouth ranter and self-promoter. Even if his people could get him to rein it in, it’s probably too late.

The next big moment of the campaign season is the arrival of the Presidential debates.  My guess right now is that Trump will probably make it through one or two before he blows up.  After which he will ‘double-down’ on the narrative that the ‘system is so rigged against me, they won’t let me win,’ because by that time it will be obvious even to him that he has already effectively lost the election.

So the discussion progressives and liberals now need to begin is: what are we to do during the Clinton administration – how do we further progressive causes and somehow begin winning seats in Congress and in State capitols? That’s a long game to play; but otherwise we may have more nightmares like 2016 further down the road.

Problems with public discourse again (and again, and again…)

Recently, people have been wondering about the clamor for correct speech, from both the Left and the Right. There are just some things we’re not supposed to talk about in certain quarters – whether this is a discussion of a rape narrated in a work of literature in an English studies course, or of the non-Christian deism or skepticism among the writers of the US Constitution. People are just too damn sensitive these days. We forget that an honest public discussion on shared concerns should deal with the realities of life’s experience, and the disappointments of history, however harsh. This is a problem that bubbles up time and again in American public discourse. America has been a Puritan culture since… well, since the Puritans first landed here. (They were not escaping the religious intolerance of England; they were running from the religious toleration they found in the Netherlands.)

Puritanism need not be claimed by only one ideology. It is a rigid attitude toward social behavior, demanding that what one person, or one group, sees as the right and the good ought to be accepted by everyone and abided by. So there are many forms of puritanism, across the cultural and political spectrum. Since it stems from a ‘will to be right,’ which is endemic among those belonging to cultures open enough to engender serious disagreements, it will keep rearing its ugly head again and again, causing pain to those successfully repressed, and push-back from various rebellious spirits – including competing forms of puritanism.

But while we should always increase our understanding of the problem, that doesn’t mean we will ever be able to rectify it. The variable factors are too many, too historically entrenched, and too many people are invested in the most troublesome of them.

Two things I’d like to note. First, of course, the obvious – all societies engage in discourse management and limitation. ‘We don’t talk about such things;’ ‘a proper lady/gentleman would never use such language;’ ‘say that again, child, and I’ll wash your mouth out with soap!’ Such cautions were common in my youth. The free speech movement of the ’60s led to their eventual disuse; but they’ve obviously been replaced by other cautions, motivated by different interests. Were these eventually discarded, they would simply be replaced. Social interactions, to proceed smoothly, must have some sense of direction, and of boundaries that cannot be crossed. Some of these boundaries are rather obvious in a given context: A white supremacist skinhead should probably not spew his racism when he’s in the midst of bloods in the hood. Knowing such boundaries and maneuvering through them is part of the skill of speaking with others. An individual is his/her first censor, and should be.

Second: America doesn’t have only one culture, and never has. The very hope for one was lost with the Louisiana Purchase. Throughout the 19th century, when people wrote of ‘American culture,’ they were actually talking about the culture of the Eastern seaboard. By the 1920s, this myth became harder to sustain, as emergent cities in the West began defining themselves, while regional politicians began stoking grudges born in the Civil War against Eastern intellectualism, big banks in NYC, and the ever out of touch Washington politician. Meanwhile new media were developing to record and preserve (and market) the culture of quite limited communities – think of the blues and early country recordings from various locales in the South. But also think of the Western films that memorialized the fundamental differences between the Eastern and Western historical experience. Finally (but only for now), think of how the influx of immigrants in the late 19th/early 20th centuries effectively redefined many of the cities of the Eastern seaboard (and, later, elsewhere as well). The year 1926 might find one reading The New Yorker, but just as likely, given one’s heritage, Der Groyser Kundes.

In the ’60s, which saw television become our major medium for information and politics, combined with the rapid increase in the number of colleges, all sharing a similar curriculum, and the rise of national political movements, Americans effectively deluded themselves into believing there was a national culture. That could not be sustained. The social consequences of the national political movements included much good, but also considerable fragmentation along regional, political, economic, ethnic lines, but also along lines of locally generated sub-cultures, some cultures of choice. Now when people refer to an ‘American culture,’ they are really only talking about the culture projected on television, since TV is the only source of information that most Americans share. Unfortunately, all TV seems to deliver is further delusion, much of its ‘information’ of questionable quality and uncertain factual basis.

The fragmentation is an on-going historical process – the tendency appears to be a function of Modernity, and we find it in play during the Reformation, as Protestant churches splintered off from each other due to (often violent) doctrinal disputes. Groups are formed in opposition to other groups, coming together over a perceived sharing of values, only for their members to discover that they do not share the same motivations, and are not unanimous in their interpretation of those values. The group’s discourse management strategies break down, boundaries get crossed, group members break off to form new groups, and so on.

‘Well,’ the question may be asked, ‘why aren’t we simply a bunch of mutually suspicious, antagonistic tribes at this point?’ Well, maybe we are. However, we have, at crucial historical moments, developed bureaucratic institutions and organizations that suffer from considerable inertia; and these institutions and organizations are really what bind most of us together.

(For instance, I prefer Bernie, but I’ll probably have to vote for Hillary in November, because I share more values and interests with the Democratic organization than the Republican one, and the institution of the US government remains relatively stable, even though apparently incapable of needed reform. But hopefully it would prove resistant to Trumpian subversion as well, should the worst come to pass….)

I here think of the countless essays I have read over the past 45 years that have deployed phrases like ‘we need to,’ ‘we ought to,’ ‘we really should,’ concerning hopes of political, social, or economic reform. Not a single one of those essays actually contributed to political, social, or economic change.

I think it was maybe the late ’90s, when I was reading an essay insisting that ‘we need to do (x),’ that I suddenly realized: ‘no, we don’t need to do anything – it might be good to do (x); but since we don’t need to do it, and most people seem not inclined to do it, well, so it goes.’

Around that time I had another unhappy insight, into the nature of ‘the crisis of contemporary capitalism.’ There is no crisis of contemporary capitalism. Workers get screwed, lose their jobs, suffer in poverty – and that’s exactly what is needed to keep capitalism working. So were the recession of ’08 and the lame attempts at amelioration. Unemployment is built into the system; poverty is built into the system; uncertainty is built into the system. Social injustice is part of the American economy. Some use race to leverage this injustice, some gender, some age, some class, some education – but some prejudice must be formed and deployed to leverage injustice in the system, because the injustice is a necessary function of the system. One can no more imagine a capitalist economy without social injustice than one can imagine a species of tree without bark.

That means that social injustice cannot be corrected by sweeping movements without actual revolution; it has to be corrected incrementally, on a case by case basis, even where the case involves collectives. John L. Lewis, when asked why he was not a communist, replied (paraphrasing from memory), ‘Communists want utopia; I just want to make things better.’

It is a core problem with so-called Social Justice Warriors, or devotees of scientism, or religious zealots, or the Tea Partiers, etc. – that they honestly believe that if we all just get together and get our heads right, the world will spin in the desired direction.

That’s not true, and it’s not how history happens.

Read instead Martin Luther King’s “I have a dream.” King uses “we must” phraseology in only one paragraph, and it is not a call to social change, but a moral directive to those who already agree with his basic project. http://www.americanrhetoric.com/speeches/mlkihaveadream.htm

There’s no point in asking people to change. They have to want to change. Americans are unhappy; but they do not want to change. That’s the real problem here.

I’m not simply trying to say something about our economic system (although economic considerations underlie many of the issues here discussed). My point is that ‘what ails our discourse?’ is a question for those of us who believe that public discourse ‘ails’ – that the shared interchange of information and persuasion has developed obstacles to communication and shared agreements leading toward collective action. But I suggest that most people do not perceive any ailment here at all, and are not only content with the current universe of discourse, but actually find it socially useful in a number of ways (including economically).

Any time we are considering a seeming problem in a given society, it helps to ask three questions: 1. Do the people involved perceive a problem? 2. If they do, what are they willing to do about it? 3. If they don’t, or are not willing to do anything about it, then could this ‘problem’ actually be built into the social processes that keep the society functioning? In other words, a) it may not be causing anyone discomfort despite its inefficacy as a process, and b) even should it in some ways cause discomfort or even harm, it may be satisfying in other ways that keep the given society functioning.

In short: on disinterested observation, it may appear to be a problem; but once all interests are taken into account, it may not be a real problem at all, or at least one that people are quite willing to live with.

Finally, I referenced Dr. King’s “I have a dream,” because that was a public address that really did contribute to a moment of social change. But how? At the time, everyone knew that change was in the wind – it had already begun with Brown v. Board of Education, and the Alabama marches, and it was not to be stopped. All King did was to provide it with a focus, a lightning rod of imagery expressing the fundamental hope that his audience held dear, while reminding those on the fence of the issue of the justice embedded in that hope. He doesn’t talk about what we should do – his audience already knows what they should do; he is telling us ‘now is the time to do it,’ and reminding us of the future it can lead us to.

In the condition of increasing fragmentation in 2016, it’s not clear that an address like King’s is possible or would have anything like the same effect. We do not know that change in a given direction is possible; we do not share the same hopes or dream the same future anymore. There is really no ‘we’ here to share this knowledge or these hopes, or to take action based on them. Just a whole bunch of differing ‘us’ against ‘them’ tribes.

Unfortunately – most people, though they complain, seem quite willing to live with that.

Hitler’s Mom

I’ve remarked in comments here and elsewhere, that I once wrote a critical-rhetorical analysis of Adolf Hitler’s Mein Kampf  (circa 1994). I would like to submit here a chapter of that analysis, so that if further reference to it becomes necessary, there is evidence of its existence.

This particular chapter, as an essay, reveals the somewhat odd relationship Hitler had with his mother, at least as far as can be gleaned from Hitler’s text.

The argument: Hitler uses rhetoric to redefine his early experiences in a way that tends to bury facts, suppress anxieties concerning women and sexuality, and shape those experiences as seeming necessities of fate. The principal revealing rhetorical moment comes when, having effectively blamed his mother for the poverty he suffered in Vienna (eliding his own irresponsibility as something of a wastrel), he tropes a new mother for himself (‘Dame Care’), which thenceforth effaces memory of his actual mother altogether.

One reason for posting here is the recent legalized publication of Mein Kampf  in Germany for the first time since WWII. It is also always a good thing to remind ourselves, not only of what Hitler and his  followers did, but the kind of people they were – especially since there are many such people among us, unfortunately even in politics.

Finally, I hope the essay demonstrates the helpful interaction between rhetoric and history (and indirectly psychology). The understanding of history is not simply the recording of facts, but a greater understanding of the people who make history, and of their motivations.

(Another time, I will probably tell something of the story of why I did not follow the full study through to publication, which is not without its own interest…. )
—–

Hitler’s Two Mothers, by E. John Winner:

My mother, to be sure, felt obliged to continue my education in accordance with my father’s wish; in other words, to have me study for the civil servant’s career.
(…)
Concerned over my illness, my mother finally consented to take me out of the Realschule and let me attend the Academy.
(…)
Two years later, the death of my mother put a sudden end to all my highflown plans.
It was the conclusion of a long and painful illness which from the beginning left little hope of recovery. Yet it was a dreadful blow, particularly for me. I had honored my father, but my mother I had loved.
(…)
What little my father had left had been largely exhausted by my mother’s grave illness(…). [1]
(…)
When my mother died, Fate, at least in one respect, had made its decisions. [2]
(…)
After the death of my mother I went to Vienna (…). [3]
(…)
I exalt it [poverty] for tearing me away from the hollowness of a comfortable life; for drawing the mother’s darling out of his soft downy bed and giving him ‘Dame Care’ for a new mother (…). [4]

In a book of some 300,000 words, much of it purporting to be autobiographical in nature, and from a man who had, at the time of writing, lived nearly half his life with his mother, the above handful of sentences and fragments comprise every last word Adolf Hitler chose to write about his mother in Mein Kampf.

Respect his father but love his mother? Considering her nearly complete absence from his autobiographical material here, one wonders if he thought much about her at all.

Historians tend to agree that Klara (Polzl) Hitler, the young wife of a middle-aged man (who, by some accounts, abused her), several of whose children died before the age of six, spoiled her only surviving son (excluding step-children), Adolf. One could expect that. And one could expect that the son would be devoted to his mother in return; or, if the relationship took a pathological turn, perhaps the son would respond to the fawning attentions of the mother with an equal pathology, in what might be termed a ‘love/hate’ relationship. It could take on even a sado-masochistic quality. Certainly Hitler’s character and reputation invite this interpretation. But, alas! why doesn’t it show up in what he writes of her?

To be sure, National Socialism’s brutally exclusionary ideology, and its dominating attitude toward women, are well known. The ideal ‘Aryan’ is a male; women, even in terms of their most admirable qualities, are little more than baby-making housekeepers. Their greatest virtue, accordingly, would be their ‘racial purity’ – bearers of good genes. Nothing else would be asked, or expected of them. Indeed, any more would be too much. (But see the discussions in When Biology Became Destiny. [5])

Hitler’s own attitudes were kinky enough, even as manifest and openly expressed, without (pace the once famous OSS psychoanalysis by Langer [6]) speculating on his behavior in the bedroom. Hitler was often dependent on women, and yet uncomfortable around them. He idealized them, avoided them, talked down to them. He treated both of his known mistresses, Geli Raubal and Eva Braun, as sometimes amusing, sometimes annoying, children. The matter is made more pathetically complex when one remembers that Hitler surrounded himself with the promiscuously homosexual leadership of the SA, as well as pathologically sado-masochistic anti-Semites like Julius Streicher. And Hitler’s rants in Mein Kampf – against syphilis, prostitution, and genetic contamination – are widely held as evidence of his own sexual pathology, although no one is quite sure anymore what that pathology might have been. Disgust? Suppressed desire? Certainly these rants are mad parodies of reformist rhetoric, and need closer reading. But when revivalist ministers begin foaming at the mouth about the terrors of Hell, are they expressing a personal fear? doing all they can to save the souls of their listeners? or simply selling a belief in exchange for filled coffers? (And who would want to believe in the terrors of hell, and to what purpose?)

At any rate, the point is, Hitler’s attitude towards women per se is enough of a question that anything he might say about the first woman he would know intimately, his own mother, would promise to reveal much about that more general attitude which so informed the ideology of a major political movement that would eventually dominate an entire nation.

But such is not the case. Hitler’s one great opportunity to wax sentimentally over the virtues of motherhood, in their embodiment in the form of his own mother, slips by with little remark.

In sum, the story Hitler tells is this: His mother (notice the absence of name) feels obliged to continue with his dead father’s plans for his son. Then her own illness brings about her death. This impoverishes the son. He leaves for Vienna, and further poverty. However, this turns out to be fortuitous – indeed, decided by Fate. The son learns more from the harshness of his poverty than he could have learned from his comforting mother and her comfortable lifestyle. So much so that poverty itself becomes his surrogate mother thenceforth, figuratively speaking.

But what is the nature of this figure of speech? The unnamed actual mother becomes displaced by the figural mother, ‘Dame Care’ – apparently a noblewoman, and a masterly one. The actual mother could teach Hitler nothing, but ‘Dame Care’ is insistent, unyielding. From her, he begins to learn his lessons about life. She is thus his real mother, since she fulfills the parenting function; the actual is thus displaced by the real. The actual had obligations, but would not meet them; thus her position is surrendered to the real. Of course, the death of the actual was convenient; but perhaps more than that: Fate, again, takes a hand.

There is bitterness in this little fable, but there is more. Interestingly, no scholar I’ve read seems to pick up on it; but Hitler’s ambivalence towards his mother, and towards her death, is right there on the surface, nothing could be clearer. He “loved” her, but her death turns out to be a good thing. She loved him, but this was not such a good thing, for he learned nothing thereby. Perhaps (could this thought have crossed his mind?) it would have been better for her to fulfill her obligations to the dead father and push Hitler to pursue a civil servant’s career? But he hated this idea. Nonetheless, he would not have suffered in poverty if she had done so. Yet if he hadn’t suffered in poverty, he would never at last have learned the world, would never have become the leader of a historically important political movement. So it was good that she relented, and good also that she died, and good that her death forced him into a life of poverty, and good that she was revealed as not a very good mother (for loving too much) by a parenting figure, Dame Care, poverty, who proves a much better mother indeed – because she (poverty) does not love her son.

So the matter stands thus: Whatever he actually once felt for his mother, by the time of the writing of Mein Kampf, Hitler had condemned her for loving too much, and fulfilling her parental (rather than wifely) obligations. He turned instead to experience itself to be his guide, assuming it to have a will (and thus a personality), and an intention for the education it was giving him. His own experience thus becomes projected outside of himself, as a power greater than himself, directing him in his career. As much as to say: ‘This happens to me, but the “This” has its reason for happening; it wills me to move in a given direction. It is not me, yet it makes me who I am.’

On the surface this sounds terribly optimistic, a healthy means of learning from one’s mistakes, so to speak. But Hitler isn’t writing of mistakes – all this has been fated, there are no mistakes. Under this seeming surrender to experience lies a complete denial of the lessons that could be learned from experience. ‘This wills itself on me;’ i.e., ‘I am not doing this, it is doing itself.’

Perhaps a bit of factual detail helps clarify the matter. In his subtle but unmistakable condemnation of his mother, Hitler effectively accuses her of impoverishing him by growing ill and dying, thus incurring costs for medical treatment and burial. But according to all his recent biographers, the evidence is clear that this is false. The family pension on which he lived at the time continued after her death; he appears to have squandered it by acting as a kind of bargain-basement spendthrift. To be sure, he had little; but what little he had he spent carelessly. Yet poverty, he claims, ‘happened’ to him, and he implies that this was his mother’s fault.

Well, if poverty is such a great teacher, perhaps Hitler owed his mother thanks, and this was his round-about way of expressing it. But the point is, Hitler, however he might be viewed objectively, presents himself as a kind of motherless child – the actual mother failed him; his ‘real’ mother, Dame Care, is simply a figure of speech. He is thus thrown into the arms of Fate, propelling him to his destiny….

And that is why Klara Polzl Hitler, to give her back the name he refuses her, so quickly disappears from view in the autobiographical passages of Mein Kampf. The love mother and son shared was but temporary weakness; its only contribution to his life was its closure. Any further memory of it – tracing possibilities to which he had turned his back – would merely prove annoying.

—–

[1] Adolf Hitler, Mein Kampf (hereafter MK), 1925; Trans. Ralph Manheim, Houghton Mifflin, 1943; p. 18.

[2] MK, p. 19.

[3] MK, p. 20.

[4] MK, p. 21.

[5] Renate Bridenthal, Atina Grossmann, Marion A. Kaplan, When Biology Became Destiny: Women in Weimar and Nazi Germany; Monthly Review Press, 1984.

[6] Walter C. Langer, The Mind of Adolf Hitler: The Secret Wartime Report; Basic Books, 1972.

Human sciences as probabilistic explanation

The thrust of this article is very simple: the explanations we find in the human sciences are nothing like the claims of causal certainty we frequently find in the natural sciences.

‘Sue hit Joe,’ the story goes, ‘because he insulted her.’

If the audience to this sentence knows both Sue and Joe, that may be the end of it, since their personalities are presumed to be understood. Yet greater explanation may be desirable, especially if there are aspects to the personalities of Sue or Joe of which those who know them are unaware.

Let’s enrich our narrative, with different scenarios.

‘Sue hit Joe, because he called her an ugly bitch.’ (Two variations in background: the general consensus is that Sue’s not attractive, or the general consensus is that she is.)

‘Sue hit Joe, because he called her a feminist dyke’ (including evident variations in background).

‘Sue hit Joe, because he called her a cockteaser.’ Let’s pause here, because the background variations to this rely less on general consensus or social fact concerning the two, and more on their internal motivations and personal boundaries. Joe might have said what he did because he’s contemptuous of Sue; or because he’s sexually frustrated in his longings for her. But Sue may be lashing out because she has unadmitted desires for Joe. She may also have personal gestures that are not flirtatious, but may be seen as such by others, and strong personal boundaries; and she is motivated in lashing out to protect those boundaries.

But let’s go back to the ‘feminist dyke’ example. Joe’s insult hinges on the pejorative nature of the word ‘dyke;’ but there are social and personal facts the insult references: either Sue is a feminist or she is not; either she is a lesbian, or she is not. That seems cut and dried. But now the context demands to be opened up. In what situation did Joe insult Sue? Are they students at the prom? Are they in a barroom after a few drinks? Are they at a feminist political rally? Are they at a gay-lesbian rights rally? If so, are there cameras recording them (enlarging their audience and providing them with a public stage)? Now they need not be presumed to know each other. They might be engaged in differing political signifying practices – Sue isn’t simply lashing out, she is making a statement.

A court would determine whether Joe’s provocative speech warranted physical assault in response. However, possible explanations of the event are now beginning to multiply, possibly beyond our powers to merge them into a single narrative. Was Joe drunk when he decided to attend a rally concerning a cause he was hostile to? Was Sue? Did either of them recently break up with a loved one? Had either suffered a death in the family; the loss of a job? What if one or both of them happen to be in the military?

Remember: if we’re talking about a political rally, especially one attended by the media, we’re talking about a possibly national social context, getting interpreted by millions of people with differing political, social, cultural motivations. (Perhaps even economic: Newspaper editor: ‘Did Joe bleed?’ Reporter: ‘No.’ Editor: ‘Then it goes to page 2.’)

But let’s stretch out the time-line of our narrative and see how the explanation fares. One act does not follow immediately after another. That gives the participants time to think over their responses; time enough to doubt the impulse of those responses:

‘Joe said something about feminist lesbians; later, Sue hit him.’

Now we have the narrative, but its explanatory force is considerably weakened – it all depends on how we interpret ‘later.’ If ‘moments later,’ then Sue’s response is almost immediate; if four days later, then Sue has probably been simmering in her anger and might be expected to have reconsidered her response; or, four days later, perhaps Sue’s thinking has become pathological, since she hasn’t used any of that time to reconsider different possible responses.

But let’s go back to the original narrative, and change its presuppositions:

‘Sue hit Joe, because she was drunk.’ Now we no longer bother with Joe’s behavior, but decide to explain Sue’s in the light of her possible drinking habits (and if the court sends her to rehab, that’s exactly the explanation the therapist will be concerned with).

I start here because it’s important to recognize that the way a social science discusses any behavior has to do with the focus of attention the science presumes. Psychologists researching alcoholic behaviors, or sociologists studying the increasing likelihood of violence from people who are inebriated, aren’t really going to be that interested in any presumed provocation for the behavior – which is not to say that they will be uninterested: for instance, assume, for the moment, that Sue and Joe are related, in a family with a history of alcoholism and/or abuse. Then the provocation will take on increased importance – especially when brought before the legal system.

We should consider, then, that different social sciences, having differently focused interests, will develop different explanations for the same behavior. A researcher in political science may note whether, at a rally, either Joe or Sue had been drinking, but only as an aside. The study will concern the volatile nature of personal confrontations over political issues, and the implications of the media broadcast of these conflicts for the coming election. A sociologist might be more concerned with the ways in which Sue and Joe identify with their different social groups, and why these groups come into conflict. And so on.

This ‘same behavior, different explanations’ phenomenon we find in the social sciences actually enriches the value these sciences have for us. Human behavior is extraordinarily complex, and understanding it cannot be reduced to a ‘unified theory of everything’ without doing injustice to the individuals and groups involved.

But therein lies the weakness of the social sciences, because, as sciences, they need to come up with generalized explanations, even within their specialized focus. Usually this takes the form of statistical analysis and probability predictions derived from these: ‘60% of women named Sue will behave violently, when a man named Joe utters words perceived as insulting, under conditions X, Y, Z.’ The problem with this is, what about the other 40% of women named Sue? Are they now to be held under suspicion, that meetings with any Joe might lead to violence? (The danger of any human science, as predictive of behavior – injustice to the individual. We are not all of a stamp. Otherwise there would have been no change throughout history.)
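To make the shape of such probability claims concrete, here is a minimal sketch, with entirely invented numbers, of the kind of statistical generalization described above. It shows why an aggregate rate (the hypothetical ‘60% of women named Sue’) licenses only a hedged population-level claim, and why the same rate drawn from a small sample says even less:

```python
# A toy illustration (hypothetical numbers) of a probabilistic
# social-science claim: an aggregate rate describes a population,
# not any individual case, and its reliability shrinks with the sample.
from math import sqrt

def rate_with_interval(hits, n, z=1.96):
    """Observed proportion with a normal-approximation 95% interval."""
    p = hits / n
    margin = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Suppose 60 of 100 observed 'Sues' responded violently under conditions X, Y, Z.
p, lo, hi = rate_with_interval(60, 100)
print(f"estimated rate: {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> estimated rate: 0.60 (95% CI 0.50-0.70)

# The same 60% drawn from only 10 cases is far less informative:
p2, lo2, hi2 = rate_with_interval(6, 10)
print(f"small sample:   {p2:.2f} (95% CI {lo2:.2f}-{hi2:.2f})")
# -> small sample:   0.60 (95% CI 0.30-0.90)
```

Even the tighter interval only bounds the population rate; it predicts nothing determinate about the next individual Sue – which is precisely the injustice-to-the-individual worry raised above.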

Unlike the natural sciences – where, at least at macro-levels, event B follows event A with complete regularity, as long as all subjects remain of the exact same class under exactly the same conditions – the social sciences can, at best, give us ‘rules of thumb.’ But these have importance, insofar as such ‘rules of thumb’ inform the intuitions that guide our judgments, and can provide us with a picture of ourselves – almost as broad, as deep, as variable and complex, as we humans actually are.

The meaninglessness of “race”

It occurs to me that if one were to grant ‘race’ status to all the genetic differences that pass down through generations within given populations, expressing themselves in physical differences, we would have a multitude of ‘races,’ maybe hundreds; maybe even thousands. Pygmies, Bush People, Zulus, Swahili – these are all so phenotypically different that we must reject any notion that there is a ‘race’ we can call “Negroid.” Similarly with the Irish, the Swedish, Hungarians, Southern and Northern Italians – etc., and the outdated classification “Caucasian.” (And I admit I have enough Irish in me – part Pict, part Celt, part Moor, part Norse – that I would hate to be classified with the British! Up the Republic!)

The effort to define ‘races’ biologically is really an effort to find some meaningful way to categorize according to skin color. And you can’t get there from here.

I also wonder about the willingness to argue for what is neither scientifically supported nor any longer ethically acceptable. If we’re talking about a political ‘mess,’ well, politics is messy, especially given long established traditions and biases. But if we’re talking about a possible scientific “mess,” the whole notion of a biologically ‘real’ race seems to be about as messy – and as a-historical – as one could get.

If the word “race,” applied to those of differing genetic and ethnic backgrounds, is in any way ambiguous and open to differing interpretations (if it is in any way vague and unspecific) – as is obvious, as really anyone with a decent education must admit – then of course the supposed categorization “race” can have no scientific value whatsoever.

Notably, it appears that the only scientists continuing to use it in a meaningful way are those with open social agendas, such as ‘bio-criminologists,’ who hope that certain behaviors can be tagged to certain populations for better monitoring and therapeutic interventions. The problem is, these social agendas engender as many political problems as they seek to resolve. (‘So, ok, what do we do with these black people, anyway?’ – I dunno; maybe treat them like human beings, providing education and jobs, perhaps? And it might help to keep white cops from shooting them outright because they look different.)

Let’s face it: ‘Race’ is an anachronism, a word and an understanding entirely social, with no scientific basis whatsoever. It is a mere excuse for political, economic, social and cultural biases – used to control the population drift in voting blocks, labor, intermarriage, and cultural enjoyment. Scientifically speaking, it is pure fiction – the remnant of fairy tales that we should have stopped telling at least a century ago.

There is nothing scientific about it; it is pure pablum for immature minds unwilling to live in the present of our multi-cultural post-modern world.

Let’s view this matter in an historical perspective.

Historically, the term ‘stars’ once referred to any object seen in the night sky.

The term ‘star’ was made scientifically useful only by re-definition, exclusively encompassing those objects that could be interpreted as suns within given planetary systems.

The question, then, is whether ‘race’ can also be salvaged by redefinition. The answer would appear to be no, because it carries far too much weight politically, socially, culturally, and historically, none of which can be adequately stripped from it.

One reason I mentioned tribal and ‘national’ phenotype differences, is because in the past, and in some regions still today, these have been taken as establishing “racial” identities – which has led (and still leads, in some places) to useless wars and genocidal ‘ethnic cleansing.’

Why hold on to a term that has been used for highly questionable purposes, when it lacks the precision needed to be useful in biological categorization?

Those desperate to cling to ‘racial’ differences between us will seek out the slightest nuance, in genetics, in biological texts – in reports in popular media. Anything that will re-affirm their own preposterous sense of superiority.

I’m reminded here of the earnest young person, studying a billboard seen for the first time, insisting, ‘there must be some reason that things go better with Coke.’ Yeah, it’s called a sales pitch.

But here’s the biological fact of the matter: If I have a (non-contraceptively-inhibited) sexual encounter with a member of the opposite sex of any supposed ‘race’ – black, yellow, ‘Chinese,’ Australian Aboriginal – a child can be produced. That’s because our genes are fundamentally the same, the differences being superficial and phenotypic. We belong to the same species.

Some will here interject discussion of ‘breeds’ as we see in other animals, but here’s the problem – ‘breeds’ are the result of externally controlled reproduction. But humans procreate uncontrollably – really, if he/she has two legs, we’ll copulate. And that difference makes all the difference. There is no external, internal, or genetic means of tracking the reproductive history of any particular human lineage. Thus, while the phenotypical differences are obvious, there is no grounding genotypical difference between the ‘races’ – the ethnically different from different locales.*

The phenotypical differences generate the beautiful kaleidoscope of human experience. But they don’t make us fundamentally different – on the contrary, they assure us that we are fundamentally the same. They could not have arisen were we as genetically different as racists would have us be.
____

* Just by the way, it should be noted that mixed-heritage offspring (so-called ‘mongrels’) of controlled breeding are almost inevitably hardier and more likely to survive than their pure-bred parents (apparently inheriting the most adaptive genes from both parental lineages). Can we not learn from this? Genetic purity is a fundamental flaw in the scheme of evolution. The greater the difference, the greater the chance for survival.

The need to enforce law against the conservative religious

Do Christians and Muslims and Jews, all members of the same supposed ‘Abrahamic’ lineage of belief, worship the same God? (The Mormons form a special case, since they insist they belong to this lineage, but are in fact polytheists.) At any rate, as most of us may already know by now, apparently Christians at Wheaton College don’t think so. ( http://www.npr.org/2015/12/20/460480698/do-christians-and-muslims-worship-the-same-god ) (The ‘liberally’ minded professor of the article, suspended for suggesting to students that all the Abrahamic religions believe in one god, was finally fired for not recanting.)

Interestingly, the NPR article makes a spectacular theological misstatement: “Christians, however, believe in a triune God: God the father, God the son (Jesus Christ) and the Holy Spirit.” No; that should read “SOME Christians, however, believe in a triune God: God the father, God the son (Jesus Christ) and the Holy Spirit.” (I would settle for ‘most;’ Unitarians are known as such because they don’t believe this.) Indeed, hundreds of ‘heretics’ have been slaughtered over the centuries for not accepting this ‘triune’ nature of the divine, while still claiming to be Christian. And of course differing interpretations of this triune nature have kept the Catholics and Orthodox at schism for almost as many centuries. (Is god three persons blended into one? or three manifestations of the same? Remember, people have died over this seeming splitting of hairs.)

Conservative believers of any religion generally have a very narrow understanding of the kind of god that they allow; and unfortunately these believers belong to competing sects (even if supposedly within a single religion), leading to interminable debate always threatening to break out into open violence.

Of course we should all know by now that al-Baghdadi of Daesh doesn’t even think all Muslims worship the same god (and would execute those not worshiping his). But that’s the fundamentalist way – utter paranoia that someone somewhere believes differently than they. What narrow minds these faithful have! And how little faith – because of course, one can’t have faith in god and yet fear that others might not believe. If god is so powerful, what challenge could non-believers ever pose him? Obviously the faithful are fearful for themselves – and on some level doubt the power of the god they keep threatening others with.

Probable psychological diagnosis: religious conservatism is born of guilt – the fear that one’s self does not truly believe in the manner expected by the mysterious ur-father (who, after all, reads all our deepest thoughts – so any doubt, he will know). Religious conservatives thus must constrain – punish – or destroy any who openly doubt without evident divine retribution (suggesting that doubt is beyond the power of the divine), in order to re-affirm their own faith (and thus deny doubts that subconsciously haunt them). Religious conservatism is thus an extreme form of projectively indirect (and vicariously masochistic) self control.

‘I’ll have to constrain – punish – or destroy you; otherwise I must punish or destroy myself (since leaving you be shows a lack of self-constraint).’

Does this make sense? No, of course not. But pathology never needs to make sense; it must only follow a lock-step of ‘reasoning’ – if B follows A, then C must be done – whether B actually does follow A or not. Basically, the conclusion is reached first, then premises are decided upon to support it. Conservative religionists are paranoiacs who find sanction for their fears – and (often violent) reactions – in texts written by ancient tribes, the historicity of which they cannot grasp and will not allow.

Liberal religionists have learned to prioritize their trust in god’s mercy and justice above their private fears and guilt. They are not threatened by differences nor by the thought or practices of non-believers, nor by those of believers in competing sects. But, though they frequently try, they can find no reconciliation with conservatives of the same religion, let alone those of other religions. Because conservative religionists will brook no reconciliation of differences. It is the very existence of difference that threatens them – and against which they act, through stridency of doctrine, segregation – or open violence.

I’m afraid the stridency of the conservatively religious makes rapprochement between them and others only enforceable by legislation. Once the law is established, for the safety and security of society as a whole, we can then tell the strident, ‘keep your god in your own damn house of worship, and leave the rest of us alone!’ (I know that also sounds somewhat strident; but really, one gets tired of getting preached at by every fanatic with a god.)

At any rate, the notion that religions, left to their own devices, can come to some equitable understanding, is frankly a little naive. It has happened, on occasion, in certain cosmopolitan centers in different cultures; but such peace is fragile unless enforced by law. Conservative believers have a difficult time accepting that others might not only believe differently than they, but might also live decent, meaningful, even happy lives believing differently. (That’s what really pisses them off.)

Our potential for violence: same as it ever was

It’s a profound mistake to think that human nature has undergone gross improvement over the centuries. Biological evolution doesn’t work that way – why should we think social evolution (whatever that might be) does?

No one denies that progress occurs – in some fields within certain cultures, in given historical periods; but not without continuing potential for regress. When I was young, capital punishment seemed on its way to becoming a thing of the past; now the arguments against it are barely heard in public. Sometimes it is 2 steps forward, 1 step back; unfortunately, it can just as well prove the other way around.

I want to discuss an article by experimental psychologist Steven Pinker, which I found alternately amusing and irritating: http://www.theguardian.com/commentisfree/2015/sep/11/news-isis-syria-headlines-violence-steven-pinker

Pinker’s claim is that statistical evidence of the declining incidence of wars and violent crimes indicates that human nature has been changing for the better, that we are becoming more tolerant, and less likely to resort to violence in our relationships, both personal and political. “As modernity widens our circle of cooperation, we come to recognise the futility of violence and apply our collective ingenuity to reducing it.”

Pinker’s article is the most ridiculously skewed, narrowly focused argument I’ve read in a long time. But it’s quite in keeping with the painfully artificial optimism of academic progressivists – from Marxists still prophesying a global revolution to robotics experts promising ever greater ‘leisure’ for humanity (read: unemployment and poverty). Undoubtedly, the worst delusion fostered by Modernity is that social evolution (technological, cultural, psychological) can somehow hasten biological evolution – if we can’t somehow realize the full potential of our species, why, transhumanists will create another species that will inherit that potential and improve upon it.

Pinker only picks the ripest cherries for his argument. No talk at all about state surveillance and other oppressive measures used to keep the masses in line in many countries. No discussion of the exigencies of global capitalism, which has produced some pressure to avoid military confrontation, but which has also generated cultures of vicious competition and inhuman indifference to the spread of poverty. Further, Pinker is pretending to offer an argument concerning statistics over history – but surely this is a profoundly truncated history we are given!

“From a high in the second world war of almost 300 battle deaths per 100,000 people per year, the rate rollercoasted downward, cresting at 22 during the Korean war, nine during Vietnam and five during the Iran-Iraq war before bobbing along the floor at fewer than 0.5 between 2001 and 2011.”

The Second World War was only 70 years ago; the potential for another such war remains problematic. One dirty bomb in the wrong city would shift these numbers somewhat, I should think.

But this is an odd way to count the dead, anyway. We’re not looking at some bugs under a microscope. The victims of ISIS would hardly breathe relief reading this article: ‘ah, well, but over all the species is doing better!’ Nor can this reckoning account for how WWII happened to begin with. The 19th century had its fair share of pacifist savants and progressivists promising the dawn of new eras of enlightenment and camaraderie. Yet WWII saw the violent death of more than 100 million globally in less than 10 years (if we see Spain, Manchuria, and Ethiopia as part of that war, as I think we should). What happened to all that pacifism? How did the new era of enlightenment meet such a bloody finale?
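The oddity of this way of counting can be put in simple arithmetic. Here is a toy sketch, with round invented casualty figures (only the approximate world-population figures are historical), of how identical absolute carnage yields a ‘declining’ per-capita rate merely because the population has grown:

```python
# A toy illustration of per-capita casualty counting: the same absolute
# number of deaths looks "better" as a rate simply because the world
# population grew. The death figure below is invented for illustration.
def deaths_per_100k(deaths, population):
    return deaths / population * 100_000

# Same hypothetical 1,000,000 battle deaths, two different eras:
rate_1945 = deaths_per_100k(1_000_000, 2_300_000_000)   # world pop. ~2.3 billion
rate_2011 = deaths_per_100k(1_000_000, 7_000_000_000)   # world pop. ~7 billion

print(round(rate_1945, 1))  # -> 43.5 per 100,000
print(round(rate_2011, 1))  # -> 14.3 per 100,000
```

By this measure the later era is three times ‘less violent,’ though exactly as many people lie dead – which is the sense in which the rate obscures rather than explains.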

Pinker can’t even consider such a question, yet it’s the question that can’t be avoided when trying to make sense of his argument. The fact remains that neither political movements nor statistical trends can fully explain, or prophesy, what current social configurations will produce tomorrow, or exactly how. How could the tragedy of 9/11 really lead to America launching an aggressive war of conquest against an uninvolved nation, leading to the chaos in the mid-east with which we deal today? Pinker reads this as just a spike on a graph – how impoverished an ‘explanation’ is that?

Do we really want to suppose that human nature leaves us less prone to violence now because the Soviet Union lost 10% of its population in WWII, while the US lost less than .2% of its population in Vietnam – and an even smaller percentage in the current Mid-East entanglements? Beyond the weird statistical parameters needed to make that suggestion, the kind of argument going on, if we carry that suggestion to term, not only begs the question of what ‘more or less violence’ would actually mean (supposedly indicative of evolving empathy, charity, and tolerance, as we might define ‘niceness’), but would actually beggar it by reducing it to a matter of the most obvious instances of egregious transgression.

Now, I don’t question the statistics Pinker is using, but the narrow selection of categories measured, and the kind of argument Pinker seems to be making, which, IMO, is facile and specious.

I’m reminded of the efficiency expert who needed to account for the contentment of the workers in a given factory, and whose sole criterion for this was the number of complaints workers made (in a company where any complaint would lead to immediate termination). Statistic: 0 complaints; conclusion: happy workers.

The measure is true; but there is something wrong with the choice of what’s to be measured. And the structure and style of the argument seem divorced from actual experience, because they lack any depth or breadth of consideration of context.

One long standing argument for capital punishment has been that it cannot be inhumane (which would violate the Constitutional interdict against inhumane punishment), because there are humane means of killing the sentenced person. So presumably, if one kills another ‘humanely,’ with legal authority, then no violence is involved, and everyone remains ‘nice’ and innocent? There is a line of reasoning that goes down that path, and it shows up whenever the SCOTUS has to decide cases involving methodology of execution. But the stronger, more basic argument for capital punishment is that the state reserves the right to violence against individuals and groups that threaten the interests of government or of the people as a whole.

That reservation (and I know of no national government that has forsworn it) tells us that, although global capitalism has for now largely re-channeled violence into forms of financial competition, the future of warfare has not been settled.

And that is what rebels, terrorists, fanatics and the occasional war-mongering dictator (and police and military responses to these) remind us – not that we are more violent than we have been in the past, but that potential for great violence remains within us pretty much the same as it ever was.

 

—–

From comments made at the Electric Agora.

Justice in the court of rhetoric

The court of rhetoric has two jurisdictions. The first is that of public discourse, and anyone is invited to the jury. The other is that of those trained in rhetorical analysis. That sounds as if the trained critic of rhetoric ought to be considered the ‘Supreme Court’ of the whole domain, or at least, one might say, ‘the final court of appeal’. But in fact the matter is the other way around; the public decides what rhetoric is persuasive by their active responses to it – by being persuaded by it. The critic has largely an advisory role. The critic clarifies the claims, discovers the fallacies, weighs the epistemic ground of the rhetoric – the unstated assumptions, the evidence provided for the claims, the implications of tropes and innuendos and their possible consequences.

A number of problems recur in the court of rhetoric, which explains why many people, from fascistic censors to philosophers, mistrust or even hate it. The chief of these, as I have discussed before, is that rhetoric, to be properly judged as successful, is not to be judged on whether its claims are right or wrong; in order to understand rhetoric as rhetoric, the principal determination of successful rhetoric is whether it works or not – whether it persuades its intended audience. So rhetoric arguing for ethically repugnant positions may be considered successful, if in fact it wins over its audience. Nobody’s really happy with that (except the successful rhetorician), but it is true nonetheless – how could it be otherwise? Rhetoric is a tool, not a strict form of communication; its whole reason for existence is getting others to do what one wants – whether voting a certain way, buying a certain product, or simply experiencing certain feelings leading to certain acts or behavioral responses. There is no logic to the statement “I love you,” but its rhetorical value is clear; and lovers have been relying on it for many centuries. What does the statement communicate? Maybe that the utterer loves the audience; but maybe not. That judgment waits on consequences.

That is another problem for the court of rhetoric: Rhetorical analysis and criticism, like any analysis, is directed towards the past – towards what has been said and what has unfolded as consequence of the success or failure of this. But rhetoric in practice is always directed towards the future – toward hoped-for events, behaviors, and consequences. That makes it difficult to adjudge a rhetorical usage successful or not until it has actually proved successful (or not). What a critic of rhetoric can achieve, concerning a current rhetorical practice, is to determine the strength of its claims, the assumptions it depends on, the nature of its tropes and implications, and the possible consequences of accepting these.

Yet this leads to another problem. The court of rhetoric does not have the same standard of judgment as that of logic. Logic judges much like a criminal court – the judgment is supposedly decided as absolute – “beyond a reasonable doubt.” The court of rhetoric, like civil procedure courts, decides on the standard of, “the weight of the evidence.” This is actually a just premise, because claimants before the court of rhetoric have opposing beliefs, not simply opposing interests. It would be unjust to one who actually believes in a position morally repugnant to others to assert that ‘no reasonable person would believe that, therefore they are lying.’ Of course they believe it – humans believe in a lot of objectionable, even repugnant things. They aren’t lying; they believe in what they are saying; the question then is whether their claims are weaker or stronger than counter-claims by those who believe otherwise.

To an absolutist mode of thought, trained in logic, that is really hard to comprehend. Yet the court of rhetoric cannot function otherwise without itself committing injustice – otherwise it becomes a mere tool of a censor’s agenda.

Yet a strong and well-informed critic of rhetoric ought to be able to demonstrate when ethically questionable rhetorical claims are also weak rhetorical claims, because what is ethically questionable often relies on prior claims that are inadequately supported. Donald Trump’s claim that most Mexican immigrants are involved in criminal behavior, or that American Muslims celebrated the 9/11 attacks, can be easily undercut through reference to statistics in the first instance, or reliable reports by those on the scene in the second. So these are weak claims before the court of rhetoric. Yet Trump’s rhetoric resonates with a small percentage of the population riddled with fears of differing ethnic groups and differing religions. This must not only be acknowledged, but addressed. Simply saying that what Trump says is ‘untrue’ or ‘unjust’ misses the complexity of what is going on (and frankly does injustice to his presumed audience). It also sets up opponents of Trump with a blind side: First, we lose sight of the appeal he has for his audience, and thus will find it more difficult to understand that audience and find some way to appeal to them with a countering rhetoric. Then, if we think the issue is Trump’s being ‘wrong,’ or simply lying, this may lull us into believing that all we need do is dismiss what he says. But in the public arena, this amounts to ignoring what he says. That means that his potential audience has only what he says to rely on, to find some comfort in their already held fears and beliefs. That means that Trump’s essentially weak claims will appear stronger to his audience than they actually are. The danger is that Trump’s rhetoric may begin persuading a potential audience without adequate response. Then, as has happened all too often in the past, weak rhetorical claims could prove successful.

Which should remind us that the judgments made in the court of rhetoric actually have profound practical consequences. The chief of these is that its determinations contribute to a stronger rhetoric in response to ethically questionable claims. It’s not enough to say that Trump is ‘wrong;’ one has to win over his audience, or at least his potential audience. And that requires a stronger rhetoric than Trump himself deploys, supplementary to any logical or other reasonable arguments we make against what he has to say. (Clinton’s suggestion, that Trump’s remarks on Muslims would be used for recruitment to ISIS, while not strictly true when made, was actually a clever rhetorical move – which since has been somewhat validated.)

As with courts of civil law, and unlike criminal courts or that of logic (which chop between the black-and-white of true-or-false), the court of rhetoric must adjudicate cases on a grey scale. That is because opposing interests are rarely easy to decide between, especially if grounded in beliefs truly held by the opponents; and because rhetoric triggers a host of responses – emotional, social, cultural – that are not reducible to ‘reasonably held’ positions.*

The art of persuasion – its theory, its practice, its criticism – is not about what is wrong or right or true or false, and never about some ‘view from nowhere’ or what some god might want us to be – and certainly not about what world we might prefer to live in. It is about the world as it is, and about people as they are. That understandably frustrates us; but the world is by nature a disappointment.

—–

*Part of the reason for having a careful study of rhetoric is that it clears some of the ground for further study of human psychology and of social and cultural relationships.

Dogs pee on trees, humans tell lies

There is nothing inherently wrong with lying. Lying, misrepresentation, deception – these are simply tools in the language kit-bag we all carry, in order to communicate comfortably with others who may or may not share our perspectives. I have known quite a number of people rigidly committed to ‘absolute honesty’ (including myself, at one point in my life). I have never known anyone who has not told lies, who does not regularly or periodically, even routinely, tell lies. Especially to themselves (how could one believe one’s self ‘absolutely honest’ otherwise?). Asking the human animal not to tell lies is as effective as asking canine animals not to spritz their scent on trees. It is part of their being.

So: Two quotes, as comment:

“I assure you that (politicians do not lie). They never rise beyond the level of misrepresentation, and actually condescend to prove, to discuss, to argue. How different from the temper of the true liar, with his frank, fearless statements, his superb irresponsibility, his healthy, natural disdain of proof of any kind! After all, what is a fine lie? Simply that which is its own evidence.” – Oscar Wilde, “The Decay of Lying”

“(T)he wise thing is for us diligently to train ourselves to lie thoughtfully, judiciously; to lie with a good object, and not an evil one; to lie for others’ advantage, and not our own; to lie healingly, charitably, humanely, not cruelly, hurtfully, maliciously; to lie gracefully and graciously, not awkwardly and clumsily; to lie firmly, frankly, squarely, with head erect, not haltingly, tortuously, with pusillanimous mien, as being ashamed of our high calling.” – Mark Twain, “On the Decay of the Art of Lying”

The question is not whether we will lie – the only question is whether we lie viciously, or appropriately and kindly.

(Of course, being the King of New York, I find it useful not to lie about anything but my age – I’m really 16 – and the fact that I’ve been to the moon twice disguised as two different astronauts.)

In any event: As comparison (with a Kantian flavor as spiced by Schopenhauer), we would find it difficult to imagine a world in which everyone was trying to murder everyone else at least at some point in their life (although certain role-play computer games come awfully close). But it is not difficult to imagine a world in which everybody lies at some point to someone, because that is the world in which we live.

Take a specific example. Often we find the debates concerning what might be called ‘ethical lying’ circling around the ‘white lie,’ a falsehood which benefits another. I want to consider a social imperative that effectively moots the ethic altogether.

Most of us in the working class are expected to lie at work. At the very least, we are expected to reply to an executive’s inquiry concerning our morale, that we are content with our jobs, and loyal to our employers. Not doing so can result in punitive repercussions, even expulsion from employment. ‘Not happy with your job? Well, don’t worry, you’re not working here anymore.’

This kind of lie-inducing situation has recurred with variation for many centuries; it’s a fundamental feature of any class structured society. In some previous cultures the punitive measures could include torture or death.

But contemporary employees are also expected to lie on behalf of their employers (as part of their presumed loyalty), the most obvious case being retail employees who must sell to clients for maximum profits, regardless of actual value to purchasing consumers.

Now before we condemn the dishonesty generated by such situations tout court, we should be aware that there are important gradations among differing power relations and the persons involved. My favored car mechanic appears to be pretty upfront – ‘these are our services, and this is what we charge,’ with no (apparent) add-ons. But the work is good enough so that if there are add-ons, I’m willing to ignore them.

There are also ranges within which such dishonesty reaches limits and actually becomes open to rebuttal and punishment. It is such moments of open transgression that are socially noticeable and raise issues of honesty and dishonesty in public discussion.

Now, to return to the more general problem: One reason we have no difficulty in condemning certain acts, like murder, as inherently wrong or immoral (whether categorically or prima facie) is because so few people actually perform such acts (when murders are committed en masse, of course, we call this “war” and criticize it using other criteria). But how can we hold as inherently wrong or immoral a behavior that everybody engages in at some point or other?

This, BTW, has been the dilemma facing the moralists of major religions for generations, and has generated reams of religious apologetics, qualifications and hair-splitting, as well as practices of repentance and redemption, temperance and forgiveness. E.g., sexual intercourse without intent to reproduce is technically considered wrong even among major Christian churches, but there is a kind of wink or dispensation when it occurs between married partners – as long as it is not done ‘lustfully,’ i.e., enjoying one’s own sex through the body of another. (But how is this even meaningful?)

Do we simply condemn the whole species as ‘evil’ (as some churches do)? Or do we admit that human behaviors are complex and indefinitely variable, and that ethics rather trails after many of them, as a means of understanding rather than prescription? (And, yes, we can have an ethic that treats different behaviors in different ways – utilitarian sometimes, deontological otherwise, virtue ethics for ourselves, Confucian regarding our parents, Aristotelian regarding our friends, etc., etc.)

So what I suggest here is that we conceive of an ethic that treats behaviors universally engaged in, along a spectrum of social acceptability, in a different way from behaviors that are infrequent and/or openly transgressive.

Lying is simply a form of communication, and a behavioral tactic of social survival. The presumed transgressive character of the behavior, which is useful to teach in order to prescribe the limits of acceptability, is something of a pretense – again, useful; but not entirely honest.

But what human behavior could ever be?

“I thought I would be honest –
what a dream!”
– Browning

—–
Developed out of a comment made at: http://theelectricagora.com/2015/11/26/the-scrooge-charade/