Toward a phenomenology of television

I admit that I've lost anything but a passing interest in contemporary film and television. I'm not entirely in the dark on such matters; I browse YouTube occasionally, and I have a store nearby where I can find used DVDs for as little as a buck. A year ago, I went on a jag there, buying and binge-watching police procedurals from the first decade of the present century. But in general, I don't watch television and stay away from special-effects spectaculars. (The last film I actually went to a theater to see was Godzilla 2014; but then I have a soft spot for Big Greeny from my childhood, and just wanted to make sure they treated him with respect. I doubt I'll go to any of the proposed sequels, though.) And 3-D doesn't interest me. Harpo Marx, asked about Lenny Bruce, who was achieving notoriety at the time, replied: "I have nothing against the comedy of today; it is just not my comedy."

However, having had to study the phenomenon of television in grad school, and having invested considerable time in thinking about, talking about, watching, and even, in my youth, making film, I do have some general remarks that may be useful here.

First, never lose sight of the economic background here. Both commercial cinema and television are primarily business enterprises. The purpose of film production is to provide entertainment enough to attract audiences willing to spend money on it. This has caused considerable friction between those who provide capital for production and those who come to filmmaking with a particular vision that they are hoping to realize.

The purpose of a television show is to produce an audience large enough to sell to advertisers. (This is obviously less true of the secondary markets, DVDs and on-demand viewing; technology has changed that dynamic, although it is still in effect on most of cable.) This is actually a considerably lower bar than selling tickets at a theater, since the audience needs only enough incentive to watch at a particular time so that it can be delivered to advertisers. A show needs only to be less uninteresting than competing programs in the same time slot in order to achieve this.

With these concrete observations in place, we can get into the phenomenology of the two media. The most important thing to grasp here – both easily recognized and yet easily forgotten – is that what distinguishes these media from all others, and differentiates them from each other, is their relationships to time, and how the makers of these media handle those relationships. Of course every medium establishes a relationship to time, and this relationship effectively defines the medium to a large extent. But each medium does this in a unique way, as opposed to all other media. * Yet one of the problems we have in distinguishing film and television as each a distinct medium is that fictional television seems to have a relationship to time similar to that of theatrical drama, or at least of film. This is not the case.
The true structural principle of television did not become recognizable until the late '70s, when television began broadcasting 24 hours a day. By the late '90s, when cable television was multiplying into literally hundreds of channels, it should have been obvious to all; but part of the success of television is that it depends on, and manipulates, our attention to the particular. Most people do not think of themselves as 'watching television.' They see themselves watching Seinfeld or Mad Men or The Tonight Show or 'some documentary about the North Pole.' In the immediate existential sense, they are quite right; it is the individual program to which they attend. The trouble is, when the Seinfeld rerun ends, many of them do not get up to do something more interesting in their lives; they sit there and watch Mad Men. Or at least let it play on while they discuss what it was like to live in the '60s, and then the Tonight Show… and if they can't get to sleep, it's that documentary about the North Pole on the Nature Channel, or an old movie on AMC (does it really matter which?), or an old Bewitched rerun….

Now it sounds like I’m painting a bleak portrait of the average television viewer. But such a viewer is what television is all about. And we should note that this says nothing against such viewers. They are presented with an existential dilemma: What to do with free time in a culture with little social cohesion and diminishing institutions that once provided that cohesion?

So, whereas film is about how to manage visuals and audio and story and acting in the compacted period of a couple of hours, television is about how to provide endless hours of possible viewing. It is not about this or that particular show – trends are more telling at any given moment. That CSI and NCIS and The Closer and The Mentalist and Criminal Minds, etc., etc., all appear in the same decade tells us more about what people found interesting on television that decade than any one of these shows, and certainly more than any one episode.

Which brings me to my real point. Although there are still some decent films being made on the margins and in other countries, the history of the cinema I knew and loved is at an end. Despite the fact that the basic premise of both movies is that a group of talented warriors gathers to defend the good against overwhelming force, there is no way to get from The Seven Samurai to The Avengers. That there is a core narrative conflict they share only means that there are core narratives shared across cultures, and we've known that for a long time.

But while the aesthetics of The Avengers is substantially different from that of The Seven Samurai, there is certainly an aesthetic at work in it. I am not willing to grant that television has any aesthetic at all. We can certainly discuss how aesthetic values are deployed in individual shows and individual episodes. But these are almost always borrowed from other media, primarily film. Television, simply as television, has no aesthetic value. And that cannot be said of film.

One way to note this is to admit that 'talking heads' television is what television does best. That, and talk-overs (as in sports), or banter, playful or violent, as on reality TV shows. Fictional shows can deploy aesthetic values, true; but only to get the viewer to the talk show, the next commercial, the next episode. Anything that accomplishes that will do.

Of course, what we end up discussing is the individual show, or the individual episode. And because television lacks aesthetic value of its own, it can fill endless hours deploying a multitude of aesthetic values from other media – poetry recitals, staged plays, documentaries, thrillers, old films, old television, various sports, news and commentary – perhaps ad infinitum. That's what makes commenting on individual shows so interesting – and yet undercuts any conclusion reached in such discussion. All the shows we find interesting today will be forgotten in the wave of the next trend tomorrow. But don't worry – there will always be reruns and DVDs. As long as there is a market for them, that is.
My general point here cannot be disconfirmed by any show, or group of shows, or discussion of these – such would only confirm part of my point: television's dependence on our attention to particulars.

One way to think of the general problem is to imagine rowing a boat on a river; upstream someone has tossed in a flower – perhaps it is even a paper flower, and we'll allow it to be quite lovely. So it drifts by us, and we remark its loveliness, while not addressing the many rotten pine cones that surround it. Now, do either the flower or the pine cones get us an aesthetic of the river? No. So 'bad' television tells us no more about the aesthetics of television than does 'good' television.

And the river trope has another use for us here. We know the flower was tossed into the river only recently; but the pine cones have been floating about us for some time. Yet to us, now rowing past these, the pine cones are contemporary with the flower.

We think of an old TV show, say Seinfeld, as if it were a phenomenon of the past; it isn't. Reruns are still playing in major markets, making it a viable competitor to Mad Men or even Game of Thrones. It is still contemporary television. (Television does not develop ahistorically, but the history of its development has been somewhat different from that of media where individual works are the primary product.) So an 'aesthetics of television' would need to account for that phenomenon as well – not just the aesthetics of Seinfeld or of Game of Thrones, but why these aesthetics are received by their differing audiences at the same moment in history – allowing, even, that many will watch both. And I suggest it would also have to address the aesthetics deployed in 'non-fiction' television (scare quotes because I'm not sure there is any such thing). I suggest this cannot be done. What television as television presents us is grist for the mills of sociology, semiotics, cultural history; but an aesthetics?

That doesn't mean that we shouldn't have criticism of individual episodes or discussion of favorite programs. In fact, most of us, having watched television or still watching it, are doomed to this. But we should be aware that, reaching for the flower, we may end up with a rotten pine cone – or, what is most likely, simply a handful of water, slipping through our fingers, returning to a river along which we merely float.

—–

* On the time issue: The art of cinema – that is, the cinema I know, which I admit no longer interests me, except on the margins – is defined by the control of time. This is also true of music and drama, but in a different way, since the filmmaker has a tool neither of the other two has: editing. Films were made on the editing board.

But this technique could be accomplished – at least to some extent – in the camera itself. Thus even amateur filmmakers, making home movies, deployed the aesthetic of the medium – a particular control of time that photography could not emulate. Simply picking up a movie camera and operating it immediately engages an aesthetic, however poorly realized and however unrecognized, even by the one using the camera.

Stories are inevitable in every medium; exactly because of this, each medium must define itself in terms of its approach to and presentation of stories, not the stories themselves, since stories will occur inevitably – and when they do not, the audience will invent and impose one.

To be less elliptical, then: film's dominant concern was – and still is, although in a way I no longer recognize – vision, in both the literal and figurative senses of that term, as we experience it through time.

While such considerations are understood by producers of television, that’s not what television is about. Television is about filling time with whatever, and getting the viewer to the next block of time (as defined by producers and advertisers). If a talking head can do this, there’s your television.

Again: We viewers are not the consumers of television – that would be the advertisers. We are the commodity that television sells to them.

That changes everything.


A reply to

“Medium, Message, and Effect” by David Ottlinger:  https://theelectricagora.com/2017/05/30/medium-message-and-effect/

Mathematical Platonism: A Comedy

Mathematical Platonism holds that mathematical forms – equations, geometric forms, measurable relationships – are somehow embedded in the fabric of the universe, and are 'discovered' rather than invented by human minds.

From my perspective, humans respond to challenges of experience. However, within a given condition of experience, the range of possible responses is limited. In differing cultures, where similar conditions of experience apply, the resulting responses can also be expected to be similar. The precise responses and their precise consequences generate new conditions to be responded to – but again only within a range. So while the developments we find in differing cultures can often end up being very different, they can also end up being very similar, and the trajectories of these developments can be traced backward, revealing their histories. These histories produce the truths we find in these cultures, and the facts that have been agreed upon within them. These facts, and the truths concerning them, are sustained so long as they prove reliable; when they cease to, each culture will generate new responses that prove more reliable.

Since, again, the range of these responses within any given set of conditions is actually limited by the history of their development, we can expect differing cultures with similar sets of conditions to recognize a similar set of facts and truths in each other when they at last make contact. That's when history really gets interesting, as the cultures attempt to come into concordance, or instead come into conflict – but, interestingly, in either case, part of what follows is that the two cultures begin borrowing from each other facts, truths, and possible responses to given challenges. 'Universal' truths are simply those that all cultures have found equally reliable over time.

This is true of mathematical forms as well, the most resilient truths we develop in response to our experiences. I don't mean that maths are reducible to the empirical; our experiences include reading, social interaction, professional demands, etc., many of which will require continued development of previous inventions. However, there's no doubt that a great deal of practical mathematics has proven considerably reliable over the years. Whereas, on the contrary, I find useless the Platonic assertion that two-dimensional triangles, or the formula A = πr², simply float around in space, waiting to be discovered.

So, in considering this issue, I came up with a little dialogue, concerning two friends trying to find – that is, discover – the mathematical rules for chess (since the Platonic position is that these rules, as they involve measurable trajectories, effectively comprise a mathematical form, and hence were discovered rather than invented).

Bob: Tom, I need some help here; I’m trying to find something, but it will require two participants.
Tom: Sure, what are we looking for?
B.: Well, it’s a kind of game. It has pieces named after court positions in a medieval castle.
T.: How do you know this?
B.: I reasoned it through, using the dialectic process as demonstrated in Plato's dialogues. I asked myself, what is the good to be found in playing a game? And it occurred to me that the good was best realized in the Middle Ages. Therefore, the game would need to be a miniaturization of Medieval courts and the contests held in them.
T.: Okay, fine, then let’s start with research into the history of the Middle Ages –
B.: No, no, history has nothing to do with this. That would mean that humans brought forth such a game through trial and error. We’re looking for the game as it existed prior to any human involvement.
T.: Well, why would there be anything like a game unless humans were involved in it?
B.: Because it's a form; as a form, it is pure and inviolate by human interest.
T.: Then what’s the point in finding this game? Aren’t we interested in playing it?
B.: No, I want to find the form! Playing the game is irrelevant.
T.: I don't see it, but where do you want to start?
B.: In the Middle Ages, they thought the world was flat; we’ll start with a flat surface.
T.: Fine, how about this skillet?
B.: But it must be such that pieces can move across it in an orderly fashion.
T.: All right, let’s try a highway; but not the 490 at rush hour….
B.: But these orderly moves must follow a perpendicular or diagonal pattern; or they can jump part way forward and then to the side.
T.: You’re just making this up as you go along.
B.: No! The eternally true game must have pieces moving in a perpendicular, a diagonal, or a jump forward and laterally.
T.: Why not a circle?
B.: Circles are dangerous; they almost look like vaginas. We’re looking for the morally perfect game to play.
T.: Then maybe it’s some sort of building with an elevator that goes both up and sideways.
B.: No, it’s flat, I tell you… aha! a board is flat!
T.: So is a pancake.
B.: But a rectangular board allows perpendicular moves, straight linear moves, diagonal moves, and even jumping moves –
T.: It also allows circular moves.
B.: Shut your dirty mouth! At least now we know what we’re looking for. Come on, help me find it. (begins rummaging through a trash can.) Here it is, I’ve discovered it!
T.: What, that old box marked “chess?”
B.: It’s inside. It’s always inside, if you look for it.
T.: My kid brother threw that out yesterday. He invented a new game called 'shmess' which he says is far more interesting. Pieces can move in circles in that one!
B.: (Pause.) I don't want to play this game anymore. Can you help me discover the Higgs Boson?
T.: Is that anywhere near the bathroom? I gotta go….

Bob wants a “Truth” and Tom wants to play a game. Why is there any game unless humans wish to play it?

A mathematical form comes into use in one culture, and then years later again in a completely different culture; assuming the form true, did it become true twice through invention? Yes. This is one of the unfortunate truths about truth: it can be invented multiple times. That is precisely what history tells us.

So, Bob wants to validate certain ideas from history, while rejecting the history of those ideas. You can't have it both ways. Either there is a history of ideas, in which humans participated to the extent of invention, or history is irrelevant, and you lose even "discovery." The Higgs Boson, on the other hand, gets 'discovered' because there is a hypothesis based on theory, which is itself based on previous observations and validated theory, experimentation, observation, etc. In other words, a history of adapting thought to experience. (No one doubts that there is a certain particle that seems to function in a certain way. But there is no Higgs Boson without a history of research in our effort to conceptualize a universe in which such a particle is possible, and to bump into it, so to speak, using our invented instrumentation, and to name it, all to our own purposes.)

Plato was wrong, largely because he had no sense of history. Beyond the poetry of his dialogues (which has undoubted force), what was most interesting in his philosophy had to be corrected and systematized by Aristotle, who understood history; the practical value of education; the differences between cultures; and the weight of differing opinions. Perhaps we should call philosophy “Footnotes to Aristotle.”

But I will leave it to the readers here whether they are willing to grapple with a history of human invention in response to the challenges of experience, however difficult that may seem; or whether they prefer chasing immaterial objects for which we can find no evidence beyond the ideas we ourselves produce.

The trolley problem and the complexities of history

This was originally a response to a discussion concerning the so-called trolley problem – a supposed ethical dilemma involving a choice to allow a trolley to speed toward five innocent people; or hit a switch that may re-direct it toward another innocent person on another track; or simply throw a person in front of the trolley in order to save the lives of the other five. Basically, a choice between deontological and utilitarian ethics. I can't remember whether it was devised by psychologists but is used by some philosophers as a thought experiment, or the other way around. It is, from my perspective, utterly useless.

Ethics can get very complicated. Or actually, it always is complicated, but when we make our actual decisions, we do so by focusing on specific details in the context in which the decisions are made.

Do we begin an understanding of ethics in Germany, by studying the behavior of the Germans and the Nazis in the ’30s and ’40s? Of course, but how could it be otherwise? And in such study our purpose is not to justify that behavior, but to understand it, and to derive principles, both positive and negative, according to which we have greater purchase over our own behavior in the future.

Having written a study on Hitler, I had to confront a wide range of behaviors in Germany in that era. In that confrontation, I had to ask some painful questions. What made highly intelligent and otherwise ethical doctors engage in crude and cruel 'experiments'? Why did supposedly decent truck drivers willingly deliver Zyklon B to the death camps, knowing what it was intended for? If one had asked a young soldier whether it was right to beat an infant to death, he would not only have rejected that suggestion, he would have been appalled. Yet the next day he would beat an infant to death, persuaded that the infant's Jewish descent, or the presumed wisdom of the officer ordering him to do this, effectively excused him from responsibility.

After ordering the police to form what were effectively death squads, to 'clean up' Jewish villages in Poland in the wake of the invasion, Himmler decided it was his duty to witness one of these mass executions. He came, he saw, he promptly threw up, sick with horror. Then he just as promptly reassured the men involved that they were engaging in terrible acts for the greater glory of Germany, and that they would be well remembered for their 'moral' sacrifice. (By the way, the notion that these special police had to follow orders in performing mass murders happens to be a lie. If any of them felt they could not in good conscience participate, they were re-assigned to desk jobs back in Germany. Partly for this reason they were replaced by the more dedicated SS.)

It is little known, but the Supreme Court of Germany, at least up to the time of my study, had never ruled Hitler's dictatorship or the laws made under it illegitimate; it held, rather, that they were completely constitutional for their time, and only superseded by the post-war constitution. That should give us pause.

Other odd facts raising troubling questions: Himmler, a trained agronomist, believed stars were ice crystals. But the Nazis condemned contemporary physics as 'Jewish science,' except of course when it could be used to build weapons. Goebbels held a doctorate in German literature – along with some 40,000 Nazis holding graduate degrees in various fields, including half the medical doctors in Germany.

A right-wing influence on the young in the '20s and '30s was a major folk-music revival. One of the most popular poets of this era was Walt Whitman in translation. Germany was peppered with pagan-revival religious cults, a movement dating back a century. The concentration camps were modeled in part on relocation camps for American Indians in the previous century.

Although homosexuals were oppressed and sent to camps in the later '30s, the leadership of the Nazi SA (Brownshirts) was notorious for its homosexual orgies (which led the Army General Staff to demand their execution, carried out in the Night of the Long Knives).

The Marxists in the Reichstag voted for Hitler's chancellorship, thinking that this would position them better to negotiate with the Nazis.

Sociological analysis indicates that a third of Germany’s population actively supported Hitler, another third decided to go along with him, because what the heck, what did they have to lose? The final third were opposed to Hitler, but after all, they were Germans, and respected his legitimate election. Given the brutal totalitarianism of the Nazis, by the time they thought to resist, they were stuck.

Hitler himself was a vegetarian, something of an ascetic who indulged only by pouring sugar in his wine; he ended up addicted to pain pills. He banned modern artists, but in his youth had hoped to become one. He was fond of Mickey Mouse cartoons. Once the war started, he found himself losing interest in Wagner's operas. He told his architect Speer that he wanted buildings that would make 'beautiful ruins.' He refused to marry his lover Eva Braun until the moment he determined that they both needed to die. In the bunker he admitted bitterly that Schopenhauer had been right, that the way of 'Will' was an exercise in futility, and that the Germans had proven the weaker race after all.

Historical facts like these present a wide array of ethical and political problems that aren’t going to be solved by simplistic reduction to binary choices, readily determined by psychologists or moral absolutists.

What next, the ‘five-year old Hitler dilemma’? – ‘if you could go back in time and shoot Hitler at age five, would you do so?’ Yes; double tap – and always put one in the brain.

Who are those five people the trolley is racing towards? Answer that question and the problem might be easier to solve.


Violence and identity

“I wouldn’t have it any other way”

The Wild Bunch is a 1969 film directed by Sam Peckinpah (written by Peckinpah and Walon Green) [1]. Nominally a Western, it tells the story of a gang of aging outlaws in the days leading up to their last gun battle.

After a failed payroll robbery, in which more innocents are killed than combatants, five surviving outlaws make their way into Mexico, broke and dispirited. The lead outlaw, Pike Bishop, remarks to his colleague Dutch that he wants to make one last big haul and then “back off.” “Back off to what?” Dutch asks, for which there is no answer. Finally Dutch reminds Bishop “they’ll be waiting for us,” and Bishop, the eternal adventurer, replies “I wouldn’t have it any other way.”

In Mexico, the Bunch, including the two Gorch brothers, Lyle and Tector, and Sykes, an old man who rides with them, visit the home town of their youngest member, Angel, which has recently suffered a visit by Federal troops under General Mapache, during which anti-Huerta rebel sympathizers were rooted out and murdered. The Bunch forms an odd bond with the townsfolk, but they're outlaws and they're broke. Eventually they make a deal with Mapache (who is advised by Germans, eager to see Mexico allied with them in the impending war in Europe) to rob a US arms train across the border. This robbery is successful, and they return to Mexico with the stolen arms (including a machine gun), pursued, however, by a group of bounty hunters led by Deke Thornton, a former outlaw whom Bishop once abandoned during a police raid on a bordello. Later, the bounty hunters will wound Sykes, whom the Bunch will abandon to his fate.

Along the trail, Angel, a rebel sympathizer himself, has some Indian friends carry away a case of guns and another of ammunition. Angel, however, has been betrayed by the mother of a young woman he killed in a fit of anger for having run off to join Mapache's camp followers. The outlaws complete their deal with Mapache, but surrender Angel to him. Deciding to let Mapache deal with the bounty hunters, they return to the Army headquarters in the ruins of an old winery. However, their betrayal of Angel haunts them. After a brief period of whoring and drinking, they decide to confront Mapache and demand the return of their colleague. Mapache cuts Angel's throat, and without hesitation Pike and Dutch shoot him down. At this point, the Bunch probably could take hostages and back off – but to what? Instead they throw themselves gleefully into a gun battle with some 200 Federales, and by taking control of the machine gun do quite a bit of damage. Eventually, however, the inevitable happens, and they end up dead, Pike shot by a young boy with a rifle.

As the surviving Federales limp out from the Army HQ, Thornton shows up. He sends the bounty hunters home with the outlaws' bodies, but remains to mourn the loss of his former friends. Sykes rides up with the rebel Indians who have saved him, and suggests Thornton join them. "It ain't like it used to be, but it'll do." Laughing in the face of fate, they ride off to join the revolution.

The thematic power of the film hinges on two apposite recognitions. The first is that the outlaws are bad men. They rob, they cheat, they lie, they kill without compunction. They seem to hold nothing sacred and have no respect for any ethical code.

The second recognition is that this judgment is not entirely complete or correct. They have a sense of humor and an undeniable intelligence. They are able to sympathize with the oppressed villagers in Mexico. They have a sense of being bound together, and this is what leads them to their final gun battle.

The Bunch have lived largely wretched lives. As professional outlaws, they are dedicated to acquiring wealth by criminal means, but throughout the film, it is clear that wealth offered only two things for them: prostitutes and liquor. Although Pike was once in love and thinking of settling down, and (the asexual) Dutch speaks wistfully of buying a small ranch, they are just as committed to the outlaw lifestyle as the unrepentant Gorches; they just would rather believe otherwise.

This is because they are committed to a life of violence, to the thrills of dangerous heists, of chases across the landscape of the Southwest, and of gun fights. They rob largely to support that lifestyle, not the other way around.

The finale of the film has two major points of decision, the first determining the second. The first is when Pike, dressing after sex with a prostitute, sits on the bed finishing off a bottle of tequila.  That’s his life; and with the wealth gotten from the Mapache deal, he could continue it indefinitely. In the next room, the Gorch brothers, also drunk, argue with another prostitute over the price of her services. That’s their life, too. Meanwhile, Angel is getting tortured to death for being an outlaw with a conscience. Pike slams the empty bottle to the floor, and the march into battle begins.

The second point of decision has already been remarked on.  The moment after shooting Mapache, when they might have escaped, the Bunch choose to fight instead. Why do they do it? It’s not for the money, the drinking or the prostitutes.  Is it for revenge?  No, it’s because they live for the violence, and they do so as a team, and they have reached the moment at which they can live it to its logical conclusion.

Peckinpah remarked that, for that moment to carry any weight, the outlaws needed to be humanized to the extent that the audience could sympathize with them. He was, I think, largely successful. But the film has been controversial, not only because of its portrayal of violence, but because in the climactic battle Peckinpah pushes our sympathies for the Bunch beyond mere recognition of their humanity. They become heroic, larger than life, almost epic figures, challenging fate itself in order to realize themselves, like Achilles on the field before Troy. And oddly, while not really acting heroically, they become heroes nonetheless, remembered by the revolutionaries who benefit from their sacrifice.

As a side remark, let’s note that Peckinpah was raised in a conservative Calvinist, Presbyterian household. But, like Herman Melville a century before, he was a Calvinist who could not believe in God.  In such a universe, some are damned, but no one is saved. We only realize our destiny by not having any. The Bunch destroy any future for themselves and thus, paradoxically, achieve their destiny. The fault is not in our stars, but in ourselves.

A Soldier’s Story

The Wild Bunch is set in the last months of the Huerta dictatorship (spring of 1914), a phase of the series of rebellions, coups d'état, and civil wars known collectively as the Mexican Revolution. [2] Officially, this revolution began with the fall of the Diaz regime and ended with the success of the Institutional Revolutionary Party (PRI), but rebellions and bloodshed had already permeated the Diaz regime and continued for a few years after the PRI came to power. In the official period of the revolution, casualties numbered approximately 1,000,000. When one discovers that the Federal Army only had about 200,000 men at any time, and that rebel armies counted their soldiers in the hundreds, one realizes that the majority of these casualties had to be non-combatants. Not surprisingly: the Federal Army, and some of the rebels, pursued a policy (advocated by our current US president) of family reprisal – once a rebel or a terrorist is identified, but cannot be captured or killed, his family is wiped out instead. Whole villages were massacred. Dozens of bodies would be tossed into a ditch and left to rot.

As I’ve said elsewhere, I’ve nothing against thought-experiments that raise ethical questions, only those that limit the possible answers unjustifiably. So let us now imagine ourselves in the mind of a young Federal soldier, whose commandant has ordered him to shoot a family composed of a grandmother, a sister, a brother – the latter having atrophied legs due to polio – and the sister’s six-year-old daughter. The relevant question here is not whether or not he will do this. He will. The question is why.

This is a kind of question that rarely, if ever, appears in ethical philosophy in the Analytic tradition. It is, however, taken quite seriously in Continental philosophy. There's a good, if uncomfortable, reason for this. Continental thinkers write in a Europe that survived the devastation of World War II, living among both the survivors of the Holocaust and its perpetrators. Analytic philosophers decided not to bother raising too many questions concerning Nazism or the Holocaust. Indeed, in the US, the general academic approach to events in Germany in the 1930s and '40s has been that they constituted an aberration. Thus, even in studies of social psychology, the Nazi participants in the Holocaust are treated as examples of some sort of abnormality, or as test cases in extremities of assumed psychological, social, or moral norms. This is utter nonsense. If that were true, then such slaughters would have been confined to Europe. And yet very similar things went on in the Pacific Theater: during the Japanese invasion of China, the number of casualties is estimated as running into the tens of millions.

There were a million casualties resulting from the Turkish mass killing of the Armenians, long before the Holocaust. There were several million victims of the Khmer Rouge in Cambodia, decades after the Holocaust. Far from being some psycho-social aberration, human beings have a facility for organized cruelty and mass slaughter.

At any rate, assuming that our young Mexican soldier is not suffering from some abnormal psychology, what normative thoughts might be going through his mind as he is about to pull the trigger on the family lined up before him?

For the sake of argument, we’ll allow that he has moral intuitions, however he got them, that tell him that killing innocent people is simply wrong. But some process of thought leads him to judge otherwise; to act despite his intuition. Note that we are not engaging in psychology here and need not reflect on motivations beyond the ethical explanations he gives for his own behavior.

While not a complete listing, here are some probable thoughts he might be able to relay to us in such an explanation:

For the good of the country I joined the Army, and must obey the orders of my commanding officer.

I would be broke without the Army, and they pay me to obey such orders.

These people are Yaqui Indians, and as such are sub-human, so strictures against killing innocents do not apply.

I enjoy killing, and the current insurrection gives me a chance to do so legally.

So far, all that is explained is why the soldier either thinks personal circumstances impel him to commit the massacre or believes doing so is allowable within the context. But here are some judgments that make the matter a bit more complicated:

This is the family of a rebel, who must be taught a lesson.

Anyone contemplating rebellion must be shown where it will lead.

This family could become rebels later on. They must be stopped before that can happen.

All enemies of General Huerta/ the State/ Mexico (etc.) must be killed.

Must, must, must. One of the ethical problems of violence is that there exist a great many reasons for it, within certain circumstances, although precisely which circumstances differ considerably from culture to culture, social group to social group, and generation to generation. In fact, there has never been a politically developed society for which this has not been the case. Most obviously, we find discussions among Christians and the inheritors of Christian culture concerning what would constitute a "just war" (which translates into "jihad" in Islamic cultures). But we need not get into the specifics of that. All states, regardless of religion, hold to two basic principles concerning the use of violence in the interests of the State: first, obviously, the right to maintain the State against external opposition; but also, secondly, the right of the State to use lethal force against perceived internal threats to the peace and stability of the community. We would like to believe that our liberal heritage has reduced or eliminated adherence to the latter principle, but we are lying to ourselves. Capital punishment is legal in the United States, and 31 states still employ it. The basic theory underlying it is quite clear: forget revenge or protection of the community or questions of the convicted person's responsibility – the State reserves the right to end a life deemed too troublesome to continue.

But any conception of necessary violence seriously complicates ethical consideration of violence per se. Because such conceptions are found in every culture and permeate every society – by way of teaching, the arts, laws, political debates, propaganda during wartime, etc. – it is likely that each of us has, somewhere in the back of our minds, some idea, some species of reasoning, some set of acceptable responses, cued to the notion that some circumstances, somewhere, at some time, justify the use of force, even lethal force. Indeed, even committed pacifists have to undertake a great deal of soul-searching and study to recognize these reasons and uproot them, and they are unlikely ever to get them all.

Many more simply will never bother to make the effort. They are either persuaded by the arguments for necessary force, or they have been so indoctrinated into such an idea that they simply take it for granted.

Because there are several diverse conceptions and principles of necessary violence floating around in different cultures, one can expect that this indoctrination occurs to various degrees and by various means. One problem this creates is that, regardless of its origin, a given conception or principle can be extended by any given individual. So today I might believe violence is necessary only when someone attempts to rape my spouse, but tomorrow I might think it necessary if someone looks at my spouse the wrong way.

The wide variance in possible indoctrination also means a wide variety in the way such a principle can be recognized or articulated. This is especially problematic given differences in education among those of differing social classes. So among some, the indoctrination occurs largely through friends and families, and may be articulated only in the crude assertion of right – “I just had to beat her!” “I couldn’t let him disrespect me!” – while those who go through schools may express this indoctrination through well thought-out, one might say philosophical, reasoning: “Of a just war, Aquinas says…” or “Nietzsche remarks of the Ubermensch…” and so on. But we need to avoid letting such expressions, either crude or sophisticated, distract us from what is really going on here. The idea that some violence is necessary has become part of the thought process of the individual. Consequently, when the relevant presumed – and prepared-for – circumstances arise, not only will violence be enacted, but the perpetrator will have no sense of transgression in doing so. As far as he is concerned, he is not doing anything wrong, even should the violent act appear to contradict some other moral interdiction. The necessary violence has become a moral intuition and overrides other concerns. “I shouldn’t kill an innocent, but in this case, I must.”

Again, this is not psychology. After more than a century of pacifist rhetoric and institutionalized efforts to find non-violent means of "conflict resolution," we want to say that we can take this soldier and "cure" him of his violent instincts. But what general wants us to do that? What prosecutor, seeking the death penalty, wishes that of a juror?

The rhetoric of pacifism and the institutionalization of reasoning for non-violence is a good thing, don’t misunderstand me. But don’t let it lead us to misunderstand ourselves. There is nothing psychologically aberrant in the reasoning that leads people to justify violence, and in all societies such reasoning is inevitable. It’s part of our cultural identity.  Strangely enough, it actually strengthens our social ties, as yet another deep point of agreement between us.

Being Violent

I’m certain that, given the present intellectual climate, some readers will insist that what we have been discussing is psychology; that Evolutionary Psychology or genetics can explain this; that neuroscience can pin-point the exact location in the brain for it; that some form of psychiatry can cure us. All of which may be true (assuming that our current culture holds values closer to “the truth” than other cultures, which I doubt), but is nonetheless irrelevant. It should be clear that I’m trying to engage in a form of social ontology or what might be called historically-contingent ontology. And ethics really begins in ontology, as Aristotle understood.  We are social animals, not simply by some ethnological observation, but in the very core of our being. We just have a difficult time getting along with each other.

It’s possible to change. Beating other people up is just another way to bang our own heads against the wall; this can be recognized, and changed, so the situation isn’t hopeless. As a Buddhist, I accept the violence of my nature, but have certain means of reducing it, limiting it, and letting it go. There are other paths to that. But they can only be followed by individuals. And only individuals can effect change in their communities.

This means we have to accept the possibility that human ontology is not an atemporal absolute. I know there is a long bias against that; but if we are stuck with what we have always been, we are doomed.

Nonetheless, the struggle to change a society takes many years, even generations, and it is never complete. Humans are an indefinitely diverse species, with a remarkable capacity to find excuses for the most execrable and self-destructive behavior. There may come a time that humans no longer have or seek justifications for killing each other; but historically, the only universal claim we can make about violence is that we are violent by virtue of being human, and because we live in human society.

Notes

  1. http://www.imdb.com/title/tt0065214/
  2. https://en.wikipedia.org/wiki/Mexican_Revolution

Reprinted from: https://theelectricagora.com/2017/02/11/violence-and-identity/

Simulation argument as gambling logic

I have submitted an essay to the Electric Agora, in which I critique the infamous Simulation Argument – that we are actually simulations running in a program designed by post-humans in the future – made in its strictest form by Nick Bostrom of Oxford University. Since Bostrom's argument deploys probability logic, and my argument rests on traditional logic, I admitted to the editors that I could be on shaky ground. However, I point out in the essay that if we adopt the probability logic of the claims Bostrom makes, we are left with certain absurdities; therefore, Bostrom's argument collapses into universal claims that can be criticized in traditional logic. At any rate, if the Electric Agora doesn't post the essay, I'll put it up here; if they do, I'll try to reblog it (although reblogging has been a chancy effort ever since WordPress updated its systems last year).


Towards the end of that essay, I considered how the Simulation Argument is used rhetorically to advocate for continuing advanced research in computer technology, in hope that we will someday achieve a post-human evolution. The choice with which we are presented is pretty strict, and a little threatening – either we continue such research, advancing toward post-humanity, or we are doomed. This sounded to me an awful lot like Pascal's Gambit – believe in god and live a good life, even if there is no god; or do otherwise, and live miserably and burn in hell if there is a god. After submitting the essay I continued to think on that resemblance, and concluded that the Simulation Argument is very much like Pascal's Gambit, and that its rhetorical use in support of advancing computer research, much like Pascal's use of his Gambit to persuade non-believers to religion, actually functions as a kind of gambling. This is all the more true of the Simulation Argument, since continued research into computer technology involves considerable expenditure of monies in both the private and the public sector, with post-human evolution being the offered pay-off to be won.


I then realized that there is a kind of reasoning that has not been fully described with any precision (although there have been efforts of a kind moving in this direction) which we will here call Gambling Logic. (There is such a field as Gambling Mathematics, but this is simply a mathematical niche in game theory.)


Gambling Logic can be found in the intersection of probability theory, game theory, decision theory and psychology. The psychology component is the most problematic, and perhaps the reason why Gambling Logic has not received proper study. While psychology as a field has developed certain statistical models to predict what percentage of a given population will make certain decisions given certain choices (say, in marketing research), the full import of psychology in the practice of gambling is difficult to measure accurately, since it is multifaceted. Psychology in Gambling Logic must account not only for the psychology of the other players in the game besides the target subject, but for the psychology of the target subject him/herself, and for the way the target subject reads the psychology of the other players and responds to her/his own responses in order to adapt to winning or losing. That's because a gamble is not simply an investment risked on a possible/probable outcome: the outcome either rewards the investment with additional wealth, or punishes it by taking it away without reward. But we are not merely accountants; the profit or loss in a true gamble is responded to emotionally, not mathematically. Further, knowing this ahead of the gamble, the hopeful expectation of reward, and anxiety over the possibility of loss, color our choices. In a game with more than one player, the successful gambler knows this about the other players, and knows how to play on their emotions; and knows it about him/her self, and knows when to quit.


Pascal's Gambit is considered an important moment in the development of Decision Theory. But Pascal understood that he wasn't simply addressing our understanding of the probability of success or failure in making the decision between the two offered choices. He well understood that the post-Reformation era in which he (a Catholic) was writing had seen the rise of a personality-less Deism, and some suggestion of atheism as well; many in his audience could be torn with anxiety over the possibility that Christianity was groundless, that there was no ground for any belief or for any moral behavior. He is thus reducing the possible choices his audience confronted to the two, and suggesting one choice as providing a less anxious life, even should it prove there were no god (but, hey, if there is and you believe, you get to Paradise!).
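A toy rendering of the Gambit's arithmetic may help here. The probability and the utilities below are my own illustrative assumptions, not Pascal's; the structural point is that once one outcome carries an enormous payoff, the payoff table, not the evidence, decides the bet.

```python
# Pascal's Gambit as a toy expected-value table.
# All numbers are illustrative assumptions, not Pascal's own.

P_GOD = 0.001  # any non-zero credence will do

# (choice, state of the world) -> utility
payoffs = {
    ("believe", "god"): 1e9,        # Paradise
    ("believe", "no god"): -1.0,    # the minor costs of piety
    ("disbelieve", "god"): -1e9,    # damnation
    ("disbelieve", "no god"): 0.0,
}

for choice in ("believe", "disbelieve"):
    ev = (P_GOD * payoffs[(choice, "god")]
          + (1 - P_GOD) * payoffs[(choice, "no god")])
    print(f"{choice}: expected value = {ev:,.2f}")

# believe:    expected value = 999,999.00
# disbelieve: expected value = -1,000,000.00
# "Believe" wins for any P_GOD > 0; the payoff structure does all the work.
```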


In other words, any argument like Pascal’s Gambit functions rhetorically as Gambling Logic, because it operates on the psychology of its audience, promising them a stress-free future with one choice (reward), or fearful doom with the other (punishment).


So recognizing the Simulation Argument as a gamble, let’s look at the Gambling Logic at work in it.


Bostrom himself introduces it by proposing the following trilemma:


1. “The fraction of human-level civilizations that reach a post-human stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or

2. “The fraction of post-human civilizations that are interested in running ancestor-simulations is very close to zero”, or

3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”


According to Bostrom himself, at least one of these claims must be true.

It should be noted that this trilemma actually collapses into a simple dilemma, since the second proposition is so obviously untrue: in order to reach post-human status, our descendants will have to engage in such simulations simply to develop the capacity for them.
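For what it's worth, the arithmetic behind the trilemma can be sketched in a few lines. As I read Bostrom's paper, the fraction of simulated observers falls out of two quantities: the fraction of civilizations that reach a post-human stage, and the average number of ancestor-simulations each such civilization runs. The toy values below are my own assumptions, chosen only to show how quickly that fraction approaches one.

```python
# A minimal sketch of the arithmetic behind Bostrom's trilemma.
# f_p:    fraction of human-level civilizations reaching a post-human stage
# n_sims: average number of ancestor-simulations a post-human civilization runs
# Assuming each simulation contains roughly as many observers as one
# unsimulated history, the fraction of observers who are simulated is:

def fraction_simulated(f_p: float, n_sims: float) -> float:
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Toy values (my assumptions): if even 1% of civilizations go post-human and
# each runs a mere 1,000 simulations, nearly all observers are simulated.
print(fraction_simulated(0.01, 1000))    # ~0.909
# Only if f_p or n_sims is driven toward zero (propositions 1 or 2) does the
# fraction stay small:
print(fraction_simulated(0.0001, 10))    # ~0.001
```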


Further, the first proposition is actually considered so unlikely, it converts to its opposite in this manner (from my essay): “However, given the rapid advances in computer technology continuing unabated in the future, the probability of ‘the probability of humans surviving to evolve into a post-human civilization with world-simulating capabilities is quite low’ is itself low. The probability of humans evolving into a post-human civilization with world-simulating capabilities is thus high.”


Now at this point, we merely have the probabilistic argument that we are currently living as simulations. However, once the argument gets deployed rhetorically, what really happens to the first proposition is this:


If you bet on the first proposition (presumably by diverting funds from computer research into other causes with little hope of post-human evolution), your only pay-off will be extinction.


If you bet against the first proposition (convert it to its opposite and bet on that), you may or may not be betting on the third proposition, but the pay-off will be the same whether we are simulations or not, namely evolution into post-humanity.


If you bet on the third proposition, then you stand at least a 50% chance of earning that same pay-off, but only by placing your bet by financing further computer research that could lead to evolution into post-humanity.


So even though the argument seems to be using the conversion of the first proposition in support of a gamble on the third proposition, in fact the third proposition supports betting against the first proposition (and on its conversion instead).


What is the psychology this gamble plays on? I'll just mention the two most obvious sources of anxiety and hope. The anxiety, of course, concerns the possibility of human extinction: most people who have children would certainly be persuaded that their anxiety concerning the possible future they leave their children to can be allayed somewhat by betting on computer research and evolution to post-humanity. And all who share a faith in a possible technological utopia in the future will be readily persuaded to take the same gamble.


There is a more popular recent variation on the Simulation Gamble we should note – namely, that the programmers of the simulation we are living in are not our future post-human descendants, but super-intelligent aliens living on another world, possibly in another universe. But while this is rhetorically deployed for the same purpose as the original argument, to further funding (and faith) in technological research, it should be noted that the gamble is actually rather weaker. The ultimate pay-off is not the same, but rather appears to be communion with our programmers. Well, not so enticing as a post-human utopia, surely! Further, that there may be such super-intelligent aliens in our universe is not much of a probability; that they exist in a separate universe is not even a probability, it is mere possibility, suggested by certain mathematical models. The reason for the popularity of this gamble seems to arise from an ancient desire to believe in gods or angels, or just some Higher Intelligence capable of ordering our own existence (and redeeming all of our mistakes).


It might sound as if, in critiquing the Simulation Gamble, I am attacking research into advances in computer and related technology. Not only is that not the case, but it would be irrelevant. In the current economic situation, we are certainly going to continue such research, regardless of any possible post-human evolution or super-aliens. Indeed, we will continue such research even if it never contributes to post-human evolution, and post-human evolution never happens. Which means of course that the Simulation Gamble is itself utterly irrelevant to the choice of whether to finance such research or not. I’m sure that some, perhaps many, engaged in such research see themselves as contributing to post-human evolution, but that certainly isn’t what wins grants for research. People want new toys; that is a stronger motivation than any hope for utopia.


So the real function of the Simulation Gamble appears to be ideological: it’s but one more reason to have faith in a technological utopia in the future; one more reason to believe that science is about ‘changing our lives’ (indeed, changing ourselves) for the better. It is a kind of pep-talk for the true believers in a certain perspective on the sciences. But perhaps not a healthy perspective; after all, it includes a fear that, should science or technology cease to advance, the world crumbles and extinction waits.


I believe in science and technology pragmatically – when it works it works, when it doesn’t, it don’t. It’s not simply that I don’t buy the possibility of a post-human evolution (evolution takes millions of years, remember), but I don’t buy our imminent extinction either. The human species will continue to bumble along as it has for the past million years. If things get worse – and I do believe they will – this won’t end the species, but only set to rest certain claims for a right to arrogantly proclaim itself master of the world. We’re just another animal species after all. Perhaps the cleverest of the lot, but also frequently the most foolish. We are likely to cut off our nose to spite our face – but the odd thing is our resilience in the face of our own mistakes. Noseless, we will still continue breathing, for better or worse.


—–

Bostrom’s original argument: http://www.simulation-argument.com/simulation.html.

The legacy of Hegel

I found this essay on my computer, written some time ago, and decided – since I haven’t been posting here for a while – that I would go ahead and put it up, although it is not completely polished. Yes, it’s about Hegel again – don’t get too annoyed! – I hope not to write on this topic again for some time. But I do consider here some issues that extend beyond the immediate topic. So –


I like to describe Hegel as the cranky uncle one invites to Thanksgiving dinner, having to suffer his endless ramblings, because there is an inheritance worth suffering for.


Hegel’s language is well nigh impossible. He understands the way grammar shapes our thinking before any training in logic, and uses – often abuses – grammar, not only to persuade or convince, but to shape his readers’ responses, not only to his text, but to the world. After studying the Phenomenology of Mind, one can’t help but think dialectically for some time, whether one approves of Hegel or not. One actually has to find a way to ‘decompress’ and slowly withdraw, as from a drug. (Generally by reading completely dissimilar texts, like a good comic novel, or raunchy verses about sex.)


How did Hegel become so popular, given his difficulty? First of all, he answered certain problems raised in the wake of, first, Fichte's near-solipsistic (but highly convincing) epistemology, and then Schelling's "philosophy of nature" (which had achieved considerable popularity among intellectuals by the time Hegel started getting noticed). But there was also the fact that he appears to have been an excellent and fascinating teacher at the University of Berlin. And we can see in his later lectures, which come to us largely through student notes, or student editing of Hegel's notes, that, while the language remains difficult, there is an undeniable charm in his presentation. This raises questions about how important teachers are in philosophy – do we forget that Plato was Socrates' student, and what that must have meant to him?

 

Finally: Hegel is the first major philosopher who believed that knowledge, being partly the result of history and partly the result of social conditioning *, was in fact not dependent on individual will or insight so much as on being in the right place at the right time – the Idea, remember, is the protagonist of the Dialectic’s narrative. The importance of the individual is that there is no narrative without the individual’s experience, no realization of the Idea without the individual’s achievement of knowledge.

 

However, despite this insistence on individual experience, Hegel is a recognizably ‘totalistic’ thinker: everything will be brought together eventually – our philosophy, our science, our religion, our politics, etc., will ultimately be found to be variant expressions of the same inner logic of human reasoning and human aspiration.

 

Even after the Pragmatists abandoned Hegel – exactly because of this totalistic reading of history and experience – most of them recognized that Hegel had raised an important issue in this insistence: namely, that there is a tendency for us to understand our cultures in a fashion that seemingly connects the various differences in experiences and ways of knowing, so that we feel, to speak metaphorically, that we are swimming in the same stream as other members of our communities, largely in the same direction. Even the later John Dewey, who was perhaps the most directly critical of Hegel’s totalism, still strongly believed that philosophy can tell the story of how culture comes together – why, e.g., there can be a place for both science and the arts as variant explorations of the world around us. We see this culminate, somewhat, in Quine’s Web of Belief: some nodes in the web can change rapidly, others only gradually; but the web as a whole remains intact, so that what we believe not only has logical and evidentiary support, but also ‘hangs together’ – any one belief ‘makes sense’ in relation to our other beliefs.

 

(Notably, when British Idealism fell apart, its rebellious inheritors, e.g., Russell and Ayer, went in the other direction, declaring that philosophy really had no need to explain anything in our culture other than itself and scientific theory.)

 

If we accept that knowledge forms a totalistic whole, we really are locked into Hegel’s dialectic, no matter how we argue otherwise.

 

Please note the opening clause, “If we accept that knowledge forms a totalistic whole” – what should follow is the question: is that what we are still doing, not only in philosophy but in other fields of research? I would suggest that while some of us have learned to do without it, all too many are still trying to find the magic key that opens all doors; and when they attempt that, or argue for it, Hegel’s net closes over them – whether they’ve read Hegel or not. And that’s what makes him still worth engaging: while he is largely forgotten, the mode of thought he recognized and described is still very much among us.

 

And this is precisely why I think writing about him and engaging his thought is so important. The hope that some philosophical system, or some science, or some political system will explain all and cure all is a failed hope, and there is no greater exposition of such hope than in the text of Hegel. The Dialectic is one of the great narrative structures of thought, and may indeed be a pretty good analog to the way in which we think our way through to knowledge, especially in the social sphere; it really is a rather persuasive reading of history, or at least of the history of ideas. But it cannot accommodate differences that resist resolution because they do not share the same underlying idea: for instance, the differing assumptions of physics as opposed to those of biology; or the divergent strategies behind different styles of novel or poetry; or the political problem of quite different, even oppositional, cultures having to learn to live in the same space, even within the same city.

 

If Hegel is used to address possible futures, then of course such opposed cultures need to negate each other to find the appropriate resolution of their Dialectic. That seemed to work with the Civil War; but maybe not really. It certainly didn’t work in WWI – which is what led Dewey finally to reject Hegel, proposing instead that only a democratic society willing to engage in open-ended social experimentation and self-realization could really thrive, allowing difference itself to flourish.

 

Finally, a totalistic narrative of one’s life will seem to make sense, and the Dialectic can be used to help it make sense. And when we tell our life-stories, whether aware of the Dialectic or not, this is to some extent what we are doing.

 

But the fact is, we must remember that – as Hume noted, and as reinforced in the Eastern traditions – the ‘self’ is a convenient fiction; which means the story we tell about it is also fiction. On close examination, things don’t add up, they don’t hang together. One does everything one is supposed to do to get a professional degree, and then the economy takes a downturn, and there are no jobs. One does everything expected of a good son or daughter, only to be abused. One cares for one’s health and lives a good life – and some unpredictable illness strikes one down at an early age. I could go on – and not all of it is disappointment – but the point is that, while I know people who have exactly perfect stories to tell about successful lives, I also know others for whom living has proven so disjointed that it’s impossible to find the Idea that the Dialectic is supposed to reveal.

 

Yet the effort continues. We want to be whole as persons, we want to belong to a whole society. We want to know the story of how we got here, why we belong here, and where all this is going.

 

So in a previous essay **, I gave (I hope) a pretty accurate sketch of the Dialectic in outline – and of why it might be useful, at least in the social sciences (it is really in Hegel that we first get a strong explication of the manner in which knowledge is socially conditioned). And the notion that stories have a logical structure – and thus effectively form arguments – I find intriguing and important. ***

 

But ultimately the Dialectic cannot explain us. The mind is too full of jumble, and our lives too full of missteps on what might better be considered a ‘drunken walk’ than a march toward inevitable progress.

 

So why write about it? Because although in America Hegel is now largely forgotten, the Dialectic keeps coming back; all too many still want it – and I don’t mean just the Continental tradition. I mean we are surrounded by those who wish for some Theory of Everything, not only in physics, but in economics and politics, social theory, etc. And when we try to get that, we end up engaging the dialectical mode of thought, even if we have never read Hegel. He just happened to be able to see it in the thinkers of Modernity, beginning with Descartes and Luther. And we are still Moderns. When we want to make the big break with the past and still read it as a story of progress leading to us; or when we think we’ve gotten ‘beyond’ the arguments of the day to achieve resolution of differences, and attain certain knowledge – then we will inevitably engage the Dialectic. Because as soon as one wants to know everything, explain everything, finally succeed in the ‘quest for certainty’ (which Dewey finally dismissed as a pipe-dream), the Dialectic raises its enchanting head, replacing the Will of God that was lost with the arrival of Modernity.

 

That is why (regardless of his beliefs, which are by no means certain) Hegel’s having earned his doctorate in theology becomes important. Because as a prophet of Modernity, he recognized that the old religious narratives could only be preserved by way of sublation into a new narrative of the arrival of human mind replacing that divine will.

 

In a sense that is beautiful – the Phenomenology is in some way the story of humankind achieving divinity in and through itself. But in another way, it is fraught with dangers – have we Moderns freed ourselves from the tyranny of Heaven only to surrender ourselves to the tyranny of our own arrogance? Only time will tell.

 

—–

 

* Much of what Hegel writes of social conditioning is actually implicit in Hume’s Conventionalism; Hegel systematizes it and makes it a cornerstone of his philosophy. (Kant, to the contrary, always assumes a purely rational individual ego – which is exactly the problem that Fichte had latched onto and reduced to ashes by trying to get to the root of human knowledge in desire.)

 

** https://nosignofit.wordpress.com/2016/10/13/hegels-logical-consciousness/

Full version: http://theelectricagora.com/2016/10/12/hegels-logical-consciousness

 

*** I’ll emphasize this, because it is the single most important lesson I learned from Hegel – narrative is a logical structure; a story forms a logical argument, a kind of induction of particularities leading into thematic conclusions. I hope to return to this in a later essay.

 

Misadventures in the dialectic

Or, a nasty thing happened on the way to the forum

Originally published at: https://theelectricagora.com/2016/11/04/misadventures-in-the-dialectic-or-a-nasty-thing-happened-on-the-way-to-the-forum/

Thus precisely in labour where there seemed to be merely some outsider’s mind and ideas involved, the bondsman becomes aware, through this re-discovery of himself by himself, of having and being a ‘mind of his own’. [1]

When Hegel, in the Phenomenology of Mind, makes an abrupt transition from epistemology per se (how we know about anything at all) into an historicized social epistemology (how knowledge is socially and historically conditioned), he begins at an odd point in history, with an analysis of the relationship between lords and bondsmen; or, as it is better known, the Master-Slave dialectic.  What the Master learns in this dialectic is that he not only commands things, but does so through the mediation of commanding his slaves.  It is the Slave, however, who turns out to be the real protagonist in this narrative – what he learns is the necessity of living for others, and through that, his own independence from “things”; that is, from the material.

In a series of important lectures in the 1930s, the Master-Slave Dialectic received an interpretation by the Russian emigre to France, Alexandre Kojeve, which had enormous impact on French intellectual history, especially on Existentialist thinkers like Sartre, as well as on the development of Lacanian psychoanalysis.  [2]  Although written more than ten years after Kojeve’s lectures, Albert Camus’ The Rebel (1951), a text widely popular among those who have never even heard of Kojeve, is in fact a response to those lectures.

Within Existentialism itself (and in French philosophy generally), an ongoing debate over the Marxist implications of Kojeve’s lectures emerged.  Indeed, the Marxian narrative of the historical development of a Materialist Dialectic arriving at Modern capitalism (in preparation for a future communism) depends on the Master-Slave Dialectic, because it assumes that the economy of the Roman Empire was principally a “slave economy” [3]; that is, slaves provided the primary means of production, as well as the central market (in the exchange of slaves) and the essential social structure of the empire – there were slave owners, there were slaves, and there were cast-off slaves who, scrounging for work where they could find it, formed a nascent proletariat.

A reasonable interpretation of the Phenomenology (given Hegel’s own historical interests and biases) suggests that Hegel’s writing here arose as a meditation on the introduction of Christianity into the culture of Rome [4].  When Hegel wrote this, scholars believed – as they did until quite recently – that Christianity spread through the Empire by appealing to the poor; i.e., to slaves and former slaves [5].  Recent scholarship, however, has proven this untrue, and it appears that Christianity’s greatest appeal in Rome was to the middle classes – businessmen, lawyers, tradesmen [6].  (Only a middle class could afford the charitable social work that Christians engaged in.)  This does not really threaten Hegel (who, after all, is talking about ideas, and in a most general way), but it doomed Marxist historiography.

Evidence has been piling up that the economy of the Roman Empire was not primarily a slave economy, but a sophisticated capitalist one, based on international trade [7].  Even without the accumulating evidence, one realizes that it couldn’t have been otherwise.  The Roman Empire not only conducted trade with client states in the Mediterranean, but with co-existing empires over which they had no direct control, including those in India and China, as well as with cultures in Africa, which they had no desire to control.  Such trade could not be centered around a market for slaves – beyond precious metals or mere commodity exchange, there had to be negotiable systems of exchange of wealth with symbolic representation of equivalent value, namely money.  And where there is money, there is capitalism [8].

There is some irony here, however: long before the archaeological evidence showing that Rome was in fact a capitalist society was unearthed and pieced together, documentary evidence of this already existed (dating from the reign of Nero), and it has been available to literati since the 17th century.   I don’t mean accounting records, some Roman economist’s commentary, or remarks made by some court historian.  I’m referring to a work of prose fiction; indeed, one of the funniest, most incisive, and, surprisingly, most realistic texts ever written:  The Satyricon, attributed to Petronius Arbiter [9].

We don’t really know who wrote Satyricon.  We don’t even know the original shape of it.  All we have are fragments, preserved in monastic libraries until the 17th century, when secular book collectors got their hands on them thanks to Protestant looting of those libraries [10].   Some evidence suggests that the fragments are mere slivers from a much longer work, but internal evidence from the text itself shows a remarkable thematic consistency, suggesting that the fragments we do have at least form a narrative sequence within any larger whole. [11]

Satyricon is a wild ride through the underbelly of Roman society of its time.  The narrative is what later would be called a picaresque, a disjointed series of adventures of social outcasts, whose main interests in life are sex (primarily homoerotic), food, and finding some way to acquire the capital with which to procure them.   The narrator and protagonist of the story, Encolpius, has just dropped out of the Roman equivalent of an undergraduate course in literature, in order to compete with a former lover (Ascyltus) for the affections of a young boy, Giton. [12]  Being an educated lowlife, Encolpius isn’t interested in finding suitable employment, but instead tries attaching himself to well-to-do patrons.  This leads to bizarre sexual experiences, meetings with failed poets, tasteless feasts put on by Roman tradesmen, fake religious rites (always good for initiating orgies), and capture by pirates at sea.  As the fragments close, the story doesn’t appear to be going well: Eumolpus, an aging poet and tutor to whom Encolpius has attached himself, fails to realize an inheritance, which effectively condemns him to death among those who had been supporting him.

The most famous sequence of the narrative is Encolpius’ attendance at a banquet thrown by a successful tradesman, Trimalchio.  The sequence is a fairly complete, unified set-piece.  We first find Trimalchio at a recreation center, playing ball.  When he has to urinate, a slave rushes up with a bucket so that Trimalchio can relieve himself while still playing.  Meanwhile, another slave counts the balls that Trimalchio recurrently loses in play (to recover later), so that his master can toss out a new ball with every flub, as if he hadn’t lost any.   The tone is thus set for one of the most outrageous displays of conspicuous consumption – and conspicuous waste – in the history of Western literature.

At length some slaves came in who spread upon the couches some coverlets upon which were embroidered nets and hunters stalking their game with boar-spears, and all the paraphernalia of the chase.  We knew not what to look for next, until a hideous uproar commenced, just outside the dining-room door, and some Spartan hounds commenced to run around the table all of a sudden.  A tray followed them, upon which was served a wild boar of immense size, wearing a liberty cap upon its head, and from its tusks hung two little baskets of woven palm fibre, one of which contained Syrian dates, the other, Theban.  Around it hung little suckling pigs made from pastry, signifying that this was a brood-sow with her pigs at suck.  It turned out that these were souvenirs intended to be taken home.  When it came to carving the boar, our old friend Carver, who had carved the capons, did not appear, but in his place a great bearded giant, with bands around his legs, and wearing a short hunting cape in which a design was woven.  Drawing his hunting-knife, he plunged it fiercely into the boar’s side, and some thrushes flew out of the gash.  Fowlers, ready with their rods, caught them in a moment, as they fluttered around the room and Trimalchio ordered one to each guest, remarking, “Notice what fine acorns this forest-bred boar fed on,” and as he spoke, some slaves removed the little baskets from the tusks and divided the Syrian and Theban dates equally among the diners. [13]

This would seem to support a Marxian analysis of the culture of a slave-based economy; but there’s a problem.  Trimalchio’s biography has to be pieced together from his own remarks, those of his guests, and portraiture found on the walls of the hall leading to the banquet room.   But it amounts to this:  Trimalchio had been born a slave to a wealthy merchant.  He had proven so good at his chores that he rose to the position of steward of the estate of the merchant, who provided him with an allowance.  This he saved and invested until he could buy his freedom and position himself as inheritor of the merchant’s business [14].  Trimalchio has since spent his life acquiring greater wealth and rubbing it in the noses of failed businessmen whom he turns into his personal court of sycophants.

The banquet seems to be winding down, probably intended to end at dawn [15] (like Plato’s Symposium, which it somewhat parodies), when Trimalchio (always one to sing his own praises) reveals the intended epitaph on his tomb:

Here Rests G Pompeius Trimalchio

Freedman Of Maecenas

Decreed Augustal, Sevir In His Absence

He Could Have Been A Member Of

 Every Decuria Of Rome But Would Not

Conscientious Brave Loyal

He Grew Rich From Little And Left

Thirty Million Sesterces Behind

He Never Heard A Philosopher

 Farewell Trimalchio

 Farewell Passerby [16]

Well, that’s his story, and he’s sticking with it, even after death: a dash of truth in a swill of self-admiration.

After a violent argument with his wife (formerly a prostitute) over his bisexual promiscuity, Trimalchio then returns to this theme, by effectively staging his own funeral; whereat he eulogizes himself in the crudest manner possible, boasting of his use of sex, investments, and shady business practices to build a financial empire.  “So your humble servant, who was a frog, is now a king.”  [17]

So much for the slave coming to self-consciousness by realizing the importance of working for others!

The Satyricon is the rotten apple in the bushel, not only of literary history, but of the literature of history.  Besides being unabashedly pornographic, unrepentantly cynical in the nastiest way, and thoroughly disrespectful of social manners while dismissive of any aspiration toward decency and good fellowship, the Satyricon paints an unnervingly realistic portrait of the people of ancient Rome and of their social environment.   It’s not a pretty picture, and it fails to conform to any of the expectations into which we have been long indoctrinated, by traditional historical narratives or the works of art that disseminated these.  Rome was not just monumental architecture and statues in the forum.  It was an ugly, over-populated metropolis, with tenement slums, a criminal underworld, thriving markets riddled with unethical business practices.  Alcoholism and drug abuse were rampant, and the working classes found their greatest distraction in public displays of cruelty, in the arena.   But more importantly, the people, as we find them in the Satyricon, are completely like ourselves.  We’ve met these people, we see them all around us.  Donald Trump is just a variant Trimalchio.  And who hasn’t encountered a pedantic professor pummeling students with bloated jargon that even he doesn’t understand?   I myself knew someone rather like a straight Encolpius in college; a bright mathematics student, he went through seven different sexual relationships in one semester (his general attitude toward women was best expressed in his parody of a classic song: “nothing could be finah than to wake in some vagina in the mo-o-orning…”).  There was never a day I met him when he wasn’t drunk or hung-over.

Moral improvement, political progress, aspirations toward a greater enlightenment and a brighter future – fables we tell ourselves to bring order to our lives and provide our children with hope.  To all such pretense the Satyricon raises a middle finger (as occasionally do its characters in the text).

What has really changed in human nature since Petronius?  We claim to know more about the world, but apparently we still do not know ourselves.  For two thousand years, Europe was able to mask this lack of self-recognition with a powerful ideological machine, supported by a monumental institutional structure with intimidating influence among political leaders.  As this began to fall apart, scientists, philosophers, poets and political revolutionaries sought to develop a similarly powerful ideology with an equal ability to suppress self-recognition.  But these are only stories, after all – told in mathematics sometimes, more often in heated rhetoric, but all just fables that we hope are true.  The only real change Modernity brought us has been new technology.   And all the new technology has accomplished is providing new commodities for thriving markets riddled with unethical business practices and war-mongers.

Marx is dead, but Hegel survives, as one of the grand fables of Modernity’s explanations for why we have any ideology at all and why we feel satisfied with our supposed progress [18].  Reading Hegel helps us to understand how we wish to think of ourselves, and of the history that we believe created us.  But the Satyricon shows us people as they are, at least in any complex, mercantile culture that we care to call a civilization.  Not all people, but enough that we should be more aware of – see with greater clarity – our own social environment, which hasn’t really improved so much in three thousand years.

Notes

[1] Hegel, The Phenomenology of Mind; B. Self-consciousness, IV. The true nature of Self-Certainty, A. Independence and Dependence of Self-Consciousness: Lordship and Bondage.  J. B. Baillie translation, 1910.

[2] Kojeve, who served in the French government after WWII, always claimed to be a Marxist, even a Stalinist, but slathered insults on the Soviet Union, and remained friends to the end with conservative political philosopher (and former student of Heidegger’s) Leo Strauss, whose best known student is Allan Bloom.  Bloom was the editor of the English translation of Kojeve’s lectures, 1969:

https://u.osu.edu/dialecticseastandwest/files/2016/02/KOJEVE-introduction-to-the-reading-of-hegel-zg6tm7.pdf.   (Camus’ response to Kojeve, The Rebel, is also online: https://libcom.org/files/The-Rebel-Albert-Camus.pdf.)   Bloom’s best known student is Francis Fukuyama, who acted as de facto philosophic counsel to the George W. Bush administration; his best known text, The End of History and the Last Man (1992), grew out of a 1989 essay; essay prospectus: http://www.wesjones.com/eoh.htm

[3] See: http://www.marxist.com/historical-materialism-study-guide.htm.

[4] The Master-Slave Dialectic actually precedes a discussion of the Roman philosophies of Stoicism and Skepticism.  For Hegel, Christianity found its natural intellectual home in Rome, because Rome had produced the individualization of consciousness that Christianity requires, while exhausting all the reasonable expression of it possible within Roman culture itself.  (Per Hegel, Jewish culture, wherein Christianity originated, had found itself in a cul-de-sac of rigid, written “divine” law and inherited custom.)  By now, it should be obvious that we see in Hegel, not a theological explanation of history, but an historical explanation of theology, at least given the assumptions and accepted scholarly knowledge available to Hegel.

[5] Thus, for instance, Nietzsche’s claim that Christianity represented a “slave morality.”

[6] See the review of scholarly opinion at http://christianthinktank.com/urbxctt.html, especially section 13, “Christianity was mostly made up of ‘middling-plus’ class folks: merchants, tradesmen, craftsmen.”

[7] See:  https://en.wikipedia.org/wiki/Roman_economy

[8] Even Marx understood this, which is one reason he hated the very idea of money. See: https://www.marxists.org/archive/marx/works/1844/manuscripts/power.htm. He just hoped that money had been a recent invention.  Nope; it’s been here throughout most of recorded history.  See:  https://en.wikipedia.org/wiki/History_of_money.  I warn the reader that in this instance, the Wiki article is flawed, since it concentrates entirely on the history of money in the West.  In fact, there is evidence that the Chinese developed money at roughly the same time as the West, but paper currency much earlier.  See: http://www.nbbmuseum.be/en/2007/09/chinese-invention.htm.

[9] Our translation is that of W. C. Firebaugh (1922), which includes fascinating, if dated, scholarly notes:  http://onlinebooks.library.upenn.edu/webbin/gutbook/lookup?num=5225.

[10] See:  http://bookmendc.blogspot.com/2010/10/transmission-of-text-of-petronius.html.  My suspicion – but this is only a guess – is that clergy believed the text worthy of preservation, despite its scandalous material, because it included necessary keys to colloquial Latin.  Some Roman slang is only preserved in the Satyricon.  Besides, as Augustine argued in Civitas Dei, not only was the Roman Empire a dung heap, but secular history, as opposed to Sacred History – i.e., the relationship between Man and God – was entirely a waste of time.  See: http://sacs-stvi.org/augustine-on-the-concept-of-history.

[11] For instance:  Early in the text we get a discussion of the cannibalism performed on their children by mothers in besieged cities; and the existing text ends with Eumolpus demanding that his executioners eat his body.

[12] A requirement in the study of rhetoric, which tells us that Encolpius – like Augustine, two centuries later – was intended by his family for a career in law.

[13] Satyricon, Chapter Fortieth.

[14] And it certainly helped that he was the merchant’s lover, or “mistress,” as he remarks with drunken pride.

[15] It should end at dawn, but when Trimalchio hears the cock crow, he immediately orders it caught and cooked.

[16] Satyricon, Chapter Seventy-First.  “Decreed Augustal, Sevir In His Absence/ He Could Have Been A Member Of/ Every Decuria Of Rome But Would Not” – Trimalchio claims that he was appointed to the Priests of Augustus, and would have been welcome in any of the officially recognized cults of Rome; but (he implies) his modesty prohibited acceptance of such honors.

[17] Satyricon, Chapter Seventy-Seventh.

[18] In order to have an ideology, we must confront external disagreements with and internal contradictions to our beliefs, which are then resolved and appropriated, negated and cancelled, or marginalized and ignored.  We thus arrive at generalities that we comfortably assume are necessary and superior to those that came before.  Hegel’s is not the only description of this process, but it is in many ways the most powerful.  My argument here has been that the evidence of the Satyricon is that the margins keep coming back, the contradictions are rarely resolved, and it is an inevitable human trait to be thoroughly disagreeable.