Mathematical Platonism: A Comedy

Mathematical Platonism holds that mathematical forms – equations, geometric figures, measurable relationships – are somehow embedded in the fabric of the universe, and are ‘discovered’ rather than invented by human minds.

From my perspective, humans respond to challenges of experience. However, within a given condition of experience, the range of possible responses is limited. In differing cultures, where similar conditions of experience apply, the resulting responses can also be expected to be similar. The precise responses and their precise consequences generate new conditions to be responded to – but again only within a range. So while the developments we find in differing cultures can often end up being very different, they can also end up being very similar, and the trajectories of these developments can be traced backward, revealing their histories. These histories produce the truths we find in these cultures, and the facts that have been agreed upon within them. As these facts and the truths concerning them prove reliable, they are sustained until they don’t, at which point each culture will generate new responses that prove more reliable.

Since, again, the range of these responses within any given set of conditions is actually limited by the history of their development, we can expect differing cultures with similar sets of conditions to recognize a similar set of facts and truths in each other when they at last make contact. That’s when history really gets interesting, as the cultures attempt to come into concordance, or instead come into conflict – but, interestingly, in either case, partly what follows is that the two cultures begin borrowing from each other facts, truths, and possible responses to given challenges. ‘Universal’ truths are simply those that all cultures have found equally reliable over time.

This is true about mathematical forms as well, the most resilient truths we develop in response to our experiences. I don’t mean that maths are reducible to the empirical; our experiences include reading, social interaction, professional demands, etc., many of which will require continued development of previous inventions. However, there’s no doubt that a great deal of practical mathematics has proven considerably reliable over the years. On the contrary, I find useless the Platonic assertion that two-dimensional triangles, or the formula ‘A = πr²’, simply float around in space, waiting to be discovered.

So, in considering this issue, I came up with a little dialogue, concerning two friends trying to find – that is, discover – the mathematical rules for chess (since the Platonic position is that these rules, as they involve measurable trajectories, effectively comprise a mathematical form, and hence were discovered rather than invented).

Bob: Tom, I need some help here; I’m trying to find something, but it will require two participants.
Tom: Sure, what are we looking for?
B.: Well, it’s a kind of game. It has pieces named after court positions in a medieval castle.
T.: How do you know this?
B.: I reasoned it through, using the dialectic process as demonstrated in Plato’s dialogues. I asked myself, what is the good to be found in playing a game? And it occurred to me, that the good was best realized in the Middle Ages. Therefore, the game would need to be a miniaturization of Medieval courts and the contests held in them.
T.: Okay, fine, then let’s start with research into the history of the Middle Ages –
B.: No, no, history has nothing to do with this. That would mean that humans brought forth such a game through trial and error. We’re looking for the game as it existed prior to any human involvement.
T.: Well, why would there be anything like a game unless humans were involved in it?
B.: Because it’s a form; as a form, it is pure and inviolable by human interest.
T.: Then what’s the point in finding this game? Aren’t we interested in playing it?
B.: No, I want to find the form! Playing the game is irrelevant.
T.: I don’t see it, but where do you want to start?
B.: In the Middle Ages, they thought the world was flat; we’ll start with a flat surface.
T.: Fine, how about this skillet?
B.: But it must be such that pieces can move across it in an orderly fashion.
T.: All right, let’s try a highway; but not the 490 at rush hour….
B.: But these orderly moves must follow a perpendicular or diagonal pattern; or they can jump part way forward and then to the side.
T.: You’re just making this up as you go along.
B.: No! The eternally true game must have pieces moving in a perpendicular, a diagonal, or a jump forward and laterally.
T.: Why not a circle?
B.: Circles are dangerous; they almost look like vaginas. We’re looking for the morally perfect game to play.
T.: Then maybe it’s some sort of building with an elevator that goes both up and sideways.
B.: No, it’s flat, I tell you… aha! a board is flat!
T.: So is a pancake.
B.: But a rectangular board allows perpendicular moves, straight linear moves, diagonal moves, and even jumping moves –
T.: It also allows circular moves.
B.: Shut your dirty mouth! At least now we know what we’re looking for. Come on, help me find it. (begins rummaging through a trash can.) Here it is, I’ve discovered it!
T.: What, that old box marked “chess?”
B.: It’s inside. It’s always inside, if you look for it.
T.: My kid brother threw that out yesterday. He invented a new game called ‘shmess’ which he says is far more interesting. Pieces can move in circles in that one!
B.: (Pause.) I don’t want to play this game anymore. Can you help me discover the Higgs Boson?
T.: Is that anywhere near the bathroom? I gotta go….

Bob wants a “Truth” and Tom wants to play a game. Why is there any game unless humans wish to play it?

A mathematical form comes into use in one culture, and then years later again in a completely different culture; assuming the form true, did it become true twice through invention? Yes. This is one of the unfortunate truths about truth: it can be invented multiple times. That is precisely what history tells us.

So, Bob wants to validate certain ideas from history, while rejecting the history of those ideas. You can’t have it both ways. Either there is a history of ideas, in which humans participated to the extent of invention, or history is irrelevant, and you lose even “discovery.” The Higgs Boson, on the other hand, gets ‘discovered,’ because there is an hypothesis based on theory which is itself based on previous observations and validated theory, experimentation, observation, etc. In other words, a history of adapting thought to experience.  (No one doubts that there is a certain particle that seems to function in a certain way. But there is no Higgs Boson without a history of research in our effort to conceptualize a universe in which such is possible, and to bump into it, so to speak, using our invented instrumentation, and to name it, all to our own purposes.)

Plato was wrong, largely because he had no sense of history. Beyond the poetry of his dialogues (which has undoubted force), what was most interesting in his philosophy had to be corrected and systematized by Aristotle, who understood history; the practical value of education; the differences between cultures; and the weight of differing opinions. Perhaps we should call philosophy “Footnotes to Aristotle.”

But I will leave it to the readers here whether they are willing to grapple with a history of human invention in response to the challenges of experience, however difficult that may seem; or whether they prefer chasing immaterial objects for which we can find no evidence beyond the ideas we ourselves produce.

Reasoning, evidence, and/or not miracles

This week at Plato’s Footnote, Massimo Pigliucci posted a brief discussion of how probability reasoning, especially of the Bayesian variety, can be used to dispel contemporary myths such as anti-vaccination paranoia, trutherism concerning the events of 9/11/01, and birtherism concerning former President Obama.

https://platofootnote.wordpress.com/2017/01/16/anatomy-of-a-frustrating-conversation/

 

The comments thread became an object lesson in just how difficult it is to discuss such matters with those who hold mythic beliefs – every silly conspiracy theory was given vent on it. I myself felt it useful to briefly engage an apologist for miracle belief, someone misrepresenting the argument against such belief put forth by David Hume and referenced in Pigliucci’s article. I would like to present and preserve that conversation here, because it is representative of the discussions on the comment thread, but also representative of the kinds of discussions reasonable people generally have with those so committed to their beliefs that they are open to neither reasoning nor evidence against them.

 

Asserting that Hume begins by declaring miracles simply impossible (and thus pursuing a circular argument), a commenter with the handle jbonnicerenoreg writes:

 

“The possibility of something should be the first step in a n argument, since of something is impossible there is no need to argue about it. For example, Hume says that miracles are impossible so it is not necessary to look at a particular miracle probability. I believe Hume’s argument does more than the reasoning warrants. ”

 

My reply:

That isn’t Hume’s argument at all. Hume argues that since miracles violate the laws of nature, the standard of evidence for claims of their occurrence is considerably higher than for claims of even infrequent but natural events (such as someone suddenly dying from seemingly unknown causes – which causes we now know include aneurisms, strokes, heart failure, etc. etc.). Further, the number of people historically who have never experienced a miracle far outweighs the number who claim they have, which raises questions about the motivations behind such reports. Finally, Hume remarks that all religions have miracle claims, and there is no justification for accepting the claims of one religion over any other, in which case we would be left with having to accept all religions as equally justified, which would be absurd, given that each religion is embedded with claims against all other religions.

 

Hume doesn’t make a probability argument, but his argument suggests a couple; for instance, given the lack of empirical evidence, and the infrequency of eye-witness accounts (with unknown motivations), the probability of miracles occurring would seem to be low. At any rate, I don’t remember Hume disputing the logical possibility of miracles, but he does demand that claims made for them conform to reason and empirical experience.

 

jbonnicerenoreg,: “If you witness Lazurus rise from the dead, and if you know he was correctly entombed, then your evidence is sense experience–the same as seeing a live person. Hume’s standard of evidence is always about historical occurrences.”

 

My reply:

If such an experience were to occur, it might be considered ’empirical’ to the one who has the experience; but the report of such an experience is not empirical evidence of the occurrence, it is mere hearsay.

 

Unless you want to claim that you were there at the supposed raising of Mr. Lazarus, I’m afraid all we have of it is a verbal report in a document lacking further evidentiary justification, for a possible occurrence that supposedly happened 2000 years ago – which I think makes it an historical occurrence.

 

And no, Hume’s standard of evidence is clearly not simply about historical occurrences, although these did concern him, since his bread-and-butter publications were in history. But if miracles are claimed in the present day, then they must be documented in such a way that a reasonable skeptic can be persuaded to consider them. And it would help even more if they were repeatable by anyone who followed the appropriate ritual of supplication. Otherwise, I feel I have a right to ask, why do these never happen when I’m around?

 

7+ billion people on the planet right now, and I can’t think of a single credible report, with supporting evidence, of anyone seeing someone raised from the dead. Apparently the art of it has been lost?

 

Look, I have a friend whose mother died much too young, in a car crash, 25 years ago. Could you send someone over to raise her from the dead? I suppose bodily decomposition may make it a little difficult, but surely, if the dead can be raised they should be raised whole. Zombies with their skin falling off are difficult to appreciate, aesthetically.

 

jbonnicerenoreg,: “I suggest that if you can get over yourself, please read Hume carefully and comment with quotes. I will be glad to answer any questions you may have about the logic of the argument.”

 

My reply:

Well, that you’ve lowered yourself to cheap ad hominem once your argument falls apart does not speak much for your faith in your position.

 

However, I will give you one quote from Hume’s An Enquiry Concerning Human Understanding, Section X, “On Miracles”:

 

A wise man, therefore, proportions his belief to the evidence. In such conclusions as are founded on an infallible experience, he expects the event with the last degree of assurance, and regards his past experience as a full proof of the future existence of that event. In other cases, he proceeds with more caution: he weighs the opposite experiments: he considers which side is supported by the greater number of experiments: to that side he inclines, with doubt and hesitation; and when at last he fixes his judgement, the evidence exceeds not what we properly call probability. All probability, then, supposes an opposition of experiments and observations, where the one side is found to overbalance the other, and to produce a degree of evidence, proportioned to the superiority. A hundred instances or experiments on one side, and fifty on another, afford a doubtful expectation of any event; though a hundred uniform experiments, with only one that is contradictory, reasonably beget a pretty strong degree of assurance. In all cases, we must balance the opposite experiments, where they are opposite, and deduct the smaller number from the greater, in order to know the exact force of the superior evidence.

( http://www.bartleby.com/37/3/14.html )

 

I think Massimo and I are reading such a remark rather fairly, whereas you preferred to bull in with something you may have found on some apologist’s web-site, or made up out of whole cloth. It was you who needed to provide quotes and reasoning, BTW, since your counter-claim is opposed to the experience of those of us who actually have read Hume.

 

By the way, I admit I did make a mistake in my memory of Hume – he actually is making a probability argument, quite overtly.

 

jbonnicerenoreg,: “A beautiful quote and one which I hope we all take seriously put into practise.

Hume is arguing against those who at that time would say something like “miracles prove Christianity is true”. You can see that his argument is very strong against that POV. However, he never takes up the case of a person witnessing a miracle. Of course, that is because “observations and experiments” are impossible in history since the past is gone and all we have is symbolic reports which you call “hearsay”. My congratlations for taking the high road and only complaining that I never read Hume!”

 

My reply:

Thank you for the congratulations, I’m glad we could part on a high note after reaching mutual understanding.

 

Notice that jbonnicerenoreg really begins with a confusion between the possible and the probable. One aspect of a belief in myths is the odd presumption that all things possible are equally probable, and hence ‘reasonable.’ I suppose one reason I had forgotten Hume’s directly probabilistic argument was because probabilistic reasoning now seems to me a wholly necessary part of reasoning, to the point that it doesn’t need remarking. But, alas, it does need remarking, time and again, because those who cling to myth always also cling to the hope – nay, insistence – that if there is something possible about their precious myth, then it ought to be given equal consideration along with what is probable, given the nature and weight of available evidence. Notice also that jbonnicerenoreg tries to sneak in, sub rosa as it were, the implicit claim that eye-witnesses to miracles – such as the supposed authors of the Bible – ought to be given credence as reporting an experience, rather than simply reporting a hallucination, or fabricating an experience for rhetorical or other purposes. Finally, notice that when I play on and against this implicit claim, jbonnicerenoreg tries an interesting tactic – he surrenders the problem of historical reportage, while continuing to insist that witnessing miracles is still possible (which if verified would mean we would need to give greater weight to those historic reports after all!). But there again, we see the confusion – the possible must be probable, if one believes the myth strongly enough.

 

And if we believe in fairies strongly enough, Tinkerbelle will be saved from Captain Hook.

 

This won’t do at all. The bare possibility means nothing. Anything is possible as long as it doesn’t violate the principle of non-contradiction. A squared circle is impossible; but given the nature of the space-time continuum posited by Einstein, a spherical cube may not only be possible but probable, presuming a finite universe. But the probability of my constructing or finding an object I can grasp in my hand that is both a sphere and a cube is not very high, given that we exist in a very small fragment of Einstein’s universe, and Newtonian physics and Euclidean geometry suit it better than applied Relativity on a universal scale. All things in their proper measure, in their proper time and place.

 

But the problem with miracles is that they are never in their proper time and place, to the extent that one wonders what their proper time and place might be, other than in works of fiction. Why raise Lazarus from the dead if he’s just going to die all over again? Why raise Lazarus instead of the guy in the grave next to his? Why do this in an era and in a place lacking in any sophisticated means of documentary recording? And why would a divine being need to make such a show of power? Wouldn’t raw faith be enough for him – must he have eye-witnesses as well?

 

And of course that’s the real problem for jbonnicerenoreg. For miracles to achieve anything that looks like a probability, one first has to believe in god (or in whatever supernatural forces are capable of producing such miracles). There’s no other way for it. Without that belief, a miracle is bare possibility and hardly any probability at all. And I do not share that belief.
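To make that last point concrete, here is a minimal sketch of the Bayesian structure of the problem – my gloss, not Hume’s own formalism, and every number in it is a stipulation of mine:

```python
# A minimal sketch of 'proportioning belief to the evidence' for a miracle report.
# All numbers are illustrative stipulations, not measurements.

def posterior_miracle(prior_miracle, p_report_if_miracle, p_report_if_no_miracle):
    """P(miracle | report), by Bayes' theorem."""
    p_report = (p_report_if_miracle * prior_miracle
                + p_report_if_no_miracle * (1 - prior_miracle))
    return p_report_if_miracle * prior_miracle / p_report

# The skeptic's prior: miracles violate uniform experience, so it is vanishingly small.
skeptic = posterior_miracle(prior_miracle=1e-9,
                            p_report_if_miracle=0.9,      # witnesses would report it
                            p_report_if_no_miracle=0.01)  # error, fraud, legend

# The believer's prior: a god who works miracles is already assumed.
believer = posterior_miracle(prior_miracle=0.1,
                             p_report_if_miracle=0.9,
                             p_report_if_no_miracle=0.01)

print(f"skeptic's posterior:  {skeptic:.2e}")   # still next to nothing
print(f"believer's posterior: {believer:.2f}")  # the report now looks compelling
```

Run with the skeptic’s prior, the report barely moves the needle; run with the believer’s prior, the same report looks like strong evidence. The testimony does no work that the prior belief has not already done.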

 

Simulation argument as gambling logic

I have submitted an essay to the Electric Agora, in which I critique the infamous Simulation Argument – that we are actually simulations running in a program designed by post-humans in the future – made in its strictest form by Nick Bostrom of Oxford University. Since Bostrom’s argument deploys probability logic, and my argument rests on traditional logic, I admitted to the editors that I could be on shaky ground. However, I point out in the essay that if we adopt the probability logic of the claims Bostrom makes, we are left with certain absurdities; therefore, Bostrom’s argument collapses into universal claims that can be criticized in traditional logic. At any rate, if the Electric Agora doesn’t post the essay, I’ll put it up here; if they do, I’ll try to reblog it (although reblogging has been a chancy effort ever since WordPress updated its systems last year).

 

Towards the end of that essay, I considered how the Simulation Argument is used rhetorically to advocate for continuing advanced research in computer technology, in hope that we will someday achieve a post-human evolution. The choice with which we are presented is pretty strict, and a little threatening – either we continue such research, advancing toward post-humanity, or we are doomed. This sounded to me an awful lot like Pascal’s Gambit – believe in god and live a good life, even if there is no god, or do otherwise and live miserably and burn in hell if there is a god. After submitting the essay I continued to think on that resemblance, and concluded that the Simulation Argument is very much like Pascal’s Gambit, and that its rhetorical use in support of advancing computer research – much like Pascal’s use of his Gambit to persuade non-believers to religion – actually functions as a kind of gambling. This is even more true of the Simulation Argument, since continued research into computer technology involves considerable expenditure of monies in both the private and the public sector, with post-human evolution being the offered pay-off to be won.

 

I then realized that there is a kind of reasoning that has not been fully described with any precision (although there have been efforts of a kind moving in this direction) which we will here call Gambling Logic. (There is such a field as Gambling Mathematics, but this is simply a mathematical niche in game theory.)

 

Gambling Logic can be found in the intersection of probability theory, game theory, decision theory and psychology. The psychology component is the most problematic, and perhaps the reason why Gambling Logic has not received proper study. While psychology as a field has developed certain statistical models to predict what percentage of a given population will make certain decisions given certain choices (say, in marketing research), the full import of psychology in the practice of gambling is difficult to measure accurately, since it is multifaceted. Psychology in Gambling Logic not only must account for the psychology of the other players in the game besides the target subject, but the psychology of the target subject him/herself, and for the way the target subject reads the psychology of the other players and responds to her/his own responses in order to adapt to winning or losing. That’s because a gamble is not simply an investment risked on a possible/probable outcome: the outcome either rewards the investment with additional wealth, or punishes it by taking it away without reward. But we are not merely accountants; the profit or loss in a true gamble is responded to emotionally, not mathematically. Further, knowing this ahead of the gamble, the hopeful expectation of reward, and anxiety over the possibility of loss, colors our choices. In a game with more than one player, the successful gambler knows this about the other players, and knows how to play on their emotions; and knows it about him/her self, and knows when to quit.

 

Pascal’s Gambit is considered an important moment in the development of Decision Theory. But Pascal understood that he wasn’t simply addressing our understanding of the probability of success or failure in making the decision between the two offered choices. He well understood that in the post-Reformation era in which he (a Catholic) was writing – an era which saw the rise of a personality-less Deism, and some suggestion of atheism as well – many in his audience could be torn with anxiety over the possibility that Christianity was groundless, that there was no ground for any belief or for any moral behavior. He is thus reducing the possible choices his audience confronted to the two, and suggesting one choice as providing a less anxious life, even should it prove there were no god (but, hey, if there is and you believe, you get to Paradise!).

 

In other words, any argument like Pascal’s Gambit functions rhetorically as Gambling Logic, because it operates on the psychology of its audience, promising them a stress-free future with one choice (reward), or fearful doom with the other (punishment).
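The structure is easy to lay out as a toy expected-utility table – a sketch only, with a prior and payoffs I have simply invented to show the shape of the rhetoric:

```python
# A toy expected-utility table for Pascal's Gambit.
# The payoffs and the prior are invented placeholders; what matters is the shape:
# an enormous promised reward (or threatened punishment) swamps any finite stake.

def expected_utility(p_god, utility_if_god, utility_if_no_god):
    return p_god * utility_if_god + (1 - p_god) * utility_if_no_god

p_god = 0.001  # even a reluctant, tiny prior will do

believe = expected_utility(p_god, utility_if_god=1_000_000,   # 'Paradise'
                           utility_if_no_god=-10)             # some foregone pleasures
refuse  = expected_utility(p_god, utility_if_god=-1_000_000,  # 'damnation'
                           utility_if_no_god=10)

print(f"believe: {believe:+.2f}")
print(f"refuse:  {refuse:+.2f}")
```

However small the prior, the promised reward and the threatened punishment dominate the arithmetic – which is exactly the emotional lever the gamble pulls.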

 

So recognizing the Simulation Argument as a gamble, let’s look at the Gambling Logic at work in it.

 

Bostrom himself introduces it as resolving the following proposed trilemma:

 

1. “The fraction of human-level civilizations that reach a post-human stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or

2. “The fraction of post-human civilizations that are interested in running ancestor-simulations is very close to zero”, or

3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”

 

According to Bostrom himself, at least one of these claims must be true.

It should be noted that this trilemma actually collapses into a simple dilemma, since the second proposition is so obviously untrue: in order to reach post-human status, our descendants will have to engage in such simulations even to achieve such simulation capacity.

 

Further, the first proposition is actually considered so unlikely, it converts to its opposite in this manner (from my essay): “However, given the rapid advances in computer technology continuing unabated in the future, the probability of ‘the probability of humans surviving to evolve into a post-human civilization with world-simulating capabilities is quite low’ is itself low. The probability of humans evolving into a post-human civilization with world-simulating capabilities is thus high.”

 

Now at this point, we merely have the probabilistic argument that we are currently living as simulations. However, once the argument gets deployed rhetorically, what really happens to the first proposition is this:

 

If you bet on the first proposition (presumably by diverting funds from computer research into other causes with little hope of post-human evolution), your only pay-off will be extinction.

 

If you bet against the first proposition (convert it to its opposite and bet on that), you may or may not be betting on the third proposition, but the pay-off will be the same whether we are simulations or not, namely evolution into post-humanity.

 

If you bet on the third proposition, then you stand at least a 50% chance of earning that same pay-off, but only by placing your bet by financing further computer research that could lead to evolution into post-humanity.

 

So even though the argument seems to be using the conversion of the first proposition in support of a gamble on the third proposition, in fact the third proposition supports betting against the first proposition (and on its conversion instead).
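Laid out as a betting table, the rhetoric looks something like this – the probabilities and payoffs are stand-ins of my own, not Bostrom’s, but they show why every version of the bet funnels the stake toward the same wager:

```python
# The Simulation Gamble as a betting table (illustrative numbers only).
# Each bet: (chance the favourable outcome comes off, payoff if it does, payoff if not).

bets = {
    "bet on (1): divert the funding":     (0.0, 0,   0),  # only extinction on offer
    "bet against (1): fund the research": (1.0, 100, 0),  # post-humanity either way
    "bet on (3): we are simulations":     (0.5, 100, 0),  # paid off, again, by funding research
}

for name, (p, win, lose) in bets.items():
    expected = p * win + (1 - p) * lose
    print(f"{name:38s} expected payoff = {expected:6.1f}")
```

Whichever way the table is read, the only move that ever pays is the one that finances the research – which is the whole rhetorical point.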

 

What is the psychology this gamble plays on? I’ll just mention the two most obvious sources of anxiety and hope. The anxiety of course concerns the possibility of human extinction: most people who have children would certainly be persuaded that their anxiety concerning the possible future they leave their children to can be allayed somewhat by betting on computer research and evolution to post-humanity. And all who share a faith in a possible technological utopia in the future will be readily persuaded to take the same gamble.

 

There is a more popular recent variation on the Simulation Gamble we should note – namely that the programmers of the simulation we are living in are not our future post-human descendants, but super-intelligent aliens living on another world, possibly in another universe. But while this is rhetorically deployed for the same purpose as the original argument, to further funding of (and faith in) technological research, it should be noted that the gamble is actually rather weaker. The ultimate pay-off is not the same, but rather appears to be communion with our programmers. Well, not so enticing as a post-human utopia, surely! Further, that there may be such super-intelligent aliens in our universe is not much of a probability; that they exist in a separate universe is not even a probability, it is mere possibility, suggested by certain mathematical modellings. The reason for the popularity of this gamble seems to arise from an ancient desire to believe in gods or angels, or just some Higher Intelligence capable of ordering our own existence (and redeeming all of our mistakes).

 

It might sound as if, in critiquing the Simulation Gamble, I am attacking research into advances in computer and related technology. Not only is that not the case, but it would be irrelevant. In the current economic situation, we are certainly going to continue such research, regardless of any possible post-human evolution or super-aliens. Indeed, we will continue such research even if it never contributes to post-human evolution, and post-human evolution never happens. Which means of course that the Simulation Gamble is itself utterly irrelevant to the choice of whether to finance such research or not. I’m sure that some, perhaps many, engaged in such research see themselves as contributing to post-human evolution, but that certainly isn’t what wins grants for research. People want new toys; that is a stronger motivation than any hope for utopia.

 

So the real function of the Simulation Gamble appears to be ideological: it’s but one more reason to have faith in a technological utopia in the future; one more reason to believe that science is about ‘changing our lives’ (indeed, changing ourselves) for the better. It is a kind of pep-talk for the true believers in a certain perspective on the sciences. But perhaps not a healthy perspective; after all, it includes a fear that, should science or technology cease to advance, the world crumbles and extinction waits.

 

I believe in science and technology pragmatically – when it works it works, when it doesn’t, it don’t. It’s not simply that I don’t buy the possibility of a post-human evolution (evolution takes millions of years, remember), but I don’t buy our imminent extinction either. The human species will continue to bumble along as it has for the past million years. If things get worse – and I do believe they will – this won’t end the species, but only set to rest certain claims for a right to arrogantly proclaim itself master of the world. We’re just another animal species after all. Perhaps the cleverest of the lot, but also frequently the most foolish. We are likely to cut off our nose to spite our face – but the odd thing is our resilience in the face of our own mistakes. Noseless, we will still continue breathing, for better or worse.

 

—–

Bostrom’s original argument: http://www.simulation-argument.com/simulation.html.

The legacy of Hegel

I found this essay on my computer, written some time ago, and decided – since I haven’t been posting here for a while – that I would go ahead and put it up, although it is not completely polished. Yes, it’s about Hegel again – don’t get too annoyed! – I hope not to write on this topic again for some time. But I do consider here some issues that extend beyond the immediate topic. So –

 

I like to describe Hegel as the cranky uncle one invites to Thanksgiving dinner, having to suffer his endless ramblings, because there is an inheritance worth suffering for.

 

Hegel’s language is well nigh impossible. He understands the way grammar shapes our thinking before any training in logic, and uses – often abuses – grammar, not only to persuade or convince, but to shape his readers’ responses, not only to his text, but to the world. After studying the Phenomenology of Mind, one can’t help but think dialectically for some time, whether one approves of Hegel or not. One actually has to find a way to ‘decompress’ and slowly withdraw, as from a drug. (Generally by reading completely dissimilar texts, like a good comic novel, or raunchy verses about sex.)

 

How did Hegel become so popular, given his difficulty? First of all, he answered certain problems raised in the wake of, first, Fichte’s near-solipsistic (but highly convincing) epistemology, and then Schelling’s “philosophy of nature” (which had achieved considerable popularity among intellectuals by the time Hegel started getting noticed). But there was also the fact that he appears to have been an excellent and fascinating teacher at the University of Berlin. And we can see in his later lectures, which come to us largely through student notes, or student editing of Hegel’s notes, that, while the language remains difficult, there is an undeniable charm in his presentation. This raises questions about how important teachers are in philosophy – do we forget that Plato was Socrates’ student, and what that must have meant to him?

 

Finally: Hegel is the first major philosopher who believed that knowledge, being partly the result of history and partly the result of social conditioning *, was in fact not dependent on individual will or insight, so much as on being in the right place at the right time – the Idea, remember, is the protagonist of the Dialectic’s narrative. The importance of the individual is that there is no narrative without the individual’s experience, no realization of the Idea without the individual’s achievement of knowledge.

 

However, despite this insistence on individual experience, Hegel is a recognizably ‘totalistic’ thinker: everything will be brought together eventually – our philosophy, our science, our religion, our politics, etc., will ultimately be found to be variant expressions of the same inner logic of human reasoning and human aspiration.

 

Even after Pragmatists abandoned Hegel – exactly because of this totalistic reading of history and experience – most of them recognized that Hegel had raised an important issue in this insistence – namely that there is a tendency for us to understand our cultures in a fashion that seemingly connects the various differences in experiences and ways of knowing, so that we feel, to speak metaphorically, that we are swimming in the same stream as other members of our communities, largely in the same direction. Even the later John Dewey, who was perhaps the most directly critical of Hegel’s totalism, still strongly believed that philosophy can tell the story of how culture comes together, why, e.g., there can be a place for both science and the arts as variant explorations of the world around us. We see this culminate, somewhat, in Quine’s Web of Belief: different nodes in the web can change rapidly, others only gradually; but the web as a whole remains intact, so that what we believe not only has logical and evidentiary support, but also ‘hangs together’ – any one belief ‘makes sense’ in relation to our other beliefs.

 

(Notably, when British Idealism fell apart, its rebellious inheritors, e.g., Russell and Ayer, went in the other direction, declaring that philosophy really had no need to explain anything in our culture other than itself and scientific theory.)

 

If we accept that knowledge forms a totalistic whole, we really are locked into Hegel’s dialectic, no matter how we argue otherwise.

 

Please note the opening clause “If we accept that knowledge forms a totalistic whole” – what follows here should be the question: is that what we are still doing, not only in philosophy but in other fields of research? And I would suggest that while some of us have learned to do without, all too many are still trying to find the magic key that opens all doors; and when they attempt that, or argue for it, Hegel’s net closes over them – whether they’ve read Hegel or not. And that’s what makes him still worth engaging. Because while he’s largely forgotten, the mode of thought he recognizes and describes is still very much among us.

 

And this is precisely why I think writing about him and engaging his thought is so important. The hope that some philosophical system, or some science, or some political system will explain all and cure all is a failed hope, and there is no greater exposition of such hope than in the text of Hegel. The Dialectic is one of the great narrative structures of thought, and may indeed be a pretty good analog to the way in which we think our way through to knowledge, especially in the social sphere; it really is rather a persuasive reading of history, or at least the history of ideas. But it cannot accommodate differences that cannot be resolved if they do not share the same idea. For instance, the differing assumptions underlying physics as opposed to those of biology; or differing strategies in the writing of differing styles of novel or poetry; or consider the political problems of having quite different, even oppositional, cultures having to learn to live in the same space, even within the same city.

 

If Hegel is used to address possible futures, then of course such opposed cultures need to negate each other to find the appropriate resolution of their Dialectic. That seemed to work with the Civil War; but maybe not really. It certainly didn’t work in WWI – which is what led to Dewey finally rejecting Hegel, proposing instead that only a democratic society willing to engage in open-ended social experimentation and self-realization could really flourish, allowing difference itself to flourish.

 

Finally a totalistic narrative of one’s life will seem to make sense, and the Dialectic can be used to help it make sense. And when we tell our life-stories, whether aware of the Dialectic or no, this is to some extent what we are doing.

 

But the fact is, we must remember that – as Hume noted, and as re-enforced in the Eastern traditions – the ‘self’ is a convenient fiction; which means the story we tell about it is also fiction. On close examination, things don’t add up, they don’t hang together. One does everything one is supposed to do to get a professional degree, and then the economy takes a downturn, and there are no jobs. One does everything expected of a good son or daughter, only to be abused. One cares for one’s health and lives a good life – and some unpredictable illness strikes one down at an early age. I could go on – and not all of it is disappointment – but the point is that, while I know people who have exactly perfect stories to tell about successful lives, I also know others for whom living has proven so disjointed, it’s impossible to find the Idea that the Dialectic is supposed to reveal.

 

Yet the effort continues. We want to be whole as persons, we want to belong to a whole society. We want to know the story, of how we got here, why we belong here, and where all this is going to.

 

So in a previous essay **, I have given (I hope) a pretty accurate sketch of the Dialectic in outline – and why it might be useful, at least in the social sciences (it is really in Hegel that we first get a strong explication of the manner in which knowledge is socially conditioned). And the notion that stories have a logical structure – and thus effectively form arguments – I think intriguing and important. ***

 

But ultimately the Dialectic can not explain us. The mind is too full of jumble, and our lives too full of missteps on what might better be considered a ‘drunken walk’ than a march toward inevitable progress.

 

So why write about it? Because although in America Hegel is now largely forgotten, the Dialectic keeps coming back; all too many still want it – I don’t mean just the Continental tradition. I mean we are surrounded by those who wish for some Theory of Everything, not only in physics, but in economics and politics, social theory, etc. And when we try to get that, we end up engaging the dialectical mode of thought, even if we have never read Hegel. He just happened to be able to see it in the thinkers of Modernity, beginning with Descartes and Luther. But we are still Moderns. And when we want to make the big break with the past and still read it as a story of progress leading to us; or when we think we’ve gotten ‘beyond’ the arguments of the day to achieve resolution of differences, and attain certain knowledge – then we will inevitably engage the Dialectic. Because as soon as one wants to know everything, explain everything, finally succeed in the ‘quest for certainty’ (that Dewey finally dismissed as a pipe-dream), the Dialectic raises its enchanting head, replacing the Will of God that was lost with the arrival of Modernity.

 

That is why (regardless of his beliefs, which are by no means certain) Hegel’s having earned his doctorate in theology becomes important. Because as a prophet of Modernity, he recognized that the old religious narratives could only be preserved by way of sublation into a new narrative of the arrival of human mind replacing that divine will.

 

In a sense that is beautiful – the Phenomenology is in some way the story of human kind achieving divinity in and through itself. But in another way, it is fraught with dangers – have we Moderns freed ourselves from the tyranny of Heaven only to surrender ourselves to the tyranny of our own arrogance? Only time will tell.

 

—–

 

* Much of what Hegel writes of social conditioning is actually implicit in Hume’s Conventionalism; Hegel systematizes it and makes it a cornerstone of his philosophy. (Kant, to the contrary, always assumes a purely rational individual ego; which is exactly the problem that Fichte had latched onto and reduced to ashes by trying to get to the root of human knowledge in desire.)

 

** https://nosignofit.wordpress.com/2016/10/13/hegels-logical-consciousness/

Full version: http://theelectricagora.com/2016/10/12/hegels-logical-consciousness

 

*** I’ll emphasize this, because it is the single most important lesson I learned from Hegel – narrative is a logical structure, a story forms a logical argument, a kind of induction of particularities leading into thematic conclusions. I will hopefully return to this in a later essay.

 

Problems with Utilitarianism

Reading about Utilitarianism recently, I first asked myself what I knew about it. It is now recognizably a form of moral realism, positing a standard of moral conduct separable from personal experience or belief – the greatest good for the greatest number. It’s been many decades since I’ve read Bentham, but I seemed to recall there was at least a suggestion, at the beginning of Utilitarianism, that its basic principles were already implicit in actual practice, and that Utilitarianism merely promised clarification and perfection by application of ‘scientific’ methodology. If so, then originally Utilitarianism would not be a moral realism but a scientistic justification for, and institutionalization of, existing practices. However, such a Utilitarianism would be unsustainable due to objections from any number of positions taken by those who felt the then current practices somehow disenfranchised them, or injured them, or oppressed them. (Malthus’ argument that the poor should be allowed to die off is this kind of Utilitarianism, and one can imagine the poor and their advocates not being too happy with it.) If I were remembering the matter aright, it should be clear why Utilitarianism would mutate into a claim of a ‘good’ as an identifiable value separate from what any one individual or group would wish it to be.

In America, most political arguments are in fact Utilitarian in one sense or another – and really can’t be otherwise. A politician is always arguing that he or she represents the most important interests of the greater number of the electorate – how could they not?

My general point is that it’s easy to see why understanding Utilitarianism might be somewhat difficult for some (including myself). I don’t say that to defend it, but because I find it somewhat confused, with a checkered history, even though politically inevitable in a diverse population with democratic aspirations.

I was never very impressed with the philosophy of Utilitarianism, so I didn’t keep up with it much. Kant’s deontology may be just as wrong, but it is far more interesting, because it raises the question of just how far we can extend rationality into the realm of morals before we bump into the fundamental problem of any moral realism (or meta-ethical analysis, for that matter): cultural differences.

At any rate, reviewing some background material today, I find that I was wrong about Bentham (he was in fact attempting reformation of existing practices), but right about the essentially confused nature of Utilitarianism. Higher level utilitarian arguments can be convincing (and the crude utilitarianism we find in politics can be persuasive); but the ground is very shaky.

Here is an interpretation of Bentham‘s general premise, from The SEP: “We are to promote pleasure and act to reduce pain. When called upon to make a moral decision one measures an action’s value with respect to pleasure and pain according to the following: intensity (how strong the pleasure or pain is), duration (how long it lasts), certainty (how likely the pleasure or pain is to be the result of the action), proximity (how close the sensation will be to performance of the action), fecundity (how likely it is to lead to further pleasures or pains), purity (how much intermixture there is with the other sensation). One also considers extent — the number of people affected by the action.” (http://plato.stanford.edu/entries/utilitarianism-history/)

Assuming that “we are to promote” – that is, that we are obligated to promote – “pleasure and act to reduce pain,” commits us to a standard separable from any particular instance of pleasure and pain. And this makes absolutely no sense. The First Noble Truth of Buddhism, that life is suffering, was derived – and remains derivable – from personal experience. (And if one hasn’t experienced it, then the way of the Buddha offers no solution.) But apparently Bentham distrusted experience as a guide, since it tends to generate morals based on personal prejudice; so where is this obligation to promote happiness coming from?

Secondly, Bentham is suggesting a calculus of pleasure and pain, when such are without any essential measure. Psychologists have tried for years to provide such measurement, with success limited to purely physical stimulation. But how much pain is experienced by a parent upon the loss of a child? How much pleasure in a wedding ceremony? What kind of pleasure do I feel when I learn a hated enemy is dead, such that I can measure it? What kind of sorrow and anger am I feeling in support of the African American community’s response to the alarming number of police shootings of unarmed men and women? On what scale should I rate it?
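To see how strong an assumption the calculus makes, imagine actually writing it down. This is a deliberately naive sketch, with every scale and weight invented by me, of what Bentham’s seven dimensions would require if they were genuinely commensurable:

```python
# A deliberately naive rendering of Bentham's felicific calculus.
# Every scale here is invented; that is the point: the theory presupposes that
# these magnitudes exist, can be measured, and can be summed into one number.

def hedonic_score(intensity, duration, certainty, proximity,
                  fecundity, purity, extent):
    """Collapse Bentham's seven dimensions into a single value (all on made-up 0-10 scales)."""
    per_person = (intensity * duration * certainty * proximity
                  + fecundity - (10 - purity))
    return per_person * extent

# How intense is a parent's grief? How long does a wedding's pleasure last?
# Whatever numbers we plug in are stipulations, not measurements:
print(hedonic_score(intensity=9, duration=8, certainty=7, proximity=6,
                    fecundity=2, purity=5, extent=3))
```

The function runs, of course; what it cannot do is tell us where any of those numbers would come from.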

So, how generalizable is this presumed promotion of pleasure and pain? The last paragraph of my previous comment raises the inevitable cultural problem – pleasure and pain are not reducible to physical sensations, but, indeed, physical sensations are frequently responses to social events. But different cultures realize socialization in many different ways. Recently, I’ve read someone remarking that god hates homosexuals. While I have heard Protestant ministers make this claim, Catholic clergy have ever followed the principle ‘hate the sin, but love the sinner,’ presuming this to be true of god. We know the ancient Greeks and Romans were quite tolerant of homosexuality; and the cultures of ancient India and Japan had ornate rules for ‘proper’ satisfaction of homosexual desires.

The SEP article quotes Bentham’s rejection of laws against homosexuality as an unnecessary impingement of personal sentiment on the general welfare thus:

“The circumstances from which this antipathy may have taken its rise may be worth enquiring to…. One is the physical antipathy to the offence…. The act is to the highest degree odious and disgusting, that is, not to the man who does it, for he does it only because it gives him pleasure, but to one who thinks [?] of it. Be it so, but what is that to him?”

One can sympathize with Bentham and still see that he has somewhat missed the point. People often feel greater security and greater pleasure in socialization when they have a sense that the culture they live in is homogeneous enough that they share values with the greater number of their fellow community members. The cultural differences concerning homosexuality indicate much wider cultural assumptions about the shared values of the differing communities – and not just about homosexuality, but about to what degree individual behavior may vary from community norms, about the appropriate means of tolerating such variance, about the ground and harshness of sanction concerning unacceptable variance. Once we begin studying cultural difference along such general lines, we begin to see in the details just how different cultures can get. Utilitarianism soon stands revealed as a set of assumptions and arguments within a *given* culture, and can no longer be universalized on a founding principle to which we all agree.

Beyond Bentham we come to the classical Utilitarian identification of ‘pleasure’ with ‘happiness,’ and this is not sustainable. It is a torture of reason to suggest that ascetics must be feeling some physical pleasure in their denial of physical pleasure; yet they may certainly be very happy. And yes, they may be feeling a psychological pleasure, but this may yet not be the source of their happiness, so much as their self-identification with their ascetic ideal, to which their psychological pleasure is mere response.

Which of course raises the apparently long-recognized critique of Utilitarianism’s insistence that ‘happiness’ is the ultimate goal of our moral decisions (whether we wish to admit it or not) – namely that it is simply not at all clear that all moral or ethical choices do in some sense, and ought to, move in the direction of increasing happiness. It is demonstrable that many ethical decisions we make do not lead to the greater happiness of one’s self or one’s community. My loss of faith did not bring happiness to me nor to the Catholic community in which I was raised. Commitment to civil rights in the 1960s meant recognizing that years of contention and further reformation and occasional strife would follow, as efforts to redress discrimination and increase acceptance of all races as fellow humans would need to continue indefinitely.

As I’ve noted before, where general ethics within a diverse community are concerned, I tend to think eclectically. There are some issues I would argue along deontological lines; others I think are better addressed by achieving personal virtuousness (virtue ethics); on other issues I can be a ruthlessly legalistic pragmatist or Hobbesian contract theorist; so of course there are issues I wouldn’t hesitate to address on Utilitarian grounds, especially in political matters.

But as a complete normative theory of ethical behavior, Utilitarianism still seems confused – and, frankly, an artifact of a given culture at a given time, which has largely passed into history.

A problem with eugenics

According to Wikipedia, “Eugenics (/juːˈdʒɛnɪks/; from Greek εὐγενής eugenes “well-born” from εὖ eu, “good, well” and γένος genos, “race, stock, kin”) is a set of beliefs and practices that aims at improving the genetic quality of the human population.” *

 

Here’s the problem with eugenics: it is built on an assumption that is grounded in a presumption concerning the values of the researchers involved.

The assumption is that the human species needs to be improved genetically; but this is grounded on the presumption that such improvement can be determined according to values upon which we should all agree. In fact of course, all such values are culturally bound – completely and inextricably. Thus the ‘improvement’ offered will always imply the hopes and prejudices of a given group within a given culture. There is no way to realize eugenics that is not inherently ethno-centric or ethno-phobic.

I’m sure some here hope that eugenics can be used to discover and eliminate genetic predispositions to religious belief; but surely, a religious eugenicist has every right to hope that such can be done to eliminate predispositions toward atheism. After all, technology plays no favorites.

Further, the very assumption that the human species needs to be improved in this matter is itself highly questionable, since it implies the de-valuation of the species just as it is – it implies that there is something wrong about being human, that humans are inherently flawed – a residue of Abrahamic ‘fallen man’ mythology.

As an illuminating side-topic, consider: practitioners of ‘bio-criminology’ (which I would argue is a pseudo-science) target genetic study of criminal populations that are overwhelmingly African in descent. They seem to hope that genetics will reveal a genetic disposition to ‘violent’ behavior, such as, say, mugging. And the argument for targeting more African Americans than European Americans would be that there just are more African Americans incarcerated for such behavior. The argument is clearly flawed, since it completely disregards sociological knowledge about the conditions with which African Americans must deal in various communities in which crime rates are fairly high.

But consider: the practices of vulture capitalists playing the stock market, or collapsing viable companies into bankruptcy, have clearly devastated far more lives than all the muggers in America. Yet there is never any suggestion from ‘bio-criminologists’ that geneticists should find the genes responsible for predispositions toward greed and callousness, dishonesty on the stock exchange, or ruthless exploitation of employees. And there never will be, because white collar criminals contribute to college funds, establish foundations that offer grants, hire bio-criminologists into right-wing think tanks, etc.

Personally, I won’t consider any arguments for eugenics until I get a promise that we will target the behaviors of the real criminals in this society – like the ones who work on Wall Street.

—–

* https://en.wikipedia.org/wiki/Eugenics

As we read through the Wiki article, we find that there is a recent trend among some geneticists to use the term ‘eugenics’ to apply to any effort to use genetics to address certain health conditions, such as inheritable diseases like Huntington’s, or to provide parents with the opportunity to decide whether to abort a fetus with such diseases. This is just a mistake. First, no one opposed to classical eugenics has ever argued that we shouldn’t use genetics to address ill health conditions or diseases – because we can do this without attempting to improve the species genetically, which is the ultimate goal of eugenics. Secondly, resurrecting the term eugenics for what is pretty standard genetics seems to bury history, or at least confuse our understanding of it. Third, the choice of whether to have a child or not, given the potential for heritable diseases, has long been available through understanding family histories – and it has not dissuaded a large number of people from having children despite family histories of such illnesses, because the choice to have a child or not is rarely restricted by purely rational consideration. Perhaps it should be, but it’s not. For such restrictions to have a large enough impact on the population to affect genetic improvement of it, they would have to be impelled from outside the family, perhaps by law, and then we would find ourselves directly in the arguments concerning classical eugenics, like the one I make above.

Finally, there’s the question of whether we really want to use genetics to improve the species at all, since it’s quite possible that naturally occurring reproduction actually contributes to the survival of the species, since we don’t know what environmental challenges the species will face in the future, and what may appear to be a weakness now may prove to be a strength in another era.

I would say, let’s stop calling any serious genetics a form of eugenics, and let’s stop pretending that we are wise enough to direct the course of human evolution.

Thinking Nominalism, Living Pragmatism

Nobody really wants the sloppy, childlike relativism that some self-proclaimed ‘post-Modernists’ espouse – even they don’t want it, since it would make their proclamations and espousals nonsensical. But relativism is not all one thing, it’s available in various types and to varying degrees. Dealing with any relativism in a useful manner requires considerable thought, caution, and care.

It is one of the most difficult concepts to get our minds around, that the world we know is only known through the concepts our minds generate (or that are communicated to us by others). Since these concepts are generally constructed via some linguistic or otherwise systematized communication processes, it follows that our ‘knowledge’ of the world is really largely a knowledge of what we say about the world. Even if I kick a rock (ala Sam Johnson), this experience will only make sense through my signifying response to it in a given context. Even expressions like ‘ow!’ or ‘ouch!’ can be seen to be some responsive effort to make sense of the experience; i.e., announcement that a painful event/sensation has occurred.

We’ve all had the experience of feeling some tiny sting on our arms; we slap at it reflexively. What is it? I pull my hand away, and there on the palm is a flattened body with broken wings, and I say, ‘oh, a bug.’ But if I pull my hand away and there is no flattened body on it, there still arises some thought in mind, such as ‘oh, probably a bug.’ And it is probably a bug, but that doesn’t matter – more important is recognizing that whatever it was, I have made sense of it by interpreting it and expressing this interpretation. And if it never happens again, and I never find any further evidence that it was a bug, yet a bug it will be in my memory.

I confess that I am something of a classical (i.e., traditional or Medieval) Nominalist – I’m sometimes unsure that we know anything ‘out there’ at all, except that it exists (but I’m also something of a Pragmatist, so this doesn’t really cause me any loss of sleep). But one doesn’t have to go so far as Nominalism to see that any claim we can make of the world beyond ourselves is thoroughly mediated by the system of the language by which we make the claim, and thoroughly dependent on context – not only the context of the particular world in which we speak, but the context of the language we speak itself, and all the social reality that it requires we admit.

Nominalism is a position taken regarding the problematic relationship between universals and particulars. This relationship can only be worked through in language.

It should be noted that there are certainly signifying practices other than language; but there can be no experience with reality that does not engage – and hence is not mediated by – signifying practices. (An infant reaching for the mother’s breast is signifying something, and reaching for what signifies to it.) Whether infants have ‘concepts’ seems irrelevant, or badly phrased. That an infant responds to the world reliant on persistence of objects hardly means that it has a concept of persistence of objects. This seems to beggar the very concept of a concept.

One of the questions inadvertently raised here is whether knowledge is to be equated with the hoary Positivist standard of Justified True Belief; because an infant certainly has no belief to be justified – the truth of the breast is the immediate presence of the breast, and the justification of that is satisfaction of hunger. But the infant surely does not ‘believe’ this in any way he or she can articulate, but merely reaches for the breast. Yet infants surely know, in a meaningful way, the breast – and the success or failure to get satisfaction from it – and intimately.

I’m not sure that the notion of knowledge being reducible to Justified True Belief, makes any sense outside of language, since analysis of a ‘justified true belief’ requires formulation into claims in a language system.

I noted parenthetically that my Nominalist position (concerning universals) did not cause me loss of sleep because I am also something of a Pragmatist. In pragmatism, knowledge need not be equated with JTB. Reliability, as ground for responding to the world, often seems to have a stronger claim.

I earlier used the term “signifying” exactly to avoid getting into technical distinctions between signifying systems. But I will introduce one technical term which may be of use here, which is that of Charles Sanders Peirce: interpretant. The interpretant of a sign is primarily composed of responses to the sign, which may be conceptualization or may be some form of action or speech-act, or some inner sensation. If we think in terms of signification and how various organisms respond to signs, we can avoid the dangers of ascribing language to an infant, and still have a means of addressing how they interact with their environment and each other in significant ways. And we can also avoid the trap of conceiving of our entire existence as somehow fundamentally linguistic. We are the language-speaking animal, but we have other non-linguistic significant interactions with each other and the environment.

Pragmatism is a post-Idealist philosophy (Peirce was taught to recite Kant’s First Critique – in German! – at an early age; Dewey was an avowed Hegelian until WWI). Idealism makes a claim, actually similar to that of Logical Positivism, that knowledge is primarily or wholly the result of theory construction, and thus must be articulated linguistically. * Pragmatism begins with the recognition that this cannot be the case.

So the question may come down to whether what we know needs be communicated in language, or whether some other form of signification can be rich enough to inform our responses to the world.

But that does not mean we can be free of signification all together. The sting on the arm is a sign; what I say of it is an attempt to understand its significance, as response to it. If (assuming the scenario that I cannot see or find the bug or bug-parts) I come down with symptoms (signs) of malaria, that will enrich the signification of my response, and will also point to (sign) the species of bug that stung me. None of this need be predicated on the understanding that there is an inherent ‘bugness’ (some universal bug-hood) in the bug, the theory of which I must be familiar with before I form a proposition concerning it. And that is what I see as the real issue here.

—–
* This falls into the Nominalist trap: if all knowledge is theoretical, and all theories concern universals, and all existent entities are individuals, then the most we can say we know is our own theories, since individuals are not universals, but universals need to be constructed to account for them. Unless, that is, we allow that knowledge is not all one thing and that there is not only one way of knowing. I’m glad that my doctor has a theory of malaria that can be relied on should I come down with it, so I can get properly treated. But I know I was stung, and what that felt like, without any theory to account for it. The interpretation of it is, however, inevitable, as making sense of the matter, and certainly necessary if I become sick and need to articulate to a doctor what I think happened.