Mathematical Platonism: A Comedy

Mathematical Platonism holds that mathematical forms – equations, geometric figures, measurable relationships – are somehow embedded in the fabric of the universe, and are ‘discovered’ rather than invented by human minds.

From my perspective, humans respond to challenges of experience. However, within a given condition of experience, the range of possible responses is limited. In differing cultures, where similar conditions of experience apply, the resulting responses can also be expected to be similar. The precise responses and their precise consequences generate new conditions to be responded to – but again only within a range. So while the developments we find in differing cultures can often end up being very different, they can also end up being very similar, and the trajectories of these developments can be traced backward, revealing their histories. These histories produce the truths we find in these cultures, and the facts that have been agreed upon within them. As these facts and the truths concerning them prove reliable, they are sustained until they don’t, at which point each culture will generate new responses that prove more reliable.

Since, again, the range of these responses within any given set of conditions is actually limited by the history of their development, we can expect differing cultures with similar sets of conditions to recognize a similar set of facts and truths in each other when they at last make contact. That’s when history really gets interesting, as the cultures attempt to come into concordance, or instead come into conflict – but, interestingly, in either case, partly what follows is that the two cultures begin borrowing from each other facts, truths, and possible responses to given challenges. ‘Universal’ truths are simply those that all cultures have found equally reliable over time.

This is true of mathematical forms as well, the most resilient truths we develop in response to our experiences. I don’t mean that maths are reducible to the empirical; our experiences include reading, social interaction, professional demands, etc., many of which will require continued development of previous inventions. However, there’s no doubt that a great deal of practical mathematics has proven considerably reliable over the years. On the other hand, I find useless the Platonic assertion that two-dimensional triangles or the formula ‘A = πr²’ simply float around in space, waiting to be discovered.

So, in considering this issue, I came up with a little dialogue, concerning two friends trying to find – that is, discover – the mathematical rules for chess (since the Platonic position is that these rules, as they involve measurable trajectories, effectively comprise a mathematical form, and hence were discovered rather than invented).

Bob: Tom, I need some help here; I’m trying to find something, but it will require two participants.
Tom: Sure, what are we looking for?
B.: Well, it’s a kind of game. It has pieces named after court positions in a medieval castle.
T.: How do you know this?
B.: I reasoned it through, using the dialectic process as demonstrated in Plato’s dialogues. I asked myself, what is the good to be found in playing a game? And it occurred to me, that the good was best realized in the Middle Ages. Therefore, the game would need to be a miniaturization of Medieval courts and the contests held in them.
T.: Okay, fine, then let’s start with research into the history of the Middle Ages –
B.: No, no, history has nothing to do with this. That would mean that humans brought forth such a game through trial and error. We’re looking for the game as it existed prior to any human involvement.
T.: Well, why would there be anything like a game unless humans were involved in it?
B.: Because it’s a form; as a form, it is pure and inviolate by human interest.
T.: Then what’s the point in finding this game? Aren’t we interested in playing it?
B.: No, I want to find the form! Playing the game is irrelevant.
T.: I don’t see it, but where do you want to start?
B.: In the Middle Ages, they thought the world was flat; we’ll start with a flat surface.
T.: Fine, how about this skillet?
B.: But it must be such that pieces can move across it in an orderly fashion.
T.: All right, let’s try a highway; but not the 490 at rush hour….
B.: But these orderly moves must follow a perpendicular or diagonal pattern; or they can jump part way forward and then to the side.
T.: You’re just making this up as you go along.
B.: No! The eternally true game must have pieces moving in a perpendicular, a diagonal, or a jump forward and laterally.
T.: Why not a circle?
B.: Circles are dangerous; they almost look like vaginas. We’re looking for the morally perfect game to play.
T.: Then maybe it’s some sort of building with an elevator that goes both up and sideways.
B.: No, it’s flat, I tell you… aha! a board is flat!
T.: So is a pancake.
B.: But a rectangular board allows perpendicular moves, straight linear moves, diagonal moves, and even jumping moves –
T.: It also allows circular moves.
B.: Shut your dirty mouth! At least now we know what we’re looking for. Come on, help me find it. (begins rummaging through a trash can.) Here it is, I’ve discovered it!
T.: What, that old box marked “chess?”
B.: It’s inside. It’s always inside, if you look for it.
T.: My kid brother threw that out yesterday. He invented a new game called ‘shmess’ which he says is far more interesting. Pieces can move in circles in that one!
B.: (Pause.) I don’t want to play this game anymore. Can you help me discover the Higgs Boson?
T.: Is that anywhere near the bathroom? I gotta go….

Bob wants a “Truth” and Tom wants to play a game. Why is there any game unless humans wish to play it?

A mathematical form comes into use in one culture, and then years later again in a completely different culture; assuming the form true, did it become true twice through invention? Yes. This is one of the unfortunate truths about truth: it can be invented multiple times. That is precisely what history tells us.

So, Bob wants to validate certain ideas from history, while rejecting the history of those ideas. You can’t have it both ways. Either there is a history of ideas, in which humans participated to the extent of invention, or history is irrelevant, and you lose even “discovery.” The Higgs Boson, on the other hand, gets ‘discovered,’ because there is an hypothesis based on theory which is itself based on previous observations and validated theory, experimentation, observation, etc. In other words, a history of adapting thought to experience.  (No one doubts that there is a certain particle that seems to function in a certain way. But there is no Higgs Boson without a history of research in our effort to conceptualize a universe in which such is possible, and to bump into it, so to speak, using our invented instrumentation, and to name it, all to our own purposes.)

Plato was wrong, largely because he had no sense of history. Beyond the poetry of his dialogues (which has undoubted force), what was most interesting in his philosophy had to be corrected and systematized by Aristotle, who understood history; the practical value of education; the differences between cultures; and the weight of differing opinions. Perhaps we should call philosophy “Footnotes to Aristotle.”

But I will leave it to the readers here whether they are willing to grapple with a history of human invention in response to the challenges of experiences, however difficult that may seem; or whether they prefer chasing immaterial objects for which we can find no evidence beyond the ideas we ourselves produce.

The legacy of Hegel

I found this essay on my computer, written some time ago, and decided – since I haven’t been posting here for a while – that I would go ahead and put it up, although it is not completely polished. Yes, it’s about Hegel again – don’t get too annoyed! – I hope not to write on this topic again for some time. But I do consider here some issues that extend beyond the immediate topic. So –

 

I like to describe Hegel as the cranky uncle one invites to Thanksgiving dinner, having to suffer his endless ramblings, because there is an inheritance worth suffering for.

 

Hegel’s language is well nigh impossible. He understands the way grammar shapes our thinking before any training in logic, and uses – often abuses – grammar, not only to persuade or convince, but to shape his readers’ responses, not only to his text, but to the world. After studying the Phenomenology of Mind, one can’t help but think dialectically for some time, whether one approves of Hegel or not. One actually has to find a way to ‘decompress’ and slowly withdraw, as from a drug. (Generally by reading completely dissimilar texts, like a good comic novel, or raunchy verses about sex.)

 

How did Hegel become so popular, given his difficulty? First of all, he answered certain problems raised in the wake of first Fichte’s near-solipsistic (but highly convincing) epistemology, and then of Schelling’s “philosophy of nature” (which had achieved considerable popularity among intellectuals by the time Hegel started getting noticed). But there was also the fact that he appears to have been an excellent and fascinating teacher at the University of Berlin. And we can see in his later lectures, which come to us largely through student notes, or student editing of Hegel’s notes, that, while the language remains difficult, there is an undeniable charm in his presentation. This raises questions about how important teachers are in philosophy – do we forget that Plato was Socrates’ student, and what that must have meant to him?

 

Finally: Hegel is the first major philosopher who believed that knowledge, being partly the result of history and partly the result of social conditioning *, was in fact not dependent on individual will or insight, so much as on being in the right place at the right time – the Idea, remember, is the protagonist of the Dialectic’s narrative. The importance of the individual is that there is no narrative without the individual’s experience, no realization of the Idea without the individual’s achievement of knowledge.

 

However, despite this insistence on individual experience, Hegel is a recognizably ‘totalistic’ thinker: everything will be brought together eventually – our philosophy, our science, our religion, our politics, etc., will ultimately be found to be variant expressions of the same inner logic of human reasoning and human aspiration.

 

Even after Pragmatists abandoned Hegel – exactly because of this totalistic reading of history and experience – most of them recognized that Hegel had raised an important issue in this insistence – namely that there is a tendency for us to understand our cultures in a fashion that seemingly connects the various differences in experiences and ways of knowing, so that we feel, to speak metaphorically, that we are swimming in the same stream as other members of our communities, largely in the same direction. Even the later John Dewey, who was perhaps the most directly critical of Hegel’s totalism, still strongly believed that philosophy can tell the story of how culture comes together – why, e.g., there can be a place for both science and the arts as variant explorations of the world around us. We see this culminate, somewhat, in Quine’s Web of Belief: some nodes in the web can change rapidly, others only gradually; but the web as a whole remains intact, so that what we believe not only has logical and evidentiary support, but also ‘hangs together’ – any one belief ‘makes sense’ in relation to our other beliefs.

 

(Notably, when British Idealism fell apart, its rebellious inheritors, e.g., Russell and Ayer, went in the other direction, declaring that philosophy really had no need to explain anything in our culture other than itself and scientific theory.)

 

If we accept that knowledge forms a totalistic whole, we really are locked into Hegel’s dialectic, no matter how we argue otherwise.

 

Please note the opening clause “If we accept that knowledge forms a totalistic whole” – what should follow here is the question: is that what we are still doing, not only in philosophy but in other fields of research? I would suggest that while some of us have learned to do without, all too many are still trying to find the magic key that opens all doors; and when they attempt that, or argue for it, Hegel’s net closes over them – whether they’ve read Hegel or not. And that’s what makes him still worth engaging: while he’s largely forgotten, the mode of thought he recognizes and describes is still very much among us.

 

And this is precisely why I think writing about him and engaging his thought is so important. The hope that some philosophical system, or some science, or some political system will explain all and cure all is a failed hope, and there is no greater exposition of such hope than in the text of Hegel. The Dialectic is one of the great narrative structures of thought, and may indeed be a pretty good analog to the way in which we think our way through to knowledge, especially in the social sphere; it really is rather a persuasive reading of history, or at least the history of ideas. But it cannot accommodate differences that cannot be resolved if they do not share the same idea. For instance, the differing assumptions underlying physics as opposed to those of biology; or differing strategies in the writing of differing styles of novel or poetry; or consider the political problems of having quite different, even oppositional, cultures having to learn to live in the same space, even within the same city.

 

If Hegel is used to address possible futures, then of course such opposed cultures need to negate each other to find the appropriate resolution of their Dialectic. That seemed to work with the Civil War; but maybe not really. It certainly didn’t work in WWI – which is what led to Dewey finally rejecting Hegel, proposing instead that only a democratic society willing to engage in open-ended social experimentation and self-realization could really flourish, allowing difference itself to flourish.

 

Finally, a totalistic narrative of one’s life will seem to make sense, and the Dialectic can be used to help it make sense. And when we tell our life-stories, whether aware of the Dialectic or not, this is to some extent what we are doing.

 

But the fact is, we must remember that – as Hume noted, and as reinforced in the Eastern traditions – the ‘self’ is a convenient fiction; which means the story we tell about it is also fiction. On close examination, things don’t add up, they don’t hang together. One does everything one is supposed to do to get a professional degree, and then the economy takes a downturn, and there are no jobs. One does everything expected of a good son or daughter, only to be abused. One cares for one’s health and lives a good life – and some unpredictable illness strikes one down at an early age. I could go on – and not all of it is disappointment – but the point is that, while I know people who have exactly perfect stories to tell about successful lives, I also know others for whom living has proven so disjointed, it’s impossible to find the Idea that the Dialectic is supposed to reveal.

 

Yet the effort continues. We want to be whole as persons, we want to belong to a whole society. We want to know the story, of how we got here, why we belong here, and where all this is going to.

 

So in a previous essay **, I have given (I hope) a pretty accurate sketch of the Dialectic in outline – and why it might be useful, at least in the social sciences (it is really in Hegel that we first get a strong explication of the manner in which knowledge is socially conditioned). And the notion that stories have a logical structure – and thus effectively form arguments – I think intriguing and important. ***

 

But ultimately the Dialectic can not explain us. The mind is too full of jumble, and our lives too full of missteps on what might better be considered a ‘drunken walk’ than a march toward inevitable progress.

 

So why write about it? Because although Hegel is now largely forgotten in America, the Dialectic keeps coming back; all too many still want it – and I don’t mean just the Continental tradition. I mean we are surrounded by those who wish for some Theory of Everything, not only in physics, but in economics and politics, social theory, etc. And when we try to get that, we end up engaging the dialectical mode of thought, even if we have never read Hegel. He just happened to be able to see it in the thinkers of Modernity, beginning with Descartes and Luther. But we are still Moderns. And when we want to make the big break with the past and still read it as a story of progress leading to us; or when we think we’ve gotten ‘beyond’ the arguments of the day to achieve resolution of differences, and attain certain knowledge – then we will inevitably engage the Dialectic. Because as soon as one wants to know everything, explain everything, finally succeed in the ‘quest for certainty’ (that Dewey finally dismissed as a pipe-dream), the Dialectic raises its enchanting head, replacing the Will of God that was lost with the arrival of Modernity.

 

That is why (regardless of his beliefs, which are by no means certain) Hegel’s having earned his doctorate in theology becomes important. Because as a prophet of Modernity, he recognized that the old religious narratives could only be preserved by way of sublation into a new narrative of the arrival of human mind replacing that divine will.

 

In a sense that is beautiful – the Phenomenology is in some way the story of humankind achieving divinity in and through itself. But in another way, it is fraught with dangers – have we Moderns freed ourselves from the tyranny of Heaven only to surrender ourselves to the tyranny of our own arrogance? Only time will tell.

 

—–

 

* Much of what Hegel writes of social conditioning is actually implicit in Hume’s Conventionalism; Hegel systematizes it and makes it a cornerstone of his philosophy. (Kant, to the contrary, always assumes a purely rational individual ego; which is exactly the problem that Fichte had latched onto and reduced to ashes by trying to get to the root of human knowledge in desire.)

 

** https://nosignofit.wordpress.com/2016/10/13/hegels-logical-consciousness/

Full version: http://theelectricagora.com/2016/10/12/hegels-logical-consciousness

 

*** I’ll emphasize this, because it is the single most important lesson I learned from Hegel – narrative is a logical structure, a story forms a logical argument, a kind of induction of particularities leading into thematic conclusions. I will hopefully return to this in a later essay.

 

Thinking Nominalism, Living Pragmatism

Nobody really wants the sloppy, childlike relativism that some self-proclaimed ‘post-Modernists’ espouse – even they don’t want it, since it would make their proclamations and espousals nonsensical. But relativism is not all one thing, it’s available in various types and to varying degrees. Dealing with any relativism in a useful manner requires considerable thought, caution, and care.

It is one of the most difficult concepts to get our minds around, that the world we know is only known through the concepts our minds generate (or that are communicated to us by others). Since these concepts are generally constructed via some linguistic or otherwise systematized communication processes, it follows that our ‘knowledge’ of the world is really largely a knowledge of what we say about the world. Even if I kick a rock (à la Sam Johnson), this experience will only make sense through my signifying response to it in a given context. Even expressions like ‘ow!’ or ‘ouch!’ can be seen to be some responsive effort to make sense of the experience; i.e., an announcement that a painful event/sensation has occurred.

We’ve all had the experience of feeling some tiny sting on our arms; we slap at it reflexively. What is it? I pull my hand away, and there on the palm is a flattened body with broken wings, and I say, ‘oh, a bug.’ But if I pull my hand away and there is no flattened body on it, there still arises some thought in mind, such as ‘oh, probably a bug.’ And it is probably a bug, but that doesn’t matter – more important is recognizing that whatever it was, I have made sense of it by interpreting it and expressing this interpretation. And if it never happens again, and I never find any further evidence that it was a bug, yet a bug it will be in my memory.

I confess that I am something of a classical (i.e., traditional or Medieval) Nominalist – I’m sometimes unsure that we know anything ‘out there’ at all, except that it exists (but I’m also something of a Pragmatist, so this doesn’t really cause me any loss of sleep). But one doesn’t have to go so far as Nominalism to see that any claim we can make of the world beyond ourselves is thoroughly mediated by the system of the language by which we make the claim, and thoroughly dependent on context – not only the context of the particular world in which we speak, but the context of the language we speak itself, and all the social reality that it requires we admit.

Nominalism is a position taken regarding the problematic relationship between universals and particulars. This relationship can only be worked through in language.

It should be noted that there are certainly signifying practices other than language; but there can be no experience with reality that does not engage – and hence is not mediated by – signifying practices. (An infant reaching for the mother’s breast is signifying something, and reaching for what signifies to it.) Whether infants have ‘concepts’ seems irrelevant, or badly phrased. That an infant responds to the world reliant on persistence of objects hardly means that it has a concept of persistence of objects. This seems to beggar the very concept of a concept.

One of the questions inadvertently raised here is whether knowledge is to be equated with the hoary Positivist standard of Justified True Belief; because an infant certainly has no belief to be justified – the truth of the breast is the immediate presence of the breast, and the justification of that is the satisfaction of hunger. But the infant surely does not ‘believe’ this in any way he or she can articulate, but merely reaches for the breast. Yet infants surely know the breast – and the success or failure to get satisfaction from it – in a meaningful way, and intimately.

I’m not sure that the notion of knowledge being reducible to Justified True Belief makes any sense outside of language, since analysis of a ‘justified true belief’ requires formulation into claims in a language system.

I noted parenthetically that my Nominalist position (concerning universals) did not cause me loss of sleep because I am also something of a Pragmatist. In pragmatism, knowledge need not be equated with JTB. Reliability, as ground for responding to the world, often seems to have a stronger claim.

I earlier used the term “signifying” exactly to avoid getting into technical distinctions between signifying systems. But I will introduce one technical term which may be of use here, which is that of Charles Sanders Peirce: interpretant. The interpretant of a sign is primarily composed of responses to the sign, which may be conceptualization or may be some form of action or speech-act, or some inner sensation. If we think in terms of signification and how various organisms respond to signs, we can avoid the dangers of ascribing language to an infant, and still have a means of addressing how infants interact with their environment and each other in significant ways. And we can also avoid the trap of conceiving of our entire existence as somehow fundamentally linguistic. We are the language-speaking animal, but we have other, non-linguistic significant interactions with each other and the environment.

Pragmatism is a post-Idealist philosophy (Peirce was taught to recite Kant’s First Critique – in German! – at an early age; Dewey was an avowed Hegelian until WWI). Idealism makes a claim, actually similar to that of Logical Positivism, that knowledge is primarily or wholly the result of theory construction, and thus must be articulated linguistically. * Pragmatism begins with the recognition that this cannot be the case.

So the question may come down to whether what we know needs be communicated in language, or whether some other form of signification can be rich enough to inform our responses to the world.

But that does not mean we can be free of signification all together. The sting on the arm is a sign; what I say of it is an attempt to understand its significance, as response to it. If (assuming the scenario that I cannot see or find the bug or bug-parts) I come down with symptoms (signs) of malaria, that will enrich the signification of my response, and will also point to (sign) the species of bug that stung me. None of this need be predicated on the understanding that there is an inherent ‘bugness’ (some universal bug-hood) in the bug, the theory of which I must be familiar with before I form a proposition concerning it. And that is what I see as the real issue here.

—–
* This falls into the Nominalist trap: if all knowledge is theoretical, and all theories concern universals, and all existent entities are individuals, then the most we can say we know is our own theories, since individuals are not universals, but universals need to be constructed to account for them. Unless, that is, we allow that knowledge is not all one thing and that there is not only one way of knowing. I’m glad that my doctor has a theory of malaria that can be relied on should I come down with it, so I can get properly treated. But I know I was stung, and what that felt like, without any theory to account for it. The interpretation of it is, however, inevitable, as a making sense of the matter, and certainly necessary if I become sick and need to articulate to a doctor what I think happened.

 

Philosophy and its hope (2)

In my previous post, I wrote on the question of whether philosophy per se, but especially professional philosophy, needed to address the concerns of the communities in which it appears. Here I will more specifically address the historical problems professional philosophy has experienced by not addressing such concerns. This includes the infamous fracture between what is known as ‘Anglo-American,’ or Analytic, philosophy, and what is known as ‘Continental’ (more accurately, Phenomenological) philosophy. *

I attended graduate philosophy courses at the University of New Mexico back in the early ’90s. At the time, the department there was dominated by hard Analytics; a couple of aging professors were kept on to teach “history of” courses, regardless of their own expertise. (My favorite professor, Fred Gillette Sturm, who taught me Peirce, was one of these; but his expertise was in Liberation Theology and South American philosophy – which of course were not considered ‘real philosophy’ among Analytics.) But the department was having problems. First, the interests of the Analytic tradition were still determined by the Logico-Positivist agenda, and as such were particularly narrow. The problem with such narrow interests is that they’re not text-productive. There’s not only little innovation one can do in ‘P ⊃ Q’ language analysis, but there isn’t much commentary one can write on the texts that did achieve innovation. Thus not only did UNM have difficulty placing its doctoral students, but it also had to release younger assistant profs having difficulty getting published. Embarrassingly, the courses that attracted the most students at the undergraduate level were those the department heads wished would just go away – courses taught by the ‘historians,’ in aesthetics and culture, in Latin American, Asian, or German philosophy.

In returning to UNM Philosophy’s web-site occasionally over the years, what I’ve found is that the Department resolved this problem by, first, handing over the department to moderate (post)Analytics (particularly Wittgensteinians and Austinians), who turned to the ‘historians’ for advice. The department is now notable as ‘eclectic,’ including Phenomenologists, specialists in ‘environmental philosophy,’ and in regional culture and the arts, feminists – as well as the post-positivist Analytics (who apparently continue to hold administrative responsibility and the power that implies).

But we should remember that the dominant voices of the Analytic tradition never really cared about what happened in State universities like UNM. They were entrenched in the Ivy League. Quine, for instance, was happily situated at Harvard, and I know of no evidence that he cared a whit about what happened beyond the Ivy League thinkers who were his principal interlocutors. The Positivists and their immediate inheritors were content with a narrow philosophy of language, because they had no concern for the professional survival of those outside their immediate community.

This left that community, and the Analytic tradition inheriting its concerns, utterly isolated – not only from basic professional interests, including the survival of programs (and of students) outside that community, but cut off from other fields of research as well.

The point I was trying to make in my previous post is that non-philosophers in many fields, and even outside the academy, have a real interest in what philosophy has to say to us, in terms of the basic interests motivating our understandings of the world. The logical-positivists denied the validity of such interest; the political-institutional fall-out of that has been, in part, the reduction of interest – and funding – for philosophy departments.

At SUNY Albany, for my Doctorate in English, I’d already found the other possibility for dealing with the publication-impoverishment of the Analytic tradition. In the Philosophy Department at Albany, the ‘historians’ were effectively ‘ghettoized’ (largely restricted to teaching undergraduate courses), and the way the Department dealt with the lack-of-publication problem was by re-designing its program to emphasize the newer, publication-richer ‘Cognitive Science.’ That turn, promising a ‘boom,’ would later prove a ‘bust’ for many departments, as Cognitive Science integrated with the Computer Sciences or the neuro-sciences. Many young Cognitive Science philosophers drifted away from philosophy, into AI studies, or neurology, or mathematics. Why not? Cognitive Science was never about anything philosophical – about wisdom – to begin with. Why waste time with epistemology if the algorithms of neurological responses to stimuli could be measured for AI duplication, instead? – and with much richer grants than philosophy could ever attract.

Around then, a number of conservative Analytics came out to slam the influence of Continentalism in the Humanities. I was rather perturbed that these philosophers had missed an important point. In Literary studies, no precise criticism can be practiced without some theoretical sophistication; indeed every text of criticism comes embedded with some theory, however crude. The New Critics had run a long way on presumptions drawn from classical rhetoric, Kant, and Coleridge’s misreading of Kant (supplemented by Hegel and, most recently, T. S. Eliot). By the late ’70s, criticism based on these theoretic resources had been pretty much exhausted.

This opened a theoretical void in Literary studies; had the Analytic tradition a rich reading of literature or other cultural concerns, its proponents could have filled that void. Instead, they had virtually nothing to say on such matters. The French post-Structuralists did. Whom were young Literaturists to draw on for text-productive, publishable, theoretically informed criticism?

I’m trying to indicate that there are important extrinsic reasons for maintaining study of the history of philosophy. The Enlightenment philosophers had interesting things to say on a wide range of topics, and various fields drew on them for theoretical support for a long time. But once that resource grew stale, a healthy philosophy should have been able to fill that void. The Analytic tradition couldn’t do that. But such voids will be filled. The Continental tradition is much better informed in the history of philosophy, and thus was able to transmute the Enlightenment philosophies into new, text-productive forms of thought. Before condemning the Continental tradition, remember that it provided a great many young academics in various fields with the source material for publication – and thus jobs and security. If the Analytic tradition could have done better – it should have. It didn’t.

The situation has improved somewhat. Analytic philosophers now seem to write comfortably in conversational tones (important to opening access to their texts to a wider audience), and seem to be reaching out beyond their discipline. But the damage has been done. It is unclear whether professional philosophy in America can fully recover from it.

I have no problems with there being an academic field/discipline of philosophy. Of course; for one thing, there is no other way to preserve the history of philosophic thought; and for another, there’s no other way to advance it authoritatively.

But for philosophy to survive in the academy, it needs to address at least these four issues:

1) It needs to be text-productive, and directed toward expanding the publication possibilities (as much as to say, opportunities for employment and tenure) for graduating doctorates.

2) It must be able to address the needs of other fields of study that are theory-dependent – and I’m not talking about the sciences.

3) It must develop tolerance for variant grammars and rhetorics of text-productive discourse (yes, I’m talking about Continentalists and non-Analytic English or American philosophers). It needs to be ‘eclectic’.

4) And, yes, it must reach out beyond the Academy, and be willing to include the writing and thought, not only of Academics in other fields, but of thinkers outside of the Academy altogether.

I’m not in the Academy, so I couldn’t begin to say how this would be accomplished.

But during the Reagan era and its immediate aftermath, quite a number of colleges got rid of their philosophy departments altogether. Addressing the four issues above won’t necessarily ward off the budget-choppers, but may help in arguing against them.

Again, I’ll emphasize this point, because I was there, and I know how important it is. When Literature studies needed theoretical renewal, the Analytics were not there – the French Post-Structuralists were. If Analytics don’t like that, they shouldn’t bother ridiculing it – let them give professional Literaturists an alternative theory of literature. (Remember, we’re talking about people’s livelihoods, not some esoteric ‘principle of truth’ or whatever – I’m a Pragmatist; I don’t have much time for ‘Justified True Belief,’ ‘P ⊃ Q,’ ‘thought-experiment’ wheel-spinning. Certainly the future employees at various non-Ivy-League colleges across the country don’t.)

——

* Just as a historical side-note: both the Analytic tradition and the Phenomenological tradition originated in Germany. The only strong stream of American philosophy is Pragmatism – which, admittedly, also arose as a response to German philosophy. Meanwhile, English philosophy uncontaminated by German thought comes to an end with Mill’s later turn toward what we would call progressive politics.

Justice in the court of rhetoric

The court of rhetoric has two jurisdictions. The first is that of public discourse, and anyone is invited to the jury. The other is that of those trained in rhetorical analysis. That sounds as if the trained critic of rhetoric ought to be considered the ‘Supreme Court’ of the whole domain, or at least, one might say, ‘the final court of appeal’. But in fact the matter is the other way around; the public decides what rhetoric is persuasive by their active responses to it – by being persuaded by it. The critic has largely an advisory role. The critic clarifies the claims, discovers the fallacies, weighs the epistemic ground of the rhetoric – the unstated assumptions, the evidence provided for the claims, the implications of tropes and innuendos and their possible consequences.

A number of problems recur in the court of rhetoric, which explains why many people, from fascistic censors to philosophers, mistrust or even hate it. The principal of these, as I have discussed before, is that rhetoric, to be properly judged as successful, is not to be judged on whether its claims are right or wrong; in order to understand rhetoric as rhetoric, the principal determination of successful rhetoric is whether it works or not – whether it persuades its intended audience. So rhetoric arguing for ethically repugnant positions may be considered successful, if in fact it wins over its audience. Nobody’s really happy with that (except the successful rhetorician), but it is true nonetheless – how could it be otherwise? Rhetoric is a tool, not a strict form of communication; its whole reason for existence is getting others to do what one wants – whether voting a certain way, buying a certain product, or simply experiencing certain feelings leading to certain acts or behavioral responses. There is no logic to the statement “I love you,” but its rhetorical value is clear; and lovers have been relying on it for many centuries. What does the statement communicate? Maybe that the utterer loves the audience; but maybe not. That judgment waits on consequences.

That is another problem for the court of rhetoric: Rhetorical analysis and criticism, like any analysis, is directed towards the past – towards what has been said and what has unfolded as a consequence of its success or failure. But rhetoric in practice is always directed towards the future – to hoped-for events, behaviors, and consequences. That makes it difficult to adjudge a rhetorical usage successful or not until it has actually proved successful (or not). What a critic of rhetoric can achieve, concerning a current rhetorical practice, is determine the strength of its claims, the assumptions it depends on, the nature of its tropes and implications, the possible consequences of accepting these.

Yet this leads to another problem. The court of rhetoric does not have the same standard of judgment as that of logic. Logic judges much like a criminal court – the judgment is supposedly decided as absolute – “beyond a reasonable doubt.” The court of rhetoric, like civil procedure courts, decides on the standard of, “the weight of the evidence.” This is actually a just premise, because claimants before the court of rhetoric have opposing beliefs, not simply opposing interests. It would be unjust to one who actually believes in a position morally repugnant to others to assert that ‘no reasonable person would believe that, therefore they are lying.’ Of course they believe it – humans believe in a lot of objectionable, even repugnant things. They aren’t lying; they believe in what they are saying; the question then is whether their claims are weaker or stronger than counter-claims by those who believe otherwise.

To an absolutist mode of thought, trained in logic, that is really hard to comprehend. Yet the court of rhetoric cannot function otherwise without itself committing injustice – it would become a mere tool of a censor’s agenda.

Yet a strong and well-informed critic of rhetoric ought to be able to demonstrate when ethically questionable rhetorical claims are also weak rhetorical claims, because what is ethically questionable often relies on prior claims that are inadequately supported. Donald Trump’s claim that most Mexican immigrants are involved in criminal behavior, or that American Muslims celebrated the 9/11 attacks, can be easily undercut through reference to statistics in the first instance, or reliable reports by those on scene in the second. So these are weak claims before the court of rhetoric. Yet Trump’s rhetoric resonates with a small percentage of the population riddled with fears of differing ethnic groups and differing religions. This must not only be acknowledged, but addressed. Simply saying that what Trump says is ‘untrue’ or ‘unjust’ misses the complexity of what is going on (and frankly does injustice to his presumed audience). Also, it sets up opponents of Trump with a blind side: First, we lose sight of the appeal he has for his audience, and thus will find it more difficult to understand that audience and find some way to appeal to them with a countering rhetoric. Then, if we think the issue is Trump’s being ‘wrong,’ or simply lying, this may lull us into believing that all we need do is dismiss what he says. But in the public arena, this amounts to ignoring what he says. That means that his potential audience have only what he says to rely on, to feel some comfort in their already held fears and beliefs. That means that Trump’s essentially weak claims will appear stronger to his audience than they actually are. The danger is that Trump’s rhetoric begins persuading a potential audience without adequate response. Then, as has happened all too often in the past, weak rhetorical claims could prove successful.

Which should remind us that the judgments made in the court of rhetoric actually have profound practical consequences. The chief of these is that its determinations contribute to a stronger rhetoric in response to ethically questionable claims. It’s not enough to say that Trump is ‘wrong;’ one has to win over his audience, or at least his potential audience. And that requires a stronger rhetoric than Trump himself deploys, supplementary to any logical or other reasonable arguments we make against what he has to say. (Clinton’s suggestion, that Trump’s remarks on Muslims would be used for recruitment to ISIS, while not strictly true when made, was actually a clever rhetorical move – which since has been somewhat validated.)

As with courts of civil law, and unlike criminal courts or that of logic (which chop between the black-and-white of true-or-false), the court of rhetoric must adjudicate cases on a grey scale. That is because opposing interests are rarely easy to decide between, especially if grounded in beliefs truly held by the opponents; and because rhetoric triggers a host of responses – emotional, social, cultural – that are not reducible to ‘reasonably held’ positions.*

The art of persuasion – its theory, its practice, its criticism – is not about what is wrong or right or true or false, and never about some ‘view from nowhere’ or what some god might want us to be – and certainly not about what world we might prefer to live in. It is about the world as it is, and about people as they are. That understandably frustrates us; but the world is by nature a disappointment.

—–

*Part of the reason for having a careful study of rhetoric is that it clears some of the ground for further study of human psychology and of social and cultural relationships.

Dogs pee on trees, humans tell lies

There is nothing inherently wrong with lying. Lying, misrepresentation, and deception are simply tools in the language kit-bag we all carry, in order to communicate comfortably with others who may or may not share our perspectives. I have known quite a number of people rigidly committed to ‘absolute honesty’ (including myself, at one point in my life). I have never known anyone who has not told lies, who does not regularly or periodically, even routinely tell lies. Especially to themselves (how could one believe one’s self ‘absolutely honest’ otherwise?). Asking the human animal not to tell lies is as effective as asking canine animals not to spritz their scent on trees. It is a part of their being.

So: Two quotes, as comment:

“I assure you that (politicians do not lie). They never rise beyond the level of misrepresentation, and actually condescend to prove, to discuss, to argue. How different from the temper of the true liar, with his frank, fearless statements, his superb irresponsibility, his healthy, natural disdain of proof of any kind! After all, what is a fine lie? Simply that which is its own evidence.” – Oscar Wilde, “The Decay of Lying”

“(T)he wise thing is for us diligently to train ourselves to lie thoughtfully, judiciously; to lie with a good object, and not an evil one; to lie for others’ advantage, and not our own; to lie healingly, charitably, humanely, not cruelly, hurtfully, maliciously; to lie gracefully and graciously, not awkwardly and clumsily; to lie firmly, frankly, squarely, with head erect, not haltingly, tortuously, with pusillanimous mien, as being ashamed of our high calling.” – Mark Twain, “On the Decay of the Art of Lying”

The question is not whether we will lie – the only question is whether we lie viciously, or appropriately and kindly.

(Of course, being the King of New York, I find it useful not to lie about anything but my age – I’m really 16 – and the fact that I’ve been to the moon twice disguised as two different astronauts.)

In any event: As comparison (with a Kantian flavor as spiced by Schopenhauer), we would find it difficult to imagine a world in which everyone was trying to murder everyone else, at least at some point in their life (although certain role-play computer games come awfully close). But it is not difficult to imagine a world in which everybody lies at some point to someone, because that is the world in which we live.

Take a specific example. Often we find the debates concerning what might be called ‘ethical lying,’ circling around the ‘white lie,’ a falsehood which benefits another. I want to consider a social imperative that effectively moots the ethic altogether.

Most of us in the working class are expected to lie at work. At the very least, we are expected to reply to an executive’s inquiry concerning our morale, that we are content with our jobs, and loyal to our employers. Not doing so can result in punitive repercussions, even expulsion from employment. ‘Not happy with your job? Well, don’t worry, you’re not working here anymore.’

This kind of lie-inducing situation has recurred with variation for many centuries; it’s a fundamental feature of any class structured society. In some previous cultures the punitive measures could include torture or death.

But contemporary employees are also expected to lie on behalf of their employers (as part of their presumed loyalty), the most obvious case being retail employees who must sell to clients for maximum profits, regardless of actual value to purchasing consumers.

Now before we condemn the dishonesty generated by such situations tout court, we should be aware that there are important gradations among differing power relations and the persons involved. My favored car mechanic appears to be pretty upfront – ‘these are our services, and this is what we charge,’ with no (apparent) add-ons. But the work is good enough so that if there are add-ons, I’m willing to ignore them.

There are also ranges within which such dishonesty reaches limits and actually becomes open to rebuttal and punishment. It is such moments of open transgression that are socially noticeable and raise issues of honesty and dishonesty in public discussion.

Now, to return, then, to the more general problem: One reason we have no difficulty in condemning certain acts, like murder, as inherently wrong or immoral (whether categorically or prima facie) is because so few people actually perform such acts (when murders are committed en masse, of course, we call this “war” and criticize it using other criteria). But how can we hold as inherently wrong or immoral behavior that everybody engages in at some point or other?

This, BTW, has been the dilemma facing the moralists of major religions for generations, and has generated reams of religious apologetics, qualifications, and hair-splitting, as well as practices of repentance and redemption, temperance and forgiveness. E.g., sexual intercourse without intent to reproduce is technically considered wrong even among major Christian churches, but there is a kind of wink or dispensation when it occurs between married partners – as long as it is not done ‘lustfully,’ i.e., enjoying one’s own sex through the body of another. (But how is this even meaningful?)

Do we simply condemn the whole species as ‘evil’ (as some churches do)? Or do we admit that human behaviors are complex and indefinitely variable, and that ethics rather trails after many of them, as a means of understanding rather than prescription? (And, yes, we can have an ethic that treats different behaviors in different ways – utilitarian sometimes, deontological otherwise, virtue ethics for ourselves, Confucian regarding our parents, Aristotelian regarding our friends, etc., etc.)

So what I suggest here is that we conceive of an ethic that treats behaviors universally engaged in, along a spectrum of social acceptability, in a different way from behaviors that are infrequent and/or openly transgressive.

Lying is simply a form of communication, and a behavioral tactic of social survival. The presumed transgressive character of the behavior, which is useful to teach in order to prescribe the limits of acceptability, is something of a pretense – again, useful; but not entirely honest.

But what human behavior could ever be?

“I thought I would be honest –
what a dream!”
– Browning

—–
Developed out of a comment made at: http://theelectricagora.com/2015/11/26/the-scrooge-charade/

A lie is not a statement to be analyzed logically

This will begin a trilogy of thoughts on the problem of lying, one of which will, hopefully, appear on another, more general site (but if it is not accepted there, I’ll post it here). Hopefully, recurrent readers of this blog will recognize the relation between this discussion and a recent post on collective fiction making – https://nosignofit.wordpress.com/2015/11/27/collective-fiction-making-as-reality/ (and other posts here concerning the fictive nature of much of our story-telling, rhetoric, and presumed knowledge).

—–

After reading an article by Gerald Dworkin (http://opinionator.blogs.nytimes.com/2015/12/14/can-you-justify-these-lies/), considering the possible ethical justifications for telling a lie, I realized that the Analytic philosophy tradition’s efforts to develop an adequate theory of the lie – as logically analyzable statement – are frankly rather impoverished.

From Dworkin: “John lies to Mary if he says X, believes X to be false, and intends that Mary believe X.”

This is the baseline definition of the lie, at least in Analytic philosophy. See James Mahon’s SEP article: http://plato.stanford.edu/entries/lying-definition/

Unfortunately, this definition, while useful in a dictionary, is misplaced in an encyclopedia. It is woefully incomplete.

From Mahon:
“Consider the following joke about two travelers on a train from Moscow (reputed to be Sigmund Freud’s favorite joke) (reference: G. A. Cohen):

Trofim: Where are you going?
Pavel: To Pinsk.
Trofim: Liar! You say you are going to Pinsk in order to make me believe you are going to Minsk. But I know you are going to Pinsk.

Pavel does not lie to Trofim, since his statement to Trofim is truthful, even if he intends that Trofim be deceived by this double bluff.”

Actually, Trofim is correct; Pavel is lying. The problem with Analytic theorizing over lying is that, despite needing to contextualize lying, especially when considering its moral or ethical justification in certain situations, it doesn’t really grasp the profoundly social underlying structure, which necessarily includes audience expectations and the liar’s manipulation of these. Pavel knows Trofim doesn’t trust him, and so effectively lies to this expectation (not knowing how deeply Trofim distrusts him, to the point that he reveals the lie as a truth). This sort of situation, wherein a sentence can be both truth in one sense and yet a lie as to audience expectation, cannot be accounted for in most Analytic philosophy, where the matter should be decidable on the basis of sentential analysis, predicated on a justified-true-belief model of knowledge. Real lying is not about sentences, and it isn’t even about what anyone believes; it’s about social relationships and expectations. One can speak a lie without needing to believe the sentence spoken to be untrue – or indeed, without believing anything about it at all. (Pavel may not believe he’s going to Pinsk; he just wants Trofim to think he’s going to Minsk.) What’s important is the expectation of the audience within the context.

So: when considering the ethics of lying, one has to approach the matter on a case-by-case basis; otherwise, injustice will be done to those who behave in good will, or those who feel socially compelled. I’m not sure a sustainable universal or general theoretical statement on the matter is even possible, given the social contextualization of the behavior.

Those wishing to maintain the purity of the logical analysis of lies as statements seek to maintain a rigid distinction between the lie and other forms of deception. In practice, this distinction cannot be maintained. Elsewhere in the SEP article, Mahon writes:

“If it is granted that a person is not making a statement when, for example, she wears a wedding ring when she is not married, or wears a police uniform when she is not a police officer, it follows that she cannot be lying by doing these things.”

But I do not grant this; or, rather, I hold that its incompleteness trivializes it.* The notion that an unmarried woman wearing a wedding ring (aware of how others will perceive this, in a given cultural context) is not a kind of lie is uninformed as to how humans communicate through non-verbal signification, and the complex ways that the verbal and non-verbal relate.

Now, is the woman wearing the ring engaged in cruel play on innocents for the sake of vanity? or is she protecting herself in a threatening social context? That depends on the context, and on the expectations others have for her.

(Which, BTW, also tells us a little something about the social usefulness of cosmetics and apparel, doesn’t it?)

—–
* As a matter of social fact, everyone who is not a professional Analytic philosopher knows full well that fashion makes a statement.
—–

Developed out of a comment made at: https://platofootnote.wordpress.com/2015/12/18/platos-suggestions-9/