13 May 2013
The least interesting views on almost any philosophical question will inevitably (inevitably, at least, in our age of industrial-technological civilization driven by scientific innovation) be those of some eminent scientist who delivers himself of a philosophical position without bothering to inform himself of the current state of research on the philosophical question in question, usually decrying, at the same time, the aridity of philosophical discussion. (While this is not true of all scientific opinion on matters philosophical, it is mostly true.) So as not to make such a sweeping charge without naming names, I will here name Francis Crick as a perfect embodiment of this, and to this end I will attempt to describe what I will call “Crick’s Deepity.”
“Crick’s Deepity” sounds like the name of some unusual topographical feature that would be pointed out on local maps for the amusement of travelers, so I will have to explain what I mean by this. What is “Crick’s deepity”?
The “Crick” of the title is none other than Francis Crick, famous for sharing with Watson the credit for discovering the structure of DNA. It will take a little longer to explain what a “deepity” is. I’ve gotten the term from Daniel Dennett, who has introduced the idea in several talks (available on YouTube), and after learning of it from a video of one of Dennett’s talks I found the term in the Urban Dictionary, so it has a certain currency. A deepity is a misleading statement which seems to be profound but is not; construed in one sense, it is simply false; construed in another sense, it is true, but trivially true.
The most commonly adduced deepities are those that depend upon the ambiguity of quotation marks, so they work much better when delivered as part of a lecture than when written down. Dennett uses this example: “Love is just a word.” If we are careful with our quotation marks, this becomes either “‘love’ is just a word” (trivially true) or “love is just a word” (false).
Twentieth-century analytical philosophy expended much effort on clarifying the use of quotation marks, which are surprisingly important in mathematical logic and philosophical logic (Quine even formulated quasi-quotes in order to try to dispel the confusion surrounding the use-mention distinction). The use-mention distinction also became important once Tarski formulated his disquotational theory of truth, which employs the famous example, “‘Snow is white’ is true if and only if snow is white.” The interested reader can pursue on his own the relationship between deepities and disquotationalism; perhaps there is a paper or a dissertation here.
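Tarski’s schema is compact enough to state in full; the following is the standard formulation, with corner quotes naming the sentence:

\[
\mathrm{True}(\ulcorner \varphi \urcorner) \;\leftrightarrow\; \varphi
\]

The sentence quoted above is simply the instance with \(\varphi\) as “snow is white.” As will become apparent, deepities live precisely in the gap between a sentence and its quotation-name.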
In one of his lectures that mention deepities, Dennett elaborates: “A deepity is a proposition that seems to be profound because it is actually logically ill-formed.” Dennett follows his deepity, “Love is just a word,” with the assertion that, in its non-trivial sense, “whatever love is, it isn’t a word.” The logical structure of this assertion is, “Whatever x is, it isn’t an F” (or, better, “There is an x, and x is not F”). What Dennett is saying here is that it is a category mistake to assert, in this case, that “x is an F” (that “love is a word”).
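The ambiguity can be set out formally. The sketch below is my own rendering in first-order notation, not Dennett’s; “Word” abbreviates the predicate “is a word”:

\[
\mathrm{Word}(\text{``love''}) \quad \text{(true, but trivially: wordhood is predicated of the word, i.e., mention)}
\]
\[
\mathrm{Word}(\text{love}) \quad \text{(false, a category error: wordhood is predicated of love itself, i.e., use)}
\]

The deepity trades on sliding silently from the first reading to the second, borrowing the truth of the one and the apparent profundity of the other.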
Whether or not a category mistake is a logical error is perhaps open to question, while use-mention errors seem to be clearly logical errors. There is, however, a long history of treating theories of categories as part of philosophical logic, so that a category error (like conflating mind with matter, or with material processes) is a logical error. Clearly, however, Dennett is treating his examples of deepities as logically ill-formed as a result of being category errors. “Whatever love is, it isn’t a word,” he says, and he says that because it would be a category error to ascribe the property of “being a word” to love, except when love is invoked as a word. (If we liked, we could limit deepities to use/mention confusions only, and in fact the entry for “deepity” in the Urban Dictionary implies as much, but while Dennett himself used a use/mention confusion to illustrate the idea of a deepity, I don’t think that it was his intention to limit deepities to use/mention confusions only, as in his expositions of the idea he defines a deepity in terms of its being logically ill-formed.)
Now, that being said, and, I trust, being understood, we pass along to further deepities. Once we pass beyond obvious and easily identifiable confusions, fallacies, and paradoxes, the identification of deepities becomes controversial rather than merely an amusing exercise. It would be easy to identify theological deepities that Dennett’s audience would likely reject — religion is a soft target, and easy to ridicule — but it is more interesting to go after hard targets. I want to introduce the particular deepity that one finds in Crick’s book The Astonishing Hypothesis:
“The Astonishing Hypothesis is that ‘You,’ your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: ‘You’re nothing but a pack of neurons.’ This hypothesis is so alien to the ideas of most people alive today that it can truly be called astonishing.”
Francis Crick, The Astonishing Hypothesis: The Scientific Search for the Soul, New York: Touchstone, 1994, p. 3
No one should be astonished by this hypothesis; reductionism is as old as human thought. The key passage here is “no more than,” although in similar passages by other authors one finds the expression, “nothing but,” as in, “x is nothing but y.” This is the paradigmatic form of reductionism.
Some of my readers might be a bit slack-jawed (perhaps even, might I say, astonished) to see me call this paradigmatic instance of scientific reductionism a “deepity.” Taking up Dennett’s term “deepity” and applying it to the sort of scientistic approach to which Dennett would likely be sympathetic is clearly a case of my employing the term in a manner unintended by Dennett, perhaps even constituting a use that Dennett himself would deny was valid, if he knew of it. Indeed, Dennett is quite clear about his own reductionist view of mind, and about the similarity of his own views to those of Crick.
Dennett, however, is pretty honest as a philosopher, and he freely acknowledges the possibility that he might be wrong (a position that C. S. Peirce called “fallibilism”). For example, Dennett wrote, “What about my own reductios of the views of others? Have they been any fairer? Here are a few to consider. You decide.” In the following paragraph of the same book, Intuition Pumps And Other Tools for Thinking, Dennett described what he considers to be the over-simplification of Crick’s views on consciousness:
“You would think that Sir John Eccles, the Catholic dualist, and Francis Crick, the atheist materialist, would have very little in common, aside from their Nobel prizes. But at least for a while their respective views of consciousness shared a dubious oversimplification. Many nonscientists don’t appreciate how wonderful oversimplifications can be in science; they cut through the hideous complexity with a working model that is almost right, postponing the messy details until later. Arguably the best use of ‘over’-simplification in the history of science was the end run by Crick and James Watson to find the structure of DNA while Linus Pauling and others were trudging along trying to make sense of the details. Crick was all for trying the bold stroke just in case it solved the problem in one fell swoop, but of course that doesn’t always work.”
Daniel C. Dennett, Intuition Pumps And Other Tools for Thinking, 2. “By Parody of Reasoning”: Using Reductio ad Absurdum
Dennett then described Crick’s reductionist hypothesis (I’m leaving a lot out here; the reader is referred to the full account in Dennett’s book):
“…then [Crick] proposed a strikingly simple hypothesis: the conscious experience of red, for instance, was activity in the relevant red-sensitive neurons of that retinal area.”
Dennett, Op. cit.
Dennett followed this with counter-arguments that he himself offered (suggesting that Dennett is not himself quite the reductionist that he paints himself as being in popular lectures), but said of Crick that, “He later refined his thinking on this score, but still, he and neuroscientist Christof Koch, in their quest for what they called the NCC (the neural correlates of consciousness), never quite abandoned their allegiance to this idea.” Indeed, not only did Crick not abandon the idea, he went on to write an entire book about it.
It would be a mistake to take Crick’s reductionism in regard to consciousness in isolation, because it occupies a privileged place in a privileged scientific narrative. Vilayanur S. Ramachandran placed Crick and Watson’s discovery of the structure of DNA in the venerable context of repeated conceptual revolutions since the scientific revolution itself:
The history of ideas in the last few centuries has been punctuated by major upheavals in thought that have turned our worldview upside down and created what Thomas Kuhn called “scientific revolutions.” The first of these was the Copernican revolution, the realization that, far from being the centre of the Universe, the Earth is a mere speck of dust revolving around the Sun. Second came Darwin’s insight that we humans do not represent the pinnacle of creation, we are merely hairless neotenous apes that happen to be slightly cleverer than our cousins. Third, the Freudian revolution, the view that our behaviour is governed largely by a cauldron of unconscious motives and desires. Fourth — Crick and Watson’s elucidation of DNA structure and the genetic code, banishing vitalism forever from science. And now, thanks once again partly to Crick, we are poised for the greatest revolution of all — understanding consciousness — understanding the very mechanism that made those earlier revolutions possible! As Crick often reminded us, it’s a sobering thought that all our motives, emotions, desires, cherished values, and ambitions — even what each of us regards as his very own ‘self’ — are merely the activity of a hundred billion tiny wisps of jelly in the brain. He referred to this as the “astonishing hypothesis,” the title of his last book (echoed by Jim Watson’s quip “There are only molecules, everything else is sociology”).
Vilayanur S. Ramachandran, Perception, 2004, volume 33, pages 1151-1154
The narrative of the materialist reduction of mind to brain or to brain function fits nicely into the overarching scientific narrative of conceptual revolutions that are a rebuke to human pride. That the rebuke to human pride remains such a central theme in the ascetic practice of science merely shows the continuity of science with its medieval scholastic antecedents, in which the punishment of human pride was no less a central doctrine. Indeed, what we might call the Copernican imperative of contemporary science has become the dominant narrative of science to the point that few other narratives are taken seriously. (It is also wrong, or at very least misleading, but that is a topic for another, future, post.) Thus the Copernican imperative is a lot like the (repeatedly disputed) idea of progress in industrial-technological civilization: no matter how hard we try to find another paradigm to organize our understanding, we keep coming back to it. (For example, I have mentioned Kevin Kelly’s explicit arguments for progress in several posts, as in Progress, Stagnation, and Retrogression.)
Placing Crick’s thought in the context of the narrative that furnishes much of its meaning suggests further contexts for Crick’s thought — the ultimate intellectual context that inspired Crick, as well as alternative contexts that place a very different meaning and value on Crick’s reductionism. Surprisingly, as it turns out, the ultimate context of Crick’s views is the most simple-minded theologically-tinged science imaginable, which at once makes Dennett’s above-quoted observation about Crick’s and Eccles’ common ground pregnant with meaning.
Crick’s contempt for philosophical approaches to the problem of consciousness is so thick it practically drips off the page, and furnishes a perfect example of what I have called fashionable anti-philosophy. Despite Crick’s contempt for philosophy, Crick jumps directly into the use of theological language by repeatedly invoking the idea of a human “soul” — indeed, his book is subtitled, “the scientific search for the soul.” This is an important clue. Crick rejects philosophy, but he embraces theology. In other words, Crick’s position is theological, and Crick’s theological frame of mind is at least in part responsible for Crick’s dismissive attitude to philosophy.
Many contemporary philosophers (not to mention contemporary scientists) tie themselves into knots trying to avoid saying that thought and ideas and the mind are distinct from material bodies and physical processes, not because they can’t tell the difference between the two (like G. E. Moore’s famous dream in which he couldn’t distinguish propositions from tables), but because to acknowledge the difference between thoughts and things seems to commit one to a philosophical trajectory that cannot ultimately avoid converging on Cartesian dualism — and if there is any consensus in contemporary philosophy, it is the rejection of Cartesian dualism.
How are thoughts different from things, in so far as we understand “things” in this context to be corporeal bodies? The examples are so numerous and so obvious that it scarcely seems worth the trouble to cite a few of them, but since many people — Crick and Dennett among them — give straight-faced accounts of reductionism, I guess it is necessary. So, think of a joke. Or have someone tell you a joke. If the joke is really funny, you will be amused; maybe you will even laugh. But if you had an exhaustive delineation of brain structure and brain processes that correspond with the joke, nowhere in the brain structure or processes would you find anything funny or amusing. If you are a brain scientist you might find these brain structures and processes to be fascinating, but unless you’re a bit eccentric you are not likely to find them to be funny.
Similar considerations hold for tragedy: watch or read a great tragedy, and then see if you can find anything tragic in the brain structures and processes that correspond with viewing or reading a tragedy. If you are honest, you will find nothing tragic about brain structures and processes. Again, take two ideas, one of which is logically entailed by the other — or, if you like, take a syllogism and make it easy on yourself: Socrates is a man; all men are mortal; therefore Socrates is mortal. Find the brain structures and processes that correspond to these three propositions, and see if there is any relationship of logical entailment between the brain structures and processes. But how in the world could a brain structure or process be logically entailed by another brain structure or process? This is simply not the kind of property that brain processes and structures possess.
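For the record, the syllogism itself admits of a routine first-order formalization (standard notation, nothing exotic):

\[
\mathrm{Man}(\mathrm{Socrates}), \quad \forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})
\]

The entailment holds in virtue of logical form, and logical form is a property of propositions; whatever causal relations brain states bear to one another, they do not stand to one another in relations of logical form.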
Being funny or being tragic or being logically entailed by another proposition are properties that ideas might have but they are not the kind of properties that physical structures or processes possess. Physical structures have properties like length, breadth, and depth, while physical processes might have properties like temporal duration, chemical composition, or electrical charge (brain processes might have all three properties). It would be senseless, on the other hand, to speak of the length, breadth, depth, chemical composition or electrical charge of an idea. It is nonsense to say that, “The concept ‘horse’ is three inches wide.” Not true or false — just meaningless. It is equally nonsense to say that, “The pelvis is tragic.”
To conflate thoughts and things is a category mistake, and in so far as category mistakes are violations of philosophical logic, expressions that formulate category mistakes are logically ill-formed. When logically ill-formed propositions seem profound — the sort of thing which, if true, would be earth-shattering — but in fact are merely false, then you have what Dennett calls a “deepity.” Thus Crick’s deepity is his identification of “your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will” with “the behavior of a vast assembly of nerve cells and their associated molecules.” If this were true, it would be earth-shattering, but in fact it is a logically ill-formed expression that is a deepity. Whatever your joys, sorrows, and memories are, they certainly are not the behavior of nerve cells. That much should be uncontroversial, so let us call a spade a spade, and a deepity a deepity.
. . . . .
. . . . .
. . . . .
. . . . .
13 February 2013
In The Accidental World I said that individuals possess axiological uniqueness in virtue of ontological uniqueness — the very contingency of the world, the historical accidents of which we are the consequences, furnishes us with the concrete expressions of our individuality: faces, bodies, boundaries, borders — all that is ours.
It may have appeared mildly ironic to some that I should begin my trip to Japan with a meditation on individuality. Japan is, after all, known in the west as the source of the proverb that the stake that sticks up gets hammered down (出る杭は打たれる。 Deru kui wa utareru).
While Japan is stereotypically a land of stultifying conformity, Ruth Benedict’s classic study of Japan, The Chrysanthemum and the Sword, the result of wartime research commissioned by the U.S. Office of War Information, presents the reader with a sequence of dramatic contradictions of the Japanese character:
“The Japanese are, to the highest degree, both aggressive and unaggressive, both militaristic and aesthetic, both insolent and polite, rigid and adaptable, submissive and resentful of being pushed around, loyal and treacherous, brave and timid, conservative and hospitable to new ways. They are terribly concerned about what other people will think of their behavior, and they are also overcome by guilt when other people know nothing of their misstep. Their soldiers are disciplined to the hilt but are also insubordinate.”
Ruth Benedict, The Chrysanthemum and the Sword, Chapter I, “Assignment: Japan”
In other words, the Japanese are human, all-too-human. There was never a more persuasive argument for universal human nature than a detailed study of the life of a people that reveals their inner nature to be as conflicted as the inner nature of any other people.
In the same spirit as Benedict’s contradictory character traits, one would expect that the Japanese are at once both pervasively conformist and profoundly individualistic. The same might be said of Europeans, Africans, Latins, Americans, and so on.
What is distinctive about a people and a culture — that which is distinctively theirs and not ours — is the way in which the conflicted components of human nature are manifested in social institutions. And social institutions can vary quite significantly. Every society must find a way to keep the better part of its people fed, clothed, washed, housed, and occupied, but within these rather generous parameters there are ample opportunities for social experiments not duplicated elsewhere in the world.
Human beings, being all derived from a single speciation event, have a unity that cultural institutions do not possess. Social institutions, far more than individuals, embody the historical accidents that vary from place to place and time to time.
What we find when we travel are human beings, the same as human beings any other place on the planet, but whose lives have been shaped by the geographical and historical accidents that remain localized — unlike ourselves. We individuals do not remain localized. Like our prehistoric ancestors, we can start walking, and if we walk long enough and far enough (and maybe canoe for a while as well) we will find ourselves in another world shaped by other forces of geography and history than those familiar to us.
It is an accident that any of us happens to live where we live, just as it is an accident where we happen to be born. It is partially an accident, and partially a matter of choice, where we happen to travel. If we start walking, we first find ourselves at our neighbor’s, and then at our neighbor’s neighbor’s, and so on. Their lives are as accidental as are our lives. That they are the closest Other (and therefore representative of the narcissism of small differences) is as much an accident as where we happen to be born ourselves.
If we spend a little more time planning our expeditions, not merely setting out to walk away from our accidental home, but seeking a place in the world that agrees with our temperament, tastes, or preferences — that, too, is an accident, for while human nature (if there is any) may be traced to a single speciation event, individual temperament is an accident of history, and the places in the world that happen to offer aesthetic, intellectual, pragmatic, or other satisfaction to the individual mind do so as a matter of chance.
De gustibus non est disputandum.
. . . . .
. . . . .
. . . . .
27 October 2012
What is a definitive formulation?
Recently on my other blog I discussed the philosophical pursuit of definitive formulations. What is a definitive formulation? The reader will, I am sure, immediately see that giving a concise and accurate idea of what constitutes a definitive formulation would itself require a definitive formulation of a definitive formulation.
I don’t yet have a definitive formulation of what constitutes a definitive formulation. I could simply say that it is a formulation of a concept that could serve as a definition, but this wouldn’t be very helpful. Here is how I characterized it in my other post:
“…a handful of short, clear, concise, and intuitively accessible sentences…”
“…to put this in clear and simple terms, if I have a definitive formulation, that means if you stopped me on the street and asked me to explain myself while standing on one foot, I could do it. Lacking definitive formulations, the attempted explanation would go on a little too long to be comfortable (or safely balanced) on one foot.”
Lacking a definitive formulation of an idea that is central to our thought means that we can only say what Augustine said of time in his Confessions:
What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not: yet I say boldly that I know, that if nothing passed away, time past were not; and if nothing were coming, a time to come were not; and if nothing were, time present were not. (11.14.17)
quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio. fidenter tamen dico scire me quod, si nihil praeteriret, non esset praeteritum tempus, et si nihil adveniret, non esset futurum tempus, et si nihil esset, non esset praesens tempus.
In some cases, I think that we can move beyond this Augustinian limit to definition, and it is when we hit upon a definitive formulation that we are able to do this.
It seems appropriate that I should give a concrete example of something that I would identify as a definitive formulation, and since I have recently hit upon a formulation that I rather like, I will try to use this to show what a definitive formulation is.
What is temperament?
I have written several posts about temperament, including Temperamental Diversity, A Third Temperament, Intellectual Personalities and Temperament and Civilization. I don’t think that philosophy, science, or socio-political thought has yet done justice to the role that temperament plays in the world.
But what is temperament? The seventh of ten definitions in the Oxford English Dictionary (the one of the ten closest to the sense of “temperament” as I have been using the word) defines temperament as follows:
“Constitution or habit of mind, esp. as depending upon or connected with physical constitution; natural disposition”
The sixth of the OED definitions defines temperament in terms of the four humours recognized in medieval medical theory and practice:
“In mediæval physiology: The combination of the four cardinal humours (see humour n. 2b) of the body, by the relative proportion of which the physical and mental constitution were held to be determined; known spec. as animal temperament; also, The bodily habit attributed to this, as sanguine temperament, choleric temperament, phlegmatic temperament, or melancholic temperament (see the adjs.).”
In traditional philosophical parlance, a dictionary definition gives us a nominal definition, but as philosophers what we really want is a real definition. While the philosophical distinction between nominal and real definitions is ancient and widely familiar, and therefore probably ought to remain untouched, I think it is more intuitive to call these two kinds of definition formal definition and metaphysical definition. A formal definition situates the meaning of a term within a formal system, perhaps within the system of language, whereas a metaphysical definition situates the meaning of a term within the structure of the world. So I guess what I am saying here is that one function of a definitive formulation is to give a metaphysical definition — but to be able to do so without requiring the exposition of an entire metaphysical system. You can imagine why this might be difficult.
So, what would I offer as a definitive formulation of temperament, that (hopefully) goes beyond the formal (i.e., nominal) definition in the OED? I define temperament as follows:
Temperament is the intellectual expression of individual variability.
I hope that the reader doesn’t find this too anti-climactic. I’ll try to explain why I find this to be a fruitful formulation.
The charm of an idea
A definitive formulation, as I understand it, has an aphoristic quality: it is brief, concise, sententious, and pregnant with meaning. It also has a certain indefinable “appeal” that, like most forms of appealingness, is compelling to some even while it leaves others cold.
Wittgenstein formulated this appeal by calling it the “charm” that some proofs in mathematics and the foundations of mathematics possess. The later Wittgenstein was concerned to criticize the whole Cantorian conception of set theory and transfinite numbers, and much of Wittgenstein’s later philosophy of mathematics has this purpose implicitly as the center of the exposition. (In connection with this, I have previously mentioned Brouwer’s influence on Wittgenstein in Saying, Showing, Constructing, and more recently wrote more about Brouwer in One Hundred Years of Intuitionism and Formalism.)
Here’s what Wittgenstein said about mathematical “charm” in his lectures of 1939:
“The proof has a certain charm if you like that kind of thing; but that is irrelevant. The fact that it has this charm is a very minor point and is not the reason why those calculations were made. — That is colossally important. The calculations have their use not in charm but in their practical consequences.”
“It is quite different if the main role or sole interest is this charm — if the whole interest is showing that a line does cut when it doesn’t, which sets the whole mind in a whirl, and gives the pleasant feeling of paradox. If you can show that there are numbers bigger than the infinite, your head whirls. This may be the chief reason this was invented.”
Ludwig Wittgenstein, Wittgenstein’s Lectures on the Foundations of Mathematics, Cambridge, 1939, edited by Cora Diamond, University of Chicago Press, 1989, p. 16
With this in mind, I am well aware that the “charm” that I find in my definitive formulation of temperament may well be lost on others. The fact that an idea that has a certain charm for one person has none for another is itself a function of temperament. Individuals of different temperaments will find an intellectual charm in different formulations.
Part of the charm that a formulation has (or fails to have) is the connections that it forges to familiar theories. A definitive formulation, among its other functions, contextualizes a less familiar or less precise concept in an established theory or theories, enabling a systematic exploration and exposition of the idea in relation to familiar and therefore more thoroughly explored theories. Well known theories provide clear parameters for an idea, which, when formerly known only in a vague and imprecise form, had no clear parameters.
In formulating temperament as the intellectual expression of individual variability I am contextualizing human temperament in evolutionary theory, and thereby suggesting an interpretation of temperament based in and drawing upon evolutionary psychology. Thus evolutionary theory provides the parameters for temperament understood as the intellectual expression of individual variability.
Individual variability is one of the drivers of natural selection. When distinct individuals have distinct properties, a selection event may favor (select for) some properties while disfavoring (select against) other properties. Usually we think of the properties of an organism as being structural features of an organism: one finch has a longer beak than another, or one ape is better at walking on two legs than another. These differences might disappear into the dustbin of natural history if no selection event comes along that favors one or the other. But if a selection event does occur, and it favors some structural attribute of an organism that varies among individuals, the favored individuals will go on to experience differential survival and reproduction.
While we usually think of selection in structural terms, a selection event can also select for behaviors. Organisms can adapt to their environment through behaviors just as certainly as (and much more rapidly than) through structural changes in their bodies. Behavioral adaptation is no less significant in natural history than structural adaptation.
At very least with the emergence of human beings, and probably also with other species, both hominid precursors of Homo sapiens and other large-brained mammals, mind emerged in natural history. With the emergence of mind, there emerged also a novel basis of selection. Some minds are constituted in one way, while other minds are constituted in other ways. In other words, the same individual variability we find in bodies and behaviors is also to be found in minds.
If a selection event occurs that should happen to favor (or disfavor) any one kind of mind over any other kind of mind, those possessing the favored minds will enjoy differential survival and reproduction. With individual variability of minds represented in a sentient population — individual temperaments that lead individuals to think in different ways, and value things in different ways, and deliberate over alternatives in different ways — there is the continual possibility of natural selection.
The greater the variety of minds, the greater the number of alternatives amongst which a selection event can select, and the greater the likelihood that some one temperament is more fitted than others to survive the particular conditions that obtain.
Thus to formulate temperament as the intellectual expression of individual variability is to place mind within natural history.
To place mind within nature is a metaphysical formulation.
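The selection dynamics described above can be made concrete with a toy simulation. The sketch below is mine alone, and a deliberately crude one: the trait, the fitness function, and every number in it are arbitrary placeholders chosen only to exhibit the mechanism of variation plus differential reproduction, not a model of actual minds.

    import random

    # Toy model: a heritable, variable trait under selection.
    # All values below are arbitrary illustrations.
    random.seed(42)

    POP_SIZE = 1000
    GENERATIONS = 50
    OPTIMUM = 0.8  # a "selection event" favoring one region of trait space

    # Each individual is just a trait value in [0, 1] -- read it, if you
    # like, as a crude stand-in for one dimension of temperament.
    population = [random.random() for _ in range(POP_SIZE)]

    def fitness(trait):
        # Individuals closer to the environmentally set optimum survive
        # and reproduce at higher rates: differential reproduction.
        return max(0.0, 1.0 - abs(trait - OPTIMUM))

    for generation in range(GENERATIONS):
        # Reproduction weighted by fitness, with slight mutation: this is
        # where individual variability meets the selection event.
        weights = [fitness(t) for t in population]
        parents = random.choices(population, weights=weights, k=POP_SIZE)
        population = [min(1.0, max(0.0, p + random.gauss(0.0, 0.02)))
                      for p in parents]

    mean_trait = sum(population) / POP_SIZE
    print(f"mean trait after {GENERATIONS} generations: {mean_trait:.3f}")

Run it, and the population mean drifts toward the favored trait value within a few dozen generations: individual variability supplies the alternatives, and the selection event does the rest.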
. . . . .
. . . . .
. . . . .
22 August 2012
The idea of the individual has been central to Western Civilization; we can discern its earliest manifestations in ancient Greece, when potters signed their work and bragged that they were better than other potters; we can see its further development in the Italy of the Renaissance, when men of virtù like Machiavelli and Lorenzo the Magnificent forcefully asserted themselves as rightful masters of their time; we can see the new forms that it has taken after the Industrial Revolution, where the office towers of New York, like the medieval towers of San Gimignano, assert the ascendancy and priority of the individual.
Whether you love it or hate it, you have to acknowledge that the US is where individualism has reached its most unconditional realization. Some people glory in American individualism, and some despise it. If a member of the commentariat or the punditocracy wants to put a positive spin on individualism, they will call it “rugged individualism,” whereas if they want to put a negative spin on individualism, they will call it “rampant individualism.” There are plenty of examples of both of these attitudes, and I invite the reader to stay alert for these linguistic clues in future reading.
When earlier today I posted a longish piece on Tumblr about Appearance and Reality in Demographics, I continued to think about the recent poll results that I mentioned there, WIN-Gallup International ‘Religiosity and Atheism Index’ reveals atheists are a small minority in the early years of 21st century, as well as an earlier poll from the Pew Forum, U. S. Religious Landscape Survey, that I mentioned some years ago (in 2008) in More on Republican Disarray. In particular, I thought about how wrong prognosticators, forecasters, and social commentators have been about the development of religion in the US. There is an obvious reason for this. The US is not only a disproportionately religious nation-state (as revealed in numerous polls), it is also, as I noted above, a disproportionately individualistic nation-state, and the confluence of these ideological trends, the religious and the individualistic, means that US culture is marked by religious individualism and individual religion.
I touched on this peculiar character of religion in America — i.e., religious individualism — in my post American Civilization, in which I cited the song Highwayman, jointly performed by Johnny Cash, Willie Nelson, Kris Kristofferson, and Waylon Jennings (and written by Jimmy Webb). This is an obvious pop culture example of what I am getting at, but the careful reader of classic American fiction will also find a religious individualism that frequently issues in pluralism, diversity, and the frankly eclectic. To put it bluntly, people believe whatever they want to believe.
The attempt to pigeonhole American religious belief and practice always founders on the rock of religious individualism, which cannot be reliably classified in ideological terms. It is not consistently left or right, radical or traditional, liberal or conservative, activist or quietist — or, rather, it is all of these things at different times for different individuals.
Individual religion takes the form of individual choice, and different individuals choose differently for themselves, and choose differently at different times in their lives. This was one of the interesting results of the Pew Forum poll I mentioned above, which found a high level of religious observance in the US (everyone expected that), but when prying deeper found that, “More than one-quarter of American adults (28%) have left the faith in which they were raised in favor of another religion.”
While this may not sound too shocking prima facie, it would be difficult to overemphasize how historically unusual this is. One of the conflicts that marked the shift from the medieval world to the modern world in European history was that between the personal principle in law and the territorial principle in law (which latter emerges with the advent of the nation-state). Given the personal principle in law, an individual is judged according to his community. If you were a Christian on pilgrimage to the Holy Land and were accused of a crime in a Muslim country, you would be dealt with according to Christian law, not Muslim law. That is how it was supposed to work, and sometimes it did work that way, and for the decentralized societies of medieval Europe the personal principle in law fit the loosely coupled structures of a nearly non-existent state.
The personal principle in law persists today in the institution of diplomatic immunity, but apart from diplomats, those accused of a crime will be tried according to the law of the geographically defined nation-state where the crime occurred, and this legal process will have little or nothing to do with the ethnicity or traditional community of the accused individual. Again, that’s the way it’s supposed to work, though it is not difficult to cite violations of this principle.
The personal principle in law is all about ethnicity and tradition and individual identity being defined by a traditional community, which in turn defined the individual in terms of his or her role in that community. The idea that an individual might change their religion was like suggesting that an individual could put on or take off an identity like a suit of clothes. This would have been utterly incomprehensible to our ancestors; for the US it is now a fait accompli, and the basis for the organization of our society. Just as serial monogamy has come to characterize American courtship and marriage patterns, so too serial faith choices, adopted sequentially throughout the life of the individual as that individual experiences personal crises that precipitate temporary religious identification, characterize American religious patterns.
Indeed, one of the perennial themes of American life is that of personal re-invention (i.e., the putting on and taking off of identity). In the US, failure is not final. If things aren’t working out for you in Boston, you can move to Philadelphia, as Benjamin Franklin did. In a social context of personal re-invention and geographical fungibility, what counts is not one’s abject subordination to the community into which one happens to be born, but one’s cleverness and persistence in finding a place where one can feel at home. Part of this personal quest is also finding a faith in which one can feel at home, and this is not necessarily the faith of one’s parents or of one’s community.
In the context of religious individualism, orthodoxy counts for nothing. Or it counts for everything, but only because each man has his own orthodoxy, and there is no social mechanism in place in industrial-technological civilization to force the acquiescence of any individual to any other individual’s orthodoxy.
Even those who celebrate orthodoxy and who would welcome mechanisms of social control to force acquiescence to orthodoxy, cannot escape, at least while in America, the necessity of defining their own orthodoxy on their own terms. They are, in Rousseau’s terms, forced to be free, which in this context means they are forced to be religious individualists.
. . . . .
. . . . .
. . . . .
11 August 2012
In my last post, Taking Responsibility for Our Interpretations, I wanted to emphasize how both individuals and political wholes (social groups) seek to vacate their responsibilities by cloaking them in a specious facticity, so that an interpretation of the world is treated as if it were something more than or other than a mere interpretation. One of the most common ways of doing this in relation to history is to formulate an interpretation of history, whether personal or social, as “destiny.”
We are all painfully familiar with loaded terms from historiography like “destiny,” “progress,” “inevitability,” and the like. We find them impartially on the left and the right. In fact, the most strongly ideologically motivated institutions make a practice of most grievously distorting history to fit a particular model that flatters the ideology in question. All one need do is recall the utopian plans of communism and Nazism from the previous century to understand the extent to which visions of the past and the future supposedly inherent in the very nature of things issue in dystopian consequences.
I realize that I’ve engaged with this issue recently in slightly different terms. In Gibbon, Sartre, and the Eurozone I formulated two principles that I called Gibbon’s Principle and Sartre’s Principle. Gibbon’s Principle is that the authority of a social whole is inalienable. Sartre’s Principle is that the authority of the individual is inalienable. In other words, even if a social whole or an individual engages in the pretense of surrendering its autonomy, this is an act of bad faith (mauvaise foi) because the social whole or the individual retains the autonomy to act even as it denies this autonomy to itself. Gibbon’s Principle as applied to history means taking responsibility for the history of social wholes; Sartre’s Principle as applied to history means taking responsibility for the individual’s personal history.
It may seem a bit incredible to compare the benign Eurozone to malevolently utopian visions like communism or Nazism, but the narratives employed to defend the Euro — the inevitability of European integration and its historical irreversibility — are on a par with inherentist narratives that make claims upon history that cannot be sustained. In Gibbon, Sartre, and the Eurozone I compared the attempt to make the Eurozone permanent to the Cuban attempt to incorporate its present socio-political regime as a permanent feature of its constitution, which latter I had discussed in The Imperative of Regime Survival.
It is significant in this connection that the US experienced a traumatic challenge to its national claims of permanence that took the form of the Civil War. Had I been alive in the 1860s, I suspect that I would have argued that it was utter folly to craft a national constitution that had provisions for adding to the territories of the United States but no provisions for the peaceful secession of regions that no longer desired to be part of the US. Because there were no peaceful provisions for secession, secession took the form of militant secession, which was answered by militancy on the part of those who believed the Union to be indissoluble.
So am I arguing that the Confederates were right? That would certainly put me in an awkward position. If the South had peacefully seceded from the Union, it is entirely possible that the Balkanization of North America would have yielded a map of minor states such as we find in South America (after the breakup of Gran Colombia), though it is equally possible that the fractured Union would have left only two successor states in North America. Counterfactuals are difficult to argue with any kind of confidence precisely because inherentist and essentialist conceptions of history almost never provide an adequate narrative of what happens.
Regardless of what might have happened, what did in fact happen is that the unity of the US was imposed by force of arms, more or less guaranteeing the US a continental land empire without any power able to seriously challenge the US in the Western hemisphere. This likely resulted in the US repeatedly intervening in the internecine quarrels of Europe until the US itself took responsibility for European security, eventually winning the Cold War and becoming the dominant world power. None of this was inevitable, but it has been given the air of inevitability by nationalistic narratives of American exceptionalism.
There is a sense in which the Cuban narrative of a permanent revolutionary government and the Eurozone narrative of indissolubility seek to emulate the apparently successful indissolubility revealed by the US national experience. Who, after all, would not want to be the exception to the mutability of all human things?
. . . . .
. . . . .
. . . . .
9 August 2012
I can remember the first time that I came to realize that history is a powerful tool for conveying an interpretation. History isn’t just an account of the past, a chronicle of names, dates, and places, that only becomes distorted when the facts are selected and organized according to some idea that was no part of the facts as they occurred. History is always a selection of past facts and always organized according to some idea or other. No history can be complete, including all facts, so that every history is partial, and a partial selection of relevant facts means that there must be some principle of selection, and it is this principle of selection that is the idea that governs even the most objective of histories.
This realization that history is always an interpretation came to me when I was writing extensively on the history of logic (some time in the early 1990s, I think). This may seem an unlikely point of origin for an essentially political realization, but the history of logic, no less than the history of princes and thrones and battles, is a human, all-too-human story with its distinctive protagonists who each put forward their particular version of the events that go to make up the history of logic, versions which in the most tendentious accounts culminate in the work of the individual formulating the given narrative.
What is true for logic is true in spades for the histories of less abstract and more human, all-too-human stories. The narratives we rely on to orientate ourselves within the world — narratives of our own personal history, narratives of our families, narratives of our communities, nation-states, cultures, civilizations, and species — are interpretations of events even when every event incorporated in the narrative is objectively and unproblematically true. Meaning and value are given to facts and events when they are made part of a story that has meaning and value for those who create stories, those who transmit stories, and those who listen to stories.
Traditional narrative history tells a story; when you begin a story, you already know what kind of story you’re going to tell — whether it’s a romance or a comedy or a tragedy — since for any of these genres a successful telling of the story requires that the genre be “set up” in the very first lines of the tale. This has been made particularly clear by Hayden White’s detailed typology of narratives in his book Metahistory, in which he sedulously distinguishes modes of emplotment, argumentation and ideology.
Even while traditional narrative history has continued to dominate popular historical writing, academic historiography has moved ever further away from narrative models of historical exposition. In several posts I have mentioned the influence of Braudel and the Annales school of historiography, which, influenced by mid-century structuralism on the European continent (think Claude Lévi-Strauss), brought a much more “scientific” approach to writing history. Braudel’s writing is so accomplished that we scarcely notice he is writing more as a scientist than an historian, but this development was only to continue and to escalate as scientific historiography migrated to the New World and had the resources of Big Science upon which to draw.
While scientific historiography possesses the gold standard in terms of objectivity and the veracity of the facts employed, science writers tend to be much less sophisticated and less subtle writers than traditional historians, so when the inevitable popularizations of ideas in the vanguard of science emerge they tend to be penned with the kind of naïve optimism one would expect of the Enlightenment, with a generous admixture of theological posturing and ham-handed moralizing (I have briefly addressed the latter two in Higgs: what was left unsaid). The result is that when scientific historiography enters the marketplace of ideas, it, too, is freighted with meanings and values that are independent of the facts presented, although the scientific framework of the discovery and exposition of the facts sometimes conceals the moral message.
Well, none of this should really be new to any of us. Any sophisticated reader is already aware of the cautions I have formulated above about interpretations versus facts, and already in the nineteenth century Nietzsche put the whole matter in a particularly unambiguous formulation when he said that, “Against that positivism which stops before phenomena, saying ‘there are only facts,’ I should say: no, it is precisely facts that do not exist, only interpretations.” Nevertheless, my recent reflections have once again impressed me with the importance of this observation.
I have mentioned in several posts how much Sartre’s lecture Existentialism is a Humanism has influenced my thinking over the years. I was reflecting on this again recently, and the lesson that I took away from this most recent review was the importance of taking responsibility for our interpretations, including if not especially our interpretations of history.
Here is a passage from Sartre that I quoted previously in Of moral choices and existential choices, in which Sartre has just told a story of how a student came to him to ask whether he should stay at home to be a comfort to his mother or if he should leave to join the resistance:
“…I can neither seek within myself for an authentic impulse to action, nor can I expect, from some ethic, formulae that will enable me to act. You may say that the youth did, at least, go to a professor to ask for advice. But if you seek counsel — from a priest, for example — you have selected that priest; and at bottom you already knew, more or less, what he would advise. In other words, to choose an adviser is nevertheless to commit oneself by that choice. If you are a Christian, you will say, consult a priest; but there are collaborationists, priests who are resisters and priests who wait for the tide to turn: which will you choose? Had this young man chosen a priest of the resistance, or one of the collaboration, he would have decided beforehand the kind of advice he was to receive. Similarly, in coming to me, he knew what advice I should give him, and I had but one reply to make. You are free, therefore choose, that is to say, invent. No rule of general morality can show you what you ought to do: no signs are vouchsafed in this world.”
Jean-Paul Sartre, Existentialism is a Humanism
By concluding this passage with, “no signs are vouchsafed in this world,” Sartre is not only saying that each must take responsibility for explicit decisions and actions, but also for our identification of signs and what we make of them. Contrary to Sartre’s declaration of the absence of signs, I think that most people do sincerely believe that signs are vouchsafed in this world. I have come to think of this belief in signs as a way to avoid responsibility for one’s interpretations. If one says, e.g., “a rainbow appeared in the sky as I was contemplating suicide, and I realized that this was a sign from on high that I should not kill myself,” one is surrendering one’s autonomy even while acting — the moral equivalent of keeping one’s cake and eating it too.
I don’t think that most people have a problem with the explicit judgments they formulate when they say things like, “I think…” or “I believe…” or “I have decided to…” since these are clear statements of personal responsibility for one’s decisions and actions. But interpretations can be much more subtle — in some cases, perhaps in many cases, interpretations are so subtle that they are difficult to understand as interpretations rather than as cold, hard facts.
Individuals who have never had their Weltanschauung called into question are particularly vulnerable to giving their interpretations an air of facticity. In so far as travel can place an individual into a situation in which everything formerly taken for granted is questioned (something I touched upon in Being the Other), one of the virtues of travel is to make one aware of one’s Weltanschauung, and to know that there is nothing necessary about the particular interpretations that one gives to particular states of affairs.
Of course, travel in and of itself is not enough. Some people, when they travel, surround themselves with their compatriots so that they are never exposed to an unaccustomed world without the support of like-minded fellows. People do exactly the same thing without bothering to travel: i.e., always surrounding themselves with like-minded individuals and never placing themselves in a situation in which their beliefs can be radically questioned — or even gently questioned.
Thus we see that the work of taking responsibility for our interpretations is the painful work of self-knowledge even to the point of self-alienation. For this, few have the requisite hardihood. But we must try.
For those who do possess the intestinal fortitude for self-examination that reveals interpretations as interpretations, stripping them of their spurious facticity, there is an added aesthetic benefit: it is from this point of view, seeing the world for what it is, that we are able to see and to forget the name of the thing one sees.
The uninterpreted world — what Husserl called the prepredicative world — is an ideal, and as an ideal it is likely to be elusive and difficult of accomplishment. But that is no argument against it. As Spinoza said, all things excellent are as difficult as they are rare. Taking full responsibility for our interpretations is both difficult and rare, but it is a noble ideal to pursue.
. . . . .
. . . . .
. . . . .
1 August 2012
In the classic Ingmar Bergman film The Seventh Seal, a Swedish knight, Antonius Block, returns to his native Sweden after ten years of crusading in the Holy Land. Upon his return he encounters the figure of Death, with whom he engages in a chess match, and as the game of death proceeds, the knight and his squire, which latter has become so disillusioned as to be cynical, see the ravages of the Black Death, see a witch burned, and see flagellants whipping themselves in a frenzy of religiously-inspired self-mortification (curiously parallel to the religiously-inspired violence in which the crusading knight himself has participated). The knight has returned from a traumatic experience to find not respite but further trauma. All in all, this is not the sort of homecoming for which one would wish.
The knight has been on crusade, but what is a crusade but an armed pilgrimage? At the same time that knights were traveling on crusade, others were traveling the same roads as unarmed pilgrims. The knight going to the Holy Land to do battle with the infidel is as much a pilgrim as the friar with his staff is a pilgrim. It was commonplace in the Middle Ages for religious officials to offer absolution of sins to knights for fulfilling their religious duty to go on crusade to liberate the Holy Land.
The experience of return after many years of absence, whether due to crusade or pilgrimage, would have commonly been as unsettling as Bergman’s knight coming home to the plague and a chess match with Death — though not likely as dramatic. In that other famous case of a return after ten years’ absence, The Odyssey, Odysseus on his return to Ithaca must deal with the unruly Suitors of Penelope — but after dispatching them all, he is eventually accepted by his wife and then by his father. In other words, Odysseus experiences a kind of closure and resolution; the only closure for Bergman’s knight is that of death.
Can we go home again? Is it even possible to go home again, to the same home, as one’s selfsame self, after having walked abroad in life (as the ghost of Jacob Marley puts it)? Or is it impossible to step twice into the same river because new waters are always flowing upon us? Kierkegaard devoted an entire book to this question, Repetition. Kierkegaard frames the question like this: is a repetition possible? Homer says yes. Bergman says no.
Kierkegaard had a personal stake in the question, since he had tossed over Regine Olsen after having proposed to her, and after her acceptance. He ran. In other words, Kierkegaard was a cad, and it bothered his conscience. He wanted to know if he could make up for it. In a way, he did, though he probably didn’t know it. A friend of mine who studied Kierkegaard much more intensively than I ever did told me that in her later married life to another man, Regine Olsen and her husband spent their spare time reading Kierkegaard’s devotional treatises to each other. Strange, no? But life is full of strange occurrences.
Is a repetition possible? Do we get a second chance? Can we go home again? The questions are inter-related, but they are not the same. Rather, they are the same for some, but not for all. And I think we can formulate it like this: those for whom defamiliarization is the more traumatic have a second chance upon return; those for whom refamiliarization is the more traumatic do not regard homecoming as a second chance, but look forward to their next departure as their second chance. In either case, a repetition is possible, but in no sense guaranteed. So I must side with Homer as against Bergman, but I must also observe that the difference in an individual’s response to defamiliarization and refamiliarization marks the ground of a distinction: if you are not on one side, you are on the other.
. . . . .
. . . . .
. . . . .
The Russian formalist literary critic Viktor Shklovsky introduced the term “defamiliarization” to indicate that function of literature and art which is to make the familiar strange in order to see that which is most common in a new light. It is not only art that serves this function. Science often serves in the capacity of defamiliarization and forces us to see familiar aspects of the world in new ways. Travel may be considered a personal form of defamiliarization. I touched on this earlier in Being the Other when I wrote:
“…the ignorant traveler bumbles through the business of ordinary life in a foreign country, though the business of ordinary life feels quite extraordinary. The extraordinariness of the everyday is another familiar feature of travel, and this can be expressed in ways that are both illuminating and embarrassing.”
If travel is a form of defamiliarization, then returning from travel constitutes a kind of refamiliarization. I often thought of this when returning from my earlier travels, when I would be away for a month at a time, as it always felt difficult to resume the mundane details of mundane life; even after the most spartan and ascetic travel — and if I described my early travel to you, I think you would agree that it was pretty spartan — one does not easily fit back into one’s life at home. Thus travel is not only a defamiliarization of the world, it is also a defamiliarization of oneself.
If that weren’t enough, travel also involves a process of defamiliarization with one’s own expectations for travel. A bus stop is not an auspicious place to be dropped off in a new and unfamiliar country, but it is likely that the traveler will find himself or herself unceremoniously dropped off at a bus stop or staggering out of a train station and wondering what comes next. The important thing here is that this is precisely what is new: one doesn’t know what comes next.
The expectations that a new traveler has for a distant land — derived from a lifetime of travel posters, glossy brochures, full color magazine spreads, films of the exotic unknown, and travel memoirs both witty and insightful — are likely to be disappointed by the same infrastructure of industrialized civilization that makes international travel quick, convenient, affordable, and accessible. The disruption to one’s schedule by travel is reduced to a day of sitting on an airplane and being shuttled between various lines and waiting rooms and officials examining papers.
Upon arrival at one’s destination, one travels through the outlying industrial development that inevitably surrounds airports, and after this one is treated to a view of the extensive suburbs that have swelled all the cities of the industrial age. It may not be until the next day, when one emerges from one’s hotel after a night recovering from the previous day’s travel, that one comes to the historic center of an ancient city and finally begins to see the objects of touristic pilgrimage, which by now seem rather small and insignificant when surrounded by a metropolis that has but little relationship to one’s tourist intentions. The only place that I can recall that was immediately striking upon stepping out of the train station was Venice, and that was in 1989 — by now its character may well have changed.
The refamiliarization of returning home involves this same process in reverse order: one detaches and disentangles oneself from the landscape and the people and the way of life to which one has quickly become accustomed, and of which one has even grown fond — itself a painful process, as it often feels like a betrayal of oneself to leave that which one has sought and finally found, so that departure feels like exile rather than being the opposite of exile — and one passes by degrees back into the infrastructure of industrialized civilization, back from the countryside, into the center of a capital city, then through its suburbs and its outlying industrial districts until one at last arrives at the forlorn landscape of an airport, with its steel and glass buildings and its asphalt tarmac… the very picture of bleakness and desolation, if ever there was an uninviting spectacle to welcome one on one’s journey “home” (which we must now put in scare quotes because the prospect of return no longer feels like home).
The airport, a waystation for touristic pilgrims, has all the anonymity and neutrality one would expect from a transient space not intended as a place for any kind of familiarity at all, but rather a place to make the transition from the familiar to the unfamiliar, or from the unfamiliar back to the familiar.
If the airport were not already enough of a shock, then there is the abrupt re-insertion into the matrix of ordinary life and work, the telephone ringing, errands to run, obligations to meet, and a life to be lived that no longer feels like one’s own.
Which is the more profoundly jarring and disturbing experience — defamiliarization or refamiliarization?
. . . . .
. . . . .
. . . . .