10 October 2013
Life Lessons from Morally Compromised Philosophers
With particular attention to the Heidegger case
I began this blog with the idea that I would write about current events from a philosophical perspective and said in my initial post that I wanted to see history through the prism of ideas. This continues to be my project, however imperfectly conceived or unevenly executed. It is a project that necessitates engagement both with the world and with philosophy simultaneously. And so it is that my posts have ranged widely over warfare and the history of ideas, inter alia, and as a consequence of this dual mandate I have often found myself reading and citing sources that are not the common run of reading for philosophers. Some philosophers, however, are both influential and controversial, and Martin Heidegger has become one such philosopher. Heidegger’s influence in philosophy has only grown since his death (primarily in Continental thought), but the controversy about his involvement with Nazism has kept pace and grown along with Heidegger’s reputation.
It may help my readers in the US to understand the impact of the Heidegger controversy to compare it to the intersection of evil and ideals in an iconic American thinker, a man more familiar than Heidegger, who was an iconic continental thinker. Take Thomas Jefferson, for example. Some years ago (in 1998, to be specific) I saw two television documentaries about the life of Thomas Jefferson. The first was a typical laudatory television documentary about one of the American founding fathers (I didn’t take notes at the time, so I don’t know which documentary this was, but it may well have been the 1997 Ken Burns film about Jefferson, which I recently re-watched to confirm my memory of its ambiguous treatment of Jefferson’s relationship to his slaves), which touched upon the possibility of Jefferson fathering children by his slave Sally Hemings, while not taking the idea very seriously.
Then in 1998 the news came out of DNA tests that proved conclusively that Jefferson had fathered the children of his slave Sally Hemings, and the scientific nature of the evidence rapidly made inroads among Jefferson scholars, who had been slow to acknowledge Jefferson’s “shadow family” (as such families were once called in the antebellum South). The consensus of Jefferson scholars changed so rapidly that it makes one’s head spin — but only after two hundred years of denial. And there remain those today who continue to deny Jefferson’s paternity of Sally Hemings’ children.
Not long after this news was made public, I saw another documentary about Jefferson in which the whole issue was treated very differently; the perspective of this documentary accepted as unproblematic Jefferson’s paternity of Sally Hemings’ children, and examined Jefferson’s life and ideas in the light of this “shadow family.” I don’t think that Jefferson suffered at all from this latter documentary treatment; he definitely came across less as an icon and more as a fallible human being, which is not at all objectionable. It is, in fact, more human, and more believable.
Though Jefferson did not suffer in my estimation because he was revealed to be human, all-too-human, there is nevertheless something deeply disturbing about the image of Jefferson sitting down to dinner with his white family while being served by the mulatto children he sired with his slave, and it is deeply disturbing in a way that is not at all unlike the way that it is deeply disturbing to know that when Heidegger met Karl Löwith in 1936 near Rome (two years after Heidegger left his Rectorship in Freiburg), Heidegger wore a Nazi swastika pin on his lapel the entire time, knowing that Löwith was a Jew who had been forced to flee Nazi Germany. One cannot but wonder, on a purely human level, apart from any ideology, how one person could be so utterly unconcerned with the well-being of another.
It would be disingenuous to attempt to defend the indefensible by making the claim that all intellectuals of Jefferson’s time were conflicted over slavery; this simply was not the case. Schopenhauer, for example, consistently wrote against slavery and never showed the slightest sign of wavering on the issue, but, of course, Schopenhauer’s income did not depend on slaves, while Jefferson’s did.
We know that Jefferson struggled mightily with the question of slavery in his later years, like many a conflicted man tying himself in knots trying to square the actual record of his life with his ideals. It is easy to dismiss individuals, even those who have struggled with the contradictions in their life, as mere hypocrites, but the charge of hypocrisy, while carrying great emotional weight, is the least interesting charge that can be made against a man’s ideas. As I wrote in my Variations on the Theme of Life, “The world is mendacious through and through; mendacity is the human condition. To renounce hypocrisy is to renounce the world and to institute an asceticism that cannot ever be realized in practice.” (section 169)
Heidegger does not seem to have been conflicted about his Nazism in the way that Jefferson was conflicted about slavery. Many years after the Second World War, when the record of Nazi death camps was known to all, Heidegger could still refer to the “inner truth and greatness of this movement,” while in the meeting with Löwith mentioned above Heidegger was quite explicit that his political engagement with Nazism was a direct consequence of his philosophical views.
One obvious and well-trodden path for handling a philosopher’s political “indiscretions” is to hold that a philosopher’s theoretical works are a thing apart, elevated above the world like Plato’s Forms — one might even say sublated in the Hegelian sense: at once elevated, suspended, and canceled. This strategy allows one to read any philosopher and ignore any detail of his life that one chooses. I don’t think this strategy does any service to intellectual honesty.
I myself was once among those who read philosophers for their philosophical ideas only, and while I was never a Heidegger enthusiast or a Heidegger defender, I thought of Heidegger’s political engagement with Nazism as mostly irrelevant to his philosophy. At some point I don’t clearly recall, I became intensely interested in Heidegger’s Nazism, and there was a flood of books telling the whole sorry story to feed my interest: Heidegger and Nazism by Victor Farias, the book that opened Heidegger’s Nazi past to scrutiny; On Heidegger’s Nazism and Philosophy by Tom Rockmore; The Heidegger Controversy: A Critical Reader, edited by Richard Wolin; Heidegger’s Crisis: Philosophy and Politics in Nazi Germany by Hans Sluga; Heidegger, Philosophy, Nazism by Julian Young; The Shadow of That Thought by Dominique Janicaud; and, most recent and perhaps the most devastating of them all, Heidegger: The Introduction of Nazism into Philosophy in Light of the Unpublished Seminars of 1933-1935 by Emmanuel Faye.
Even with all this material now available on Heidegger’s Nazi past, Heidegger still has his apologists and defenders. Beyond the steadfast apologists for Heidegger — who are perhaps more compromised than Heidegger himself — there are a variety of strategies to excuse Heidegger from his involvement with the Nazis, as when Heidegger’s Nazism is called an “episode” or a “period,” or characterized as “compromise, opportunism, or cowardice” (as in Julian Young’s Heidegger, Philosophy, Nazism, p. 4). Young also uses the terms conviction, commitment, and flirtation, though Young ultimately exculpates Heidegger, asserting that, “…neither the early philosophy of Being and Time, nor the later, post-war philosophy, nor even the philosophy of the mid-1930s — works such as the Introduction to Metaphysics with respect to which critics often feel themselves to have an open-and-shut case — stand in any essential connection to Nazism.” (Op. cit., p. 5)
Heidegger’s engagement with fascism represents the point at which Heidegger’s ideas demonstrate their relationship to the ordinary business of life, and this is a conjuncture of the first importance. This is, indeed, identical to the task I set myself in writing this blog: to demonstrate the relationship between life and ideas. And Heidegger, I came to realize, was a particularly clear and striking case of the intersection of life and thought, though not the kind of example that most philosophers would want to claim as their own. I can fully understand why a philosopher would simply prefer to distance themselves from Heidegger and, while not denying Heidegger’s Nazism, would choose not to talk about it either. But that Heidegger thereby becomes a problem for philosophy and philosophers is precisely what makes him interesting. We philosophers must claim Heidegger as one of our own, even if we are sickened by his Nazism, which was no mere “flirtation” or “episode,” but constituted a life-long commitment.
Heidegger was not merely a Nazi ideologue, but also briefly a Nazi official. The Nazification of the professions was central to the strategy of Nazi social revolution (with its own professional institution, the Ahnenerbe), and a willing collaborator such as Heidegger, prepared to Nazify a university, was a valuable asset to the Nazi party. Ultimately, however, Heidegger was embroiled in an internal conflict within the Nazi party, and when the SA was purged and many of its leaders killed on the Night of the Long Knives, the Strasserist SA faction lost out decisively, and Heidegger with them. Thereafter Heidegger was watched by the Nazi party, and Heidegger’s defenders have used this party surveillance to argue that Heidegger was regarded as a subversive by the Nazi party. He was a subversive, in fact, but only because he represented a faction of Nazism that had been suppressed. Heidegger continued as a Nazi party member, and paid his party dues right up to the end of the war. We see, then, that the SA purge was not merely a brutal struggle for power within the Nazi party, but also an episode in the history of ideas. This is interesting and important, even if it is also horrific.
The more carefully we study Heidegger’s philosophy, and read it in relation to his life, the more we can understand the relation of even the most subtle and sophisticated philosophy to ideological commitment and to the ordinary business of life. And it wasn’t only Heidegger who compromised himself. There is Frege’s political diary, less well known than Heidegger’s political views, and the much more famous case of Sartre and Camus. There are at least two book-length studies of the public quarrel and falling-out between Sartre and Camus (Sartre and Camus: A Historic Confrontation and Camus and Sartre: The Story of a Friendship and the Quarrel that Ended It by Ronald Aronson). Camus most definitely comes off looking better in this quarrel, with Sartre, the sophisticated technical philosopher, looking like a party-line communist, and Camus, the writer, the literary man, showing true independence of spirit. The political lives of Camus and Sartre have been written about extensively, but even so Heidegger remains an interesting case because of the impenetrable complexity of his thought and the manifest horrors of the regime he served. There ought to be a disconnect here, but there isn’t, and this, again, is interesting and important even if it is horrific.
I have had to ask myself if my interest in Heidegger’s Nazism is prurient (in so far as there is a purely intellectual sense of “prurient”). There is something a little discomfiting about becoming fascinated by studying a great philosopher’s engagement with fascism. I am not innocent in this either. I, too, am a morally compromised philosopher. Perhaps the most I can hope for is to be aware of what I am involved in by making a careful study of philosophy’s involvement in politics. Naïveté strikes me as inexcusable in this context. I hope I have not been naïve.
I have not scrupled to read, to think about, and to quote individuals who were not only ideologically associated with crimes of unprecedented magnitude, but who have personally carried out capital crimes. In the case of Theodore “Ted” Kaczynski, who was personally responsible for several murders, I have carefully read his manifesto, Industrial Society and its Future (read it several times through, in fact), have thought about it, and have quoted it. Others who have been influenced by Kaczynski’s work and have publicly discussed it have felt the need to apologize for it, like scientists who consider using the research of Nazi doctors. But an apology feels like an excuse. I don’t want to make excuses.
Heidegger, like Nazism itself, is a lesson from history. We can benefit from studying Heidegger by learning how the most sophisticated philosophical justifications can be formulated for the most vulgar and the most reprehensible of purposes. But we cannot learn the lesson without studying the lesson. Studying the lessons of history may well corrupt us. That is a danger we must confront, and a risk we must take.
. . . . .
. . . . .
. . . . .
25 September 2013
Hegel is not remembered as the clearest of philosophical writers, and certainly not the shortest, but among his massive, literally encyclopedic volumes Hegel also left us one very short gem of an essay, “Who Thinks Abstractly?”, which communicates one of the most interesting ideas from Hegel’s Phenomenology of Mind. The idea is simple but counter-intuitive: we assume that knowledgeable individuals employ more abstractions, while the common run of men content themselves with simple, concrete ideas and statements. Hegel makes the point that the simplest ideas and terms, those that tend to be used by the least knowledgeable among us, also tend to be the most abstract, and that as a person gains knowledge of some aspect of the world, abstract terms like “tree” or “chair” or “cat” take on concrete immediacy: previous generalities are replaced by details and specificity, and one’s perspective becomes less abstract. (I wrote about this previously in Spots Upon the Sun.)
We can go beyond Hegel himself by asking a perfectly Hegelian question: who thinks abstractly about history? The equally obvious Hegelian response would be that the historian speaks the most concretely about history, and it must be those who are least knowledgeable about history who speak and think the most abstractly about history.
“…it is difficult to imagine that any of the sciences could treat time as a mere abstraction. Yet, for a great number of those who, for their own purposes, chop it up into arbitrary homogeneous segments, time is nothing more than a measurement. In contrast, historical time is a concrete and living reality with an irreversible onward rush… this real time is, in essence, a continuum. It is also perpetual change. The great problems of historical inquiry derive from the antithesis of these two attributes. There is one problem especially, which raises the very raison d’être of our studies. Let us assume two consecutive periods taken out of the uninterrupted sequence of the ages. To what extent does the connection which the flow of time sets between them predominate, or fail to predominate, over the differences born out of the same flow?”
Marc Bloch, The Historian’s Craft, translated by Peter Putnam, New York: Vintage, 1953, Chapter I, sec. 3, “Historical Time,” pp. 27-29
The abstraction of historical thought implicit in Hegel and explicit in Marc Bloch is, I think, more of a problem than we commonly realize. Once we look at the problem through Hegelian spectacles, it becomes obvious that most of us think abstractly about history without realizing how abstract our historical thought is. We talk in general terms about history and historical events because we lack the knowledge to speak in detail about exactly what happened.
Why should it be any kind of problem at all that we think abstractly about history? People say that the past is dead, and that it is better to let sleeping dogs lie. Why not forget about history and get on with the business of the present? All of this sounds superficially reasonable, but it is dangerously misleading.
Abstract thinking about history creates the conditions under which the events of contemporary history — that is to say, current events — are conceived abstractly despite our manifold opportunities for concrete and immediate experience of the present. This is precisely Hegel’s point in “Who Thinks Abstractly?” when he invites the reader to consider the humanity of the condemned man who is easily dismissed as a murderer, a criminal, or a miscreant. And we think in such abstract terms not only of local events, but also, if not especially, of distant events and large events that we cannot experience personally, so that massacres and famines and atrocities are mere massacres, mere famines, and mere atrocities because they are never truly real for us.
There is an important exception to all this abstraction, and it is the exception that shapes us: one always experiences the events of one’s own life with concrete immediacy, and it is the concreteness of personal experience contrasted to the abstractness of everything else not immediately experienced that is behind much (if not all) egocentrism and solipsism.
Thus while it is entirely possible to view the sorrows and reversals of others as abstractions, it is almost impossible to view one’s own sorrows and reversals in life as abstractions. As a result of the contrast between our own vividly experienced pain and the abstract idea of pain in the life of another, we have a very different idea of all that takes place in the world outside our experience as compared to the small slice of life we experience personally. A similar point was made in another context by Elaine Scarry, who in The Body in Pain: The Making and Unmaking of the World rightly observed that one’s own pain is a paradigm of certain knowledge, while the pain of another is a paradigm of doubt.
Well, this is exactly why we need to make the effort to see the big picture, because the small picture of one’s own life distorts the world so severely. But given our bias in perception, and the unavoidable point of view that our own embodied experience gives to us, is this even possible? Hegel tried to arrive at the big picture by seeing history whole. In my post The Epistemic Overview Effect I called this the “overview effect in time” (without referencing Hegel).
Another way to rise above one’s anthropic and individualist bias is the overview effect itself: seeing the planet whole. Frank White, who literally wrote the book on the overview effect, The Overview Effect: Space Exploration and Human Evolution, commented on my post in which I discussed the overview effect in time and suggested that I look up his other book, The Ice Chronicles, which also treats the overview effect in time.
I have since obtained a copy of this book, and here are some representative passages that touch on the overview effect in relation to planetary science and especially glaciology:
“In the past thirty-five years, we have grown increasingly fascinated with our home planet, the Earth. What once was ‘the world’ has been revealed to us as a small planet, a finite sphere floating in a vast, perhaps infinite, universe. This new spatial consciousness emerged with the initial trips into Low Earth Orbit…, and to the moon. After the Apollo lunar missions, humans began to understand that the Earth is an interconnected unity, where all things are related to one another, and that what happens on one part of the planet affects the whole system. We also saw that the Earth is a kind of oasis, a place hospitable to life in a cosmos that may not support living systems, as we know them, anywhere else. This is the experience that has come to be called ‘The Overview Effect’.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 15
“The view of the whole Earth serves as a natural symbol for the environmental movement. It leaves us unable to ignore the reality that we are living on a finite ‘planet,’ and not a limitless ‘world.’ That planet is, in the words of another astronaut, a lifeboat in a hostile space, and all living things are riding in it together. This realization formed the essential foundation of an emerging environmental awareness. The renewed attention on the Earth that grew out of these early space flights also contributed to an intensified interest in both weather and climate.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 20
“Making the right choices transcends the short-term perspectives produced by human political and economic considerations; the long-term habitability of our home planet is at stake. In the end, we return to the insights brought to us by our astronauts and cosmonauts as they took humanity’s first steps in the universe: We live in a small, beautiful oasis floating through a vast and mysterious cosmos. We are the stewards of this ‘good Earth,’ and it is up to us to learn how to take good care of her.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 214
It is interesting to note in this connection that glaciology yielded one of the earliest forms of scientific dating techniques, which is varve chronology, originating in Sweden in the nineteenth century. Varve chronology dates sedimentary layers by the annual layers of alternating coarse and fine sediments from glacial runoff — making it something like dendrochronology, except with annual layers of sediment instead of tree rings.
Scientific historiography can give us a taste of the overview effect, though considerable effort is required to acquire the knowledge, and it is not likely to have the visceral impact of seeing the overview effect with your own eyes. Even an idealistic philosophy like that of Hegel, as profoundly different as this is from the empiricism of scientific historiography, can give a taste of the overview effect by making the effort to see history whole and therefore to see ourselves within history, as a part of an ongoing process. Probably the scientists of classical antiquity would have been delighted by the overview effect, if only they had had the opportunity to experience it. Certainly they had an inkling of it when they proved that the Earth is spherical.
There are many paths to the overview effect; we need to widen these paths even as we blaze new trails, so that the understanding of the planet as a finite and vulnerable whole is not merely an abstract item of knowledge, but also an immediately experienced reality.
. . . . .
. . . . .
. . . . .
29 June 2013
In several posts I have referred to moral horror and the power of moral horror to shape our lives and even to shape our history and our civilization (cf., e.g., Cosmic Hubris or Cosmic Humility?, Addendum on the Avoidance of Moral Horror, and Against Natural History, Right and Left). Being horrified on a uniquely moral level is a sui generis experience that cannot be reduced to any other experience, or any other kind of experience. Thus the experience of moral horror must not be denied (which would constitute an instance of failing to do justice to our intuitions), but at the same time it cannot be uncritically accepted as definitive of the moral life of humanity.
Our moral intuitions tell us what is right and wrong, but they do not tell us what is or is not (i.e., what exists or what does not exist). This is the upshot of the is-ought distinction, which, like moral horror, must not be taken as an absolute principle, even if it is a rough and ready guide in our thinking. It is perfectly consistent, if discomfiting, to acknowledge explicitly that the moral horrors of the world are horrific while not denying that they exist. Sometimes the claim is made that the world itself is a moral horror. Joseph Campbell attributes this view to Schopenhauer, saying that according to Schopenhauer the world is something that never should have been.
Apart from the horrors of the world as a central theme of mythology, it is also to be found in science. There is a famous quote from Darwin that illustrates the acknowledgement of moral horror:
“There seems to me too much misery in the world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice.”
Letter from Charles Darwin to Asa Gray, 22 May 1860
This quote from Darwin underlines another point repeatedly made by Joseph Campbell: that different individuals and different societies draw different lessons from the same world. For some, the sufferings of the world constitute an affirmation of divinity, while for Darwin and others, the sufferings of the world constitute a denial of divinity. That being said, it is not the point I would like to make today.
Far more common than the acceptance of the world’s moral horrors as they are is the denial of moral horrors, and especially the denial that moral horrors will occur in the future. On one level, a pragmatic level, we like to believe that we have learned our lessons from the horrors of our past, and that we will not repeat them precisely because we perpetrated horrors in the past and came to realize that they were horrors.
To insist that moral horrors can’t happen because it would offend our sensibilities to acknowledge such a moral horror is a fallacy. Specifically, the moral horror fallacy is a special case of the argumentum ad baculum (argument to the cudgel or appeal to the stick), which is in turn a special case of the argumentum ad consequentiam (appeal to consequences).
Here is one way to formulate the fallacy:
Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, such-and-such will not take place.
For “such-and-such” you can substitute “transhumanism” or “nuclear war” or “human extinction” and so on. The inference is fallacious only when the shift is made from is to ought or from ought to is. If we confine our inference exclusively either to what is or to what ought to be, we do not have a fallacy. For example:
Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, we must not allow such-and-such to take place.
…is not fallacious. It is, rather, a moral imperative. If you do not want a moral horror to occur, then you must not allow it to occur. This is what Kant called a hypothetical imperative. This is a formulation entirely in terms of what ought to be. We can also formulate this in terms of what is:
Such-and-such constitutes a moral horror,
Moral horrors do not occur,
Therefore, such-and-such does not occur.
This is a valid inference, although it is unsound. That is to say, the fallacy here is not formal but material. Moral horrors do, in fact, occur, so the premise stating that moral horrors do not occur is a false premise, and the conclusion drawn from this false premise is unwarranted. (Only by denying that moral horrors do, in fact, take place could one defend this inference as sound.)
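The distinction between validity and soundness here can be stated with enough precision to be checked mechanically. The following is a minimal sketch in Lean (the predicate names Horror and Occurs are my own, chosen for illustration), showing that the inference form is formally valid; the problem lies entirely in the falsity of its second premise:

```lean
-- Events and two predicates over them (names chosen for illustration).
variable (Event : Type) (Horror Occurs : Event → Prop)

-- Premise 1: e is a moral horror.
-- Premise 2: no moral horror occurs.
-- Conclusion: e does not occur.
-- The proof term shows the form is valid; soundness fails only
-- because premise 2 is false of the actual world.
example (e : Event)
    (h1 : Horror e)
    (h2 : ∀ x, Horror x → ¬ Occurs x) :
    ¬ Occurs e :=
  h2 e h1
```

A proof assistant will accept this without complaint, which is precisely the point: logic alone cannot protect us from a false premise about what the world contains.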
Moral horrors can and do happen. They have been visited upon us numerous times. After the Holocaust everyone said “never again,” yet subsequent history has not spared us further genocides. Nor will it spare us further genocides and atrocities in the future. We cannot infer from our desire to be spared further genocides and atrocities that they will not come to pass.
More interesting than the fact that moral horrors continue to be perpetrated by the enlightened and technologically advanced human societies of the twenty-first century is the fact that the moral life of humanity evolves, and the moral horrors of the future, which we anticipate with fear and trembling, sometimes cease to be moral horrors by the time they are upon us.
Malthus famously argued that, because human population growth outstrips the production of food (Malthus was particularly concerned with human beings, but he held this to be a universal law affecting all life), humanity must end in misery or vice. By “misery” Malthus understood mass starvation — which I am sure that most of us today would agree is misery — and by “vice” Malthus meant birth control. In other words, Malthus viewed birth control as a moral horror comparable to mass starvation. This is not a view that is widely held today.
A great many unprecedented events have occurred since Malthus wrote his Essay on the Principle of Population. The industrialization of agriculture not only provided the world with plenty of food for an unprecedented increase in human population, it did so while farming was reduced to a marginal sector of the economy. And in the meantime birth control has become commonplace — we speak of it today as an aspect of “reproductive rights” — and few regard it as a moral horror. However, in the midst of this moral change and abundance, starvation continues to be a problem, and perhaps even more of a moral horror because there is plenty of food in the world today. Where people are starving, it is only a matter of distribution, and this is primarily a matter of politics.
I think that in the coming decades and centuries there will be many developments that we today regard as moral horrors, but when we experience them they will not be quite as horrific as we thought. Take, for instance, transhumanism. Francis Fukuyama wrote a short essay in Foreign Policy magazine, “Transhumanism,” in which he identified transhumanism as the world’s most dangerous idea. While Fukuyama does not commit the moral horror fallacy in any explicit way, it is clear that he sees transhumanism as a moral horror. In fact, many do. But in the fullness of time, when our minds will have changed as much as our bodies, if not more, transhumanism is not likely to appear so horrific.
On the other hand, as I noted above, we will continue to experience moral horrors of unprecedented kinds, and probably also on an unprecedented scope and scale. With the human population at seven billion and climbing, our civilization may well experience wars and diseases and famines that kill billions even while civilization itself continues despite such depredations.
We should, then, be prepared for moral horrors — for some that are truly horrific, and others that turn out to be less than horrific once they are upon us. What we should not try to do is to infer from our desires and preferences in the present what must be or what will be. And the good news in all of this is that we have the power to change future events, to make the moral horrors that occur less horrific than they might have been, and to prepare ourselves intellectually to accept change that might have, once upon a time, been considered a moral horror.
. . . . .
. . . . .
. . . . .
20 June 2013
The Classical Greek Intellectual Foundations
of Agrarian-Ecclesiastical Civilization
One of Voltaire’s most famous witticisms was that the Holy Roman Empire was neither holy, nor Roman, nor an empire. Such contradictions abound in history; as Barbara Tuchman noted, we should expect them rather than be offended by them: “Contradictions… are part of life, not merely a matter of conflicting evidence. I would ask the reader to expect contradictions, not uniformity.” (I just happened to notice today that Michael Shermer quotes this passage in a YouTube video.) In this spirit of historical contradiction it could be observed that the intellectual framework of agrarian-ecclesiastical civilization was neither agrarian nor ecclesiastical, but rather reflected the high point of Greek civilization in classical antiquity.
The intellectual space of agrarian-ecclesiastical civilization — the paradigm if you prefer Kuhnian language, or the epistēmē if you prefer the terminology of Foucault — was the result of what we might call the “world-builders” of classical antiquity, among whom I would like to call attention to three: Aristotle, Euclid, and Ptolemy.
Aristotle, Euclid, and Ptolemy were the architects of the “closed world” that Alexander Koyré famously contrasted to the infinite universe that was to emerge (slowly, gradually, and at times painfully, as Koyré would demonstrate in detail) from the scientific revolution as played out in the work of Copernicus, Kepler, Galileo, and many others (the architects of the infinite universe):
The infinite cannot be traversed, argued Aristotle; now the stars turn around, therefore… But the stars do not turn around; they stand still, therefore… It is thus not surprising that in a rather short time after Copernicus some bold minds made the step that Copernicus refused to make, and asserted that the celestial sphere, that is the sphere of the fixed stars of Copernican astronomy, does not exist, and that the starry heavens, in which the stars are placed at different distances from the earth, “extendeth itself infinitely up.”
Alexander Koyré, From the Closed World to the Infinite Universe, Baltimore, Md.: The Johns Hopkins Press, 1957, p. 35
Aristotle was the comprehensive philosopher who not only had respect for empirical observation (something Plato consistently devalued) but also formulated a system of deductive logic that made it possible for him to connect empirical observations together into a theoretical structure with great explanatory power. Aristotle, then, did not deal with isolated facts, but with theories. Each new fact, each new observation, can in this way be fit within the overall structure of a theory which in Aristotle extends from the summum genus at the top to the infima species at the bottom. There is a place for everything and everything is in its place. The much later conception of a “great chain of being” — a central idea of later agrarian-ecclesiastical civilization — has its origins in this Aristotelian construct.
Euclid and Ptolemy, while comprehensive each within his own discipline, were nowhere near as comprehensive as Aristotle; it was Aristotle’s philosophy that was the system of the world to which Euclid and Ptolemy contributed. Even though Aristotle distinguished many sciences later recognized as independent intellectual disciplines, almost none of these sciences came to be systematically developed in antiquity (except perhaps for Aristotle’s own research in biology). Mathematics and astronomy were the two exceptions: sciences systematically developed in antiquity, recognizable as such at the time, and still recognizable today as sciences.
While later thought, especially medieval thought, made much of the theory of the syllogism found in Aristotle’s Prior Analytics, Aristotle’s theory of science in the Posterior Analytics received much less attention. It was, nevertheless, the theoretical basis of Euclid’s systematic exposition of geometry on the basis of first principles. Euclid brought Aristotle’s world-building and logical rigor into mathematics, and wrote a book on geometry that was used as a textbook well into the twentieth century. We can today read ancient Greek mathematicians as contemporaries, and we can learn something from them; we can similarly read Ptolemy’s treatise on astronomy, the Almagest, as a serious work of astronomy, though we would have less to learn from it than from ancient mathematics.
Aristotle, Euclid, and Ptolemy date (roughly) from what Jaspers called the Axial Age; while peoples elsewhere in the world of maturing agrarian-ecclesiastical civilization were creating religions, the Greeks were creating philosophy of science, and this proved to be a lasting contribution. This was the axialization of Western civilization during the period of agrarian-ecclesiastical civilization.
Aristotle provided the philosophical foundations for the thought of later Western civilization up until the scientific revolution, and even after modern science began to change the world, Aristotle’s influence continued to echo in the work of later scientists. Even up into the early modern period, when we see the first signs of modern science taking shape in Galileo’s work on physics and cosmology, scientists were still writing their treatises in the Euclidean manner. Galileo’s early works on motion and mechanics are almost scholastic in tone, but are not as well remembered as his Sidereal Messenger or Dialogues Concerning the Two Chief World Systems. Even Newton’s Principia is laid out more geometrico.
The emergence of industrial-technological civilization from agrarian-ecclesiastical civilization was a process that began with the scientific revolution and continues to this day as the consequences of the industrial revolution continue to unfold, continuing to change the world in which we live. The transitional periods between macro-historical periods — which I have called macro-historical revolutions — are themselves periods of hundreds of years in duration. In fact, the first such macro-historical revolution, which inaugurated the macro-historical division of agrarian-ecclesiastical civilization, may have been a transition measurable in thousands of years.
In my immediately previous post, The Agrarian-Ecclesiastical Thesis, I suggested that, given the counter-market, counter-developmental mechanisms institutionalized in agrarian-ecclesiastical civilization, its characteristic failure is to allow a revolution to take place. The long history of agrarian-ecclesiastical civilization — which might be stretched to as much as 15,000 years, depending upon when we date the first domestication of crops and the first settled, quasi-urban villages enabled by domesticated agriculture — witnessed many revolutions, all of which failed except for the last, which issued in the catastrophic collapse of agrarian-ecclesiastical civilization and the emergence of industrial-technological civilization.
In calling contemporary civilization “industrial-technological civilization” and the civilization that preceded it “agrarian-ecclesiastical civilization,” and given that the latter so closely conforms to the distinction between economic infrastructure and ideological superstructure, I am trying to make a point about the overall structure of civilizations, even civilizations that inhabit distinct macro-historical divisions.
The source of Marx’s distinction between economic infrastructure (or economic base) and ideological superstructure is to be found in his A Contribution to The Critique of Political Economy. It is worth revisiting Marx’s formulation. The crucial passage is as follows:
In the social production which men carry on they enter into definite relations that are indispensable and independent of their will; these relations of production correspond to a definite stage of development of their material powers of production. The sum total of these relations of production constitutes the economic structure of society — the real foundation, on which rise legal and political superstructures and to which correspond definite forms of social consciousness. The mode of production in material life determines the general character of social, political, and spiritual processes of life. It is not the consciousness of men that determines their existence, but, on the contrary, their social existence determines their consciousness.
Marx, Karl, A Contribution to The Critique of Political Economy, translated from the Second German Edition by N. I. Stone, Chicago: Charles H. Kerr & Company, 1911, Author’s Preface, pp. 11-12
Marx’s formulation is a straightforward social implementation of a materialist theory of the relation of mind to body, so that we can say at least that Marx was a consistent materialist. Marx’s consistent materialism yields consistent results in the analysis of societies, an analysis that in some instances seems highly successful and offers us some insight. But not always. No schema can be quite true when stretched to fit every possible instance, and this is true of Marx’s consistent materialism. It collapses when confronted by societies in which there is no distinction between economics and ideology (each of these terms broadly construed).
It would be an interesting intellectual exercise to formulate a binomial nomenclature of civilizations characterizing each in terms of its economic infrastructure and ideological superstructure, but this is too schematic to be quite true. One point I have tried to argue several times (but for which I still lack a definitive formulation) is that distinct civilizations are not distinct implementations of one and the same idea of civilization, but rather distinct civilizations embody distinct ideas as to the nature and aims of civilization. So while “agrarian-ecclesiastical civilization” nicely fits the economic infrastructure/ideological superstructure model, “industrial-technological civilization” does not fit as nicely. While there is a sense in which technology has become an ideology, it is in no sense an ideological superstructure in the same way that institutionalized religion served as the ideological superstructure of agrarian-ecclesiastical civilization.
. . . . .
. . . . .
. . . . .
3 November 2012
How do we orient ourselves within historiography? This may sound like an odd question; I will try to make it sound like a sensible question, and a question with relevance extending far beyond the bounds of historiography narrowly construed.
One way to orient oneself within historiography is to accept and elaborate upon a familiar schema of historical periodization. There are many from which to choose. For example, if one divides Western history into ancient, medieval and modern periods, and then goes on to describe the character of medieval civilization, this constitutes a kind of orientation within historiography. Others working on the medieval period will recognize your approach based on a received conception of periodization and will critique the effort accordingly.
While I often write about problematic issues in historical periodization, I am going to consider a very different orientation within historiography today, and this might be considered to be a methodological orientation, based on how one assesses and organizes the objects of historical knowledge.
A familiar distinction within historiography is that between the synchronic and the diachronic. I have written about this distinction in Synchronic and Diachronic Approaches to Civilization and Synchronic and Diachronic Geopolitical Theories. “Synchrony” and “diachrony” sound like forbidding technical terms, but the concepts they attempt to capture are not at all difficult. Synchrony is the present construed broadly enough to admit of short-term historical interaction, while diachrony typically takes a narrower view but a longer span of time. Sometimes this is expressed by saying that synchrony is across time while diachrony is through time.
Another distinction often made is that between the nomothetic and the ideographic. Again, these are intimidating technical terms, but the ideas are simple. Nomothetic (which comes from the Greek “nomos” for “law” or “norm”) approaches are concerned with law-like transitions in time: cause and effect. For example, you intentionally touch a stove not knowing that it is hot, you burn your finger, you withdraw your hand and give a shout of pain. Ideographic approaches do not quite constitute the negation of cause and effect, but they focus on all that is merely contingent, accidental, and unpredictable in life. For example, while looking at some distraction out of the corner of your eye, you trip, and in seeking to catch your fall you touch a hot stove and burn your finger.
When we put together these two historiographical distinctions — synchronic and diachronic, nomothetic and ideographic — we get four possible permutations of historiographical methodology, as follows:
● nomothetic synchrony
Law-like interaction of all elements within a broadly-defined present
● ideographic synchrony
Contingent interactions of all elements within a broadly-defined present
● nomothetic diachrony
Law-like succession of related events through historical time (especially “deep time”)
● ideographic diachrony
Contingent succession of related events through historical time
This schematic representation of historiographical methodologies is in no wise intended to be exhaustive; I’m sure that if I continued to think about this, all kinds of conditions, qualifications, and additions would occur to me. For example, one obvious way to give this much more subtlety and sophistication would be to define each of the above methodological orientations for each division of what I have called ecological temporality, i.e., define each method for each level of time, from the micro-temporality of lived experience to the meta-temporality of the unfolding of ideas in history. I’m not going to attempt to do this at present; I just wanted to give a sense of the simplified schematism I am employing here, which I hope has some relevance despite its simplicity.
All of this sounds very abstract, but if just the right intuitive illustrations of each concept can be found, the concepts will gain in concreteness and depth, and their usefulness will be immediately understood. I can’t claim that I have yet assembled the perfect intuitive illustrations for all four of these methodologies, but I will give you what I have at present, and as I continue to think about this I will (hopefully) add some telling examples.
Nomothetic synchrony, as a method of highlighting the law-like interaction of all elements within a broadly-defined present, is perhaps the most difficult to intuitively illustrate. What “the present” includes is ambiguous, but I have said that the present is “broadly-defined,” so you will understand that the present is not here the punctiform present but something more like “current events.” Current events are continually feeding back on themselves by being repeated in the media and iterated throughout numerous cultural channels. Not all of this feedback, and not all of these iterations, are law-like, but some are. For example, procedural rationality — laws, rules, and regulations intended to bring order and system to the ordinary business of life — constitutes a highly complex set of law-like interactions in the present. In natural history, in contradistinction to human history, ecology is, in a sense, an instance of nomothetic synchrony, as is that genre of writing/study once called “nature studies,” which focuses on life cycles and predictable patterns within a defined and limited ecosystem, habitat, or niche. Anything, then, that we can describe in ecological terms can also be described in terms of nomothetic synchrony, and since I have taken the trouble to define metaphysical ecology, this category is potentially highly comprehensive. For example, if we call sociology the ecology of society, or we call cosmology galactic ecology, these disciplines could both be treated in terms of nomothetic synchrony.
Ideographic synchrony, as constituted by all contingent interactions within a broadly-defined present, might be summed up in the way William James famously summarized sensory perception for an infant: “The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing, confusion.” Ideographic synchrony is a blooming, buzzing confusion. Anarchic processes like financial markets and warfare might be good illustrations of ideographic synchrony. Of course, markets are supposed to behave according to procedural rationality, and wars are supposed to be fought according to a strategy — but we have all heard of the “fog of war” and of battlefield “friction” (both concepts due to Clausewitz), as we have all heard that no plan survives contact with the enemy. Similarly, no trading strategy survives exposure to the market.
Nomothetic diachrony, the law-like succession of related events through historical time, is the paradigmatic form of historical thought, but more often than not an elusive ideal. Many “laws of history” have been proposed, but none have been widely accepted. The only law of history that has survived is not from history, but from biology: natural selection. Evolution, while often apparently random and pervasively contingent, is a perfect illustration of law-like transitions through deep time. The “big history” movement is also a paradigm case of nomothetic diachrony, with the central theoretical narrative being that of increasing complexity.
Ideographic diachrony, the contingent succession of related events through historical time, can be illustrated in several imaginative ways. The biography of an individual primarily consists of a tight focus on a contingent sequence of events (events in the life of one individual) through a period of time not limited to the broadly-defined present. Many writers like to dwell on the role of the merely contingent and even the spectacularly accidental in history, as with Pascal’s several remarks about how if Cleopatra’s nose had had another shape, history would be different — a particular theme that has been since taken up by others (as in Daniel J. Boorstin’s book, Cleopatra’s Nose: Essays on the Unexpected). There is also the famous rhyme about how “for want of a nail a kingdom fell” which also focuses on the disproportionate historical influence of accidental contingencies. The “butterfly effect” is another illustration.
These four concepts — nomothetic synchrony, ideographic synchrony, nomothetic diachrony, and ideographic diachrony — provide a kind of methodological orientation in historiography. But it is more than merely methodological, since particular methods imply particular metaphysical orientations as well. Someone who holds the cataclysmic conception of history — based upon a denial of human agency — is likely to pursue an ideographic methodology rather than a nomothetic methodology. However, the four conceptions of history that I have defined don’t neatly map onto the four methodologies defined above, so I can’t just connect these two quadripartite schemas straight across, showing that each conception of history has an associated methodology.
It’s more complicated than that. It usually is with history.
. . . . .
. . . . .
. . . . .
22 October 2012
Waiting at the End of History
for the Coming of the Zero Hour
What does French literary criticism have to do with geopolitics, geostrategy, and far future scenarios of human civilization? Everything, as it turns out.
Roland Barthes wrote a book titled Writing Degree Zero; one could say that it is a work of literary criticism, but as with much sophisticated scholarship it is more than this. French literary criticism is not a scholarly undertaking for the faint at heart.
Barthes compares what he calls “writing degree zero” to the writing of a journalist; we can similarly compare history degree zero with the history found in journalism. In journalism, nothing ever happens, and at the same time something is always happening. It is the contemporary incarnation of the cyclical conception of history, in which nothing in essentials changes even while accidental change is the pervasive order of the day. (In Italy this is called “gattopardismo.”) This is history reduced to white noise.
Here is Barthes’ own formulation of writing degree zero:
“Proportionately speaking, writing at the degree zero is basically in the indicative mood, or if you like, amodal; it would be accurate to say that it is a journalist’s writing, if it were not precisely the case that journalism develops, in general, optative or imperative (that is, emotive) forms. The new neutral writing takes place in the midst of all those ejaculations and judgments, without becoming involved in any of them; it consists precisely in their absence. But this absence is complete, it implies no refuge, no secret; one cannot therefore say that it is an impassive mode of writing; rather, that it is innocent.”
Roland Barthes, Writing Degree Zero, translated by Annette Lavers and Colin Smith, New York: Hill and Wang, 1977 (originally published 1953), pp. 76-77
It has been said that Barthes’ book is parochial, and certainly his central concern is French literature, and the situation (or, if you prefer, the dilemma) of the French writer. Barthes was a man of his place and time, and the book sets itself questions that scarcely resonate in early twenty-first century America: How can writing be revolutionary? We’ve come a long way since 1968.
Barthes was clearly vexed that a lot of writing by professed communists was anything but revolutionary. It was, in fact — horror of horrors — bourgeois, and little better than shilling shockers, penny dreadfuls, and yellow journalism. Barthes, then, was asking how it was possible for someone with truly revolutionary ideas to write in a revolutionary manner.
One must recall that at this time there were two kinds of writers in France: communists who supported Stalin and made excuses for him, and communists who did not support Stalin and made no excuses for him. (If you have the chance, I urge you to see the wonderful film Red Kiss, which is a bit difficult to find, but worth the effort for its illustration of the period.) The most famous literary-intellectual-philosophical dispute of the time — that between Sartre and Camus — perfectly exemplified this. Camus, not one to make excuses for anyone, said he would be neither a victim nor an executioner. Sartre, after resisting the blandishments of communism for many years, eventually became the most unimaginative of communists, defended Stalin and Mao, and had his lackeys take Camus to task in print.
Barthes explicitly cites the style of Camus as embodying the qualities of writing of the zero degree, though I think that Barthes was so personally involved in the idea of literature that his identification of Camus as writing degree zero was not in any sense intended as a political slander — or, for that matter, as a literary slander. (I hope that more informed readers will correct me if I am wrong.)
Journalism, then, is historiography degree zero, and in so far as journalists produce (as they like to say) the first draft of history, and in so far as this first draft is subsequently iterated in later drafts of history, historiography more closely approximates the zero degree. (If you prefer reading sitreps to journalism — they’re pretty much the same thing — you can reformulate the preceding sentence.) And then again, in so far as mass journalism is consumed by a mass audience, and that mass audience goes on to create contemporary history, in a mass spectacle of life imitating art, history itself, and not merely the recounting of history in historiography, approaches the zero degree. The new neutral history — uninvolved, disengaged, absent — is the perfect characterization of the mass politics of mass man.
There are elections, there are debates, there is television news 24/7 and radio talk shows 24/7, there are still a few newspapers and magazines sacrificing dead trees, and there is of course the blogosphere resonating with the voices of the millions (like myself) who have no access to the media megaphone and who prefer the web to a soapbox. All of this feeds into the appearance that there is always something going on. But we know that almost nothing changes for all the sound and fury. It doesn’t really matter who wins the election, since the rich will still be rich and the poor will still be poor.
Have we already, then, reached history degree zero? Are we living at the end of history? Is this what the end of days looks like? Not quite. Not quite yet.
One of the most famous and familiar motifs of Marx’s thought is that history is driven by ideological conflict. It is a very Victorian, very Darwinian, very nineteenth century idea. History understood as an ideological conflict has characterized the modern period of Western history, even if it was not always obvious what people were fighting for. Sometimes it was obvious what men were fighting for, and this was especially true in the wake of revolutions: those who died to defend the American Revolution or the French Revolution or the Russian Revolution knew, to some extent at least, what they were fighting for.
For Marx, the locomotive of history was the class struggle, and it was the nature of class struggle to erupt into revolutionary action. Revolutions, as I noted above, had the property of clarifying what it’s all about. You’re on one side of the barricades or the other. Marx was right to focus on revolutions, but wrong to focus on the class struggle.
We can arrive at a more satisfactory understanding of modern history if we take social class out of Marx’s class struggle and make the class a variable for which we can substitute any political entity whatsoever. Thus we arrive at a formal conception of political struggle: a social class can struggle against a nation-state; a nation-state can struggle against a royal family; a royal family can struggle against a city-state, and so on, and so forth.
The convergence of the international system on the model of the nation-state system has given us the appearance that nation-states struggle with nation-states, and as life has imitated art — in this case, the art of political thought — we have steadily been reduced to the monoculture of a single kind of political entity — nation-states — engaged in a single kind of struggle. Francis Fukuyama called this political system “liberal democracy” and this condition “the end of history.” I guess one name is as good as any other name; I would call it political homogenization.
In many posts I have discussed Francis Fukuyama’s “end of history” thesis (a thesis, I might add, heavily indebted to French scholarship, and especially to Alexandre Kojève’s reading of Hegel — note that Kojève was an acquaintance of Leo Strauss, and his work was translated by Allan Bloom, the noted literary critic and cranky academic who wrote The Closing of the American Mind). I have pointed out that, despite the many dismissive critiques of Fukuyama’s “end of history” thesis and claims of a “return of history,” Fukuyama himself still holds a modified version of the thesis: that contemporary liberal democratic society is the sole remaining viable form of political society (cf. Gödel’s Lesson for Geopolitics, in which I noted that Fukuyama is still thinking through his thesis twenty years on, as befits a philosopher).
As it turns out, there is a political level below that of the “end of history” and this is the absence of history — history degree zero.
A single remaining political ideology signifies History Degree One, and in the theater of political ideologies, liberal democracy is, for Fukuyama, the last man standing — but if this last man standing is a straw man, and we knock over this straw man, what then? If it can be shown that liberal democracy is a failure also, along with communism and fascism, nationalism and socialism, internationalism and fundamentalism, what comes next?
What then? Zero hour. History degree zero.
Even the end of history waits for further developments, and the future of the end of history is Zero Hour.
. . . . .
. . . . .
. . . . .
20 October 2012
Three Little Words: “Where are they?”
In The Visibility Presumption I examined some issues in relation to the response to the Fermi paradox by those who claim that a technological singularity would likely overtake any technologically advanced civilization. I don’t see how the technological singularity visited upon an alien species makes them any less visible (in the sense of “visible” relevant to SETI) nor any less likely to be interested in exploration, adventure, or the quest for scientific knowledge — and finding us would constitute a major scientific discovery for some xenobiological species that had matured into a peer industrial-technological civilization.
The more I think about the Fermi paradox — and I have been thinking a lot about it lately — and the more I contextualize the Fermi paradox in my own emerging theory of civilization — which is a theory I am attempting to formulate in the purest tradition of Russellian generality so that it is equally applicable to human civilization and to any non-human civilization — the more I have come to think that our civilization is relatively isolated in the cosmos, being perhaps one of the few civilizations, or the only civilization, in the Milky Way, and one among only a handful of civilizations in the local cluster of galaxies or our supercluster.
Having an opinion on the Fermi paradox, and even making an attempt to argue for a particular position, does not however relieve one of the intellectual responsibility of exploring all aspects of the paradox. I have also come to think, while reflecting on the Fermi paradox, that the paradox itself has been fruitful in pushing those who care to think about it toward better formulations of the nature and consequences of industrial-technological civilization and of interstellar civilization — whether that of a supposed xenocivilization, or that of ourselves now and in the future.
The human experience of economic and technological growth in the wake of the industrial revolution has made us aware that if there are other peer species in the universe, and if these peer species undergo a process of the development of civilization anything like our own, then these peer species may also have experienced or will experience the escalating exponential growth of economic organization and technological complexity that we have experienced. Looking at our own civilization, again, it seems that the natural telos of continued economic and technological development — for we see no natural or obvious impediment to such continued development — is for human civilization to extend itself beyond the confines of the Earth and to establish itself throughout the solar system and eventually throughout the galaxy and beyond. This natural teleology has been called “The Expansion Hypothesis” by John M. Smart. Smart credits the expansion hypothesis to Kardashev, and while it is implicit in Kardashev, Kardashev himself does not formulate the idea explicitly and does not use the term “expansion hypothesis.”
The natural teleology of civilization
I have taken the term “natural teleology” from contemporary philosophical expositions of Aristotle’s distinction between final causes and efficient causes. We can get something of a flavor of Aristotle’s idea of natural teleology (without going too deep into the controversy over final causes) from this paragraph from the second book of Aristotle’s Physics:
We also speak of a thing’s nature as being exhibited in the process of growth by which its nature is attained. The ‘nature’ in this sense is not like ‘doctoring’, which leads not to the art of doctoring but to health. Doctoring must start from the art, not lead to it. But it is not in this way that nature (in the one sense) is related to nature (in the other). What grows qua growing grows from something into something. Into what then does it grow? Not into that from which it arose but into that to which it tends. The shape then is nature.
Aristotle is a systematic philosopher, in whose work any one doctrine is related to many other doctrines, so that an excerpt really doesn’t do him justice; if the reader cares to, he or she can look into this more deeply by reading Aristotle and his commentators. But I must say this much in elaboration: the idea of natural teleology is problematic because it suggests a teleological conception of the whole of nature and all of its parts, and ever since Darwin we have understood that many claims to natural teleology are simply the expression of anthropic bias.
Still, kittens grow into cats and puppies grow into dogs (if they live to maturity), and it is pointless to deny this. What is important here is to tightly circumscribe the idea of natural teleology so that we don’t throw out the baby with the bathwater. The difficulty comes in distinguishing the baby from the bathwater in which the baby is immersed. Unless we want to end up with the idea of a natural teleology for human beings and the lives they live — this was the “human nature” that Sartre emphatically denied — we must deny final causes to agents, or find some other principle of distinction.
Are civilizations a natural kind for which we can posit a natural teleology, i.e., a form or a nature toward which they naturally tend as they grow and develop? My answer to this is ambiguous, but it is a principled ambiguity: yes and no. Yes, because some aspects of civilization are clearly developmental, as when an institution is growing toward its fulfillment; no, because other aspects of civilization are clearly non-developmental. And civilization is so complex a whole that there is no simple way to separate the developmental from the non-developmental aspects of any one given civilization.
When we examine high points of civilization like Athens under Pericles or Florence during the Renaissance, we can recognize after the fact the slow build-up to these cultural heights, which cannot clearly be distinguished from economic, civil, urban, and military development. The natural teleology of a civilization is the attainment of excellence in its particular mode of being, just as Aristotle said that the great-souled man aims at excellence in his life, but the path to that excellence is as varied as the different lives of individuals and the different histories of civilizations.
Now, I don’t regard this brief exposition of the natural teleology of civilization as anything like a definitive formulation, but a definitive formulation of something so complex and subtle would require years of work. Rather, I will save this for another time, counting on the reader’s charity (if not indulgence) to grant me the idea that at least in some respects civilizations tend toward fulfilling an apparent telos implicit in their developmental histories.
The Preemption Hypothesis
What I am going to suggest here as another response to the Fermi paradox will sound to some like just another version of the technological singularity response, but I want to try to show that what I am suggesting is a more general conception than that — a potential structural failure of civilization, as it were — and that, because it is a more comprehensive concept, the technological singularity response to the Fermi paradox can be subsumed under it as a particular instance of civilizational preemption.
The more general conception of a response to the silentium universi I call the preemption hypothesis. According to the preemption hypothesis, the ordinary course of development of industrial-technological civilization — which, if extrapolated, would seem to point to a nearly inevitable expansion of that civilization beyond its home planet and eventually across interstellar space as its natural teleology — is preempted by the emergence of a completely different kind of civilization, a radically different kind of civilization, or by post-civilization, so that the expected natural teleology of the preempted civilization is interrupted and never comes to fruition.
Thus “the lights go out” for a given alien civilization not because that civilization destroys itself (the Doomsday argument, Solution no. 27 in Webb’s book), and not because it collapses into permanent stagnation or even catastrophic civilizational failure (existential risks outlined by Nick Bostrom), and not because it completes a natural cycle of growth, maturity, decay, and death, but rather because it moves on to the next stage of social institution that lies beyond civilization. In simplest terms, the preemption hypothesis is that industrial-technological civilization, for which the expansion hypothesis holds, is preempted by post-civilization, for which the expansion hypothesis no longer holds. Post-civilization is a social institution derived from civilization but no longer recognizably civilization.
The idea of a technological singularity is one kind of potential preemption of industrial-technological civilization, but certainly not the only possible kind of preemption. There are many possible forms of civilizational preemption, and any attempted list of possible preemptions is limited only by our imagination and our parochial conception of civilization, the latter being informed exclusively by human civilization. It is entirely possible, as another example of preemption, that once a civilization attains a certain degree of technological development, everyone recognizes the pointlessness of the whole endeavor, all the machines are shut down, and the entire population turns to philosophical contemplation as the only worthy undertaking in life.
Acceleration and Preemption
I have previously argued that civilizations come to maturity in an Axial Age. The Axial Age is a conception due to Karl Jaspers, but I have suggested a generalization that holds for any society that achieves a sufficient degree of development and maturity. What Jaspers postulated for agricultural civilizations, and understood to be a turning point for the world entire, I believe holds for most civilizations, and that each stage in the overall development of civilization may have such a turning point.
Also, the history of human civilization reveals an acceleration. Nomadic hunter-gatherer society required hundreds of thousands of years before it matured into a condition capable of producing the great cave paintings of the upper Paleolithic (which I call the Axialization of the Nomadic Paradigm). The agricultural civilizations that superseded Paleolithic societies with the Neolithic Agricultural Revolution required thousands of years to mature to the point of producing what Jaspers called an Axial Age (The Axial Age for Jaspers).
Industrial civilization has not yet produced an industrialized axialization (though we may look back someday and understand one to have been achieved in retrospect), but the early modern civilization that seemed to be producing a decisively different way of life than the medieval period that preceded it experienced a catastrophic preemption: it did not come to fulfillment on its own terms. In Modernism without Industrialism I argued that modern civilization was effectively overtaken by the sudden and catastrophic emergence of industrialization, which set civilization on an entirely new course.
At each stage of the development of human society the maturation of that society, measured by the ability of that society to give a coherent account of itself in a comprehensive cosmological context (also known as mythology), has come sooner than the last, with the abortive civilization of modernism, Enlightenment, and the scientific revolution derailed and suddenly superseded by a novel and unprecedented development from within civilization. Modernism was preempted by accelerating events, and, specifically, by accelerating technology. It is possible that there are other forms of accelerating development that could derail or preempt the course of development that at present appears to be the natural teleology of industrial-technological civilization.
The Dystopian Hypothesis
Because the most obvious forms of the preemption hypothesis, in terms of the prospects for civilization most widely discussed today, would include the technological singularity, transhumanism, and The Transcension Hypothesis, and also because of the human ability (probably reinforced by the survival value of optimism) to look on the bright side of things, we may lose sight of equally obvious sub-optimal forms of preemption. Sub-optimal forms of civilizational preemption, in which civilization does not pass on to developments of greater complexity and more technically difficult achievement, could be separately identified as the dystopian hypothesis.
In Miserable and Unhappy Civilizations I suggested that the distinction Freud made between neurotic misery and ordinary human unhappiness can be extended to encompass a distinction between a civilization in the grip of neurotic misery as distinct from a civilization experiencing ordinary civilizational unhappiness. I cited the example of the religious wars of early modern Europe as an example of civilization experiencing neurotic misery. It is possible that neurotic misery at the civilizational level could be perpetuated across time and space so that neurotic misery became the enduring condition of civilization. (This might be considered an instance of what Nick Bostrom called “flawed realization” in his analysis of existential risk.)
It would likely be the case that a neurotically miserable civilization — which we might also call a dystopian civilization — would be incapable of anything beyond perpetuating its miserable existence from one day to the next. The dystopian hypothesis could be assimilated to solution no. 23 in Webb’s book, “They have no desire to communicate,” but there may be many reasons that a civilization lacks a desire to communicate over interstellar distances with other civilizations, so I think that the dystopian lack of motivation deserves its own category as a response to the Fermi paradox.
Whether or not chronic and severe dystopianism could be considered a post-civilization institution and therefore a preemption of industrial-technological civilization is open to question. I will think about this.
. . . . .
. . . . .
. . . . .
22 August 2012
The idea of the individual has been central to Western Civilization; we can discern its earliest manifestations in ancient Greece, when potters signed their work and bragged that they were better than other potters; we can see its further development in the Italy of the renaissance, when men of virtú like Machiavelli and Lorenzo the Magnificent forcefully asserted themselves as rightful masters of their time; we can see the new forms that it has taken after the Industrial Revolution, where the office towers of New York, like the medieval towers of San Gimignano, assert the ascendancy and priority of the individual.
Whether you love it or hate it, you have to acknowledge that the US is where individualism has reached its most unconditional realization. Some people glory in American individualism, and some despise it. If a member of the commentariat or the punditocracy wants to put a positive spin on individualism, they will call it “rugged individualism,” whereas if they want to put a negative spin on individualism, they will call it “rampant individualism.” There are plenty of examples of both of these attitudes, and I invite the reader to stay alert for these linguistic clues in future reading.
When earlier today I posted a longish piece on Tumblr about Appearance and Reality in Demographics, I continued to think about the recent poll results that I mentioned there, WIN-Gallup International ‘Religiosity and Atheism Index’ reveals atheists are a small minority in the early years of 21st century, as well as an earlier poll from the Pew Forum, U. S. Religious Landscape Survey, that I mentioned some years ago (in 2008) in More on Republican Disarray. In particular, I thought about how wrong prognosticators, forecasters, and social commentators have been about the development of religion in the US. There is an obvious reason for this. The US is not only a disproportionately religious nation-state (as revealed in numerous polls), it is also, as I noted above, a disproportionately individualistic nation-state, and the confluence of these ideological trends, the religious and the individualistic, means that US culture is marked by religious individualism and individual religion.
I touched on this peculiar character of religion in America — i.e., religious individualism — in my post American Civilization, in which I cited the song Highwayman, jointly performed by Johnny Cash, Willie Nelson, Kris Kristofferson, and Waylon Jennings (and written by Jimmy Webb). This is an obvious pop culture example of what I am getting at, but the careful reader of classic American fiction will also discover a religious individualism that frequently issues in pluralism, diversity, and the frankly eclectic. To put it bluntly, people believe whatever they want to believe.
The attempt to pigeonhole American religious belief and practice always founders on the rock of religious individualism, which cannot be reliably classified in ideological terms. It is not consistently left or right, radical or traditional, liberal or conservative, activist or quietist — or, rather, it is all of these things at different times for different individuals.
Individual religion takes the form of individual choice, and different individuals choose differently for themselves, and choose differently at different times in their life. This was one of the interesting results of the Pew Forum poll I mentioned above, which found a high level of religious observance in the US (everyone expected that), but when prying deeper found that, “More than one-quarter of American adults (28%) have left the faith in which they were raised in favor of another religion.”
While this may not sound too shocking prima facie, it would be difficult to overemphasize how historically unusual this is. One of the conflicts that marked the shift from the medieval world to the modern world in European history was that between the personal principle in law and the territorial principle in law (which latter emerges with the advent of the nation-state). Given the personal principle in law, an individual is judged according to his community. If you were a Christian on pilgrimage to the Holy Land and were accused of a crime in a Muslim country, you would be dealt with according to Christian law, not Muslim law. That is how it was supposed to work, and sometimes it did work that way, and for the decentralized societies of medieval Europe the personal principle in law fit the loosely coupled structures of a nearly non-existent state.
The personal principle in law persists today in the institution of diplomatic immunity, but apart from diplomats, those accused of a crime will be tried according to the law of the geographically defined nation-state where the crime occurred, and this legal process will have little or nothing to do with the ethnicity or traditional community of the accused individual. Again, that’s the way it’s supposed to work, though it is not difficult to cite violations of this principle.
The personal principle in law is all about ethnicity and tradition and individual identity being defined by a traditional community, which in turn defines the individual in terms of his or her role in that community. The idea that an individual might change their religion was like suggesting that an individual could put on or take off an identity like a suit of clothes. This would have been utterly incomprehensible to our ancestors; for the US it is now a fait accompli, and the basis for the organization of our society. Just as serial monogamy has come to characterize American courtship and marriage patterns, so too serial faith choices, adopted sequentially throughout the life of the individual as that individual experiences personal crises that precipitate temporary religious identification, characterize American religious patterns.
Indeed, one of the perennial themes of American life is that of personal re-invention (i.e., the putting on and taking off of identity). In the US, failure is not final. If things aren’t working out for you in Boston, you can move to Philadelphia, as Benjamin Franklin did. In a social context of personal re-invention and geographical fungibility, what counts is not one’s abject subordination to the community into which one happens to be born, but one’s cleverness and persistence in finding a place where one can feel at home. Part of this personal quest is also finding a faith in which one can feel at home, and this is not necessarily the faith of one’s parents or of one’s community.
In the context of religious individualism, orthodoxy counts for nothing. Or it counts for everything, but only because each man has his own orthodoxy, and there is no social mechanism in place in industrial-technological civilization to force the acquiescence of any individual to any other individual’s orthodoxy.
Even those who celebrate orthodoxy and who would welcome mechanisms of social control to force acquiescence to orthodoxy, cannot escape, at least while in America, the necessity of defining their own orthodoxy on their own terms. They are, in Rousseau’s terms, forced to be free, which in this context means they are forced to be religious individualists.
. . . . .
. . . . .
. . . . .
22 July 2012
A couple of days ago in describing my pilgrimage to Kinn I suggested that the phenomenon of pilgrimage is a Wittgensteinian “form of life,” and as a form of life we may understand it better if we confine ourselves to the material infrastructure while setting aside the formal superstructure that surrounds the form of life we call pilgrimage. But in a fine-grained account of pilgrimage we must distinguish between those forms of pilgrimage that, when taking the long view of the big picture, become conflated.
As I attempted to show, in different ways, in Epistemic Orders of Magnitude and P or not-P, both la longue durée and the fine-grained view have their place in our epistemic development — respectively, and roughly, they represent the non-constructive and the constructive perspectives on experience — and we ought to be equally diligent in exploring the consequences of each perspective, since we have something important to learn from each.
I tried to suggest a similarly comprehensive synthesis yesterday in A Meditation upon the Petroglyphs of Ausevik, when remarking that an extrapolation of a personal philosophy of history, when drawn out to a sufficient extent, coincides with the history of the world entire. In other words, non-constructivism represents the furthest reach of constructivist thought, which immediately suggests the contrary perspective, i.e., that constructivism represents the furthest reach of non-constructive thought. Constructivism is non-constructivism in extremis; non-constructivism is constructivism in extremis. To translate this once again into historico-personal terms, the history of the world entire coincides with an intimately personal philosophy of history when the former is extrapolated to the greatest extent of its possible scope.
In a fine-grained account of pilgrimage (in contradistinction to pilgrimage understood in outline, in the context of la longue durée), at the level of personal experience that is constructive because every detail is of necessity immediately exhibited in intuition and nothing whatsoever is demonstrated, we can distinguish many forms of pilgrimage. There are religious pilgrimages, such as the Sunnivaleia, there are personal pilgrimages, such as my pilgrimage to Kinn, there are aesthetic pilgrimages, as when custom dictated that young gentlemen of good families and fortune take the “Grand Tour” of Europe, there are political pilgrimages, as when a candidate for office visits a politically significant place — and there are even philosophical pilgrimages. I have previously made some minor philosophical pilgrimages, as when I sought out Kierkegaard’s grave in Copenhagen and similarly visited Schopenhauer’s grave in Frankfurt. Today I made another philosophical pilgrimage, by visiting the small town of Skjolden, where Wittgenstein spent time working on the ideas that would later become the Tractatus Logico-Philosophicus.
In the letters that Wittgenstein subsequently exchanged with his acquaintances in Skjolden (which have, of course, been published along with the rest of his correspondence), the people of Skjolden almost always close their letters by observing that Skjolden is as it always was and ever will be, essentially unchanged in the passage of time. I wrote about this previously in The Charms of Small Town Norway. It seems to be true that life changes very slowly, almost imperceptibly, in the fjord country of Norway, as life always changes slowly in isolated, mountainous regions the world over. The peoples who retreat from the onrushing advance of civilization to the margins of the world where they will not be bothered, are not the kind of peoples who wish to indulge in change for the sake of change. It is this latter attitude that typifies industrial-technological civilization, which is still largely confined to the regions of the world fully given over to agricultural civilization. The margins of the world before industrialization largely coincide with the margins of the world after industrialization.
Wittgenstein, I think, left little impact upon Skjolden. He didn’t make waves, as it were, and didn’t want to make waves. Life in Skjolden is probably little changed in essentials from when Wittgenstein isolated himself in a small, bare hut at the end of a fjord in order to think and write about logic. I think that Wittgenstein would have liked this — or, at least, that he would have preferred this near absence of influence. The fjords are unchanged since Wittgenstein lived here, even if life has been modernized, and they still provide a refuge for those who would seek a world largely untouched by what Wittgenstein in his later years would call, “the main current of European and American civilization,” from which he felt profoundly alienated.
. . . . .
. . . . .
. . . . .