3 November 2012
How do we orient ourselves within historiography? This may sound like an odd question; I will try to make it sound like a sensible question, and a question with relevance extending far beyond the bounds of historiography narrowly construed.
One way to orient oneself within historiography is to accept and elaborate upon a familiar schema of historical periodization. There are many from which to choose. For example, if one divides Western history into ancient, medieval and modern periods, and then goes on to describe the character of medieval civilization, this constitutes a kind of orientation within historiography. Others working on the medieval period will recognize your approach based on a received conception of periodization and will critique the effort accordingly.
While I often write about problematic issues in historical periodization, I am going to consider a very different orientation within historiography today, and this might be considered to be a methodological orientation, based on how one assesses and organizes the objects of historical knowledge.
A familiar distinction within historiography is that between the synchronic and the diachronic. I have written about this distinction in Synchronic and Diachronic Approaches to Civilization and Synchronic and Diachronic Geopolitical Theories. “Synchrony” and “diachrony” sound like forbidding technical terms, but the concepts they attempt to capture are not at all difficult. Synchrony is the present construed broadly enough to admit of short term historical interaction, while diachrony typically takes a narrower view but a longer span of time. Sometimes this is expressed by saying that synchrony is across time while diachrony is through time.
Another distinction often made is that between the nomothetic and the ideographic. Again, these are intimidating technical terms, but the ideas are simple. Nomothetic (which comes from the Greek “nomos” for “law” or “norm”) approaches are concerned with law-like transitions in time: cause and effect. For example, you intentionally touch a stove not knowing that it is hot, you burn your finger, you withdraw your hand and give a shout of pain. Ideographic approaches do not quite constitute the negation of cause and effect, but they focus on all that is merely contingent, accidental, and unpredictable in life. For example, while looking at some distraction out of the corner of your eye, you trip, and in seeking to catch your fall you touch a hot stove and burn your finger.
When we put together these two historiographical distinctions — synchronic and diachronic, nomothetic and ideographic — we get four possible permutations of historiographical methodology, as follows:
● nomothetic synchrony
Law-like interaction of all elements within a broadly-defined present
● ideographic synchrony
Contingent interactions of all elements within a broadly-defined present
● nomothetic diachrony
Law-like succession of related events through historical time (especially “deep time”)
● ideographic diachrony
Contingent succession of related events through historical time
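The four orientations above are nothing more than the Cartesian product of the two distinctions. Purely as an illustration of the combinatorics (and not as anything the original schema depends on), the list can be generated mechanically; this is a trivial sketch in Python, with the labels exactly as given above:

```python
from itertools import product

# The two historiographical distinctions from the text.
modes = ["nomothetic", "ideographic"]  # law-like vs. contingent
spans = ["synchrony", "diachrony"]     # across time vs. through time

# Crossing the two distinctions yields the four methodological
# orientations, in the same order as the bulleted list above.
methodologies = [f"{mode} {span}" for span, mode in product(spans, modes)]
print(methodologies)
```

Nothing hangs on the code itself; the point is only that the schema is exhaustive with respect to these two distinctions, and that adding a third binary distinction would double the count to eight.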
This schematic representation of historiographical methodologies is in no wise intended to be exhaustive; I’m sure if I continued to think about this, all kinds of conditions, qualifications, and additions would occur to me. For example, one obvious way to give this much more subtlety and sophistication would be to define each of the above methodological orientations for each division of what I have called ecological temporality, i.e., define each method for each level of time, from the micro-temporality of lived experience to the meta-temporality of the unfolding of ideas in history. I’m not going to attempt to do this at present; I just wanted to give a sense of the simplified schematism I am employing here, which I hope has some relevance despite its simplicity.
All of this sounds very abstract, but if just the right intuitive illustrations of each concept can be found, the concepts will gain in concreteness and depth, and their usefulness will be immediately understood. I can’t claim that I have yet assembled the perfect intuitive illustrations for all four of these methodologies, but I will give you what I have at present, and as I continue to think about this I will (hopefully) add some telling examples.
Nomothetic synchrony, as a method of highlighting the law-like interaction of all elements within a broadly-defined present, is perhaps the most difficult to illustrate intuitively. What “the present” includes is ambiguous, but I have said that the present is “broadly-defined,” so you will understand that the present is not here the punctiform present but something more like “current events.” Current events are continually feeding back on themselves by being repeated in the media and iterated throughout numerous cultural channels. Not all of this feedback, and not all of these iterations, are law-like, but some are. For example, procedural rationality — laws, rules, and regulations intended to bring order and system to the ordinary business of life — constitutes a highly complex set of law-like interactions in the present. In natural history, in contradistinction to human history, ecology is, in a sense, an instance of nomothetic synchrony, as is that genre of writing and study once called “nature studies,” which focuses on life cycles and predictable patterns within a defined and limited ecosystem, habitat, or niche. Anything, then, that we can describe in ecological terms can also be described in terms of nomothetic synchrony, and since I have taken the trouble to define metaphysical ecology, this category is potentially highly comprehensive. For example, if we call sociology the ecology of society, or cosmology galactic ecology, these disciplines could both be treated in terms of nomothetic synchrony.
Ideographic synchrony as constituted by all contingent interactions within a broadly-defined present might be summed up as William James famously summarized sensory perception for an infant: “The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing, confusion.” Ideographic synchrony is a blooming, buzzing confusion. Anarchic processes like financial markets and warfare might be good illustrations of ideographic synchrony. Of course, markets are supposed to behave according to procedural rationality, and wars are supposed to be fought according to a strategy — but we have all heard of the “fog of war” and of battlefield “friction” (both concepts due to Clausewitz), as we have all heard that no plan survives contact with the enemy. Similarly, no trading strategy survives exposure to the market.
Nomothetic diachrony, the law-like succession of related events through historical time, is the paradigmatic form of historical thought, but more often than not an elusive ideal. Many “laws of history” have been proposed, but none have been widely accepted. The only law of history that has survived is not from history, but from biology: natural selection. Evolution, while often apparently random and pervasively contingent, is a perfect illustration of law-like transitions through deep time. The “big history” movement is also a paradigm case of nomothetic diachrony, with the central theoretical narrative being that of increasing complexity.
Ideographic diachrony, the contingent succession of related events through historical time, can be illustrated in several imaginative ways. The biography of an individual primarily consists of a tight focus on a contingent sequence of events (events in the life of one individual) through a period of time not limited to the broadly-defined present. Many writers like to dwell on the role of the merely contingent and even the spectacularly accidental in history, as with Pascal’s several remarks about how, if Cleopatra’s nose had had another shape, history would be different — a theme that has since been taken up by others (as in Daniel J. Boorstin’s book, Cleopatra’s Nose: Essays on the Unexpected). There is also the famous rhyme about how “for want of a nail a kingdom fell,” which likewise focuses on the disproportionate historical influence of accidental contingencies. The “butterfly effect” is another illustration.
These four concepts — nomothetic synchrony, ideographic synchrony, nomothetic diachrony, and ideographic diachrony — provide a kind of methodological orientation in historiography. But it is more than merely methodological, since particular methods imply particular metaphysical orientations as well. Someone who holds the cataclysmic conception of history — based upon a denial of human agency — is likely to pursue an ideographic methodology rather than a nomothetic methodology. However, the four conceptions of history that I have defined don’t neatly map onto the four methodologies defined above, so I can’t just connect these two quadripartite schemas straight across, showing that each conception of history has an associated methodology.
It’s more complicated than that. It usually is with history.
. . . . .
. . . . .
. . . . .
22 October 2012
Waiting at the End of History
for the Coming of the Zero Hour
What does French literary criticism have to do with geopolitics, geostrategy, and far future scenarios of human civilization? Everything, as it turns out.
Roland Barthes wrote a book titled Writing Degree Zero; one could say that it is a work of literary criticism, but as with much sophisticated scholarship it is more than this. French literary criticism is not a scholarly undertaking for the faint of heart.
Barthes compares what he calls “writing degree zero” to the writing of a journalist; we can similarly compare history degree zero with the history found in journalism. In journalism, nothing ever happens, and at the same time something is always happening. It is the contemporary incarnation of the cyclical conception of history, in which nothing in essentials changes even while accidental change is the pervasive order of the day. (In Italy this is called “gattopardismo.”) This is history reduced to white noise.
Here is Barthes’ own formulation of writing degree zero:
“Proportionately speaking, writing at the degree zero is basically in the indicative mood, or if you like, amodal; it would be accurate to say that it is a journalist’s writing, if it were not precisely the case that journalism develops, in general, optative or imperative (that is, emotive) forms. The new neutral writing takes place in the midst of all those ejaculations and judgments, without becoming involved in any of them; it consists precisely in their absence. But this absence is complete, it implies no refuge, no secret; one cannot therefore say that it is an impassive mode of writing; rather, that it is innocent.”
Roland Barthes, Writing Degree Zero, translated by Annette Lavers and Colin Smith, New York: Hill and Wang, 1977 (originally published 1953), pp. 76-77
It has been said that Barthes’ book is parochial, and certainly his central concern is French literature, and the situation (or, if you prefer, the dilemma) of the French writer. Barthes was a man of his place and time, and the book sets itself questions that scarcely resonate in early twenty-first century America: How can writing be revolutionary? We’ve come a long way since 1968.
Barthes was clearly vexed that a lot of writing by professed communists was anything but revolutionary. It was, in fact — horror of horrors — bourgeois, and little better than shilling shockers, penny dreadfuls, and yellow journalism. Barthes, then, was asking how it was possible for someone with truly revolutionary ideas to write in a revolutionary manner.
One must recall that at this time there were two kinds of writers in France: communists who supported Stalin and made excuses for him, and communists who did not support Stalin and made no excuses for him. (If you have the chance, I urge you to see the wonderful film Red Kiss, which is a bit difficult to find, but worth the effort for its illustration of the period.) The most famous literary-intellectual-philosophical dispute of the time — that between Sartre and Camus — perfectly exemplified this. Camus, not one to make excuses for anyone, said he would be neither a victim nor an executioner. Sartre, after resisting the blandishments of communism for many years, eventually became the most unimaginative of communists, defended Stalin and Mao, and had his lackeys take Camus to task in print.
Barthes explicitly cites the style of Camus as embodying the qualities of writing of the zero degree, though I think that Barthes was so personally involved in the idea of literature that his identification of Camus as writing degree zero was not in any sense intended as a political slander — or, for that matter, as a literary slander. (I hope that more informed readers will correct me if I am wrong.)
Journalism, then, is historiography degree zero, and in so far as journalists produce (as they like to say) the first draft of history, and in so far as this first draft is subsequently iterated in later drafts of history, historiography more closely approximates the zero degree. (If you prefer reading sitreps to journalism — they’re pretty much the same thing — you can reformulate the preceding sentence.) And then again, in so far as mass journalism is consumed by a mass audience, and that mass audience goes on to create contemporary history, in a mass spectacle of life imitating art, history itself, and not merely the recounting of history in historiography, approaches the zero degree. The new neutral history — uninvolved, disengaged, absent — is the perfect characterization of the mass politics of mass man.
There are elections, there are debates, there is television news 24/7 and radio talk shows 24/7, there are still a few newspapers and magazines sacrificing dead trees, and there is of course the blogosphere resonating with the voices of the millions (like myself) who have no access to the media megaphone and who prefer the web to a soapbox. All of this feeds into the appearance that there is always something going on. But we know that almost nothing changes for all the sound and fury. It doesn’t really matter who wins the election, since the rich will still be rich and the poor will still be poor.
Have we already, then, reached history degree zero? Are we living at the end of history? Is this what the end of days looks like? Not quite. Not quite yet.
One of the most famous and familiar motifs of Marx’s thought is that history is driven by ideological conflict. It is a very Victorian, very Darwinian, very nineteenth century idea. History understood as an ideological conflict has characterized the modern period of Western history, even if it was not always obvious what people were fighting for. Sometimes it was obvious what men were fighting for, and this was especially true in the wake of revolutions: those who died to defend the American Revolution or the French Revolution or the Russian Revolution knew, to some extent at least, what they were fighting for.
For Marx, the locomotive of history was the class struggle, and it was the nature of class struggle to erupt into revolutionary action. Revolutions, as I noted above, had the property of clarifying what it’s all about. You’re on one side of the barricades or the other. Marx was right to focus on revolutions, but wrong to focus on the class struggle.
We can arrive at a more satisfactory understanding of modern history if we take social class out of Marx’s class struggle and make the class a variable for which we can substitute any political entity whatsoever. Thus we arrive at a formal conception of political struggle: a social class can struggle against a nation-state; a nation-state can struggle against a royal family; a royal family can struggle against a city-state, and so on, and so forth.
The convergence of the international system on the model of the nation-state system has given us the appearance that nation-states struggle with nation-states, and as life has imitated art — in this case, the art of political thought — we have steadily been reduced to the monoculture of a single kind of political entity — nation-states — engaged in a single kind of struggle. Francis Fukuyama called this political system “liberal democracy” and this condition “the end of history.” I guess one name is as good as any other name; I would call it political homogenization.
In many posts I have discussed Francis Fukuyama’s “end of history” thesis (a thesis, I might add, heavily indebted to French scholarship, and especially to Alexandre Kojève’s reading of Hegel — note that Kojève was an acquaintance of Leo Strauss and his work was edited in English translation by Allan Bloom, noted literary critic and cranky academic who wrote The Closing of the American Mind). I have pointed out that, despite the many dismissive critiques of Fukuyama’s “end of history” thesis, and claims of a “return of history,” Fukuyama himself still holds a modified version of the thesis: that contemporary liberal democratic society is the sole remaining viable form of political society (cf. Gödel’s Lesson for Geopolitics, in which I noted that Fukuyama is still thinking through his thesis twenty years on, as befits a philosopher).
As it turns out, there is a political level below that of the “end of history” and this is the absence of history — history degree zero.
A single remaining political ideology signifies History Degree One, and in the theater of political ideologies, liberal democracy is, for Fukuyama, the last man standing — but if this last man standing is a straw man, and we knock over this straw man, what then? If it can be shown that liberal democracy is a failure also, along with communism and fascism, nationalism and socialism, internationalism and fundamentalism, what comes next?
What then? Zero hour. History degree zero.
Even the end of history waits for further developments, and the future of the end of history is Zero Hour.
. . . . .
. . . . .
. . . . .
20 October 2012
Three Little Words: “Where are they?”
In The Visibility Presumption I examined some issues in relation to the response to the Fermi paradox by those who claim that a technological singularity would likely overtake any technologically advanced civilization. I don’t see how the technological singularity visited upon an alien species makes them any less visible (in the sense of “visible” relevant to SETI) nor any less likely to be interested in exploration, adventure, or the quest for scientific knowledge — and finding us would constitute a major scientific discovery for some xenobiological species that had matured into a peer industrial-technological civilization.
The more I think about the Fermi paradox — and I have been thinking a lot about it lately — and the more I contextualize the Fermi paradox in my own emerging theory of civilization — which is a theory I am attempting to formulate in the purest tradition of Russellian generality so that it is equally applicable to human civilization and to any non-human civilization — the more I have come to think that our civilization is relatively isolated in the cosmos, being perhaps one of the few civilizations, or the only civilization, in the Milky Way, and one among only a handful of civilizations in the local cluster of galaxies or our supercluster.
Having an opinion on the Fermi paradox, and even making an attempt to argue for a particular position, does not however relieve one of the intellectual responsibility of exploring all aspects of the paradox. I have also come to think, while reflecting on the Fermi paradox, that the paradox itself has been fruitful in pushing those who care to think about it toward better formulations of the nature and consequences of industrial-technological civilization and of interstellar civilization — whether that of a supposed xenocivilization, or that of ourselves now and in the future.
The human experience of economic and technological growth in the wake of the industrial revolution has made us aware that if there are other peer species in the universe, and if these peer species undergo a process of the development of civilization anything like our own, then these peer species may also have experienced or will experience the escalating exponential growth of economic organization and technological complexity that we have experienced. Looking at our own civilization, again, it seems that the natural telos of continued economic and technological development — for we see no natural or obvious impediment to such continued development — is for human civilization to extend itself beyond the confines of the Earth and to establish itself throughout the solar system and eventually throughout the galaxy and beyond. This natural teleology has been called “The Expansion Hypothesis” by John M. Smart. Smart credits the expansion hypothesis to Kardashev, and while it is implicit in Kardashev, Kardashev himself does not formulate the idea explicitly and does not use the term “expansion hypothesis.”
The natural teleology of civilization
I have taken the term “natural teleology” from contemporary philosophical expositions of Aristotle’s distinction between final causes and efficient causes. We can get something of a flavor of Aristotle’s idea of natural teleology (without going too deep into the controversy over final causes) from this paragraph from the second book of Aristotle’s Physics:
We also speak of a thing’s nature as being exhibited in the process of growth by which its nature is attained. The ‘nature’ in this sense is not like ‘doctoring’, which leads not to the art of doctoring but to health. Doctoring must start from the art, not lead to it. But it is not in this way that nature (in the one sense) is related to nature (in the other). What grows qua growing grows from something into something. Into what then does it grow? Not into that from which it arose but into that to which it tends. The shape then is nature.
Aristotle is a systematic philosopher, in which any one doctrine is related to many other doctrines, so that an excerpt really doesn’t do him justice; if the reader cares to, he or she can look into this more deeply by reading Aristotle and his commentators. But I must say this much in elaboration: the idea of natural teleology is problematic because it suggests a teleological conception of the whole of nature and all of its parts, and ever since Darwin we have understood that many claims to natural teleology are simply the expression of anthropic bias.
Still, kittens grow into cats and puppies grow into dogs (if they live to maturity), and it is pointless to deny this. What is important here is to tightly circumscribe the idea of natural teleology so that we don’t throw out the baby with the bathwater. The difficulty comes in distinguishing the baby from the bathwater in which the baby is immersed. Unless we want to end up with the idea of a natural teleology for human beings and the lives they live — this was the “human nature” that Sartre emphatically denied — we must deny final causes to agents, or find some other principle of distinction.
Are civilizations a natural kind for which we can posit a natural teleology, i.e., a form or a nature toward which they naturally tend as they grow and develop? My answer to this is ambiguous, but it is a principled ambiguity: yes and no. Yes, because some aspects of civilization are clearly developmental, as when an institution is growing toward its fulfillment; no, because other aspects of civilization are clearly non-developmental. But civilization is so complex a whole that there is no simple way to separate the developmental and the non-developmental aspects of any one given civilization.
When we examine high points of civilization like Athens under Pericles or Florence during the Renaissance, we can recognize after the fact the slow build up to these cultural heights, which cannot clearly be distinguished from economic, civil, urban, and military development. The natural teleology of a civilization is the attainment of excellence in its particular mode of being, just as Aristotle said that the great-souled man aims at excellence in his life, but the path to that excellence is as varied as the different lives of individuals and the different histories of civilizations.
Now, I don’t regard this brief exposition of the natural teleology of civilization as anything like a definitive formulation, but a definitive formulation of something so complex and subtle would require years of work. Rather, I will save this for another time, counting on the reader’s charity (if not indulgence) to grant me the idea that at least in some respects civilizations tend toward fulfilling an apparent telos implicit in their developmental histories.
The Preemption Hypothesis
What I am going to suggest here as another response to the Fermi paradox will sound to some like just another version of the technological singularity response, but I want to try to show that what I am suggesting is a more general conception than that — a potential structural failure of civilization, as it were — and as a more comprehensive concept the technological singularity response to the Fermi paradox can be subsumed under it as a particular instance of civilizational preemption.
The more general conception of a response to the silentium universi I call the preemption hypothesis. According to the preemption hypothesis, the ordinary course of development of industrial-technological civilization — which, if extrapolated, would seem to point to a nearly inevitable expansion of that civilization beyond its home planet and eventually across interstellar space as its natural teleology — is preempted by the emergence of a completely different kind of civilization, a radically different kind of civilization, or by post-civilization, so that the expected natural teleology of the preempted civilization is interrupted and never comes to fruition.
Thus “the lights go out” for a given alien civilization not because that civilization destroys itself (the Doomsday argument, Solution no. 27 in Webb’s book) and not because it collapses into permanent stagnation or even catastrophic civilizational failure (existential risks outlined by Nick Bostrom), and not because it completes a natural cycle of growth, maturity, decay, and death, but rather because it moves on to the next stage of social institution that lies beyond civilization. In simplest terms, the preemption hypothesis is that industrial-technological civilization, for which the expansion hypothesis holds, is preempted by post-civilization, for which the expansion hypothesis no longer holds. Post-civilization is a social institution derived from civilization but no longer recognizably civilization.
The idea of a technological singularity is one kind of potential preemption of industrial-technological civilization, but certainly not the only possible kind of preemption. There are many possible forms of civilizational preemption, and any attempted list of possible preemptions is limited only by our imagination and our parochial conception of civilization, the latter being informed exclusively by human civilization. It is entirely possible, as another example of preemption, that once a civilization attains a certain degree of technological development, everyone recognizes the pointlessness of the whole endeavor, all the machines are shut down, and the entire population turns to philosophical contemplation as the only worthy undertaking in life.
Acceleration and Preemption
I have previously argued that civilizations come to maturity in an Axial Age. The Axial Age is a conception due to Karl Jaspers, but I have suggested a generalization that holds for any society that achieves a sufficient degree of development and maturity. What Jaspers postulated for agricultural civilizations, and understood to be a turning point for the world entire, I believe holds for most civilizations, and that each stage in the overall development of civilization may have such a turning point.
Also, the history of human civilization reveals an acceleration. Nomadic hunter-gatherer society required hundreds of thousands of years before it matured into a condition capable of producing the great cave paintings of the upper Paleolithic (which I call the Axialization of the Nomadic Paradigm). The agricultural civilizations that superseded Paleolithic societies with the Neolithic Agricultural Revolution required thousands of years to mature to the point of producing what Jaspers called an Axial Age (The Axial Age for Jaspers).
Industrial civilization has not yet produced an industrialized axialization (though we may look back someday and understand one to have been achieved in retrospect), but the early modern civilization that seemed to be producing a decisively different way of life than the medieval period that preceded it experienced a catastrophic preemption: it did not come to fulfillment on its own terms. In Modernism without Industrialism I argued that modern civilization was effectively overtaken by the sudden and catastrophic emergence of industrialization, which set civilization on an entirely new course.
At each stage of the development of human society, the maturation of that society, measured by its ability to give a coherent account of itself in a comprehensive cosmological context (also known as mythology), has come sooner than at the stage before; the abortive civilization of modernism, the Enlightenment, and the scientific revolution was derailed and suddenly superseded by a novel and unprecedented development from within civilization. Modernism was preempted by accelerating events, and, specifically, by accelerating technology. It is possible that there are other forms of accelerating development that could derail or preempt the course of development that at present appears to be the natural teleology of industrial-technological civilization.
The Dystopian Hypothesis
Because the most obvious forms of the preemption hypothesis, in terms of the prospects for civilization most widely discussed today, would include the technological singularity, transhumanism, and The Transcension Hypothesis, and also because of the human ability (probably reinforced by the survival value of optimism) to look on the bright side of things, we may lose sight of equally obvious sub-optimal forms of preemption. Sub-optimal forms of civilizational preemption, in which civilization does not pass on to developments of greater complexity and more technically difficult achievement, could be separately identified as the dystopian hypothesis.
In Miserable and Unhappy Civilizations I suggested that the distinction Freud made between neurotic misery and ordinary human unhappiness can be extended to encompass a distinction between a civilization in the grip of neurotic misery as distinct from a civilization experiencing ordinary civilizational unhappiness. I cited the example of the religious wars of early modern Europe as an example of civilization experiencing neurotic misery. It is possible that neurotic misery at the civilizational level could be perpetuated across time and space so that neurotic misery became the enduring condition of civilization. (This might be considered an instance of what Nick Bostrom called “flawed realization” in his analysis of existential risk.)
It would likely be the case that a neurotically miserable civilization — which we might also call a dystopian civilization — would be incapable of anything beyond perpetuating its miserable existence from one day to the next. The dystopian hypothesis could be assimilated to solution no. 23 in Webb’s book, “They have no desire to communicate,” but there may be many reasons that a civilization lacks a desire to communicate over interstellar distances with other civilizations, so I think that the dystopian lack of motivation deserves its own category as a response to the Fermi paradox.
Whether or not chronic and severe dystopianism could be considered a post-civilization institution and therefore a preemption of industrial-technological civilization is open to question. I will think about this.
. . . . .
. . . . .
. . . . .
22 August 2012
The idea of the individual has been central to Western Civilization; we can discern its earliest manifestations in ancient Greece, when potters signed their work and bragged that they were better than other potters; we can see its further development in the Italy of the Renaissance, when men of virtù like Machiavelli and Lorenzo the Magnificent forcefully asserted themselves as rightful masters of their time; we can see the new forms that it has taken after the Industrial Revolution, where the office towers of New York, like the medieval towers of San Gimignano, assert the ascendancy and priority of the individual.
Whether you love it or hate it, you have to acknowledge that the US is where individualism has reached its most unconditional realization. Some people glory in American individualism, and some despise it. If a member of the commentariat or the punditocracy wants to put a positive spin on individualism, they will call it “rugged individualism,” whereas if they want to put a negative spin on individualism, they will call it “rampant individualism.” There are plenty of examples of both of these attitudes, and I invite the reader to stay alert for these linguistic clues in future reading.
When earlier today I posted a longish piece on Tumblr about Appearance and Reality in Demographics, I continued to think about the recent poll results that I mentioned there, WIN-Gallup International ‘Religiosity and Atheism Index’ reveals atheists are a small minority in the early years of 21st century, as well as an earlier poll from the Pew Forum, U. S. Religious Landscape Survey, that I mentioned some years ago (in 2008) in More on Republican Disarray. In particular, I thought about how wrong prognosticators, forecasters, and social commentators have been about the development of religion in the US. There is an obvious reason for this. The US is not only a disproportionately religious nation-state (as revealed in numerous polls), it is also, as I noted above, a disproportionately individualistic nation-state, and the confluence of these ideological trends, the religious and the individualistic, means that US culture is marked by religious individualism and individual religion.
I touched on this peculiar character of religion in America — i.e., religious individualism — in my post American Civilization, in which I cited the song Highwayman, jointly performed by Johnny Cash, Willie Nelson, Kris Kristofferson, and Waylon Jennings (and written by Jimmy Webb). This is an obvious pop culture example of what I am getting at, but the careful reader of classic American fiction will also discover a religious individualism that frequently issues in pluralism, diversity, and the frankly eclectic. To put it bluntly, people believe whatever they want to believe.
The attempt to pigeonhole American religious belief and practice always founders on the rock of religious individualism, which cannot be reliably classified in ideological terms. It is not consistently left or right, radical or traditional, liberal or conservative, activist or quietist — or, rather, it is all of these things at different times for different individuals.
Individual religion takes the form of individual choice, and different individuals choose differently for themselves, and choose differently at different times in their lives. This was one of the interesting results of the Pew Forum poll I mentioned above, which found a high level of religious observance in the US (everyone expected that), but when prying deeper found that, “More than one-quarter of American adults (28%) have left the faith in which they were raised in favor of another religion.”
While this may not sound too shocking prima facie, it would be difficult to overemphasize how historically unusual this is. One of the conflicts that marked the shift from the medieval world to the modern world in European history was that between the personal principle in law and the territorial principle in law (which latter emerges with the advent of the nation-state). Given the personal principle in law, an individual is judged according to his community. If you were a Christian on pilgrimage to the Holy Land and were accused of a crime in a Muslim country, you would be dealt with according to Christian law, not Muslim law. That is how it was supposed to work, and sometimes it did work that way, and for the decentralized societies of medieval Europe the personal principle in law fit the loosely coupled structures of a nearly non-existent state.
The personal principle in law persists today in the institution of diplomatic immunity, but apart from diplomats, those accused of a crime will be tried according to the law of the geographically defined nation-state where the crime occurred, and this legal process will have little or nothing to do with the ethnicity or traditional community of the accused individual. Again, that’s the way it’s supposed to work, though it is not difficult to cite violations of this principle.
The personal principle in law is all about ethnicity and tradition and individual identity being defined by a traditional community, which in turn defined the individual in terms of his or her role in that community. The idea that an individual might change their religion was like suggesting that an individual could put on or take off an identity like a suit of clothes. This would have been utterly incomprehensible to our ancestors; for the US it is now a fait accompli, and the basis for the organization of our society. Just as serial monogamy has come to characterize American courtship and marriage patterns, so too serial faith choices, adopted sequentially throughout the life of the individual as that individual experiences personal crises that precipitate temporary religious identification, characterize American religious patterns.
Indeed, one of the perennial themes of American life is that of personal re-invention (i.e., the putting on and taking off of identity). In the US, failure is not final. If things aren’t working out for you in Boston, you can move to Philadelphia, as Benjamin Franklin did. In a social context of personal re-invention and geographical fungibility, what counts is not one’s abject subordination to the community into which one happens to be born, but one’s cleverness and persistence in finding a place where one can feel at home. Part of this personal quest is also finding a faith in which one can feel at home, and this is not necessarily the faith of one’s parents or of one’s community.
In the context of religious individualism, orthodoxy counts for nothing. Or it counts for everything, but only because each man has his own orthodoxy, and there is no social mechanism in place in industrial-technological civilization to force the acquiescence of any individual to any other individual’s orthodoxy.
Even those who celebrate orthodoxy and who would welcome mechanisms of social control to force acquiescence to orthodoxy, cannot escape, at least while in America, the necessity of defining their own orthodoxy on their own terms. They are, in Rousseau’s terms, forced to be free, which in this context means they are forced to be religious individualists.
. . . . .
. . . . .
. . . . .
22 July 2012
A couple of days ago in describing my pilgrimage to Kinn I suggested that the phenomenon of pilgrimage is a Wittgensteinian “form of life,” and as a form of life we may understand it better if we confine ourselves to the material infrastructure while setting aside the formal superstructure that surrounds the form of life we call pilgrimage. But in a fine-grained account of pilgrimage we must distinguish between those forms of pilgrimage that, when taking the long view of the big picture, become conflated.
As I attempted to show, in different ways, in Epistemic Orders of Magnitude and P or not-P, both la longue durée and the fine-grained view have their place in our epistemic development — respectively, and roughly, they represent the non-constructive and the constructive perspectives on experience — and we ought to be equally diligent in exploring the consequences of each perspective, since we have something important to learn from each.
I tried to suggest a similarly comprehensive synthesis yesterday in A Meditation upon the Petroglyphs of Ausevik, when remarking that an extrapolation of a personal philosophy of history, when drawn out to a sufficient extent, coincides with the history of the world entire. In other words, non-constructivism represents the furthest reach of constructivist thought, which immediately suggests the contrary perspective, i.e., that constructivism represents the furthest reach of non-constructive thought. Constructivism is non-constructivism in extremis; non-constructivism is constructivism in extremis. To translate this once again into historico-personal terms, the history of the world entire coincides with an intimately personal philosophy of history when the former is extrapolated to the greatest extent of its possible scope.
In a fine-grained account of pilgrimage (in contradistinction to pilgrimage understood in outline, in the context of la longue durée), at the level of personal experience that is constructive because every detail is of necessity immediately exhibited in intuition and nothing whatsoever is demonstrated, we can distinguish many forms of pilgrimage. There are religious pilgrimages, such as the Sunnivaleia, there are personal pilgrimages, such as my pilgrimage to Kinn, there are aesthetic pilgrimages, such as when custom dictated that the young gentlemen of good families and fortune would take the “Grand Tour” of Europe, there are political pilgrimages, as when a candidate for office visits a politically significant place — and there are even philosophical pilgrimages. I have previously made some minor philosophical pilgrimages, as when I sought out Kierkegaard’s grave in Copenhagen and similarly visited Schopenhauer’s grave in Frankfurt. Today I made another philosophical pilgrimage, by visiting the small town of Skjolden, where Wittgenstein spent time working on the ideas that would later become the Tractatus Logico-Philosophicus.
In the letters that Wittgenstein subsequently exchanged with his acquaintances in Skjolden (which have, of course, been published along with the rest of his correspondence), the people of Skjolden almost always close their letters by observing that Skjolden is as it always was and ever will be, essentially unchanged in the passage of time. I wrote about this previously in The Charms of Small Town Norway. It seems to be true that life changes very slowly, almost imperceptibly, in the fjord country of Norway, as life always changes slowly in isolated, mountainous regions the world over. The peoples who retreat from the onrushing advance of civilization to the margins of the world where they will not be bothered are not the kind of peoples who wish to indulge in change for the sake of change. It is this latter attitude that typifies industrial-technological civilization, which is still largely confined to the regions of the world fully given over to agricultural civilization. The margins of the world before industrialization largely coincide with the margins of the world after industrialization.
Wittgenstein, I think, left little impact upon Skjolden. He didn’t make waves, as it were, and didn’t want to make waves. Life in Skjolden is probably little changed in essentials from when Wittgenstein isolated himself in a small, bare hut at the end of a fjord in order to think and write about logic. I think that Wittgenstein would have liked this — or, at least, that he would have preferred this near absence of influence. The fjords are unchanged since Wittgenstein lived here, even if life has been modernized, and they still provide a refuge for those who would seek a world largely untouched by what Wittgenstein in his later years would call, “the main current of European and American civilization,” from which he felt profoundly alienated.
. . . . .
. . . . .
. . . . .
16 July 2012
Last Valentine’s Day Chinese Vice President Xi Jinping made some remarks at the US State Department that were widely reported at the time as “defending China’s human rights record.” There is a transcript of the Vice President’s remarks on the website of the US Embassy in Beijing, and from those remarks I will quote a couple of the crucial paragraphs in which Vice President Xi Jinping explicitly discussed human rights:
“…China has made tremendous and well-recognized achievements in the field of human rights over the past 30 plus years since reform and opening up. Of course, there is always room for improvement when it comes to human rights. Given China’s huge population, considerable regional diversity, and uneven development, we’re still faced with many challenges in improving people’s livelihood and advancing human rights.”
“The Chinese Government will always put people’s interests first and take seriously people’s aspirations and demands. We will, in the light of China’s national conditions, continue to take concrete and effective policies and measures to promote social fairness, justice and harmony, and push forward China’s course of human rights.”
Chinese leaders usually avoid explicit remarks on human rights, but there are a few times when I have read accounts of remarks made in Western countries by visiting Chinese officials who do their best to make a strong case for human rights with Chinese characteristics. After all, when Chinese officials come to Western nation-states they cannot avoid the protesters who would not be able to protest in China. But the official Chinese government line on human rights, though not often explicitly formulated, when it is articulated is unapologetic on those issues that most provoke international outcry.
I wrote above that the Chinese “do their best to make a strong case” for the Chinese conception of human rights, and I realize that this could sound condescending or patronizing, but it is not intended as such. I think it would be fair to say that the Chinese have a very different conception of human rights than that which informs the thought and policy of Western peoples, and that many of the disagreements over human rights issues are individuals talking at cross purposes because they do not understand each other.
Moreover, and no less importantly, what I am here calling the Chinese conception of human rights is in no sense confined to China, and can often be found given forceful and eloquent expression by Western thinkers. …
What, then, is the Chinese conception of human rights? And if there is any such thing as a Chinese conception of human rights, how does it differ from Western conceptions of human rights? Chinese Vice President Xi Jinping formulated the Chinese conception of human rights in terms of “improving people’s livelihood” and promoting “social fairness, justice and harmony.” This is a good summary. When I began to write this post I looked for a different speech with even more forceful formulations, but I wasn’t able to find exactly what I was looking for, since my memory preserved too little of that example to find it again.
In other speeches by PRC officials I have come across explicit contrasts between the Chinese effort to improve standards of living across the board for 1.3 billion people — which is, admittedly, a daunting task — with Western ideas of individual liberties and ensuring the rights of minorities. This is the crux of the issue: the individual vs. the social whole. The Chinese tendency is to prioritize the social whole over the individual; the Western tradition has been to ensure the inviolability of the individual, although this is a tradition that has been honored more in the breach than the observance.
Since the individual is a minority of one, the tradition of safeguarding minority rights can be folded into individual rights, though I know that many would disagree with me here. And I am sure that some will see what I have called the Chinese conception of human rights not as an alternative to the Western conception of human rights, but as a smokescreen behind which to hide their actual contempt for human rights. This is where Western formulations of the Chinese conception of human rights become important — important, at least, for Westerners to understand a point of view different than their own, because Western thinkers will argue for a non-individualistic conception of human rights according to Western norms of political and moral thought.
I recently found a good example of this in Robert Kaplan, who has lately been contributing to Strategic Forecasting. In a piece titled Defining Humanitarianism, Kaplan wrote:
“The very amoral and abstract reasoning behind the preservation of the balance of power in maritime Asia, through the deployment of warships and fighter jets, actually is as humanitarian as intervening in Bosnia or Libya was.”
“Nixon’s diplomacy gave China implicit security guarantees regarding the Soviet Union, Japan and Taiwan. Thus, when Deng Xiaoping came to power a few short years later, he had the option — because China was now externally secure for the first time in more than a century — to concentrate on internal capitalist-style development. China’s economic growth would dramatically lift the living standards and expand the personal freedoms of more than a billion people throughout East Asia. That’s humanitarianism!”
“…realism in the service of the American national interest is the most humanitarian approach possible.”
“…the issue is not idealism versus realism, for realism can sometimes save lives more than idealism.”
Kaplan isn’t explicitly stating the contrast between two conceptions of humanitarianism, but the distinction informs his essay throughout, and he argues strongly that foreign policy “realism” is more humanitarian because it saves a greater number of lives and improves standards of living to a greater degree for a greater number of people. This is a straightforwardly utilitarian conception of humanitarianism: the greatest good for the greatest number — and this utilitarian conception of humanitarianism corresponds to a utilitarian conception of human rights: human rights under this conception are best defended by way of utilitarian humanitarianism.
So the Chinese conception of human rights is simply a utilitarian conception of human rights, and it can be contrasted to any number of non-utilitarian theories, such as virtue ethics, deontology, or the Kantian kingdom of ends.
As I stated above, there are any number of Western defenders of utilitarian conceptions of humanitarianism and human rights. There are passionate defenders of communitarianism who essentially privilege the community over the individual, and while I don’t think many conscientious communitarians would want to explicitly defend China’s human rights record, on the level of principle they are advocating essentially the same thing as the leaders of China say when they claim to have improved the lives of more than a billion people.
. . . . .
. . . . .
. . . . .
4 June 2012
In several previous posts I have discussed how novel technologies will often display a sigmoid growth curve, starting with a gradual development, suddenly experiencing an exponential increase in complexity, sophistication, and efficacy, followed by a long plateau of little or no development after that technology has achieved maturity. The posts in which I described this development include:
In Blindsided by History I wrote:
“Present technologies will stall, and they will eventually be superseded by unpredicted and unpredictable technologies that will emerge to surpass them. Those who remain fixated on existing technologies will be blindsided by the new technologies, and indeed may simply fail to recognize new technologies for what they are when they do in fact appear.”
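The sigmoid growth pattern described above can be sketched with a logistic function: gains are small during gradual early development, largest during the exponential middle phase, and vanishingly small as the technology plateaus at maturity. All the numbers below are illustrative assumptions, not data; this is only a toy model of the shape of the curve.

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic (sigmoid) curve: slow start, rapid middle, long plateau."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Year-over-year gains at three stages of a hypothetical technology
# whose inflection point falls at t = 5:
early  = logistic(2, midpoint=5) - logistic(1, midpoint=5)   # gradual development
middle = logistic(6, midpoint=5) - logistic(5, midpoint=5)   # exponential phase
late   = logistic(10, midpoint=5) - logistic(9, midpoint=5)  # mature, stalled
```

The ordering of these increments (largest in the middle, smallest at the end) is the whole point: a mature technology is one whose successive improvements have shrunk toward zero as it approaches its ceiling.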
The phenomenon of one technology superseding another results in Technological Succession. In my post on technological succession I wrote the following:
The overtaking of a stalled technology that remains at a given plateau by another technology that fulfills a similar need (although by way of a distinct method) is an extension of a society with stable institutions that was able to bring to fruition a mature technology. With a mature technology in place, and stable economic and social institutions built upon this technology, there emerges an incentive to continue or to expand these institutions to a greater extent, at a cheaper cost, more efficiently, more effectively, and with less effort. This attempt to do previous technology one better is, in turn, a spur to social changes that will call forth further innovations. It could be argued that the Industrial Revolution emerged from just such an escalation of social and technology coevolution.
Technological succession, then, develops in parallel with the social succession of institutions capable of fostering further technological development by different means once a given technology stalls. In this post I made a distinction between mature technologies (another name for stalled technologies), which are technologies that have passed through their exponential growth phase and have plateaued at a stable level, and perennial technologies, which are technologies that do not experience exponential growth curves in their development — things like knives that have always been a part of the human “toolkit” and always will be. This distinction between mature and perennial technologies I then developed according to a biological analogy:
By analogy with microevolution (evolution within a species) and macroevolution (evolution from one species into another) in biology, we can see the microevolution and macroevolution of technologies. Perennial technologies exhibit microevolution. No new technological “species” emerge from the incremental changes in perennial technologies. Technological macroevolution is the succession of a stalled technology by a new, immature technology, which latter still possesses the possibility of development. Mature technologies experience adaptive radiation under coevolutionary pressures, and this macroevolution can result in new technological species.
The coevolutionary pressures are those social institutions that make demands upon a technology to continue its development in the face of advancing social developments, which latter might include expanding populations, higher standards of living, raised expectations and soaring ambitions.
Even if another technology does not come along to further extend the social functions served by the mature and now stalled technology, the incentive to continue to go one better with technology remains, and this incentive drives the attempt to try to squeeze more performance out of mature technologies that would, if surpassed in the process of technological succession, remain stalled at a stable plateau of development. The result of pushing for more performance from a stalled technology is what I will call decadent technology (though I could just as well call this baroque technology).
The obvious examples that come to mind of decadent technologies are either of a humorous or theatrical character (or both). Steampunk and tubepunk are obvious examples of the intentional elaboration of a decadent technology for aesthetic and theatrical effect. As genres of art and literature, steampunk and tubepunk aren’t seeking to supply the wants of mass society (except for aesthetic wants, which respond to a different class of coevolutionary pressures).
Another example of decadent technology is that of race car engines. If you want to go really fast, it would make more sense to strap a jet engine onto a set of wheels (which would look like a steampunk contraption), but racing mostly means specialized internal combustion engines — engines pushed about as far as the technology of the internal combustion engine can be pushed. It is obvious, from the thousands of photographs in car magazines, that the builders of racing engines take an aesthetic pleasure in their creations. However, these engines are not merely aesthetic exercises like steampunk, because by pushing the technology of the internal combustion engine to its limits, much more horsepower can be obtained. Thus a decadent technology can be effective, though it quickly reaches a point of diminishing returns, at which further investment yields progressively less of a return. That is why these engines are not models of efficiency that the mass producers of automobiles look to for technological developments (though this is often used as an excuse for car manufacturers to sponsor drag racing) but rather they are expressions of mechanical ambition. As I wrote above, if you want to go really fast, you can build a jet; the challenge is to build an internal combustion engine with the power of a jet, and this is a challenge that both builders of racing engines and race spectators enjoy.
Most examples of decadent technology are not as theatrical and not as much fun as steampunk and race cars, but the principles are essentially the same. Microchip technology, following the social coevolutionary pressure of fulfilling the prophecy of Moore’s Law, is close to becoming a decadent technology. If some other technology for computing fundamentally different from silicon wafer technology does not emerge soon (like quantum computing, which still seems to be some way off), the producers of microchips will come under considerable economic pressure to drive silicon technology beyond its natural (i.e., physical) limits and transform it into a decadent technology.
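The contrast between Moore's Law and the sigmoid pattern can be made concrete: Moore's Law, as commonly cited, is pure exponential doubling with no ceiling, which is precisely what no physical technology can sustain indefinitely. As a back-of-the-envelope sketch (the function and the choice of baseline are mine; the roughly two-year doubling period and the Intel 4004's transistor count are the commonly cited figures):

```python
def moores_law(transistors0, years, doubling_period=2.0):
    """Naive Moore's Law extrapolation: transistor count doubles
    every `doubling_period` years, with no physical limit."""
    return transistors0 * 2 ** (years / doubling_period)

# Illustrative: the Intel 4004 (1971) had about 2,300 transistors;
# projecting forward four decades at a two-year doubling period
# lands in the billions, roughly matching chips circa 2011.
count_1971 = 2300
count_2011 = moores_law(count_1971, years=40)
```

It is exactly when the real curve starts to fall below this idealized exponential, as physical limits bite, that the economic pressure described above sets in and the technology risks becoming decadent.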
. . . . .
. . . . .
. . . . .
. . . . .
17 March 2012
One of the greatest contributions to science in the twentieth century was Jane Goodall’s study of chimpanzees in the wild at Gombe, Tanzania. Although Goodall’s work represents a major advance in ethology, it did not come without criticism. Here is how Adrian G. Weiss described some of this criticism:
Jane received her Ph.D. from Cambridge University in 1965. She is one of only eight other people to earn a Ph.D. without a bachelor’s (Montgomery 1991). Her adviser, Robert Hinde, said her methods were not professional, and that she was doing her research wrong. Jane’s major mistake was naming her “subjects”. The animals should be given numbers. Jane also used descriptive, narrative writing in her observations and calculations. She anthropomorphized her animals. Her colleagues and classmates thought she was “doing all wrong”. Robert Hinde did approve her thesis, even though she returned with all of his corrections with the original names and anthropomorphizing.
Most innovative science breaks the established rules of the time. If the innovative science is eventually accepted, it eventually also becomes the basis of a new orthodoxy. Given time, that orthodoxy will be displaced as well, as more innovative work demonstrates new ways of acquiring knowledge. As the old orthodoxy passes out of fashion it often falls either into neglect or may become the target of criticism as vicious as that directed at new and innovative research.
I have to imagine that it was this latter phenomenon of formerly accepted scientific discourses falling out of favor and becoming the target of ridicule that inspired one of Foucault’s most famous quotes (which I have cited previously on numerous occasions): “A real science recognizes and accepts its own history without feeling attacked.” Here is the same quote with more context:
Each of my works is a part of my own biography. For one or another reason I had the occasion to feel and live those things. To take a simple example, I used to work in a psychiatric hospital in the 1950s. After having studied philosophy, I wanted to see what madness was: I had been mad enough to study reason; I was reasonable enough to study madness. I was free to move from the patients to the attendants, for I had no precise role. It was the time of the blooming of neurosurgery, the beginning of psychopharmacology, the reign of the traditional institution. At first I accepted things as necessary, but then after three months (I am slow-minded!), I asked, “What is the necessity of these things?” After three years I left the job and went to Sweden in great personal discomfort and started to write a history of these practices. Madness and Civilization was intended to be a first volume. I like to write first volumes, and I hate to write second ones. It was perceived as a psychiatricide, but it was a description from history. You know the difference between a real science and a pseudoscience? A real science recognizes and accepts its own history without feeling attacked. When you tell a psychiatrist his mental institution came from the lazar house, he becomes infuriated.
“Truth, Power, Self: An Interview with Michel Foucault,” 25 October 1982, in Martin, L. H., et al. (1988) Technologies of the Self: A Seminar with Michel Foucault, London: Tavistock, pp. 9-15
It remains true that many representatives of even the most sophisticated contemporary sciences react as though attacked when reminded of their discipline’s history. This is true not least because much of science has an unsavory history — at least, by contemporary standards, a lot of scientific history is unsavory, and this gives us reason to believe that many of our efforts today will, in the fullness of time, be consigned to the unsavory inquiries of the past which carry with them norms, evaluations, and assumptions that are no longer considered to be acceptable in polite society. This is, of course, deeply ironic (I could say hypocritical if I wanted to be tendentious) since the standard of acceptability in polite society is one of the most stultifying norms imaginable.
It has long been debated within academia whether history is a science, or an art, or perhaps even a sui generis literary genre with a peculiar respect for evidence. There is no consensus on this question, and I suspect it will continue to be debated so long as the Western intellectual tradition persists. History, at least, is a recognized discipline. I know of no recognized discipline of the study of civilizations, which in part is why I recently wrote The Future Science of Civilizations.
There is, at present, no science of civilization, though there are many scientists who have written about civilization. I don’t know if there are any university departments on “Civilization Studies,” but if there aren’t, there should be. We can at least say that there is an established literary genre, partly scientific, that is concerned with the problems of civilization (including figures as diverse as Toynbee and Jared Diamond). Even among philosophers, who have a great love of writing, “The philosophy of x,” there are very few works on “the philosophy of civilization” — some, yes, but not many — and, I suspect, few if any departments devoted to the philosophy of civilization. This is a regrettable ellipsis.
When, in the future, we do have a science of civilization, and perhaps also a philosophy of civilization (or, at very least, a philosophy of the science of civilization), this science will have to come to terms with its past as every science has had to (or eventually will have to). The prehistory of the science of civilization is already fairly well established, and there are several known classics of the genre. Many of these classics of the study of civilization are as thoroughly unsavory by contemporary standards as one could possibly hope. The history of pronouncements on civilization is filled with short-sighted, baldly prejudiced, privileged, ethnocentric, and thoroughly anthropocentric formulations. For all that, they still may have something of value to offer.
A technological typology of human societies that is no longer in favor is the tripartite distinction between savagery, barbarism, and civilization. This belongs to the prehistory of the prehistory of civilization, since it establishes the natural history of civilization and its antecedents.
Edward Burnett Tylor proposed that human cultures developed through three basic stages consisting of savagery, barbarism, and civilization. The leading proponent of this savagery-barbarism-civilization scale came to be Lewis Henry Morgan, who gave a detailed exposition of it in his 1877 book Ancient Society (the entire book is conveniently available online for your reading pleasure). A quick sketch of the typology can be found at ANTHROPOLOGICAL THEORIES: Cross-Cultural Analysis.
One of the interesting features of Morgan’s elaboration of Tylor’s idea is his concern to define his stages in terms of technology. From the “lower status of savagery” with its initial use of fire, through a middle stage at which the bow and arrow is introduced, to the “upper status of savagery” which includes pottery, each stage of human development is marked by a definite technological achievement. Similarly with barbarism, which moves through the domestication of animals, irrigation, metal working, and a phonetic alphabet. This breakdown is, in its own way, more detailed than many contemporary decompositions of human social development, as well as being admirably tied to material culture and therefore amenable to confirmation and disconfirmation through archaeological research.
Today, of course, we are much too sophisticated to use terms like “savagery” or “barbarism.” These terms are now held in ill repute, as they are thought to suggest strongly negative evaluations. A friend of mine who studied anthropology told me that the word “primitive” is now referred to as “the P-word” within the discipline, so unacceptable has it become. To call a people (even an historical people now extinct) “savage” is similarly considered beyond the pale. We don’t call people “savage” or “primitive” any more. But the danger of these terminological obsessions is that we get hung up on the terms and no longer consider theories on their theoretical merits. Jane Goodall’s theoretical work was eventually accepted despite her use of proper names in ethology, and now it is not at all uncommon for researchers to name their subjects that belong to other species.
Some theoreticians, moreover, have come to recognize that there are certain things that can be learned through sympathizing with one’s subject that simply cannot be learned in any other way (score one posthumously for Bergson’s conception of “intellectual sympathy”). Of course, science need not limit itself to a single paradigm of valid research. We can have a “big tent” of science with ample room for many methodologies, and hopefully also with plenty of room for disagreements.
It would be an interesting exercise to take a “dated” work like Lewis Henry Morgan’s book Ancient Society, leave the theoretical content intact, and change only the names. In fact, we could formalize Morgan’s gradations, using numbers instead of names just as Jane Goodall was urged to do. I suspect that Morgan’s work would be treated rather better in this case in comparison to the contemporary reception of its original terminology. We ought to ask ourselves why this is the case. Perhaps it is too much to hope for a “big tent” of science so capacious that it could hold Lewis Henry Morgan’s terminology alongside that of contemporary anthropology, but we have arrived at a big tent of science large enough to hold Jane Goodall’s proper names alongside tagged and numbered specimens.
. . . . .
. . . . .
. . . . .
31 January 2012
A revaluation of agricultural civilization
In several posts I have made a tripartite distinction in human history between hunter-gatherer nomadism, agriculturalism, and industrialism. There is a sense, then, from the perspective of la longue durée, that the macro-historical division of agriculturalism constitutes the “middle ages” of human social development. Prior to agriculturalism, nothing like this settled way of life even existed; now, from the perspective of industrialized civilization, agriculture is an enormous industry that can feed seven billion people, but it is a demographically marginal activity that occupies only a small fragment of our species. During those “middle ages” of agriculturalism (comprising maybe fifteen thousand years of human society) the vast bulk of our species was engaged in agricultural production. A very small class of elites oversaw agricultural production and its distribution, and small career military and priestly classes facilitated the work of those elites in overseeing agricultural production. This civilizational focus is perhaps unparalleled by any other macro-historical epoch of human social development (and I have elsewhere implicitly referred to this focus in Pure Agriculturalism).
The advent of agricultural civilization was simultaneously the advent of settled civilization, and the transition from agriculturalism to industrialism left the institution of settled civilization in place. Other continuities are also still in place, and many of these continuities from agriculturalism to industrialism are simply the result of the youth of industrial civilization. When industrial civilization is ten thousand years old — should it survive so long, which is not at all certain — I suspect that it will preserve far fewer traces of its agricultural past. For the present, however, we live in a milieu of agricultural institutions held over from the long macro-historical division of agriculturalism and emergent institutions of a still-inchoate industrialism.
The institutions of agricultural civilization are uniquely macabre, and it is worthwhile to inquire how an entire class of civilizations (all the civilizations that belong within the macro-historical division of settled agriculturalism) could come to embody a particular (and, indeed, a peculiar) moral-aesthetic tenor. What do I mean by “macabre”? The online Merriam-Webster dictionary defines “macabre” as follows:
1: having death as a subject: comprising or including a personalized representation of death
2: dwelling on the gruesome
3: tending to produce horror in a beholder
All of the above characterize settled agricultural civilization, which has death as its subject, dwells upon the gruesome, and as a consequence tends to produce horror in the beholder.
The thousand years of medieval European society, which approximated pure agriculturalism perhaps more closely than many other agricultural civilizations (and which we might call a little bit of civilization in its pure form), stands as a monument to the macabre, especially after the experience of the Black Death (bubonic plague), which gave the culture of Europe a decidedly death-obsessed aspect still to be seen in graphically explicit painting and sculpture. But medieval Europe is not unique in this respect; all settled agricultural civilization, to a greater or a lesser extent, has a macabre element at its core. The Agricultural Apocalypse that I wrote about in my previous post constitutes a concrete expression of the horrors that agricultural civilization has inflicted upon itself. What makes agricultural civilization so horrific? What is the source of the macabre Weltanschauung of agriculturalism?
Both the lives of nomadic hunter-gatherers and the lives of settled agriculturalists are bound up with a daily experience of death: human beings must kill in order to live, and other living beings must die so that human beings can live. Occasionally a human being dies so that another species may live, and while this still happens in our own time when someone is eaten by a bear or a mountain lion, it happens much less often than the alternative, which explains why there are seven billion human beings on the planet while no other vertebrate predator comes close to these numbers. The only vertebrate species that flourish are those that we allow to flourish (there are, for example, about sixteen billion chickens in the world), with the exception of a few successful parasitic species such as rats and seagulls. (Even then, there are about five billion rats on the planet, and each rat weighs only a fraction of the mass of a human being, so that total human biomass is disproportionately great.)
Although nomadic hunter-gatherers and settled agriculturalists both confront pervasive experiences of death, the experience of death is different in each case, and this difference in the experience and indeed in the practice of death informs everything about human life that is bound up in this relationship to death. John Stuart Mill wrote in his The Utility of Religion:
“Human existence is girt round with mystery: the narrow region of our experience is a small island in the midst of a boundless sea, which at once awes our feelings and stimulates our imagination by its vastness and its obscurity. To add to the mystery, the domain of our earthly existence is not only an island in infinite space, but also in infinite time. The past and the future are alike shrouded from us: we neither know the origin of anything which is, nor, its final destination. If we feel deeply interested in knowing that there are myriads of worlds at an immeasurable, and to our faculties inconceivable, distance from us in space; if we are eager to discover what little we can about these worlds, and when we cannot know what they are, can never satiate ourselves with speculating on what they may be; is it not a matter of far deeper interest to us to learn, or even to conjecture, from whence came this nearer world which we inhabit; what cause or agency made it what it is, and on what powers depend its future fate?”
While Mill wrote that human existence is girt round with mystery, he might well have said that human existence is girt round with death, and in many religious traditions death and mystery are synonymous. The response to the death that surrounds human existence, and the kind of death that surrounds human existence, shapes the mythological traditions of the people so girt round.
Joseph Campbell explicitly recognized the striking difference in mythologies between nomadic hunter-gatherers and settled agricultural peoples. This is a theme to which Campbell returns time and again in his books and lectures. The mythologies of hunting peoples, Campbell maintained, revolved around placating the spirits of killed prey, while the mythologies of agricultural peoples revolved around sacrifice, according to the formula that, since life grows out of death, in order to create more life, one must create more death. Hence sacrifice. Campbell clearly explains a link between the mythologies peculiar to macro-historically distinct peoples, but why should peoples respond so strongly (and so differently) to distinct experiences of death? And, perhaps as importantly, why should peoples respond mythologically to death? To answer this question demands a more fundamental perspective upon human life in its embeddedness in socio-cultural milieux, and we can find such a perspective in a psychoanalytic interpretation of history derived from Freud.
It is abundantly obvious, in observing the struggle for life, that organisms are possessed of a powerful instinct to preserve the life of the individual at all costs and to reproduce that life (sometimes called eros or libido), but Freud theorized that, in addition to the survival instinct, there is also a “death drive” (sometimes called thanatos). Here is Freud’s account of the death drive:
“At one time or another, by some operation of force which still completely baffles conjecture, the properties of life were awakened in lifeless matter. Perhaps the process was a prototype resembling that other one which later in a certain stratum of living matter gave rise to consciousness. The tension then aroused in the previously inanimate matter strove to attain an equilibrium; the first instinct was present, that to return to lifelessness. The living substance at that time had death within easy reach; there was probably only a short course of life to run, the direction of which was determined by the chemical structure of the young organism. So through a long period of time the living substance may have been constantly created anew, and easily extinguished, until decisive external influences altered in such a way as to compel the still surviving substance to ever greater deviations from the original path of life, and to ever more complicated and circuitous routes to the attainment of the goal of death. These circuitous ways to death, faithfully retained by the conservative instincts, would be neither more nor less than the phenomena of life as we now know it. If the exclusively conservative nature of the instincts is accepted as true, it is impossible to arrive at any other suppositions with regard to the origin and goal of life.”
Sigmund Freud, Beyond the Pleasure Principle, authorized translation from the second German edition by C. J. M. Hubback, London and Vienna: The International Psycho-Analytical Press, 1922, pp. 47-48
The death drive, or thanatos, does not appear to be as urgent as the drive to live and to reproduce, but according to Freud it is equally implicated in society and culture. Moreover, given the emergence of war from the same settled agricultural societies that practiced a mythology of sacrifice (according to Campbell), there has been a further “production” of death by the social organization made possible by settled societies. It is to be expected that the production of death by sacrifice in order to ensure a good harvest would become entangled with the production of death in order to ensure the continuity of the community, and indeed in societies in which war became highly ritualized (e.g., Aztec civilization and Japanese civilization) there is a strong element of sacrifice in combat.
Freud’s explanation of the death drive may strike the reader as a bit odd and perhaps unlikely, but the mechanism that Freud is proposing is not all that different from Sartre’s contention that being-for-itself seeks to become being-in-itself (to put it simply, everyone wants to be God): life — finite life, human life — is problematic, unstable, uncertain, subject to calamity, and pregnant with every kind of danger. Why would such a contingent, finite being not desire to possess the quiescence and security of being-in-itself, to be free of all contingencies — free, as Shakespeare put it, of “the thousand natural shocks that flesh is heir to”? The mythologies that Campbell describes as being intrinsic to nomadic and settled peoples are mechanisms that attempt to restore the equilibrium to the world that has been disturbed by human activity.
Agricultural civilization is the institutionalization of the death drive. The mythology of sacrifice institutionalizes death as the norm and even the ideal of agricultural civilizations. As such, settled agricultural civilization is (has been) a pathological permutation of human society that has resulted in the social equivalent of neurotic misery. That is to say, agricultural civilization is a civilization of neurotic misery, but all civilization need not be neurotically miserable. The Industrial Revolution has accomplished part of the work of overcoming the institutions of settled agriculturalism, but we still retain much of its legacy. To make the complete transition from the neurotic misery of settled agricultural civilization to ordinary civilizational unhappiness will require an additional effort above and beyond industrialization.
Despite the explicit recognition of a Paleolithic Golden Age prior to settled agriculturalism, there is a strong bias in contemporary civilization against nomadism and in favor of settled civilization. Both Kenneth Clark’s Civilisation: A Personal View and Jacob Bronowski’s The Ascent of Man (both of which I have cited with approval in many posts) make broad evaluative judgments to the detriment of nomadic societies — an entirely superfluous judgment, as though the representatives of settled civilization felt that they needed to defend the existential orientation of their civilization by condemning the way of life of uncivilized peoples, who are called savages and barbarians. The contempt that has been shown for the world’s surviving nomadic peoples — the Sami, the Gypsies, and others — as well as programs of forced sedentarization — e.g., among the Kyrgyz — show the high level of emotional feeling that still attaches to the difference between fundamentally distinct forms of life, even when one pattern of life has become disproportionately successful and no longer needs to defend itself against the depredations of the other.
Given this low esteem in which existential alternatives are held, it is important to see settled agricultural civilization, as well as its direct descendant, settled industrial civilization, in their true colors and true dimensions, and to explicitly recognize the pathological and explicitly macabre elements of the civilization that we have called our own, in order to see it for what it is and therefore to see its overcoming as an historical achievement for the good of the species.
We are not yet free of the institutions of settled agricultural civilization, which means that we are not yet free of a Weltanschauung constructed around macabre rituals focused on death. And despite the far-reaching changes to life that have come with the Industrial Revolution, there is no certainty that the developments that separate us from the settled agricultural macabre will continue. I wrote above that, given the consolidation of industrial civilization, we will probably have institutions far less agricultural in character, but it remains possible that industrialism may falter, may collapse, or may even, after consolidating itself as a macro-historical division, give way to a future macro-historical division in which the old ways of agriculturalism will be reasserted.
I count among the alternatives of future macro-historical developments the possibility of pastoralization and neo-agriculturalism. In any civilization largely constituted by either the historical processes of pastoralization or neo-agriculturalism, agriculture would once again play a central and perhaps a dominant role in the life of the people. In a future macro-historical division in which agriculture was once again the dominant feature of human experience, I would expect that the macabre character of agricultural civilization would once again reassert itself in a new mythology eventually consolidated in the axialization of a future historical paradigm centered on agriculture.
. . . . .
. . . . .
. . . . .