8 June 2015
In several posts I have discussed the need for a science of civilization (cf., e.g., The Future Science of Civilizations), and this is a theme I intend to continue to pursue in future posts. It is no small matter to constitute a new science where none has existed, and to constitute a new science for an object of knowledge as complex as civilization is a daunting task.
The problem of constituting a science of civilization, de novo for all intents and purposes, may be seen in the light of Husserl’s attempt to constitute (or re-constitute) philosophy as a rigorous science, which was a touchstone of Husserl’s work. Here is a passage from Husserl’s programmatic essay, “Philosophy as Strict Science” (variously translated) in which Husserl distinguishes between profundity and intelligibility:
“Profundity is the symptom of a chaos which true science must strive to resolve into a cosmos, i.e., into a simple, unequivocal, pellucid order. True science, insofar as it has become definable doctrine, knows no profundity. Every science, or part of a science, which has attained finality, is a coherent system of reasoning operations each of which is immediately intelligible; thus, not profound at all. Profundity is the concern of wisdom; that of methodical theory is conceptual clarity and distinctness. To reshape and transform the dark gropings of profundity into unequivocal, rational propositions: that is the essential act in methodically constituting a new science.”
Edmund Husserl, “Philosophy as Rigorous Science” in Phenomenology and the Crisis of Philosophy, edited by Quentin Lauer, New York: Harper, 1965 (originally “Philosophie als strenge Wissenschaft,” Logos, vol. I, 1911)
Recently re-reading this passage from Husserl’s essay I realized that much of what I have attempted in the way of “methodically constituting a new science” of civilization has taken the form of attempting to follow Husserl’s pursuit of “unequivocal, rational propositions” that eschew “the dark gropings of profundity.” I think much of the study of civilization, immersed as it is in history and historiography, has been subject more often to profound meditations (in the sense that Husserl gives to “profound”) than conceptual clarity and distinctness.
The Cartesian demand for clarity and distinctness is especially interesting in the context of constituting a science of civilization given Descartes’ famous disavowal of history (on which cf. the quote from Descartes in Big History and Scientific Historiography); if an historical inquiry is the basis of the study of civilization, and history consists of little more than fables, then a science of civilization becomes rather dubious. The emergence of scientific historiography, however, is relevant in this context.
The structure of Husserl’s essay is strikingly similar to the first lecture in Russell’s Our Knowledge of the External World. Both Russell and Husserl take up major philosophical movements of their time (and although the two were contemporaries, each took different examples — Husserl, naturalism, historicism, and Weltanschauung philosophy; Russell, idealism, which he calls “the classical tradition,” and evolutionism), primarily, it seems, to show how philosophy had gotten off on the wrong track. The two works can profitably be read side-by-side, as Russell is close to being an exemplar of the naturalism Husserl criticized, while Husserl is close to being an exemplar of the idealism that Russell criticized.
Despite the fundamental difference between Husserl and Russell, each had an idea of rigor that he attempted to realize in his philosophical work, and each thought of that rigor as bringing the scientific spirit into philosophy. (In Kierkegaard and Russell on Rigor I discussed Russell’s conception of rigor and its surprising similarity to Kierkegaard’s thought.) Interestingly, however, the two did not criticize each other directly, though each knew of the other’s work.
The new science Russell was involved in constituting was mathematical logic, which, Roman Ingarden explicitly tells us, Husserl found inadequate for the task of a scientific philosophy:
“It is maybe unexpected and surprising that Husserl who was trained as a mathematician did not seek salvation for philosophy in the mathematical method which had from time to time stood out like a beacon as an ideal worthy of imitation by philosophers. But mathematical logic could not satisfy him… above all he fought for responsibility in philosophical research and devoted many years to the elaboration of a method which, according to him, was to secure for philosophy the status of a science.”
Roman Ingarden, On the Motives which Led Husserl to Transcendental Idealism, Translated from the Polish by Arnor Hannibalsson, Den Haag: Martinus Nijhoff, 1975, p. 9.
Ingarden’s discussion of Husserl is instructive, in so far as he notes the influence of mathematical method upon Husserl’s thought, but also that Husserl did not try to employ a mathematical method directly in philosophy. Rather, Husserl invested his philosophical career in the formulation of a new methodology that would allow the values of rigorous scientific practice to be expressed in philosophy and through a philosophical method — a method that might be said to be parallel to or mirroring the mathematical method, or derived from the same thematic motives as those that inform mathematical methodology.
The same question is posed in considering the possibility of a rigorously scientific method in the study of civilization. If civilization is sui generis, is a sui generis methodology necessary to the formulation of a rigorous theory of civilization? Even if that methodology is not what we today know as the methodology of science, or even if that methodology does not precisely mirror the rigorous method of mathematics, there may be a way to reason rigorously about civilization, though it has yet to be given an explicit form.
The need to think rigorously about civilization I took up implicitly in Thinking about Civilization, Suboptimal Civilizations, and Addendum on Suboptimal Civilizations. (I considered the possibility of thinking rigorously about the human condition in The Human Condition Made Rigorous.) Ultimately I would like to make my implicit methodology explicit and so to provide a theoretical framework for the study of civilization.
Since theories of civilization have been, for the most part, either implicit or vague or both, there has been little theoretical framework to give shape or direction to the historical studies that have been central to the study of civilization to date. Thus the study of civilization has been a discipline adrift, without a proper research program, and without an explicit methodology.
There are at least two sides to the rigorous study of civilization: theoretical and empirical. The empirical study of civilization is familiar to us all in the form of history, but history studied as history and history studied for what it can contribute to the theory of civilization are two different things. One of the initial fundamental problems of the study of civilization is to disentangle civilization from history, which involves a formal rather than a material distinction, because both the study of civilization and the study of history draw from the same material resources.
How do we begin to formulate a science of civilization? It is often said that, while science begins with definitions, philosophy culminates in definitions. There is some truth to this, but when one is attempting to create a new discipline one must be both philosopher and scientist simultaneously, practicing a philosophical science or a scientific philosophy that approaches a definition even as it assumes a definition (admittedly vague) in order for the inquiry to begin. Husserl, clearly, and Russell also, could be counted among those striving for a scientific philosophy, while Einstein and Gödel could be counted as among those practicing a philosophical science. All were engaged in the task of formulating new and unprecedented disciplines.
This division of labor between philosophy and science points to what Kant would have called the architectonic of knowledge. Husserl conceived this architectonic categorically, while we would now formulate the architectonic in hypothetico-deductive terms, and it is Husserl’s categorical conception of knowledge that ties him to the past and at times gives his thought an antiquated cast, but this is merely an historical contingency. Many of Husserl’s formulations are dated and openly appeal to a conception of science that no longer accords with what we would likely today think of as science, but in some respects Husserl grasps the perennial nature of science and what distinguishes the scientific mode of thought from non-scientific modes of thought.
Husserl’s conception of science is rooted in the conception of science already emergent in the ancient world in the work of Aristotle, Euclid, and Ptolemy, and which I described in Addendum on the Agrarian-Ecclesiastical Thesis. Russell’s conception of science is that of industrial-technological civilization, jointly emergent from the scientific revolution, the political revolutions of the eighteenth century, and the industrial revolution. With the overthrow of scholasticism as the basis of university curricula (which took hundreds of years following the scientific revolution before the process was complete), a new paradigm of science was to emerge and take shape. It was in this context that Husserl and Russell, Einstein and Gödel, pursued their research, employing a mixture of established traditional ideas and radically new ideas.
In a thorough re-reading of Husserl we could treat his conception of science as an exercise to be updated as we went along, substituting an hypothetico-deductive formulation for each and every one of Husserl’s categorical formulations, ultimately converging upon a scientific conception of knowledge more in accord with contemporary conceptions of scientific knowledge. At the end of this exercise, Husserl’s observation about the difference between science and profundity would still be intact, and would still be a valuable guide to the transformation of a profound chaos into a pellucid cosmos.
This ideal, and even more so the realization of this ideal, may ultimately prove impossible. Husserl himself in his later writings famously said, “Philosophy as science, as serious, rigorous, indeed apodictically rigorous, science — the dream is over.” (It is interesting to compare this metaphor of a dream to Kant’s claim that he was awoken from his dogmatic slumbers by Hume.) The impulse to science returns, eventually, even if the idea of an apodictically rigorous science has come to seem a mere dream. And once the impulse to science returns, the impulse to make that science rigorous will reassert itself in time. Our rational nature asserts itself in and through this impulse, which is complementary to, rather than contradictory of, our animal nature. To pursue a rigorous science of civilization is ultimately as human as the satisfaction of any other impulse characteristic of our species.
. . . . .
. . . . .
. . . . .
. . . . .
4 June 2015
Today marks the 26th anniversary of the Tiananmen massacre. In the past year it almost looked like similar sights would be repeated in Hong Kong, as the “Umbrella Revolution” protesters showed an early resolve and seemed to be making some headway. But the regime in Beijing kept its cool, exercised a certain patience, and simply waited out the protesters. Perhaps the protesters will return, but they will have a difficult time regaining the historical momentum of that moment. It would take another incident of some significance to spark further unrest in Hong Kong. The Chinese state has both the patience and the economic momentum to dictate its version of events. Hence the importance of maintaining the June 4th incident in living memory.
Just yesterday I was talking to a Chinese friend and I opined that, with the growth of the Chinese economy and Chinese citizens working all over the world, the government might have increasing difficulty in maintaining its regime of control over information within the Chinese mainland. I was told that it was not difficult to make the transition between what you can say in China and what you cannot, compared with the relative freedom Chinese enjoy to say whatever they think when outside mainland China. One simply assumes the appropriate persona when in China. As a westerner, I have a difficult time accepting this, but the way in which it was described to me was perfectly authentic and I have no reason to doubt it.
Over the past weeks and months there have been many signs of China’s continued assumption of the role of a “responsible stakeholder” in the global community, among them the initial success in gaining the cooperation of other nation-states in the fledgling Asian Infrastructure Investment Bank (AIIB). The Financial Times last Tuesday noted, “…the IMF’s decision later this year about whether to include China in the basket of currencies from which it makes up its special-drawing-rights will be keenly watched.” (“What Fifa tells us about global power” by Gideon Rachman) The very idea of a global reserve currency that is not fully convertible and fully floating strikes me as nothing short of bizarre — since the value of the currency is not then determined by the markets, its value must be established politically — but that just goes to show you what economic power can achieve. And all of this takes place against the background of China’s ongoing land reclamation on small islands in the South China Sea, which is a source of significant tension. But the tension has not derailed the business deals.
If China’s grand strategy (or, rather, the grand strategy of the Chinese communist party) is to make China a global superpower with both hard power (military power projection capability) and soft power (social and cultural prestige), and to do so while retaining the communist party’s absolute grip on power (presumably assuming the legitimacy of that grip on power), one must acknowledge that this strategy has been successfully on track for decades. Assume, for purposes of argument, that this grand strategy continues successfully on track. I have to wonder whether the Chinese communist party has a plan to eventually allow the history of the Tiananmen massacre to be known, once subsequent events have sufficiently changed the meaning of the event (by “proving” that the party was “right” because its policies led to the success of China, so that the massacre should be excused as understandable in the service of a greater good), or whether the memory of the Tiananmen massacre is to be forever sequestered. Since the Chinese leadership has proved its ability to think big over the long term, I would guess that there must be internal documents that deal explicitly with this question, though I don’t suppose this internal debate will ever become public knowledge.
I have read many times, from many different sources, that young party members are set to study the lessons of the fall of dictators and one-party states elsewhere in the world. Perhaps they also study damaging historical revelations as carefully, and have developed a plan to manage knowledge of the Tiananmen massacre at some time in the future. It is not terribly difficult to imagine China attempting to use the soft power of the great many Confucius Institute franchises it has sponsored (480 worldwide at latest count) to slowly and gradually shape the discourse around China and the biggest PR disaster in the history of the Chinese communist party, paving the way to eventually opening a discussion of Tiananmen entirely on Chinese terms. I suppose that’s what I would do, if I were a member of the Standing Committee of the Central Politburo. But, again, I am a westerner and am liable to utterly misjudge Chinese motivations. I will, however, continue to wonder about their long game in relation to Tiananmen, and to look for signs in the tea leaves that will betray that game.
. . . . .
Previous posts on Tiananmen Anniversaries:
2013 A Dream Deferred
. . . . .
. . . . .
. . . . .
. . . . .
27 May 2015
Is it possible for human beings to care about the fate of strangers? This is at once a profound philosophical question and an immediately practical question. The most famous response to this question is perhaps that of John Donne:
“No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.”
John Donne, Devotions upon Emergent Occasions, XVII. Nunc lento sonitu dicunt, morieris. Now, this bell tolling softly for another, says to me: Thou must die.
Emmanuel Levinas spoke of “the community of those with nothing in common,” in an attempt to get at the human concern for other human beings of whom we know little or nothing. More recently, there is this from Bill Gates:
“When I talk to friends about global health, I often run into a strange paradox. The idea of saving one person’s life is profound and thrilling. But I’ve found that when you talk about saving millions of lives — it sounds almost meaningless. The higher the number, the harder it is to wrap your head around.”
Bill Gates, opening paragraph of An AIDS Number That’s Almost Too Big to Believe
Gates presents this as a paradox, but in social science it is a well-known and well-studied cognitive bias known as the identifiable victim effect. One researcher who has studied this cognitive bias is Paul Slovic, whose work was discussed by Sam Harris in the following passage:
“…when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.”
“Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this ‘psychic numbing’ explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed ‘genocide neglect’ — our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering — represents one of the more perplexing and consequential failures of our moral intuition.”
“Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.”
Sam Harris, The Moral Landscape, Chapter 2
Skip down another paragraph and Harris adds this:
“The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention.”
While Harris has not hesitated to court controversy, and speaks the truth plainly enough as he sees it, by failing to place what he characterizes as norms of moral reasoning in an evolutionary context he presents us with a paradox (the above section of the book is subtitled “Moral Paradox”). Really, this kind of cognitive bias only appears paradoxical when compared to a relatively recent conception of morality liberated from parochial in-group concerns.
For our ancestors, focusing on a single individual whose face is known had a high survival value for a small nomadic band, whereas a broadly humanitarian concern for all human beings would have been disastrous in equal measure. Today, in the context of industrial-technological civilization we can afford to love humanity; if our ancestors had loved humanity rather than particular individuals they knew well, they likely would have gone extinct.
Our evolutionary past has ill prepared us for the perplexities of population ethics in which the lives of millions may rest on our decisions. On the other hand, our evolutionary past has well prepared us for small group dynamics in which we immediately recognize everyone in our in-group and with equal immediacy identify anyone who is not part of our in-group and who therefore belongs to an out-group. We continue to behave as though our decisions were confined to a small band of individuals known to us, and the ability of contemporary telecommunications to project particular individuals into our personal lives as though we knew them, as if they were part of our in-group, plays into this cognitive bias.
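The scope insensitivity described above can be caricatured in a purely illustrative toy model — my own construction, not Slovic’s actual data or any model he proposes: if the concern we feel per victim decays faster than the number of victims grows, then total concern actually falls as the body count rises.

```python
# Purely illustrative toy model of "psychic numbing" -- an assumed
# power-law decay, not Slovic's data. Per-victim concern is taken to
# fall off as n**(-alpha); whenever alpha > 1, total concern over n
# victims declines as n grows, mimicking the "collapse of compassion."

def total_concern(n, alpha=1.1):
    """Total concern for n victims under power-law per-victim decay."""
    per_victim = n ** (-alpha)
    return n * per_victim  # equals n**(1 - alpha)

for n in (1, 2, 10, 100, 1_000_000):
    print(n, round(total_concern(n), 3))
```

On this sketch, concern is greatest for a single victim and shrinks monotonically thereafter — the diabolical trend Harris describes, in which the greater the need, the less people are inclined to give.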
While the explicit formulation of the identifiable victim effect is recent, the principle has been well known for hundreds of years at least, and has been as compellingly described in historical literature as in recent social science, as, for example, in Adam Smith:
“Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”
Adam Smith, Theory of Moral Sentiments, Part III, chapter 3, paragraph 4
And immediately after Hume made his famous claim that, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them,” he illustrated the claim with an observation similar to Smith’s:
“Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.”
David Hume, A Treatise of Human Nature, Book II, Part III, section 3
Bertrand Russell has well described how the expression of this cognitive bias can take on the conceit of moral superiority in the context of romanticism:
“Cultivated people in eighteenth-century France greatly admired what they called la sensibilité, which meant a proneness to emotion, and more particularly to the emotion of sympathy. To be thoroughly satisfactory, the emotion must be direct and violent and quite uninformed by thought. The man of sensibility would be moved to tears by the sight of a single destitute peasant family, but would be cold to well-thought-out schemes for ameliorating the lot of peasants as a class. The poor were supposed to possess more virtue than the rich; the sage was thought of as a man who retires from the corruption of courts to enjoy the peaceful pleasures of an unambitious rural existence.”
Bertrand Russell, A History of Western Philosophy, Part II. From Rousseau to the Present Day, CHAPTER XVIII “The Romantic Movement”
Russell’s account of romanticism provides some of the missing rationalization whereby a cognitive bias clearly at variance with norms of moral reasoning is justified as being the “higher” moral ground. Harris seems to suggest that, as soon as this violation of moral reasoning is pointed out to us, we will change. But we don’t change, for the most part. Our rationalizations change, but our behavior rarely does. And indeed studies of cognitive bias have revealed that even when experimental subjects are informed of cognitive biases that should be obvious ex post facto, most will continue to defend choices that unambiguously reflect cognitive bias.
I have personally experienced the attitude described by Russell (despite the fact that I have not lived in eighteenth-century France) more times than I care to recall, though I find myself temperamentally on the side of those formulating well-thought-out schemes for the amelioration of the lot of the destitute as a class, rather than those moved to tears by the sight of a single destitute family. These personal experiences suggest to me, if only anecdotally, that if you attempt to live by the quasi-utilitarianism advocated by Russell and Harris, others will regard you as cold, unfeeling, and lacking in the milk of human kindness.
The cognitive bias challenge to presumptive norms of moral reasoning is also a profound challenge to existential risk mitigation, since existential risk mitigation deals in the largest numbers of human lives saved, but is a well-thought-out scheme for ameliorating the lot of human beings as a class, and may therefore have little emotional appeal compared to putting an individual’s face on a problem and then broadcasting that face repetitively.
We have all heard that the past is a foreign country, and that they do things differently there. (This line comes from the 1953 novel The Go-Between by L. P. Hartley.) We are the past of some future that has yet to occur, and we will in turn be a foreign country to that future. And, by the same token, the future is a foreign country, and they do things differently there. Can we care about these foreigners with their foreign ways? Can we do more than care about them, and actually change our behavior in the present in order to ensure an ongoing future, however foreign that future is from our parochial concerns?
In Bostrom’s paper “Existential Risk Prevention as Global Priority” (Global Policy, Volume 4, Issue 1, February 2013) the author gives a lower bound of 10^16 potential future lives saved by existential risk mitigation (though he also gives “a lower bound of 10^54 human-brain-emulation subjective life-years” as a possibility), but if the “collapse of compassion” is a function of the numbers involved, the higher the numbers we cite for individuals saved as a result of existential risk mitigation, the less will the average individual of today care.
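Back-of-the-envelope arithmetic makes the scale of Bostrom’s lower bound vivid (the world population figure of roughly 7 × 10^9 for 2015 is my own assumed round number, not taken from Bostrom’s paper):

```python
# Bostrom's lower bound of potential future lives at stake in
# existential risk mitigation, compared with the 2015 world
# population (roughly 7e9 -- an assumed round figure).
future_lives = 10 ** 16
present_population = 7 * 10 ** 9

ratio = future_lives / present_population
print(f"{ratio:,.0f}")  # on the order of a million times the present world population
```

Even at the lower bound, the lives at stake amount to well over a million times everyone now alive — precisely the kind of number that, on Slovic’s findings, is too large to move us.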
Would it be possible to place an identifiable victim in the future? This is difficult, but we are all familiar with appeals to the world we leave to our children, and these are attempts to connect identifiable victims with actions that may prejudice the ability of human beings in the future to live lives of value commensurate with our own. It would be possible to construct some grand fiction, like Plato’s “noble lie,” in order to interest the mass of the public in existential risk mitigation, but this would not be successful unless it became some kind of quasi-religious belief, exempted from falsification, that became the receptacle of our collective hopes. This does not seem very plausible (or sustainable) to me.
Are we left, then, to take the high road? To try to explain in painstaking (and off-putting) detail the violation of moral norms involved in our failure to adequately address existential risks, thereby putting our descendants in mortal danger? Certainly if an attempt to place an identifiable victim in the future is doomed to failure, we have no remaining option but the attempt at a moral intervention and relentless moral education that could transform the moral lives of humanity.
I do not think either of the above approaches to resolving the identifiable victim challenge to existential risk mitigation would be likely to be successful. I can put this more strongly yet: I think both approaches would almost certainly result in a backlash and would therefore be counter-productive to existential risk mitigation efforts. The only way forward that I can see is to present existential risk mitigation under the character of the adventure and exploration made possible by a spacefaring civilization that would, almost as an unintended consequence, secure the redundancy and autonomy of extraterrestrial centers of human civilization.
Human beings (at least as I know them) have a strong distaste for moral lectures and do not care to be told to do anything for their own good, but if you present them with the possibility of adventure and excitement that promises new experiences to every individual and possibly even the prospect of the extraterrestrial equivalent of a buried treasure, or even a pot of gold at the end of the rainbow, you might enlist the selfishness and greed of individuals in a great cause on behalf of Earth and all its inhabitants, so that each individual is moved, as it were, by an invisible hand to promote an end which was no part of his intention.
. . . . .
. . . . .
Existential Risk: The Philosophy of Human Survival
13. Existential Risk and Identifiable Victims
. . . . .
. . . . .
. . . . .
. . . . .
23 May 2015
In my recent post on Proxy War in Yemen I asserted that the concept of a proxy war, while primarily associated with the Cold War, can be applied to the war now being fought indirectly between Saudi Arabia and Iran in Yemen. A narrow conception of proxy wars would not have this application, and would be more confined to its original introduction and usage. Thus it could rightly be said that I was applying a broad conception of a proxy war. This was my intent.
What has been said above of proxy wars can also be said of war in general: that there are narrow and broad conceptions. Narrow conceptions are usually a function of a particular historical context of usage. If you asked an inhabitant of Periclean Athens to define war, they might have answered that war was a clash between hoplites from different city-states facing each other as a phalanx. For such a narrow conception of war, the innovations that Alexander introduced into the Macedonian phalanx might pose a definitional challenge: is it or is it not a phalanx, and is war employing this instrument a war, or something related to war through descent with modification?
In many contexts I have pursued the exposition of what I call the extended sense of a concept, in which a familiar concept is systematically subjected to variation, extrapolation, extension, and generalization in order to see how comprehensive a conception can be made. I have been influenced in this respect by Bertrand Russell, whose imperative to generalization I previously quoted in The Science of Time and The Genealogy of the Technium:
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
Open-textured concepts are best suited to Russellian generalization. What is an open-textured concept? Here is one account:
“According to Austin and Wittgenstein, words have clear conditions of application only against a background of ‘normal circumstances’ corresponding to the type of context in which the words were used in the past. There is no ‘convention’ to guide us as to whether or not a particular expression applies in some extraordinary situation. This is not because the meaning of the word is ‘vague’, but because the application of words ultimately depends on there being a sufficient similarity between the new situation of use and past situations. The relevant dimensions of similarity are not fixed once and for all; this is what generates ‘open texture’ (Waismann 1951).”
Routledge Encyclopedia of Philosophy, London and New York: Routledge, 1998, “Pragmatics”
More briefly, Stephen Barker wrote of open texture: “Our tendencies concerning the use of the word form a loosely knit pattern which does not definitely provide for all possibilities.” (Philosophy of Mathematics, “Introduction: The Open Texture of Language,” p. 11) Barker goes on to use the Copernican analysis of celestial motion as an example of open texture. If “move” means to change position relative to the Earth, then certainly the Earth cannot, by definition, move. But what Copernicus did was to extend our conception of movement beyond the concept of movement that was limited to the special case of the surface of the Earth. One could say that Copernicus formulated an extended concept of motion.
It seems to me that war is a perfect example of an open-textured concept, and one that can readily (and indeed has been repeatedly) extended by changed circumstances. As civilization has grown, war has grown — in scope, scale, fatality, and complexity. The growth of war has been twofold: 1) growth in the absolute size of war (quantitative), and 2) growth in the complexity and sophistication of war (qualitative). Once we understand that war is an open-textured concept, the Russellian imperative comes into play, and the philosophical impulse is to generalize war to the greatest possible extent and thus to arrive at an extended conception of warfare.
Recently in VE Day: Seventy Years I suggested the possibility of the existential viability of warfare, which sounds like an odd way to speak of war, as though we were concerned to maintain war in existence, when many if not most individuals view the extirpation of war as the goal of civilization. But war and civilization are coextensive, and this implies that the viability of war is linked to the viability of civilization. In the long ten thousand year history of agricultural civilization warfare took many different and distinct forms. These different forms of warfare were driven by both quantitative and qualitative growth in war. The advent of industrialized warfare (cf. A Century of Industrialized Warfare) forced us once again to expand the scope and scale of what we call war.
Industrialized warfare coincided with the social consequences of industrialization — the growth of conurbations, mass communications, rapid transportation, and popular sovereignty, inter alia — and all of these developments forced warfare to become mass war fought by mass man. Industrialization allowed for a rapid increase in scale that outstripped qualitative development, and this almost exclusively quantitative increase in warfare gave us the concept of total war. (The idea of total war preceded that of industrialization, but I would argue that the term only came into its proper significance in the wake of mass war, i.e., that industrialized mass war is the natural teleology of the concept of total war.)
Industrialized total war did not persist long; if it had, we would have destroyed ourselves. Thus the rapid development of total war executed a perfect dialectical inversion and gave us the contemporary conception of limited war. We don’t even talk in terms of “limited war” any more because all wars are limited. An unlimited war today — total war — would be too devastating to contemplate. During the Cold War, a common euphemism for the MAD scenario of a massive nuclear exchange was “the unthinkable.” Of course, some did think the unthinkable, and they in turn became symbolic of an unmentionable engagement with the unthinkable (Curtis LeMay and Herman Kahn come to mind in this respect). The strange world of pervasive yet limited conflict to which we have now become accustomed has no place for total war, but it is perhaps no less strange than the paradigm of warfare that preceded it, consisting of mass conscript armies engaged in total industrialized warfare between nation-states.
Yet we have found countless ways to wage limited wars, with new conceptions of war appearing regularly with changes in technology and social organization. There is proxy war, guerrilla war, irregular war, asymmetrical warfare, swarm warfare, and so on. Perhaps the most recent extension of the concept of war is that of hybrid warfare, which has received much attention lately. (Russian actions in east Ukraine are often characterized in terms of hybrid warfare.) It is arguable that the many “experiments” with limited war following the end of the period of industrialized total war have qualitatively expanded and extended our conception of war in a way parallel to the quantitative expansion and extension of our conception of war driven by industrialization. Thus hybrid war, or some successor to hybrid war that is yet to be visited upon us (through descent with modification), may be understood as the qualitative form of total war.
Hybrid warfare is an illustration of how the scope and scale of warfare are related and can come to permeate society even when war is not “total” in the sense used prior to nuclear weapons (i.e., the quantitative sense of total war). The duration of the local and limited wars we have managed to fight under the nuclear umbrella is limited only by the willingness of participants to engage in long-term low-intensity warfare. We have learned much from this experience. While the world wars of the first half of the twentieth century taught us that democratic nation-states could field armies of millions and project unprecedented power for a few years’ duration, the local and limited wars of the second half of the twentieth century taught us that democratic nation-states cannot sustain long-term warfare. Whatever the initial war enthusiasm, the populace grows tired of it, and eventually turns against it. If wars are to be fought, they must be fought within the political constraints of the form of social organization available in any given historical period.
On the other side, national insurgencies often possess a willingness to continue fighting virtually indefinitely (there has been insurgent conflict in Colombia for almost a half century, i.e., the entire period of post-industrialized total war), but when these groups come to realize that, despite their nationalist aspirations, they have been used as the pawns in someone else’s war (i.e., they have been serving someone else’s national aspirations), they are as likely to switch sides as not. Moreover, civil governance following long civil wars — regardless of which side in the conflict wins, if in fact any side wins — is almost always disastrous, and low-intensity warfare is essentially traded for high-intensity civil strife. Police do the killing instead of soldiers (but many of the police are former soldiers).
As warfare becomes pervasively represented throughout the culture, it represents the return (for it has occurred many times in human history) of warfare as a cultural activity, something I discussed in an early post Civilization and War as Social Technologies, i.e., war is a social technology, like civilization, that allows us to do certain things and to accomplish certain ends. For example, war is a decision procedure among nation-states who can agree upon nothing except that they will not allow a local and limited war to grow into a general and total war.
Warfare has, once again, adapted to changed conditions and thereby demonstrated its existential viability when war itself has risen to the level of an existential risk to the species and our civilization.
. . . . .
. . . . .
. . . . .
. . . . .
17 May 2015
When we hear “proxy war” we think of the Cold War, but the idea of a proxy war can be extrapolated beyond the particular circumstances of the Cold War to apply to any war fought between two or more nation-states that is not fought on the territory of the nation-states in question. Yemen has become a battleground, a proxy war, within the larger de facto war taking place within Islamic civilization (which I have touched upon in The Neurotic Misery of Islamic Civilization and The Problem of Islamic Terrorism). Yemen is, one might say (with a certain ruefulness), the “perfect” venue for a proxy war in the region. On the edge of the Arabian Peninsula, directly bordering on Saudi Arabia, the Yemeni government is not strong enough to enforce an internal security regime, and is routinely referred to as a “failed state” (cf. Yemen and Warfare in Failed States).
In a couple of posts on the developments in Yemen during the events following the Arab Spring — Definitive Ambiguity in Yemen and Saleh gives the Saudis the Slip — I discussed the murkiness of Yemeni politics. As we now see, the definitive ambiguity in Yemen has given way to civil war and proxy war. The situation in Yemen has calmed down for the moment, but it is the nature of proxy wars to pass through cycles of relative calm punctuated by flareups of spectacular violence. We should expect to see further such flareups.
Yemen has long been a primarily tribal society, and as such it has been an easy mark for outside powers, who can usually find a willing client among the many tribes. The country was split in two during the Cold War between North Yemen and South Yemen in an earlier proxy war. In more recent events, after the ouster of Ali Abdullah Saleh and the installation of Abd Rabbuh Mansur Hadi during the turbulence of the Arab Spring, Houthi rebels, Shia backed by Iran, established control over a considerable portion of the country, sending Hadi packing, and Saudi Arabia responded by bombing Yemen to push back against Houthi gains. Interestingly, former president Saleh has sided with the Houthis (cf. Eyeing return, Yemen’s ousted Saleh aids Houthis).
There is a backstory to former Yemeni President Ali Abdullah Saleh’s proclamation of support for the Houthi rebels making progress in Yemen. During the Arab Spring (seems like a long time ago now, right?), when autocrats who expected (and attempted to enforce) a life tenure in office were falling left and right, the Saudis pressured Saleh into giving up power. Saleh, apparently a wily character, tried his best to hang on, and even slipped out of Saudi Arabia after receiving medical treatment in the Kingdom. One suspects that this current ploy is (among other things) an opportunity for Saleh to poke Saudi Arabia in the eye with a stick after they were instrumental in his ouster from power.
So Yemen finds itself between a rock and a hard place, with Iran backing proxies and Saudi Arabia bombing the country. Iran, the contemporary representative of an ancient civilization derived from the west Asia cluster, Persia, possesses a dimension of prestige that extends to before the advent of Islamic civilization. This might seem a bit recondite to enter into contemporary geopolitical hardball, but it is not far below the surface. The Financial Times quoted Ali Akbar Velayati, the foreign policy adviser to Ayatollah Khamenei, as saying, “Yemen is an independent country with an old civilisation, much older than Saudi Arabia.” The subtext of this message is that Iran is an independent country with an old civilization, much older than Saudi Arabia.
It has been argued that the conflicts among Islamic nation-states are not religious conflicts per se, assimilating conflict within Islamic civilization to conflict within the nation-state paradigm, and doing so where that paradigm is at its weakest, even as groups like ISIS seek to score ideological points by flaunting conventions of the nation-state, as in their pointed abrogation of territorial boundaries (cf. ISIS and Sykes-Picot).
It has also been argued that Iran and groups like Al Qaeda and the Taliban have been for decades contesting for the title of vanguard of revolutionary Islam, with the idea being whichever can prove itself the more radical and ruthless will win the acclaim of the Islamic masses, and that this rivalry transcends the split between Sunni and Shia because it pits the Ummah against Dar al-Harb, and presumably unifies the Islamic masses against a common enemy. (One then wonders why ISIS, most recent representative of radical Islam, makes a point of mass executions of those they regard as infidels, most of whom are fellow Muslims, although not sharing the exact beliefs of ISIS.)
If both of these arguments are taken seriously, then we could safely ignore the Sunni/Shia split in Islamic civilization and proceed to predict the actions of agents in current regional conflicts in purely secular terms, without reference to Islam. At this point, we realize that this is a familiar argument and that we have seen it before. This is exactly the sort of thing that Sam Harris has criticized in his many books on the role of religion in public life: the moderate views of the many come to facilitate the radical views of the few, as the radicals are dismissed as not “really” representing the religious views of the community, therefore they can safely be ignored and treated as criminals, terrorists, insurgents, or whatever. All the while, unquestioned moderate religious beliefs are the backdrop that gives plausibility and prestige to radical views disclaimed by moderates. (In Hearts and Minds and Akhand Bharat and Ghazwa-e-hind I called this the principle of facilitating moderation.) The Sunni/Shia split is embedded in the moderate representatives of Islam, and cannot be disentangled from regional diplomacy without falsifying events on the ground.
The illusion of a secular conflict in MENA, in so far as this illusion is perpetuated, will turn diplomacy into a sideshow unrelated to the reality on the ground, and ineffectual for that reason. The most recent message from Al-Khalifah Ibrahim, Ameer Al-Mu’mineen, Al-Sheikh Al-Mujaahid Abu Bakr Al-Husayni Al-Qurashi Al-Baghdadi, “March Forth Whether Light or Heavy,” takes pains to disavow any secular interpretation of the actions of ISIS:
O Muslims, Islam was never for a day the religion of peace. Islam is the religion of war. Your Prophet (peace be upon him) was dispatched with the sword as a mercy to the creation. He was ordered with war until Allah is worshiped alone. He (peace be upon him) said to the polytheists of his people, “I came to you with slaughter.” He fought both the Arabs and non-Arabs in all their various colors. He himself left to fight and took part in dozens of battles. He never for a day grew tired of war.
In the public discourse of the US, the recent nuclear agreement with Iran was primarily about delaying the eventual Iranian acquisition of nuclear weapons in order to ameliorate the perception of an existential threat to Israel. Here in the west we have our own problems with mainstream religious moderates making excuses for religious extremists, who use their extremist credentials to establish their bona fides with the Christian masses. Thus the Israel-Iran conflict plays well in the US press, and is uncontroversial because all political parties in the US support Israel. But the recent U.S.-GCC Summit at Camp David (cf. More than Keeping Up the Facade: The U.S.-GCC Summit at Camp David by Anthony H. Cordesman) reveals that there is much more going on in the deal with Iran than is part of the public discourse of ensuring Israeli security.
The US has long-standing security relationships with Sunni Arab states, and especially with Saudi Arabia (which spends six times more on its military than does Iran). The Gulf Sunni Arab states are worried that a US-Iranian rapprochement will mean that long-frozen Iranian assets will be made available to Iran, and, with the reintegration of Iran into the global financial community, Iran will have even more money to back its regional proxies, which have long been Iran’s most effective foreign policy tool. This is a legitimate concern on the part of the Gulf Arab states (Saudi Arabia itself knows all too well the soft power it buys with the money it spreads around; Iran does the same thing with far less money, but with hard power assets thrown into the deal), but this is not a concern that plays well in the US press, and no Saudi prince is going to receive an invitation to address a joint session of Congress, especially over White House objections. Moreover, there is an ideological overlap between the Salafist extremism actively supported by Saudi Arabia and the extremism of ISIS (an overlap that goes beyond their common Sunni beliefs), and, if this were to be widely discussed in the US press, Iran would look good by comparison.
The nuclear deal with Iran is as relevant, if not more relevant, to Saudi Arabia than it is to Israel. It is widely understood that Saudi Arabia partially funded Pakistan’s nuclear weapons program with the understanding that, if Saudi Arabia wants nuclear weapons, then they will be made available. Thus Saudi Arabia has access to nuclear weapons without having to host the industrial infrastructure of the nuclear fuel cycle on its own soil — a triumph of plausible deniability. The Iranian pursuit of nuclear weapons, while primarily about regime survivability, must also be seen in the light of Saudi Arabia’s deniable nuclear capability (which can be understood as an instance of nuclear ambiguity).
. . . . .
. . . . .
. . . . .
. . . . .
14 May 2015
Recent news items have related that a couple of staples of late Soviet-era military technology may be returned to production and deployment, specifically the Mil Mi-14 (cf. Re-commissioned? Soviet nuke-capable sub-killing copter comeback slated) and the Tu-160 “Blackjack” bomber (cf. ‘Blackjack’ comeback: Russia to renew production of its most powerful strategic bomber).
In many earlier posts I have noted the surprisingly vigorous afterlife of Soviet-era military technology, as the Moskit P-270 “sunburn” anti-ship missile and the VA-111 Shkval supercavitating torpedo remain formidable weapons systems. Much of this Soviet-era weaponry can be retro-fitted with contemporary electronics, turning previously “dumb” weapons into “smart” weapons, i.e., precision guided munitions, making them even more formidable, and, as such, they can fulfill combat roles they could not previously fulfill, and in some cases they can fulfill combat roles that did not previously exist.
Russia has, in addition, continued to produce new weapons systems that are the evolutionary descendants of Soviet-era systems, as with the latest air defense system, the S-400 Triumf, recently in the news because Russia has sold or considered selling these systems to China, India, Iran, and Syria, and the newest Russian tank, the T-14 Armata, which was in the news because one stalled during a rehearsal for the Victory Day parade in Moscow. The resurrection of Soviet-era weapons systems is distinct from these systems, which have remained in continual production and are regularly updated with improvements in technology.
There is an obvious narrative to account for the return to service of Soviet-era military technology, and that obvious narrative is that Vladimir Putin wants to return Russia to the international stature it enjoyed while the Soviet Union was perceived as a superpower equal to the US. For reasons of national prestige and Russian national pride, Russia is dusting off old weapons systems and at times even returning to former methods of military patrols dating to the Cold War. The most obvious examples of this have been Russian long-range bomber patrols using Tupolev Tu-95 “Bear” bombers, which, with their turboprop engines, are virtually flying antiques. I discussed a particularly striking example of Russian air patrols in Sweden and Finland in NATO?
There is also an obvious economic rationale for the resurrection of Soviet-era weapons systems, which is that the design and testing of major weapons systems has become so expensive that many of these weapons systems have entered a “death spiral,” such that even if a nation-state could afford the R&D costs, the finished product would be too expensive to produce in sufficient numbers to be combat effective. Updating known weapons platforms can be a much more cost effective way to approach this problem than starting from scratch. Enormous savings can be realized on the testing, training, and deployment phases of a weapons system.
There is, however, much more going on here than any attempt on the part of Putin to compensate for perceived personal or national failures. The world has changed in its political structure since the post-WWII settlement that shaped the second half of the twentieth century and the immediate aftermath of the Cold War. These political (and technological) changes have changed how wars are fought. I have mentioned in many posts that the paradigm of peer-to-peer conventional engagements between mass conscript armies has effectively fallen out of contemporary history. The Cold War was based on this paradigm, with NATO and the Warsaw Pact roughly equally matched, although sufficiently different in detail that no one could predict with confidence the outcome of a conventional war in Europe, whether or how a conventional war in Europe would escalate into a nuclear war, and, again, whether a nuclear war in Europe would escalate into global mutually assured destruction.
“…war under the nuclear umbrella involved a devolution of war from total and absolute war, including the use of nuclear weapons, to conventional war, using all means short of nuclear weapons, and exercising restraint with these means in order to avoid triggering a nuclear strike. Next, war under the ‘no fly’ umbrella of imposed air superiority involved a devolution of war from everything that has happened since Douhet’s The Command of the Air was published, to a state of combat prior to Douhet’s deadly vision. War under the ‘no fly’ umbrella means war limited to ground combat, almost as though the age of air power had never been known.”
Having just finished listening to the book Level Zero Heroes: The Story of U.S. Marine Special Operations in Bala Murghab, Afghanistan I realized that expectations of warfighting in the twenty-first century have driven the development of rules of engagement (ROE) to the point of negating the overwhelming air superiority of the most technologically advanced nation-states. When each individual decision to drop a bomb in combat is run through a political infrastructure that includes individuals with mixed motives, combat is driven down to a level at which the only actions that can be approved are those taken by individual soldiers with the weapons they carry. This has the effect of giving plausible deniability to a nation-state, as individual soldiers are considered expendable and can be prosecuted if they make decisions in combat that fail to conform with the ideological justifications given for a military engagement.
Strategic weapons systems have always been primarily political. The devolution of warfare has meant that the most sophisticated weapons systems are being politicized from the top down, which has the practical consequence that even a superpower like the US engages primarily only in close-quarters small arms skirmishes. The big ticket, expensive, and technologically sophisticated weapons systems are frequently used only for a “show of force” (SOF) in order to intimidate, using the sound of a jet’s engines to obtain a temporary advantage in a combat environment in which a political decision has been made not to make full use of the air assets available.
There are several possible explanations for the devolution of warfare, and I have discussed some of them previously. One obvious explanation is that war has become too destructive, but human beings love war so much that they must find a way of limiting its destructiveness if they are going to continue enjoying it; the devolution of war thus serves the purpose of limiting war to a survivable level. I have made this argument several times, and I think it has some merit, but it is not the whole story. (I recently made a variation of this argument in Existential Threat Narratives.)
There is another approach to this problem that occurred to me as I was formulating the above thoughts: the history of warfare has exhibited a pattern of settling into a culturally determined routine (such as I described in Civilization and War as Social Technologies in regard to the ritualized violence of the Aztec “Flower Battle,” Samurai swordsmanship, and the Mandan Sun Dance), which is then interrupted when a geographically isolated region comes into contact with a peer or near-peer civilization with which it has no established customs of limiting violence to a survivable level. The example that comes to mind is the nearly continual warfare in the Italian peninsula among mercenary armies fighting for individual city-states in the late medieval period, which was, however, not very destructive. At that time Italy was mostly cut off from Europe by the Alps, but this changed when the French marched into Italy under Charles VIII with 25,000 men in 1494-1498, bringing a new and much less forgiving form of war to the Italian peninsula.
Human civilization is now effectively global, and that means that no nation-state is truly isolated from any other nation-state. We are not only aware of the activities of our neighbors, we are often (painfully) aware of events occurring in distant parts of the world, which are not so distant any more. No one today could say of any quarter of the world what Neville Chamberlain said of Czechoslovakia: “How horrible, fantastic, incredible it is that we should be digging trenches and trying on gas-masks here because of a quarrel in a far away country between people of whom we know nothing.”
Warfare has become a commons, and if we want to preserve this commons, we must manage it. Hence the world entire may evolve toward global ritualized, symbolic violence of the sort previously only seen in geographically isolated regions. There are no more geographically isolated regions, and with the planet as a single region warfare may tend to evolve in the direction in which it previously evolved in widely separated societies when all enemies were known and conflict was primarily a matter of prestige requirements. Globalization may now be expressed through the unification of warfare under a common set of customs intended to limit and control violence.
There is a sense in which this is a profoundly sad realization, for what it says about human nature, but there is another sense in which this is a hopeful realization, as it points to a human nature that implicitly recognizes an existential threat and modifies its behavior accordingly. If all violence could be transformed into something ritualized, symbolic, and sustainable, we would have a chance to devote our economy and industry toward the long term survivability of our species and our planet with some confidence that destructiveness will be limited from here on out.
. . . . .
. . . . .
. . . . .
. . . . .
13 May 2015
A story on the BBC, US Christians numbers ‘decline sharply’, poll finds, made me aware of a new poll by the Pew Research Center, reported in America’s Changing Religious Landscape. It is unusual for such a poll result to be reported so bluntly. Some time ago in Appearance and Reality in Demographics I noted that the WIN/GIP “Religiosity and Atheism Index” poll that I discussed in American Religious Individualism had been reported under the headline WIN-Gallup International ‘Religiosity and Atheism Index’ reveals atheists are a small minority in the early years of 21st century, which seems to have been purposefully contrived to give the reader the wrong impression of what the poll revealed. This newest headline is another matter entirely. It is becoming more difficult to conceal the fact that traditional religious belief is on the decline.
While religious observance appears to be one of the most pervasive features of civilization from its inception, the example of Europe demonstrates that religious belief can all but vanish once conditions change. The US remains an anomaly as an industrialized nation-state with unusually high popular identification with religious faith, but the US may yet experience the kind of catastrophic collapse of religious observance that occurred in Europe from the middle of the twentieth century onward. In other words, secularization may yet come to America. Of course, if widespread secularism comes to the US, it will not play out as it played out in Europe, because these societies are so profoundly different.
The secularization thesis was widely believed in the middle of the twentieth century (when secularization was transforming European society), and then was widely abandoned at the end of the twentieth century as surging religious fundamentalism and religiously-inspired terrorism grabbed headlines and appeared to some (as strange as this may sound) as a sign of religious vitality. I discussed the secularization thesis in Secularization (which I characterized in terms of confirmation and disconfirmation in history) and more recently in The Existential Precarity of Civilization.
It is important to understand the religious backlash against modernity that became apparent in the later twentieth century in the context of traditionalism, as the role of a narrowly conceived religious belief is often made central in the debates over secularization, but this can be deceptive. In this context, “traditionalism” means any ideology or belief system dating from before the industrial revolution (which marked the advent of a new form of civilization), and so is a much wider concept than religion simpliciter, which is the most common exemplar of traditionalism.
Beliefs and practices associated with the pre-industrial form of our culture of origin persisted for ten thousand years (from the origins of civilization to the industrial revolution), and so they have left an enormous cultural legacy and remain very powerful elements of the human imagination. Almost every famous work of art that is a cultural point of reference for westerners (think of the Nike of Samothrace, Michelangelo’s David, or the Mona Lisa) dates to this pre-industrial period. The industrial revolution meant the dissolution of these ancient institutions and practices, sometimes within the life span of a single individual. The entire economic basis of civilization changed.
Even though civilization was forced to change, the cultural legacy of the past remains, and its hold upon the human mind remains. Although we live in modern industrialized societies, in which we do not grow our own food and often live alone in cities rather than in multi-generational households, we continue to honor traditions that have become disconnected from our daily lives. Eventually the disconnect leads to cognitive dissonance as traditional attitudes come face-to-face with modern realities. There are two ways to attempt to address the cognitive dissonance: 1) a return to traditionalism, or 2) the abandonment of traditionalism.
It is impossible to return to a traditional (pre-industrial) way of life in an industrialized nation-state because you can’t just start farming in the middle of a city or create a multi-generational household out of thin air. So the return to traditionalism simply means the aggressive assertion of traditionalist claims, however empty these claims are. The most familiar form that the aggressive assertion of traditional claims can take is that of religious fundamentalism. This is not the only form of traditionalism, but it has become symbolic of traditionalism, and, as Pippa Norris and Ronald Inglehart have noted in their paper Are high levels of existential security conducive to secularization? A response to our critics, “…residual and symbolic elements often remain, such as formal adherence to religious identities and beliefs, even when their substantive meaning has faded away.”
If industrial-technological civilization endures (i.e., if it does not succumb to existential risk), all traditionalism is doomed to extinction. However, that does not mean that religion is doomed to extinction. Although I have defended the secularization hypothesis, secularization is only a stage in the transition from agrarian-ecclesiastical civilization to industrial-technological civilization. The idea that the extinction of tradition (given the gradual lapse of a now-defunct form of civilization) is the same as the extinction of religion is only possible through a conflation of traditionalism and religion. This conflation is as inimical to the understanding of history as is the misinterpretation (at times a willful misinterpretation) of the secularization hypothesis.
Religion can and often does take non-traditional forms, but the (historically recent) experience of agrarian-ecclesiastical civilization, in which all social organization was subordinate to theological principles, has distorted our perception of the role of religion in civilization, and has led to the conflation of religion and tradition.
In Europe Returns to its Roots I discussed the tentative return to pre-Christian forms of religion in Europe in the wake of secularization. Such cultural movements will, of course, be influenced by subsequent developments of civilization. We can no more return to Neolithic religious practices than we can return to traditionalism now that traditional agrarian ways of life have disappeared; whatever religious practices there are must be consonant with the life of the people.
On my other blog I produced a series of posts concerned with the relation of religion to civilization, extending from the Paleolithic past into the future. These posts include:
These were a mere sketch, of course, and one might well invest an entire lifetime in attempting to describe the relation between civilization and religion. The take-away lesson is that religion is a perennial aspect of human experience, and so it will be a perennial part of civilization, but it is a mistake to conflate religion and traditionalism. After the extinction of traditionalism, once terrestrial industrialization achieves totality (not only eliminating traditional ways of life, but also greatly reducing existential precarity), religion will remain, but it will not be the religions of the Axial Age that defined agrarian-ecclesiastical civilization.
When secularization comes to America, then, we should be surprised neither by the rear-guard action of traditionalism to defend the claims of a now-vanished civilization, nor by the inevitable emergence and rise of religious beliefs and practices independent of traditionalism. Expect popular accounts to conflate the two, but a developmental understanding of the relationship of civilization and religion reveals how starkly different they are.
. . . . .
. . . . .
. . . . .
. . . . .
8 May 2015
In Seventy Years, posted on 01 September 2009, I acknowledged the seventieth anniversary of the beginning of open armed conflict that began the Second World War. In that post I wrote:
During the middle of the twentieth century civilization experienced a convulsion of apocalyptic proportions. The sky was filled with airplanes, the sea was filled with ships above and below, great cities were destroyed in a single night, entire populations were displaced, and millions upon millions of people were killed.
Now, more than five years later, it is time to commemorate the termination of that apocalyptic conflict, insofar as the war came to an end in Europe on VE Day (Victory in Europe Day), Tuesday, 08 May 1945. A week earlier Hitler had committed suicide in his Berlin bunker. Most German cities had been reduced to rubble long before. In the week between Hitler’s suicide and VE Day, Joseph Goebbels, appointed Reich Chancellor by Hitler just before his suicide, committed suicide with his wife after killing their six children; Tito had triumphed in what would become Yugoslavia; the Soviets took Berlin; Rangoon was liberated from the Japanese; and Mauthausen concentration camp was liberated.
Nazi Germany formally surrendered unconditionally at the Western Allied Headquarters in Rheims, France, on Monday 07 May 1945, but the ceasefire took effect at one minute past 11:00 p.m. on Tuesday 08 May 1945. Reich President Karl Dönitz ordered the surrender, and General Alfred Jodl signed for Germany. Later at Nuremberg, where both were tried as major war criminals, Jodl was sentenced to death; Dönitz spent ten years in Spandau prison.
As though a portent of what was to come, on the same day, Tuesday, 08 May 1945, the Sétif massacre occurred, when a victory parade turned ugly. French police attempted to seize anti-colonial banners held by the crowd of about 5,000 Muslim marchers in Sétif, and the scuffle turned into a firefight. (Similar events occurred in Guelma and Kherrata.) In the ensuing days, each side took reprisals on the other. Thousands died (how many thousands is still in dispute).
The French held out in Algeria and Indochina even as the British surrendered control of India, the Jewel in the Crown, in 1947. Colonial conflicts and the consequent de-colonization struggles became a proxy battleground of the Cold War, played out in the lives of impoverished peoples in Africa, Asia, and South America. Struggles for national liberation were transmuted into ideological conflicts in which Russia and China supplied arms to those who would self-identify as communists, and the US and Europe supplied arms to those who would identify as anti-communists. It is arguable that the legacy of this struggle has shaped the contemporary world more profoundly than the apocalyptic proportions of the Second World War, which, considered only in terms of open armed conflict, endured for less than six years.
The end of a catastrophic conventional war, in which regular armies numbering in the millions of soldiers, airmen, and sailors met in pitched battle on the ground, in the air, and on the sea, ending in definitive defeat and unconditional surrender for the Axis powers, marked the beginning of protracted, seemingly interminable unconventional conflicts between small numbers of irregular combatants who rarely met in battle, and whose wars almost never ended in definitive defeat or surrender. Thus the end of the Second World War was as much of a turning point as the war itself.
It remains an open question at the present time whether our planet will ever return to the WWII paradigm of armed conflict, in which the planet entire is convulsed by a short, sharp, and definitive war (and, if so, whether anyone would survive), or whether the development of civilization has permanently rendered such conflicts antiquated. War, like civilization, may not disappear, but it does evolve, and the existential viability of war (if one can speak of such) is predicated upon the possibility of the essential nature of warfare changing.
It is possible that we have witnessed such a change with the change in armed conflict that followed the end of the Second World War. However, a further change in the essential nature of the civilizations engaging in warfare would drive further changes in the essential nature of war. This could take the form of returning to an earlier paradigm of armed conflict, or issuing in unprecedented forms of armed conflict. As I pointed out some years ago, civilization and war are locked in a co-evolutionary relationship.
. . . . .
. . . . .
. . . . .
. . . . .
2 May 2015
An introduction to brinkmanship
The emerging consensus in the financial press is that Greece must default on its debt obligations. The question is no longer, “Will Greece default?” but rather, “When will Greece default?” After a pause, another question comes up: “How exactly will Greece default?” Will a Greek default mean Greece leaves the Eurozone (or, rather, the EMU, the European Monetary Union), or will Greece default while a way is found to keep the country in the EMU? The more these questions are followed by further questions, the more obvious it becomes that those asking them are seeking justifications and rationalizations to retain Greece within the Eurozone even as it defaults.
During the last episode of Greek default brinkmanship it became increasingly obvious that the powers that be would find a way to avoid Greek default and exit from the EMU (known by the ugly coinage “Grexit”). How do we know this? There was no significant shorting of the Euro in currency markets. Greek bonds took a hit, but they didn’t collapse. In the final analysis, no one really believed that anything dire would happen. Financial markets remained calm. Now that we are once again approaching the brink, and the drumbeat in the financial press is that Greece must default this time, again financial markets are mostly calm. The Euro is not plunging in value (the Euro is lower in value, but not at historic lows), and Greek bonds recently rallied on the assumption that the sidelining of Yanis Varoufakis would make negotiations easier. It seems, once again, that the conventional wisdom is that the worst will be avoided. In other words, a way will be found for Greece to default on its debt and to remain within the EMU so as to create the fewest waves in the markets.
There are at least two interesting things to notice about this process. The first is how far an institution (or institutions) can be pushed in a desired direction in order to obtain a desired result. The Eurozone is today a rather different entity than when the Eurozone treaties were drafted in the late 1990s and the Eurozone was only imagined. Today the Eurozone is at a crossroads, but as important as the crossroads is the long road behind it — a road of repeated and flagrant violations of the Maastricht criteria that were to govern the Eurozone, in which no nation-state has been held to account for its violations. In this context, the further violations required to keep Greece in the EMU after a default do not seem particularly outrageous, though they would have seemed so to those who drafted the Maastricht criteria.
The “convergence” that didn’t happen
Here a little history is in order, and not the history that you are likely to get from those tying themselves in knots to try to find ways not to put the Eurozone asunder. The conditions for accession to the EMU (also known as “convergence criteria”) are known as the “Maastricht criteria” (cf. Who can join and when?):
● Price stability, to show inflation is controlled;
● Soundness and sustainability of public finances, through limits on government borrowing and national debt to avoid excessive deficit;
● Exchange-rate stability, through participation in the Exchange Rate Mechanism (ERM II) for at least two years without strong deviations from the ERM II central rate;
● Long-term interest rates, to assess the durability of the convergence achieved by fulfilling the other criteria.
Of course, these are statements of general principle and not quantifiable economic measures, but the Eurozone has also stipulated quantifiable economic measures, and there is a lot of fine print involved in these stipulations.
It is now known and generally acknowledged that Greece did not meet the convergence criteria when it was admitted into the EMU. It doesn’t take much research to find the documentation on this, but you do have to have a memory that goes back more than ten years. Also cf. The politics of the Maastricht convergence criteria by Paul De Grauwe.
Plausible deniability for the Eurozone
To understand why Greece failed to meet accession criteria but was admitted anyway, one must enter into the mindset of those laying the groundwork for the EMU. The Eurozone’s monetary union was viewed as a shoo-in for success, and getting in on the ground floor was seen as something of a coup for a marginal economy like Greece, which had hitched its wagon to a star. The people of Greece had only to sit back and watch their economy soar into the stratosphere, pulled along by the German and French economies. By allowing Greece into the EMU with a wink and a nod, the EU has plausible deniability when it comes to Greek entry into the Eurozone — their papers were in order, if falsified — but no one at the time really believed that Greece met the Maastricht criteria.
In all fairness, while the Eurozone did not enforce its own accession conditions for the entrance of Greece into the EMU, other nation-states within the Eurozone have repeatedly and routinely failed to meet Eurozone convergence criteria, and they have not been held to account. No consequences follow from running too large a budget deficit or allowing inflation to get out of hand. The individual economies within the Eurozone appear to enjoy complete impunity in regard to the convergence criteria. This is how the Eurozone has arrived at its present position, which is that of trying to find excuses to allow Greece to default while remaining within the institutional structure of the Eurozone and the EMU.
Cognitive bias as a guide to political economy
To return to the two things that I said above deserve to be noted about the present situation: the second thing to notice is that, however far an institution (or institutions) can be pushed, there eventually comes a breaking point (the straw that breaks the camel’s back, as it were), and the real brinkmanship going on is not whether Greece will default or whether Greece will leave the Eurozone, but whether the Eurozone will push its institutions to the breaking point. I want to pause over this ancient problem of brinkmanship and breaking points, because recent scholarship can shed light on it in an unexpected way.
A good portion of Daniel Kahneman’s book about cognitive biases, Thinking, Fast and Slow (especially Part I, section 9, “Answering an Easier Question”), is devoted to biases in which we substitute for a difficult question an easier question that we know how to answer and to which we can give a definitive answer. I don’t think we can stress strongly enough how important (and how under-appreciated) this insight is in relation to economics and politics. One has only to read the reasoning of traders in volatile commodities, and review their elaborate justifications for investments that miss the point of the biggest questions, to see how profoundly this affects our world today. Because it is relatively easy to talk about quantitative measures of the economy, and what these have predicted in the past, but very difficult to say exactly when public discontent is rising to the point that an unprecedented disruption (or a revolution) is about to occur, it is not surprising that economists and politicians alike prefer to answer the easy question; sometimes they even convince themselves that the easy question is the only question.
The theology of the insurance adjustor
Not to worry. Insurance companies are ready for such unprecedented events. I have often reflected on the theology of the insurance adjustor who must adjudicate between events anticipated by the language of a policy and those events not anticipated or predicted, and so come under the all-embracing umbrella of “Acts of God.” Wikipedia says that, “An act of God is a legal term for events outside human control, such as sudden natural disasters, for which no one can be held responsible.” This term of art from the insurance industry can paper over a multitude of sins and cognitive biases: deal with the easy problem you’ve substituted for the difficult problem, and then when the difficult problem asserts itself, call it an “Act of God” (or the political equivalent thereof).
If we are honest, we must admit that we do not know what will become of the Eurozone and the EMU. Trying to predict the future of an enterprise so large and so complex is like trying to predict the weather: we can say pretty well what will happen tomorrow, within certain parameters, but the farther we go into the future the more our models for predicting the future diverge, until at some point different models are making inconsistent if not antithetical predictions. This is the essence of a chaotic system, and financial markets and political communities are chaotic systems.
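The divergence of forecasts in a chaotic system can be illustrated with a minimal sketch. The logistic map below is a standard toy model of chaotic dynamics; the parameter value and starting points are illustrative assumptions, not a model of any actual market or polity:

```python
# Sensitive dependence on initial conditions in the logistic map,
# x -> r * x * (1 - x). The parameter r = 4.0 and the starting
# values are illustrative choices that put the map in its chaotic
# regime; they do not model any real economic quantity.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning the whole path."""
    path = [x0]
    for _ in range(steps):
        path.append(r * path[-1] * (1.0 - path[-1]))
    return path

# Two "models" that agree to one part in a million at the outset.
a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)

# The first step agrees closely: "tomorrow's forecast" is reliable.
print(abs(a[1] - b[1]))

# But within a few dozen steps the two trajectories bear no relation
# to one another: long-range prediction has broken down entirely.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The same point holds for any model of a chaotic system: two forecasts that agree to six decimal places today can issue antithetical predictions a short way into the future.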
A political and not an economic union
The Eurozone is not fundamentally economic, but political. It is a political project masquerading as an economic project, and while diplomacy often requires masquerades, when the music stops and the ball comes to an end, the masks must come off. Because the Eurozone is a political project, the glosses on its presumed political meaning are legion. I have read accounts in reputable media claiming that it was the intention of the Eurozone that, once economic unification had started, member states would lurch from crisis to crisis, and these crises would force member states to surrender political sovereignty, thus slowly transforming the Eurozone into a political union — perhaps the political union it should have been from its inception. I wouldn’t go quite this far, but such an account at least understands that only political union would make possible the wealth transfers within the Eurozone that would make the EMU workable in the longer term.
Since there is no clear idea of what the Eurozone stands for, one cannot convict the Eurozone of hypocrisy or contradiction. And there is no question that the Eurozone can find some way for Greece to default and to remain within the Eurozone, but any such arrangement will have to accept that Greece will in no sense be an equal member of the Eurozone and the EMU. What, then, will Greece be?
What will become of Greece?
Quite some time ago I noted the possibility of “Euroization,” that is to say, the adoption of the Euro as a currency by a nation-state (or other political entity) not part of the EU, much less the EMU. There is precedent for this in dollarization — the use of the US dollar outside US territories. The Ecuadorian economy dollarized, and the Argentinian economy is partially dollarized, with real estate purchases traditionally transacted in US dollars and its many dollar-denominated financial instruments.
If Greece defaults but remains within the EMU, it will become a de facto “Euroized” economy that employs the Euro as its currency but has little real participation in the European economy. The Greek economy is not large enough, even in its presumed implosion, to seriously threaten the economies of the other EMU nation-states. If Greece defaults and exits the EMU, both Greece and the remaining nation-states of the EMU will pass through a painful adjustment, but Greece would probably be better off than if it languished in the perpetual twilight of a Euroized poor cousin to the EMU.
Some consequences of a Greek exit from the EMU are quite easy to guess. Tourism has been a major component of the Greek economy for some decades, and it is likely that most of the upmarket hotels patronized by foreign visitors will price their rooms in dollars or Euros, so that a major sector of the Greek economy will take in hard, convertible foreign currencies. This alone will keep a substantial portion of the Greek economy in operation, even if no one wants to think of their country as nothing but a tourist destination. This is not at all unusual. Many hotels I have stayed at in South America price their rooms in dollars, and some will only take dollars. I especially noticed this in Argentina when I was there in 2010. Even as the Argentine economy stumbles under mismanagement, those who have a hotel that attracts foreign guests capable of paying in hard convertible currencies can do quite well in an economy desperate for dollars. But while the Greek economy can subsist, after a fashion, on tourism, agriculture was always the strength of the Argentinian economy, where tourism does not represent a substantial contribution to the overall economy.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .