29 June 2014
A Century of Industrialized Warfare:
Headlines around the World
The day after Gavrilo Princip assassinated the Austro-Hungarian Archduke Franz Ferdinand and his wife Sophie in Sarajevo, the event was headline news all over the world, reaching all the way to Klamath Falls, Oregon, where The Evening Herald, a local newspaper published from 1906 to 1942, boldly proclaimed that the assassination might lead to war. They were right — more right than they knew.
The role of telecommunications and the media in the first global industrialized war was central, and this was revealed hard on the heels of the role of terrorism in the assassination itself. In our own time, breathless mass-media reporting of terrorism remains central to the 24/7 news cycle, shaping both public policy and public opinion, and public opinion, in mass societies, plays a driving role in events. Mass man and mass media feed off each other and escalate events, sometimes in destructive ways.
In an earlier age, it might have taken weeks for news to travel around Europe, and months for it to make it around the world. But with the technologies of newsprint (invented by Charles Fenerty in 1844), Linotype machines (invented by Ottmar Mergenthaler in 1884, the same year the Maxim gun was invented), the telegraph (first demonstrated by Samuel F. B. Morse in 1844, the same year newsprint was invented), transoceanic telegraph cables (the first completed in 1858, which failed shortly thereafter, though after several attempts regular transatlantic telegraphy was established in 1866), and the wireless telegraph (patented by Marconi in 1896, but preceded by a long train of antecedent science and technology), a nearly instantaneous global communications network was established, and it has been continually improved from that time to the present day.
With a global communications network in place, news of the assassination of Franz Ferdinand was known around the world within hours of its occurrence, and global industrial-technological civilization responded just as quickly with headlines and official statements. Belgrade wired its official condolences for the killing to Vienna on the 29th, in England King George V decreed seven days of mourning, and then in Russia Czar Nicholas II, in a kind of grief one-upmanship, ordered twelve days of mourning.
Serbian Prime Minister Nikola Pašić publicly renounced the Black Hand terrorist organization that was behind the assassination, even while Milan Ciganovich, a Serbian state railway employee who was also spying on the Black Hand for Pašić, was smuggled out of Belgrade by Pašić and sent to Montenegro. Despite official condolences wired to Vienna, when several days later the Austro-Hungarian government asked whether the Serbian government had opened a judicial inquiry into the assassination, the response was that, “nothing has been done so far and the matter is of no concern to the Serbian government.”
. . . . .
. . . . .
A Century of Industrialized Warfare
2. Headlines around the World
. . . . .
. . . . .
. . . . .
. . . . .
28 June 2014
A Century of Industrialized Warfare:
Assassination in Sarajevo
There is something horrifically appropriate in the fact that the trigger for the First World War exactly one hundred years ago today was an act of terrorism. By the end of the twentieth century terrorism would again be a trigger for global events, but in the meantime the largest wars in planetary history were fought as symmetrical contests between peer or near-peer nation-states, and then the non-war, non-peace of the Cold War involved an ongoing contest between two power blocs that dominated the international system. Terrorism kicked off global industrialized war, and now that peer-to-peer global conflict has all but disappeared, terrorism is once again a power in the world, after being submerged by much larger and more systematic forms of violence. Terrorism has come into its own again, so that the assassination in Sarajevo appears not only as the momentous trigger of the first global industrialized war, but also as a foreshadowing of the world that would follow the long sequence of global wars of the twentieth century. We could, with some justification, call the twentieth century the Second Hundred Years’ War.
Before the First World War there had been smaller, regional industrialized wars. The American Civil War, with its use of rifled guns and artillery, the Gatling gun, and ironclads was an early glimpse of what was to come. The War of the Pacific (1879-1883) was another prescient conflict, as it may be thought of as the first “resource” war — it was also called the “Saltpetre War,” and demonstrated that nation-states would go to war to secure essential resources for their industries. Most demonstrative of all was the Russo-Japanese War of 1904-1905. Its use of machine guns (the Maxim gun was invented in 1884) and the Battle of Tsushima between steel battleships, in which wireless telegraphy played an important role, foreshadowed the kind of warfare that would typify the twentieth century. (American President Teddy Roosevelt received the Nobel Peace Prize for negotiating the Treaty of Portsmouth, which brought the war to an end.)
Despite these earlier intimations of industrialized warfare, the First World War was unprecedented in scope, scale, and catastrophic consequences. Millions died; empires fell; and a new way of war became inescapable. Any belligerent who persisted with outdated weaponry or tactics was not merely defeated in battle, but his social and political institutions were likely to be annihilated. Imperial Germany, Tsarist Russia, and the Austro-Hungarian Empire were all annihilated in a war they made possible. Global colonial empires were activated both to open new and distant fronts and to bring colonial troops to Europe to witness the civilized Europeans at their most savage. After a long period of relative stability, the world was rapidly turned upside down, and in four years’ time the decisive break with the past had been made. Everyone knew that there was no going back. How could the assassination of one marginal man by another marginal man in a marginal provincial city be the trigger for the first global industrialized war?
In a relatively stable international system, wars almost by definition erupt only on the margins of the most advanced political institutions, and the more stable these institutions, and the longer lived, the further outward the margins are pushed, until the margins of the most advanced political powers are pushed into a region that has never benefited from the stability. The Balkans, always on the periphery of Europe but never one of the great centers of European civilization (at least, not since Periclean Athens), met this condition almost perfectly. Still largely rural, poor, and undeveloped, the peoples of the Balkans were nevertheless exposed to the most advanced ideas of Europe, and nationalism was one of the most powerful of these ideas. The idea of nationalism, and of a nation-state as the political expression of nationalism, was inflammatory in the ethnic mixture of the Balkans. The quotes that can be cited in relation to the Balkans are all so perfect that it is difficult to choose among them. Otto von Bismarck predicted, “One day the great European War will come out of some damned foolish thing in the Balkans.” And, in explanation of why this should be so, Winston Churchill is supposed to have said, “The Balkans produce more history than they can consume.” Sarajevo was, in a sense, at the center of this periphery, and we should, like Bismarck, expect an incident in such a place to be the source of instability in an otherwise stable international system.
Aged Austro-Hungarian emperor Franz Joseph had already lost his son and heir to a spectacular and scandalous suicide, and had to turn to the unpromising Franz Ferdinand as his heir. Though not the first choice in the succession to the throne of Austria-Hungary, the Archduke Franz Ferdinand took to the role as well as anyone might be expected to. Although often described as something of a dullard (similar things were said of the last Russian Tsar, also soon to be shot), Franz Ferdinand was in fact a reformer, and it is impossible to imagine how different the twentieth century would have been if there had been no First World War, if Franz Ferdinand had ascended to the Dual Monarchy, and had been in a position to put his reforms into practice, dragging the reluctant Hapsburg Empire into the modern world without requiring the sacrifice of millions (starting with the heir to the throne himself) for this to happen. Precisely because Franz Ferdinand was in a position to influence the fate of the Hapsburg Empire, a strike at the Archduke was an existential threat to everything that empire represented — as it turned out, a successful existential threat, which, by striking the monarchy itself, decapitated the empire. Thus while authors have competed with each other to describe Franz Ferdinand in unflattering terms, he was the crucial man in the Hapsburg Empire, and not the marginal figure he is sometimes made out to be.
Gavrilo Princip was a committed terrorist, i.e., a man who was prepared to kill and to die for ideological reasons. In other words, Gavrilo Princip was the prototype, the progenitor, and the model of a type of figure that would become increasingly common in the twentieth century, and who is still common in our time. Ideologically motivated terrorism requires an inscrutable synthesis of individualism and self-sacrifice that could not have been produced before the industrial revolution, and the conditions for producing the type in any number only came to full fruition in the twentieth century, with its mass societies of millions and its rising living standards that encouraged even the lowliest to think that they could leave their mark upon history. History was no longer beyond the reach of the ordinary man: history had become personal. A similar sentiment was expressed by a very different spirit, Rupert Brooke, in his poem Peace: “Now, God be thanked Who has matched us with His hour.” Sarajevo, Franz Ferdinand, and Gavrilo Princip were all together matched to their hour, and the confluence of these three meant that the global industrial-technological civilization taking shape at that time should be crucially shaped by global industrialized warfare.
. . . . .
. . . . .
A Century of Industrialized Warfare
1. Assassination in Sarajevo
. . . . .
. . . . .
. . . . .
. . . . .
26 June 2014
Once upon a time it was believed that the world was eternal and unchanging. The inconvenient truth of life and death on Earth was accommodated by a distinction between the sublunary and the superlunary: in Ptolemaic astronomy, the “sublunary” was everything in the cosmos below the sphere of the moon, and this was subject to time and change and suffering; the superlunary was everything in the cosmos beyond the sphere of the moon, which was eternal, perfect, unchanging, and permanent. Thus it was a major problem when Galileo turned his telescope on the moon and saw craters, and when he looked at the sun he saw spots. This wasn’t supposed to happen.
As a result of Galileo and the scientific revolution, we are still re-thinking the world, and each time we think that we have the world caught in a net of concepts, it escapes once again. Up until 1999 it was widely believed that the universe was expanding at a decreasing rate, and the only question was whether there was enough mass for this expansion to eventually grind to a halt, and then perhaps the universe would contract again, or if the universe would just keep coasting along in its expansion. Now it seems that the expansion of the universe is speeding up, and it is widely thought that, in a very early stage of the universe’s existence, it underwent an extremely rapid phase of expansion (called inflation).
When the scientific revolution at long last came to biology, Darwin and evolution and natural selection exploded in the scientific imagination, and suddenly a human history that had seemed neat and compact and easily circumscribed became very old, large, and messy. We recognize today that all life on the planet evolved, and that in the short interval of human life, the human mind has evolved, language has evolved, social institutions have evolved, civilization has evolved, and technology has evolved perhaps more rapidly than anything else.
The evolution of human social institutions has meant the evolution of human meanings, values, and purposes: precisely those aspects of human life that were once invested with permanency and unchangeableness in an earlier paradigm of human knowledge. Human knowledge evolves also. Science as the systematic pursuit of knowledge (since the scientific revolution, and especially since the advent of industrial-technological civilization, which is driven forward by science) has pushed the evolution of human knowledge beyond all precedent and expectation. As I recently noted in The Moral Truth of Science, science is a method and not a body of knowledge, and even the method itself changes as it is refined over time and adapted to different fields of study.
Slowly, painfully slowly, we are becoming accustomed to an evolving world in which all things are subject to change. The process does not necessarily get easier, though one might easily suppose we get numbed by change. In fact, when all our previous assumptions are forced to huddle down in a single relict of archaic thought, it can be extraordinarily difficult to get past this last stubborn knot of human thought that has attached itself passionately to the past.
I think that it will be like this with our moral ideas, which are likely to be sheltered for some time to come, and in so far as they are sheltered, they will conceal more prejudices than we would like to admit. Even those among us who are considered progressive, if not radical, can take a position that essentially protects our moral prejudices of the past. John Stuart Mill was among the most reasonable of men, and it is difficult to disagree with his claims. While in his day utilitarianism was considered radical by some, now Mill is understood to be an early proponent of the political liberalism that is taken for granted today. But the quasi-logical form that Mill gave to his ultimate moral assumptions is entirely consistent with the fideism of radical Ockhamists or Kierkegaard.
Here is a classic passage from a classic work by Mill:
Questions of ultimate ends are not amenable to direct proof. Whatever can be proved to be good, must be so by being shown to be a means to something admitted to be good without proof. The medical art is proved to be good by its conducing to health; but how is it possible to prove that health is good? The art of music is good, for the reason, among others, that it produces pleasure; but what proof is it possible to give that pleasure is good? If, then, it is asserted that there is a comprehensive formula, including all things which are in themselves good, and that whatever else is good, is not so as an end, but as a mean, the formula may be accepted or rejected, but is not a subject of what is commonly understood by proof. We are not, however, to infer that its acceptance or rejection must depend on blind impulse, or arbitrary choice. There is a larger meaning of the word proof, in which this question is as amenable to it as any other of the disputed questions of philosophy. The subject is within the cognisance of the rational faculty; and neither does that faculty deal with it solely in the way of intuition. Considerations may be presented capable of determining the intellect either to give or withhold its assent to the doctrine; and this is equivalent to proof.
John Stuart Mill, Utilitarianism, Chapter 1
Formulating his moral thought in the context of proof, Mill appeals to the logical tradition of western philosophy, going back to Aristotle. We can already find this dilemma of logical thought explicitly formulated in classical antiquity. Commenting on a passage from Aristotle’s Physics (193a3) that reads: “…to try to prove the obvious from the unobvious is the mark of a man incapable of distinguishing what is self-evident and what is not,” Simplicius wrote:
“…the words ‘the mark of a man incapable of distinguishing between what is self-evident and what is not’ typify the man who is anxious to prove by means of other things that nature, which is self-evident, is not self-evident. And it is even worse if they are to be proved by means of what is less knowable, which is what must happen in the case of things that are all too obvious. The man who wants to employ proof for everything eventually destroys proof. For if the evident must be the starting point of proof, the man who thinks that the evident needs proof no longer agrees that anything is evident, nor does he leave any basis of proof, and so he leaves no proof either.”
Simplicius: On Aristotle Physics 2, translated by Barrie Fleet, London and New York: Bloomsbury Academic, 1997, p. 25
The axiological equivalent of self-evidence is intrinsic value, that is to say, self-value. The tradition of intrinsic value in English moral thought arguably reaches its apogee in G. E. Moore’s Principia Ethica, in which intrinsic value is a theme that occurs throughout the work:
“We must know both what degree of intrinsic value different things have, and how these different things may be obtained. But the vast majority of questions which have actually been discussed in Ethics—all practical questions, indeed—involve this double knowledge; and they have been discussed without any clear separation of the two distinct questions involved. A great part of the vast disagreements prevalent in Ethics is to be attributed to this failure in analysis. By the use of conceptions which involve both that of intrinsic value and that of causal relation, as if they involved intrinsic value only, two different errors have been rendered almost universal. Either it is assumed that nothing has intrinsic value which is not possible, or else it is assumed that what is necessary must have intrinsic value. Hence the primary and peculiar business of Ethics, the determination of what things have intrinsic value and in what degrees, has received no adequate treatment at all.”
G. E. Moore, Principia Ethica, section 17
The English, for the most part, had little affinity for Bergson, but it was Bergson who opened up moral philosophy to its temporal reality embedded in changing human experience. In several posts — Epistemic Space: Mapping Time and Object Disoriented Axiology among them — I have discussed Bertrand Russell’s antipathy to Bergson, even though Russell himself was one of the most powerful and passionate advocates of science, and it has been science that has forced us to put aside our equilibrium assumptions and to engage with a dynamic world that forces change upon us even if we would deny it.
The world as we understand it today, from the smallest quantum fluctuations to the evolution of the universe entire, is a dynamic world in which change is the only constant. In such a world, which our traditional eschatologies have invested with eternal moral significance, we would be better served by also abandoning equilibrium assumptions in ethics. There are trivial ways in which this occurs, as when we recognize that different objects have different moral values at different times; there are also more radical ways to think of a morally dynamic world, such as a world in which moral principles themselves must change.
In Bostrom’s qualitative categories of risk, the risks of greatest scope are identified as trans-generational and pan-generational (with the possibility of risks of cosmic scope also noted). Both the idea of the trans-generational and that of the pan-generational are essentially categories of intrinsic value over time. When existential risks of smaller scope are considered, they are limited to personal, local, or global circumstances. These smaller, local risks, when understood in contradistinction to the trans-generational and the pan-generational, can also be seen as instances of intrinsic value over time, though over shorter periods appropriate to personal time, social time, or global time.
While it is gratifying to see this recognition of intrinsic value over time, we can go farther by considering the natural history of value. The simple and fundamental lesson of the natural history of value is that value changes over time, and that particular objects may be the bearers of intrinsic value for a temporary period of time, taking on this value and then ultimately surrendering it. Moreover, intrinsic value itself changes over time, as do the forms in which it is manifested and embodied.
When Sartre gave his famous lecture “Existentialism is a Humanism,” he took the bull by the horns and faced straight on the claims that had been made that existentialism was a gloomy philosophy of despair, quietism, and pessimism. Of his critics Sartre said, “what is annoying them is not so much our pessimism, but, much more likely, our optimism. For at bottom, what is alarming in the doctrine that I am about to try to explain to you is — is it not? — that it confronts man with a possibility of choice.” For Sartre, existentialism is, at bottom, an optimistic philosophy because it affirms the reality of choice and human agency. And so, too, the recognition of the natural history of value — that value is not a fixed and unchanging feature of the world — is an optimistic doctrine preferable to any and all false hopes.
Questioning ancient moral prejudices, as Sartre often did, almost always results in claims on behalf of traditionalists that the sky is falling, and that by opening Pandora’s Box we have unleashed evils into the world that cannot be contained. But to observe that intrinsic value changes over time is no counsel of despair, as when Bertrand Russell (as I recently quoted in Developing an Existential Perspective) said that, “…only on the firm foundation of unyielding despair, can the soul’s habitation henceforth be safely built.” That intrinsic value is subject to change means that the intrinsic value of the world may increase or decrease, and if it may increase, we ourselves may be the agents of this change.
. . . . .
. . . . .
. . . . .
18 June 2014
The recent military successes of ISIS (Islamic State of Iraq and al-Shams — ad-Dawlat al-Islāmiyya fī’l-‘Irāq wa’sh-Shām — also known as ISIL, Islamic State of Iraq and the Levant) in sweeping aside the Iraqi army and taking control of Mosul, Tikrit, and Falluja, has been a surprise. Iraq had fallen out of the news cycle, which has, of late, been dominated by Putin’s Russia and the turmoil in Ukraine. Now the cameras and reporters are heading back to Iraq to try to discover what went wrong, and in so doing they are also going back to school to try to understand why one of the rallying cries of ISIS is the effective nullification of the Sykes–Picot Agreement.
Here is one statement from an ISIS sympathizer that I managed to find, after hearing it quoted in another source (which latter I have since not been able to relocate):
“In the name of God, the beneficent, the merciful, this is one of the destruction (mechanism, devices) of the Safavid Iraqi army (referring to the Shiite Safavid dynasty in Iran). This is their flag. All the prayers belong to God. And to you [God] goes all our gratitude. This is the end of Sykes-Picot borders. This is God’s grace. What remains of any borders of Muslim land. Oh God all our prayers belong to you. This is their destruction. They ran away. By God’s blessing. They are the lions of the Levant. Peace be upon you, God is great. This is their evil flag, we will remove it, God willing. For ISIL. That is God’s grace. God’s blessing on them.”
I found this text at Raw: ISIL Fighters Attack on Iraq-Syria Border, and despite the fact that I have found the line “This is the end of Sykes-Picot borders” quoted in other media sources, this is the only place that I could find the context of this quote. There is more on the role of the Sykes–Picot Agreement in the ideology of ISIS in How ISIS Is Tearing Up The Century-old Map Of The Middle East by Charles M. Sennott on the MintPress news site.
It is all very well to chant about the end of Sykes-Picot borders, but what does it mean? How are we to understand Islamist militants being pushed out of Iraq into the civil war in Syria, only to burst back over the border and take possession of Mosul, Iraq’s second city? And why was one of the symbolic actions of that crossing back into Iraq from Syria the use of a bulldozer to push through the earthen berm that defines the border in this part of the Levant?
An intelligent (but limited) article on the BBC by Fawaz A Gerges, London School of Economics, Iraq’s central government suffers mortal blow, diagnoses the problems in Iraq exclusively in terms of short term causes (since the ouster of Saddam Hussein). Gerges even invokes the Weberian concept of sovereignty to explain Iraq’s state failure: “It is doubtful if Baghdad could ever establish a monopoly on the use of force in the country, or exercise authority and centralised control over rebellious Sunni Arabs and semi-independent Kurdistan.” Gerges implies by his analysis that one can adequately understand the conflict in Iraq (and presumably also in Syria) with reference to the last ten or twenty years of political developments. This is an inadequate historical framework. We must go back a hundred years to examine the Sykes–Picot Agreement, and this agreement came to have the significance that it did only because of what preceded it.
Like the division of Europe made at the Yalta Conference before Hitler was defeated on the battlefield (though the end was in sight), the Sykes–Picot Agreement divided the Levant before the Ottoman Empire, which had ruled these lands, was decisively defeated. But we all know that the Ottoman Empire was the “sick man of Europe,” and even the Tsar, precariously perched on his own empire as he was, seemed secure in comparison to the Ottoman sultans. All had witnessed the decline of the Ottoman Empire, and it only remained to wait for (or hasten) its fall.
The Sykes–Picot Agreement was controversial even before it came into effect. Stratfor noted in The Intrigue Lying Behind Iraq’s Jihadist Uprising by Reva Bhalla that:
“When the French and British were colluding over the post-Ottoman map in 1916, czarist Russia quietly acquiesced as Paris and London divided up the territories. Just a year later, in 1917, the Soviets threw a strategic spanner into the Western agenda by publishing the Sykes-Picot agreement, planting the seeds for Arab insurrection and thus ensuring that Europe’s imperialist rule over the Middle East would be anything but easy.”
In “Isis defies repeated efforts to destroy its capability” in the Financial Times (Thursday 12 June 2014), Erika Solomon writes, “Aspiring to create an Islamic caliphate, Isis is already operating over a state-sized amount of territory of its own, stretching east of Aleppo, through desert frontiers into western Iraq.” Solomon quotes analyst Hayder al-Khoei as saying, “A few months ago, Isis was mostly doing hit and run attacks, albeit sophisticated ones. Now it’s holding territory. That’s what’s scary: they feel capable of confronting the state,” and quotes ISIS sympathizer “Shami Witness” (who may be the same individual responsible for the longer quote above) as saying, “Their aim is to expand reasonably, and the goal is definitely Baghdad now.”
The establishment of a new caliphate, the Sykes–Picot Agreement, and the collapse of the Ottoman Empire are linked as differing perspectives on the same historical object. The end of the Ottoman Empire was, to be sure, an opportunity for European colonialism, but it was at the same time the end of an ancient Islamic institution that had endured for more than a thousand years: the Ottoman sultan was the last caliph to rule an Islamic territorial empire and to preside over a dynasty. The Sykes–Picot Agreement is symbolically important not only as an expression of European colonialism and imperial impunity, but also as the agreement that defined the terms by which the last caliphate came to an end (though it was the Grand National Assembly in Ankara that deposed the last Ottoman caliph, ‘Abd al-Majid II, and abolished the caliphate in March 1924).
For many Jihadis and militant Islamists, the establishment of a new caliphate is the unwavering aim to which they are committed with a symbolic determination equal to the symbolic humiliation that they attribute to the Sykes–Picot Agreement. In The Management of Savagery, which I previously cited in The Farther Reaches of Civilization (and which we might characterize as a call for revolutionary violence on the part of Islamic militants), the author laments ineffectual Muslim efforts to secure an Islamic state:
…the Muslims and their organizations quarreled about what they had to do to establish the state of Islam according to the prophetic method. It is a dishonorable and disgraceful affair. Even though the people of Islam possess the largest resources (needed for) achieving success controlling the state, those who did not have the resources very easily became rulers of states and those who had the resources became exiles who did not possess a single meter of land on which to die peacefully.
The people built their states, laid its foundations, and buttressed them. They made its pillars firm and they secured its resources and they instructed the ummah as they saw fit. They acquired advanced positions while the people of Islam were still debating and quarreling about the ideal method for establishing the Islamic state! All of the debaters claim that their proof for what they believed regarding the establishment of the Islamic state was derived from the prophetic method.
Regrettably, some of the people still think that this method needs more investigation and research and many of the people of religion still gather the people together in order to tell them about the ideal method for causing the downfall of the Taghuts or the ideal method of reviving the State of the Caliphate.
Abu Bakr Naji, The Management of Savagery: The Most Critical Stage Through Which the Umma Will Pass, translated by William McCants, 23 May 2006. (Funding for this translation was provided by the John M. Olin Institute for Strategic Studies at Harvard University, and any use of this material must include a reference to the Institute.)
What is (or what was) the caliphate? Here is one perspective:
The caliphate (al-khilāfa) is the term denoting the form of government that came into existence in Islamic lands after the death of the Prophet Muhammad and is considered to have survived until the first decades of the 20th century. It derives from the title caliph (khalīfa, pl. khulafā’ or khalā’if), referring to Muslim sovereigns who claimed authority over all Muslims. The caliphate refers not only to the office of the caliph but also to the period of his reign and to his dominion—in other words, the territory and peoples over whom he ruled. The office itself soon developed into a form of hereditary monarchy, although it lacked fixed rules on the order of succession and based its legitimacy on claims of political succession to Muhammad. The caliphate was constrained by neither any fixed geographical location or boundaries nor particular institutions; rather, it was coterminous with the reign of a monarch or a dynasty.
Gerhard Bowering, editor, The Princeton Encyclopedia of Islamic Political Thought, Princeton University Press, 2013, p. 81
As with any historical institution, the more one reads the history of the caliphate, the more complex the story becomes, and the more difficult it is to extract any one historical lesson from the tangle of particular instances that constituted the institution while it was viable. Whatever the historical ambiguities of the caliphate, the ISIS militants are among those Islamist groups for which the establishment of a new caliphate is a central imperative. It is, because of its historical complexity, an imperative that comes with strings attached, but ISIS may yet prove itself to be the organization that can realize this now century-old dream. I do not think that this is likely, but it is at least possible.
Despite the strong ideological orientation of ISIS, the militant group apparently has no scruples about profiting from its activities. An article in The Guardian by Martin Chulov in Baghdad, How an arrest in Iraq revealed Isis’s $2bn jihadist network, claims that a recent intelligence coup revealed ISIS to have amassed a fortune worth 875 million dollars, all meticulously documented. Try to imagine a group of radical militants with nearly a billion dollars in their control — it is a wonder that they only took Mosul and didn’t go all the way to Baghdad while they were on a roll. (As with the quote above, this story in the Guardian is the only source I could find for this information.)
In a further demonstration of pragmatism, the radicalized and ideologically-motivated militant Islamists of ISIS are not blind to the fact that they cannot merely proclaim a new caliphate, but that any new caliphate must be credible — militarily, politically, ideologically, and religiously. For a caliphate to be credible, it must be established across the divisions of the Sykes–Picot Agreement, and it must hold and administer this territory according to the contemporary paradigm of the nation-state, because this is the recognizable form of political power in our time. (It does not matter that the Islamic conception of the Ummah has more in common with the personal principle in law, while the nation-state is the territorial principle in law made manifest.) A caliphate must furthermore be able to defend itself, and command the approbation of at least some Islamic scholars, preferably the most eminent among them. This will be difficult. Grand Ayatollah Ali al-Sistani has already said, “Defense of Iraq and its people and holy sites is a duty on every citizen who can carry arms and fight terrorists.”
A new caliphate must be existentially viable in order to be credible. To establish a caliphate only to see it ignominiously go down in defeat would probably be a political disaster much greater than failing to re-establish a caliphate. In this, Islamist militants of many different loyalties who in common look toward a new caliphate seem to be as one, and ISIS seems to understand this as well. Whether or not they can make it a reality, only time will tell.
. . . . .
. . . . .
. . . . .
12 June 2014
Scientific civilization changes when scientific knowledge changes, and scientific knowledge changes continuously. Science is a process, and that means that scientific civilization is based on a process, a method. Science is not a set of truths to which one might assent, or from which one might withhold one’s assent. It is rather the scientific method that is central to science, and not any scientific doctrine. Theories will evolve and knowledge will change as the scientific method is pursued, and the method itself will be refined and improved, but method will remain at the heart of science.
Pre-scientific civilization was predicated on a profoundly different conception of knowledge: the idea that truth is to be found at the source of being, the fons et origo of the world (as I discussed in my last post, The Metaphysics of the Bureaucratic Nation-State). Knowledge here consists of delineating the truth of the world prior to its later historical accretions, which are to be stripped away to the extent possible. More experience of the world only further removes us from the original source of the world. The proper method of arriving at knowledge is either through the study of the original revelation of the original truth, or through direct communion with the source and origin of being, which remains unchanged to this day (according to the doctrine of divine impassibility).
The central conceit of agrarian-ecclesiastical civilization, that it was based upon revealed eternal verities, has been so completely overturned that its successor civilization, industrial-technological civilization, recognizes no eternal verities at all. Even the scientific method, which drives the progress of science, is continually being revised and refined. As Marx put it in the Communist Manifesto: “All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air…”
Scientific civilization always looks forward to the next development in science that will resolve our present perplexities, but this comes at the cost of posing new questions that further put off the definitive formulation of scientific truth, which remains perpetually incomplete even as it expands and becomes more comprehensive.
This has been recently expressed by Kevin Kelly in an interview:
“Every time we use science to try to answer a question, to give us some insight, invariably that insight or answer provokes two or three other new questions. Anybody who works in science knows that they’re constantly finding out new things that they don’t know. It increases their ignorance, and so in a certain sense, while science is certainly increasing knowledge, it’s actually increasing our ignorance even faster. So you could say that the chief effect of science is the expansion of ignorance.”
The Technium: A Conversation with Kevin Kelly [02.03.2014]
Scientific civilization, then, is not based on a naïve belief in progress, as is often alleged, but rather embodies an idea of progress that is securely founded in the very nature of scientific knowledge. There is nothing naïve in the scientific conception of knowledge; on the contrary, the scientific conception of knowledge had a long and painfully slow gestation in western civilization, and it is rather the paradigm that science supplants, the theological conception of knowledge (according to which all relevant truths are known from the outset, and are never subject to change), that is the naïve conception of knowledge, sustainable only in the infancy of civilization.
We are coming to understand that our own civilization, while not yet mature, is a civilization that has developed beyond its infancy to the degree that the ideas and institutions of infantile civilization are no longer viable, and if we attempt to preserve these ideas and institutions beyond their natural span, the result may be catastrophic for us. And so we have come to the point of conceptualizing our civilization in terms of existential risk, which is a thoroughly naturalistic way of thinking about the fate and future of humanity, and is amenable to scientific treatment.
It would be misleading to attribute our passing beyond the infancy of civilization to the advent of the particular civilization we have today, industrial-technological civilization. Even without the industrial revolution, scientific civilization would likely have gradually come to maturity, in some form or another, as the scientific revolution dates to that period of history that could be called modern civilization in the narrow sense — what I have called Modernism without Industrialism. And here by “maturity” I do not mean that science is exhausted and can produce no new scientific knowledge, but that we become reflexively aware of what we are doing when we do science. That is to say, scientific maturity is when we know ourselves to be engaged in science. In so far as “we” in this context means scientists, this was probably largely true by the time of the industrial revolution; in so far as “we” means mass man of industrial-technological civilization, it is not yet true today.
The way in which science enters into industrial-technological civilization — i.e., by way of spurring forward the open loop of industrial-technological civilization — means that science has been incorporated as an integral part of the civilization that immediately and disruptively followed the scientific civilization of modernism without industrialism (according to the Preemption Hypothesis). While the industrial revolution disrupted and preempted almost every aspect of the civilization that preceded it, it did not disrupt or preempt science, but rather gave a new urgency to science.
In several posts I have speculated on possible counterfactual civilizations (according to the counterfactuals implicit in naturalism), that is to say, forms of civilization that were possible but which were not actualized in history. One counterfactual civilization might have been agrarian-ecclesiastical civilization undisrupted by the scientific or industrial revolutions. Another counterfactual civilization might have been modern civilization in the narrow sense (i.e., Modernism without Industrialism) coming to maturity without being disrupted and preempted by the industrial revolution. It now occurs to me that yet another counterfactual form of civilization could have been that of industrialization without the scientific conception of knowledge or the systematic application of science to industry.
How could this work? Is it even possible? Perhaps not, and certainly not in the long term, or with high technology, which cannot exist without substantial scientific understanding. But the simple expedient of powered machinery might have come about by the effort of tinkerers, as did much of the industrial revolution as it happened. If we look at the halting and inconsistent efforts in the ancient world to produce large scale industries we get something of this idea, and this we could call industrialism without modernity. Science was not yet at the point at which it could be very helpful in the design of machinery; none of the sciences were yet mathematicized. And yet some large industrial enterprises were built, though few in number. It seems likely that it was not the lack of science that limited industrialization in classical antiquity, but the slave labor economy, which made labor-saving devices pointless.
There are, today, many possibilities for the future of civilization. Technically, these are future contingents (like Aristotle’s sea battle tomorrow), and as history unfolds one of these contingencies will be realized while the others become counterfactuals or are put off yet further. And in so far as there is a finite window of opportunity for a particular future contingent to come into being, beyond that window all unactualized contingents become counterfactuals.
. . . . .
. . . . .
. . . . .
. . . . .
4 June 2014
There is a spectre haunting China — the spectre of Tiananmen. It is now a quarter century since the June 4 incident, as it is known among the Chinese. The Chinese government is concerned that the symbolic significance of 25 years since the carnage in Tiananmen Square will mean the resurfacing of memories that the communist party of China has diligently sought to suppress and conceal. Within China, they have been largely successful, but they have not exorcised the spectre of Tiananmen, which haunts public consciousness even as it is carefully expunged. Can a nation forget? Ought a nation to forget? To put the question in a new light, does a nation have the right to forget? Does China have the right to forget the Tiananmen massacre?
There has been a great deal of attention recently focused on what is now called “the right to be forgotten,” as the result of a European Court of Justice ruling that has forced the search engine Google to give individuals the opportunity to petition for the removal of links that connect their names with events in their past. This present discussion of a right to be forgotten may be only the tip of an iceberg of future conflicts between privacy and transparency. It is to be expected that different societies will take different paths in attempting to negotiate some kind of workable compromise between privacy and transparency, as we can already see in this court ruling, with Europe going in one direction — a direction that will not necessarily be followed by other politically open societies.
The Chinese communist party that presided over the Tiananmen massacre would certainly like the event to disappear from public consciousness, and to pretend as though it never happened, and the near stranglehold that the communist party exercises over society means that it is largely successful within the geographical extent of China. But outside China, and even in Hong Kong and Taiwan, the memory does not fade away as the communist party hopes, but remains, held in a kind of memory trust for the day when all Chinese can know the truth of Chinese history. A hundred years from now, when the communist party no longer rules China, and the details of its repression are a fading memory that no one will want to remember, Tiananmen will continue to be the “defining act” of modern Chinese history, as it has been identified by Bao Tong (as reported in the recent book People’s Republic of Amnesia: Tiananmen Revisited by Louisa Lim).
The right to be forgotten could be understood as an implementation of the right to privacy, but it is also suggestive of the kind of control of history routinely practiced by totalitarian societies, and most notoriously by Stalin, who had individuals who had fallen out of favor excised from history books and painted out of pictures and photographs, so that it was as though the individual had never existed at all. It has been suggested that this extreme control of history was intended to send a message to dissidents or potential dissidents of the pointlessness of any political action taken against the state, because the state could effectively make them disappear from history, and their act of defiance would ultimately have no meaning at all.
Many have observed that there is no right to privacy written into the US Constitution, and some have proposed an amendment to the Constitution that would secure such a legal right to privacy. I found one such proposed amendment, worded as follows:
“Each person has the right to privacy, including the right to keep personal information private; to communicate with others privately; and to make decisions concerning his or her body.”
But a nation-state is not a person, not an individual, and while advocates of the nation-state and the system of international anarchy that prevails among nation-states claim supra-personal rights on behalf of the nation-state, I think that the moral intuitions that predominate in our time deny to political entities — in principle, if not always in practice — the kind of rights that persons have, or ought to have. I further suspect that those who advocate a right to privacy or a right to be forgotten would not likely extend this right to political entities.
Few would dispute that the individual deserves greater consideration when it comes to privacy than a political entity. This idea has already been incorporated into law. In libel and slander cases, individuals considered private citizens are viewed in a different light by the courts than public figures such as politicians and celebrities, and I am sure that at least one of the motivations on behalf of the “right to be forgotten” is the idea that private citizens deserve a certain anonymity and a higher level of protection. Nevertheless, the opportunities for abuse of the right to be forgotten are so obvious, and so apparently easily exploited, that it is at least questionable whether a right to be forgotten can be considered an implementation of one aspect of a right to privacy (which latter, as noted above, does not itself have legal standing in most nation-states).
I think that the worry that individuals will be dogged by a past on the internet that they would rather forget is overstated. We hear about the egregious cases in which individuals lose their jobs because of off-color photographs from years before, but the media emphasis that falls upon these cases tends to obscure how social networks actually function. On most online social networks, individuals post a vast amount of material, the vast bulk of which is rapidly pushed into the past by new posts piling up on top of them. Most things are forgotten quite quickly, and it takes a real effort to locate some post from the past amid the sheer amount of material.
The exception to this rapid receding even of the recent past is what has come to be called the Streisand Effect: when the attempt to suppress information results in the wider dissemination of the same information. In other words, it is often the attempt to suppress information that creates a situation in which a right to be forgotten becomes an issue. If an individual or a nation-state did not try to sanitize its past, much of this past would naturally fall into obscurity and would eventually be forgotten.
The institutional memories of nation-states guarantee, on the one hand, that many things will not be forgotten, while on the other hand the equally institutional suppression of events, or versions of events, can become something like an imperative to forget, that buries in the silent grave of the past all that the institution and its agents do not want on the conscience of the nation-state. Nietzsche once wrote that, “My memory says, ‘I have done this.’ My pride says, ‘I could not have done this.’ Soon my memory yields.” This, I think, is equally true for nation-states and for individuals.
It is this imperative to forget, to put behind that which is a burden to the conscience of the individual or the institution, that provokes the opposite reaction — the moral demand that a memory not be forgotten, and this is why one of the most familiar political slogans is, “Never forget.” There is a Wikipedia article on “Never forget,” calling it, “a political slogan used to urge commemoration and remembrance for national tragedies,” and noting that, “It is often used in conjunction with ‘never again’.” Both of these slogans are as appropriate for Tiananmen as for any other national tragedy one might care to name.
In Twenty-one years since Tiananmen I mentioned the then-recently published diary of Li Peng, who compared Tiananmen to the Cultural Revolution, and justified the Tiananmen crackdown as necessary to avoid another tragedy of Chinese history on the scale of the Cultural Revolution. Thus for Li Peng, the massacre at Tiananmen on 04 June 1989 was itself undertaken in the spirit of “Never again.” During the Cultural Revolution, China had scarcely more government than Somalia has today; the state during the Cultural Revolution was essentially represented by roving bands of Red Guards who killed and destroyed virtually at will. The attitude of Li Peng and other communist leaders who ordered the massacre was, “Never forget” the Cultural Revolution, and never allow it to happen again. In their eagerness to avoid another national tragedy, they created another national tragedy that in its turn has become a focus of the imperative to never forget.
The emergence of the memory of Tiananmen as an imperative to never forget, no less than the imperative to never forget the Cultural Revolution, poses a problem for the authority of the Chinese communist party, and the party has taken the familiar Stalinist path of attempting to control institutional memory. Rather, however, than the brutal amnesia of Stalinist Russia, when disgraced party members were painted out of heroic celebrations of communist triumph with a certain awkwardness, so as to remind the people that individuals can be forgotten and written out of history, the Chinese have approached the problem of controlling history through pervasive low-level intervention.
An article in the Wall Street Journal, Tiananmen Crackdown Shaped China’s Iron-Fisted Approach to Dissent, describes the method of the Chinese police for dealing with dissidents:
“In taking down Mr. Zhang, police applied a well-honed, layered strategy to nip opposition in the bud. His moves were carefully tracked online and in real life. He was apprehended just before the Chinese New Year, when it was less likely to attract attention, and then quietly released into a life of isolation. ‘These are strategies that have been used over and over again,’ says Maya Wang, Asia researcher with Human Rights Watch. ‘Tiananmen also started small. The government has to be on the lookout for sparks… They’ve been working on this for 25 years’.”
The skittishness of Chinese authorities entails a low threshold for intervention, meaning that the state feels it must act on the smallest suspicion of dissent. It is this skittishness that led to the suppression of a movement as apparently innocuous as Falun Gong.
We all know that tyrants and dictators eviscerate civil society, leaving nothing to a people but the dictator himself, or his cronies, so that the people are utterly reliant on the state for all things; here there is no alternative to the one, universal institution of dictatorship. While China’s economic opening to the world has been so dramatic that there has been a tendency to view Beijing’s totalitarianism as a perhaps kinder and gentler totalitarianism, in actual fact the low threshold for dissidence in the wake of Tiananmen has meant systematically dismantling and deconstructing any and all spontaneous institutions of civil society, wrecking any promising social movement that might serve as an alternative focus for social organization not dictated by the communist party.
This evisceration of civil society, at all levels and across all institutions, may well mean yet another “Never forget, never again” moment will define China’s future history. Without robust institutions of civil society outside the exclusive control of China’s communist party, weathering the coming storms of history will not be easy, and the communist party of China is building into its rule a kind of brittleness that will serve neither itself nor the people of China when the country experiences the kind of strategic shocks that are inevitable in the long term history of a nation-state.
In the meantime, the Chinese communist party will continue to assert its right to forget its own unpleasant past, and to defend this right by policing its own amnesia. This, again, incorporates a kind of brittleness into the rule of the party, even a kind of schizophrenia in actively seeking to suppress not only a memory, but also public consciousness of the meaning of China’s modern history.
. . . . .
Previous posts on Tiananmen Anniversaries:
2013 A Dream Deferred
. . . . .
. . . . .
. . . . .
3 June 2014
A distinction often employed in historiography is that between the diachronic and the synchronic. I have written about this distinction in several posts including Axes of Historiography, Ecological Temporality and the Axes of Historiography, Synchronic and Diachronic Geopolitical Theories, and Synchronic and Diachronic Approaches to Civilization.
It is common for this distinction to be explained by saying that the diachronic perspective is through time and the synchronic perspective is across time. I don’t find this explanation to be helpful or intuitively insightful. I prefer to say that the diachronic perspective is concerned with succession while the synchronic perspective is concerned with interaction within a given period of time. Sometimes I try to drive this point home by using the phrases “diachronic succession” and “synchronic interaction.”
In several posts I have emphasized that futurism is the historiography of the future, and history the futurism of the past. In this spirit, it is obvious that the future, like the past, can also be approached diachronically or synchronically. That is to say, we can think of the future in terms of a succession of events, one following upon another — what Shakespeare called “such a dependency of thing on thing, as e’er I heard in madness” — or in terms of the interaction of events within a given period of future time. Thus we can distinguish diachronic futurism and synchronic futurism. This is a difference that makes a difference.
One of the rare points at which futurism touches upon public policy and high finance is in planning for the energy needs of power-hungry industrial-technological civilization. If planners are convinced that the future of energy production lies in a particular power source, billions of dollars may follow, so real money is at stake. And sometimes real money is lost. When the Washington Public Power Supply System (abbreviated as WPPSS, and which came to be pronounced “whoops”) thought that nuclear power was the future for the growing energy needs of the Pacific Northwest, they started to build no fewer than five nuclear power facilities. For many reasons, this turned out to be a bad bet on the future, and WPPSS defaulted on 2.25 billion dollars of bonds.
The energy markets provide a particularly robust demonstration of synchrony, so that within the broadly defined “present” — that is to say, in the months or years that constitute the planning horizon for building major power plants — we can see a great number of interactions within the economy that resemble nothing so much as the checks and balances that the writers of the US Constitution built into the structure of the federal government. But while the founders sought political checks and balances to disrupt the possibility of any one part of the government becoming disproportionately powerful, the machinations of the market (what Adam Smith called the “invisible hand”) constitute economic checks and balances that often frustrate the best laid schemes of mice and men.
Energy markets are not only a concrete and pragmatic exercise in futurism; they are also a sector that tends toward great oversimplification and is vulnerable to bubbles and panics, which have contributed to a boom-and-bust cycle in the industry with disastrous consequences. The captivity of energy markets to public perceptions has led to a lot of diachronic extrapolation of present trends in the overall economy and in the energy sector in particular. I’ve written some posts on diachronic extrapolation — The Problem with Diachronic Extrapolation and Diachronic Extrapolation and Uniformitarianism — in an attempt to point out some of the problems with straight line extrapolations of current trends (not to mention the problems with exponential extrapolation).
An example of diachronic extrapolation carried out in great detail is the book $20 Per Gallon: How the Inevitable Rise in the Price of Gasoline Will Change Our Lives for the Better by Christopher Steiner, which I discussed in Are Happy Days Here Again?, speculating on how the economy will change as gasoline prices continue to climb, and written as though nothing else would happen at the same time that gas prices are going up. If we could treat one energy source — like gasoline — in ideal isolation, this might be a useful exercise, but this isn’t the case.
When the price of fossil fuels increases, several things happen simultaneously. More investment comes into the industry, sources that had been uneconomical to tap start to become commercially viable, and other sources of energy that had been expensive relative to fossil fuels become more affordable relative to the increasing price of their alternatives. Also, with the passage of time, new technologies become available that make it both more efficient and more cost effective to extract fossil fuels previously not worth the effort to extract. Higher technologies not only affect production, but also consumption: the extracted fossil fuels will be used much more efficiently than in the past. And any fossil fuels that lie untapped — such as, for example, the oil presumed to be under ANWR — are essentially banked in the ground for a future time when their extraction will be efficient, effective, and can be conducted in a manner consistent with the increasingly stringent environmental standards that apply to such resources.
Energy industry executives have in the past had difficulty in concealing their contempt for alternative and renewable resources, and for decades the mass media aided and abetted this by not taking these sources seriously. But that is changing now. The efficiency of solar electric panels and wind turbines has been steadily improving, and many European nation-states have proved that these technologies can be scaled up to supply an energy grid on an industrial scale. For those who look at the big picture and the long term, there is no question that solar electric will be a dominant form of energy; the only problem is that of storage, we are told. But the storage problem for solar electricity is a lot like the “eyesore” problem for wind turbines: it has only been an effective objection because the alternatives are not taken seriously, and propaganda rather than research has driven the agenda. The Earth is bathed in sunlight at all times, but one side is always dark. A global energy grid — well within contemporary technological means — could readily supply energy from the lighted side to the dark side.
Even this discussion is too limited. The whole idea of a “national grid” is predicated upon an anarchic international system of nation-states in conflict, and the national energy grid becomes in turn a way for nation-states to defend their geographical territory by asserting control of energy resources within that territory. There is no need for a national energy grid, or for each nation-state to have a proprietary grid. We possess the technology today for decentralized energy production and consumption that could move us away from the current paradigm of a national energy grid, with its widely distributed consumption and centralized production.
But it is not my intention in this context to write about alternative energy, although this is relevant to the idea of synchrony in energy markets. I cite alternative energy sources because this is a particular blindspot for conventional thinking about energy. Individuals — especially individuals in positions of power and influence — get trapped in energy groupthink no less than strategic groupthink, and as a result of being virtually unable to conceive of any energy solution that does not conform to the present paradigm, those who make public energy policy are often blindsided by developments they did not anticipate. Unfortunately, they make these policies with public money, picking winners and losers, and they are wrong much of the time, which means losses to the public treasury.
When an economy, or a sector of the economy, is subject to stresses, that economy or sector may experience failure — whether localized and containable, or catastrophic and contagious. In the wake of the late financial crisis, we have heard about “stress testing” banks. Volatility in energy markets stress tests the components of the energy markets. Since this is a real-world event and not a test, different individuals respond differently. Individuals representing institutional interests respond as one would expect institutions to respond, but in a market as complex and as diversified as the energy market, there are countless small actors who will experiment with alternatives. Usually this experimentation does not amount to much, as the kind of resources that institutions possess are not invested in them, but this can change incrementally over time. The experimental can become a marginal sector, and a marginal sector can grow until it becomes too large to ignore.
All of these events in the energy sector — and more and better besides — are occurring simultaneously, and the actions of any one agent influence the actions of all other agents. It is a fallacy to consider any one energy source in isolation from others, but it is a necessary fallacy because no one can understand or anticipate all the factors that will enter into future production and consumption. Energy is the lifeblood of industrial-technological civilization, and yet it is beyond the capacity of that civilization to plan its energy future, which means that industrial-technological civilization cannot plan its own future, or foresee the form that it will eventually take.
Synchrony in energy markets occurs at an order of magnitude that defies all prediction, no matter how hard-headed or stubbornly utilitarian in conception the energy futurism involved. The big picture reveals patterns — that fossil fuels dominate the present, and solar electric is likely to dominate the future — but it is impossible to say in detail how we will get from here to there.
. . . . .
. . . . .
. . . . .
27 May 2014
A great deal of contemporary political stability — much more than we usually like to think — is predicated upon the careful management of public opinion and the engineering of consent. The masses that constitute mass society in an age of mass man have the vote, and as voters they play a role in the liberal democracies that populate Fukuyama’s end of history, but we must observe that the role the voters play in democracy is carefully circumscribed. (A perfect example of this is the lack of transparency built into the US electoral college, adding layers of procedural rationality between the voters and the outcome of the process.) There is always a tension in liberal democracies predicated upon the management of public opinion over how far and how hard the masses can be pushed. If they are pushed too hard, they riot, or they fail to cooperate with the dominant political paradigm. If they are not pushed hard enough, or if they are not sufficiently fearful of authority, again, they might riot, or they might not work hard enough to keep the wheels of industry turning.
So political elites don’t push, they nudge. The nauseating paternalism of the “nudge” mentality among contemporary politicians (which, instead of being called “engineering consent” — a term that carries unfortunate connotations — is now called “active engineering of choice architecture”) derives from the book Nudge: Improving Decisions about Health, Wealth, and Happiness, and seeks to apply the findings of behavioral economics to public policy decisions — with the proviso, of course, that it is the people in charge, the people who make public policy, who know best, and if we want a better world we need to give them a free hand to shape our choices. Unfortunately, the working class masses are not in a position to actively engineer the choice architecture of political leaders, although it is at least arguable that the political elite need an engineered choice architecture far more than the masses.
The European Union has been testing the boundaries of how far the European masses can be pushed (or nudged) to cooperate in bringing about the vision of a unified Europe, and with the Euroskeptics winning in many different regions of Europe, it appears that the European masses are pushing back by failing to cooperate with the dominant political paradigm. The political class of the European Union has just been handed a sharp rebuke that is a reminder of the limits of engineering consent, and they have been remarkably open and honest about it. German Chancellor Angela Merkel was quoted by the BBC on the need for economic development: “This is the best answer to the disappointed people who voted in a way we didn’t wish for.”
This European openness about the failure of Europe’s political class to effectively engineer the consent of the governed for the political and economic programs planned by the political elite is an important corrective to the American tendency to see conspiracies and secret cabals behind every unexpected turn of events. In Europe, the politicians have been honest that they wanted one result, and the people gave a different result. French President François Hollande was quoted as saying he would, “reaffirm that the priority is growth, jobs and investment.” Why are Merkel and Hollande united in seeing the need for jobs and economic development? Because they know that workers making good wages and who see a future for themselves and their families will mostly let the politicians have their way. It is when times are not good that voters push back against the grandiose dreams of politicians that seem to have little or no practical benefit. Europe’s political class is well aware that if the European masses have growth, jobs, and investment that they will be far more compliant at election time.
However, Hollande also said of the Eurozone financial crisis (now apparently safely in the past) that Europe had survived, “but at what price? An austerity that has ended up disheartening the people.” This latter statement demonstrates the degree to which Hollande fails to understand what is going on even as the ground shifts beneath his feet. One must understand that when European politicians talk about “austerity” what they really mean is resisting unchecked deficit spending, which would then be justified on Keynesian grounds. (I earlier called this the Europeanese of the financial crisis.) It isn’t “austerity” that has disheartened the people; it is Europe that has disheartened the people, the Europe of the European Union — but this is a realization that is almost impossible for true believers in the European idea to accept.
The tension between the masses in representative democracies and their putative political representatives has become obvious and explicit with this EU election in which “Euroskeptics” have been the most successful candidates. This tension can also be understood by way of a very simple thought experiment: If you really had a free choice to elect whomever you liked as your political leader(s), are the political representatives you have now the ones you would choose? I think that any honest answer to this question must be, “No.” And this leaves us with the further question as to how these “leaders” came into power if they are not the choice of the people. The answer is relatively simple: these were the leaders that the political system produced for the consumption of the public. The public isn’t happy with its leaders, and the leaders aren’t happy with the public, but they are stuck with each other.
There is a limit to the extent to which the disconnect between rulers and ruled can grow before a social system becomes unworkable. Early in this blog in Social Consensus in Industrialized Society I suggested that two paradigms for the social organization of industrial society had been tried and found wanting, and that we are today searching for a further paradigm of social consensus to supersede those that have failed us. The mutual alienation between political elites and working masses in the liberal democracies of today is a symptom of the lack of social consensus, but in so far as these classes of society feel stuck with each other we have not yet reached the limits of the disconnect.
However, this mutual alienation tells us something else that is interesting, and this is the continued role of mythological political visions in an age of apparent pragmatism. The alienation that lies at the root of what Eric Voegelin called “gnosticism” in politics is here revealed as the alienation of the leadership of a democratic society from the people they presumptively represent (Hollande said of the EU that it had become, “remote and incomprehensible”) and of the people from its “leadership.”
Gnosticism is a worldview in which secret knowledge is reserved for initiates into the higher mysteries. Here is one of Voegelin’s definitions of gnosis:
“…a purported direct, immediate apprehension or vision of truth without the need for critical reflection; the special quality of a spiritual and cognitive elite.”
Eric Voegelin, Autobiographical Reflections, Collected Works Vol. 34, Columbia University, 2006, Glossary of Terms, p. 160
How does the claim to gnosis reveal itself in our pragmatic, bureaucratic age? Gnosis is necessarily distinct for each of the political classes, each of which has created its own political mythology in which it is a unique and indispensable historical actor on an eschatological stage. For mass man, gnosis takes the form of “consciousness raising,” whether being made aware, for the first time, of his status as a worker (proletarian), his race, his ethnicity, or any other property that can be employed to distinguish the elect. Access to official secrets is the special privilege and the secret knowledge of the elite political classes — the elect of the nation-state — so that to compromise these secrets and the privilege of access to them is to call into question the political mythology of the elites.
The creation of universal surveillance states is part of this development, since the efficient management of mass man is predicated upon knowing the masses better than the mass knows itself — knowing what the mass wants, what will placate its tantrums, how hard it can be pushed, and, when the masses push back, how they can be most effectively distracted, mollified, and redirected. The extreme reactions to the revelation of official secrets that we have seen in the hysterical responses on the part of the elite political classes to Wikileaks and the Snowden leaks are the result of challenging the political mythos of the ruling elite.
In Europe, residual nationalism, ethnocentrism, and communism still resonate with some sectors of the electorate, and all of these can be the focus of a purported gnosis; it is precisely the fragmented and divided nature of these loyalties that has kept Europe a patchwork of warring nation-states, and which threatens to torpedo the idea of a unified Europe. In the US, the intellectual lives of the workers have evolved in a different direction, which has resulted in an entirely new political mythology born out of a syncretism of conspiracy theories. (Political conspiracy theories also play a significant role in Africa, Arabia, and parts of Asia; perhaps they will yet come to the European masses.) The elite political classes are contemptuous of the conspiracy theories that excite the masses, even when these conspiracy theories verge uncomfortably close to the truth, but they are jealous in the extreme of their own “secret” knowledge obtained through surveillance. Thus we experience what Ed Snowden has called the Merkel Effect, wherein a member of the elite political class is subject to the very surveillance to which they have subjected others, and it is regarded as a scandal. The masses, on the other hand, are often defiant when their conspiracy theories are subject to rational examination, calling into question their own “secret” knowledge of how the world functions.
It is important to note that the rise of conspiracy theories on the part of the masses and the rise of surveillance on the part of elite classes are parallel developments. Both classes of society are seeking forms of secret knowledge — that is to say, this is the perfect illustration of Voegelin’s thesis on the role of gnosticism in contemporary political societies.
. . . . .
. . . . .
. . . . .
. . . . .
24 May 2014
In my post on why the future doesn’t get funded I examined the question of unimaginative funding that locks up the better part of the world’s wealth in “safe” investments. In that post I argued that the kind of person who achieves financial success is likely to do so as a result of putting on blinders and following a few simple rules, whereas more imaginative individuals who want adventure, excitement, and experimentation in their lives are not likely to be financially successful, but they are more likely to have a comprehensive vision of the future — precisely what is lacking among the more stable souls who largely control the world’s financial resources.
Of course, the actual context of investment is much more complex than this, and individuals are always more interesting and more complicated than the contrasting caricatures that I have presented. But while the context of investment is more complicated than I have presented it in my previous sketch of venture capital investment, that complexity does not exonerate the unimaginative investors who have a more complex inner life than I have implied. Part of the complexity of the situation is a complexity that stems from self-deception, and I will now try to say something about the role of self-deception on the part of venture capitalists.
One of the problems with venture capital investments, and one of the reasons that I have chosen to write on this topic, is that the financial press routinely glorifies venture capitalists as financial visionaries who are midwives to the future as they finance ventures that other more traditional investors and institutional investors would not consider. While it is true that venture capitalists do finance ventures that others will not finance, as I pointed out in the above-linked article, no one takes on risk for risk’s sake, so that it is the most predictable and bankable of the ventures that haven’t been funded that get funding from the lenders of last resort.
Venture capitalists, I think, have come to rather enjoy their status in the business community as visionaries, and are often seen playing that role, making portentous pronouncements in interviews with the Wall Street Journal and other organs of the financial community. By and large, however, venture capitalists are not visionaries. But many of them have gotten lucky, and herein lies the problem. If someone thinks that they understand the market and where it is going, and they make an investment that turns out to be successful, they will take this as proof of their understanding of the mechanisms of the market.
This is actually an old philosophical paradox that was in the twentieth century given the name of the Gettier paradox. Here’s where the idea comes from: many philosophers have defined knowledge as justified true belief (something that I previously discussed in A Note on Plantinga). I myself object to this definition, and hold, in the Scholastic tradition, that something known is not a belief, and something believed cannot be said to be known. So, as I see it, knowledge is no kind of belief at all. Nevertheless, many philosophers persist in defining knowledge as justified true belief, even though there is a problem with this definition. The problem with the definition of knowledge as justified true belief is the Gettier paradox. The Gettier paradox is the existence of counter-examples that are obviously not knowledge, but which are both true and justified.
Before this idea was called the Gettier paradox, Bertrand Russell wrote about it in his book Human Knowledge. When stated in terms of “non-defeasibility conditions” and similar technical ideas, the Gettier paradox sounds rather daunting, but it is actually quite a simple idea, and one that Russell identified with simple examples:
“It is clear that knowledge is a sub-class of beliefs: every case of knowledge is a case of true belief, but not vice versa. It is very easy to give examples of true beliefs that are not knowledge. There is the man who looks at a clock which is not going, though he thinks it is, and who happens to look at it at the moment when it is right; this man acquires a true belief as to the time of day, but cannot be said to have knowledge. There is the man who believes, truly, that the last name of the Prime Minister in 1906 began with a B, but who believes this because he thinks that Balfour was Prime Minister then, whereas in fact it was Campbell Bannerman. There is the lucky optimist who, having bought a lottery ticket, has an unshakeable conviction that he will win, and, being lucky, does win. Such instances can be multiplied indefinitely, and show that you cannot claim to have known merely because you turned out to be right.”
Bertrand Russell, Human Knowledge: Its Scope and Limits, New York: Simon and Schuster, 1964, pp. 154-155
Of Russell’s three examples, I like the first best because it so clearly delineates the idea of justified true belief that fails to qualify as knowledge. You look at a stopped clock that indicates noon, and it happens to be noon. You infer from the hands on the dial that it is noon. That inference is your justification. It is, in fact, noon, so your belief is true. But this justified true belief is based upon accident and circumstance, and we would not wish to reduce all knowledge to accident and circumstance. Russell’s last example involves an “unshakeable conviction,” that is to say, a particular state of belief (what analytical philosophers today might call a doxastic context), so it isn’t quite as pure an example of justified true belief as the others.
An individual’s understanding of history is often replete with justified true beliefs that aren’t knowledge. We look at the record of the past and we think we understand, and things do seem to turn out as we expected, and yet we still do not have knowledge of the past (or of the present, much less of the future). When we read the tea leaves wrongly, we are right for the wrong reasons, and when we are right for the wrong reasons, our luck will run out, sooner rather than later.
Contemporary history — the present — is no less filled with misunderstandings when we believe that we understand what is happening, we anticipate certain events on the basis of these beliefs, and the events that we anticipate do come to pass. This problem compounds itself, because each prediction borne out raises the confidence of the investor, who is then more likely to trust his judgments in the future. To be right for the wrong reasons is to be deceived into believing that one understands that which one does not understand, while to be wrong for the right reason is to truly understand, and to understand better than before, because one’s views have been corrected and one understands both that they have been corrected and how they have been corrected. Growth of knowledge, in true Popperian fashion, comes from criticism and falsification.
This problem is particularly acute with venture capitalists. A venture capital firm early in its history makes a few good guesses and becomes magnificently wealthy. (We don’t hear about the individuals and firms that fail right off the bat, because they disappear; this is called survivorship bias.) This is the nature of venture capital: you invest in a number of enterprises expecting most to fail, but the one that succeeds succeeds so spectacularly that it more than makes up for the other failures. But the venture capital firm comes to believe that it understands the direction that the economy is headed. They no longer think of themselves as investors, but as sages. These individuals and firms come to exercise an influence over what gets funded and what does not get funded that is closely parallel to the influence that, say, Anna Wintour has over fashion markets.
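The arithmetic of this “most fail, one pays for everything” logic can be sketched with a toy simulation. All the numbers here (portfolio size, hit rate, winner multiple) are hypothetical illustrations chosen for simplicity, not data about any actual fund:

```python
import random

random.seed(42)

def simulate_portfolio(n_investments=10, stake=1.0,
                       hit_rate=0.1, winner_multiple=30.0):
    """Toy model: every investment is a total loss except for rare
    'hits' that return a large multiple of the stake."""
    total_return = 0.0
    for _ in range(n_investments):
        if random.random() < hit_rate:
            # the rare spectacular success
            total_return += stake * winner_multiple
        # failures return nothing
    invested = n_investments * stake
    return total_return - invested  # net profit (or loss)

# Expected value per portfolio: 10 * (0.1 * 30 - 1) = +20,
# even though roughly 90% of individual bets go to zero, and
# about a third of whole portfolios (0.9 ** 10) have no hit at all.
profits = [simulate_portfolio() for _ in range(100_000)]
print(sum(profits) / len(profits))
```

The point of the sketch is that a positive expected value is entirely compatible with most bets, and even many whole portfolios, losing money — which is exactly the environment in which a lucky early winner can be mistaken for insight.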
Few venture capital firms can successfully follow up on the successes that initially made them fabulously wealthy. Some begin to shift to more conservative investments, and their portfolios can come to look more like the Sage of Omaha’s than a collection of risky start-ups. Others continue to try to stake out risky positions, and fail almost as spectacularly as their earlier successes. The obvious example here is the firm of Kleiner Perkins.
Kleiner Perkins focused on a narrow band of technology companies at a time when tech stocks were rapidly increasing, also known as the “tech bubble.” Anyone who invested in tech stocks at this time, prior to the bubble bursting, made a lot of money. Since VC firms focus on short-term start-up funding, they were especially positioned to profit from a boom that quickly spiraled upward before it crashed back down to earth. In short — and this is something everyone should understand without difficulty — they were in the right place at the right time. After massive losses they threw a sop to their injured investors by cutting fees and tried to make it look like they were doing something constructive by restructuring their organization — also known as “rearranging the deck chairs on the Titanic.” But they still haven’t learned their lesson, because instead of taking classic VC risks with truly new ideas, they are relying on people who “proved” themselves at the tech start-ups that Kleiner Perkins glaringly failed to fund, Facebook and Twitter. This speaks more to mortification than confidence. Closing the barn door after the horse has escaped isn’t going to help matters.
Again, this is a very simplified version of events. Actual events are much more complex. Powerful and influential individuals who anticipate events can transform that anticipation into a self-fulfilling prophecy. There are economists who have speculated that it was George Soros’ shorting of the Thai Baht that triggered the Asian financial crisis of 1997. So many people thought that Soros was right that they started selling off Thai Baht, which may have triggered the crisis. Many smaller economies now take notice when powerful investors short their currency, taking preemptive action to head off speculation turning into a stampede. Similarly, if a group of powerful and influential investors together back a new business venture, the mere fact that they are backing it may turn an enterprise that might have failed into a success. This is part of what Keynes meant when he talked about the influence of “animal spirits” on the market.
What Keynes called “animal spirits” might also be thought of as cognitive bias. I don’t think that one can put too much emphasis on the role of cognitive bias in investment decisions, and especially on the role of the substitution heuristic when it comes to pricing risk. In Global Debt Market Roundup I noted this:
It seems that China’s transition from an export-led growth model to a consumer-led growth model based on internal markets is re-configuring the global commodities markets, as producers of raw materials and feedstocks are hit by decreased demand while manufacturers of consumer goods stand to gain. I think that this influence on global markets is greatly overstated, as China’s hunger for materials for its industry will likely decrease gradually over time (a relatively predictable risk), while the kind of financial trainwreck that comes from disregarding political and economic instability can happen very suddenly, and this is a risk that is difficult to factor in because it is almost impossible to predict. So are economists assessing the risk they know, according to what Daniel Kahneman calls a “substitution heuristic” — answering a question that they know, because the question at issue is either too difficult or intractable to calculation? I believe this to be the case.
Most stock pickers simply don’t have what it takes to understand the political dynamics of a large (and especially an unstable) nation-state, so instead of trying to engage in the difficult task of puzzling out the actual risk, an easier question is substituted for the difficult question that cannot be answered. And thus it is that even under political conditions in which wars, revolution, and disruptive social instability could result in an historically unprecedented loss or expropriation of wealth, investors find a way to convince themselves that it is okay to return their money to a region (or to an enterprise) likely to mismanage any funds that are invested. The simpler way to put this is to observe that greed gets ahead of good sense and due diligence.
Keynes thought that the animal spirits (i.e., cognitive biases) were necessary to the market functioning. Perhaps he was right. Perhaps venture capital also can’t function without investors believing themselves to be right, and believing that they understand what is going on, when in fact they are wrong and they do not understand what is going on. But unless good sense and due diligence are allowed to supplement animal spirits, a day of reckoning will come when apparent gains unravel and some unlucky investor or investors are left holding the bag.
. . . . .
. . . . .
. . . . .