4 July 2015
The viability of political entities
There is a well-known story that Benjamin Franklin, as he left Independence Hall on the final day of the deliberations of the Constitutional Convention of 1787, was asked, “Well, Doctor, what have we got — a Republic or a Monarchy?” Franklin’s famous response was, “A Republic, madam — if you can keep it.” (The anecdote comes from the notes of Dr. James McHenry, a Maryland delegate to the Convention, first published in The American Historical Review, vol. 11, 1906.)
The qualification implies the difficulty of the task of keeping a republic together, and keeping it republican. If doing so were easy, Franklin would not have bothered to note that qualification. That he did note it, in the spirit of a witticism, reminds me of another witticism from the American Revolution — quite literally an instance of gallows humor: “Gentlemen, we must now all hang together, or we shall most assuredly all hang separately.” This, too, was from Benjamin Franklin.
The men who fomented the American Revolution, and who went on to hold the Constitutional Convention, were no starry-eyed dreamers. They were tough-minded in the sense that William James used that phrase. They had no illusions about human nature and human society. Their decision to break with England, and their later decision to write the Constitution, were calculated risks. They reasoned their way to revolution, and they well knew that all that they had done, and all that they had risked, could come to ruin.
And still that American project could come to ruin. It is a work in progress, and though it now has some history behind it, as long as it continues in existence it shares in the uncertainty of all human things.
Recently in Transhumanism and Adaptive Radiation I wrote:
“If human freedom were something ideal and absolute, it would not be subject to revision as a consequence of technological change, or any change in contingent circumstances. But while we often think of freedom as an ideal, it is rather grounded in pragmatic realities of action. If a lever or an inclined plane make it possible for you to do something that it was impossible to do without them, then these machines have expanded the scope of human agency; more choices are available as a result, and the degrees of human freedom are multiplied.”
The same can be said of the social technologies of government: if you can do something with them that you cannot do without them, they have expanded the scope of human freedom. The hard-headed founders of the republic understood that freedom is grounded in the pragmatic realities of action. It was because of this that the American project has enjoyed the success that it has realized to date. And the freedoms that it facilitates are always subject to revision as the machinery of government evolves. Again, this freedom is not an ideal, but a practical reality.
It is not enough merely to keep the republic, as though preserved under glass. The trajectory of its evolution must be managed, so that it continues to facilitate freedom under the changing conditions to which it is subject. Freedom is subject to contingencies as the fate of the republic is subject to contingencies, and it too can come to ruin just as the republic could yet come to ruin. The challenge remains the same challenge Franklin threw back at his questioner: “If you can keep it.”
. . . . .
Happy 4th of July!
. . . . .
3 July 2015
Traditional units of measure
Quite some time ago in Linguistic Rationalization I discussed how the adoption of the metric system throughout much of the world meant the loss of traditional measuring systems that were intrinsic to the life of the people, part of the local technology of living, as it were. In that post I wrote:
“The gains that were derived from the standardization of weights and measures… did not come without a cost. Traditional weights and measures were central to the lives and the localities from which they emerged. These local systems of weights and measures were, until they were obliterated by the introduction of the metric system, a large part of local culture. With the metric system supplanting these traditional weights and measures, the traditional culture of which they were a part was dealt a decisive blow. This was not the kind of objection that men of the Enlightenment would have paused over, but with our experience of subsequent history it is the kind of thing that we think of today.”
Perhaps it is not the kind of thing many think of today; most people do not mourn the loss of traditional systems of measurement, but it should be recalled that these traditional systems of measurement were not arbitrary — they were based on the typical experience of individuals in a certain milieu, and they reflected the life and economy of a people, who measured the things that they needed to measure.
It is often noted that languages have an immediate relation to the life of a people — the most common example cited is that of the number of words for snow in the languages of the native peoples of the far north. Weights and measures — in a sense, the language of commerce — also reflect the life of a people in the same immediate way as their vocabulary. Language and measurement are linked: much of the earliest writing preserved from the Fertile Crescent consists of simple accounting of warehouse stores.
A particular example can illustrate what I have in mind. It is common to give the measurement of horses in hands. The hand as a unit of measurement has been standardized as four inches, but it is obvious that the unit derives from the human hand. Everyone has an admittedly vague idea of the average size of a human hand, and this gives an anthropocentric measurement of horses, which have been crucial to many if not most human economies. The unit of a hand is intuitive and practical, and it continues to be used by individuals who work with horses. It is, indeed, part of the “lore” of horsemanship. Many traditional units of measurement are like this: derived from the human body — as Protagoras said, man is the measure of all things — they are intuitive and part of the lore of a tradition. To replace these traditional units has a certain economic rationale, but there is a loss if that replacement is successful. More often (as in measuring horses today), both traditional and SI units are employed.
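The coexistence of traditional and SI units in horsemanship is easy to make concrete. A minimal sketch (the function names here are mine, invented for illustration): since the hand is standardized as four inches, and the inch is defined as exactly 2.54 cm, conversion is a single multiplication. By convention a horse's height is written as, e.g., "15.2 hh," where the digit after the point counts additional inches (0 to 3), not tenths.

```python
# 1 hand = 4 inches; 1 inch = 2.54 cm exactly, so 1 hand = 10.16 cm.
CM_PER_INCH = 2.54
CM_PER_HAND = 4 * CM_PER_INCH  # 10.16 cm

def hands_to_cm(hands: float) -> float:
    """Convert a height given in hands to centimetres."""
    return hands * CM_PER_HAND

def hh_to_cm(whole_hands: int, extra_inches: int) -> float:
    """Convert the traditional 'hands high' notation (e.g. 15.2 hh =
    15 hands 2 inches; the second figure counts inches, 0-3)."""
    return whole_hands * CM_PER_HAND + extra_inches * CM_PER_INCH

# A 15.2 hh horse, reported in both traditional and SI units:
print(f"15.2 hh = {hh_to_cm(15, 2):.2f} cm")  # 15.2 hh = 157.48 cm
```

The point of the sketch is only that the traditional unit survives as the working vocabulary while the SI value sits behind it as a standard of last resort.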
Units of measure unique to a discipline
One response to the loss of traditional units is to define new units in terms of a system of weights and measures — today, usually the metric system — that reflect the particular concerns of a given discipline. Having a unit of measurement peculiar to a discipline creates a jargon peculiar to that discipline, which is not necessarily a good thing. However, a unit of measurement unique to a discipline makes it possible to think in terms peculiar to the discipline. This “thinking one’s way into” some mode of thought is probably insufficiently appreciated, but it is quite common in the sciences. There are, for example, many different units that are used to measure energy. In principle, only one unit is necessary, and all units of measuring energy can be given a metric equivalent today, but it is not unusual for the energy of a furnace to be measured in BTUs while the energy of a particle accelerator is measured in electronvolts (eV).
For a science of civilization there must be quantifiable measurements, and quantifiable measurements imply a unit of measure. It is a relatively simple matter to employ (or, if you like, to exapt) existing units of measurement for an unanticipated field of research, but it is also possible to formulate new units of measurement specific to a scientific research program — units that are explicitly conceived and applied with the peculiar object of study of the science in view. It is arguable that the introduction of a unit of measurement specific to civilization would contribute to the formulation of a conceptual framework that allows one to think in terms of civilization in a way not possible, for example, in the borrowed terminology of historiography or some other discipline.
Thinking our way into civilization
With this in mind, I would like to suggest the possibility of a unit of time specific to civilization. We already have terms for ten years (a decade), a hundred years (a century), and a thousand years (a millennium), so that it would make sense to employ a metric of years for the quantification of civilization. The basic unit of time in the metric system is the second, and we can of course define the year in terms of the number of seconds in a year. The measurement of time in terms of a year derives from natural cosmological cycles, like the measurement of time in terms of days. With the increase in the precision of atomic clocks, it became necessary to abandon the calibration of the second in terms of celestial events, and this calibration is now done in terms of nuclear physics. Nevertheless, the year, like the day, remains an anthropocentric unit of time that we all understand and that we are likely to continue to use.
Suppose we posit a period of a thousand years as the basic temporal unit for the measurement of civilization, and we call this unit the chronom. In other words, suppose we think of civilization in increments of 1,000 years. In the spirit of a decimal system we can define a series of units derived from the chronom by powers of ten. The chronom is 1,000 years or 10³ years; 1 decichronom is 10² years (a century), 1 centichronom is 10¹ years (a decade), and 1 millichronom is 10⁰ years, i.e., a single year. In the other direction, in increasing size, 1 decachronom is 10 chronom or 10,000 years (10⁴ years), 1 hectochronom is 100 chronom or 100,000 years (10⁵ years), 1 kilochronom is 1,000 chronom or 1,000,000 years (10⁶ years or 1.0 Ma, mega-annum), and thus we have arrived at the familiar motif of the million-year-old supercivilization. Continuing upward we eventually come to the megachronom, which is 1,000,000 chronom or 10⁹ years or 1.0 Ga, i.e., giga-annum, at which point we reach the billion-year-old supercivilizations discussed by Ray Norris (cf. How old is ET?).
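The powers-of-ten table above can be encoded in a few lines. This is a sketch of a hypothetical unit, not an established standard; the prefix values are the ordinary SI ones (milli = 10⁻³, kilo = 10³, and so on), applied to a base of 1,000 years.

```python
# Hypothetical "chronom" unit: 1 chronom = 1,000 years.
# The table applies standard SI prefix values to that base.
YEARS_PER_CHRONOM = 1_000.0

SI_PREFIX = {
    "milli": 1e-3,   # 1 millichronom = 1 year
    "centi": 1e-2,   # 10 years (a decade)
    "deci":  1e-1,   # 100 years (a century)
    "":      1e0,    # 1 chronom = 1,000 years
    "deca":  1e1,    # 10,000 years
    "hecto": 1e2,    # 100,000 years
    "kilo":  1e3,    # 1,000,000 years (1.0 Ma)
    "mega":  1e6,    # 1,000,000,000 years (1.0 Ga)
}

def to_years(value: float, prefix: str = "") -> float:
    """Convert a (prefixed) chronom quantity into years."""
    return value * SI_PREFIX[prefix] * YEARS_PER_CHRONOM

# The million-year-old supercivilization of the familiar motif is
# 1 kilochronom old; Norris's billion-year-old supercivilizations
# are 1 megachronom old.
print(to_years(1, "kilo"))  # 1000000.0
print(to_years(1, "mega"))  # 1000000000.0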
From such a starting point — and I am not suggesting that what I have written above should be the starting point; I have only given an illustration to suggest to the reader what might be possible — it would be possible to extrapolate further coherent units of measure. We would want to do so in terms of non-anthropocentric units, and, moreover, non-geocentric units. While the metric system is a great improvement (in terms of the standardization of scientific practice) over traditional units of measure, it is still a geocentric system of measure, geocentric in an extended sense, being derived from the dimensions of the Earth itself.
Traditional units of measurement were parochial; the metric system was based on the Earth itself, and so not unique to any nation-state, but still local in a cosmological sense. If we were to extrapolate a metric for civilization according to constants of nature (like the speed of light, or some property of matter such as is now exploited by atomic clocks), we would begin to formulate a non-anthropocentric set of units for civilization. A temporal metric for the quantitative study of civilization suggests the possibility of also having a spatial metric for the quantitative study of civilization. For example, a unit of space could be defined as the volume swept out by light traveling for some chronom-derived interval: light traveling for 1 millichronom (a single year) defines a sphere with a radius of one light year, which would entirely contain a civilization confined to the region of its star, while light traveling for a full chronom defines a sphere a thousand light years in radius. That could be a useful metric for spacefaring civilizations.
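The suggested spatial metric reduces to elementary geometry: light travels one light year per year by definition, so the light-travel time fixes the radius, and the volume follows. A hypothetical illustration (the function names are mine):

```python
import math

def light_radius_ly(years: float) -> float:
    """Radius, in light years, of the sphere swept out by light
    traveling for the given number of years (1 ly per year)."""
    return years

def light_volume_ly3(years: float) -> float:
    """Volume, in cubic light years, of that light-travel sphere."""
    return (4.0 / 3.0) * math.pi * light_radius_ly(years) ** 3

# One year (1 millichronom) of light travel: a sphere of radius 1 ly,
# enough to contain a civilization confined to the region of its star.
# A full chronom (1,000 years): a sphere of radius 1,000 ly.
print(f"{light_volume_ly3(1):.2f} ly^3")
print(f"{light_volume_ly3(1000):.2e} ly^3")
```

Scaling the light-travel time by a factor of 1,000 scales the enclosed volume by a factor of 10⁹, which gives some sense of how quickly such a spatial metric grows across the chronom scale.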
What would be the benefit of such a system to quantify civilization? As I noted above, a system of measurement unique to a discipline allows us to think in terms of the discipline. Units of measurement for the quantification of civilization would allow us to think our way into civilization, and so possibly to avoid some of the traditional prejudices of historiographical thinking which have dominated thinking about civilization so far. Moreover, a non-anthropocentric system of civilization metrics would allow us to think our way into a non-anthropocentric conception of civilization, which would better enable us to recognize other civilizations when we have the opportunity to seek them out.
What I am suggesting here is a process of defamiliarization by way of scientific metrics to take the measure of something so familiar — human civilization — that it is difficult for us to think of it in objective terms. Previously in Kierkegaard and Russell on Rigor I discussed how a defamiliarizing process can be a constituent of rigorous thought. In so far as we aspire to the study of civilization as a rigorous science, the defamiliarization of a scientific set of metrics for quantifying civilization can be a part of that effort.
. . . . .
28 June 2015
In several posts I have described what I called the STEM cycle, which typifies our industrial-technological civilization. The STEM cycle involves scientific discoveries employed in new technologies, which are in turn engineered into industries which supply new instruments to science resulting in further scientific discoveries. For more on the STEM cycle you can read my posts The Industrial-Technological Thesis, Industrial-Technological Disruption, The Open Loop of Industrial-Technological Civilization, Chronometry and the STEM Cycle, and The Institutionalization of the STEM Cycle.
Industrial-technological civilization is a species of the genus of scientific civilizations (on which cf. David Hume and Scientific Civilization and The Relevance of Philosophy of Science to Scientific Civilization). Ultimately, it is the systematic pursuit of science that drives industrial-technological civilization forward in its technological progress. While it is arguable whether contemporary civilization can be said to embody moral, aesthetic, or philosophical progress, it is unquestionable that it does embody technological progress, and, almost as an epiphenomenon, the growth of scientific knowledge. And while knowledge may not grow evenly across the entire range of human intellectual accomplishment, so that we cannot loosely speak of “intellectual progress,” we can unambiguously speak of scientific progress, which is tightly coupled with technological and industrial progress.
Now, it is a remarkable feature of science that there are no secrets in science. Science is out in the open, as it were (which is one reason the appeal to embargoed evidence is a fallacy). There are scientific mysteries, to be sure, but as I argued in Scientific Curiosity and Existential Need, scientific mysteries are fundamentally distinct from the religious mysteries that exercised such power over the human mind during the epoch of agrarian-ecclesiastical civilization. You can be certain that you have encountered a complete failure to understand the nature of science when you hear (or read) of scientific mysteries being assimilated to religious mysteries.
That there are no secrets in science has consequences for the warfare practiced by industrial-technological civilization, i.e., industrialized war based on the application of the scientific method to warfare and the exploitation of technological and industrial innovations. While all wars since the first global industrialized war have been industrialized wars, no wars since the end of the Second World War, now seventy years ago, have been mass wars (or, if you prefer, total wars), as a result of the devolution of warfare.
Today, for example, any competent chemist could produce phosgene or mustard gas, and anyone who cares to inform themselves can learn the basic principles and design of nuclear weapons. I made this point some time ago in Weapons Systems in an Age of High Technology: Nothing is Hidden. In that post I wrote:
Wittgenstein in his later work — no less pregnantly aphoristic than the Tractatus — said that nothing is hidden. And so it is in the age of industrial-technological civilization: Nothing is hidden. Everything is, in principle, out in the open and available for public inspection. This is the very essence of science, for science progresses through the repeatability of its results. That is to say, science is essentially an iterative enterprise.
Although science is out in the open, technology and engineering are (or can be made) proprietary. There is no secret science or sciences, but technologies and industrial engineering can be kept secret to a certain degree, though the closer they approximate science, the less secret they are.
I do not believe that this is well understood in our world, given the pronouncements and policies of our politicians. There are probably many who believe that science can be kept secret and pursued in secret. Human history is replete with examples of the sequestered development of weapons systems that rely upon scientific knowledge, from Greek Fire to the atom bomb. But if we take the most obvious example — the atomic bomb — we can easily see that the science is out in the open, even while the technological and engineering implementation of that science was kept secret, and is still kept secret today. However, while no nation-state that produces nuclear weapons makes its blueprints openly available, any competent technologist or engineer familiar with the relevant systems could probably design for themselves the triggering systems for an implosion device. Perhaps fewer could design the trigger for a hydrogen bomb — this came to Stanislaw Ulam in a moment of insight, and so represents a higher level of genius, but Andrei Sakharov also figured it out — however, a team assembled for the purpose would almost certainly hit on the right solution if given the time and resources.
Science nears optimality when it is practiced openly, in full view of an interested public, with its results published in journals that are read by many others working in the field. These others have their own ideas — whether to extend research already performed, to reproduce it, or to attempt to turn it on its head — and when they in turn pursue their research and publish their results, the field of knowledge grows. This process is exponentially duplicated and iterated in a scientific civilization, and so scientific knowledge grows.
When Lockheed’s Skunkworks recently announced that they were working on a compact fusion generator, many fusion scientists were irritated that the Skunkworks team did not publish their results. The fusion research effort is quite large and diverse (something I wrote about in One Hundred Years of Fusion), and there is an expectation that those working in the field will follow scientific practice. But, as with nuclear weapons, a lot is at stake in fusion energy. If a private firm can bring proprietary fusion electrical generation technology to market, it stands to be the first trillion dollar enterprise in human history. With the stakes that high, Lockheed’s Skunkworks keeps its research tightly controlled. But this same control slows down the process of science. If Lockheed opened its fusion research to peer review, and others sought to duplicate the results, the science would be driven forward faster, but Lockheed would stand to lose its monopoly on proprietary fusion technology.
Fusion science is out in the open — it is the same as nuclear science — but particular aspects and implementations of that science are pursued under conditions of industrial secrecy. There is no black and white line that separates fusion science from fusion technology research and fusion engineering. Each gradually fades over into the other, even when the core of each of science, technology, and engineering can be distinguished (this is an instance of what I call the Truncation Principle).
The stakes involved generate secrecy, and the secrecy involved generates industrial espionage. Perhaps the best known example of industrial espionage of the 20th century was the acquisition of the plans for the supersonic Concorde, which allowed the Russians to get their “Konkordski” TU-144 flying before the Concorde itself flew. Again, the science of flight and jet propulsion cannot be kept secret, but the technological and engineering implementations of that science can be hidden to some degree — although not perfectly. Supersonic, and now hypersonic, flight technology is a closely guarded secret of the military, but any enterprise with the funding and the mandate can eventually master the technology, and will eventually produce better technology and better engineering designs once the process is fully open.
Because science cannot be effectively practiced in private (it can be practiced, but will not be as good as a research program pursued jointly by a community of researchers), governments seek the control and interdiction of technologies and materials. Anyone can learn nuclear science, but it is very difficult to obtain fissionables. Any car manufacturer can buy their rival’s products, disassemble them, and reverse engineer their components, but patented technologies are protected by the court system for a certain period of time. But everything in this process is open to dispute. Different nation-states have different patent protection laws. When you add industrial espionage to constant attempts to game the system on an international level, there are few if any secrets even in proprietary technology and engineering.
The technologies that worry us the most — such as nuclear weapons — are purposefully retarded in their development by stringent secrecy and international laws and conventions. Moreover, mastering the nuclear fuel cycle requires substantial resources, which mostly limits such an undertaking to nation-states. Most nation-states want to go along to get along, so they accept the limitations on nuclear research and choose not to build nuclear weapons even if they possess the industrial infrastructure to do so. And now, since the end of the Cold War, even the nation-states with nuclear arsenals do not pursue the development of nuclear weapons technology; so-called “fourth generation nuclear weapons” may be pursued in the secrecy of government laboratories, but not with the kind of resources that would draw attention. It is very unlikely that they are actually being produced.
Why should we care that nuclear technology is purposefully slowed and regulated to the point of stifling innovation? Should we not consider ourselves fortunate that governments that seem to love warfare have at least limited the destruction of warfare by limiting nuclear weapons? Even the limitation of nuclear weapons comes at a cost. Just as there is no black and white line separating science, technology, and engineering, there is no black and white line that separates nuclear weapons research from other forms of research. By clamping down internationally on nuclear materials and nuclear research, the world has, for all practical purposes, shut down the possibility of nuclear rockets. Yes, there are a few firms researching nuclear rockets that can be fueled without the fissionables that could also be used to make bombs, but these research efforts are attempts to “design around” the interdictions of nuclear technology and nuclear materials.
We have today the science relevant to nuclear rocketry; to master this technology would require practical experience. It would mean producing numerous designs, testing them, and seeing what works best. What works best makes its way into the next iteration, which is then in its turn improved. This is the practical business of technology and engineering, and it cannot happen without an immersion in practical experience. But practical experience in nuclear rocketry is exactly what is missing, because the technology and materials are tightly controlled.
Thus we can already cite a clear instance of how existential risk mitigation becomes the loss of an existential opportunity. A demographically significant spacefaring industry would be an existential opportunity for humanity, but whether the nuclear rocket would have been the breakout technology that actualized this existential opportunity we do not know, and we may never know. Nuclear weapons were recognized early as an existential risk, and our response to this existential risk was to consciously and purposefully put a brake on the development of nuclear technology. Anyone who knows the history of nuclear rockets, of the NERVA and DUMBO programs, of the many interesting designs that were produced in the early 1960s, knows that this was an entire industry effectively strangled in the cradle, sacrificed to nuclear non-proliferation efforts as though to Moloch. Because science cannot be kept secret, entire industries must be banned.
. . . . .
18 June 2015
What is the waiting gambit? The waiting gambit is the idea that, if we wait for the right moment, conditions will be better (whether in the moral sense or the practical sense, or both) at a later time to undertake some initiative for which conditions now are not propitious. In other words, conditions for future initiatives will improve, but conditions are not right at the present time for these same initiatives. Our patience will be rewarded, if only we can forbear from action at the present moment. Good things come to those who wait.
I have previously written about the sociology of waiting in Epistemic Space: Mapping Time, in which I observed:
While I am sympathetic to Russell’s rationalism, I think that Bergson had a point in his critique of spatialization, but Bergson did not go far enough with this idea. Not only has there been a spatialization of time, there has also been a temporalization of space. We see this in the contemporary world in the prevalence of what I call transient spaces: spaces designed to pass through but not spaces in which to abide. Airports, laundromats, bus stations, and sidewalks are all transient spaces. The social consequences of industrialization that have forced us to abide by the regime of the calendar and the time clock by the very fact of quantifying time into discrete regions and apportioning them according to a schedule also force us to wait. The waiting room ought to be recognized as one of the central symbols of our age; the waiting room is par excellence the temporalization of space.
The waiting gambit on the largest scale, i.e., on the scale of civilization, is, quite simply, to transform the Earth entire into a waiting room, perpetually on the verge of the new world that lies beyond. Why wait, rather than act upon the future now? This deceptively simple question is quite difficult to answer adequately. I will attempt an answer, however, though it is not likely to be fully satisfying nor adequate to the subtlety of the problem. One reason this question is so complicated is that there are many dimensions of human experience that it addresses; the waiting gambit comes in many forms.
The most familiar form of the waiting gambit on the civilizational scale is the oft-heard claim that we cannot expect to go into space until we get our house in order here on Earth. “How can we spend money on space travel when we have such pressing problems here on Earth?” This gives to the waiting gambit a moral bite: we are not worthy to go into space, because there are still problems on Earth; we have to solve our problems on Earth first, and then we can think about going into space. But is there anyone who truly believes that this Earthly utopia will ever be realized? Isn’t it pretty clear by now that there will be no Earthly utopia, no point in time when all terrestrial problems will be solved, so that waiting for the coming of the Millennium in order to initiate a spacefaring effort is as much as saying that it will never happen? There is a fundamental contradiction involved in the idea that we can do nothing and become perfect in the meantime; if we do nothing, we will not become perfect, not now, not tomorrow, and not the day after tomorrow.
The waiting gambit in its moral form is not the only possibility. There is also the pragmatic rationalization of the waiting game: acting now is impractical; if we wait, it will be easier, less expensive, and more convenient to act. Certainly there is a tension between inefficiently constructing a space-based infrastructure at present — an option we have possessed since the middle of the twentieth century — or waiting for better technologies that will enable a much more efficient construction of space-based infrastructure. If we proceed at present, it may require diverting resources from other enterprises, but if we wait we may succumb to existential risk; to commit oneself to wait is more or less to commit oneself to a principled stagnation.
There is also the argument for waiting based on safety. To act now is unsafe, but if we wait, it will be safer to act in the future. As with the terrestrial utopia argument for waiting, the safety argument for waiting becomes an excuse never to act. As we become more affluent and more comfortable, what we identify as a danger, or an unacceptable imperfection in society, shifts to ever-more-subtle and elusive dangers, so that fear plays an increasingly disproportionate role: risks decrease while fear remains nearly constant. There will always be dangers, and even as the dangers are minimized they will grow in proportion until they seem overwhelming, hence there will always be reason to continue to wait rather than to act.
It is of the essence of the waiting gambit that many different rationalizations and justifications are employed for waiting. At each stage in the process when a new justification emerges, it seems like a rational and legitimate choice to continue to wait, but viewed from a larger perspective, it becomes apparent that the waiting is merely waiting for its own sake, and the transient excuses offered for waiting change even as we wait. Once waiting becomes normative, action becomes pathological.
Can an entire civilization wait? Would we not, in waiting, create a civilization of waiting, that is to say, a civilization constituted by waiting? I do not believe that an entire civilization can wait, all the while pretending that it is dedicated to some future good to be pursued only when the time is right.
Civilizations must be judged as the existentialists judged individuals. There is a passage from Sartre that I have quoted previously (in Existence Precedes Essence) that addresses this:
“…in reality and for the existentialist, there is no love apart from the deeds of love; no potentiality of love other than that which is manifested in loving; there is no genius other than that which is expressed in works of art. The genius of Proust is the totality of the works of Proust; the genius of Racine is the series of his tragedies, outside of which there is nothing. Why should we attribute to Racine the capacity to write yet another tragedy when that is precisely what he did not write? In life, a man commits himself, draws his own portrait and there is nothing but that portrait. No doubt this thought may seem comfortless to one who has not made a success of his life. On the other hand, it puts everyone in a position to understand that reality alone is reliable; that dreams, expectations and hopes serve to define a man only as deceptive dreams, abortive hopes, expectations unfulfilled; that is to say, they define him negatively, not positively.”
Jean-Paul Sartre, “Existentialism is a Humanism” (1946), translated by Philip Mairet
Similarly for civilizations: in history, a civilization commits itself, draws its own portrait, and at the end of the day there is nothing but that portrait. This is as much as saying that civilization has not an essence, but a history — something I earlier hinted at, following Ortega y Gasset in An Existentialist Philosophy of History. The principles of an existentialist philosophy of history, as with existential philosophy generally, can be adopted and adapted, mutatis mutandis, for an existentialist philosophy of civilization.
This is, as Sartre noted, a harsh standard by which to judge, whether judging an individual or a civilization. It is not comforting for those who employ the waiting gambit, whether in their own life or in the social life of a community. Nevertheless, we should accustom ourselves to the view that there is no civilization apart from the deeds of civilization. Reality alone is reliable.
. . . . .
. . . . .
. . . . .
. . . . .
In my previous post, The Study of Civilization as Rigorous Science, I drew upon examples from both Edmund Husserl and Bertrand Russell — the Godfathers, respectively, of contemporary continental and analytical philosophy — to illustrate some of the concerns of constituting a new science de novo, which is what a science of civilization must be.
In particular, I quoted Husserl to the effect that true science eschews “profundity” in favor of Cartesian clarity and distinctness. Since Husserl himself was none-too-clear a writer, his exposition of a distinction between profundity and clarity might not be especially clear. But another example occurred to me. There is a wonderful passage from Bertrand Russell in which he describes the experience of intellectual insight:
“Every one who has done any kind of creative work has experienced, in a greater or less degree, the state of mind in which, after long labour, truth, or beauty, appears, or seems to appear, in a sudden glory — it may be only about some small matter, or it may be about the universe. The experience is, at the moment, very convincing; doubt may come later, but at the time there is utter certainty. I think most of the best creative work, in art, in science, in literature, and in philosophy, has been the result of such a moment. Whether it comes to others as to me, I cannot say. For my part, I have found that, when I wish to write a book on some subject, I must first soak myself in detail, until all the separate parts of the subject matter are familiar; then, some day, if I am fortunate, I perceive the whole, with all its parts duly interrelated. After that, I only have to write down what I have seen. The nearest analogy is first walking all over a mountain in a mist, until every path and ridge and valley is separately familiar, and then, from a distance, seeing the mountain whole and clear in bright sunshine.”
Bertrand Russell, A History of Western Philosophy, CHAPTER XV, “The Theory of Ideas”
Russell returned to this metaphor of seeing a mountain whole after having wandered in the fog of the foothills on several occasions. For example:
“The time was one of intellectual intoxication. My sensations resembled those one has after climbing a mountain in a mist, when, on reaching the summit, the mist suddenly clears, and the country becomes visible for forty miles in every direction.”
Bertrand Russell, The Autobiography of Bertrand Russell: 1872-1914, Chapter 6, “Principia Mathematica”
“Philosophical progress seems to me analogous to the gradually increasing clarity of outline of a mountain approached through mist, which is vaguely visible at first, but even at last remains in some degree indistinct. What I have never been able to accept is that the mist itself conveys valuable elements of truth. There are those who think that clarity, because it is difficult and rare, should be suspect. The rejection of this view has been the deepest impulse in all my philosophical work.”
Bertrand Russell, The Basic Writings of Bertrand Russell, Preface
Russell’s description of intellectual illumination employing the metaphor of seeing a mountain whole is an example of what I have called the epistemic overview effect — being able to place the parts of knowledge within a larger epistemic whole gives us a context for understanding that is not possible when confined to any parochial, local, or limited perspective.
If we employ Russell’s metaphor to illustrate Husserl’s distinction between the profound and the pellucid, we immediately see that an attempt at exposition confined to wandering in the foothills, enshrouded in mist and fog, has the character of profundity; but when the sun breaks through, the fog lifts, and the mist evaporates, we see clearly and distinctly that which we had before known only imperfectly, and at that point we are able to give an exposition in terms of Cartesian clarity and distinctness. Russell’s insistence that he never thought the mist contained any valuable elements of truth is of a piece with Husserl’s eschewing of profundity.
Just so, a science of civilization should surprise us with unexpected vistas when we see the phenomenon of civilization whole, after having familiarized ourselves with each of its parts separately. When the moment of illumination comes, dispelling the mists of profundity, we realize that it is no loss at all to let go of the profundity that has, up to that time, been our only guide. The definitive formulation of a concept, a distinction, or a principle can suddenly cut through mists that we did not even realize were clouding our thoughts, revealing to us the perfect clarity that had eluded us up to that time. As Russell said that the rejection of the mist’s supposed truths “has been the deepest impulse in all my philosophical work,” so too this is the deepest impulse in my attempt to understand civilization.
. . . . .
. . . . .
. . . . .
. . . . .
8 June 2015
In several posts I have discussed the need for a science of civilization (cf., e.g., The Future Science of Civilizations), and this is a theme I intend to continue to pursue in future posts. It is no small matter to constitute a new science where none has existed, and to constitute a new science for an object of knowledge as complex as civilization is a daunting task.
The problem of constituting a science of civilization, de novo for all intents and purposes, may be seen in the light of Husserl’s attempt to constitute (or re-constitute) philosophy as a rigorous science, which was a touchstone of Husserl’s work. Here is a passage from Husserl’s programmatic essay, “Philosophy as Strict Science” (variously translated) in which Husserl distinguishes between profundity and intelligibility:
“Profundity is the symptom of a chaos which true science must strive to resolve into a cosmos, i.e., into a simple, unequivocal, pellucid order. True science, insofar as it has become definable doctrine, knows no profundity. Every science, or part of a science, which has attained finality, is a coherent system of reasoning operations each of which is immediately intelligible; thus, not profound at all. Profundity is the concern of wisdom; that of methodical theory is conceptual clarity and distinctness. To reshape and transform the dark gropings of profundity into unequivocal, rational propositions: that is the essential act in methodically constituting a new science.”
Edmund Husserl, “Philosophy as Rigorous Science” in Phenomenology and the Crisis of Philosophy, edited by Quentin Lauer, New York: Harper, 1965 (originally “Philosophie als strenge Wissenschaft,” Logos, vol. I, 1911)
Recently re-reading this passage from Husserl’s essay I realized that much of what I have attempted in the way of “methodically constituting a new science” of civilization has taken the form of attempting to follow Husserl’s pursuit of “unequivocal, rational propositions” that eschew “the dark gropings of profundity.” I think much of the study of civilization, immersed as it is in history and historiography, has been subject more often to profound meditations (in the sense that Husserl gives to “profound”) than conceptual clarity and distinctness.
The Cartesian demand for clarity and distinctness is especially interesting in the context of constituting a science of civilization given Descartes’ famous disavowal of history (on which cf. the quote from Descartes in Big History and Scientific Historiography); if an historical inquiry is the basis of the study of civilization, and history consists of little more than fables, then a science of civilization becomes rather dubious. The emergence of scientific historiography, however, is relevant in this context.
The structure of Husserl’s essay is strikingly similar to the first lecture in Russell’s Our Knowledge of the External World. Both Russell and Husserl take up major philosophical movements of their time (and although the two were contemporaries, each took different examples — Husserl, naturalism, historicism, and Weltanschauung philosophy; Russell, idealism, which he calls “the classical tradition,” and evolutionism), primarily, it seems, to show how philosophy had gotten off on the wrong track. The two works can profitably be read side-by-side, as Russell is close to being an exemplar of the naturalism Husserl criticized, while Husserl is close to being an exemplar of the idealism that Russell criticized.
Despite the fundamental difference between Husserl and Russell, each had an idea of rigor and each attempted to realize it in his philosophical work, and each thought of that rigor as bringing the scientific spirit into philosophy. (In Kierkegaard and Russell on Rigor I discussed Russell’s conception of rigor and its surprising similarity to Kierkegaard’s thought.) Interestingly, however, the two did not criticize each other directly, though they were contemporaries and each knew of the other’s work.
The new science Russell was involved in constituting was mathematical logic, which Roman Ingarden explicitly tells us that Husserl found inadequate for the task of a scientific philosophy:
“It is maybe unexpected and surprising that Husserl who was trained as a mathematician did not seek salvation for philosophy in the mathematical method which had from time to time stood out like a beacon as an ideal worthy of imitation by philosophers. But mathematical logic could not satisfy him… above all he fought for responsibility in philosophical research and devoted many years to the elaboration of a method which, according to him, was to secure for philosophy the status of a science.”
Roman Ingarden, On the Motives which Led Husserl to Transcendental Idealism, Translated from the Polish by Arnor Hannibalsson, Den Haag: Martinus Nijhoff, 1975, p. 9.
Ingarden’s discussion of Husserl is instructive, in so far as he notes the influence of mathematical method upon Husserl’s thought, but also that Husserl did not try to employ a mathematical method directly in philosophy. Rather, Husserl invested his philosophical career in the formulation of a new methodology that would allow the values of rigorous scientific practice to be expressed in philosophy and through a philosophical method — a method that might be said to be parallel to or mirroring the mathematical method, or derived from the same thematic motives as those that inform mathematical methodology.
The same question is posed in considering the possibility of a rigorously scientific method in the study of civilization. If civilization is sui generis, is a sui generis methodology necessary to the formulation of a rigorous theory of civilization? Even if that methodology is not what we today know as the methodology of science, or even if that methodology does not precisely mirror the rigorous method of mathematics, there may be a way to reason rigorously about civilization, though it has yet to be given an explicit form.
The need to think rigorously about civilization I took up implicitly in Thinking about Civilization, Suboptimal Civilizations, and Addendum on Suboptimal Civilizations. (I considered the possibility of thinking rigorously about the human condition in The Human Condition Made Rigorous.) Ultimately I would like to make my implicit methodology explicit and so to provide a theoretical framework for the study of civilization.
Since theories of civilization have been, for the most part, either implicit or vague or both, there has been little theoretical framework to give shape or direction to the historical studies that have been central to the study of civilization to date. Thus the study of civilization has been a discipline adrift, without a proper research program, and without an explicit methodology.
There are at least two sides to the rigorous study of civilization: theoretical and empirical. The empirical study of civilization is familiar to us all in the form of history, but history studied as history, as opposed to history studied for what it can contribute to the theory of civilization, are two different things. One of the initial fundamental problems of the study of civilization is to disentangle civilization from history, which involves a formal rather than a material distinction, because both the study of civilization and the study of history draw from the same material resources.
How do we begin to formulate a science of civilization? It is often said that, while science begins with definitions, philosophy culminates in definitions. There is some truth to this, but when one is attempting to create a new discipline one must be both philosopher and scientist simultaneously, practicing a philosophical science or a scientific philosophy that approaches a definition even as it assumes a definition (admittedly vague) in order for the inquiry to begin. Husserl, clearly, and Russell also, could be counted among those striving for a scientific philosophy, while Einstein and Gödel could be counted among those practicing a philosophical science. All were engaged in the task of formulating new and unprecedented disciplines.
This division of labor between philosophy and science points to what Kant would have called the architectonic of knowledge. Husserl conceived this architectonic categorically, while we would now formulate the architectonic in hypothetico-deductive terms, and it is Husserl’s categorical conception of knowledge that ties him to the past and at times gives his thought an antiquated cast, but this is merely an historical contingency. Many of Husserl’s formulations are dated and openly appeal to a conception of science that no longer accords with what we would likely today think of as science, but in some respects Husserl grasps the perennial nature of science and what distinguishes the scientific mode of thought from non-scientific modes of thought.
Husserl’s conception of science is rooted in the conception of science already emergent in the ancient world in the work of Aristotle, Euclid, and Ptolemy, and which I described in Addendum on the Agrarian-Ecclesiastical Thesis. Russell’s conception of science is that of industrial-technological civilization, jointly emergent from the scientific revolution, the political revolutions of the eighteenth century, and the industrial revolution. With the overthrow of scholasticism as the basis of university curricula (which took hundreds of years following the scientific revolution before the process was complete), a new paradigm of science was to emerge and take shape. It was in this context that Husserl and Russell, Einstein and Gödel, pursued their research, employing a mixture of established traditional ideas and radically new ideas.
In a thorough re-reading of Husserl we could treat his conception of science as an exercise to be updated as we went along, substituting an hypothetico-deductive formulation for each and every one of Husserl’s categorical formulations, ultimately converging upon a scientific conception of knowledge more in accord with contemporary conceptions of scientific knowledge. At the end of this exercise, Husserl’s observation about the difference between science and profundity would still be intact, and would still be a valuable guide to the transformation of a profound chaos into a pellucid cosmos.
This ideal, and even more so the realization of this ideal, ultimately may not prove to be possible. Husserl himself in his later writings famously said, “Philosophy as science, as serious, rigorous, indeed apodictically rigorous, science — the dream is over.” (It is interesting to compare this metaphor of a dream to Kant’s claim that he was awoken from his dogmatic slumbers by Hume.) The impulse to science returns, eventually, even if the idea of an apodictically rigorous science has come to seem a mere dream. And once the impulse to science returns, the impulse to make that science rigorous will reassert itself in time. Our rational nature asserts itself in and through this impulse, which is complementary to, rather than contradictory of, our animal nature. To pursue a rigorous science of civilization is ultimately as human as the satisfaction of any other impulse characteristic of our species.
. . . . .
. . . . .
. . . . .
. . . . .
4 June 2015
Today marks the 26th anniversary of the Tiananmen massacre. In the past year it almost looked like similar sights would be repeated in Hong Kong, as the “Umbrella Revolution” protesters showed an early resolve and seemed to be making some headway. But the regime in Beijing kept its cool and a certain patience, and simply waited out the protesters. Perhaps the protesters will return, but they will have a difficult time regaining the momentum of that historical moment. It would take another incident of some significance to spark further unrest in Hong Kong. The Chinese state has both the patience and the economic momentum to dictate its version of events. Hence the importance of maintaining the June 4th incident in living memory.
Just yesterday I was talking to a Chinese friend and I opined that, with the growth of the Chinese economy and Chinese citizens working all over the world, the government might have increasing difficulty in maintaining its regime of control over information within the Chinese mainland. I was told that it is not difficult to make the transition between what you can say in China and what you can’t say in China, despite the relative freedom Chinese enjoy to say whatever they think when outside mainland China. One simply assumes the appropriate persona when in China. As a westerner, I have a difficult time accepting this, but the way in which it was described to me was perfectly authentic and I have no reason to doubt it.
Over the past weeks and months there have been many signs of China’s continued assumption of the role of a “responsible stakeholder” in the global community, such as its initial success in gaining the cooperation of other nation-states in the fledgling Asian Infrastructure Investment Bank (AIIB). The Financial Times last Tuesday noted, “…the IMF’s decision later this year about whether to include China in the basket of currencies from which it makes up its special-drawing-rights will be keenly watched.” (“What Fifa tells us about global power” by Gideon Rachman) The very idea of a global reserve currency that is not fully convertible and fully floating strikes me as nothing short of bizarre — since the value of the currency is not then determined by the markets, its value must be established politically — but that just goes to show you what economic power can achieve. And all of this takes place against the background of China’s ongoing land reclamation on small islands in the South China Sea, which is a source of significant tension. But the tension has not derailed the business deals.
If China’s grand strategy (or, rather, the grand strategy of the Chinese communist party) is to make China a global superpower with both hard power (military power projection capability) and soft power (social and cultural prestige), and to do so while retaining the communist party’s absolute grip on power (presumably assuming the legitimacy of that grip on power), one must acknowledge that this strategy has been on track successfully for decades. Assume, for purposes of argument, that this grand strategy continues successfully on track. I have to wonder if the Chinese communist party has a plan to eventually allow the history of the Tiananmen massacre to be known, once subsequent events have sufficiently changed the meaning of that event (by “proving” that the party was “right” because their policies led to the success of China, therefore their massacre should be excused as understandable in the service of a greater good), or is the memory of the Tiananmen massacre to be forever sequestered? Since the Chinese leadership has proved their ability to think big over the long term, I would guess that there must be internal documents that deal explicitly with this question, though I don’t suppose this internal debate will ever become public knowledge.
I have read many times, from many different sources, that young party members are set to study the lessons of the fall of dictators and one-party states elsewhere in the world. Perhaps they also study damaging historical revelations as carefully, and have developed a plan to manage knowledge of the Tiananmen massacre at some time in the future. It is not terribly difficult to imagine China attempting to use the soft power of the great many Confucius Institute franchises it has sponsored (480 worldwide at latest count) to slowly and gradually shape the discourse around China and the biggest PR disaster in the history of the Chinese communist party, paving the way to eventually opening a discussion of Tiananmen entirely on Chinese terms. I suppose that’s what I would do, if I was a member of the Standing Committee of the Central Politburo. But, again, I am a westerner and am liable to utterly misjudge Chinese motivations. I will, however, continue to wonder about their long game in relation to Tiananmen, and to look for signs in the tea leaves that will betray that game.
. . . . .
Previous posts on Tiananmen Anniversaries:
2013 A Dream Deferred
. . . . .
. . . . .
. . . . .
. . . . .
27 May 2015
Is it possible for human beings to care about the fate of strangers? This is at once a profound philosophical question and an immediately practical question. The most famous response to this question is perhaps that of John Donne:
“No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.”
John Donne, Devotions upon Emergent Occasions, XVII. Nunc lento sonitu dicunt, morieris. Now, this bell tolling softly for another, says to me: Thou must die.
Emmanuel Levinas spoke of “the community of those with nothing in common,” in an attempt to get at the human concern for other human beings of whom we know little or nothing. More recently, there is this from Bill Gates:
“When I talk to friends about global health, I often run into a strange paradox. The idea of saving one person’s life is profound and thrilling. But I’ve found that when you talk about saving millions of lives — it sounds almost meaningless. The higher the number, the harder it is to wrap your head around.”
Bill Gates, opening paragraph of An AIDS Number That’s Almost Too Big to Believe
Gates presents this as a paradox, but in social science it is a well-known and well-studied cognitive bias known as the identifiable victim effect. One researcher who has studied this cognitive bias is Paul Slovic, whose work was discussed by Sam Harris in the following passage:
“…when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.”
“Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this ‘psychic numbing’ explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed ‘genocide neglect’ — our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering — represents one of the more perplexing and consequential failures of our moral intuition.”
“Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.”
Sam Harris, The Moral Landscape, Chapter 2
Skip down another paragraph and Harris adds this:
“The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention.”
While Harris has not hesitated to court controversy, and speaks the truth plainly enough as he sees it, by failing to place what he characterizes as norms of moral reasoning in an evolutionary context he presents us with a paradox (the above section of the book is subtitled “Moral Paradox”). Really, this kind of cognitive bias only appears paradoxical when compared to a relatively recent conception of morality liberated from parochial in-group concerns.
For our ancestors, focusing on a single individual whose face is known had a high survival value for a small nomadic band, whereas a broadly humanitarian concern for all human beings would have been disastrous in equal measure. Today, in the context of industrial-technological civilization we can afford to love humanity; if our ancestors had loved humanity rather than particular individuals they knew well, they likely would have gone extinct.
Our evolutionary past has ill prepared us for the perplexities of population ethics in which the lives of millions may rest on our decisions. On the other hand, our evolutionary past has well prepared us for small group dynamics in which we immediately recognize everyone in our in-group and with equal immediacy identify anyone who is not part of our in-group and who therefore belongs to an out-group. We continue to behave as though our decisions were confined to a small band of individuals known to us, and the ability of contemporary telecommunications to project particular individuals into our personal lives as though we knew them, as if they were part of our in-group, plays into this cognitive bias.
While the explicit formulation of the identifiable victim effect is recent, the principle has been well known for hundreds of years at least, and has been as compellingly described in historical literature as in recent social science, as, for example, in Adam Smith:
“Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”
Adam Smith, Theory of Moral Sentiments, Part III, chapter 3, paragraph 4
And immediately after Hume made his famous claim that, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them,” he illustrated the claim with an observation similar to Smith’s:
“Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.”
David Hume, A Treatise of Human Nature, Book II, Part III, section 3
Bertrand Russell has well described how the expression of this cognitive bias can take on the conceit of moral superiority in the context of romanticism:
“Cultivated people in eighteenth-century France greatly admired what they called la sensibilité, which meant a proneness to emotion, and more particularly to the emotion of sympathy. To be thoroughly satisfactory, the emotion must be direct and violent and quite uninformed by thought. The man of sensibility would be moved to tears by the sight of a single destitute peasant family, but would be cold to well-thought-out schemes for ameliorating the lot of peasants as a class. The poor were supposed to possess more virtue than the rich; the sage was thought of as a man who retires from the corruption of courts to enjoy the peaceful pleasures of an unambitious rural existence.”
Bertrand Russell, A History of Western Philosophy, Part II. From Rousseau to the Present Day, CHAPTER XVIII “The Romantic Movement”
Russell’s account of romanticism provides some of the missing rationalization whereby a cognitive bias clearly at variance with norms of moral reasoning is justified as being the “higher” moral ground. Harris seems to suggest that, as soon as this violation of moral reasoning is pointed out to us, we will change. But we don’t change, for the most part. Our rationalizations change, but our behavior rarely does. And indeed studies of cognitive bias have revealed that even when experimental subjects are informed of cognitive biases that should be obvious ex post facto, most will continue to defend choices that unambiguously reflect cognitive bias.
I have personally experienced the attitude described by Russell (despite the fact that I have not lived in eighteenth-century France) more times than I care to recall, though I find myself temperamentally on the side of those formulating well-thought-out schemes for the amelioration of the lot of the destitute as a class, rather than those moved to tears by the sight of a single destitute family. From these personal experiences of mine, anecdotal evidence suggests to me that if you attempt to live by the quasi-utilitarianism advocated by Russell and Harris, others will regard you as cold, unfeeling, and lacking in the milk of human kindness.
The cognitive bias challenge to presumptive norms of moral reasoning is also a profound challenge to existential risk mitigation, since existential risk mitigation deals in the largest numbers of human lives saved, but is a well-thought-out scheme for ameliorating the lot of human beings as a class, and may therefore have little emotional appeal compared to putting an individual’s face on a problem and then broadcasting that face repetitively.
We have all heard that the past is a foreign country, and that they do things differently there. (This line comes from the 1953 novel The Go-Between by L. P. Hartley.) We are the past of some future that has yet to occur, and we will in turn be a foreign country to that future. And, by the same token, the future is a foreign country, and they do things differently there. Can we care about these foreigners with their foreign ways? Can we do more than care about them, and actually change our behavior in the present in order to ensure an ongoing future, however foreign that future is from our parochial concerns?
In Bostrom’s paper “Existential Risk Prevention as Global Priority” (Global Policy, Volume 4, Issue 1, February 2013) the author gives a lower bound of 10¹⁶ potential future lives saved by existential risk mitigation (though he also gives “a lower bound of 10⁵⁴ human-brain-emulation subjective life-years” as a possibility), but if the “collapse of compassion” is a function of the numbers involved, the higher the numbers we cite for individuals saved as a result of existential risk mitigation, the less will the average individual of today care.
Would it be possible to place an identifiable victim in the future? This is difficult, but we are all familiar with appeals to the world we leave to our children, and these are attempts to connect identifiable victims with actions that may prejudice the ability of human beings in the future to live lives of value commensurate with our own. It would be possible to construct some grand fiction, like Plato’s “noble lie,” in order to interest the mass of the public in existential risk mitigation, but this would not succeed unless it became some kind of quasi-religious belief, exempt from falsification, that served as the receptacle of our collective hopes. This does not seem very plausible (or sustainable) to me.
Are we left, then, to take the high road? To try to explain in painstaking (and off-putting) detail the violation of moral norms involved in our failure to adequately address existential risks, thereby putting our descendants in mortal danger? Certainly if an attempt to place an identifiable victim in the future is doomed to failure, we have no remaining option but the attempt at a moral intervention and relentless moral education that could transform the moral lives of humanity.
I do not think either of the above approaches to resolving the identifiable victim challenge to existential risk mitigation would be likely to be successful. I can put this more strongly yet: I think both approaches would almost certainly result in a backlash and would therefore be counter-productive to existential risk mitigation efforts. The only way forward that I can see is to present existential risk mitigation under the character of the adventure and exploration made possible by a spacefaring civilization that would, almost as an unintended consequence, secure the redundancy and autonomy of extraterrestrial centers of human civilization.
Human beings (at least as I know them) have a strong distaste for moral lectures and do not care to be told to do anything for their own good, but if you present them with the possibility of adventure and excitement that promises new experiences to every individual and possibly even the prospect of the extraterrestrial equivalent of a buried treasure, or even a pot of gold at the end of the rainbow, you might enlist the selfishness and greed of individuals in a great cause on behalf of Earth and all its inhabitants, so that each individual is moved, as it were, by an invisible hand to promote an end which was no part of his intention.
. . . . .
. . . . .
Existential Risk: The Philosophy of Human Survival
13. Existential Risk and Identifiable Victims
. . . . .
. . . . .
. . . . .
. . . . .
23 May 2015
In my recent post on Proxy War in Yemen I asserted that the concept of a proxy war, while primarily associated with the Cold War, can be applied to the war now being fought indirectly between Saudi Arabia and Iran in Yemen. A narrow conception of proxy wars would not have this application, and would be more confined to its original introduction and usage. Thus it could rightly be said that I was applying a broad conception of a proxy war. This was my intent.
What has been said above of proxy wars can also be said of war in general: that there are narrow and broad conceptions. Narrow conceptions are usually a function of a particular historical context of usage. If you asked an inhabitant of Periclean Athens to define war, they might have answered that war was a clash between hoplites from different city-states facing each other as a phalanx. For such a narrow conception of war, the innovations that Alexander introduced into the Macedonian phalanx might pose a definitional challenge: is it or is it not a phalanx, and is war employing this instrument a war, or something related to war through descent with modification?
In many contexts I have pursued the exposition of what I call the extended sense of a concept, in which a familiar concept is systematically subjected to variation, extrapolation, extension, and generalization in order to see how comprehensive a conception can be made. I have been influenced in this respect by Bertrand Russell, whose imperative to generalization I previously quoted in The Science of Time and The Genealogy of the Technium:
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
Open-textured concepts are best suited to Russellian generalization. What is an open-textured concept? Here is one account:
“According to Austin and Wittgenstein, words have clear conditions of application only against a background of ‘normal circumstances’ corresponding to the type of context in which the words were used in the past. There is no ‘convention’ to guide us as to whether or not a particular expression applies in some extraordinary situation. This is not because the meaning of the word is ‘vague’, but because the application of words ultimately depends on there being a sufficient similarity between the new situation of use and past situations. The relevant dimensions of similarity are not fixed once and for all; this is what generates ‘open texture’ (Waismann 1951).”
Routledge Encyclopedia of Philosophy, London and New York: Routledge, 1998, “Pragmatics”
More briefly, Stephen Barker wrote of open texture: “Our tendencies concerning the use of the word form a loosely knit pattern which does not definitely provide for all possibilities.” (Philosophy of Mathematics, “Introduction: The Open Texture of Language,” p. 11) Barker goes on to use the Copernican analysis of celestial motion as an example of open texture. If “move” means to change position relative to Earth, then certainly the Earth cannot, by definition, move. But what Copernicus did was to extend our conception of movement beyond the concept of movement that was limited to the special case of the surface of the Earth. One could say that Copernicus formulated an extended concept of motion.
It seems to me that war is a perfect example of an open-textured concept, and one that can readily (and indeed has been repeatedly) extended by changed circumstances. As civilization has grown, war has grown — in scope, scale, fatality, and complexity. The growth of war has been twofold: 1) growth in the absolute size of war (quantitative), and 2) growth in the complexity and sophistication of war (qualitative). Once we understand that war is an open-textured concept, the Russellian imperative comes into play, and the philosophical impulse is to generalize war to the greatest possible extent and thus to arrive at an extended conception of warfare.
Recently in VE Day: Seventy Years I suggested the possibility of the existential viability of warfare, which sounds like an odd way to speak of war, as though we were concerned to maintain war in existence, when many if not most individuals view the extirpation of war as the goal of civilization. But war and civilization are coextensive, and this implies that the viability of war is linked to the viability of civilization. In the long ten-thousand-year history of agricultural civilization, warfare took many different and distinct forms. These different forms of warfare were driven by both quantitative and qualitative growth in war. The advent of industrialized warfare (cf. A Century of Industrialized Warfare) forced us once again to expand the scope and scale of what we call war.
Industrialized warfare coincided with the social consequences of industrialization — the growth of conurbations, mass communications, rapid transportation, and popular sovereignty, inter alia — and all of these developments forced warfare to become mass war fought by mass man. Industrialization allowed for a rapid increase in scale that outstripped qualitative development, and this almost exclusively quantitative increase in warfare gave us the concept of total war. (The idea of total war preceded that of industrialization, but I would argue that the term only came into its proper significance in the wake of mass war, i.e., that industrialized mass war is the natural teleology of the concept of total war.)
Industrialized total war did not persist long; if it had, we would have destroyed ourselves. Thus the rapid development of total war executed a perfect dialectical inversion and gave us the contemporary conception of limited war. We don’t even talk in terms of “limited war” any more because all wars are limited. An unlimited war today — total war — would be too devastating to contemplate. During the Cold War, a common euphemism for the MAD scenario of a massive nuclear exchange was “the unthinkable.” Of course, some did think the unthinkable, and they in turn became symbolic of an unmentionable engagement with the unthinkable (Curtis LeMay and Herman Kahn come to mind in this respect). The strange world of pervasive yet limited conflict to which we have now become accustomed has no place for total war, but it is perhaps no less strange than the paradigm of warfare that preceded it, consisting of mass conscript armies engaged in total industrialized warfare between nation-states.
Yet we have found countless ways to wage limited wars, with new conceptions of war appearing regularly with changes in technology and social organization. There is proxy war, guerrilla war, irregular war, asymmetrical warfare, swarm warfare, and so on. Perhaps the most recent extension of the concept of war is that of hybrid warfare, which has received much attention lately. (Russian actions in east Ukraine are often characterized in terms of hybrid warfare.) It is arguable that the many “experiments” with limited war following the end of the period of industrialized total war have qualitatively expanded and extended our conception of war in a way parallel to the quantitative expansion and extension of our conception of war driven by industrialization. Thus hybrid war, or some successor to hybrid war that is yet to be visited upon us (through descent with modification), may be understood as the qualitative form of total war.
Hybrid warfare is an illustration of how the scope and scale of warfare are related and can come to permeate society even when war is not “total” in the sense used prior to nuclear weapons (i.e., the quantitative sense of total war). The duration of the local and limited wars we have managed to fight under the nuclear umbrella is limited only by the willingness of participants to engage in long-term low-intensity warfare. We have learned much from this experience. While the world wars of the first half of the twentieth century taught us that democratic nation-states could field armies of millions and project unprecedented power for a few years’ duration, the local and limited wars of the second half of the twentieth century taught us that democratic nation-states cannot sustain long-term warfare. Whatever the initial war enthusiasm, the populace grows tired of it, and eventually turns against it. If wars are to be fought, they must be fought within the political constraints of the form of social organization available in any given historical period.
On the other side, national insurgencies often possess a willingness to continue fighting virtually indefinitely (there has been insurgent conflict in Colombia for almost a half century, i.e., the entire period of post-industrialized total war), but when these groups come to realize that, despite their nationalist aspirations, they have been used as the pawns in someone else’s war (i.e., they have been serving someone else’s national aspirations), they are as likely to switch sides as not. Moreover, civil governance following long civil wars — regardless of which side in the conflict wins, if in fact any side wins — is almost always disastrous, and low-intensity warfare is essentially traded for high-intensity civil strife. Police do the killing instead of soldiers (but many of the police are former soldiers).
As warfare becomes pervasively represented throughout the culture, it represents the return (for it has occurred many times in human history) of warfare as a cultural activity, something I discussed in an early post Civilization and War as Social Technologies, i.e., war is a social technology, like civilization, that allows us to do certain things and to accomplish certain ends. For example, war is a decision procedure among nation-states who can agree upon nothing except that they will not allow a local and limited war to grow into a general and total war.
Warfare has, once again, adapted to changed conditions and thereby demonstrated its existential viability when war itself has risen to the level of an existential risk to the species and our civilization.
. . . . .
. . . . .
. . . . .
. . . . .