27 February 2017
In my previous post, Do the clever animals have to die?, I considered the “ultimate concern” (to borrow a phrase from Paul Tillich) of existential risk mitigation: the survival of life and other emergent complexities beyond the habitability of its homeworld or home planetary system. While a planetary system could be inhabited for hundreds of millions of years in most cases, and possibly for billions or tens of billions of years (the latter in the case of red dwarf stars, as in the recently discovered planetary system at TRAPPIST-1, which appears to be a young star with a long history ahead of it), there are yet many events that could render a homeworld or an entire planetary system uninhabitable, or which could be sufficiently catastrophic that a civilization clustered in the vicinity of a single star would almost certainly be extirpated by them (e.g., a sufficiently large gamma ray burst, GRB, from outside our solar system, or a sufficiently large coronal mass ejection, CME, from within our solar system).
Because any civilization that endures for cosmologically significant periods of time must have established multiple independent centers of civilization, and will probably have survived its homeworld having become uninhabitable, advanced civilizations may view this condition as definitive of a mature civilization. Having insured themselves against existential threats by establishing multiple independent centers of civilization, these advanced civilizations may not regard as a “peer” (i.e., not regard as a fellow advanced civilization) any civilization that still remains tightly-coupled to its homeworld.
It nevertheless may be the case (if there are, or will be, multiple examples of advanced civilizations) that some civilizations choose to remain tightly-coupled to their homeworlds. We can posit this as the condition of a certain kind of civilization. In the question and answer segment following my 2015 talk, What kind of civilizations build starships? a member of the audience, Alex Sherwood, suggested, in contradistinction to the expansion hypothesis, a constancy hypothesis, according to which a civilization does not expand and does not contract, but rather remains constant; I would prefer to call this the equilibrium hypothesis. One way in which a civilization might exemplify the constancy hypothesis would be for it to remain tightly-coupled to its homeworld.
Some subset of homeworld-coupled civilizations will probably experience extinction due to this choice. Such a homeworld-coupled civilization might choose, instead of establishing multiple independent centers of civilization as existential risk mitigation, to establish de-extinction and backup measures that would allow civilization to be restored on its homeworld despite any realized existential risks. However, while this approach to civilizational longevity may ensure the existence of a civilization over the billions of years of the life of its parent star, if a civilization does not want the historical accident of the age of its parent star to determine its ongoing viability, then it must abandon its homeworld and eventually also its home planetary system.
A civilization might continue to exemplify the equilibrium hypothesis by maintaining the unity and distinctiveness of its civilization despite needing to pursue megastructure-scale projects in order to ensure its ongoing existential viability. The idea of constructing a Shkadov thruster to move a star was partly inspired by this particular conception of the equilibrium hypothesis, as a star might, by this method, be moved to the vicinity of another, younger star, and the homeworld transferred into the orbit of that younger star. In this way, the civilization is de-coupled from its parent star, but the relationship to the homeworld remains exclusive. At yet another remove, an entire civilization might simply choose to pick up from its homeworld and transfer itself to another chosen world. (As an historical analogy, consider the ancient city of Knidos, which was founded on the Datça Peninsula, but as the city grew in size and wealth, the city fathers decided that they needed to start again, so they built themselves a new and grander city nearby, and moved the entire city to this new location.) This conception of the equilibrium hypothesis would de-couple a civilization from both parent star and homeworld, but could still maintain the civilization as a unique and distinctive whole, thus continuing that civilization in its equilibrium condition.
A civilization that establishes multiple independent centers of civilization (and thus, to some degree, exemplifies the expansion hypothesis) might still retain strong connections to its homeworld — only not the connection of dependency. Such civilizations fully independent of a homeworld might be said to be loosely-coupled to their homeworld, in contradistinction to civilizations tightly-coupled to their homeworld and exemplifying the equilibrium hypothesis. Expansionary civilizations might remain in close contact with a homeworld for as long as the homeworld was habitable, only to fully abandon it when the homeworld could no longer support life.
Eventually, as the climate changes and the continents move and the surface of Earth is entirely rearranged, as would be experienced by a billion-year-old civilization, almost all terrestrial cities and monuments will disappear, and even the familiar look of Earth will change until it eventually becomes unrecognizable. The heritage of terrestrial civilization might be preserved in part by moving entire monuments to other worlds, or to no world at all, but perhaps to a permanent artificial habitat that is not a planet. Terrestrial places might be recreated on other worlds (or, again, on no world at all) in a grand gesture of historical reconstruction.
There might be other surprising ways of preserving our terrestrial heritage, such as constructing projects that were conceived but never realized on Earth. For example, some future civilization might choose to build Étienne-Louis Boullée’s design for an enormous cenotaph commemorating Isaac Newton, or Antoni Gaudí’s unbuilt skyscraper, or indeed any number of projects conceived but never built. An entire city of unbuilt buildings could be constructed on other worlds — new cities, cities never before built, but cities in the tradition of our terrestrial heritage, maintaining the connection to our homeworld even while looking to a future de-coupled from that homeworld.
A civilization that outlasts its homeworld could be said to be de-coupled from its homeworld, though the homeworld will always be the origin of the intelligent agent that is the progenitor of a civilization, and hence a touchstone and a point of reference — like a hometown that one has left in order to pursue a career in the wider world. One would expect historical reconstruction and reenactment in order to maintain our intimacy with the past, which is, at the same time, our intimacy with our homeworld, should we become de-coupled from Earth. If humanity goes on to expand into the universe, establishing multiple independent centers of civilization and including gestures of respect to our terrestrial past in the form of reconstruction, then the eventual loss of the Earth to habitability may not come as such a devastating blow, since some trace of Earth will have been preserved.
When the uninhabitability of the Earth does become a definite prospect, and should civilization endure up to that time, that future civilization’s opportunities for historical preservation and conservation will be predicated upon the technological resources available at that time, and what conception of authenticity prevails in that future age. A civilization of sufficiently advanced technology might simply preserve its homeworld entire, as a kind of museum, moving it to wherever would be convenient in order to maintain it in some form in which it could be visited by antiquaries and eccentrics. Or such a future civilization might deem such preservation to be undesirable, and only certain artifacts would be removed before the planet entire was consumed by the sun as it expanded into a red giant star. In an emergency abandonment of Earth, what could be evacuated would be limited, and principles of selection therefore more rigorous — but also constrained by opportunity. In the event of emergency abandonment, there might also be the possibility of returning for salvage after the emergency had passed.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
28 June 2015
In several posts I have described what I called the STEM cycle, which typifies our industrial-technological civilization. The STEM cycle involves scientific discoveries employed in new technologies, which are in turn engineered into industries which supply new instruments to science resulting in further scientific discoveries. For more on the STEM cycle you can read my posts The Industrial-Technological Thesis, Industrial-Technological Disruption, The Open Loop of Industrial-Technological Civilization, Chronometry and the STEM Cycle, and The Institutionalization of the STEM Cycle.
Industrial-technological civilization is a species of the genus of scientific civilizations (on which cf. David Hume and Scientific Civilization and The Relevance of Philosophy of Science to Scientific Civilization). Ultimately, it is the systematic pursuit of science that drives industrial-technological civilization forward in its technological progress. While it is arguable whether contemporary civilization can be said to embody moral, aesthetic, or philosophical progress, it is unquestionable that it does embody technological progress, and, almost as an epiphenomenon, the growth of scientific knowledge. And while knowledge may not grow evenly across the entire range of human intellectual accomplishment, so that we can speak only loosely of “intellectual progress,” we can unambiguously speak of scientific progress, which is tightly-coupled with technological and industrial progress.
Now, it is a remarkable feature of science that there are no secrets in science. Science is out in the open, as it were (which is one reason the appeal to embargoed evidence is a fallacy). There are scientific mysteries, to be sure, but as I argued in Scientific Curiosity and Existential Need, scientific mysteries are fundamentally distinct from the religious mysteries that exercised such power over the human mind during the epoch of agrarian-ecclesiastical civilization. You can be certain that you have encountered a complete failure to understand the nature of science when you hear (or read) of scientific mysteries being assimilated to religious mysteries.
That there are no secrets in science has consequences for the warfare practiced by industrial-technological civilization, i.e., industrialized war based on the application of the scientific method to warfare and the exploitation of technological and industrial innovations. While, on the one hand, all wars since the first global industrialized war have been industrialized wars, on the other hand, since the end of the Second World War, now seventy years ago, no wars have been mass wars, or, if you prefer, total wars, as a result of the devolution of warfare.
Today, for example, any competent chemist could produce phosgene or mustard gas, and anyone who cares to inform themselves can learn the basic principles and design of nuclear weapons. I made this point some time ago in Weapons Systems in an Age of High Technology: Nothing is Hidden. In that post I wrote:
Wittgenstein in his later work — no less pregnantly aphoristic than the Tractatus — said that nothing is hidden. And so it is in the age of industrial-technological civilization: Nothing is hidden. Everything is, in principle, out in the open and available for public inspection. This is the very essence of science, for science progresses through the repeatability of its results. That is to say, science is essentially an iterative enterprise.
Although science is out in the open, technology and engineering are (or can be made) proprietary. There is no secret science or sciences, but technologies and industrial engineering can be kept secret to a certain degree, though the closer they approximate science, the less secret they are.
I do not believe that this is well understood in our world, given the pronouncements and policies of our politicians. There are probably many who believe that science can be kept secret and pursued in secret. Human history is replete with examples of the sequestered development of weapons systems that rely upon scientific knowledge, from Greek Fire to the atom bomb. But if we take the most obvious example — the atomic bomb — we can easily see that the science is out in the open, even while the technological and engineering implementation of that science was kept secret, and is still kept secret today. However, while no nation-state that produces nuclear weapons makes its blueprints openly available, any competent technologist or engineer familiar with the relevant systems could probably design for themselves the triggering systems for an implosion device. Perhaps fewer could design the trigger for a hydrogen bomb — this came to Stanislaw Ulam in a moment of insight, and so represents a higher level of genius, but Andrei Sakharov also figured it out — however, a team assembled for the purpose would almost certainly hit on the right solution if given the time and resources.
Science nears optimality when it is practiced openly, in full view of an interested public, with its results published in journals that are read by many others working in the field. These others have their own ideas — whether to extend research already performed, to reproduce it, or to attempt to turn it on its head — and when they in turn pursue their research and publish their results, the field of knowledge grows. This process is exponentially duplicated and iterated in a scientific civilization, and so scientific knowledge grows.
When Lockheed’s Skunkworks recently announced that they were working on a compact fusion generator, many fusion scientists were irritated that the Skunkworks team did not publish their results. The fusion research effort is quite large and diverse (something I wrote about in One Hundred Years of Fusion), and there is an expectation that those working in the field will follow scientific practice. But, as with nuclear weapons, a lot is at stake in fusion energy. If a private firm can bring proprietary fusion electrical generation technology to market, it stands to be the first trillion dollar enterprise in human history. With the stakes that high, Lockheed’s Skunkworks keeps its research tightly controlled. But this same control slows down the process of science. If Lockheed opened its fusion research to peer review, and others sought to duplicate the results, the science would be driven forward faster, but Lockheed would stand to lose its monopoly on proprietary fusion technology.
Fusion science is out in the open — it is the same as nuclear science — but particular aspects and implementations of that science are pursued under conditions of industrial secrecy. There is no black and white line that separates fusion science from fusion technology research and fusion engineering. Each gradually fades over into the other, even when the core of each of science, technology, and engineering can be distinguished (this is an instance of what I call the Truncation Principle).
The stakes involved generate secrecy, and the secrecy involved generates industrial espionage. Perhaps the best known example of industrial espionage of the 20th century was the acquisition of the plans for the supersonic Concorde, which allowed the Russians to get their “Konkordski” TU-144 flying before the Concorde itself flew. Again, the science of flight and jet propulsion cannot be kept secret, but the technological and engineering implementations of that science can be hidden to some degree — although not perfectly. Supersonic, and now hypersonic, flight technology is a closely guarded secret of the military, but any enterprise with the funding and the mandate can eventually master the technology, and will eventually produce better technology and better engineering designs once the process is fully open.
Because science cannot be effectively practiced in private (it can be practiced, but will not be as good as a research program pursued jointly by a community of researchers), governments seek the control and interdiction of technologies and materials. Anyone can learn nuclear science, but it is very difficult to obtain fissionables. Any car manufacturer can buy their rival’s products, disassemble them, and reverse engineer their components, but patented technologies are protected by the court system for a certain period of time. But everything in this process is open to dispute. Different nation-states have different patent protection laws. When you add industrial espionage to constant attempts to game the system on an international level, there are few if any secrets even in proprietary technology and engineering.
The technologies that worry us the most — such as nuclear weapons — are purposefully retarded in their development by stringent secrecy and international laws and conventions. Moreover, mastering the nuclear fuel cycle requires substantial resources, which mostly limits such an undertaking to nation-states. Most nation-states want to go along to get along, so they accept the limitations on nuclear research and choose not to build nuclear weapons even if they possess the industrial infrastructure to do so. And now, since the end of the Cold War, even the nation-states with nuclear arsenals do not pursue the development of nuclear technology; so-called “fourth generation nuclear weapons” may be pursued in the secrecy of government laboratories, but not with the kind of resources that would draw attention. It is very unlikely that they are actually being produced.
Why should we care that nuclear technology is purposefully slowed and regulated to the point of stifling innovation? Should we not consider ourselves fortunate that governments that seem to love warfare have at least limited the destruction of warfare by limiting nuclear weapons? Even the limitation of nuclear weapons comes at a cost. Just as there is no black and white line separating science, technology, and engineering, there is no black and white line that separates nuclear weapons research from other forms of research. By clamping down internationally on nuclear materials and nuclear research, the world has, for all practical purposes, shut down the possibility of nuclear rockets. Yes, there are a few firms researching nuclear rockets that can be fueled without the fissionables that could also be used to make bombs, but these research efforts are attempts to “design around” the interdictions of nuclear technology and nuclear materials.
We have today the science relevant to nuclear rocketry; to master this technology would require practical experience. It would mean producing numerous designs, testing them, and seeing what works best. What works best makes its way into the next iteration, which is then in its turn improved. This is the practical business of technology and engineering, and it cannot happen without an immersion into practical experience. But the practical experience in nuclear rocketry is exactly what is missing, because the technology and materials are tightly controlled.
Thus we already can cite a clear instance of how existential risk mitigation becomes the loss of an existential opportunity. A demographically significant spacefaring industry would be an existential opportunity for humanity, but whether the nuclear rocket would have been the breakout technology to actualize this existential opportunity we do not know, and we may never know. Nuclear weapons were early recognized as an existential risk, and our response to this existential risk was to consciously and purposefully put a brake on the development of nuclear technology. Anyone who knows the history of nuclear rockets, of the NERVA and DUMBO programs, of the many interesting designs that were produced in the early 1960s, knows that this was an entire industry effectively strangled in the cradle, sacrificed to nuclear non-proliferation efforts as though to Moloch. Because science cannot be kept secret, entire industries must be banned.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
18 June 2015
What is the waiting gambit? The waiting gambit is the idea that, if we wait for the right moment, conditions will be better (whether in the moral sense or the practical sense, or both) at a later time to undertake some initiative for which conditions now are not propitious. In other words, conditions for future initiatives will improve, but conditions are not right at the present time for these same initiatives. Our patience will be rewarded, if only we can forbear from action at the present moment. Good things come to those who wait.
I have previously written about the sociology of waiting in Epistemic Space: Mapping Time, in which I observed:
While I am sympathetic to Russell’s rationalism, I think that Bergson had a point in his critique of spatialization, but Bergson did not go far enough with this idea. Not only has there been a spatialization of time, there has also been a temporalization of space. We see this in the contemporary world in the prevalence of what I call transient spaces: spaces designed to pass through but not spaces in which to abide. Airports, laundromats, bus stations, and sidewalks are all transient spaces. The social consequences of industrialization that have forced us to abide by the regime of the calendar and the time clock by the very fact of quantifying time into discrete regions and apportioning them according to a schedule also force us to wait. The waiting room ought to be recognized as one of the central symbols of our age; the waiting room is par excellence the temporalization of space.
The waiting gambit on the largest scale, i.e., on the scale of civilization, is, quite simply, to transform the Earth entire into a waiting room, perpetually on the verge of the new world that lies beyond. Why wait, rather than act upon the future now? This deceptively simple question is quite difficult to answer adequately. I will attempt an answer, however, though it is not likely to be fully satisfying nor adequate to the subtlety of the problem. One reason this question is so complicated is that there are many dimensions of human experience that it addresses; the waiting gambit comes in many forms.
The most familiar form of the waiting gambit on the civilizational scale is the oft-heard claim that we cannot expect to go into space until we get our house in order here on Earth. “How can we spend money on space travel when we have such pressing problems here on Earth?” This gives to the waiting gambit a moral bite: we are not worthy to go into space, because there are still problems on Earth; we have to solve our problems on Earth first, and then we can think about going into space. But is there anyone who truly believes that this Earthly utopia will ever be realized? Isn’t it pretty clear by now that there will be no Earthly utopia, no point in time when all terrestrial problems will be solved, so that waiting for the coming of the Millennium in order to initiate a spacefaring effort is as much as saying that it will never happen? There is a fundamental contradiction involved in the idea that we can do nothing and become perfect in the meantime; if we do nothing, we will not become perfect, not now, not tomorrow, and not the day after tomorrow.
The waiting gambit in its moral form is not the only possibility. There is also the pragmatic rationalization of the waiting gambit: acting now is impractical; if we wait, it will be easier, less expensive, and more convenient to act. Certainly there is a tension between inefficiently constructing a space-based infrastructure at present — an option we have possessed since the middle of the twentieth century — or waiting for better technologies that will enable a much more efficient construction of space-based infrastructure. If we proceed at present, it may require diverting resources from other enterprises, but if we wait we may succumb to existential risk; to commit oneself to wait is more or less to commit oneself to a principled stagnation.
There is also the argument for waiting based on safety. To act now is unsafe, but if we wait, it will be safer to act in the future. As with the terrestrial utopia argument for waiting, the safety argument for waiting becomes an excuse never to act. As we become more affluent and more comfortable, what we identify as a danger, or an unacceptable imperfection in society, shifts to ever-more-subtle and elusive dangers, so that fear plays an increasingly disproportionate role: risks decrease while fear remains nearly constant. There will always be dangers, and even as the dangers are minimized they will grow in proportion until they seem overwhelming, hence there will always be reason to continue to wait rather than to act.
It is of the essence of the waiting gambit that many different rationalizations and justifications are employed for waiting. At each stage in the process when a new justification emerges, it seems like a rational and legitimate choice to continue to wait, but viewed from a larger perspective, it becomes apparent that the waiting is merely waiting for its own sake, and the transient excuses offered for waiting change even as we wait. Once waiting becomes normative, action becomes pathological.
Can an entire civilization wait? Would we not, in waiting, create a civilization of waiting, that is to say, a civilization constituted by waiting? I do not believe that an entire civilization can wait, all the while pretending that it is dedicated to some future good to be realized only when the time is right.
Civilizations must be judged as the existentialists judged individuals. There is a passage from Sartre that I have quoted previously (in Existence Precedes Essence) that addresses this:
“…in reality and for the existentialist, there is no love apart from the deeds of love; no potentiality of love other than that which is manifested in loving; there is no genius other than that which is expressed in works of art. The genius of Proust is the totality of the works of Proust; the genius of Racine is the series of his tragedies, outside of which there is nothing. Why should we attribute to Racine the capacity to write yet another tragedy when that is precisely what he did not write? In life, a man commits himself, draws his own portrait and there is nothing but that portrait. No doubt this thought may seem comfortless to one who has not made a success of his life. On the other hand, it puts everyone in a position to understand that reality alone is reliable; that dreams, expectations and hopes serve to define a man only as deceptive dreams, abortive hopes, expectations unfulfilled; that is to say, they define him negatively, not positively.”
Jean-Paul Sartre, “Existentialism is a Humanism” 1946, translated by Philip Mairet
Similarly for civilizations: in history, a civilization commits itself, draws its own portrait, and at the end of the day there is nothing but that portrait. This is as much as saying that civilization has not an essence, but a history — something I earlier hinted at, following Ortega y Gasset in An Existentialist Philosophy of History. The principles of an existentialist philosophy of history, as with existential philosophy generally, can be adopted and adapted, mutatis mutandis, for an existentialist philosophy of civilization.
This is, as Sartre noted, a harsh standard by which to judge, whether judging an individual or a civilization. It is not comforting for those who employ the waiting gambit, whether in their own life or in the social life of a community. Nevertheless, we should accustom ourselves to the view that there is no civilization apart from the deeds of civilization. Reality alone is reliable.
. . . . .
. . . . .
. . . . .
. . . . .
27 May 2015
Is it possible for human beings to care about the fate of strangers? This is at once a profound philosophical question and an immediately practical question. The most famous response to this question is perhaps that of John Donne:
“No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.”
John Donne, Devotions upon Emergent Occasions, XVII. Nunc lento sonitu dicunt, morieris. Now, this bell tolling softly for another, says to me: Thou must die.
Emmanuel Levinas spoke of “the community of those with nothing in common,” in an attempt to get at the human concern for other human beings of whom we know little or nothing. More recently, there is this from Bill Gates:
“When I talk to friends about global health, I often run into a strange paradox. The idea of saving one person’s life is profound and thrilling. But I’ve found that when you talk about saving millions of lives — it sounds almost meaningless. The higher the number, the harder it is to wrap your head around.”
Bill Gates, opening paragraph of An AIDS Number That’s Almost Too Big to Believe
Gates presents this as a paradox, but in social science it is a well-known and well-studied cognitive bias known as the identifiable victim effect. One researcher who has studied this cognitive bias is Paul Slovic, whose work was discussed by Sam Harris in the following passage:
“…when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.”
“Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this ‘psychic numbing’ explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed ‘genocide neglect’ — our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering — represents one of the more perplexing and consequential failures of our moral intuition.”
“Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.”
Sam Harris, The Moral Landscape, Chapter 2
Skip down another paragraph and Harris adds this:
“The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention.”
While Harris has not hesitated to court controversy, and speaks the truth plainly enough as he sees it, by failing to place what he characterizes as norms of moral reasoning in an evolutionary context he presents us with a paradox (the above section of the book is subtitled “Moral Paradox”). Really, this kind of cognitive bias only appears paradoxical when compared to a relatively recent conception of morality liberated from parochial in-group concerns.
For our ancestors, focusing on a single individual whose face is known had a high survival value for a small nomadic band, whereas a broadly humanitarian concern for all human beings would have been disastrous in equal measure. Today, in the context of industrial-technological civilization we can afford to love humanity; if our ancestors had loved humanity rather than particular individuals they knew well, they likely would have gone extinct.
Our evolutionary past has ill prepared us for the perplexities of population ethics in which the lives of millions may rest on our decisions. On the other hand, our evolutionary past has well prepared us for small group dynamics in which we immediately recognize everyone in our in-group and with equal immediacy identify anyone who is not part of our in-group and who therefore belongs to an out-group. We continue to behave as though our decisions were confined to a small band of individuals known to us, and the ability of contemporary telecommunications to project particular individuals into our personal lives as though we knew them, as if they were part of our in-group, plays into this cognitive bias.
While the explicit formulation of the identifiable victim effect is recent, the principle has been well known for hundreds of years at least, and has been as compellingly described in historical literature as in recent social science, as, for example, in Adam Smith:
“Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”
Adam Smith, Theory of Moral Sentiments, Part III, chapter 3, paragraph 4
And immediately after Hume made his famous claim that, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them,” he illustrated the claim with an observation similar to Smith’s:
“Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.”
David Hume, A Treatise of Human Nature, Book II, Part III, section 3
Bertrand Russell has well described how the expression of this cognitive bias can take on the conceit of moral superiority in the context of romanticism:
“Cultivated people in eighteenth-century France greatly admired what they called la sensibilité, which meant a proneness to emotion, and more particularly to the emotion of sympathy. To be thoroughly satisfactory, the emotion must be direct and violent and quite uninformed by thought. The man of sensibility would be moved to tears by the sight of a single destitute peasant family, but would be cold to well-thought-out schemes for ameliorating the lot of peasants as a class. The poor were supposed to possess more virtue than the rich; the sage was thought of as a man who retires from the corruption of courts to enjoy the peaceful pleasures of an unambitious rural existence.”
Bertrand Russell, A History of Western Philosophy, Part II, “From Rousseau to the Present Day,” Chapter XVIII, “The Romantic Movement”
Russell’s account of romanticism provides some of the missing rationalization whereby a cognitive bias clearly at variance with norms of moral reasoning is justified as being the “higher” moral ground. Harris seems to suggest that, as soon as this violation of moral reasoning is pointed out to us, we will change. But we don’t change, for the most part. Our rationalizations change, but our behavior rarely does. And indeed studies of cognitive bias have revealed that even when experimental subjects are informed of cognitive biases that should be obvious ex post facto, most will continue to defend choices that unambiguously reflect cognitive bias.
I have personally experienced the attitude described by Russell (despite the fact that I have not lived in eighteenth-century France) more times than I care to recall, though I find myself temperamentally on the side of those formulating well-thought-out schemes for the amelioration of the lot of the destitute as a class, rather than those moved to tears by the sight of a single destitute family. These personal experiences suggest to me, anecdotally, that if you attempt to live by the quasi-utilitarianism advocated by Russell and Harris, others will regard you as cold, unfeeling, and lacking in the milk of human kindness.
The cognitive bias challenge to presumptive norms of moral reasoning is also a profound challenge to existential risk mitigation, since existential risk mitigation deals in the largest numbers of human lives saved, but is a well-thought-out scheme for ameliorating the lot of human beings as a class, and may therefore have little emotional appeal compared to putting an individual’s face on a problem and then broadcasting that face repetitively.
We have all heard that the past is a foreign country, and that they do things differently there. (The line comes from the 1953 novel The Go-Between by L. P. Hartley.) We are the past of some future that has yet to occur, and we will in turn be a foreign country to that future. And, by the same token, the future is a foreign country, and they do things differently there. Can we care about these foreigners with their foreign ways? Can we do more than care about them, and actually change our behavior in the present in order to ensure an ongoing future, however foreign that future may be to our parochial concerns?
In Bostrom’s paper “Existential Risk Prevention as Global Priority” (Global Policy, Volume 4, Issue 1, February 2013) the author gives a lower bound of 10^16 potential future lives saved by existential risk mitigation (though he also gives “a lower bound of 10^54 human-brain-emulation subjective life-years” as a possibility), but if the “collapse of compassion” is a function of the numbers involved, the higher the numbers we cite for individuals saved as a result of existential risk mitigation, the less will the average individual of today care.
Would it be possible to place an identifiable victim in the future? This is difficult, but we are all familiar with appeals to the world we leave to our children, and these are attempts to connect identifiable victims with actions that may prejudice the ability of human beings in the future to live lives of value commensurate with our own. It would be possible to construct some grand fiction, like Plato’s “noble lie,” in order to interest the mass of the public in existential risk mitigation, but this would not be successful unless it became some kind of quasi-religious belief exempted from falsification, the receptacle of our collective hopes. This does not seem very plausible (or sustainable) to me.
Are we left, then, to take the high road? To try to explain in painstaking (and off-putting) detail the violation of moral norms involved in our failure to adequately address existential risks, thereby putting our descendants in mortal danger? Certainly if an attempt to place an identifiable victim in the future is doomed to failure, we have no remaining option but the attempt at a moral intervention and relentless moral education that could transform the moral lives of humanity.
I do not think either of the above approaches to resolving the identifiable victim challenge to existential risk mitigation would be likely to be successful. I can put this more strongly yet: I think both approaches would almost certainly result in a backlash and would therefore be counter-productive to existential risk mitigation efforts. The only way forward that I can see is to present existential risk mitigation under the character of the adventure and exploration made possible by a spacefaring civilization that would, almost as an unintended consequence, secure the redundancy and autonomy of extraterrestrial centers of human civilization.
Human beings (at least as I know them) have a strong distaste for moral lectures and do not care to be told to do anything for their own good, but if you present them with the possibility of adventure and excitement that promises new experiences to every individual, and possibly even the prospect of the extraterrestrial equivalent of a buried treasure, or a pot of gold at the end of the rainbow, you might enlist the selfishness and greed of individuals in a great cause on behalf of Earth and all its inhabitants, so that each individual is moved, as it were, by an invisible hand to promote an end which was no part of his intention.
. . . . .
Existential Risk: The Philosophy of Human Survival
13. Existential Risk and Identifiable Victims
. . . . .
25 April 2015
Thinking about civilization also entails thinking about compromised forms of civilization, as well as about the end of civilization. Ideally, a comprehensive theory of civilization would be able to account both for civilizations that flourish and prosper and for those that fail to flourish, and which stagnate, decline, or disappear, or which develop in an undesirable direction (flawed realization). One can think of stagnation and decline as selective or partial collapse; contrariwise, civilizational collapse can be understood as the totality of stagnation or decline (the fulfillment of decline, if you will, which shows that not only progress but also decay can be formulated in teleological terms).
In what follows I will adopt the term “suboptimal civilizations” to indicate those civilizations that have weathered existential threats and which have not gone extinct, but have continued in existence, albeit in a damaged, deformed, or otherwise compromised form due to being subject to stresses beyond that civilization’s level of resilience. A suboptimal civilization, then, is a civilization that has fallen prey to existential risk or risks, but is still extant.
A civilization may become extinct even when the species that produced that civilization has not gone extinct. Thus the extinction of civilizations is a separate and distinct question from that of the extinction of species. However, the extinction of a species is likely to be much more tightly coupled to the extinction of its civilization, though we could construct scenarios in which a civilization is continued by some other species, or some other agent, than the one that originated it. Generally speaking, those existential risks that lead to the extinction of a civilization are extinction and subsequent ruination; those existential risks that lead to suboptimal civilizations are stagnation and flawed realization.
There is a philosophical problem when it comes to judging civilizations of the past that have transitioned into contemporary forms of civilization, losing their identity in the process, but leaving a legacy in the form of a continuing influence. One way to deal with this problem is to distinguish between civilizations that attained maturity and those that did not. Is a civilization that failed to attain maturity because it was preempted by another form of civilization now to be considered extinct? The obvious example that I have in mind, and which I have cited numerous times, is that of early modern European civilization, which I have called modernism without industrialism, which rapidly was transformed by the industrial revolution, which latter preempted the “natural” development of modernity before that modernity had achieved maturity.
I will not attempt at present to define maturity for civilization, but my assumption will be that the maturity of a civilization will have something to do with the bringing to fulfillment of the essential idea of a civilization. I am not prepared to say how the essential idea of a civilization is to be identified, or how it is to be judged to have come to fulfillment, but this should be sufficient to give the reader an intuitive sense of what I have in mind.
The range of suboptimal civilizations, including those trapped in the social equivalent of neurotic misery, might be quite considerable. Toynbee formulated a range of concepts to understand suboptimal civilizations, including abortive civilizations, arrested civilizations, and fossil civilizations. Extrapolating from Toynbee’s conceptions of suboptimal civilizations, I formulated the idea of submerged civilizations in my post In the Shadow of Civilization.
Toynbee’s conceptions of suboptimal civilizations are imaginative and poetic, but they are more qualitative than quantitative. To study suboptimal civilizations in the spirit of science, we would want our comprehensive theory of civilization to incorporate quantifiable metrics for the success or failure of a civilization. At our present stage of social development, it is controversial to compare civilizational traditions and to rate any one tradition as “higher” or “more advanced” than any other (an idea I discussed in Comparative Concepts in the Study of Civilization): representatives of those civilizations that rate lower on any proposed scale are offended by the metric employed, and they will usually suggest alternative metrics by which their preferred civilizational tradition fares much better, while the tradition that fared better under the original metric will not come off as well by the alternative. The attempt by the nation-state of Bhutan to measure “gross national happiness” may be taken as an example of such an alternative metric, although I am not sure that it is a helpful measure.
It would also be desirable in a comprehensive theory of civilization to formulate metrics for the viability or sustainability of a given civilization. In some cases, metrics for the success of civilization might coincide with metrics for the viability of civilization, but the possibility of very long lived civilizations that are less than ideal — suboptimal civilizations — points out the limitations of defining civilizational success in terms of civilizational survival. In some cases viability and optimality will coincide, while in some cases they will not coincide, and suboptimal civilizations that survive existential risks in a compromised form will be an example of such non-coincidence. The survival of a stagnant civilization can be a matter of mere cosmic good fortune, whereby a particular planet enjoys an uncommonly clement cosmic climate for an uncharacteristically long period of time (while other contingent factors may mean that the climate for civilizational development to maturity is not equally clement).
There are many ways to explore the idea of suboptimal civilization; as was observed above, there are many ways for a civilization to languish in suboptimality. Indeed, it may be the case that the essential idea of a civilization has a much smaller class of circumstances in which that idea comes to full fruition and maturity, and a much larger class of circumstances in which that idea fails to mature, for any number of distinct reasons, so that suboptimal civilizations are likely to outnumber civilizations that have attained optimality.
There is another philosophical problem, related to the problem noted above, in identifying the continuity of a civilization, so that a later stage of development can be considered the fulfillment, or failure of fulfillment, of some earlier civilizational idea, and not the emergence of a new idea not yet brought to fulfillment. I have previously considered this problem in several posts on the invariant properties of civilization. If a civilization emerges that seems to lack heretofore invariant properties of civilization, is it to be identified as a new form of civilization, or as non-civilization? Another way to formulate the problem is to ask whether civilization is an open-textured concept. The problem is posed every time an unprecedented development occurs in the history of civilization, so that the problem re-emerges at every stage in the history of a tradition, since the unprecedented is always occurring in one form or another. Let me provide an example of what I mean by this claim.
Imagine, if you will (as a thought experiment), that there were social scientists prior to the scientific revolution who studied their contemporaneous society much as we study our own societies today, and further suppose that, despite the disadvantages such pre-modern social scientists would have labored under, they managed to assemble reasonably accurate data sets allowing them to model the world in which they lived and the history that had resulted in that world (that is, the world of modernism without industrialism).
If you were to show pre-modern social scientists the spike in demographics, technology, energy use, and urbanization that attended the industrial revolution they might deny that any such development was even possible, and if they admitted that it was possible, they might say that a world so transformed would not constitute civilization as they understood civilization. They would be right, in a sense, to characterize our world today, after the industrial revolution, as a post-civilizational institution, derived perhaps from the long tradition of civilization with which they were familiar, but not really a part of this tradition. I implied as much recently when I wrote that, “It could be argued that traditional society… has already collapsed and has been incrementally replaced by an entirely different kind of society. For this is surely what has happened in the wake of the industrial revolution, which destroyed more aspects of traditional society than any Marxist, any revolutionary, or any atheist.” (cf. Is society existentially dependent upon religion?)
The thought experiment that I have suggested here in regard to the industrial revolution could also be performed in regard to the Neolithic agricultural revolution, although in this case we could not properly speak of an ancient civilization. Humanity as a species might have attained a great antiquity and even have made use of its intellectual gifts without having passed through any stage of large-scale settlement. This is an especially interesting thought experiment when we reflect that the paradigmatically human activities of art and technology predate civilization and may be understood in isolation from civilization, and might have developed separately from civilization. The rate of technological innovation prior to the advent of civilization was very slow, but it was not zero, and extrapolated to a sufficient age it would have produced an impressive technology, though this would have taken an order of magnitude longer than it took as a result of the industrial revolution. Something like civilization, but not exactly civilization as we know it, might have emerged from a very old human society that had not adopted large-scale settlement and consequently the institutions of settled civilization.
This ancient human society that had never crossed the threshold of civilization proper — at least in some senses a suboptimal form of social organization, even if not a suboptimal civilization — suggests yet another thought experiment: an ancient civilization that, despite its antiquity, never passes the threshold to become a Kardashevian supercivilization. The motif of a million-year-old civilization is a common one; Kardashev called such civilizations “supercivilizations,” and Sagan often speculated on their histories. But what about the possibility of a million-year-old civilization that never develops technologically and never experiences an industrial revolution?
If we plot out the history of technology and population (among other metrics) on a graph and extrapolate from trends prior to the industrial revolution (when these metrics suddenly spike) we can easily see the possibility of a very old civilization — tens of thousands or hundreds of thousands of years old — that would be the result of a simple diachronic extrapolation of trends that had characterized human life from the emergence of hominids up until the industrial revolution. This is at least possible as a counter-factual, and conceivable by way of an analogy with our prehistoric past.
The very old civilization that would be the result of a straightforward diachronic extrapolation of civilization prior to the industrial revolution, given climatological conditions that allow for continual development, would be a civilization conceived in terms proportional to human history. We often forget that, prior to Homo sapiens, there is a multi-million-year history of hominids with minimal toolkits that changed hardly at all over a million or even two million years. The human condition need not change appreciably even over very long periods of time.
A million-year-old agricultural civilization would probably look much like a 2,000-year-old civilization, except that it would have a very long history, which means either a massive archive if continuity is maintained, or a lot of ruins and buried artifacts of the past if continuity has not been maintained. Would we have anything to learn from a million-year-old civilization that was not a supercivilization? Consider the possibility of art and literature a million years in development — the steady rate at which civilization prior to the industrial revolution produced masterpieces of art suggests that civilization without industrialization would be a very old agrarian civilization laden with a million years’ worth of art treasures. In this case a suboptimal civilization would be productive of values that would not and could not be achieved under an optimal civilization, which ought to make us question the optimality of optimal civilization where our presuppositions of optimality are drawn from industrialization.
. . . . .
23 December 2014
The Cold War forced us to think in global terms. In other words, it forced us to think in planetary terms. The planet was divided into two armed camps, with one camp led by the US presiding over NATO and the other camp led by the USSR presiding over the Warsaw Pact. Every action taken, or every action forborne, was weighed and judged against its planetary consequences, and this became most evident when faced with the ultimate Cold War nightmare, a massive nuclear exchange between the superpowers that came to be known as MAD, for mutually assured destruction. It is at least arguable that the idea of anthropogenic existential risk emerged from the Cold War MAD scenarios.
The visionary thinking of the Cold War period has been tainted by its association with what was then openly called “the unthinkable” — a massive thermonuclear exchange — but the true visionaries are not the ones who narrated a utopian fantasy that we would all have liked to believe, but rather the ones who unflinchingly explored the implications of what Karl Jaspers called “the new fact.” Anthropogenic extinction became technologically possible with the advent of the nuclear era, and because it was made possible, it became a pressing need to discuss it honestly. In this sense, the great visionaries of the recent past have been men like Giulio Douhet and Herman Kahn.
Douhet’s work predates the nuclear age, but Douhet was a great visionary of air power, and the extent to which Douhet understood that air power would change warfare is remarkable:
“No longer can areas exist in which life can be lived in safety and tranquility, nor can the battlefield any longer be limited to actual combatants. On the contrary, the battlefield will be limited only by the boundaries of the nations at war, and all of their citizens will become combatants, since all of them will be exposed to the aerial offensives of the enemy. There will be no distinction any longer between soldiers and civilians. The defenses on land and sea will no longer serve to protect the country behind them; nor can victory on land or sea protect the people from enemy aerial attacks unless that victory insures the destruction, by actual occupation of the enemy’s territory, of all that gives life to his aerial forces.”
Giulio Douhet, The Command of the Air, translated by Dino Ferrari, Washington D.C.: Air Force History and Museums Program, 1998, pp. 9-10
There have been many predictions for future warfare that have not been borne out in practice, but with hindsight we can see that Douhet was right about almost everything he predicted, and, more importantly, he was right for the right reasons. He saw, he understood, he drew the correct implications, and he laid out his vision in admirable clarity.
The Cold War standoff between the US and the USSR was a consequence of the implications of air power already glimpsed by Douhet (in 1921), and raised to a higher order of magnitude by advanced technology weapons systems. When Douhet wrote this work, there were as yet no jet engines, no ballistic missiles, and no nuclear weapons, but Douhet’s vision was so comprehensive and accurate that these major technological innovations did not alter the basic framework that he predicted. Citizens did become combatants, and the citizens of each side were held hostage by the other. This is the essence of the MAD scenario.
The increasing efficacy of nuclear weapons and their delivery systems did not substantially change Douhet’s framework, but by raising the stakes of destructiveness, nuclear weapons, jet bombers, and missiles did change the scope of warfare from mere localized destruction to a potential planetary catastrophe. Many scientists began to discuss the potential consequences for life and civilization of the use of nuclear weapons, and many of the physicists who worked on the Manhattan Project later felt misgivings for their role in releasing the nuclear genie from the bottle.
These concerns were not confined to western scientists. In an internal report to USSR leadership, Soviet nuclear physicist Igor Kurchatov wrote bluntly about the possibility of human extinction in the event of nuclear war:
“Calculations show that if, in the case of war, weapons that already exist are used, levels of radioactive emissions and concentration of radioactive substances, which are biologically harmful to human life and vegetation, will be created on a significant portion of the earth’s surface. The rate of growth of atomic explosives is such that in just a few years the stockpile will be large enough to create conditions under which the existence of life on the whole globe will be impossible. The explosion of around one hundred hydrogen bombs could lead to this result.”
“There is no hope that organisms, and the human organism in particular, will adjust themselves to higher levels of radioactivity on earth. This adjustment can take place only through a prolonged process of evolution. So we cannot but admit that mankind faces the enormous threat of an end to all life on earth.”
Igor Kurchatov, “The Danger of Atomic War,” 1954
Kurchatov’s formulations are striking in their unaffected naturalism and in the bluntness of the message that he sought to communicate. Even as Kurchatov wrote of the end of the world he avoided histrionics. His account of human extinction is what Colin McGinn might call “flatly natural.” A dispassionately scientific account of the end of the world is perhaps the more powerful for avoiding emotional and rhetorical excess.
The space age began three years after Kurchatov’s memo on the dangers of nuclear war, when Sputnik was launched on 04 October 1957. Thereafter a “space race” paralleled the arms race and became a new venue for superpower competition. Bertrand Russell, for example, was scathing in his righteous ridicule of the space program as being merely a symptom of the Cold War. (Chad Trainer has discussed this in Earth to Russell.)
It has become a commonplace of commentary on the Apollo missions that this was the occasion of an intellectual turning point in our collective self-understanding. The photograph of Earth taken from space on the way to the moon was a way to communicate some hint of the “overview effect” to the public. Again, we were forced to think in planetary terms by this new image of Earth hanging isolated against the blackness of space. Earth was achingly beautiful, we all saw, but also terribly vulnerable.
The Cold War arms race and space race came together during the latter part of the twentieth century in a kind of cosmic pessimism over the very possibility of the longevity of any civilization whatever, here extrapolated far beyond the Earth to the possibility of any other inhabited planet.
When Carl Sagan made his Cosmos: A Personal Voyage during the height of the Cold War, the concern over nuclear war was such that the term L in the Drake equation (the length of time a SETI-capable civilization is transmitting or receiving) was frequently judged to be quite short, only a few hundred years at most. This is given a poignant depiction in Carl Sagan’s Dream, described in the last episode of Cosmos.
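For readers who do not have the Drake equation immediately in mind, it is conventionally written as a product of seven factors, of which L is the last:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Here N is the number of civilizations in the galaxy whose transmissions we might detect, R* is the rate of star formation, and the intervening fractions winnow the stars down to those planets on which communicating civilizations actually arise. Since N is directly proportional to L, a Cold War-era estimate of L on the order of a few hundred years entails a correspondingly small N, and hence a galaxy nearly empty of detectable civilizations.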
It could be said that nuclear weapons and space exploration driven by political competition opened our eyes to our place in the cosmos in a way that might not have made a similar impression if the stakes had not been so high. Samuel Johnson is often quoted for his line, “Depend upon it, Sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.” Similarly it could be said that the Cold War and the nuclear arms race brought the whole of humanity face-to-face with extinction, and we pulled back from the brink. The danger is not over, but the human species has been changed by the experience of imminent destruction.
. . . . .
7 November 2014
Twentieth-century American analytical philosopher W. V. O. Quine said that, “Philosophy of science is philosophy enough.” (The Ways of Paradox, “Mr. Strawson on Logical Theory”) In so saying Quine was making explicit the de facto practice on which Anglo-American analytical philosophy was converging: if philosophy was going to be tolerated at all (even among professional philosophers!) it must delimit its horizons to science, as only in the conceptual clarification of science had philosophy any remaining role to play in the modern world. Philosophy of science was a preoccupation of philosophers throughout the twentieth century, from positivist formulations in the early part of the century, through post-positivist formulations, to the profoundly ambiguous reflections upon the rationality of science in Thomas Kuhn’s The Structure of Scientific Revolutions.
I have previously addressed the condition of contemporary philosophy in Philosophy Institutionalized, in which I noted that among the philosophical schools of our time, “there is a common thread, and that common thread is not at all difficult to discern: it is the relationship of thought to the relentless expansion of industrial-technological civilization.” I would like to take this idea a step further, and consider how philosophy might be both embedded in contemporary civilization and how it might look beyond the particular human condition of the present moment of history and also embrace something larger.
The position of philosophy in agrarian-ecclesiastical civilization was preeminent, second only to theology. India had a uniquely philosophical civilization in which schools of thought wildly proliferated and were elaborated over the course of hundreds of years. In those agrarian-ecclesiastical civilizations in which religion simpliciter was the organizing principle, initially crude religious ideas were eventually given sophisticated and subtle formulations in an advanced technical vocabulary largely derived from philosophy. Where the explicitly religious impulse was less prominent than the philosophical impulse, a philosophical civilization came into being, as in the Balkans and the eastern Mediterranean, starting with ancient Greece and its successor civilizations.
With the end of agrarian-ecclesiastical civilization, as it was preempted by industrial-technological civilization, this tradition of philosophical preeminence in intellectual inquiry was lost, and philosophy, no longer being central to the motivating imperatives of civilization, became progressively more and more marginalized, until today, when it is largely an intellectual whipping boy that scientists point out as an object lesson of how not to engage in intellectual activity.
“…science drives technology, technology drives industrial engineering, and industrial engineering creates new resources that allow science to be pursued at a larger scope and scale. In some cases the STEM cycle functions as a loosely-coupled structure of our world. The resources of advanced mathematics are necessary to the expression of physics in mathematicized form, but there may be no direct coupling of physics and mathematics, and the mathematics used in physics may have been available for generations. Pure science may suggest a number of technologies, many of which lie fallow, with no particular interest in them. One technology may eventually come into mass manufacture, but it may not be seen to have any initial impact on scientific research. All of these episodes seem de-coupled, and can only be understood as a loosely-coupled cycle when seen in the big picture over the long term. In the case of nuclear fusion, the STEM cycle is more tightly coupled: fusion science must be consciously developed with an eye to its application in various fusion technologies. The many specific technologies developed on the basis of fusion science are tested with an eye to which can be practically scaled up by industrial engineering to build a workable fusion power generation facility.”
Given the role of the STEM cycle in defining industrial-technological civilization, a robust philosophical engagement with the civilization of our time would mean a philosophy of science, a philosophy of technology, and a philosophy of engineering, as well as an overall philosophy of civilization that knits these together in a way that reflects the STEM cycle that unifies the three in industrial-technological civilization. Thus the twentieth century preoccupation with the philosophy of science can be understood as the first attempt to come to grips with the new form of civilization that had replaced the civilization of our rural, agricultural past.
This fits in well with the fact that the philosophy of technology has been booming in recent decades (partially driven by our technophilia), with philosophers of many different backgrounds and orientations — analytical philosophers, phenomenologists, existentialists, Marxists, and many others — equally interested in providing a philosophical commentary on this central feature of our contemporary world. I have myself written about the emergence of what I call techno-philosophy. The philosophy of engineering is a bit behind philosophy of science and philosophy of technology, but it is rapidly catching up, as philosophers realize that they have had little to say about this essential dimension of our contemporary world. The academic publisher Springer now has a series of books on the philosophy of engineering, Philosophy of Engineering and Technology. I would purchase more of these volumes if they weren’t prohibitively expensive.
Beyond the specialized disciplines of philosophy of science, philosophy of technology, and philosophy of engineering, there also needs to be a “big picture” engagement with the three loosely coupled together in the STEM cycle, and beyond this there needs to be a philosophical engagement with how our industrial-technological civilization is embedded in a larger historical context that includes different forms of civilization with profoundly different civilizational motifs and imperatives.
To address the latter need for a truly big picture philosophy, one that is not some backward-looking disinterment of Hegelian philosophy of history, but which engages with the world as we know it today, in the light of scientific rationality, we need a philosophy of history that understands history in terms of scientific historiography, which is how a scientific civilization grasps history and arrives at a self-understanding of its place in history.
Philosophical reflection upon existential risk partially serves as a reminder of the philosophical dimension of history and civilization, much as meditations on eternity during the period of agrarian-ecclesiastical civilization served as a reminder that life is more than the daily struggle to stay alive. In my post, What is an existential philosophy?, I wrote, “…coming to terms with existence from an existential perspective means coming to terms with Big History, which provides the ultimate (natural historical) context for ordinary experience and its object.”
What we need, then, for a vital and vigorous philosophy for industrial-technological civilization, is a philosophy of big history. I intend to do something about this — in fact, I am working on it now — though it is unlikely that anyone will take notice.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
2 November 2014
The word “precarity” is quite recent, and does not appear in the Oxford English Dictionary, but has appeared in the titles of several books. The term mostly derives from left-leaning organized labor, and has come into use to describe the lives of workers in precarious circumstances. Wikipedia defines precarity as “a condition of existence without predictability or security, affecting material or psychological welfare.”
Dorothy Day, writing in The Catholic Worker (coming from a context of both Catholic monasticism and labor activism), May 1952 (“Poverty and Precarity”), cites a certain “saintly priest… from Martinique,” now known to be Léonce Crenier, who is quoted as saying:
“True poverty is rare… Nowadays communities are good, I am sure, but they are mistaken about poverty. They accept, admit on principle, poverty, but everything must be good and strong, buildings must be fireproof. Precarity is rejected everywhere, and precarity is an essential element of poverty. That has been forgotten. Here we want precarity in everything except the church.”
Crenier had so absorbed and accepted the ideal of monastic poverty, like the Franciscans and the Poor Clares (or their modern equivalents such as Simone Weil and Christopher McCandless), that he didn’t merely tolerate poverty, he embraced and celebrated poverty. Elsewhere Father Crenier wrote, “I noticed that real poverty, where one misses so many things, attracts singular graces amongst the monks, and in particular spiritual peace and joy.” Given the ideal of poverty and its salutary effect upon the spiritual life, Crenier not only celebrated poverty, but also the condition in which the impoverished live, and this is precarity.
Recent studies have retained this leftist interest in the existential precarity of the lives of marginalized workers, but the monastic interest in poverty for the sake of an enhanced spiritual life has fallen away, and only the misery of precarity remains. Not only has the spiritual virtue of poverty been abandoned as an ideal, but it has, in a sense, been turned on its head, as the spiritual focus of poverty turns from its cultivation to its eradication. In this tradition, the recent sociology of Pippa Norris and Ronald Inglehart is especially interesting, as they have bucked the contemporary trend and given a new argument for secularization, which was once in vogue but has been very much out of favor since the rise of Islamic militancy as a political force in global politics. (I have myself argued that secularization had been too readily and quickly abandoned, and discussed the problem of secularization in relation to the confirmation and disconfirmation of ideas in history.)
Pippa Norris and Ronald Inglehart are perhaps best known for their book Sacred and Secular: Religion and Politics Worldwide. Their paper, Are high levels of existential security conducive to secularization? A response to our critics, is available online. They make the case that, despite the apparent rise of fundamentalist religious belief in the past several decades, and the anomalous instance of the US, which is wealthy and highly religious, it is not wealth itself that is a predictor of secularization, but rather what they call existential security (which may be considered the economic aspect of ontological security).
While Norris and Inglehart do not use the term “precarity,” clearly their argument is that existential precarity pushes individuals and communities toward the comforts of religion in the face of a hostile and unforgiving world: “…the public’s demand for transcendent religion varies systematically with levels of vulnerabilities to societal and personal risks and threats.” This really isn’t a novel thesis, as Marx pointed out long ago that societies created ideal worlds of justice when justice was denied them in this world, implying that when conditions in this world improve, there would be no need for imagined worlds of perfect justice. Being comfortably well off in the real world means there is little need to imagine comforts in another world.
Speaking on a purely personal (and anecdotal) basis, Norris and Inglehart’s thesis rings true in my experience. I have relatives in Scandinavia and have visited the region many times. There, where secularization has gone furthest, and the greater proportion of the population enjoys a high level of existential security, you can quite literally see the difference in people’s faces. In the US, people are hard-driving and seemingly always on edge; there is an underlying anxiety that I find very off-putting. But there is a good reason for this: people know that if they lose their jobs, they will possibly lose their homes and end up on the street. In Scandinavia, people look much more relaxed in their facial expressions, and they are not continually on the verge of flying into a rage. People are generally very confident about their lives and don’t worry much about the future.
One might think of the existential precarity of individuals as an ontogenic precarity, and this suggests the possibility of what might be called phylogenic precarity, or the existential precarity of social wholes. Fragile states exist in a condition of existential precarity. In such cases, there is a clear linkage between social precarity and individual precarity. In other cases, there may be no such linkage. It is possible that great individual precarity coexists with social stability, and social precarity may coexist with individual security. An example of the former is the contemporary US; an example of the latter would be some future society in which people are wealthy and comfortable but fail to see that their society is on the verge of collapse — like the Romans, say, in the second and third centuries AD.
The ultimate form of social precarity is the existential precarity of civilization. In some contexts it might be better to discuss the vulnerability and fragility of civilization in terms of existential precarity rather than existential risk or existential threat. I have previously observed that every existential risk is at the same time an existential opportunity, and vice versa (cf. Existential Risk and Existential Opportunity), so that the attempt to limit and contain existential risk may have the unintended consequence of limiting and containing existential opportunity. Thus the selfsame policies instituted for the sake of mitigating existential risk may contribute to the stagnation of civilization and therefore become a source of existential risk. The idea of existential precarity stands outside the dialectic of risk and opportunity, and therefore can provide us with an alternative formulation of existential risk.
How precarious is the life of civilized society? In some cases, social order seems to be balanced on a knife edge. During the 1981 Toxteth riots in Liverpool, which occurred in the wake of recession and high unemployment, as well as tension between the police and residents, Margaret Thatcher memorably said that, “The veneer of civilization is very thin.” But this is misleading. Urban riots are not a sign of the weakness of civilization, but are intrinsic to civilization itself, in the same way that war is intrinsic to civilization: it is not possible to have an urban riot without large-scale urban communities in the same way that it is not possible to have a war without the large-scale organizational resources of a state. Riots even occur in societies as stable as Sweden.
We can distinguish between the superficial precarity of a tense city that might erupt in riots at any time, which is the sort of precarity to which Margaret Thatcher referred, and a deeper, underlying precarity that does not manifest itself in terms of riots, overturned cars, and burned buildings, but in the sudden and inexplicable collapse of a social order that is not followed by immediate recovery. In considering the possibility of the existential precarity of civilization, what we really want to know is whether there is a social equivalent of the passenger pigeon population collapse and then extinction.
In the 19th century, the passenger pigeon was the most common bird in North America. Following hunting and habitat loss, the species experienced a catastrophic population collapse between 1870 and 1890, finally going extinct in 1914. Less than fifty years before the species went extinct, there was no reason to suspect that the species was endangered, or even seriously reduced in numbers. When the end came, it came quickly; somehow the entire species reached a tipping point and could not recover from its collapse. Could this happen to our own species? Could this happen to our civilization? Despite our numbers and our apparent resilience, might we have some existential Achilles’ heel, some essential precarity, incorporated into the human condition of which we are blissfully unaware? And, if we do have some essential vulnerability, is there a way to address this?
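The tipping-point intuition behind the passenger pigeon’s collapse can be made more precise with a standard model from population ecology, the strong Allee effect, in which per-capita growth turns negative below a critical population threshold, so that a population pushed past that threshold collapses on its own even if the external pressure (hunting, habitat loss) stops entirely. The sketch below is my own illustration of this general dynamic; the numbers are arbitrary, not passenger pigeon data:

```python
# Minimal sketch of a strong Allee effect: dN/dt = r·N·(N/A - 1)·(1 - N/K)
# A is the critical threshold, K the carrying capacity. Below A, growth is
# negative and the population collapses unaided. All numbers are illustrative.

def simulate(N0, r=0.1, A=1000.0, K=100_000.0, dt=0.1, steps=2000):
    """Euler integration of the Allee-effect ODE; returns final population."""
    N = N0
    for _ in range(steps):
        dN = r * N * (N / A - 1.0) * (1.0 - N / K)
        N = max(N + dN * dt, 0.0)  # population cannot go negative
    return N

above = simulate(N0=2000.0)  # starts above the threshold A: recovers toward K
below = simulate(N0=800.0)   # starts below A: declines toward extinction

print(above, below)
```

The two trajectories start at populations differing by less than a factor of three, yet one recovers to carrying capacity and the other goes extinct: nothing in the healthy-looking population of the 1860s need have signaled that the threshold was near. The open question of the essay is whether civilization has an analogous threshold.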
I have argued elsewhere that civilization is becoming more robust over time, and I have not changed my mind about this, but neither is it the entire story about the existential security of civilization. In comparison to the precarity of the individual life, civilization is robust in the extreme. Civilization only betrays its existential precarity on time scales several orders of magnitude beyond the human experience of time, which at most encompasses several decades. As we ascend in temporal comprehensiveness, civilization steadily diminishes until it appears as a mere anomaly in the vast stretches of time contemplated in cosmology. At this scale, the longevity of civilization is no longer in question only because its brevity is all too obvious.
At the human time scale, civilization is as certain as the ground beneath our feet; at the cosmological time scale, civilization is as irrelevant as a mayfly. An appraisal of the existential precarity of civilization must take place at some time scale between the human and the cosmological. This brings me to an insight that I had after attending the 2014 IBHA conference last summer. On day 3 of the conference I attended a talk by futurist Joseph Voros that provided much food for thought, and while driving home I thought about a device he employed to discuss future forecasts, the future cone.
This was my first exposure to the future cone, and I immediately recognized the possibility for conceptual clarification that this offers in thinking about the future. If we depict the future as an extension of a timeline indefinitely, the line itself is the most likely future, while progressively larger cones concentric with the line, radiating out from the present, become increasingly less likely forecasts. Within the classes of forecasts defined by the spaces included within progressively larger cones, preferred or unwelcome futures can be identified by further subdivisions of the space defined by the cones. Voros offered an alliterative mnemonic device to differentiate the conceptual spaces defined by the future cone, from the center outward: the projected future, the probable future, the plausible future, the possible future, and the preposterous future.
When I was reflecting on this on the drive home, I realized that, in the short term, the projected future is almost always correct. We can say with a high degree of accuracy what tomorrow will be like. Yet in the long term future, the projected future is almost always wrong. Here when I speak of the projected future I mean the human future. We can project future events in cosmology with a high degree of accuracy — for example, the coming collision of the Milky Way and Andromeda galaxies — but we cannot say anything of significance about what human civilization will be like at this time, or indeed whether there will be any human civilization or any successor institution to human civilization. Futurist forecasting, in other words, goes off the rails in the mid-term future, though exactly where it does so is difficult to say. And it is precisely in this mid-term future — somewhere between human time scales and cosmological time scales — that the existential precarity of civilization becomes clear. Sometime between tomorrow and four billion years from now, when a swollen sun swallows up Earth, human civilization will be subject to unpredictable and unprecedented selection pressures that will either mean the permanent ruination of that civilization, or its transformation into something utterly unexpected.
With this in mind, we can focus our conceptual exploration of the existential precarity, existential security, existential threat, and existential risk that bears upon civilization in the period of the mid-term future. How far can we narrow the historico-temporal window of the mid-term future of precarity? What are the selection pressures to which civilization will be subject during this period? What new selection pressures might emerge? Is it more important to focus on existential risk mitigation, or to focus on our civilization making the transition to a post-civilizational institution that will carry with it the memory of its human ancestry? These and many other related questions must assume the central place in our research.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
7 September 2014
Teleology and Deontology
In moral theory we distinguish between teleological ethical systems and deontological ethical systems. Teleological ethics (also called consequentialism, in reference to consequences) focuses on the end of an action, i.e., its actual result, as that which makes an action praiseworthy or blameworthy. The word “teleological” comes from the Greek telos (τέλος), which means end, goal, or purpose. Deontological ethics focuses on the motivation for undertaking an action, and is sometimes referred to as “duty-based” ethics; the word “deontological” derives from the Greek deon (δέον), meaning “duty.”
The philosophical literature on teleology and deontology is vast. From this vast literature the history of moral philosophy gives us several well known examples of both teleological and deontological ethics. Utilitarianism is often cited as a paradigmatic example of teleological ethics, as utilitarianism (in one of its many forms) holds that an action is to be judged by its ability to bring about the greatest happiness for the greatest number of persons (also known as the greatest happiness principle). Kantian ethics is usually cited as the paradigmatic case of deontological ethics; Kant placed great emphasis upon duty, and held that nothing is good in itself except the good will. These philosophical expressions of the ideas of teleology and deontology also have vernacular expressions that largely coincide with them, as, for example, when teleological views are expressed as, “the ends justify the means,” or when deontological views are expressed as, “let justice be done though the heavens fall.”
The vast literature on deontology and teleology also points to many examples that show these categories of ethical thought to be overly schematic and, in some cases, to cut across each other. For example, if we characterize teleological ethics in terms of the aim to be achieved by an action, a distinction can be made between the actual consequences of an action and the intended consequences of an action. The intended consequences of an action may be understood deontologically as the motivation for undertaking an action. Part of this problem can be addressed by tightening up the terminology and the logic of the argument, but, as has been noted, the literature is vast and many sophisticated arguments have been advanced to demonstrate the interpenetration of teleological and deontological conceptions. We must, then, regard this distinction as a rough-and-ready classification that admits of exceptions.
Teleology and Deontology in a Social Context
We can take these ideas of teleological and deontological ethics and apply them not only to individual action but to social action, and thus speak of the actions of social groups of human beings in teleological or deontological terms, i.e., we can speak in terms of the coordinated actions of a group being undertaken primarily in order to achieve some end, or actions undertaken as ends-in-themselves. This suggests the extrapolation of teleological and deontological conceptions to the largest social formations, and the largest social formation known to us is civilization. Can an entire civilization be teleological or deontological in its outlook? Does a civilization have a moral outlook?
I will assume, without arguing in detail, that a civilization can have a moral outlook, understanding that this is a generalization that holds across a civilization, and that the generalization admits of numerous important exceptions. Elsewhere I have noted the Darwinian perspective that any social group of animals that lives together in sufficient density for a sufficient period of time will evolve social customs for interaction. (This is a position that has been further explored in our time by Frans de Waal and Soshichi Uchii.) The lifeway of a particular people is coextensive with social conventions necessary for a social species to live together in a reasonable degree of harmony; what distinguishes regional permutations of lifeways are the climate and available domesticates. Both ethics and civilization grow from this common root, hence the xenophobia of traditionalist civilizations that unproblematically equate the peculiarities of a particular regional civilization with the good in and of itself.
Can this synthesis of lifeways and ethos that marks out a regional civilization (and which is consolidated in the process of axialization) be characterized as having an overall teleological or deontological orientation in some particular cases? This is a more difficult question, and rather than tackling it directly, I will discuss it from various perspectives drawn from an overview of the history of civilization.
Teleology and Deontology in Agrarian-Ecclesiastical Civilization
The emergence of settled agrarian-ecclesiastical civilization presents us with an archaeological horizon that appears globally in widely dispersed locations but at approximately the same time. (An archaeological horizon is “a widely disseminated level of common art and artifacts.” Wikipedia) Prior to an actual horizon, there are a great many suggestive sites that imply both domestication and semi-settled lifeways, but at a certain level (between 9 and 11 thousand years before present) the traces of large scale settlement and domestication of plants and animals become common. This is the horizon of civilization (or, more narrowly, the horizon of agrarian-ecclesiastical civilization).
The horizon of agrarian-ecclesiastical civilization exhibits global characteristics that eventually culminate in the Axial Age, when regional civilizations are given definitive expression in mythological terms. Though separately emergent, these civilizations exhibit common features of settlement, division of labor, social hierarchy, and a conception of the world, of human nature, and of the relation between the two that is expressed in mythological form, which in being made systematic (an early manifestation of the human condition made rigorous) becomes the central organizing idea of the civilizations that followed. This period represents the bulk of the history of human civilization to date, a period lasting almost ten thousand years.
Recently on my other blog I undertook a series on religious experiences and religious observances from hunter-gatherer nomadism through contemporary industrial-technological civilization and on into the future — cf. Settled and Nomadic Religious Experience, Religious Experience in Industrial-Technological Civilization, Religious Experience and the Future of Civilization, Addendum on Religious Experience and the Future of Civilization, and Responding to the World we Find — and thinking of religious observances emergent from human religious experience it is difficult to say whether these ritual observances are performed in the spirit of teleology or deontology, i.e., whether it is the consequences of the ritual that matter, or if the ritual has intrinsic value and ought to be conducted regardless of consequences. This may be one of the many cases in which teleological and deontological categories cut across each other. Agrarian-ecclesiastical civilization at times seems to formulate its central organizing principle of religious observance in terms of the intrinsic value of the observance, and at other times in terms of the efficacious consequences of these observances.
We can understand religion (by which I mean the central organizing principle of agrarian-ecclesiastical civilizations) as an existential risk mitigation strategy for pre-technological peoples, who have no method to address personal mortality or the cyclical rise and fall of civilizations (i.e., civilizational mortality) other than the propitiation of gods; once the transition is made from agrarian-ecclesiastical civilization to industrial-technological civilization, the methods of procedural rationality that are the organizing principle of the latter can be brought to bear on existential questions, and it finally becomes possible for existential threats to be assessed and addressed on the level of naturalistic human action. It would not have been possible to conceptualize existential risk in terms of naturalistic human action prior to the technological expansion of effective human action.
Teleology and Deontology in Global Industrial-Technological Civilization
Civilization is an historical reality that exhibits change and development over time. The particular change in civilization that we see at the present time is a transition from regional civilizations, reflecting the coevolution of human beings and domesticates (both plant and animal) ecologically suited to a particular geographical region, to a global industrial-technological civilization that is largely indifferent to local and regional ecological and climatological conditions, because a global trade network provides goods and services from any region to any other region, which means that the maintenance of civilization is no longer dependent upon local or regional constraints.
This development of global industrial-technological civilization is likely to dominate civilization until civilization either fails (i.e., civilization experiences extinction, permanent stagnation, flawed realization, or subsequent ruination) or expands beyond Earth and a self-sustaining center of civilization emerges in space or on another planetary body. In order for the latter to occur, human travel in space must move beyond exploratory forays and become commonplace, that is to say, we would have to see a horizon of space travel. I have called the horizon of human space travel extraterrestrialization. Until that time, civilization remains bound by the finite surface of Earth, and this means that our civilization is growing intensively rather than extensively. The intensive growth of regional civilizations exhaustively covering the surface of Earth means the closer integration of these civilizations (sometimes called globalization), and it is this process that is pushing regional civilizations (e.g., Chinese civilization, Indian civilization, European civilization, etc.) toward integration into a single global industrial-technological civilization.
The spatial constraint of the Earth’s surface together with the expansion and consolidation of settled industrial-technological civilization forces these civilizations into integration, even if only at the margins where their borders meet. Is this de facto constraint upon planetary civilization a mere contingency pushing civilization in a particular direction (which in evolutionary terms could be called civilizational directional selection), or may we think of these constraints in non-contingent terms as a “destiny” of planetary civilization? We find both conceptions represented in contemporary thought.
To think of civilization in terms of destiny is to think in teleological terms. If civilization has a destiny apart from the purposes of individuals and societies, that destiny is the telos of that civilization. But we would not likely refer to an historical accident that selects civilization as “destiny,” even if it shapes our civilization decisively. If we reject the idea of a contingent destiny forced upon us by de facto constraints upon growth and development, then we are implicitly thinking of civilization in terms of practices pursued for their own ends, which is a deontological conception of civilization.
The contemporary idea of a transition to a sustainable civilization — the transition from an industrial infrastructure powered by fossil fuels to an industrial infrastructure based on sustainable and renewable sources of fuel — is clearly a deontological conception of the development of civilization, i.e., the idea that such a transition needs to take place for its own sake. But for many who hold this idea, the deontological ideal of a civilization that lives within its means also implies a vision of a future civilization revamped to avoid the morally catastrophic mistakes of the past, and in this sense the conception is clearly teleological.
The Historico-Temporal Structure of Human Life
One of the most distinctive features of human consciousness is its time consciousness that extends into an explicit understanding of the future and its relationship to present action, and which developed and iterated becomes historical consciousness, in which the individual and the social group understands himself or itself to stand in relation to a past that preceded the present, and a future that will follow from the present. This historico-temporal structure of human life, both individual and communal, means that human beings plan ahead and make provision for the future in a much more systematic way than any other terrestrial species. This consideration alone suggests that the primary ethical category for understanding human action must be teleological. But this presents us with certain problems.
Civilization itself, and the great processes of civilization such as the Neolithic Agricultural Revolution, urbanization, and industrialization, were unplanned developments that just happened. No one planned to build a civilization, and no one planned for regional civilizations to run into planetary constraints and thus to begin to integrate into a global civilization. So although human beings have the ability to plan and to carry out long-term projects, many of the historical human realities that are among the most significant in shaping our lives both individually and collectively were not planned. In the future we may be able to plan a civilization or civilizational process and bring this plan to a successful conclusion, but nothing like this has yet been accomplished in the history of civilization. The closest we have come to this is to build planned communities or cities, and this falls far short of the construction of an entire civilization. Until we can do more, we are subject, at most, to a limited teleological civilizational ethos.
Teleological and Deontological Sources of Civilization
While agrarian-ecclesiastical civilization tends to organize around an eschatological destiny, and is therefore profoundly teleological in outlook, and industrial-technological civilization tends to organize around procedural rationality, and is therefore profoundly deontological in outlook, we can think of the prehistoric past that is the source of both of these paradigms of civilization as either essentially teleological or essentially deontological.
The basic historico-temporal properties of human life noted above, iterated, extended, and eventually made systematic, culminate in an organized and communal way of life for a social species, and this telos of human activity is civilization. Civilization on this view is inherent in human nature. This can be expressed in non-naturalistic, eschatological terms, and this is probably the form in which this conception is most familiar to us, but it can also be expressed in scientific terms. Here is Carl Sagan’s expression of this idea:
The cerebral cortex, where matter is transformed into consciousness, is the point of embarkation for all our cosmic voyages. Comprising more than two-thirds of the brain mass, it is the realm of both intuition and critical analysis. It is here that we have ideas and inspirations, here that we read and write, here that we do mathematics and compose music. The cortex regulates our conscious lives. It is the distinction of our species, the seat of our humanity. Civilization is a product of the cerebral cortex.
Carl Sagan, Cosmos, Chapter XI, “The Persistence of Memory”
In my post 2014 IBHA Conference Day 2 I mentioned the presentation of William Katerberg, in which he characterized ideas of inevitability and impossibility as forms of teleology in scientific historiography. While Sagan may not be asserting the inevitability of civilization emerging from the cerebral cortex, all of these conceptions belong under the overarching umbrella of teleology, whether weakly teleological or strongly teleological.
When we consider the highest expressions of the human mind in intellectual and aesthetic production, it is not at all clear if these monuments of human thought are undertaken for their intrinsic value as ends in themselves, or if they have been pursued with an eye to some end beyond the construction of the monument. Consider the pyramids: are these monuments to glorify the Pharaoh, and thus by extension to glorify Egyptian civilization as an end in itself, or are these monuments to secure the eternal reign of the Pharaoh in the afterlife? Many of the mysterious monuments that remain from past civilizations — Stonehenge, Carnac, Göbekli Tepe, the Moai of Easter Island, and the Sphinx, inter alia — have this ambiguous character.
We can imagine a civilization of the prehistorical past essentially called into being by the great effort to create one of these monoliths. The site of Göbekli Tepe is one of the more recent and interesting discoveries from the Neolithic, and some archaeologists have suggested that the site points to civilization coming into being for the purpose of constructing and maintaining this ritual site (something I mentioned in The Birth of Agriculture from the Spirit of Religion).
Teleology, Deontology, and a Philosophy of History
Teleology has been subject to much abuse in the history of human thought, as I have noted on many occasions. There is a strong desire to believe in meaning and purpose that transcends the individual, if not the entire species. The essentially incoherent desire for a meaning or purpose coming from outside the world entire, entering into the world from outside and giving a purpose to mundane actions that these actions cannot derive from any source within the world, is an imperfectly expressed theme of almost all religious thought. Logically, this is the desire for a constructive foundation for meaning and purpose; finding meaning or purpose for the world from within the world is an inherently non-constructive conception that leaves a vaguely dissatisfied feeling rarely brought to logical clarification.
The first great work in western philosophy of history, Saint Augustine’s City of God, is a thoroughly teleological conception of history culminating in the eschaton. Perhaps the next most influential philosophy of history after Augustine was that of Hegel, and, again, Hegel’s philosophy of history is pervasively teleological in spirit. A particular philosophical effort is required to conceive of human history (and human civilization) in non-Augustinian, non-Hegelian terms.
Does there even exist, in the Western philosophical tradition, a deontological philosophy of civilization? In light of the discussion above, I have to examine my own efforts in the philosophy of history, as I realize now that some of my formulations could be interpreted as implying that civilization is the telos of human history. Does human history culminate in human civilization? Is civilization the destiny of humanity? If so, this should be made explicit. If not, a more careful formulation of the relationship of civilization to human history is in order.
. . . . .