Saturday


The Developmental Conception of Civilization

Eleventh in a Series on Existential Risk


It is common to think about civilization in both developmental and non-developmental terms. As for the former, ever since Marx, historians have identified a sequence of stages of economic development, and of course the idea of social evolution was central for Hegel before Marx gave it an economic interpretation. As for the latter, it is not unusual to hear clear distinctions being drawn between civilized and uncivilized life, very much in the spirit of tertium non datur: either a particular instance of social organization is civilized or it is not.

The developmental conception of civilization can be used to illuminate the idea of existential risk, as the classes of existential risk identified in Nick Bostrom’s “Existential Risk Prevention as Global Priority” readily lend themselves to a developmental interpretation. Here are the classes of existential risk from Bostrom’s paper (Table 1. Classes of existential risk):

● Human extinction Humanity goes extinct prematurely, i.e., before reaching technological maturity.

● Permanent stagnation Humanity survives but never reaches technological maturity.
Subclasses: unrecovered collapse, plateauing, recurrent collapse

● Flawed realisation Humanity reaches technological maturity but in a way that is dismally and irremediably flawed. Subclasses: unconsummated realisation, ephemeral realisation

● Subsequent ruination Humanity reaches technological maturity in a way that gives good future prospects, yet subsequent developments cause the permanent ruination of those prospects.

These classes of existential risk can readily be explicated in developmental terms:

● Human extinction The development of humanity ceases because humanity itself ceases to exist.

● Permanent Stagnation The development of humanity ceases, although humanity itself does not go extinct.

● Flawed Realization Humanity continues in its development, but this development goes horribly wrong and results in a human condition that is so far from being optimal that it might be considered a betrayal of human potential.

● Subsequent Ruination Humanity continues for a time in its development, but this development is brought to an untimely end before its potential is fulfilled.

In this context, what I have previously called existential viability, i.e., the successful mitigation of existential risk, can also be explicated in developmental terms:

● Existential viability Humanity is able to continue its arc of development to the point of the fulfillment of its technological maturity.

It would be possible (and no doubt also interesting) to delineate classes of existential viability parallel to the classes of existential risk, informed by the developmental possibilities consistent with the fulfillment of technological maturity, or with some other measure of ongoing human development that does not terminate in an existential risk scenario.

Bostrom originally expressed his conception of existential risk in terms of “earth-originating intelligence” — “An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2002).” In more recent papers he has expressed existential risk in terms of “humanity” and “technological maturity” (as in the formulations quoted above), for example in the following passage:

“The permanent destruction of humanity’s opportunity to attain technological maturity is a prima facie enormous loss, because the capabilities of a technologically mature civilisation could be used to produce outcomes that would plausibly be of great value, such as astronomical numbers of extremely long and fulfilling lives. More specifically, mature technology would enable a far more efficient use of basic natural resources (such as matter, energy, space, time, and negentropy) for the creation of value than is possible with less advanced technology. And mature technology would allow the harvesting (through space colonisation) of far more of these resources than is possible with technology whose reach is limited to Earth and its immediate neighbourhood.”

Nick Bostrom, “Existential Risk Prevention as Global Priority,” Global Policy, Volume 4, Issue 1, February 2013

For the moment, humanity and Earth-originating intelligence coincide, but this may not always be the case. A successor species to homo sapiens, or conscious and intelligent machines, could either take over the mantle of Earth-originating intelligence or exist in parallel with humanity, so that there comes to be more than a single realization of Earth-originating intelligence.

While Bostrom mentions civilization throughout his exposition, his crucial formulations are not framed in terms of civilization. It would seem that Bostrom had the human species, homo sapiens, in mind when he formulated the class of human extinction, while the other classes of permanent stagnation, flawed realization, and subsequent ruination bear more closely on civilization, or at least on the social potential of homo sapiens, such as the accomplishments represented by intelligence and technology. It is a very different thing to talk about the extinction of a biological species and the extinction of a civilization, and it would probably be a good idea to explicitly distinguish risks facing biological species from risks facing social institutions, even though many of these risks will coincide.

For what classes of entities might we define classes of existential risk? Well, to start, we could define classes of existential risk for individuals in contradistinction to existential risks for social institutions comprised of many individuals, with civilization being the most comprehensive social institution yet devised by humanity.

I suspect that a developmental account of the individual is much less controversial than a developmental account of civilization (or, for that matter, of Earth-originating intelligent life), partly because the development of the individual is something that is personally familiar to all of us, and partly due to the efforts of psychologists and sociologists in laying out a detailed typology of individual developmental psychology. Attempts to lay out a detailed developmental typology of civilization run into social and moral controversies, though I don’t see this as an essential objection.

In any case, here is an ontogenic formulation of the classes of existential risk:

● Personal extinction Individual development ceases because the individual himself ceases to exist. Death as an inevitable part of the human condition (at least for the time being) means that personal extinction is the personal existential risk that is visited upon each and every one of us.

● Personal Permanent Stagnation Individual development ceases, although the individual himself does not die (as of yet).

● Personal Flawed Realization The individual continues in his development, but this development goes horribly wrong and results in a life that is so far from being optimal that it might be considered a betrayal of the individual’s potential.

● Personal Subsequent Ruination The individual continues for a time in his development, but this development is brought to an end before the arc of personal development fulfills its potential.

Many of these cases of personal existential risks strike very close to home, as in imagining these situations one may well see all-too-clearly individuals that one knows personally, or one may even see oneself in one or more of these classes of personal existential risk. It is poignant and painful to confront permanent stagnation or flawed realization in one’s own life or in the lives of those one knows personally, however fascinating these conditions are for novelists and dramatists.

Just as we can imagine the classes of existential risk formulated specifically to illuminate the life of the individual, so too we can formulate phylogenic forms of the classes of existential risk:

● Civilizational extinction The development of human civilization ceases because human civilization itself ceases to exist. (But note here that the extinction of civilization may be consistent with the continued existence of humanity.)

● Civilizational Permanent Stagnation The development of human civilization ceases, although human civilization itself does not go extinct.

● Civilizational Flawed Realization Human civilization continues in its development, but this development goes horribly wrong and results in a civilization that is so far from being optimal that it might be considered a betrayal of the very idea of human civilization.

● Civilizational Subsequent Ruination Human civilization continues for a time in its development, but this development is brought to an end before the arc of the history of civilization can fulfill its potential.

Such large-scale formulations lack the poignancy of the personalized classes of existential risk, though they are more to the point of existential risk understood sensu stricto. Note that the civilizational formulations of the classes of existential risk are in at least one case consistent with the existential viability of humanity, and all classes of civilizational existential risk are consistent with personal forms of existential viability — individuals within stagnant or flawed civilizations may continue to develop and to fulfill their potential, although this potential is not expressed in a social form. Thus any individual human potential that is intrinsically social would be ruled out by civilizational failure, but I assume that human potential is not exhausted by exclusively social forms of fulfillment.

The poignancy of personal classes of existential risk may be useful precisely due to the visceral effect they have — not unlike the visceral nature of the overview effect and its potential for raising personal awareness of planetary finitude and vulnerability. Similarly, the finitude and vulnerability of humanity on the whole may be driven home to the individual by a personal illustration of existential risk.

There is a yawning chasm that separates the disasters all-too-easily rationalized away as not being worth the effort of preparedness from the global catastrophic risks and existential risks that as yet have no preparedness efforts because they seem intractable and overwhelming merely to contemplate.

It is possible that just as we may begin with mundane forms of risk management — readily understood and readily implemented — and move up to crisis management, then to global catastrophic risks, and finally to existential risks, so too we may start with personal risks and move up to the most comprehensive forms of risk — and this emerging consciousness of more comprehensive forms of risk is itself a developmental process.

This macrocosm/microcosm approach to existential risk suggests a cross-fertilization of ideas, such that personal methods for mitigating existential risks may suggest societal methods, and vice versa. However, we know that flawed individuals sometimes do great things, just as flawed societies can boast of great accomplishments. It may be necessary to distinguish between flaws that augment existential threats and flaws that diminish existential threats. If this is also true on a societal level, the consequences are decidedly interesting.

. . . . .


Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

9. Conceptualization of Existential Risk

10. Existential Risk and Existential Viability

11. Existential Risk and the Developmental Conception of Civilization

. . . . .


Monday


Morally Distinguishable Outcomes in Global Catastrophic Scenarios


Below is Nick Bostrom’s table of qualitative categories of risk. Bostrom and Milan M. Ćirković have together edited a book on Global Catastrophic Risks, which includes this table. Existential risks, that is to say, risks that could result in human extinction, are identified as “an especially severe subset” of global catastrophic risks.

qualitative categories of risk

Of existential risks and their potential consequences I recently wrote this:

“When we think about what this means for us, our other ‘priorities’ pale by comparison. Nothing else matters, no matter how apparently pressing, if we are made extinct by an accident of local cosmology.”

Thinking about this further, I realized that there are many ethical presuppositions implicit in my formulation, and that (at least some of) these presuppositions can be spelled out and made explicit.

Bostrom’s table of qualitative risk categories suggests possibilities of scope and intensity beyond those comprised by global catastrophic risk and existential risk, and on the margin of the table we see “Cosmic?” as a possible scope beyond “pan-generational” and “Hellish?” as a possible intensity beyond “Terminal.” Thus what is cosmic and hellish is a qualitative risk category beyond even that of existential risk. I think that there are moral intuitions about catastrophic outcomes that correspond to these almost unthinkable scenarios.

It would seem that there is little worse that could happen (from a human perspective, i.e., a perspective fully informed by anthropic bias) than human extinction. Yet even given our anthropic bias, and therefore our desire to avoid human extinction, there are morally distinguishable outcomes among the many different scenarios of global catastrophe and human extinction, and where there are morally distinguishable outcomes there is also the possibility of ranking these outcomes from the least awful possibility to the worst of all possibilities. There is also the likelihood of moral disagreement over these rankings, and such disagreements over prioritizing existential risk mitigation could prove crucial in future debates over the allocation of civilizational resources to existential risk mitigation. Thus even if existential risk comes to be seen as an overriding priority for human beings and civilization, this is not yet the convergence of human moral effort; room for profound disagreement yet remains.

Considering a range of devastating and catastrophic events that could compromise human life and human civilization, possibly to the point of their extinction, I can think of six scenarios in order of severity:

1. Massive but survivable catastrophe A global catastrophic risk realized that results in the loss of millions or billions of lives and deals a major setback to civilization, without either extinguishing human beings or human civilization (in Bostrom’s table of qualitative risks these would include global, trans-generational, and pan-generational endurable risks).

2. Catastrophic failure of civilization A global catastrophic risk realized that results in the catastrophic failure of civilization, but does not result in the extinction of human beings. The human population might be drastically reduced to paleolithic population levels, but could potentially rebound. There remains the possibility that civilization might be reconstituted, but this is likely to take hundreds if not thousands of years. (“Global dark age” in the table above.)

3. Human extinction The first level of human extinction I will call simple extinction: an existential risk realized that nevertheless leaves the Earth intact, along with the legacy of human civilization. I add this latter qualification because it is possible, even if human beings become extinct, that human civilization might leave monuments that could be appreciated by other sentient species that could visit the Earth. It is even possible (however unlikely) that other species might appreciate the human record of civilization more than we appreciate it ourselves. Thus human extinction need not mean the loss of the human cultural legacy. A pandemic that killed only human beings could have this result. (X marks the spot in the table above.)

4. Human extinction with the extirpation of all human legacy The second level of human extinction I will call compound extinction, which is an existential risk realized that results in human extinction and the elimination of all (or almost all) signs of human presence, but which leaves the biosphere largely intact, and the ordinary business of terrestrial life continues largely unchanged. (This is human extinction coupled with “destruction of cultural heritage.”)

5. Catastrophic compromise of the biosphere The third level of human extinction involves not only the extinction of human beings and all human legacy, but also the extinction of all complex life on the Earth. Terrestrial life continues, but is reduced to single-celled organisms. Thus there remains the possibility that life on Earth may recover, but this would probably require billions of years and result in very different life forms.

6. Terrestrial sterilization The most radical form of realized existential risk is terrestrial sterilization, which results in human extinction, the extirpation of all human legacy, and the elimination of all terrestrial life, i.e., the complete catastrophic failure of the biosphere. From this point there is nothing that can be recovered and no human legacy remains.

I tried to arrange these various morally distinct outcomes on an expanded version of Bostrom’s table of qualitative risk categories, but could not yet find a conceptually neat and straightforward way to do so. Further thought is needed here. I don’t think there is a need to distinguish further qualitative categories of risk beyond existential risk — in other words, we can refer to all of these morally distinct outcomes as outcomes of existential risk, as realized in distinct scenarios. However, one could make such distinctions if it were helpful to do so.

The most radical moral imperative of existential risk is to take existential risk as absolute and as trumping all other concerns, which is what I clearly implied when I wrote that, “…our other ‘priorities’ pale by comparison. Nothing else matters, no matter how apparently pressing…” if we are made (or make ourselves) extinct. This radical position has profound and discomfiting implications.

If we were to survey the evils of the world, we would be forced to acknowledge that it is better that any or all of these evils continue than that human life should be permanently extinguished, because the continuation of these evils is consistent with the continuation of human life and human civilization. The end of all human life would also mean the end of all the cruelties and inhumanity that we inflict upon our fellow man, and this would be a good and indeed a desirable state of affairs; but from a radical perspective on existential risk we would have to affirm that, as good a state of affairs as this represents, it would not be as morally good as the state of affairs that involves the perpetuation of these evils together with the perpetuation of human life and civilization.

Of course, under most conceivable scenarios there is no reason whatsoever to suppose that we must choose between the perpetuation of all the evils of the world and human extinction. That is to say, there is no reason that we cannot work toward the elimination of human evils and the mitigation of existential risks at the same time. As a moral thought experiment, however, we can employ the method of isolation and ask whether the survival of human beings and human civilization, together with all the evils this entails, is better than the annihilation of human beings and human civilization, so that neither human good nor human evil remains.

While I would be willing to assert that existential risk mitigation trumps all other concerns, even in a thought experiment in which human evils remain unmitigated, I can easily imagine that there are many who would disagree with this judgment. Moral diversity is a fact of human life, and we must recognize that if some among us (myself included) would be willing to explicitly affirm the radical moral consequences of prioritizing existential risk mitigation, there will be others who will equally explicitly reject a radical prioritization of existential risk mitigation, and who will affirm that it is better that the world should come to an end than that the manifold evils of our time should persist. From this point of view, given the limited resources available to human beings, we would do better to direct those resources to the mitigation of human evils than to the mitigation of existential risk.

It is entirely possible that someone might affirm that it is a good thing that civilization should be ended, and the idea has an undeniable romantic appeal that should not be ignored. Many are the science fiction books and films (for example, think of Logan’s Run or 12 Monkeys) that depict a world empty of human beings and populated only by collapsing buildings and animals hunting in the ruins. This scenario is depicted, for example, in Alan Weisman’s book The World Without Us.

The idea that civilization is evil can easily be extended to the idea that humanity is evil in and of itself. The predictions of the original Club of Rome report of 1972, The Limits to Growth, have been widely discussed on its recent 40th anniversary, but what has not been remarked upon is the language and tone of that original document (which you will not find on the internet, despite the millions of used copies kicking around). The report boldly asserted, “The earth has cancer and the cancer is Man.” This kind of rhetoric, which is less common today, can easily play into a principled denial of the moral value of humanity.

And it is easy to understand why. The world is filled with evils, and the most horrific evils are those that human beings perpetrate upon other human beings — homo homini lupus. If we prioritize existential risk mitigation over the mitigation of human evils, we find ourselves forced into the uncomfortable position of tolerating Kantian radical evil, Marilyn McCord Adams’ conception of horrendous evils, and Claudia Card’s atrocities. Imagine the horrors of genocide, torture, and industrialized warfare, and then imagine being forced to admit that it is better that genocides occur, better that torture continues, and better that industrialized warfare persists than that an existential risk be realized. This is a hard saying; nevertheless, this is the argument that must be made, and it is always better to face a hard argument directly than to attempt to avoid it.

In her exposition of what she calls “horrendous evils” in her book Horrendous Evils and the Goodness of God, Marilyn McCord Adams wrote:

“Among the evils that infect this world, some are worse than others. I want to try to capture the most pernicious of them within the category of horrendous evils, which I define (for present purposes) as ‘evils the participation in which (that is, the doing or suffering of which) constitutes prima facie reason to doubt whether the participant’s life could (given their inclusion in it) be a great good to him/her on the whole.’ The class of paradigm horrors includes both individual and massive collective suffering…”

Marilyn McCord Adams, Horrendous Evils and the Goodness of God, Ithaca: Cornell University Press, 1999, p. 26.

She went on to add in the next section:

“I believe most people would agree that such evils as listed above constitute reason to doubt whether the participants’ life can be worth living, because it is so difficult humanly to conceive how such evils could be overcome.”

Loc. cit.

In the last paragraph of her paper of the same title, Adams again suggests that horrendous evils call into question the possibility of having a life worth living:

“I would go one step further: assuming the pragmatic and/or moral (I would prefer to say, broadly speaking, religious) importance of believing that (one’s own) human life is worth living, the ability of Christianity to exhibit how this could be so despite human vulnerability to horrendous evil, constitutes a pragmatic/moral/religious consideration in its favour, relative to value schemes that do not.”

Marilyn McCord Adams, “Horrendous Evils and the Goodness of God.” Anthologized in The Problem of Evil, edited by Marilyn McCord Adams and Robert Merrihew Adams, Oxford: Oxford University Press, 1990, p. 221.

A generalization of Adams’ argument could easily bring us from the point where horrendous evils make the individual doubt or question whether one’s own life is worth living to the point where humanity on the whole legitimately, and on principle, questions whether any human life at all is worth living. If humanity comes to decide that horrendous evils overwhelm all value in the world and make human existence utterly meaningless and pointless, then the mitigation of existential risk can come to seem like an evil or an impiety.

Adams finds her answer to this in Christianity; we naturalists cannot appeal to supernaturalistic validation or justification: we must take human evil on its face along with human good, and if we prioritize the mitigation of existential risk (and therefore the continuity of humanity and human civilization), we do so knowing that human evils will continue and are probably ineradicable if not inseparable from human history.

We can actively seek to mitigate human evils, and the effort has intrinsic value, but the intrinsic value of the mitigation of suffering and of mundane meliorism can only continue if humanity and organized human activity continue. Therefore the prioritization of the mitigation of existential risk is what makes possible the realization of the intrinsic value of the mitigation of suffering and of efforts toward meliorism. With the end of humanity would come an end not only to all the intrinsic goods of human life, but also to the intrinsic good of the mitigation of suffering and the effort to make the world a better place.

We can only create a better civilization if civilization continues. If we are perfectibilists, we may believe in the perfectibility of man and indeed even the perfectibility of civilization. This project cannot even be undertaken if humanity and human civilization are cut short in their imperfect state.

. . . . .

Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

. . . . .


Planetary Torpor

6 November 2012

Tuesday


A curious case of selective stagnation:

A whole new way to think about Weltschmerz


Among those who think about human space exploration, the relatively modest (i.e., less than ambitious) human space program since the end of the Apollo program that took human beings to the moon is a problem that requires an explanation. There have always been futurist speculations that have taken particular trends out of context and extrapolated them in isolation. Such narrowly focused futurism almost always gets things wrong. But when we think of all that might have been accomplished in terms of space exploration in the past forty years, and how far we might have gone in terms of existential risk mitigation as a result of a robust space program, we inevitably ask why more has not been done.

Putting the space program in the context of existential risk shifts our understanding a bit, since the space program is usually understood as science or exploration or adventure, but I am coming more to the view that it must be understood in terms of mitigating existential risk, that is to say, establishing self-sustaining, self-sufficient settlements off the surface of the Earth so that life and civilization can go on whatever the vulnerabilities of our home world. From this perspective, from the perspective of existential risk, the space program, and in fact all of human civilization, has been stagnant. We have had the power to leave the Earth and to create a second home for ourselves elsewhere, and we have failed to do so.

The idea of existential risk is due to Nick Bostrom, whom I have mentioned several times recently. His papers Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards and Existential Risk Reduction as Global Priority lay out the basic architecture of the concept, introducing several qualitative risk categories and their classification in terms of existential risk. Bostrom distinguishes four classes of existential risk: human extinction, permanent stagnation, flawed realization, and subsequent ruination.

How are we to construe the relative stagnation of the space program over the past forty years, which could have provided a degree of existential risk mitigation, but which has not been widely viewed in this light? Space science has had many spectacular successes in recent decades, which have substantially increased our knowledge of the universe in which we live, but all of this is for naught if our exclusively terrestrial species is wiped out by a natural catastrophe beyond the power of our technology to stop or to tame. There is a sense, then, no matter how valuable our scientific knowledge from unmanned missions, that the past forty years have been a wasted opportunity to secure against existential risk. We had the knowledge to go into space, the ability, the economic foundation — all the elements were present, but the will to secure the survival of our own species has been lacking. How do we explain this?

We cannot say that civilization has been exactly stagnant over the past forty years. How can human civilization be said to be stagnant when we have been experiencing exponential technological growth? We have experienced an explosion in the development of telecommunications and computing that was unpredicted and unprecedented. This has profoundly changed our personal lives and the structure of the overall economy and society. It has also increased the rate of technological change, since computerized engineering and design makes it possible to build other technologies in a much more sophisticated fashion than previously was the case. When we think of technological triumphs like the SR-71, the Apollo project, and the Concorde, we must remember that most of this was accomplished by engineers with slide rules writing calculations in pencil on paper. And yet today we have no sophisticated supersonic aerospace industry and nothing on the scale of the Apollo program, though we could presumably do both better now than we did before.

With all this technological progress, there remains a feeling of unfulfilled potential in the past half century. No one can say — as it was in fact said before the space program — that it is simply impossible to travel in space, or for human beings to live in space, or to travel to the moon. We’ve all seen 2001: A Space Odyssey, and even this modest human future in space, with a rotating space station and a base on the moon, didn’t happen. Did people lose interest? Did they turn inward, preferring personal comfort to what Theodore Roosevelt called “the strenuous life”? Was the human spirit broken by the Cold War and the haunting threat of nuclear annihilation?

In German there is a word that we lack in English: Weltschmerz, sometimes translated as “world-weariness.” Americans have never had much use for either the term or the idea, and it sounds a bit too much like post-War French existentialism with its systematic exposition of guilt, despair, alienation, and absurdity. Nevertheless, it is difficult to look at the past half century without thinking of it in terms not unlike Weltschmerz.

Thomas Couture Romans of the Decadence

Stagnation can take the form of a civilization being shot through with ellipses. We could call this condition selective stagnation. Because there are so many possible explanations for the selective stagnation of the past forty years, and because it is unlikely that any single social, economic, political, or ideological explanation could account for it, the only way we can embrace the complex social phenomenon of selective stagnation is to cover it with a term specifically intended to indicate many historical causes coming together into a trend that constitutes a whole greater than any of its individual parts. Once upon a time this was called “decadence,” as in Thomas Couture’s famous painting “Romans of the Decadence.” We could also call it Weltschmerz (although in this case it should perhaps be Raumschmerz rather than Weltschmerz), or we could call it terrestrial malaise or even planetary torpor.

Since the advent of civilization there have been several periods of extended stagnation, which historians formerly called “dark ages,” a term avoided today because of its disparaging connotations. I have previously written about the Greek Dark Ages, and I still occasionally refer to the early middle ages in Western Europe as the “dark ages” because there are senses in which the term remains apt. When we compare the selective stagnation of the past half century to these comprehensive periods, during which Western civilization stumbled and it was a real question whether or not it would recover its footing, our selective stagnation is so minor it scarcely bears mentioning.

But there is a crucial difference: the Greek Dark Age and the Dark Age following the collapse of Roman power in the western empire took place long before the scientific revolution. Since the scientific revolution we have continuously learned more about our place in the universe, and since the industrial revolution we have had the power to modify our place within nature with increasing scope and efficacy. Now we understand better than at any time in the past the existential risks we are facing, and for the past fifty years we have had the power to do something about that existential risk: to establish a human presence in extraterrestrial space that would not be vulnerable to disasters specific to the Earth. This is not absolute risk mitigation — the idea of absolute risk mitigation is illusory — but it is incrementally much better, perhaps even an order of magnitude better, in distancing ourselves from manifest vulnerability.

It may be the case that when civilization reaches a certain stage of development, at which a minimum level of creature comforts is available to the bulk of the world’s population, this relative prosperity undermines the springs to action. Because we have only our own terrestrial civilization by which to judge, we don’t have a sufficiently big-picture conception of civilization to allow us to generalize about the idea of civilization at this level.

Singularitarians and transhumanists will tell you that we are poised on the verge of transformative change that will make all previous transitions in human history pale by comparison, and which will launch human beings — or, rather, the post-human, post-biological beings who will be the successors of specifically human beings — on a course of development that will make these considerations either irrelevant, or so trivial that it will be a small matter to execute the required solution. But even as these wonders are coming about, we remain vulnerable. We might be on the very verge of the technological singularity when we are wiped out by a stray asteroid. This scenario would constitute what Nick Bostrom called “ephemeral realization.”

For these reasons, as well as many others that the reader will immediately see, I think that the idea of selective stagnation bears further study in its own right.

. . . . .

