Moral Imperatives Posed by Existential Risk

Monday, 25 February 2013


Morally Distinguishable Outcomes in Global Catastrophic Scenarios


Below is Nick Bostrom’s table of qualitative categories of risk. Bostrom and Milan M. Ćirković have together edited a book on Global Catastrophic Risks, which includes this table. Existential risks, that is to say, risks that could result in human extinction, are identified as “an especially severe subset” of global catastrophic risks.

[Table: qualitative categories of risk]

Of existential risks and their potential consequences I recently wrote this:

“When we think about what this means for us, our other ‘priorities’ pale by comparison. Nothing else matters, no matter how apparently pressing, if we are made extinct by an accident of local cosmology.”

Thinking about this further, I realized that there are many ethical presuppositions implicit in my formulation, and that at least some of these presuppositions can be spelled out and made explicit.

Bostrom’s table of qualitative risk categories suggests possibilities of scope and intensity beyond those encompassed by global catastrophic risk and existential risk: on the margin of the table we see “Cosmic?” as a possible scope beyond “pan-generational” and “Hellish?” as a possible intensity beyond “Terminal.” Thus what is cosmic and hellish constitutes a qualitative risk category beyond even that of existential risk. I think that there are moral intuitions concerning catastrophic outcomes that correspond to these almost unthinkable scenarios.

While it would seem that there is little worse that could happen (from a human perspective, i.e., one fully informed by anthropic bias) than human extinction, there are, even given our anthropic bias and our consequent desire to avoid extinction, morally distinguishable outcomes among the many different scenarios of global catastrophe and human extinction. Where there are morally distinguishable outcomes, there is also the possibility of ranking them, from the least awful possibility to the worst of all possibilities. There is also the likelihood of moral disagreement over these rankings, and such disagreement over prioritizing existential risk mitigation could prove crucial in future debates over the allocation of civilizational resources to existential risk mitigation. Thus even if existential risk comes to be seen as an overriding priority for human beings and civilization, this does not yet amount to a convergence of human moral effort; room for profound disagreement remains.

Considering a range of devastating and catastrophic events that could compromise human life and human civilization, possibly to the point of their extinction, I can think of six scenarios in order of severity:

1. Massive but survivable catastrophe: A global catastrophic risk realized that results in the loss of millions or billions of lives and deals a major setback to civilization, without extinguishing either human beings or human civilization (in Bostrom’s table of qualitative risks these would include global, trans-generational, and pan-generational endurable risks).

2. Catastrophic failure of civilization: A global catastrophic risk realized that results in the catastrophic failure of civilization, but does not result in the extinction of human beings. The human population might be drastically reduced to paleolithic population levels, but could potentially rebound. There remains the possibility that civilization might be reconstituted, but this would likely take hundreds if not thousands of years. (“Global dark age” in the table above.)

3. Human extinction: The first level of human extinction I will call simple extinction: an existential risk realized that nevertheless leaves the Earth intact, and the legacy of human civilization intact. I add this latter qualification because it is possible, even if human beings become extinct, that human civilization might leave monuments that could be appreciated by other sentient species that could visit the Earth. It is even possible (however unlikely) that other species might appreciate the human record of civilization more than we appreciate it ourselves. Thus human extinction need not mean the loss of the human cultural legacy. A pandemic that killed only human beings could have this result. (X marks the spot in the table above.)

4. Human extinction with the extirpation of all human legacy: The second level of human extinction I will call compound extinction: an existential risk realized that results in human extinction and the elimination of all (or almost all) signs of human presence, but which leaves the biosphere largely intact, so that the ordinary business of terrestrial life continues largely unchanged. (This is human extinction coupled with “destruction of cultural heritage.”)

5. Catastrophic compromise of the biosphere: The third level of human extinction involves not only the extinction of human beings and all human legacy, but also the extinction of all complex life on the Earth. Terrestrial life continues, but is reduced to single-celled organisms. There remains the possibility that life on Earth may recover, but this would probably require billions of years and would result in very different life forms.

6. Terrestrial sterilization: The most radical form of realized existential risk is terrestrial sterilization, which results in human extinction, the extirpation of all human legacy, and the elimination of all terrestrial life, i.e., the complete catastrophic failure of the biosphere. From this point there is nothing that can be recovered and no human legacy remains.

I tried to arrange these various morally distinct outcomes on an expanded version of Bostrom’s table of qualitative risk categories, but have not yet found a conceptually neat and straightforward way to do so. Further thought is needed here. I don’t think there is a need to distinguish further qualitative categories of risk beyond existential risk; in other words, we can refer to all of these morally distinct outcomes as outcomes of existential risk, as realized in distinct scenarios. However, one could make such distinctions if it were helpful to do so.
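One way to see concretely why the arrangement is not straightforward is to encode the six outcomes above against the two qualitative dimensions of Bostrom’s table. What follows is only a rough sketch in Python; the scope and intensity placements are my own tentative assumptions rather than anything given in the table itself. Its point is simply that the last four outcomes all collapse into the same pan-generational/terminal cell, so the table’s two dimensions alone cannot reproduce the ordinal ranking of severity given above.

```python
from enum import IntEnum

# Scope and intensity categories named in the discussion of Bostrom's table above;
# only the categories the post itself mentions are included here.
class Scope(IntEnum):
    GLOBAL = 1
    TRANS_GENERATIONAL = 2
    PAN_GENERATIONAL = 3
    COSMIC = 4          # the "Cosmic?" margin of the table

class Intensity(IntEnum):
    ENDURABLE = 1
    TERMINAL = 2
    HELLISH = 3         # the "Hellish?" margin of the table

# The six morally distinguishable outcomes, in the essay's order of severity,
# with tentative scope/intensity placements (these placements are assumptions
# made for illustration, not taken from Bostrom or from the post).
OUTCOMES = [
    ("massive but survivable catastrophe",       Scope.GLOBAL,             Intensity.ENDURABLE),
    ("catastrophic failure of civilization",     Scope.TRANS_GENERATIONAL, Intensity.ENDURABLE),
    ("simple extinction",                        Scope.PAN_GENERATIONAL,   Intensity.TERMINAL),
    ("compound extinction",                      Scope.PAN_GENERATIONAL,   Intensity.TERMINAL),
    ("catastrophic compromise of the biosphere", Scope.PAN_GENERATIONAL,   Intensity.TERMINAL),
    ("terrestrial sterilization",                Scope.PAN_GENERATIONAL,   Intensity.TERMINAL),
]

if __name__ == "__main__":
    # Outcomes 3 through 6 all fall in the same (pan-generational, terminal)
    # cell: the two dimensions of the table cannot by themselves reproduce
    # the essay's ordinal ranking of severity.
    for name, scope, intensity in OUTCOMES:
        print(f"{name:42s} scope={scope.name:18s} intensity={intensity.name}")
```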

The most radical moral imperative of existential risk is to take existential risk as absolute and as trumping all other concerns, which is what I clearly implied when I wrote that “…our other ‘priorities’ pale by comparison. Nothing else matters, no matter how apparently pressing…” if we are made (or make ourselves) extinct. This radical position has profound and discomfiting implications.

If we survey the evils of the world, we are forced to acknowledge that it is better that any or all of these evils continue than that human life should be permanently extinguished, because the continuation of these evils is consistent with the continuation of human life and human civilization. The end of all human life would also mean the end of all the cruelties and inhumanity that we inflict upon our fellow man, and this would be a good and indeed a desirable state of affairs; but from a radical perspective on existential risk we would have to affirm that, as good a state of affairs as this represents, it would not be as morally good as the state of affairs in which these evils are perpetuated together with the perpetuation of human life and civilization.

Of course, under most conceivable scenarios there is no reason whatsoever to suppose that we must choose between the perpetuation of all the evils of the world and human extinction. That is to say, there is no reason that we cannot work toward the elimination of human evils and the mitigation of existential risks at the same time. As a moral thought experiment, however, we can employ the method of isolation and ask whether the survival of human beings and human civilization, together with all the evils this entails, is better than the annihilation of human beings and human civilization, so that neither human good nor human evil remains.

While I would be willing to assert that existential risk mitigation trumps all other concerns, even in a thought experiment in which human evils remain unmitigated, I can easily imagine that many would disagree with this judgment. Moral diversity is a fact of human life, and we must recognize that if some among us (myself included) would be willing to explicitly affirm the radical moral consequences of prioritizing existential risk mitigation, there will be others who will just as explicitly reject a radical prioritization of existential risk mitigation, and who will affirm that it is better that the world should come to an end than that the manifold evils of our time should persist. From this point of view, given the limited resources available to human beings, we would do better to direct those resources to the mitigation of human evils than to the mitigation of existential risk.

It is entirely possible that someone might affirm that it would be a good thing for civilization to end, and the idea has an undeniable romantic appeal that should not be ignored. Many are the science fiction books and films (think, for example, of Logan’s Run or 12 Monkeys) that depict a world empty of human beings and populated only by collapsing buildings and animals hunting in the ruins. This scenario is also depicted in Alan Weisman’s book The World Without Us.

The idea that civilization is evil can easily be extended to the idea that humanity is evil in and of itself. The predictions of the original Club of Rome report of 1972, The Limits to Growth, have been widely discussed around its recent 40th anniversary, but what has not been remarked upon is the language and tone of that original document (which you will not find on the internet, despite the millions of used copies kicking around). The report boldly asserted, “The earth has cancer and the cancer is Man.” This kind of rhetoric, which is less common today, can easily play into a principled denial of the moral value of humanity.

And it is easy to understand why. The world is filled with evils, and the most horrific evils are those that human beings perpetrate upon other human beings — homo homini lupus. If we prioritize existential risk mitigation over the mitigation of human evils, we find ourselves forced into the uncomfortable position of tolerating Kantian radical evil, Marilyn McCord Adams’ conception of horrendous evils, and Claudia Card’s atrocities. Imagine the horrors of genocide, torture, and industrialized warfare, and then imagine being forced to admit that it is better that genocides occur, better that torture continues, and better that industrialized warfare persists than that an existential risk be realized. This is a hard saying; nevertheless, this is the argument that must be made, and it is always better to face a hard argument directly than to attempt to avoid it.

In her exposition of what she calls “horrendous evils” in her book Horrendous Evils and the Goodness of God, Adams wrote:

“Among the evils that infect this world, some are worse than others. I want to try to capture the most pernicious of them within the category of horrendous evils, which I define (for present purposes) as ‘evils the participation in which (that is, the doing or suffering of which) constitutes prima facie reason to doubt whether the participant’s life could (given their inclusion in it) be a great good to him/her on the whole.’ The class of paradigm horrors includes both individual and massive collective suffering…”

Marilyn McCord Adams, Horrendous Evils and the Goodness of God, Ithaca: Cornell University Press, 1999, p. 26.

She went on to add in the next section:

“I believe most people would agree that such evils as listed above constitute reason to doubt whether the participants’ life can be worth living, because it is so difficult humanly to conceive how such evils could be overcome.”

Loc. cit.

In the last paragraph of her paper of the same title, Adams again suggests that horrendous evils call into question the possibility of having a life worth living:

“I would go one step further: assuming the pragmatic and/or moral (I would prefer to say, broadly speaking, religious) importance of believing that (one’s own) human life is worth living, the ability of Christianity to exhibit how this could be so despite human vulnerability to horrendous evil, constitutes a pragmatic/moral/religious consideration in its favour, relative to value schemes that do not.”

Marilyn McCord Adams, “Horrendous Evils and the Goodness of God.” Anthologized in The Problem of Evil, edited by Marilyn McCord Adams and Robert Merrihew Adams, Oxford: Oxford University Press, 1990, p. 221.

A generalization of Adams’ argument could easily bring us from the point where horrendous evils make the individual doubt whether his or her own life is worth living to the point where humanity on the whole legitimately, and on principle, questions whether any human life at all is worth living. If humanity comes to decide that horrendous evils overwhelm all value in the world and make human existence utterly meaningless and pointless, then the mitigation of existential risk can come to seem like an evil or an impiety.

Adams finds her answer to this in Christianity; we naturalists cannot appeal to supernaturalistic validation or justification. We must take human evil on its face along with human good, and if we prioritize the mitigation of existential risk (and therefore the continuity of humanity and human civilization), we do so knowing that human evils will continue and are probably ineradicable, if not inseparable from human history.

We can actively seek to mitigate human evils, and the effort has intrinsic value, but the intrinsic value of the mitigation of suffering and of mundane meliorism can be realized only so long as humanity and organized human activity continue. Therefore the prioritization of existential risk mitigation is what makes possible the realization of the intrinsic value of the mitigation of suffering and of efforts toward meliorism. With the end of humanity would come an end not only to all the intrinsic goods of human life, but also to the intrinsic good of the mitigation of suffering and the effort to make the world a better place.

We can only create a better civilization if civilization continues. If we are perfectibilists, we may believe in the perfectibility of man and indeed even the perfectibility of civilization. This project cannot even be undertaken if humanity and human civilization are cut short in their imperfect state.

. . . . .

Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

. . . . .


8 Responses to “Moral Imperatives Posed by Existential Risk”

  1. Mark Waser said

    What about a seventh scenario — permanent (or extremely long-term) enslavement (whether by “aliens” or our own technology) or the crippling of the potential of the human race?

    • geopolicraticus said

      Certainly, yes. My list was not intended to be exhaustive. And your example poses another interesting moral perspective. I can’t imagine that many who would approve of the end of humanity or the end of human civilization as a good thing would allow that universal human enslavement was a good thing, and therefore to be preferred over annihilation. But suppose an alien power took control of the earth and ended all human evils, but at the price of benevolent slavery. Where would this fall in the moral continuum? Is this better or worse than annihilation? I would say it was better, because there would always be hope that we could free ourselves. On the other hand, we would have been shown that human sentience and human civilization are not radically unique, which would (relatively speaking) lower their cosmic value.

      Best wishes,

      Nick

  2. I think Bart Ehrman in God’s Problem did a pretty good job of beating up on the escape to Christianity, at least with respect to the standard arguments about why bad things happen to good people.

    It goes a long way toward explaining the Marcionite/Gnostic dualistic views, which held that the God of the Old Testament was a corrupt, evil or foolish lesser god, and the God of the New Testament was the spiritual, saving good (divine spark and all that good stuff). A god that was part of, or responsible for, this world could not be a good god.

    Since cockroaches are presumably more survivable than we are in a really big cosmic or nuclear catastrophe, should we be trying to save them first? Preserving at least one form of terrestrial multicellular complexity?

    • geopolicraticus said

      Thanks for the comment!

      I’ve read Ehrman’s book God’s Problem: How the Bible Fails to Answer Our Most Important Question–Why We Suffer, and enjoyed his personal anecdotes therein. It was obviously an amazing journey that he made, intellectually speaking. On a related note, I wrote about the problem of suffering in Naturalism and Suffering.

      Cockroaches are definitely more survivable than human beings (they can take at least twice the radiation dosage of a human being, I think 1,200 roentgens) and will likely outlive us unless we perish in an instance of terrestrial sterilization or the reduction of all biological complexity. In these cases, there isn’t much we could do to contribute to the survivability of cockroaches. However, if human beings take earth-originating life off the planet and give it other opportunities elsewhere, then there is a great deal that we can do in terms of helping terrestrial biological complexity survive.

      Apart from the intervention of human technology, the only other scenario in which earth-originating life survives in the long term is if some of it gets blasted out on a rock and ends up somewhere else where it has a future — like a low-tech version of what happens in When Worlds Collide. While this is most likely to be the case with microorganisms, it is not beyond the realm of possibility that a hardy cockroach might successfully make the journey. Perhaps you are right, then, and we should be shooting off rockets with cockroaches and other “weedy” species in order to give earth-originating life a second chance.

      Best wishes,

      Nick

  3. The permanent enslavement Mark mentions would be a good example of the ER subtype Bostrom calls Flawed Realization. In fact, the subclassifications here seem like a great refinement of the scope of the two intermediate scenarios of Permanent Stagnation and Flawed Realization. Very handy!

    The more closely we can describe these things, the sooner others will be able to join in raising awareness.

    • geopolicraticus said

      Hi Heath,

      If human beings engage in the enslavement of other human beings, and a future civilization comes to be defined by this large scale enslavement, I don’t think that there is any question that this is a flawed realization of civilization. However, if human beings are enslaved by other non-terrestrial species or civilization, then I think we should call this “subsequent ruination,” since human civilization seemed to be headed in an interesting direction and was subsequently ruined by its enslavement to another civilization. However, in this scenario, flawed realization may be a conditio sine qua non, since the enslaving alien civilization then represents an instance of flawed realization.

      There is an interesting tension in the alien enslavement scenario in so far as we can consider it a “natural” disaster, like an asteroid impact or massive vulcanism, or we can think of it as a disaster that derives from civilization — only not our civilization. In other words, industrial-technical civilization can develop risks for itself (for the beings that created this civilization) or for other species and their civilizations. This suggests we need to distinguish between anthropogenic (or Earth-originating) civilizational failures and xenogenic civilizational failures.

      Best wishes,

      Nick

  4. Ah, I had missed that Mark was referring to the alien oppressors scenario also. Good point.

    From the point of view of life in the universe as a whole, a situation where we run into alien oppressors someday does amount to failed realization, just as surely as if we ourselves were to become alien oppressors. It’d be a failure of the potential for intelligent life, whatever its form, to achieve both its own goals and yet also allow for the goals of other beings as part of the same unbroken chain of life as a whole.

    But this raises questions about the nature and extent of natural selection, or ‘survival of the fittest.’ I’ve always thought that the fittest of civilizations would find a way to neutralize the expansionism of its would-be foes without destroying them. Sun Tzu style.

    • geopolicraticus said

      Hi Heath,

      An important observation, to be sure — on a macroscopic level, the domination of multiple spacefaring civilizations by a powerful and oppressive civilization that enslaves the others is a flawed realization of civilization at a greater order of magnitude, and a scenario that must be contemplated when thinking in terms of the big picture of civilization.

      Certainly the wisest civilization would neutralize the expansion of others without destroying them, but it may well be that aggressive expansionism is selected for. And if we don’t get there first, some other sentient species will beat us to the punch.

      Best wishes,

      Nick
