Existential Risk and Existential Opportunity
3 July 2013
Eighth in a Series on Existential Risk:
Every Risk is also an Opportunity
It is a commonplace that every risk is an opportunity, and every opportunity is a risk; risk and opportunity are two sides of the same coin. This can also be expressed by distinguishing negative risk (what we ordinarily call “risk” simpliciter) and positive risk (what we ordinarily call “opportunity”). What this means in terms of existential thought is that every existential risk is an existential opportunity, and every existential opportunity is at the same time an existential risk.
If we understand by risk the uncertainty of frequency and uncertainty of magnitude of future loss, then by opportunity we should understand the uncertainty of frequency and uncertainty of magnitude of future gain. The relative probability of a loss is offset by the relative probability of a gain, and the relative probability of a gain is offset by the relative probability of a loss; both are calculable; both are, in principle, insurable. Thus these risks and opportunities represent the subset of uncertainties that present actionable mitigation strategies. Where uncertainty exceeds the possibility of actionable mitigation, we pass beyond insurable risk to uncertainty proper.
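The parallel between risk and opportunity can be put in a rough actuarial sketch (my notation, not a standard formalism): if we let each possible future scenario carry an estimated frequency and a magnitude of loss or gain, risk and opportunity become mirror-image expected values. The insurability claim above then turns on whether the probabilities can be estimated at all — where they cannot, we are in the domain of uncertainty proper.

```latex
% Sketch: risk as expected future loss, opportunity as expected future gain.
% Scenarios are indexed by i; p_i is the (estimated) frequency of scenario i,
% l_i its magnitude of loss, g_i its magnitude of gain. Both the p_i and the
% magnitudes are themselves uncertain estimates.
\mathrm{Risk} = \sum_i p_i \, l_i
\qquad
\mathrm{Opportunity} = \sum_i p_i \, g_i
```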
In existential risk scenarios, our very existence is at stake; in existential opportunity scenarios, again, our very existence is at stake. To formulate this parallel to the above, we can assert that existential risk is the uncertainty of frequency and uncertainty of magnitude of future loss of earth-originating life and civilization, while existential opportunity is the uncertainty of frequency and uncertainty of magnitude of future gain for earth-originating life and civilization. In formulating the existential condition of humanity, there is little that is risk sensu stricto, since much of the big picture of the human future is given over to uncertainty that lies beyond presently actionable risk. However, the calculus of risk and reward remains, even if we are not speaking strictly of risk that can be fully calculated and thus fully insured. In other words, the existential uncertainties facing humanity admit of a distinction between positive uncertainties and negative uncertainties. Any valuation of this kind, however, is intrinsically disputable and controversial.
Given that our very existence is at stake in existential opportunity no less than in existential risk, a future defined by the realization of an existential opportunity might be unrecognizable as a human future. Indeed, the realization of an existential opportunity might be every bit as unrecognizable as the realization of an existential threat, which means that the two futures might be indistinguishable, which means in turn that existential opportunity might be mistaken for existential risk, and vice versa.
Faced with a stark choice (i.e., faced with an existential choice), I think few would choose extinction, flawed realization, permanent stagnation, or subsequent ruination over species survival, flawless realization, permanent amelioration, or subsequent escalation. (If, in moments of decision in our life, we make our choice in fear and trembling, how must we fear and tremble in moments of decision for our species?) Any such choice, however, is not likely to be visited upon us in this form.
Much more likely than an explicit choice between a utopian future of astonishing wonders and a dystopian future of dismal oppression is an imperceptibly gradual process whereby a promising future suggests certain day-to-day decisions (seemingly seizing an opportunity) which lead incrementally to a future with unintended consequences that greatly outweigh the promises that prompted those daily decisions in the first place. This is how history generally works: by degrees, and not by intention. (Notwithstanding the Will Durant quote — “The future never just happened, it was created.” — that I mentioned in Predicting the Human Future in Space.)
In so far as industrial-technological civilization continues its exponential growth of technology (growing incrementally and often imperceptibly by degrees, and not always by intention), and therefore also the growth of human agency in shaping our environment, the expanding scope of this civilization will mitigate certain existential risks even as it exposes humanity to new and unprecedented risks. That is to say, industrial-technological civilization itself is at once both a risk and an opportunity. Civilization centered on escalating industrial-technological development exposes us to escalating industrial accidents and unintended consequences of technology, unprecedented pollution from industrial processes, changes in our way of life, and indeed changes to our very being as a result of the technological transformation of humanity (i.e., transhumanism).
At the same time, escalating industrial-technological development offers the unprecedented possibility of a spacefaring civilization, which could establish earth-originating life off the surface of the earth and thereby secure the minimum redundancy necessary to the long-term survival of such life. The transition of the terrestrial economy to an economy fully integrated with the industrialization of space — a process that I have called extraterrestrialization — could not take place without the advent of industrial-technological civilization.
Yet the expansion of business operations and interests into extraterrestrial space is a paradigm of uncertainty — no such effort has been made on a large scale, and so the risks of such an enterprise are unknown and cannot be calculated, fully managed, or insured against. Space operations therefore exemplify uncertainty rather than risk, and for the same reason that such operations are uncertain, their execution is potentially beset with contingencies unknown to us today. This does not make such an enterprise too risky to contemplate — it is the only imaginable contribution that industrial-technological civilization can make to the long-term survival of earth-originating life — but we must undertake such enterprises without illusions, or the subsequent losses endured may become socially unsustainable, leading to the end of the enterprise. Subsequent unforeseen losses resulting from the transition to a spacefaring civilization may even be interpreted as a form of subsequent ruination, and thereby conceived by many as an existential threat. How we understand existential risk, then, affects what we understand to be a risk and what we understand to be a reward.
In the larger context of industrial-technological civilization we can identify individual industries and technologies that represent in themselves both risks and opportunities. The most fantastic speculations of transhumanist utopias, like the most dismal speculations on transhumanist dystopias, constitute unprecedented opportunities (or risks) implied by the present trajectories of technology. One of the best examples of risk and opportunity in future technology is the possibility of nano-scale robots. The development of nano-scale robots could, on the one hand, provide for unprecedented medical technologies — robots that could be injected like an inoculation which would treat medical conditions from the inside out, repairing the body on a microscopic scale and potentially greatly improving health and extending longevity. On the other hand, nano-scale robots loose in the biosphere could potentially cause great harm, if not havoc, perhaps even resulting in a gray goo scenario.
In so far as any proposed existential risk mitigation initiatives prioritize safety over opportunity, concern for existential risk could itself become an existential risk by lending support to policies that address risk through calculated stagnation, instituted as a risk-averse response to existential threats. The question then becomes how humanity can lower its exposure to existential risks without reducing its existential opportunities. The attempt to answer this question, even if it does not issue in clear, unambiguous imperatives, may at least provide a framework in which to conceptualize problematic scenarios for the human future that some may identify as desirable while others would identify the same as a moral horror — such as transhumanism.
. . . . .
Existential Risk: The Philosophy of Human Survival
8. Existential Risk and Existential Opportunity
. . . . .