Eighth in a Series on Existential Risk:


Every Risk is also an Opportunity

It is a commonplace that every risk is an opportunity, and every opportunity is a risk; risk and opportunity are two sides of the same coin. This can also be expressed by distinguishing negative risk (what we ordinarily call “risk” simpliciter) and positive risk (what we ordinarily call “opportunity”). What this means in terms of existential thought is that every existential risk is an existential opportunity, and every existential opportunity is at the same time an existential risk.

If we understand by risk the uncertainty of frequency and uncertainty of magnitude of future loss, then by opportunity we should understand the uncertainty of frequency and uncertainty of magnitude of future gain. The relative probability of a loss is offset by the relative probability of a gain, and the relative probability of a gain is offset by the relative probability of a loss; both are calculable; both are, in principle, insurable. Thus these risks and opportunities represent the subset of uncertainties that present actionable mitigation strategies. Where uncertainty exceeds the possibility of actionable mitigation, we pass beyond insurable risk to uncertainty proper.
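This distinction between insurable risk and uncertainty proper can be made concrete with a toy calculation. The sketch below is purely illustrative, and every number in it is a hypothetical placeholder: the point is only that when the frequency and the magnitude of loss can both be estimated, an expected loss, and hence an insurance premium, becomes calculable.

```python
import random

random.seed(42)

# Hypothetical, illustrative figures: loss events arrive about 0.1 times
# per year (a Poisson process) and average $100,000 each (exponential
# magnitude). Neither number comes from any actual actuarial source.
FREQ = 0.1          # expected loss events per year
MEAN_MAG = 100_000  # mean magnitude of a single loss

def one_year_loss():
    """Simulate total loss over one year: exponential inter-arrival
    times between events, exponential loss magnitudes."""
    t = random.expovariate(FREQ)  # time of first event
    loss = 0.0
    while t < 1.0:
        loss += random.expovariate(1.0 / MEAN_MAG)
        t += random.expovariate(FREQ)
    return loss

trials = 100_000
expected_loss = sum(one_year_loss() for _ in range(trials)) / trials

# Because frequency and magnitude are quantifiable, a premium can be set:
premium = expected_loss * 1.2  # expected loss plus a 20% loading
```

Analytically the expected annual loss is FREQ × MEAN_MAG = $10,000, and the simulation converges on that figure. Where no such distributions can be estimated at all, no premium can be computed; the situation is then uncertainty, not risk.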

In existential risk scenarios, our very existence is at stake; in existential opportunity scenarios, again, our very existence is at stake. To formulate this parallel to the above, we can assert that existential risk is the uncertainty of frequency and uncertainty of magnitude of future loss of earth-originating life and civilization, while existential opportunity is the uncertainty of frequency and uncertainty of magnitude of future gain for earth-originating life and civilization. In formulating the existential condition of humanity, there is little that is risk sensu stricto, since much of the big picture of the human future is given over to uncertainty that lies beyond presently actionable risk. However, the calculus of risk and reward remains, even if we are not speaking strictly of risk that can be fully calculated and thus fully insured. In other words, the existential uncertainties facing humanity admit of a distinction between positive uncertainties and negative uncertainties. Any valuation of this kind, however, is intrinsically disputable and controversial.

Given that our very existence is at stake in existential opportunity no less than in existential risk, a future defined by the realization of an existential opportunity might be unrecognizable as a human future. Indeed, the realization of an existential opportunity might be every bit as unrecognizable as the realization of an existential threat, which means that the two futures might be indistinguishable, which means in turn that existential opportunity might be mistaken for existential risk, and vice versa.

Faced with a stark choice (i.e., faced with an existential choice), I think few would choose extinction, flawed realization, permanent stagnation, or subsequent ruination over species survival, flawless realization, permanent amelioration, or subsequent escalation. (If, in moments of decision in our life, we make our choice in fear and trembling, how must we fear and tremble in moments of decision for our species?) Any such choice, however, is not likely to be visited upon us in this form.

Much more likely than an explicit choice between a utopian future of astonishing wonders and a dystopian future of dismal oppression is an imperceptibly gradual process, whereby a promising future suggests certain day-to-day decisions (each seemingly the seizing of an opportunity) that lead incrementally to a future whose unintended consequences greatly outweigh the promises that prompted those daily decisions. This is how history generally works: by degrees, and not by intention. (Notwithstanding the Will Durant quote — “The future never just happened, it was created.” — that I mentioned in Predicting the Human Future in Space.)

In so far as industrial-technological civilization continues its exponential growth of technology (growing incrementally and often imperceptibly by degrees, and not always by intention), and therefore also the growth of human agency in shaping our environment, the expanding scope of this civilization will mitigate certain existential risks even as it exposes humanity to new and unprecedented risks. That is to say, industrial-technological civilization itself is at once both a risk and an opportunity. Civilization centered on escalating industrial-technological development exposes us to escalating industrial accidents and unintended consequences of technology, unprecedented pollution from industrial processes, changes in our way of life, and indeed changes to our very being as a result of the technological transformation of humanity (i.e., transhumanism).

At the same time, escalating industrial-technological development offers the unprecedented possibility of a spacefaring civilization, which could establish earth-originating life off the surface of the earth and thereby secure the minimum redundancy necessary to the long-term survival of such life. The transition of the terrestrial economy to an economy fully integrated with the industrialization of space — a process that I have called extraterrestrialization — could not take place without the advent of industrial-technological civilization.

Yet the expansion of business operations and interests into extraterrestrial space is a paradigm of uncertainty — no such effort has been made on a large scale, and so the risks of such an enterprise are unknown and cannot be calculated, fully managed, or insured against. Space operations therefore exemplify uncertainty rather than risk, and for the same reason that such operations are uncertain, their execution is potentially beset with contingencies unknown to us today. This does not make such an enterprise too risky to contemplate — it is the only imaginable contribution that industrial-technological civilization can make to the long-term survival of earth-originating life — but we must undertake such enterprises without illusions, or the losses subsequently endured may prove socially unsustainable, leading to the end of the enterprise. Unforeseen losses resulting from the transition to a spacefaring civilization may even be interpreted as a form of subsequent ruination, and thereby conceived by many as an existential threat. How we understand existential risk, then, affects what we understand to be a risk and what we understand to be a reward.

In the larger context of industrial-technological civilization we can identify individual industries and technologies that represent in themselves both risks and opportunities. The most fantastic speculations of transhumanist utopias, like the most dismal speculations on transhumanist dystopias, constitute unprecedented opportunities (or risks) implied by the present trajectories of technology. One of the best examples of risk and opportunity in future technology is the possibility of nano-scale robots. The development of nano-scale robots could, on the one hand, provide for unprecedented medical technologies — robots that could be injected like an inoculation and would treat medical conditions from the inside out, repairing the body on a microscopic scale and potentially greatly improving health and extending longevity. On the other hand, nano-scale robots loose in the biosphere could potentially cause great harm, if not havoc, perhaps even resulting in a gray goo scenario.

In so far as any proposed existential risk mitigation initiatives prioritize safety over opportunity, concern for existential risk could itself become an existential risk, by lending support to policies that respond to existential threats with a risk-averse, calculated stagnation. The question then becomes how humanity can lower its exposure to existential risks without reducing its existential opportunities. The attempt to answer this question, even if it does not issue in clear, unambiguous imperatives, may at least provide a framework in which to conceptualize problematic scenarios for the human future — such as transhumanism — that some may identify as desirable while others would identify the same as a moral horror.

. . . . .


Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

. . . . .


Grand Strategy Annex

. . . . .


Seventh in a Series on Existential Risk:


Infosec as a Guide to Existential Risk

Many of the simplest and seemingly most obvious ideas that we invoke almost every day of our lives are the most inscrutably difficult to formulate in any kind of rigorous way. This is true of time, for example. Saint Augustine famously asked in his Confessions:

What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not: yet I say boldly that I know, that if nothing passed away, time past were not; and if nothing were coming, a time to come were not; and if nothing were, time present were not. (11.14.17)

quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio. fidenter tamen dico scire me quod, si nihil praeteriret, non esset praeteritum tempus, et si nihil adveniret, non esset futurum tempus, et si nihil esset, non esset praesens tempus.

Marx made a similar point in a slightly different way when he tried to define commodities at the beginning of Das Kapital:

“A commodity appears, at first sight, a very trivial thing, and easily understood. Its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.”

“Eine Ware scheint auf den ersten Blick ein selbstverständliches, triviales Ding. Ihre Analyse ergibt, daß sie ein sehr vertracktes Ding ist, voll metaphysischer Spitzfindigkeit und theologischer Mücken.”

Karl Marx, Capital: A Critique of Political Economy, Vol. I. “The Process of Capitalist Production,” Book I, Part I, Chapter I, Section 4., “The Fetishism of Commodities and the Secret Thereof”

Augustine on time and Marx on commodities are virtually interchangeable. Marx might have said, What then is a commodity? If no one asks me, I know: if I wish to explain it to one that asketh, I know not, while Augustine might have said, Time appears, at first sight, a very trivial thing, and easily understood. Its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.

As with time and commodities, so too with risk: What is risk? If no one asks me, I know, but if someone asks me to explain, I can’t. Risk appears, at first sight, a very trivial thing, and easily understood; its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.

In my writings to date on existential risk I have been developing existential risk in a theoretical context of what is called Knightian risk, because this conception of risk was given its initial exposition by Frank Knight. I quoted Knight’s book Risk, Uncertainty, and Profit at some length in several posts here in an effort to try to place existential risk within a context of Knightian risk. There are, however, alternative formulations of risk, and alternative formulations of risk point to alternative formulations of existential risk.

I happened to notice that a recent issue of Network World had a cover story on “Why don’t risk management programs work?”. The article is an exchange between Jack Jones and Alexander Hutton, information security (infosec) specialists who were struggling with just the sort of foundational issues about risk that I have noted above. Alexander Hutton sounds like he is quoting Augustine:

“…what is risk? What creates it and how is it measured? These things in and of themselves are evolving hypotheses.”

Both Hutton and Jones point to the weaknesses in the concept of risk that are due to insufficient care in formulations and theoretical models. Jones talks about the inconsistent use of terminology, and Hutton says the following about formal theoretical methods:

“Without strong data and formal methods that are widely identified as useful and successful, the Overconfidence Effect (a serious cognitive bias) is deep and strong. Combined with the stress of our thinning money and time resources, this Overconfidence Effect leads to a generally dismissive attitude toward formalism.”

Probably without knowing it, Jones and Hutton have echoed Kant, who in his little pamphlet On the Old Saw: ‘That May Be Right in Theory, But it Won’t Work in Practice’ argued that the proper response to an inadequate theory is not less theory but more theory. Here is a short quote from that work of Kant’s to give a flavor of his exposition:

“…theory may be incomplete, and can perhaps be perfected only by future experiments and experiences from which the newly qualified doctor, agriculturalist or economist can and ought to abstract new rules for himself to complete his theory. It is therefore not the fault of the theory if it is of little practical use in such cases. The fault is that there is not enough theory; the person concerned ought to have learnt from experience.”

In the above-quoted article Jack Jones develops the (Kantian) theme of insufficient theoretical foundations, as well as that of multiple approaches to risk that risk clouding our understanding of risk by assigning distinct meanings to one and the same term:

“Risk management programs don’t work because our profession doesn’t, in large part, understand risk. And without understanding the problem we’re trying to manage, we’re pretty much guaranteed to fail… Some practitioners seem to think risk equates to outcome uncertainty (positive or negative), while others believe it’s about the frequency and magnitude of loss. Two fundamentally different views.”

Jones goes on to add:

“…although I’ve heard the arguments for risk = uncertainty, I have yet to see a practical application of the theory to information security. Besides, whenever I’ve spoken with the stakeholders who sign my paychecks, what they care about is the second definition. They don’t see the point in the first definition because in their world the ‘upside’ part of the equation is called ‘opportunity’ and not ‘positive risk’.”

Are these two concepts of risk — uncertainty vs. frequency and magnitude of loss — really fundamentally distinct paradigms for risk? Reading a little further into the literature of risk management in information technology, I found that “frequency and magnitude of loss” is almost always prefaced by “probability of” or “likelihood of,” as in this definition of risk in Risk Management: The Open Group Guide, edited by Ian Dobson and Jim Hietala:

“Risk is the probable frequency and probable magnitude of future loss. With this as a starting point, the first two obvious components of risk are loss frequency and loss magnitude.” (section 5.2.1)

What does it mean to speak in terms of probable frequency or likely frequency? It means that the frequency and magnitude of a loss is uncertain, or known only within certain limits. In other words, uncertainty is a component of risk in the definition of risk in terms of frequency and magnitude of loss.

If you have some doubts about the formulation of probable frequency and magnitude of loss in terms of uncertainty, here is a definition of “risk” from Dictionary of Economics by Harold S. Sloan and Arnold J. Zurcher (New York: Barnes and Noble, 1961), dating from well before information security was a major concern:

Risk. The possibility of loss. The term is commonly used to describe the possibility of loss from some particular hazard, as fire risk, war risk, credit risk, etc. It also describes the possibility of loss by an investor who, in popular speech, is often referred to as a risk bearer.

Possibility is just another way of thinking about uncertainty, so one could just as well define risk as the uncertainty of loss. Indeed, in the book cited above, Risk Management: The Open Group Guide, there are several formulations in terms of uncertainty, as, for example:

“A study and analysis of risk is a difficult task. Such an analysis involves a discussion of potential states, and it commonly involves using information that contains some level of uncertainty. And so, therefore, an analyst cannot exactly know the risk in past, current, or future state with absolute certainty.” (2.2.1)

We see, then, that uncertainty is a constitutive element of formulations of risk in terms of frequency and magnitude of loss. It is also easy to see that in using terms such as “frequency” and “magnitude,” which clearly imply quantitative measures, we are dealing with uncertainties that can (at least ideally) be measured and quantified; and this is nothing other than Knightian risk, though Knightian risk is usually formulated in terms of uncertainties against which we can be insured. Insuring a risk is made possible through its quantification; those uncertainties that lie beyond the reach of reasonably accurate quantitative prediction remain uncertainties and cannot be transformed into risks. I have suggested in my previous posts that it is the accumulation of knowledge that transforms uncertainties into risk, and I think you will find that this also holds good in infosec: as knowledge of information technologies improves, risk management will improve. Indeed, as much is implied in a couple of quotes from the infosec article cited above. Here is Jack Jones:

“We have the opportunity to break new ground — establish a new science, if you will. What could be more fun than that? There’s still so much to figure out!”

And here is Alexander Hutton making a similar point:

“…the key to success in security and risk for the foreseeable future is going to be data science.”

The development of data science would mean a systematic way of accumulating knowledge that would transform uncertainty into risk and thereby make uncertainties manageable. In other words, when we know more, we will know more about the frequency and magnitude of loss, and the more we know about it, the more we can insure against this loss.

The two conceptions of risk discussed above — risk as uncertainty and risk as probable frequency and magnitude of loss — are not mutually exclusive but rather complementary; uncertainty is employed (if implicitly) in formulations in terms of frequency and magnitude of loss, so that uncertainty is the more fundamental concept. In other words, Knightian risk and uncertainty are the theoretical foundations lacking in infosec formulations. At the same time, the elaboration of risk management in infosec formulations built upon implicit foundations of Knightian risk can be used to arrive at parallel formulations of existential risk.

Existential risk can be understood in terms of the probable frequency and probable magnitude of existential loss, with probable frequency decomposed into existential threat event frequency and existential vulnerability, and so on. Indeed, one of the great difficulties of existential risk consciousness raising stems from the fact that existential threat event frequency must be measured on a time scale that is almost inaccessible to human time consciousness. It is only with the advent of scientific historiography that we have become aware of how often we have dodged the bullet in the past — an observation that suggests that the great filter lies in the past (or perhaps in the present) and not in the future (or so we can hope). In other words, the systematic cultivation of knowledge transforms uncertainty into manageable risk. Thus we can immediately see the relevance of threat event frequency to existential risk mitigation.
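This decomposition can be sketched numerically. The figures below are hypothetical placeholders, not estimates from the literature; they are chosen only to show how threat event frequency and vulnerability combine into a loss event frequency, and why timescales of this order elude ordinary time consciousness.

```python
import math

# Hypothetical placeholder figures, purely for illustration.
threat_event_frequency = 1 / 1_000_000  # threatening events per year
vulnerability = 0.5                     # chance such an event defeats our mitigations
horizon_years = 100_000                 # a horizon far beyond recorded history

# FAIR-style decomposition: loss event frequency =
# threat event frequency x vulnerability
loss_event_frequency = threat_event_frequency * vulnerability

# Treating loss events as a Poisson process, the chance of at least one
# existential loss over the horizon:
p_loss = 1 - math.exp(-loss_event_frequency * horizon_years)  # ~0.049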

Existential risk formulations can illuminate infosec formulations and vice versa. For example, in the book mentioned above, Risk Management: The Open Group Guide, we find this: “Unfortunately, Probable Loss Magnitude (PLM) is one of the toughest nuts to crack in analyzing risk.” Yet in existential risk formulations magnitude of loss has been a central concern, and is quantified by the scope parameter in Bostrom’s qualitative categories of risk.

Table of qualitative risk categories from the book Global Catastrophic Risks.

There is an additional sense in which infosec is relevant to existential risk, and this is the fact that, as industrial-technological civilization incrementally migrates onto virtual platforms, industrial-technological civilization will come progressively closer to being identical to its virtual representation. More and more, the map will be indistinguishable from the territory. This process has already begun in our time, though this beginning is only the thinnest part of the thin edge of the wedge.

We are, at present, far short of totality in the virtual representation of industrial-technological civilization, and perhaps further still from the indistinguishability of virtual and actual worlds. However, we are not at all far short of the indispensability of the virtual to the maintenance of actual industrial-technological civilization, so that the maintenance of the virtual infrastructure of industrial-technological civilization is close to being a conditio sine qua non of the viability of actual industrial-technological civilization. In this way, infosec plays a crucial role in existential risk mitigation.

As I described in The Most Prevalent Form of Degradation in Civilized Life, civilization is the vehicle and the instrument of earth-originating life and its correlates, so that civilizational risks such as flawed realization, permanent stagnation, and subsequent ruination must be accounted co-equal existential threats alongside extinction risks.

If the future of earth-originating life and its correlates is dependent upon industrial-technological civilization, and if industrial-technological civilization is dependent upon an indispensable virtual infrastructure, then the future of earth-originating life and its correlates is dependent upon the indispensable virtual infrastructure of industrial-technological civilization.


. . . . .


Earth and the moon in one frame as seen from the Galileo spacecraft 6.2 million kilometers away. (from Picture of Earth from Space by Fraser Cain)

In my post Existential Risk and Existential Uncertainty I cited Frank Knight’s distinction between risk and uncertainty and attempted to apply it to the idea of existential risk. I suggested that discussions of existential risk ought to distinguish between existential risk (sensu stricto) and existential uncertainty.

In Knight’s now-classic work Risk, Uncertainty, and Profit, Frank Knight actually made a threefold distinction in the kinds of probabilities that face the entrepreneur:

1. A priori probability. Absolutely homogeneous classification of instances completely identical except for really indeterminate factors. This judgment of probability is on the same logical plane as the propositions of mathematics (which also may be viewed, and are viewed by the writer, as “ultimately” inductions from experience).

2. Statistical probability. Empirical evaluation of the frequency of association between predicates, not analyzable into varying combinations of equally probable alternatives. It must be emphasized that any high degree of confidence that the proportions found in the past will hold in the future is still based on an a priori judgment of indeterminateness. Two complications are to be kept separate: first, the impossibility of eliminating all factors not really indeterminate; and, second, the impossibility of enumerating the equally probable alternatives involved and determining their mode of combination so as to evaluate the probability by a priori calculation. The main distinguishing characteristic of this type is that it rests on an empirical classification of instances.

3. Estimates. The distinction here is that there is no valid basis of any kind for classifying instances. This form of probability is involved in the greatest logical difficulties of all, and no very satisfactory discussion of it can be given, but its distinction from the other types must be emphasized and some of its complicated relations indicated.

Frank Knight, Risk, Uncertainty, and Profit, Chap. VII

At the end of the chapter Knight finally made his point fully explicit:

It is this third type of probability or uncertainty which has been neglected in economic theory, and which we propose to put in its rightful place. As we have repeatedly pointed out, an uncertainty which can by any method be reduced to an objective, quantitatively determinate probability, can be reduced to complete certainty by grouping cases. The business world has evolved several organization devices for effectuating this consolidation, with the result that when the technique of business organization is fairly developed, measurable uncertainties do not introduce into business any uncertainty whatever. Later in our study we shall glance hurriedly at some of these organization expedients, which are the only economic effect of uncertainty in the probability sense; but the present and more important task is to follow out the consequences of that higher form of uncertainty not susceptible to measurement and hence to elimination. It is this true uncertainty which by preventing the theoretically perfect outworking of the tendencies of competition gives the characteristic form of “enterprise” to economic organization as a whole and accounts for the peculiar income of the entrepreneur.

Frank Knight, Risk, Uncertainty, and Profit, Chap. VII

Knight’s distinction between risk and uncertainty — between probabilities that can be calculated, managed, and insured against and probabilities that cannot be calculated and therefore cannot be managed or insured against — continues to be taught in business and economics today. (It is a distinction closely parallel to Rumsfeld’s distinction between known unknowns and unknown unknowns, though worked out in considerably greater detail and sophistication.) Knight’s slightly more subtle threefold distinction among probabilities might be characterized as a tripartite distinction between certainty, risk, and uncertainty.

Knight acknowledges, in his account of statistical probability, i.e., risk, that there are at least two complications:

1. the impossibility of eliminating all factors that are not really indeterminate, and…

2. the impossibility of enumerating the equally probable alternatives involved

Knight’s hedged account of risk obliquely acknowledges the gray area that lies between risk and uncertainty — a gray area that can be enlarged in favor of risk as our knowledge improves, or which can be enlarged in favor of uncertainty as additional complicating factors enter into our calculation of risk and render our knowledge less effective and our uncertainty all the greater. That is to say, the line between risk and uncertainty is unclear, and it can move, which makes it doubly ambiguous.


These hedges are important qualifications to make, because we all know from real-life experience that additional complicating factors always enter into actual risks. We may try to insure ourselves, and therefore to secure our interests against risk, but fine print in the insurance contract may deny us a settlement, or we may have forgotten to pay our premiums, or a thousand other things might go wrong between our careful planning and the actual events of life that so often preempt our planning and force us to deal with the unexpected with insufficient preparation. As Bobby Burns put it, “The best laid schemes o’ Mice an’ Men, Gang aft agley, An’ lea’e us nought but grief an’ pain, For promis’d joy!”


Such considerations span the entire universe, from field mice to galaxies. A coldly rational assessment of risk can be made, and resources can be expended to mitigate risk to the extent calculated; but not only are there limits to our knowledge, we do not know where those limits lie. Indeterminate features can creep into our calculations, and equally probable alternatives could be in play without our even being aware of the fact.

Some events that pose existential risks or global catastrophic risks can be predicted with a high degree of accuracy, and some cannot. Even those risks that seem predictable to a high degree of accuracy — say, the life of the sun, which can be predicted on the basis of our knowledge of cosmology, and which thereby would seem to give us some knowledge of how long a time we have on earth to lay our schemes — admit of indeterminate elements and equally probable scenarios. The end of the earth seems a long way off, if the earth lasts almost as long as the sun, and putting our existential risk far in the future seems to diminish the threat.

There is a famous quote from Frank Ramsey (who died tragically young of a sudden illness, as might happen to anyone, anytime) that is relevant here, both economically and philosophically:

My picture of the world is drawn in perspective and not like a model to scale. The foreground is occupied by human beings and the stars are all as small as three-penny bits. I don’t really believe in astronomy, except as a complicated description of part of the course of human and possibly animal sensation. I apply my perspective not merely to space but also to time. In time the world will cool and everything will die; but that is a long time off still and its present value at compound discount is almost nothing.

From a paper read to the Apostles, a Cambridge discussion society (1925). In ‘The Foundations of Mathematics’ (1925), collected in Frank Plumpton Ramsey and D. H. Mellor (ed.), Philosophical Papers (1990), Epilogue, 249

Despite Ramsey having referred (in another context) to the “Bolshevik menace” of Brouwer and Weyl, it has been said that Ramsey became a constructivist not long before he died. This conversion should not surprise us, given the Protagorean profession Ramsey makes in this passage.

Protagoras famously said that Man is the measure of all things, of those things that are, that they are, and of those things that are not, that they are not. (πάντων χρημάτων μέτρον ἐστὶν ἄνθρωπος, τῶν μὲν ὄντων ὡς ἔστιν, τῶν δὲ οὐκ ὄντων ὡς οὐκ ἔστιν.) Protagoras may be counted as the earliest of proto-constructivists, of which company Kant and Poincaré may be considered the most famous.

In the passage quoted above, Ramsey is essentially saying in a modern idiom that man is the measure of all things, even of time and space — that man is the measure of the farthest reaches of time and space, and in so far as these distant prospects of human experience are inaccessible, they are irrelevant. Ramsey is important in this respect because of his consciously chosen anthropocentrism. In a post-Copernican age, this is significant. We are all, of course, familiar with the advocates of the anthropic cosmological principle, and their implicit anthropocentrism, but Ramsey gives us a slightly different perspective on anthropocentrism. Ramsey essentially brings constructivism to our moral life, and in so doing suggests that the moral imperatives posed by existential risk can be safely ignored for the time being.

What Ramsey is saying here is that we can make a definite calculation of the lives of the stars — and also the expected life of our sun — and that we can insure against this risk, but that the risk lies so far in the future that its present value is practically nil. In other words, the eventual burning out of the sun is a risk and not an uncertainty. Indeed, it is not merely a risk, but a certainty. Just as the finite amount of oil on Earth must eventually come to an end, the finite sun must also come to an end.
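Ramsey’s “present value at compound discount” can be made concrete. The following minimal sketch assumes an illustrative 1% annual discount rate and a normalized total loss of 1.0 (neither figure is Ramsey’s own); the point is only that any positive discount rate drives the present value of a sufficiently remote loss toward zero.

```python
import math

# Ramsey's "present value at compound discount": a loss L occurring t
# years from now, discounted at annual rate r, has present value
#   PV = L / (1 + r)**t
# Computed in log space so that astronomical timescales don't overflow.

def present_value(loss: float, rate: float, years: float) -> float:
    """Present value of a future loss under compound discounting."""
    return loss * math.exp(-years * math.log1p(rate))

# Normalize the loss of everything we value to 1.0 and discount at a
# modest 1% per year (both figures are illustrative assumptions).
pv_5k = present_value(1.0, 0.01, 5_000)           # already ~2.5e-22 after 5,000 years
pv_sun = present_value(1.0, 0.01, 5_000_000_000)  # underflows to exactly 0.0

print(pv_5k, pv_sun)
```

On the roughly five-billion-year timescale of the sun’s death, the discounted value underflows to zero: Ramsey’s “almost nothing” is, in double-precision arithmetic, literally nothing.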

What our growing knowledge of cosmology is teaching us is that we are not isolated from the wider universe. Events on a cosmic scale have influenced the development of life on earth, and may well be responsible for our development as a species. If the earth had not been hit by an asteroid or comet about 65 million years ago, mammals might never have developed as they did, and we would not exist. And if our solar system did not bob up and down through the galactic plane of the Milky Way, resulting in geophysical rhythms from the gravitational interaction with the rest of the galaxy, a distant asteroid or comet might not have been dislodged from its stable orbit and sent careening toward earth.

Given our connection with the wider universe, and our vulnerability to its convulsions, what we know about our local risks (which is not nearly enough, as recent unpredicted episodes have shown us) is not enough to make a calculation of our vulnerability. What appears superficially to be a calculable risk has uncertainty injected back into it by the cosmological context in which all astronomical events take place.

If the death of the sun were the only existential risk against which we needed to insure ourselves, we would not need to be in any hurry about existential risk mitigation, because we would have literally millions of years to build a spacefaring civilization and thus to insure ourselves against that predictable catastrophe. But in our violent universe (as Nigel Calder called it) scarcely a million years goes by without some cosmic catastrophe occurring, and we don’t know when the next one will arrive.

Carl Sagan rightly pointed out that an event that is unlikely to happen in a hundred years may be inevitable in a hundred million years:

The Earth is a lovely and more or less placid place. Things change, but slowly. We can lead a full life and never personally encounter a natural disaster more violent than a storm. And so we become complacent, relaxed, unconcerned. But in the history of Nature, the record is clear. Worlds have been devastated. Even we humans have achieved the dubious technical distinction of being able to make our own disasters, both intentional and inadvertent. On the landscapes of other planets where the records of the past have been preserved, there is abundant evidence of major catastrophes. It is all a matter of time scale. An event that would be unthinkable in a hundred years may be inevitable in a hundred million.

Carl Sagan, Cosmos, Chapter IV, “Heaven and Hell”
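Sagan’s point about time scales admits a simple calculation: if an event has a fixed annual probability p, the chance of at least one occurrence within t years is 1 − (1 − p)^t. The sketch below uses a purely illustrative value of p, not an estimate of actual impact frequency.

```python
# Sagan's time-scale point made quantitative: for an event with a fixed
# annual probability p, the chance of at least one occurrence within t
# years is 1 - (1 - p)**t. The value of p below is an illustrative
# assumption, not an estimate of real impact frequency.

def chance_within(p_annual: float, years: int) -> float:
    """Probability of at least one occurrence in the given span of years."""
    return 1.0 - (1.0 - p_annual) ** years

p = 1e-8  # assumed annual probability of a catastrophic impact

print(chance_within(p, 100))          # ~1e-6: negligible in a century
print(chance_within(p, 100_000_000))  # ~0.63: more likely than not in 1e8 years
```

An event “unthinkable in a hundred years” (odds of about one in a million) becomes, over a hundred million years, more likely than not.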

In perhaps one of his most quoted lines, Sagan said that we are “star stuff,” and certainly this is true. However, the corollary of this inspiring thought is that our star stuff is subject to the natural forces that shape the destinies of stars, and, in shaping the destiny of stars, shape the destiny of men who live on planets orbiting stars.

Understanding ourselves as “star stuff” entails understanding ourselves as living in a dangerous universe replete with devouring black holes, gamma ray bursts, supernovae, and other cataclysms almost beyond the capacity of human beings to conceive.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .


Frank Knight

Frank Knight on risk and uncertainty

Early Chicago school economist Frank Knight was known for his work on risk, and especially for the distinction between risk and uncertainty, which is still taught in economics and business courses. Like Schumpeter, Knight was interested in the function of the entrepreneur in the modern commercial economy, and he employed his distinction between risk and uncertainty in order to illuminate the function of the entrepreneur.

Although it is easy to conflate risk and uncertainty, and to speak as though facing a risk were the same thing as facing uncertain or unknown circumstances, Knight doesn’t see it like this at all. A risk can be quantified and calculated, and because risks can be quantified and calculated, they can be controlled. This is the function of insurance: to quantify and price risk. If you have correctly factored risk into your calculation, it is no longer an uncertainty. You might not know the exact date or magnitude of losses, but you know statistically that there will be a certain number of losses of a certain magnitude. It is the job of actuaries to calculate this, and one buys insurance to control the risk to which one is exposed.
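The actuarial logic described above can be sketched in a few lines: once the frequency and magnitude of losses are statistically known, the expected annual loss can be computed and priced as a premium. The frequency, magnitude, and loading factor below are illustrative assumptions, not actuarial data.

```python
# Knightian (insurable) risk in miniature: when the frequency and the
# magnitude of losses are both known statistically, the expected annual
# loss can be computed and priced. All numbers used here are
# illustrative assumptions, not actuarial data.

def expected_annual_loss(frequency: float, magnitude: float) -> float:
    """Expected loss per year: (losses per year) x (average loss size)."""
    return frequency * magnitude

def premium(frequency: float, magnitude: float, loading: float = 0.25) -> float:
    """Pure premium plus a proportional loading for expenses and profit."""
    return expected_annual_loss(frequency, magnitude) * (1.0 + loading)

# Suppose 1 house in 500 suffers a total fire loss of $200,000 each year.
eal = expected_annual_loss(1 / 500, 200_000)   # expected loss per house-year
prem = premium(1 / 500, 200_000)               # premium with a 25% loading

print(eal, prem)
```

No individual homeowner knows whether his house will burn, yet the insurer, facing a group of instances rather than a unique situation, converts that ignorance into a calculable expected loss of $400 per house-year and a priced premium. This is exactly Knight’s transformation of uncertainty into risk.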

The ordinary business of life, and of business, according to Knight, involves risk management, but the unique function of the entrepreneur is to accept uncertainty that cannot be quantified, priced, or insured. The entrepreneur makes his profit not in spite of uncertainty, but because of uncertainty. No insurance can be bought for uncertainty, so that in taking on an uncertain situation the entrepreneur enters into a realm in which it is recognized that there are factors beyond control. If he is not destroyed financially by these uncontrollable factors, he may profit from them, and this profit is likely to exceed the profit made in ordinary business operations exposed to risk but not to uncertainty.

Here is how Knight formulated his distinction between risk and uncertainty:

To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term “uncertainty” for the latter. The word “risk” is ordinarily used in a loose way to refer to any sort of uncertainty viewed from the standpoint of the unfavorable contingency and the term “uncertainty” similarly with reference to the favorable outcome; we speak of the “risk” of a loss, the “uncertainty” of a gain. But if our reasoning so far is at all correct, there is a fatal ambiguity in these terms which must be gotten rid of and the use of the term “risk” in connection with the measurable uncertainties or probabilities of insurance gives some justification for specializing the terms as just indicated. We can also employ the terms “objective” and “subjective” probability to designate the risk and uncertainty respectively, as these expressions are already in general use with a signification akin to that proposed.


Knight went on to add…

The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique.


The growth of knowledge and experience can transform uncertainty into risk if it contextualizes a formerly unique situation in such a way as to demonstrate that it is not unique but belongs to a group of instances. Of the tremendous gains made in the space sciences over the last forty years, even during our selective space age stagnation, it could be said that the function of this considerable gain in knowledge has been to transform uncertainty into risk. But this goes only so far.

Even if the boundary between risk and uncertainty can be pushed outward by the growth of knowledge, the same growth of civilization that attends the growth of knowledge and technology means that the boundaries of civilization itself will also be pushed further out, with the result being that we are likely to always encounter further uncertainties even as old uncertainties are transformed by knowledge into risk.

The evolution of the existential risk concept

In many recent posts I have been discussing the idea of existential risk. These posts include, but are not limited to, Moral Imperatives Posed by Existential Risk, Research Questions on Existential Risk, and Six Theses on Existential Risk. The idea of existential risk is due to Nick Bostrom. (I first heard about this at the first 100YSS symposium in Orlando in 2011, when I was talking to Christian Weidemann.)

Nick Bostrom defined existential risk as follows:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

And added…

An existential risk is one where humankind as a whole is imperiled. Existential disasters have major adverse consequences for the course of human civilization for all time to come.

“Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Nick Bostrom, Professor, Faculty of Philosophy, Oxford University, Published in the Journal of Evolution and Technology, Vol. 9, No. 1 (2002)

In his papers on existential risk and in the book Global Catastrophic Risks, Bostrom steadily expanded and refined the parameters of disasters that have (or would have) major adverse consequences for human beings and their civilization.

Table of six qualitative categories of risk from 'Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards'

The table above, from an early existential risk paper, divides qualitative risks into six categories. The table below, from the book Global Catastrophic Risks, includes twelve qualitative risk categories and implies another eight; the table further below, from a more recent paper, includes fifteen qualitative risk categories and implies another nine. From a philosophical point of view, these further distinctions represent an advance in clarity, contextualizing both existential risks and global catastrophic risks in a matrix of related horrors.

Table of qualitative risk categories from the book Global Catastrophic Risks.

The specific possible events that Bostrom describes range from the imperceptible loss of one hair to human extinction. Recently in Moral Imperatives Posed by Existential Risk I tried to point out how further distinctions can be made within the variety of human extinction scenarios, and that some distinct outcomes might be morally preferable over other outcomes. For example, even if human beings were to become extinct, we might want some of our legacy to remain to potentially be discovered by alien species visiting our solar system. Given the presence of space probes throughout our solar system, it seems highly likely that these would survive any human extinction scenario, so that we have left some kind of mark on the cosmos — a cosmic equivalent of “Kilroy was here.”

Qualitative risk categories, Figure 2 from 'Existential Risk Prevention as Global Priority' (2012) Nick Bostrom

Further distinction can be made, however, and the distinction that I would like to urge today is that of distinguishing existential risks from existential uncertainties.

The need to explicitly formulate existential uncertainty

Once the distinction is made between existential risks and existential uncertainties, we recognize that existential risks can be quantified and calculated. Ultimately, existential risks can also be insured. The industrial and financial infrastructure to do this is not now in place, although we can clearly see how it could be built. And this much is obvious, because much of the discussion of existential risk focuses on potential mitigation efforts. Existential risk mitigation is insurance against extinction.

We can clearly understand that we can guard against the existential risks posed by massive asteroid impacts by a system of observation of objects in space likely to cross the path of the Earth, and by building spacecraft that could deflect or otherwise render harmless such threatening asteroids. It was once thought that the appearance of comets or “new stars” (novae) in the sky heralded the death of kings or the end of empires. No longer. This is the perfect example of a former uncertainty that has been transformed into a risk by the growth of knowledge (or, at very least, is in the process of being transformed from an uncertainty into a risk).

We can also clearly see that we could back up the Earth’s biosphere against a truly catastrophic global disaster by transplanting Earth-originating life elsewhere. Far in the future we can even understand the risk of the sun swelling into a red giant and consuming the Earth in its fires — unless by that time we have moved the Earth to an orbit where it remains safe, or perhaps even transported it to another star. All of these are existential risks where “risk” is used sensu stricto.

There are a great many existential risks and global catastrophic risks that have been proposed. When it comes to geological events — like massive vulcanism — or cosmological events — the death of our sun — the sciences of geology and cosmology are likely to mature to the point where these risks are quantifiable, and if industrial-technological civilization continues its path of exponential development, we should also someday have the technology to adequately “insure” against these existential risks.

The vagaries of history and civilization

When it comes to scenarios that involve events and processes not of the variety that contemporary natural science can formulate, we are clearly pushing the envelope of existential risks and verging on existential uncertainties. Such scenarios would include those predicated upon the development of human history and civilization. For example, scenarios of wars of an order of magnitude that far exceed the magnitude of the global wars of the twentieth century are on the outer edges of risk and, as they become more speculative in their formulation, verge onto uncertainty. Similarly, scenarios that involve the intervention of alien species in human history and human civilization — alien invasion, alien enslavement, alien visitation, etc. — verge onto being existential uncertainties.

The anthropogenic existential risks that are of primary concern to Nick Bostrom, Martin Rees, and others — risks from artificial intelligence, machine consciousness, unintended consequences of advanced technologies, and the “gray goo” problem potentially posed by nanotechnology — are similarly problematic as risks, and many must be accounted as uncertainties. In regard to the anthropogenic dimension of many existential uncertainties I am reminded of a passage from Carl Sagan’s Cosmos:

Biology is more like history than it is like physics. You have to know the past to understand the present. And you have to know it in exquisite detail. There is as yet no predictive theory of biology, just as there is not yet a predictive theory of history. The reasons are the same: both subjects are still too complicated for us. But we can know ourselves better by understanding other cases. The study of a single instance of extraterrestrial life, no matter how humble, will deprovincialize biology. For the first time, the biologists will know what other kinds of life are possible. When we say the search for life elsewhere is important, we are not guaranteeing that it will be easy to find – only that it is very much worth seeking.

Carl Sagan, Cosmos, CHAPTER II, One Voice in the Cosmic Fugue

This strikes me as one of the most powerful and important passages in Cosmos. When Sagan writes that, “[t]here is as yet no predictive theory of biology, just as there is not yet a predictive theory of history,” while leaving open the possibility of a future predictive science of biology and history — he wrote “as yet” — he squarely recognized that neither biology nor human history (much of which derives more or less directly from biology) can be predicted or quantified or measured in a scientific way. If we had a science of history, such as Marx thought he had discovered, then the potential disasters of human history could be quantified, and we could insure against them.

Well, we can insure against some eventualities of history, though certainly not against all. This is a point that Machiavelli makes:

It is not unknown to me how many men have had, and still have, the opinion that the affairs of the world are in such wise governed by fortune and by God that men with their wisdom cannot direct them and that no one can even help them; and because of this they would have us believe that it is not necessary to labour much in affairs, but to let chance govern them. This opinion has been more credited in our times because of the great changes in affairs which have been seen, and may still be seen, every day, beyond all human conjecture. Sometimes pondering over this, I am in some degree inclined to their opinion. Nevertheless, not to extinguish our free will, I hold it to be true that Fortune is the arbiter of one-half of our actions, but that she still leaves us to direct the other half, or perhaps a little less.

I compare her to one of those raging rivers, which when in flood overflows the plains, sweeping away trees and buildings, bearing away the soil from place to place; everything flies before it, all yield to its violence, without being able in any way to withstand it; and yet, though its nature be such, it does not follow therefore that men, when the weather becomes fair, shall not make provision, both with defences and barriers, in such a manner that, rising again, the waters may pass away by canal, and their force be neither so unrestrained nor so dangerous. So it happens with fortune, who shows her power where valour has not prepared to resist her, and thither she turns her forces where she knows that barriers and defences have not been raised to constrain her.

Nicolo Machiavelli, The Prince, CHAPTER XXV, “What Fortune Can Effect In Human Affairs, And How To Withstand Her”

What remains beyond the predictable storms and floods of history are the true uncertainties, the unknown unknowns, and these pose dangers we cannot predict, quantify, or insure against. They are not, then, risks in the strict sense. They are existential uncertainties.

It could be argued that our inability to take specific, concrete, effective measures to mitigate the obvious uncertainties of life has resulted in religious responses to uncertainty that systematically avoid falsifiability and thereby secure the immunity of our hopes from exterior circumstances. Whether or not this has been true in the past, the mere recognition of existential uncertainties is the first step toward rationally assessing them.

Existential risk suggests a clear course of mitigating action; existential uncertainty cannot, on the contrary, be the object of planning and preparation. The most that one can do to address existential uncertainty is to keep oneself open and flexible, ready to roll with the punches, and responsive to any challenge that might arise, meeting it at the height of one’s powers; any attempt to prepare specific measures will be fruitless, and quite possibly counter-productive because of the wasted effort.

. . . . .

categories of existential uncertainty

. . . . .


. . . . .


. . . . .
