Monday


Seventh in a Series on Existential Risk:

risk taxonomy

Infosec as a Guide to Existential Risk


Many of the simplest and seemingly most obvious ideas that we invoke almost every day of our lives are the most inscrutably difficult to formulate in any kind of rigorous way. This is true of time, for example. Saint Augustine famously asked in his Confessions:

What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not: yet I say boldly that I know, that if nothing passed away, time past were not; and if nothing were coming, a time to come were not; and if nothing were, time present were not. (11.14.17)

quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio. fidenter tamen dico scire me quod, si nihil praeteriret, non esset praeteritum tempus, et si nihil adveniret, non esset futurum tempus, et si nihil esset, non esset praesens tempus.

Marx made a similar point in a slightly different way when he tried to define commodities at the beginning of Das Kapital:

“A commodity appears, at first sight, a very trivial thing, and easily understood. Its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.”

“Eine Ware scheint auf den ersten Blick ein selbstverständliches, triviales Ding. Ihre Analyse ergibt, daß sie ein sehr vertracktes Ding ist, voll metaphysischer Spitzfindigkeit und theologischer Mücken.”

Karl Marx, Capital: A Critique of Political Economy, Vol. I, “The Process of Capitalist Production,” Book I, Part I, Chapter I, Section 4, “The Fetishism of Commodities and the Secret Thereof”

Augustine on time and Marx on commodities are virtually interchangeable. Marx might have said, What then is a commodity? If no one asks me, I know: if I wish to explain it to one that asketh, I know not, while Augustine might have said, Time appears, at first sight, a very trivial thing, and easily understood. Its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.

As with time and commodities, so too with risk: What is risk? If no one asks me, I know, but if someone asks me to explain, I can’t. Risk appears, at first sight, a very trivial thing, and easily understood; its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.

In my writings to date on existential risk I have been developing the idea within the theoretical context of what is called Knightian risk, so named because this conception of risk was given its initial exposition by Frank Knight. I have quoted Knight’s book Risk, Uncertainty, and Profit at some length in several posts here in an effort to place existential risk within the context of Knightian risk. There are, however, alternative formulations of risk, and these alternatives point to alternative formulations of existential risk.

I happened to notice that a recent issue of Network World had a cover story asking “Why don’t risk management programs work?” The article is an exchange between Jack Jones and Alexander Hutton, information security (infosec) specialists who are struggling with just the foundational issues about risk that I have noted above. Alexander Hutton sounds as if he is quoting Augustine:

“…what is risk? What creates it and how is it measured? These things in and of themselves are evolving hypotheses.”

Both Hutton and Jones point to the weaknesses in the concept of risk that are due to insufficient care in formulations and theoretical models. Jones talks about the inconsistent use of terminology, and Hutton says the following about formal theoretical methods:

“Without strong data and formal methods that are widely identified as useful and successful, the Overconfidence Effect (a serious cognitive bias) is deep and strong. Combined with the stress of our thinning money and time resources, this Overconfidence Effect leads to a generally dismissive attitude toward formalism.”

Probably without knowing it, Jones and Hutton have echoed Kant, who in his little pamphlet On the Old Saw: ‘That May Be Right in Theory, But It Won’t Work in Practice’ argued that the proper response to an inadequate theory is not less theory but more theory. Here is a short quote from that work of Kant’s to give a flavor of his exposition:

“…theory may be incomplete, and can perhaps be perfected only by future experiments and experiences from which the newly qualified doctor, agriculturalist or economist can and ought to abstract new rules for himself to complete his theory. It is therefore not the fault of the theory if it is of little practical use in such cases. The fault is that there is not enough theory; the person concerned ought to have learnt from experience.”

In the above-quoted article Jack Jones develops the (Kantian) theme of insufficient theoretical foundations, as well as the theme of multiple approaches that risk clouding our understanding of risk by assigning distinct meanings to one and the same term:

“Risk management programs don’t work because our profession doesn’t, in large part, understand risk. And without understanding the problem we’re trying to manage, we’re pretty much guaranteed to fail… Some practitioners seem to think risk equates to outcome uncertainty (positive or negative), while others believe it’s about the frequency and magnitude of loss. Two fundamentally different views.”

Jones goes on to add:

“…although I’ve heard the arguments for risk = uncertainty, I have yet to see a practical application of the theory to information security. Besides, whenever I’ve spoken with the stakeholders who sign my paychecks, what they care about is the second definition. They don’t see the point in the first definition because in their world the ‘upside’ part of the equation is called ‘opportunity’ and not ‘positive risk’.”

Are these two concepts of risk — uncertainty vs. frequency and magnitude of loss — really fundamentally distinct paradigms for risk? Reading a little further into the literature of risk management in information technology, I found that “frequency and magnitude of loss” is almost always prefaced by “probability of” or “likelihood of,” as in this definition of risk in Risk Management: The Open Group Guide, edited by Ian Dobson and Jim Hietala:

“Risk is the probable frequency and probable magnitude of future loss. With this as a starting point, the first two obvious components of risk are loss frequency and loss magnitude.” (section 5.2.1)

What does it mean to speak in terms of probable frequency or likely frequency? It means that the frequency and magnitude of a loss are uncertain, or known only within certain limits. In other words, uncertainty is a component of risk even when risk is defined in terms of frequency and magnitude of loss.
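To make this concrete, here is a minimal sketch of such a quantified risk model. Everything in it is an assumption adopted purely for illustration (the lognormal and Poisson distributions, every parameter value, the 25% premium loading); it is not the Open Group’s model, only an instance of the general form “probable frequency and probable magnitude of loss”:

```python
import math
import random

random.seed(42)  # reproducible illustration

def poisson(lam: float) -> int:
    """Sample an event count from a Poisson distribution (Knuth's method),
    since the standard library has no Poisson sampler."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulate_annual_loss(n_trials: int = 100_000) -> list[float]:
    """Draw annual losses under an assumed model in which BOTH components
    of risk are uncertain: the frequency of loss events and the magnitude
    of each event are sampled from distributions, not given as points."""
    losses = []
    for _ in range(n_trials):
        rate = random.lognormvariate(0.0, 0.5)        # uncertain events/year
        total = sum(random.lognormvariate(10.0, 1.0)  # uncertain loss/event
                    for _ in range(poisson(rate)))
        losses.append(total)
    return losses

losses = sorted(simulate_annual_loss())
mean = sum(losses) / len(losses)
sd = math.sqrt(sum((x - mean) ** 2 for x in losses) / len(losses))
print(f"expected annual loss:        {mean:12,.0f}")
print(f"95th-percentile annual loss: {losses[int(0.95 * len(losses))]:12,.0f}")
# Quantification is what makes a risk insurable: a standard-deviation
# premium (25% loading, assumed) can only be computed from a distribution.
print(f"indicated premium:           {mean + 0.25 * sd:12,.0f}")
```

Note that the output is itself a distribution of outcomes: the “probable” in probable frequency and probable magnitude is precisely the residual uncertainty that the definition carries within it, and, as the last line anticipates, it is only because the loss can be quantified in this way that a premium can be computed for it at all.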

If you have some doubts about the formulation of probable frequency and magnitude of loss in terms of uncertainty, here is a definition of “risk” from Dictionary of Economics by Harold S. Sloan and Arnold J. Zurcher (New York: Barnes and Noble, 1961), dating from well before information security was a major concern:

Risk. The possibility of loss. The term is commonly used to describe the possibility of loss from some particular hazard, as fire risk, war risk, credit risk, etc. It also describes the possibility of loss by an investor who, in popular speech, is often referred to as a risk bearer.

Possibility is just another way of thinking about uncertainty, so one could just as well define risk as the uncertainty of loss. Indeed, in the book cited above, Risk Management: The Open Group Guide, there are several formulations in terms of uncertainty, as, for example:

“A study and analysis of risk is a difficult task. Such an analysis involves a discussion of potential states, and it commonly involves using information that contains some level of uncertainty. And so, therefore, an analyst cannot exactly know the risk in past, current, or future state with absolute certainty.” (2.2.1)

We see, then, that uncertainty is a constitutive element of formulations of risk in terms of frequency and magnitude of loss. It is also easy to see that in using terms such as “frequency” and “magnitude,” which clearly imply quantitative measures, we are dealing with uncertainties that can be measured and quantified (or, at least, ideally can be quantified), and this is nothing other than Knightian risk, though Knightian risk is usually formulated in terms of uncertainties against which we can be insured. Insuring a risk is made possible through its quantification; those uncertainties that lie beyond the reach of reasonably accurate quantitative prediction remain mere uncertainties and cannot be transformed into risks.

I have suggested in my previous posts that it is the accumulation of knowledge that transforms uncertainties into risks, and I think you will find that this also holds good in infosec: as knowledge of information technologies improves, risk management will improve. Indeed, as much is implied in a couple of quotes from the infosec article cited above. Here is Jack Jones:

“We have the opportunity to break new ground — establish a new science, if you will. What could be more fun than that? There’s still so much to figure out!”

And here is Alexander Hutton making a similar point:

“…the key to success in security and risk for the foreseeable future is going to be data science.”

The development of data science would mean a systematic way of accumulating knowledge that would transform uncertainty into risk and thereby make uncertainties manageable. In other words, when we know more, we will know more about the frequency and magnitude of loss, and the more we know about these, the better we can insure against such loss.
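This claim admits of a simple quantitative gloss. The sketch below is a textbook Bayesian (Gamma-Poisson) update, with the prior and the observation history assumed purely for illustration; it shows how each accumulation of loss data narrows the band of uncertainty around the loss event frequency, which is just what it means to transform uncertainty into manageable risk:

```python
import math

def updated(shape: float, rate: float,
            events_seen: int, years_observed: float) -> tuple[float, float]:
    """Gamma-Poisson conjugate update: a Gamma(shape, rate) prior over the
    annual loss event frequency becomes Gamma(shape + events, rate + years)."""
    return shape + events_seen, rate + years_observed

shape, rate = 1.0, 1.0  # a vague prior over events/year (assumed)

# Hypothetical observation history, accumulated in stages: (events, years).
for events, years in [(2, 2), (9, 8), (52, 40)]:
    shape, rate = updated(shape, rate, events, years)
    mean, sd = shape / rate, math.sqrt(shape) / rate
    print(f"after {rate - 1:3.0f} years of data: {mean:.2f} events/yr +/- {sd:.2f}")
```

The point estimate barely moves, but the band around it steadily contracts as observations accumulate; in Knightian terms, what begins as something close to uncertainty ends as measurable risk.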

The two conceptions of risk discussed above — risk as uncertainty and risk as probable frequency and magnitude of loss — are not mutually exclusive but rather complementary; uncertainty is employed (if implicitly) in formulations in terms of frequency and magnitude of loss, so that uncertainty is the more fundamental concept. In other words, Knightian risk and uncertainty are the theoretical foundations lacking in infosec formulations. At the same time, the elaboration of risk management in infosec formulations built upon implicit foundations of Knightian risk can be used to arrive at parallel formulations of existential risk.

Existential risk can be understood in terms of the probable frequency and probable magnitude of existential loss, with probable frequency decomposed into existential threat event frequency and existential vulnerability, and so on. Indeed, one of the great difficulties of existential risk consciousness-raising stems from the fact that existential threat event frequency must be measured on a time scale that is almost inaccessible to human time consciousness. It is only with the advent of scientific historiography that we have become aware of how often we have dodged the bullet in the past — an observation that suggests that the great filter lies in the past (or perhaps in the present) and not in the future (or so we can hope). In other words, it is the systematic cultivation of knowledge that transforms uncertainty into manageable risk, and thus we can immediately see the relevance of threat event frequency to existential risk mitigation.
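As a sketch of what that decomposition looks like in practice, consider the following. The factoring of loss event frequency into threat event frequency times vulnerability follows the FAIR-style analysis found in the infosec literature cited above; every threat class and every number below is a hypothetical placeholder, not an estimate:

```python
def loss_event_frequency(threat_event_frequency: float,
                         vulnerability: float) -> float:
    """FAIR-style factoring: how often a threat materializes, times the
    probability that it defeats whatever mitigations are in place."""
    return threat_event_frequency * vulnerability

# Hypothetical existential threat classes -- placeholder values only.
# Rates are threat events per million years; a resulting loss event is
# taken to be terminal and global in scope (Bostrom's categories).
threats = {
    "large impactor":      (0.1, 0.5),   # (events/Myr, vulnerability)
    "supervolcanism":      (20.0, 0.05),
    "engineered pathogen": None,          # no data: still uncertainty, not risk
}

for name, params in threats.items():
    if params is None:
        print(f"{name:20s} cannot yet be quantified")
    else:
        freq, vuln = params
        print(f"{name:20s} {loss_event_frequency(freq, vuln):.2f} loss events/Myr")
```

The last entry is the important one: a threat for which we lack the knowledge to estimate frequency or vulnerability remains, in Knightian terms, an uncertainty rather than a risk, and no arithmetic over the other rows changes that.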

Existential risk formulations can illuminate infosec formulations and vice versa. For example, in the book mentioned above, Risk Management: The Open Group Guide, we find this: “Unfortunately, Probable Loss Magnitude (PLM) is one of the toughest nuts to crack in analyzing risk.” Yet in existential risk formulations magnitude of loss has been a central concern, and is quantified by the scope parameter in Bostrom’s qualitative categories of risk.

Table of qualitative risk categories from the book Global Catastrophic Risks.

There is an additional sense in which infosec is relevant to existential risk: as industrial-technological civilization incrementally migrates onto virtual platforms, it will come progressively closer to being identical to its virtual representation. More and more, the map will be indistinguishable from the territory. This process has already begun in our time, though this beginning is only the thinnest part of the thin edge of the wedge.

We are, at present, far short of totality in the virtual representation of industrial-technological civilization, and perhaps further still from the indistinguishability of virtual and actual worlds. However, we are not at all far short of the indispensability of the virtual to the maintenance of actual industrial-technological civilization, so that the maintenance of the virtual infrastructure of industrial-technological civilization is close to being a conditio sine qua non of the viability of actual industrial-technological civilization. In this way, infosec plays a crucial role in existential risk mitigation.

As I described in The Most Prevalent Form of Degradation in Civilized Life, civilization is the vehicle and the instrument of earth-originating life and its correlates, so that civilizational risks such as flawed realization, permanent stagnation, and subsequent ruination must be accounted co-equal existential threats alongside extinction risks.

If the future of earth-originating life and its correlates is dependent upon industrial-technological civilization, and if industrial-technological civilization is dependent upon an indispensable virtual infrastructure, then the future of earth-originating life and its correlates is dependent upon the indispensable virtual infrastructure of industrial-technological civilization.
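Set out formally, the argument is a hypothetical syllogism, writing L for the future of earth-originating life and its correlates, C for industrial-technological civilization, and V for its virtual infrastructure, with the arrow read as “depends upon” (the lettering is mine):

```latex
\begin{align*}
\text{(1)} \quad & L \rightarrow C && \text{life's future depends upon civilization} \\
\text{(2)} \quad & C \rightarrow V && \text{civilization depends upon its virtual infrastructure} \\
\therefore \quad & L \rightarrow V && \text{from (1) and (2)}
\end{align*}
```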

Q.E.D.

. . . . .

danger imminent existential threat

. . . . .

Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

. . . . .

ex risk ahead

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .