Suboptimal Civilizations

25 April 2015

Saturday



Thinking about civilization also entails thinking about compromised forms of civilization, as well as about the end of civilization. Ideally, a comprehensive theory of civilization would be able to account both for civilizations that flourish and prosper and for those that fail to flourish: those that stagnate, decline, or disappear, or that develop in an undesirable direction (flawed realization). One can think of stagnation and decline as selective or partial collapse; contrariwise, civilizational collapse can be understood as the totality of stagnation or decline (the fulfillment of decline, if you will, which shows that not only progress but also decay can be formulated in teleological terms).


In what follows I will adopt the term “suboptimal civilizations” to indicate civilizations that have weathered existential threats without going extinct, but that have continued in existence in a damaged, deformed, or otherwise compromised form because they were subjected to stresses beyond their level of resilience. A suboptimal civilization, then, is a civilization that has fallen prey to existential risk or risks, but is still extant.


A civilization may become extinct even when the species that produced that civilization has not gone extinct. Thus the extinction of civilizations is a separate and distinct question from that of the extinction of species. However, the extinction of a species is likely to be much more tightly coupled to the extinction of a civilization, though we could construct scenarios in which a civilization is continued by some other species, or some other agent, than that which originated a given civilization. Generally speaking, those existential risks that lead to the extinction of a civilization are extinction and subsequent ruination; those existential risks that lead to suboptimal civilizations are stagnation and flawed realization.


There is a philosophical problem when it comes to judging civilizations of the past that have transitioned into contemporary forms of civilization, losing their identity in the process but leaving a legacy in the form of a continuing influence. One way to deal with this problem is to distinguish between civilizations that attained maturity and those that did not. Is a civilization that failed to attain maturity because it was preempted by another form of civilization now to be considered extinct? The obvious example that I have in mind, and which I have cited numerous times, is that of early modern European civilization, which I have called modernism without industrialism: it was rapidly transformed by the industrial revolution, which preempted the “natural” development of modernity before that modernity had achieved maturity.


I will not attempt at present to define maturity for civilization, but my assumption will be that the maturity of a civilization will have something to do with the bringing to fulfillment of the essential idea of a civilization. I am not prepared to say how the essential idea of a civilization is to be identified, or how it is to be judged to have come to fulfillment, but this should be sufficient to give the reader an intuitive sense of what I have in mind.


The range of suboptimal civilizations, including those trapped in the social equivalent of neurotic misery, might be quite considerable. Toynbee formulated a range of concepts to understand suboptimal civilizations, including abortive civilizations, arrested civilizations, and fossil civilizations. Extrapolating from Toynbee’s conceptions of suboptimal civilizations, I formulated the idea of submerged civilizations in my post In the Shadow of Civilization.


Toynbee’s conceptions of suboptimal civilizations are imaginative and poetic, but they are more qualitative than quantitative. In order to study suboptimal civilizations in the spirit of science, we would want our comprehensive theory of civilization to incorporate quantifiable metrics for the success or failure of a civilization. At our present stage of social development, it is controversial to compare civilizational traditions and to rate any one tradition as “higher” or “more advanced” than any other tradition (an idea I discussed in Comparative Concepts in the Study of Civilization), as representatives of those civilizations that rate lower on any proposed scale are offended by the metric employed, and they will usually suggest alternative metrics by which their preferred civilizational tradition fares much better, while the tradition that fared better under the original metric does not come off as well by the alternative. The attempt by the nation-state of Bhutan to measure “gross national happiness” may be taken as an example of this, although I am not sure that this is a helpful measure.
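The rank-reversal at issue can be made concrete with a toy sketch. The traditions, metrics, and scores below are wholly invented for illustration; the point is only that the same two traditions change places when the metric changes.

# Toy illustration of metric-dependent ranking; the names and scores are
# invented and carry no empirical content.
civilizations = {
    "Tradition A": {"energy_use_per_capita": 9.0, "reported_wellbeing": 5.0},
    "Tradition B": {"energy_use_per_capita": 4.0, "reported_wellbeing": 8.5},
}

def rank_by(metric):
    """Order the traditions from highest to lowest on the chosen metric."""
    return sorted(civilizations, key=lambda name: civilizations[name][metric], reverse=True)

print(rank_by("energy_use_per_capita"))  # ['Tradition A', 'Tradition B']
print(rank_by("reported_wellbeing"))     # ['Tradition B', 'Tradition A']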


It would also be desirable in a comprehensive theory of civilization to formulate metrics for the viability or sustainability of a given civilization. In some cases, metrics for the success of civilization might coincide with metrics for the viability of civilization, but the possibility of very long-lived civilizations that are less than ideal — suboptimal civilizations — points out the limitations of defining civilizational success in terms of civilizational survival. In some cases viability and optimality will coincide, and in some cases they will not; suboptimal civilizations that survive existential risks in a compromised form are an example of such non-coincidence. The survival of a stagnant civilization can be a matter of mere cosmic good fortune, whereby a particular planet enjoys an uncommonly clement cosmic climate for an uncharacteristically long period of time (while other contingent factors may mean that the climate for civilizational development to maturity is not equally clement).


There are many ways to explore the idea of suboptimal civilization; as was observed above, there are many ways for a civilization to languish in suboptimality. Indeed, it may be the case that the essential idea of a civilization has a much smaller class of circumstances in which that idea comes to full fruition and maturity, and a much larger class of circumstances in which that idea fails to mature for any number of distinct reasons, so that suboptimal civilizations are likely to outnumber civilizations that have attained optimality.


There is another philosophical problem, related to the problem noted above, in identifying the continuity of a civilization, so that a later stage of development can be considered the fulfillment, or failure of fulfillment, of some earlier civilizational idea, and not the emergence of a new idea not yet brought to fulfillment. I have previously considered this problem in several posts on the invariant properties of civilization. If a civilization emerges that seems to lack heretofore invariant properties of civilization, is it to be identified as a new form of civilization, or as non-civilization? Another way to formulate the problem is to ask whether civilization is an open-textured concept. The problem is posed every time an unprecedented development occurs in the history of civilization, so that it re-emerges at every stage in the history of a tradition, since the unprecedented is always occurring in one form or another. Let me provide an example of what I mean by this claim.


Imagine, if you will (as a thought experiment), that there were social scientists prior to the scientific revolution who studied their contemporaneous society much as we study our own societies today, and further suppose that, despite the disadvantages such pre-modern social scientists would have labored under, they managed to assemble reasonably accurate data sets that allowed them to model the world in which they lived and the history that had produced that world (that is, the world of modernism without industrialism).


If you were to show pre-modern social scientists the spike in demographics, technology, energy use, and urbanization that attended the industrial revolution, they might deny that any such development was even possible, and if they admitted that it was possible, they might say that a world so transformed would not constitute civilization as they understood civilization. They would be right, in a sense, to characterize our world today, after the industrial revolution, as a post-civilizational institution, derived perhaps from the long tradition of civilization with which they were familiar, but not really a part of that tradition. I implied as much recently when I wrote that, “It could be argued that traditional society… has already collapsed and has been incrementally replaced by an entirely different kind of society. For this is surely what has happened in the wake of the industrial revolution, which destroyed more aspects of traditional society than any Marxist, any revolutionary, or any atheist.” (cf. Is society existentially dependent upon religion?)


The thought experiment that I have suggested here in regard to the industrial revolution could also be performed in regard to the Neolithic agricultural revolution, although in this case we could not properly speak of an ancient civilization. Humanity as a species might have attained a great antiquity and even have made use of its intellectual gifts without having passed through any stage of large-scale settlement. This is an especially interesting thought experiment when we reflect that the paradigmatically human activities of art and technology predate civilization and may be understood in isolation from civilization, and might have developed separately from civilization. The rate of technological innovation prior to the advent of civilization was very slow, but it was not zero, and extrapolated to a sufficient age it would have produced an impressive technology, though this would have taken an order of magnitude longer than it took as a result of the industrial revolution. Something like civilization, but not exactly civilization as we know it, might have emerged from a very old human society that had not adopted large-scale settlement and consequently the institutions of settled civilization.


This ancient human society that had never crossed the threshold of civilization proper — at least in some senses a suboptimal form of social organization, even if not a suboptimal civilization — suggests yet another thought experiment: an ancient civilization that, despite its antiquity, never passes the threshold to become a Kardashevian supercivilization. The motif of a million-year-old civilization is a common one: Kardashev called such civilizations “supercivilizations,” and Sagan often speculated on their histories. But what about the possibility of a million-year-old civilization that never develops technologically and never experiences an industrial revolution?


If we plot out the history of technology and population (among other metrics) on a graph and extrapolate from trends prior to the industrial revolution (when these metrics suddenly spike) we can easily see the possibility of a very old civilization — tens of thousands or hundreds of thousands of years old — that would be the result of a simple diachronic extrapolation of trends that had characterized human life from the emergence of hominids up until the industrial revolution. This is at least possible as a counter-factual, and conceivable by way of an analogy with our prehistoric past.
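The diachronic extrapolation can be given a toy numerical form. The growth rates below are assumptions chosen only to dramatize the contrast between pre-industrial and industrial trajectories; they are not estimates drawn from any historical data set.

# Toy extrapolation contrasting a slow pre-industrial growth rate with the
# post-industrial spike; both rates are assumed for illustration only.
preindustrial_rate = 0.0001   # roughly 0.01% per year: nearly flat, but not zero
industrial_rate = 0.02        # roughly 2% per year after the industrial revolution

def growth_factor(rate, years):
    """Cumulative growth of some metric (population, toolkit, energy use)."""
    return (1 + rate) ** years

# At the pre-industrial rate it takes about two hundred times as long to match
# what the industrial rate achieves in two centuries, but given tens of
# thousands of years the slow trajectory still gets there.
print(round(growth_factor(industrial_rate, 200), 1))        # ~52.5
print(round(growth_factor(preindustrial_rate, 40_000), 1))  # ~54.6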


The very old civilization that would be the result of a straightforward diachronic extrapolation of civilization prior to the industrial revolution, given climatological conditions that allow for continual development, would be a civilization conceived in terms proportional to human history. We often forget that, prior to Homo sapiens, there is a multi-million year history of hominids with minimal toolkits that changed hardly at all over a million or even two million years. The human condition need not change appreciably even over very long periods of time.


A million-year-old agricultural civilization would probably look much like a 2,000-year-old civilization, except that it would have a very long history, which means either a massive archive if continuity has been maintained, or a great many ruins and buried artifacts of the past if continuity has not been maintained. Would we have anything to learn from a million-year-old civilization that was not a supercivilization? Consider the possibility of art and literature a million years in development — the steady rate at which civilization prior to the industrial revolution produced masterpieces of art suggests that such a civilization without industrialization would be a very old agrarian civilization laden with a million years’ worth of art treasures. In this case a suboptimal civilization would be productive of values that would not and could not be achieved under an optimal civilization, which ought to make us question the optimality of optimal civilization where our presuppositions of optimality are drawn from industrialization.

. . . . .

Sunday



Inefficiency in the STEM cycle

In my previous post, The Open Loop of Industrial-Technological Civilization, I ended on the apparently pessimistic note of the existential risks posed to industrial-technological civilization by friction and inefficiency in the STEM cycle that drives our civilization headlong into the future. Much that is produced by the feedback loop of science, technology, and engineering is dissipated in science that does not result in technologies, technologies that are not engineered into industries, and industries that do not produce new scientific instruments. However, just enough science feeds into technology, technology into engineering, and engineering into science to keep the STEM cycle going.
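The point about dissipation can be made explicit with a minimal numerical sketch. The conversion coefficients below are invented for illustration, not measured; the sketch only shows that a loop with leakage remains self-sustaining so long as enough of each stage's output feeds the next.

# Toy model of the STEM feedback loop (all coefficients invented for
# illustration). Each unit of science yields some technology, each unit of
# technology yields some engineered industry, and industry returns new
# science in the form of instruments and funding. Dissipation, meaning the
# science, technology, and engineering that never re-enter the loop, lowers
# these coefficients.
def run_stem_cycle(science, sci_to_tech, tech_to_eng, eng_to_sci, cycles):
    for _ in range(cycles):
        technology = science * sci_to_tech
        engineering = technology * tech_to_eng
        science = engineering * eng_to_sci
    return science

# The loop gain per cycle is the product of the three coefficients: the cycle
# is self-sustaining when the gain exceeds 1, and winds down otherwise.
print(run_stem_cycle(1.0, 0.8, 0.9, 1.8, 10))  # gain 1.296 per cycle: grows (~13x)
print(run_stem_cycle(1.0, 0.8, 0.9, 1.2, 10))  # gain 0.864 per cycle: decays (~0.23x)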

These “inefficiencies” should not be seen as a “bad” thing, since much pure science that is valuable as an intellectual contribution to civilization has few if any practical consequences. The “inefficient” science that does not contribute directly to the STEM cycle is some of the best science, and it does humanity credit. Indeed, G. H. Hardy was famously emphatic that all practical mathematics was “ugly” and only pure mathematics, untainted by practical application, was truly beautiful — and Hardy made it clear that beautiful mathematics was ultimately the only thing that mattered. Thus these “inefficiencies” that appear to weaken the STEM cycle, and hence to pose an existential risk to our industrial-technological civilization, are at the same time existential opportunities — as always, risk and opportunity are one and the same.


Opportunities of the STEM cycle

The apparently pessimistic formulation of my previous post took this form:

“It is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could increase the amount of epiphenomenal science, technology, and engineering, thus decreasing the efficiency of the STEM cycle.”

Such a formulation must be balanced by an appropriate and parallel formulation to the effect that it is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could decrease the amount of epiphenomenal science, technology, and engineering, thus increasing the efficiency of the STEM cycle.

However, making the STEM cycle more “efficient” might well be catastrophic, or nearly catastrophic, for civilization, as it would imply a narrowing of human life to the parameters defined by the STEM cycle. This might lead to a realization of the existential risks of permanent stagnation (i.e., the stagnation of all aspects of civilization other than those that advance industrial-technological civilization, which could prove frightening) or flawed realization, in which an acceleration or consolidation of the STEM cycle leads to the sort of civilization no one would find desirable or welcome.

There is no reason, however, that one could not strengthen the STEM cycle, making industrial-technological civilization more robust and more productive of advanced science, technology, and engineering, while at the same time producing more pure science, more marginal technologies, and more engineering curiosities that do not feed directly into the STEM cycle. The bigger the pie, the bigger each piece of the pie and the more there is to go around for everyone. Also, pure science and practical science exist in a cycle of mutual escalation of their own, in which pure science inspires practical science and practical science inspires more pure science. Perhaps the same is true of marginal and practical technologies, and of the engineering of curiosities and the engineering of mass industries.


Scaling the STEM cycle

The dissipation of the excess productions of the STEM cycle means that unexpected sectors of the economy (as well as unexpected sectors of society) are occasionally the recipients of disproportionate inputs. These disproportionate inputs, like the inefficiencies discussed above, might be understood as either risks or opportunities. Some socioeconomic sectors might be catastrophically stressed by a disproportionate input, while others might unexpectedly flourish as a result of one. To control the possibilities of catastrophic failure or flourishing success, we must consider the possibility of scaling the STEM cycle.

To what degree can the STEM cycle be scaled? By this question I mean that, once we are explicitly and consciously aware that it is the STEM cycle that drives industrial-technological civilization (or, minimally, that it is among the drivers of industrial-technological civilization), if we want to drive that civilization further forward (as I would like to see it driven until earth-originating life has established extraterrestrial redundancy in the interest of existential risk mitigation), can we consciously do so? To what extent can the STEM cycle be controlled, or can its scaling be controlled? Can we consciously direct the STEM cycle so that more science begets more technology, more technology begets more engineering, and more engineering begets more science? I think that we can. But, as with the matters discussed above, we must always be aware of the risk/opportunity trade-off. Focusing too narrowly on the STEM cycle may have disadvantages.

Once we understand an underlying mechanism of civilization, like the STEM cycle, we can consciously cultivate this mechanism if we wish to see more of this kind of civilization, or we can attempt to dampen this mechanism if we want to see less of this civilization. These attempts to cultivate or dampen a mechanism of civilization can take microscopic or macroscopic forms. Macroscopically, we are concerned with the total picture of civilization; microscopically we may discern the smallest manifestations of the mechanism, as when the STEM cycle is purposefully pursued by the R&D division of a business, which funds a certain kind of science with an eye toward creating certain technologies that can be engineered into specific industries — all in the interest of making a profit for the shareholders.

This last example is a very conscious exemplification of the STEM cycle, one that might conceivably be reduced to the work of a single individual, working in turn as scientist, technologist, and engineer. The very narrowness of this process, which is likely to produce specific and quantifiable results, means it is also likely to produce very little in the way of epiphenomenal manifestations of the STEM cycle, and thus may contribute little or nothing to the more edifying dimensions of civilization. But this is not necessarily the case. Arno Penzias and Robert Wilson were working as scientists trying to solve a practical problem for Bell Labs when they discovered the cosmic microwave background radiation.


Reason for Hope

We have at least as much reason to hope for the future as to despair of the future, if not more reason to hope. The longer civilization persists, the more robust it becomes, and the more robust civilization becomes, the more internal diversity and experimentation civilization can tolerate (i.e., greater social differentiation, as Siggi Becker has recently pointed out to me). The extreme social measures taken in the past to enforce conformity within society have been softened in Western civilization, and individuals have a great deal of latitude that was unthinkable even in the recent past.

Perhaps more significantly from the perspective of civilization, the more robust and tolerant our civilization, the more latitude there is for like-minded individuals to cooperate in the founding and advancement of innovative social movements which, if they prove to be effective and to meet a need, can result in real change to the overall structure of society, and this sort of bottom-up social change was precisely the kind of change that agrarian-ecclesiastical civilization was structured to frustrate, resist, and suppress. In this respect, if in no other, we have seen social progress in the development of civilization that is distinct from the technological and economic progress that characterizes the STEM cycle.

As I wrote in my recent Centauri Dreams post, SETI, METI, and Existential Risk, to exist is to be subject to existential risk. Given the relation of risk and opportunity, it is also the case that to exist is to choose among existential opportunities. This is why we fight so desperately to stay alive, and struggle so insistently to improve our condition once we have secured the essentials of existence. To be alive is to have countless existential opportunities within reach; once we die, all of this is lost to us. And to improve one’s condition is to increase the actionable existential opportunities within one’s grasp.

The development of civilization, for all its faults and deficiencies, is tending toward increasing the range of existential opportunities available as “live options” (as William James would say) for both individuals and communities. That this increased range of existential opportunities also comes with an increased variety of existential risks should not be employed as an excuse to attempt to reverse the real social gains bequeathed by industrial-technological civilization.

. . . . .

Saturday


The Developmental Conception of Civilization


Eleventh in a Series on Existential Risk


It is common to think about civilization in both developmental and non-developmental terms. As for the former, ever since Marx historians have identified a sequence of stages of economic development, and of course the idea of social evolution was central for Hegel before Marx gave it an economic interpretation. As for the latter, it is not unusual to hear clear distinctions being drawn between civilized and uncivilized life, very much in the spirit of tertium non datur: either a particular instance of social organization is civilized or it is not.

The developmental conception of civilization can be used to illuminate the idea of existential risk, as the classes of existential risk identified in Nick Bostrom’s “Existential Risk Prevention as Global Priority” readily lend themselves to a developmental interpretation. Here are the classes of existential risk from Bostrom’s paper (Table 1. Classes of existential risk):

● Human extinction Humanity goes extinct prematurely, i.e., before reaching technological maturity.

● Permanent stagnation Humanity survives but never reaches technological maturity.
Subclasses: unrecovered collapse, plateauing, recurrent collapse

● Flawed realisation Humanity reaches technological maturity but in a way that is dismally and irremediably flawed. Subclasses: unconsummated realisation, ephemeral realisation

● Subsequent ruination Humanity reaches technological maturity in a way that gives good future prospects, yet subsequent developments cause the permanent ruination of those prospects.

These classes of existential risk can readily be explicated in developmental terms:

● Human extinction The development of humanity ceases because humanity itself ceases to exist.

● Permanent Stagnation The development of humanity ceases, although humanity itself does not go extinct.

● Flawed Realization Humanity continues in its development, but this development goes horribly wrong and results in a human condition that is so far from being optimal that it might be considered a betrayal of human potential.

● Subsequent Ruination Humanity continues for a time in its development, but this development is brought to an untimely end before its potential is fulfilled.

In this context, what I have previously called existential viability, i.e., the successful mitigation of existential risk, can also be explicated in developmental terms:

● Existential viability Humanity is able to continue its arc of development to the point of the fulfillment of its technological maturity.

It would be possible (and no doubt also interesting) to delineate classes of existential viability parallel to the classes of existential risk, informed by the developmental possibilities consistent with the fulfillment of technological maturity or some other measure of ongoing human development that does not terminate in an existential risk scenario.

Bostrom originally expressed his conception of existential risk in terms of “earth-originating intelligence” — “An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2002).” In more recent papers he has expressed existential risk in terms of “humanity” and “technological maturity” (as in the formulations quoted above), as in the following quote:

“The permanent destruction of humanity’s opportunity to attain technological maturity is a prima facie enormous loss, because the capabilities of a technologically mature civilisation could be used to produce outcomes that would plausibly be of great value, such as astronomical numbers of extremely long and fulfilling lives. More specifically, mature technology would enable a far more efficient use of basic natural resources (such as matter, energy, space, time, and negentropy) for the creation of value than is possible with less advanced technology. And mature technology would allow the harvesting (through space colonisation) of far more of these resources than is possible with technology whose reach is limited to Earth and its immediate neighbourhood.”

Nick Bostrom, “Existential Risk Prevention as Global Priority,” Global Policy, Volume 4, Issue 1, February 2013

For the moment, humanity and Earth-originating intelligence coincide, but this may not always be the case. A successor species to homo sapiens, or conscious and intelligent machines, could either take over the mantle of earth-originating intelligence or exist in parallel with humanity, so that there comes to be more than a single realization of earth-originating intelligence.

While Bostrom mentions civilization throughout his exposition, his crucial formulations are not in terms of civilization. It would seem that Bostrom had the human species, homo sapiens, in mind when he formulated the class of human extinction, while the other classes of permanent stagnation, flawed realization, and subsequent ruination bear more closely on civilization, or at least on the social potential of homo sapiens, such as the accomplishments represented by intelligence and technology. It is a very different thing to talk about the extinction of a biological species and the extinction of a civilization, and it would probably be a good idea to explicitly distinguish risks facing biological species from risks facing social institutions, even though many of these risks will coincide.

For what classes of entities might we define classes of existential risk? Well, to start, we could define classes of existential risk for individuals in contradistinction to existential risks for social institutions that are themselves composed of many institutions, with civilization being the most comprehensive social institution yet devised by humanity.

I suspect that a developmental account of the individual is much less controversial than a developmental account of civilization (or, for that matter, of Earth-originating intelligent life), partly because the development of the individual is something that is personally familiar to all of us, and partly due to the efforts of psychologists and sociologists in laying out a detailed typology of individual developmental psychology. Attempts to lay out a detailed developmental typology of civilization run into social and moral controversies, though I don’t see this as an essential objection.

In any case, here is an ontogenic formulation of the classes of existential risk:

● Personal extinction Individual development ceases because the individual himself ceases to exist. Death as an inevitable part of the human condition (at least for the time being) means that personal extinction is the personal existential risk that is visited upon each and every one of us.

● Personal Permanent Stagnation Individual development ceases, although the individual himself does not die (as of yet).

● Personal Flawed Realization The individual continues in his development, but this development goes horribly wrong and results in a life that is so far from being optimal that it might be considered a betrayal of the individual’s potential.

● Personal Subsequent Ruination The individual continues for a time in his development, but this development is brought to an end before the arc of personal development fulfills its potential.

Many of these cases of personal existential risks strike very close to home, as in imagining these situations one may well see all-too-clearly individuals that one knows personally, or one may even see oneself in one or more of these classes of personal existential risk. It is poignant and painful to confront permanent stagnation or flawed realization in one’s own life or in the lives of those one knows personally, however fascinating these conditions are for novelists and dramatists.

Just as we can imagine the classes of existential risk formulated specifically to illuminate the life of the individual, so too we can formulate phylogenic forms of the classes of existential risk:

● Civilizational extinction The development of human civilization ceases because human civilization itself ceases to exist. (But note here that the extinction of civilization may be consistent with the continued existence of humanity.)

● Civilizational Permanent Stagnation The development of human civilization ceases, although human civilization itself does not go extinct.

● Civilizational Flawed Realization Human civilization continues in its development, but this development goes horribly wrong and results in a civilization that is so far from being optimal that it might be considered a betrayal of the very idea of human civilization.

● Civilizational Subsequent Ruination Human civilization continues for a time in its development, but this development is brought to an end before the arc of the history of civilization can fulfill its potential.

Such large-scale formulations lack the poignancy of the personalized classes of existential risk, though they are more to the point of existential risk understood sensu stricto. Note that the civilizational formulations of the classes of existential risk are, at least in one case, consistent with the existential viability of humanity, and all classes of civilizational existential risk are consistent with personal forms of existential viability — individuals within stagnant or flawed civilizations may continue to develop and to fulfill their full potential, although this potential is not expressed in a social form. Thus any individual human potential that is intrinsically social would be ruled out by civilizational failure, but I assume that human potential is not exhausted by exclusively social forms of fulfillment.
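The ontogenic and phylogenic formulations above are instances of a single template, differing only in the developing entity to which the classes are applied. The sketch below is merely an organizational device that makes the parallel explicit; the template wording is my paraphrase, not a quotation of Bostrom.

# Schematic of the parallel formulations: one class template instantiated for
# different developing entities. An organizational device only; the wording
# is a paraphrase.
TEMPLATES = {
    "extinction": "The development of {e} ceases because {e} ceases to exist.",
    "permanent stagnation": "The development of {e} ceases, although {e} does not cease to exist.",
    "flawed realization": "The development of {e} continues, but goes wrong and betrays the potential of {e}.",
    "subsequent ruination": "The development of {e} continues for a time, but ends before the potential of {e} is fulfilled.",
}

def formulate(entity):
    """Instantiate the four classes of existential risk for a given developing entity."""
    return {name: text.format(e=entity) for name, text in TEMPLATES.items()}

personal = formulate("the individual")            # ontogenic formulation
civilizational = formulate("human civilization")  # phylogenic formulation
print(civilizational["flawed realization"])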

The poignancy of personal classes of existential risk may be useful precisely due to the visceral effect they have — not unlike the visceral nature of the overview effect and the potential of the overview effect in raising personal awareness of planetary finitude and vulnerability. Similarly, the finitude and vulnerability of humanity on the whole may be driven home to the individual by a personal illustration of existential risk.

There is a yawning chasm separating the disasters all too easily rationalized away as not being worth the effort of preparedness from the global catastrophic risks and existential risks for which there are as yet no preparedness efforts, because they seem intractable and overwhelming merely to contemplate.

It is possible that just as we may begin with mundane forms of risk management — readily understood and readily implemented — and move up to crisis management, then to global catastrophic risks, and finally to existential risks, so too we may start with personal risks and move up to the most comprehensive forms of risk — and this emerging consciousness of more comprehensive forms of risk is itself a developmental process.

This macrocosm/microcosm approach to existential risk suggests a cross fertilization of ideas, such that personal methods for mitigating existential risks may suggest societal methods, and vice versa. However, we know that flawed individuals sometimes do great things, just as flawed societies can boast of great accomplishments. It may be necessary to distinguish between flaws that augment existential threats and flaws that diminish existential threats. If this is also true on a societal level, the consequences are decidedly interesting.

. . . . .

Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

9. Conceptualization of Existential Risk

10. Existential Risk and Existential Viability

11. Existential Risk and the Developmental Conception of Civilization

. . . . .

Saturday


Ninth in a Series on Existential Risk:


How we understand what exactly is at risk.


In my last post in this series on existential risk, Existential Risk and Existential Opportunity, I wrote this:

How we understand existential risk, then, affects what we understand to be a risk and what we understand to be a reward.

It is possible to clarify this claim, or at least to lay out in greater detail the conceptualization of existential risk, and it is worthwhile to pursue such a clarification.

We cannot identify risk-taking behavior or risk averse behavior unless we can identify instances of risk. Any given individual is likely to identify risks differently than any other individual, and the greater the difference between any two given individuals, the greater the difference is likely to be in their identification of risks. Similarly, a given community or society will be likely to identify risks differently than any other given community or society, and the greater the differences between two given communities, the greater the difference is likely to be between the existential risks identified by the two communities.

This difference in the assessment of risk can at least in part be put down to the role of knowledge in determining the distinction between prediction, risk, and uncertainty, as discussed in Existential Risk and Existential Uncertainty and Addendum on Existential Risk and Existential Uncertainty: distinct individuals, communities, societies, and indeed civilizations are in possession not only of distinct knowledge, but also of distinct kinds of knowledge. The distinct epistemic profiles of different societies result in distinct understandings of existential risk.

Consider, for example, the kind of knowledge that is widespread in agrarian-ecclesiastical civilization in contradistinction to industrial-technological civilization: in the former, many people know the intimate details of farming, but few are literate; in the latter, many are literate, but few know how to farm. The macro-historical division of civilization in which a given population is to be found profoundly shapes the epistemic profile of the individuals and communities that fall within a given macro-historical division.

Moreover, knowledge is bound up with the ideological, religious, and philosophical ideas and assumptions that provide the foundation of knowledge within a given macro-historical division of civilization. The intellectual foundations of agrarian-ecclesiastical civilization (something I explicitly discussed in Addendum on the Agrarian-Ecclesiastical Thesis) differ profoundly from the intellectual foundations of industrial-technological civilization.

Differences in knowledge and differences in the conditions of the possibility of knowledge among distinct individuals and civilizations mean that the boundaries between prediction, risk, and uncertainty are differently constructed. In agrarian-ecclesiastical civilization, the religious ideology that lies at the foundation of all knowledge gives certainty (and therefore predictability) to things not seen, while consigning all of this world to an unpredictable (therefore uncertain) vale of tears in which any community might find itself facing starvation as the result of a bad harvest. The naturalistic philosophical foundations of knowledge in industrial-technological civilization have stripped away all certainty in regard to things not seen, but by systematically expanding knowledge they have greatly reduced uncertainty in this world, converting many uncertainties into calculable risks and some risks into confident predictions.

Differences in knowledge can also partly explain differences in risk perception among individuals: the greater one’s knowledge, the more one faces calculable risks rather than uncertainties, and predictable consequences rather than risks. Moreover, the kind of knowledge one possesses will govern the kind of risk one perceives and the kind of predictions that one can make with a degree of confidence in the outcome.
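The trichotomy this argument relies on can be stated schematically. The sketch below assumes, in line with the earlier posts cited above, that the distinction turns on what is known: whether the outcome itself can be foreseen, and, if not, whether the probabilities of the possible outcomes can be quantified.

# Sketch of the prediction / risk / uncertainty trichotomy as a function of
# knowledge; an assumption-laden schematic, not a formal decision theory.
def epistemic_status(outcome_known: bool, probabilities_known: bool) -> str:
    if outcome_known:
        return "prediction"   # the outcome itself can be foreseen
    if probabilities_known:
        return "risk"         # outcomes unknown, but their likelihoods are calculable
    return "uncertainty"      # neither outcomes nor likelihoods can be quantified

# Greater knowledge moves a situation up the scale, from uncertainty toward
# calculable risk, and from risk toward outright prediction.
print(epistemic_status(False, False))  # uncertainty
print(epistemic_status(False, True))   # risk
print(epistemic_status(True, True))    # prediction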

While there is much that can be explained by differences in knowledge, and by differences between kinds of knowledge (a literary scholar will be certain of different epistemic claims than a biologist), there is also much that cannot be explained by knowledge, and these differences in risk perception are the most fraught and problematic, because they are due to moral and ethical differences between individuals, between communities, and between civilizations.

One might well ask — Who would possibly object to preventing human extinction? There are many interesting moral questions hidden within this apparently obvious question. Can we agree on what constitutes human viability in the long term? Can we agree on what is human? Would some successor species to humanity count as human, and therefore an extension of human viability, or must human viability be attached to a particular idea of the homo sapiens genome frozen in time in its present form? And we must also keep in mind that many today view human actions as being so egregious that the world would be better off without us, and such persons, even if in the minority, might well affirm that human extinction would be a good thing.

Let us consider, for a moment, a couple of Nick Bostrom’s formulations of existential risk:

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.

…and again…

…an existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or the permanent and drastic failure of that life to realise its potential for desirable development. In other words, an existential risk jeopardises the entire future of humankind.

Nick Bostrom, “Existential Risk Prevention as Global Priority,” Global Policy, Volume 4, Issue 1, February 2013

What exactly would constitute the “drastic failure of that life to realise its potential for desirable development”? What exactly is permanent stagnation? Flawed realization? Subsequent ruination? What is human potential? Does it include transhumanism?

For some, the very idea of transhumanism is a moral horror, and a paradigm case of flawed realization. For others, transhumanism is a necessary condition of the full realization of human potential. Thus one might imagine an exciting human future of interstellar exploration and expanding knowledge of the world, and understand this to be an instance of permanent stagnation because human beings do not augment themselves and become something more or something different than we are today. And, honestly, such a scenario does involve an essentially stagnant conception of humanity. Another might imagine a future of continual human augmentation and experimentation, but more or less populated by beings — however advanced — who engage in essentially the same pursuits as those we pursue today, so that while the concept of humanity has not remained stagnant, the pursuits of humanity are essentially mired in permanent stagnation.

Similar considerations hold for civilization as hold for individuals: there are vastly different conceptions of what constitutes a viable civilization and of what constitutes the good for civilization. Future forms of civilization that depart too far from the Good may be characterized as instances of flawed realization, while future forms of civilization that don’t depart at all from contemporary civilization may be characterized as instances of permanent stagnation. The extinction of earth-originating intelligent life, or the subsequent ruination of our civilization, may seem more straightforward, but what constitutes earth-originating intelligent life is vulnerable to the questions above about human successor species, and subsequent ruination may be judged by some to be preferable to the continuation of civilization on its present trajectory.

Sometimes these moral differences among peoples are exemplified in distinct civilizations. The kinds of existential risks recognized within agrarian-ecclesiastical civilization are profoundly different from the kinds of existential risks now being recognized by industrial-technological civilization. We can see earlier conceptions of existential risk as deviant, limited, or flawed as compared to those conceptions made possible by the role of science within our civilization, but we should also realize that, if we could revive representatives of agrarian-ecclesiastical civilization and give them a tour of our world today, they would certainly recognize features of our world of which we are most proud as instances of flawed realization (once we had explained to them what “flawed realization” means). For a further investigation of this idea I strongly recommend that the reader peruse Reinhart Koselleck’s Futures Past: On the Semantics of Historical Time.

It would be well worth the effort to pursue possible quantitative measures of human extinction, permanent stagnation, flawed realization, and subsequent ruination, but if we do so we must do so in the full knowledge that this is as much a moral and philosophical inquiry as it is a scientific and theoretical inquiry; we cannot separate the desirability of future outcomes from the evaluative nature of our desires.

Like the sailors on the Pequod who each look into the gold doubloon nailed to the mast and see themselves and their personal concerns within, just so when we look into the mirror that is the future, we see our own hopes and fears, notwithstanding the fact that, when the future arrives, our concerns will be long washed away by the passage of time, replaced by the hopes and fears of future men and women (or the successors of men and women).

. . . . .

Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

9. Conceptualization of Existential Risk

. . . . .

Planetary Torpor

6 November 2012

Tuesday


A curious case of selective stagnation:

A whole new way to think about Weltschmerz


Among those who think about human space exploration, the relatively modest (i.e., less than ambitious) human space program since the end of the Apollo program that took human beings to the moon is a problem that requires an explanation. There have always been futurist speculations that have taken particular trends out of context and extrapolated them in isolation. Such narrowly focused futurism almost always gets things wrong. But when we think of all that might have been accomplished in terms of space exploration in the past forty years, and how far we might have gone in terms of existential risk mitigation as a result of a robust space program, one inevitably asks why more has not been done.

Putting the space program in the context of existential risk shifts our understanding a bit, since the space program is usually understood as science or exploration or adventure, but I am coming more to the view that it must be understood in terms of mitigating existential risk, that is to say, establishing self-sustaining, self-sufficient settlements off the surface of the Earth so that life and civilization can go on whatever the vulnerabilities of our home world. From this perspective, from the perspective of existential risk, the space program, and in fact all of human civilization, has been stagnant. We have had the power to leave the Earth and to create a second home for ourselves elsewhere, and we have failed to do so.

The idea of existential risk is due to Nick Bostrom, whom I have mentioned several times recently. His papers Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards and Existential Risk Reduction as Global Priority lay out the basic architecture of the concept, introducing several qualitative risk categories and their classification in terms of existential risk. Bostrom distinguishes four classes of existential risk: human extinction, permanent stagnation, flawed realization, and subsequent ruination.

How are we to construe the relative stagnation of the space program over the past forty years, which could have provided a degree of existential risk mitigation, but which has not been widely viewed in this light? Space science has had many spectacular successes in recent decades, which have substantially increased our knowledge of the universe in which we live, but all of this is for naught if our exclusively terrestrial species is wiped out by a natural catastrophe beyond the power of our technology to stop or to tame. There is a sense, then, no matter how valuable our scientific knowledge from unmanned missions, in which the past forty years have been a wasted opportunity to secure against existential risk. We had the knowledge to go into space, the ability, the economic foundation — all the elements were present, but the will to secure the survival of our own species has been lacking. How do we explain this?

We cannot say that civilization has been exactly stagnant over the past forty years. How can human civilization be said to be stagnant when we have been experiencing exponential technological growth? We have experienced an explosion in the development of telecommunications and computing that was unpredicted and unprecedented. This has profoundly changed our personal lives and the structure of the overall economy and society. It has also increased the rate of technological change, since computerized engineering and design makes it possible to build other technologies in a much more sophisticated fashion than previously was the case. When we think of technological triumphs like the SR-71, the Apollo project, and the Concorde, we must remember that most of this was accomplished by engineers with slide rules writing calculations in pencil on paper. And yet today we have no sophisticated supersonic aerospace industry and nothing on the scale of the Apollo program, though we could presumably do both better now than we did before.

With all this technological progress, there remains a feeling of unfulfilled potential in the past half century. No one can say — as it was in fact said before the space program — that it is simply impossible to travel in space, or for human beings to live in space, or to travel to the moon. We’ve all seen 2001: A Space Odyssey, and even this modest human future in space, with a rotating space station and a base on the moon, didn’t happen. Did people lose interest? Did they turn inward, preferring personal comfort to what Theodore Roosevelt called “the strenuous life”? Was the human spirit broken by the Cold War and the haunting threat of nuclear annihilation?

In German there is a word that we lack in English: Weltschmerz, sometimes translated as “world-weariness.” Americans have never had much use for either the term or the idea, and it sounds a bit too much like post-War French existentialism with its systematic exposition of guilt, despair, alienation, and absurdity. Nevertheless, it is difficult to look at the past half century without thinking of it in terms not unlike Weltschmerz.


Stagnation can take the form of a civilization being shot through with ellipses. We could call this condition selective stagnation. Because there are so many possible explanations for the selective stagnation of the past forty years, and because it is unlikely that any one single social, economic, political, or ideological explanation could account for it, the only way we can embrace the complex social phenomenon of selective stagnation is to cover it with a term specifically intended to indicate many historical causes coming together into a trend that constitutes a whole greater than any of its individual parts. Once upon a time this was called “decadence,” as in Thomas Couture’s famous painting “Romans of the Decadence.” We could also call it Weltschmerz (although in this case it should be Raumschmerz rather than Weltschmerz), or we could call it terrestrial malaise or even planetary torpor.

Since the advent of civilization, there have been several periods of extended stagnation, which historians formerly called “dark ages,” a term avoided today because of its disparaging connotations. I have previously written about the Greek Dark Ages, and I still occasionally refer to the early middle ages in Western Europe as the “dark ages” because there are senses in which the term remains apt. When we compare the selective stagnation of the past half century to these comprehensive periods during which Western civilization stumbled, and it was a real question whether or not it would recover its footing, our selective stagnation is so minor it scarcely bears mentioning.

But there is a crucial difference: the Greek Dark Age and the Dark Age following the collapse of Roman power in the western empire took place long before the scientific revolution. Since the scientific revolution we have continuously learned more about our place in the universe, and since the industrial revolution we have had the power to modify our place within nature with increasing scope and efficacy. Now we understand better than at any time in the past the existential risks we are facing, and for the past fifty years we have had the power to do something about that existential risk: to establish a human presence in extraterrestrial space that would not be vulnerable to disasters specific to the Earth. This is not absolute risk mitigation — the idea of absolute risk mitigation is illusory — but it is incrementally much better, perhaps even an order of magnitude of distancing ourselves from manifest vulnerability.

It may be the case that when civilization reaches a certain stage of development, at which a minimum level of creature comforts is available to the bulk of the world’s population, this relative prosperity undermines the springs to action. Because we have only our own terrestrial civilization by which to judge, we don’t have a sufficiently big-picture conception of civilization that would allow us to generalize about civilization at this level.

Singularitarians and transhumanists will tell you that we are poised on the verge of transformative change that will make all previous transitions in human history pale by comparison, and which will launch human beings — or, rather, the post-human, post-biological beings who will be the successors of specifically human beings — on a course of development that will make these considerations either irrelevant, or so trivial that it will be a small matter to execute the required solution. But even as these wonders are coming about, we remain vulnerable. We might be on the very verge of the technological singularity when we are wiped out by a stray asteroid. This scenario would constitute what Nick Bostrom called “ephemeral realization.”

For these reasons, as well as many others that the reader will immediately see, I think that the idea of selective stagnation bears further study in its own right.

. . . . .