It cannot be pointed out too often that by far the most extensive period of human history is prehistory. In the past it was possible to evade this fact and its problematic consequences for conventional historiography, because prehistory could be safely set aside as not being history at all. The subsequent rise of scientific historiography, which allows us to read texts other than written language — geological texts, genetic texts, the texts of material culture uncovered by archaeologists, and so on — has progressively chipped away at the facile distinction between history and prehistory. The boundary between the two can no longer be maintained, and any distinction between history and prehistory must be merely conventional, such as the convention of identifying history sensu stricto with the advent of written language.
The evolutionary psychology of human beings carries the imprint of this long past, until recently unknown to us: its loss during the earliest period of civilization was a function of the events of more recent history wiping clean the slate of the earlier history that preceded them. Scientific historiography provides us with the ability to recover lost histories once effaced, and, like a recovered memory, we recognize ourselves in this recovered past because it is true to what we are, still today.
From the perspective of illuminating contemporary human society, we may begin with the historical recovery of the relatively complex societies that emerged in the Upper Paleolithic, communities that were the context from which the Neolithic Agricultural Revolution emerged. But from the perspective of the evolutionary psychology that shaped our minds, we must go back to the origins of the brain in natural history and follow it forward in time, for each stage in the evolution of the brain left its traces in our behavior. The brainstem that we share with reptiles governs autonomic functions and the most rudimentary drives; the limbic system that we share with other mammals, and which is implicated in our sentience-rich biosphere, is responsible for our emotions and a higher grade of consciousness than the brainstem alone can support; and the cerebral cortex enables more advanced cognitive functions that include reflexive self-awareness and historical consciousness (awareness of the past and the future in relation to the immediacy of the present).
Each of these developments in terrestrial brain evolution carries with it its own suite of behaviors, with each new set of behaviors superimposed on previous behaviors much as each new layer of the brain is superimposed upon older layers. Over the longue durée of evolution these developments in brain evolution were also coupled with the evolution of our bodies, which enact the behaviors in question. As we descended from the trees and hunted and killed for food, our stomachs shrank and our brains grew. We have the record of this transition preserved in the bones of our ancestors; we can still see today the cone-shaped ribcage of a gorilla, over the large stomach of a species that has remained primarily vegetarian; we can see in almost every other mammal, almost every other vertebrate, the flat skull with nothing above the eyes, compared to which the domed cranium of hominids seems strange and out of place.
As I wrote in Survival Beyond the EEA, “Evolution means that human beings are (or were) optimized for survival and reproduction in the Environment of Evolutionary Adaptedness (EEA).” (Also on the EEA cf. Existential Threat Narratives) The long history of the formation of our cognitive abilities has refined and modified survival and reproduction behaviors, but it has not replaced them. Our hunter-gatherer ancestors of the Upper Paleolithic were already endowed with the full cognitive power that we continue to enjoy today, though admittedly without the concepts we have formulated over the past hundred thousand years, which have allowed us to make better use of our cognitive endowment in the context of civilization. Everything essential to the human mind was in place long before the advent of civilization, and civilization has not endured for a period of time sufficient to make any essential change to the constitution of the human mind.
The most difficult aspects of the human point of view to grasp objectively are those that have been perfectly consistent and unchanging over the history of our species. And so it is that we do not know ourselves as dwellers on the surface of a planet, shaped by the perspective afforded by a planetary surface, looking up to the stars through the distorting lens of the atmosphere, and held tight to the ground beneath our feet by gravity. At least, we have not known ourselves as such until very recently, and this knowledge has endured for a much shorter period of time than civilization, and hence has had even less impact on the constitution of our minds than has civilization, however much impact it has had upon our thoughts. Our conceptualization of ourselves as beings situated in the universe as understood by contemporary cosmology takes place against the background of the EEA, which is a product of our evolutionary psychology.
To understand ourselves aright, then, we need to understand ourselves as beings with the minds of hunter-gatherers who have come into a wealth of scientific knowledge and technological power over an historically insignificant period of time. How did hunter-gatherers conceive and experience their world? What was the Weltanschauung of hunter-gatherers? Or, if you prefer, what was the worldview of hunter-gatherers?
Living in nature as a part of nature, only differentiated in the slightest degree from the condition of prehuman prehistory, the hunter-gatherer lives always in the presence of the sublime, overwhelmed by an environment of a scale that early human beings had no concepts to articulate. And yet the hunter-gatherer learns to bring down sublimely large game — an empowering experience that must have contributed to a belief in human efficacy and agency in spite of vulnerability to a variable food supply, not yet under human control. Always passing through this sublime setting for early human life, moving on to find water, to locate game, to gather nuts and berries, or to escape the depredations of some other band of hunter-gatherers, our ancestors’ way of life was rooted in the landscape without being settled. The hunter-gatherer is rewarded for his curiosity, which occasionally reveals new sources of food, as he is rewarded for his technological innovations that allow him to more easily hunt or to build a fire. The band never has more children than can be carried by the adults, until the children can themselves escape, by running or hiding, the many dangers the band faces.
As settled agriculturalism began to displace hunter-gatherers, first from the fertile lowlands and river valleys where riparian civilizations emerged, new behaviors appeared that were entirely dependent upon the historical consciousness enabled by the cerebral cortex (that is to say, enabled by the ability to explicitly remember the past and to plan for the future). Here we find fatalism in the vulnerability of agriculture to the weather; humanism in this newfound power over life; a consciousness of human power in the command of productive forces; and the emergence of soteriology and eschatology, the propitiation of fickle gods, as human compensations for the insecurity inherent in the unknowns and uncertainties of integrating human life cycles with the life cycles of domesticated plants and animals, and of the establishment of cities, with their social differentiation and political hierarchies, all unprecedented in the history of the world.
The Weltanschauung of hunter-gatherers, which laid the foundations for the emergence of agrarian and pastoral civilizations, I call the homeworld effect in contradistinction to what Frank White has called the overview effect. The homeworld effect is our understanding of ourselves and of our world before we have experienced the overview effect, and before the overview effect has transformed our understanding of ourselves and our world, as it surely will if human beings are able to realize a spacefaring civilization.
The homeworld effect — that our species emerged on a planetary surface and knows the cosmos initially only from this standpoint — allows us to assert the uniqueness of the overview effect for human beings. The overview effect is an unprecedented historical event that cannot be repeated in the history of a civilization. (If a civilization disappears and all memory of its having attained the overview effect is effaced, then the overview effect can be repeated for a species, but only in the context of a distinct civilization.) A corollary of this is that each and every intelligent species originating on a planetary surface (which I assume fulfills the principle of mediocrity for intelligent species during the Stelliferous Era) experiences a unique overview effect upon the advent of spacefaring, should the cohort of emergent complexities on the planet in question include a technologically competent civilization.
The homeworld effect is a consequence of planetary surfaces being a locus of material resources and energy flows where emergent complexities can appear during the Stelliferous Era (this is an idea I have been exploring in my series on planetary endemism, on which cf. Part I, Part II, Part III, Part IV, and Part V). We can say that the homeworld effect follows from this planetary standpoint of intelligent beings emerging on the surface of a planet, subject to planetary constraints, just as the overview effect follows from an extraterrestrial standpoint.
We can generalize from this observation and arrive at the principle that an effect such as the overview effect or the homeworld effect is contingent upon the experience of some standpoint (or, if you prefer, some perspective) that an embodied being experiences in the first person (and in virtue of being embodied). This first level of generalization makes it obvious that there are many standpoints and many effects that result from standpoints. Standing on the surface of a planet is a standpoint, and it yields the homeworld effect, which when formulated theoretically becomes something like Ptolemaic cosmology — a Weltanschauung or worldview that was implicit and informal for our hunter-gatherer ancestors, but which was explicitly formulated and formalized after the advent of civilization. A standpoint in orbit yields a planetary overview effect, with the standpoint being the conditio sine qua non of the effect, and this converges upon a generalization of Copernican cosmology — what Frank White has called the Copernican Perspective. (We could, in the same spirit, posit a Terrestrial Perspective that is an outgrowth of the homeworld effect.) If a demographically significant population attains a particular standpoint and experiences an effect as a result of this standpoint, and the perspective becomes the perspective of a community, a worldview emerges from the community.
Further extrapolation yields classes of standpoints, classes of effects, classes of perspectives, and classes of worldviews, each member of a class possessing an essential property in common. The classes of planetary worldviews and spacefaring worldviews will be different in detail, but all will share important properties. Civilization(s) emerging on planetary surfaces at the bottom of a gravity well constitute a class of homeworld standpoints. Although each homeworld is different in detail, the homeworld effect and the perspective it engenders will be essentially the same. Initial spacefaring efforts by any civilization will yield a class of orbital standpoints, again, each different in detail, but yielding an overview effect and a Copernican perspective. Further overview effects will eventually (if a civilization does not stagnate or collapse) converge upon a worldview of a spacefaring civilization, but this has yet to take shape for human civilization.
A distinctive aspect of the overview effect, which follows from an orbital standpoint, is the suddenness of the revelation. It takes a rocket only a few minutes to travel from the surface of Earth, the home of our species since its inception, into orbit, which no human being saw until the advent of spacefaring. The suddenness of the revelation not only furnishes a visceral counter-example to what our senses have been telling us all throughout our lives, but also stands in stark contrast to the slow and gradual accumulation of knowledge that today makes it possible to understand our position in the universe before we experience this position viscerally by having attained an orbital standpoint, i.e., an extraterrestrial perspective on all things terrestrial.
With the sudden emergence in history of the overview effect (no less suddenly than it emerges in the experience of the individual), we find ourselves faced with a novel sublime, the sublime represented by the cosmos primeval, a wilderness on a far grander scale than any wilderness we once faced on our planet, and, once again, as with our ancestors before the vastness of the world, the thundering thousands of game animals on the hoof, oceans that could not be crossed and horizons that could not be reached, we lack the conceptual infrastructure at present to fully make sense of what we have seen. The experience is sublime, it moves us, precisely because we do not fully understand it. The human experience of the homeworld effect eventually culminated in the emergence of scientific civilization, which in turn made it possible for human beings to understand their world, if not fully, at least adequately. Further extrapolation suggests that the human experience of the overview effect could someday culminate in an adequate understanding of the cosmos, as our hunter-gatherer drives for locating and exploiting resources wherever they can be found, and the reward for technological innovations that serve this end, continue to serve us as a spacefaring species.
. . . . .
I am indebted to my recent correspondence with Frank White and David Beaver, which has influenced the development and formulation of the ideas above. Much of the material above appeared first in this correspondence.
. . . . .
14 January 2016
David Christian and Stephen Jay Gould on Complexity
The development of the universe as we have been able to discern its course by means of science reveals a growth of emergent complexity against a background of virtually unchanging homogeneity. Some accounts of the universe emphasize the emergent complexity, while other accounts emphasize the virtually unchanging homogeneity. The school of historiography we now call Big History focuses on the emergent complexity. Indeed, Big Historians, most famously David Christian, employ a schematic hierarchy of emergent complexity for a periodization of the history of the universe entire.
In contradistinction to the narrative of emergent complexity, Stephen Jay Gould frequently emphasized the virtually unchanging homogeneity of the world. Gould argued that complexity is marginal, perhaps not even statistically significant. Life is dominated by the simplest forms of life, from its earliest emergence to the present day. Complexity has arisen as an inevitable byproduct of the fact that the only possible development away from the most rudimentary simplicity is toward greater complexity, but complexity in life remains marginal compared to the overwhelming rule of simplicity.
When we have the ability to pursue biology beyond Earth, to de-provincialize biology, as Carl Sagan put it, this judgment of Gould is likely to be affirmed and reaffirmed repeatedly, as we will likely find simple life to be relatively common in the universe, but complexity will be rare, and the more life we discover, the less that complex life will represent of the overall picture of life in the universe. And what Gould said of life we can generalize to all forms of emergent complexity; in a universe dominated by hydrogen and helium, as it was when it began with the big bang, the existence of stars, galaxies, and planets scarcely registers, and 13.7 billion years later the universe is still dominated by hydrogen and helium.
Here is how Gould characterized the place of biological complexity in Full House, his book devoted to an exposition of life shorn of any idea of a trend toward progress:
“I do not deny the phenomenon of increased complexity in life’s history — but I subject this conclusion to two restrictions that undermine its traditional hegemony as evolution’s defining feature. First, the phenomenon exists only in the pitifully limited and restricted sense of a few species extending the small right tail of a bell curve with an ever-constant mode at bacterial complexity — and not as a pervasive feature in the history of most lineages. Second, this restricted phenomenon arises as an incidental consequence — an ‘effect,’ in the terminology of Williams (1966) and Vrba (1980), rather than an intended result — of causes that include no mechanism for progress or increasing complexity in their main actions.”
Stephen Jay Gould, Full House: The Spread of Excellence from Plato to Darwin, 1996, p. 197
And Gould further explained the different motivations and central ideas of two of his most influential books:
“Wonderful Life asserts the unpredictability and contingency of any particular event in evolution and emphasizes that the origin of Homo sapiens must be viewed as such an unrepeatable particular, not an expected consequence. Full House presents the general argument for denying that progress defines the history of life or even exists as a general trend at all. Within such a view of life-as-a-whole, humans can occupy no preferred status as a pinnacle or culmination. Life has always been dominated by its bacterial mode.”
Stephen Jay Gould, Full House: The Spread of Excellence from Plato to Darwin, 1996, p. 4
Gould’s work is through-and-through permeated by the Copernican principle, taken seriously and applied systematically to biology, paleontology, and anthropology. Gould not only denies the centrality of human beings to any narrative of life, he also denies any mechanism that would culminate in some future progress of complexity that would be definitive of life. Gould conceived a biological Copernicanism more radical than anything imagined by Copernicus or his successors in cosmology.
Emergent Complexity during the Stelliferous Era
How are we to understand the cohort of emergent complexities of which we are a part and a representative, and whose cosmic significance we therefore have a vested interest in magnifying? Our reflections on emergent complexity are reflexive (as we are, ourselves, an emergent complexity) and thus are non-constructive in the sense of being impredicative. Perhaps the question for us ought to be, how can we avoid misunderstanding emergent complexity? How are we to circumvent our cognitive biases, which, when projected on a cosmological scale, result in errors of a cosmological magnitude?
Emergent complexities represent the “middle ages” of the cosmos, which first comes out of great simplicity, and which will, in the fullness of time, return to great simplicity. In the meantime, the chaotic intermixing of the elements and parts of the universe can temporarily give rise to complexity. Emergent complexity does not appear in spite of entropy, but rather because of entropy. It is the entropic course of events that brings about the temporary admixture that is the world we know and love. And entropy will, in the same course of events, eventually bring about the dissolution of the temporary admixture that is emergent complexity. In this sense, and as against Gould, emergent complexity is a trend of cosmological history, but it is a trend that will be eventually reversed. Once reversed, once the universe enters well and truly upon its dissolution, emergent complexities will disappear one-by-one, and the trend will be toward simplicity.
One could, on this basis, complete the sequence of emergent complexity employed in Big History by projecting its mirror image into the future, allowing for further emergent complexities prior to the onset of entropy-driven dissolution, except that the undoing of the world will not follow the same sequence of steps in reverse. If the evolution of the universe were phrased in sufficiently general terms, then certainly we could contrast the formation of matter in the past with the dissolution of matter in the future, but matter will not be undone by the reversal of stellar nucleosynthesis.
The Structure of Emergent Complexity
Among the emergent complexities are phenomena like the formation of stars and galaxies, and the nucleosynthesis that makes chemical elements and minerals possible. But as human beings the emergent complexities that interest us the most, perhaps for purely anthropocentric reasons, are life and civilization. We are alive, and we have built a civilization for ourselves, and in life and civilization we see our origins and our end; they are the mirror of human life and ambition. If we were to find life and civilization elsewhere in the universe, we would find a mirror of ourselves in which, no matter how alien, we could see some semblance of a reflection of our origins and our end.
Recognizable life would be life as we know it, as recognizable civilization would be civilization as we know it, presumably following from life as we know it. Life, i.e., life as we know it, is predicated upon planetary systems warmed by stars. Thus it might be tempting to say that the life-bearing period of the cosmos is entirely contained within the stelliferous, but that wouldn’t be exactly right. Even after star formation ceases entirely, planetary systems could continue to support life for billions of years yet. And, similarly, even after life has faded from the universe, civilization might continue for billions of years yet. But each development of a new level of emergent complexity must await the prior development of the emergent complexity upon which it is initially contingent, even if, once established in the universe, the later emergent complexity can outlive the specific conditions of its emergence. This results in the structure of emergent complexities not as a nested series wholly contained within more comprehensive conditions of possibility, but as overlapping peaks in which the conditio sine qua non of the later emergent may already be in decline when the next level of complexity appears.
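The structural claim in this paragraph (that each later emergent complexity begins inside the span of its precondition but may outlast it) can be stated precisely with intervals. Here is a minimal sketch; the time spans are purely hypothetical placeholders, not estimates:

```python
# Purely hypothetical time spans (arbitrary units) illustrating the
# structural claim: each later complexity begins inside its parent's
# span, yet may end after it -- overlapping peaks, not nested boxes.
star_formation = (1, 100)
life           = (10, 140)   # begins under star formation, outlasts it
civilization   = (50, 200)   # begins under life, may outlast it

def overlaps_but_not_nested(inner, outer):
    """True if `inner` begins within `outer` but ends after it."""
    return outer[0] <= inner[0] <= outer[1] and inner[1] > outer[1]

print(overlaps_but_not_nested(life, star_formation))   # True
print(overlaps_but_not_nested(civilization, life))     # True
```

A nested series would make both tests false; the overlap structure is what allows civilization, on this account, to outlive the stellar conditions of its emergence.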
The Ages of Cosmic History
In several posts — Who will read the Encyclopedia Galactica? and A Brief History of the Stelliferous Era — I have adopted the periodization of cosmic history formulated by Fred Adams and Greg Laughlin, which distinguishes between the Primordial Era, the Stelliferous Era, the Degenerate Era, the Black Hole Era, and the Dark Era. The scale of time involved in this periodization is so vast that the “eras” might be said to embody both emergent complexity and unchanging homogeneity, without favoring either one.
The Primordial Era is the period of time between the big bang and the moment when the first stars light up; the Stelliferous Era is dominated by stars and galaxies; during the Degenerate Era it is the degenerate remains of stars that dominate; after even the degenerate remains of stars have dissipated, only massive black holes remain in the Black Hole Era; and after even the black holes dissipate, it is the Dark Era, when the universe quietly converges upon heat death. All of these ages of the universe, except perhaps the last, exhibit emergent complexity and embrace a range of astrophysical processes, but in adopting such sweeping periodizations the homogeneity of each era is made clear.
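As a rough sketch, the five ages can be encoded as a lookup keyed to the “cosmological decade,” the base-ten logarithm of the years elapsed since the big bang. The boundary decades below are approximate values associated with Adams and Laughlin’s scheme, and should be treated as illustrative rather than authoritative:

```python
# Approximate Adams & Laughlin periodization, keyed to the
# "cosmological decade" eta = log10(years since the big bang).
# Boundary values are approximate and illustrative.
ERAS = [
    (6,   "Primordial Era"),    # eta < 6: before widespread star formation
    (14,  "Stelliferous Era"),  # 6 <= eta < 14: stars and galaxies dominate
    (40,  "Degenerate Era"),    # 14 <= eta < 40: white dwarfs, neutron stars
    (100, "Black Hole Era"),    # 40 <= eta < 100: black holes dominate
]

def era(eta: float) -> str:
    """Return the era for a given cosmological decade eta."""
    for upper_bound, name in ERAS:
        if eta < upper_bound:
            return name
    return "Dark Era"  # eta >= 100: only the diffuse products of decay remain

# The present universe (~1.37e10 years, eta ~ 10.1) is Stelliferous:
print(era(10.1))  # prints "Stelliferous Era"
```

Notice how small the present decade (about 10) is against the full range: the scheme spans some one hundred orders of magnitude, which is why each era reads as homogeneous despite containing enormous internal variety.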
Big History’s first threshold of emergent complexity corresponds to the Primordial Era, but the remainder of its periodizations of emergent complexity are all entirely contained within the Stelliferous Era. I am not aware of any big history periodization that projects into the far future as embraced by Adams and Laughlin’s five ages periodization. Big history looks forward to the ninth threshold, which comprises some unnamed, unknown emergent complexity, but it usually does not look as far into the future as the heat death of the universe. (The idea of the “ninth threshold” is a non-constructive concept, I will note — the idea that there will be some threshold and some new emergent complexity, even as we acknowledge that we do not know what this threshold will be, nor do we know anything of the emergent complexity that will characterize it.) Another periodization of comparable scale, Eric Chaisson’s decomposition of cosmic history into the Energy Era, the Matter Era, and the Life Era, cuts across Adams and Laughlin’s five ages of the universe, with the distinction between the Energy Era and the Matter Era decomposing the early history of the universe a little differently than the distinction between the Primordial Era and the Stelliferous Era.
The “peak Stelliferous Era,” understood as the period of peak star formation during the Stelliferous Era, has already passed. The universe as defined by stars and galaxies is already in decline — a terminal decline that will end in new stars ceasing to form, and then in the stars that have formed up to that time eventually burning out, one by one, until none are left. First the bright blue stars will burn out, then the sun-like stars, and the dwarf stars will outlast them all, slowly burning their fuel for billions of years to come. That is still a long time in the future for us, but the end of the peak stelliferous is already a long time in the past for us.
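The order of burnout follows from the steep dependence of main-sequence lifetime on stellar mass. A common rough scaling, which I am supplying here as an illustration rather than taking from the text, is t ≈ 10 Gyr · (M/Msun)^-2.5:

```python
def lifetime_gyr(mass_solar: float) -> float:
    """Rough main-sequence lifetime in Gyr for a star of the given mass
    (in solar masses), using the approximate scaling
    t ~ 10 Gyr * (M / M_sun)**-2.5.  Illustrative only."""
    return 10.0 * mass_solar ** -2.5

# Bright blue stars die first; red dwarfs outlast everything:
print(f"10 M_sun (blue):    {lifetime_gyr(10):.3f} Gyr")  # tens of Myr
print(f" 1 M_sun (sunlike): {lifetime_gyr(1):.1f} Gyr")   # ~10 Gyr
print(f"0.2 M_sun (dwarf):  {lifetime_gyr(0.2):.0f} Gyr") # hundreds of Gyr
```

On this rough scaling a ten-solar-mass blue star lasts only a few tens of millions of years, while a red dwarf at a fifth of a solar mass burns for hundreds of billions of years, far beyond the current age of the universe.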
In the paper The Complete Star Formation History of the Universe, by Alan Heavens, Benjamin Panter, Raul Jimenez, and James Dunlop, the authors note that the stellar birthrate peaked between five and eight billion years ago (with the authors of the paper arguing for the more recent peak). Both dates are near to being half the age of the universe, and our star and planetary system were only getting their start after the peak stelliferous had passed. Since the peak, star formation has fallen by an order of magnitude.
The paper cited above was from 2004. Since then, a detailed study of star formation rates, widely reported in 2012, located the peak of stellar birthrates about 11 billion years ago, or 2.7 billion years after the big bang, in which case the greater part of the Stelliferous Era that has elapsed to date has come after the peak of star formation. An even more recent paper, Cosmic Star Formation History, by Piero Madau and Mark Dickinson, argues for peak star formation about 3.5 billion years after the big bang. What all of these studies have in common is finding peak stellar birthrates billions of years in the past, placing the present universe well after the peak stelliferous.
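The conversion between these two ways of dating the peak, billions of years ago versus billions of years after the big bang, is simple arithmetic against the age of the universe, here taken as 13.7 billion years, the figure used above:

```python
AGE_OF_UNIVERSE_GYR = 13.7  # age of the universe used in the text

def cosmic_time(lookback_gyr: float) -> float:
    """Convert a lookback time (Gyr ago) to time after the big bang."""
    return AGE_OF_UNIVERSE_GYR - lookback_gyr

# Peak star formation estimates cited above:
print(f"{cosmic_time(11.0):.1f}")  # 2012 study: 2.7 Gyr after the big bang
print(f"{cosmic_time(8.0):.1f}")   # Heavens et al. early bound: 5.7 Gyr
print(f"{cosmic_time(5.0):.1f}")   # Heavens et al. late bound: 8.7 Gyr
```

Either way the sums are done, every cited estimate puts the peak billions of years behind us, which is the point at issue.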
A recent paper that was widely noted and discussed, On The History and Future of Cosmic Planet Formation by Peter Behroozi and Molly Peeples, argued that, “…the Universe will form over 10 times more planets than currently exist.” (Also cf. Most Earth-Like Worlds Have Yet to Be Born, According to Theoretical Study) Thus even though we have passed the peak of the Stelliferous Era in terms of star formation, we may not yet have reached the peak of the formation of habitable planets, and the population of habitable planets must peak before the planets actually inhabited by life as we know it can peak, thereby achieving peak life in the universe.
The Behroozi and Peeples paper states:
“…we note that only 8% of the currently available gas around galaxies (i.e., within dark matter haloes) had been converted into stars at the Earth’s formation time (Behroozi et al. 2013c). Even discounting any future gas accretion onto haloes, continued cooling of the existing gas would result in Earth having formed earlier than at least 92% of other similar planets. For giant planets, which are more frequent around more metal-rich stars, we note that galaxy metallicities rise with both increasing cosmic time and stellar mass (Maiolino et al. 2008), so that future galaxies’ star formation will always take place at higher metallicities than past galaxies’ star formation. As a result, Jupiter would also have formed earlier than at least ~90% of all past and future giant planets.”
We do not know the large scale structure of life in the cosmos, whether in terms of space or time, so that we are not at present in a position to measure or determine peak life in the way that contemporary science can at least approach an estimate of the peak stelliferous. However, we can at least specify the scientific resources that would be necessary for such a determination. The ability to take spectroscopic readings of exoplanet atmospheres, in the way that we can now employ powerful telescopes to see stars throughout the universe, would probably be sufficient to make an estimate of life throughout the universe. This is a distant but still an entirely conceivable technology, so that an understanding of the large scale structure of life in space and time need not elude us perpetually.
Even if life exclusively originated on Earth, the technological agency of civilization may engineer a period of peak life that follows long after the possibility of continued life on Earth has passed. Life in possession of technological agency can spread itself throughout the worlds of our galaxy, and then through the galaxies of the universe. But peak life, in so far as we limit ourselves to life as we know it, must taper off and come to an end with the end of the Stelliferous Era. Life in some form may continue on, but peak life, in the sense of an abundance of populated worlds of high biodiversity, is a function of a large number of worlds warmed by countless stars throughout our universe. As these stars slowly use up their fuel and no new stars form, there will be fewer and fewer worlds warmed by these stars. As stars go cold, worlds will go cold, one by one, throughout the universe, and life, even if it survives in some other, altered form, will occupy fewer and fewer worlds until no “worlds” in this sense remain at all. This inevitable decline of life, however abundantly or sparingly distributed throughout the cosmos, eventually ending in the extinction of life as we know it, I have called the End Stelliferous Mass Extinction Event (ESMEE).
If we do not know when our universe will arrive at a period of peak life, even less do we know the period of peak civilization — whether it has already happened, whether it is right now, right here (if we are the only civilization in the universe, and all that will ever be, then civilization on Earth right now represents peak civilization), or whether peak civilization is still to come. We can, however, set parameters on peak civilization as we can set parameters on the peak star formation of the Stelliferous Era and on peak life.
The origins of civilization as we know it are contingent upon life as we know it, and life as we know it, as we have seen, is a function of the Stelliferous Era cosmos. However, civilization may be defined (among many other possible definitions) as life in possession of technological agency, and once life possesses technological agency it need not remain contingent upon the conditions of its origins. Some time ago in Human Beings: A Solar Species I addressed the idea that humanity is a solar species. Descriptively this is true at present, but it would be a logical fallacy to conflate the “is” of this present descriptive reality with an “ought” that prescribes our dependence upon our star, or even upon the system of stars that is the Stelliferous Era.
Civilization need not suffer from the End Stelliferous Mass Extinction Event as life must inevitably and eventually suffer. It could be argued that civilization as we know it (and, moreover, as defined above as “life in possession of technological agency”) is as contingent upon the conditions of the Stelliferous Era as is life as we know it. But if we focus on the technological agency rather than upon life as we know it, even the far future of the universe offers amazing opportunities for civilization. The energy that we now derive from our star and from fossil fuels (itself a form of stored solar energy) we can derive on a far greater scale from the angular momentum of rotating black holes (not to mention other exotic forms of energy available to supercivilizations), and black holes and their resources will be available to civilizations even beyond the Degenerate Era following the Stelliferous Era, throughout the Black Hole Era.
In Addendum on Degenerate Era Civilization and Cosmology is the Principle of Plenitude teaching by Example I considered some of the interesting possibilities remaining for civilization during the Degenerate Era, and I pushed this perspective even further in my long Centauri Dreams post Who will read the Encyclopedia Galactica?
It is not until the Dark Era that the universe leaves civilization with no extractable energy resources, so that, if we have not by that time found our way to another, younger universe, it is the end of the Black Hole Era, and not the end of the Stelliferous Era, that will spell the doom of civilization. As black holes fade into nothingness one by one, much like stars at the end of the Stelliferous Era, the civilizations dependent upon them will wink out of existence, and this will be the End Civilization Mass Extinction Event (ECMEE) — but only if there is a mass of civilizations at this time to go extinct. This would mark the end of the apotheosis of emergent complexity.
The Apotheosis of Emergent Complexity
We can identify a period of time for our universe that we may call the apotheosis of emergent complexity, when stars are still forming, though on the decline, civilizations are only beginning to establish themselves in the cosmos, and life in the universe is at its peak. During this period, all of the forms of emergent complexity of which we are aware are simultaneously present, and the ecologies of galaxies, biospheres, and civilizations are all enmeshed each in the other.
It remains a possibility, perhaps even a likelihood, that further, unsuspected emergent complexities will grace the universe before its final dissolution in a heat death, when the universe will be reduced to thermodynamic equilibrium, the lowest common denominator of existence as we know it. Further forms of emergent complexity would require that we extend the framework I have suggested here, but, short of a robust and testable theory of the multiverse, which would extend the emergent complexity of stars, life, and civilizations to universes other than our own, the basic structure of the apotheosis of emergent complexity should remain as outlined above, even if extended by new forms.
. . . . .
. . . . .
. . . . .
. . . . .
28 November 2015
As yet we have too little evidence of civilization to understand civilizational processes. This sounds like a mere platitude, but it is a platitude to which we can give content by pointing out the relative lack of content of our conception of civilization.
On a scale below that of macro-historical transitions (which I previously called macro-historical revolutions), we have many examples: many examples of the origins of civilization, many examples of the ends of civilizations, and many examples of the transitions that occur within the development and evolution of civilization. In other words, we have a great deal of evidence when it comes to individual civilizations, but we have very little evidence — insufficient evidence to form a judgment — when it comes to civilization as such (what I previously, very early in the history of this blog, called The Phenomenon of Civilization).
On the scale of macro-historical change, we have only a single instance in the history of terrestrial civilization, i.e., only a single data point on which to base any theory about macro-historical intra-civilizational change, and that is the shift from agricultural civilization (agrarian-ecclesiastical civilization) to industrial civilization (industrial-technological civilization). Moreover, the transition from agricultural to industrial civilization is still continuing today, and is not yet complete, as in many parts of the world industrialization is marginal at best and subsistence agriculture is still the economic mainstay.
Prior to this there was a macro-scale transition with the advent of civilization itself — the transition from hunter-gatherer nomadism to agrarian-ecclesiastical civilization — but this was not an intra-civilizational change, i.e., this was not a fundamental change in the structure of civilization, but the origins of civilization itself. Thus we can say that we have had multiple macro-scale transitions in human history, but human history is much longer than the history of civilization. When civilization emerges within human history it is a game-changer, and we are forced to re-conceptualize human history in terms of civilization.
Parallel to agrarian-ecclesiastical civilization, but a little later in emergence and development, was pastoral-nomadic civilization, which proved to be the greatest challenge to face agrarian-ecclesiastical civilization until the advent of industrialization (cf. The Pastoralist Challenge to Agriculturalism). Pastoral-nomadic civilization seems to have emerged independently in central Asia shortly after the domestication of the horse (and then, again independently, in the Great Plains of North America when horses were re-introduced), probably among peoples practicing subsistence agriculture without having produced the kinds of civilization found in centers of civilization in the Old World — the Yellow River Valley, the Indus Valley, and Mesopotamia.
Pastoral-nomadic civilization, as it followed its developmental course, was not derived from any great civilization, so there was no intra-civilizational transition at its advent, and when it ultimately came to an end it did not end with a transition into a new kind of civilization, but was rather supplanted by agricultural civilization, which slowly encroached on the great grasslands that were necessary for the pasturage of the horses of pastoral-nomadic peoples. So while pastoral-nomadic civilization was a fundamentally different kind of civilization — as different from agricultural civilization as agricultural civilization is different from industrial civilization — the particular circumstances of the emergence and eventual failure of pastoral-nomadic civilization in human history did not yield additional macro-historical transitions that could have provided evidence for the study of intra-civilizational macro-historical change (though it certainly does provide evidence for the study of intra-civilizational conflict).
We would be right to be extremely skeptical of any predictions about the future transition of our civilization into some other form of civilization when we have so little information to go on. All of this is civilization beyond the prediction wall. The view from within a civilization (i.e., the view that we have of ourselves in our own civilization) places too much emphasis upon slight changes to basic civilizational structures. We see this most clearly in mass media publications which present every new fad as a “sea change” that heralds a new age in the history of the world; of course, newspapers and magazines (and now their online equivalents) must adopt this shrill strategy in order to pay the bills, and no one employed at these publications necessarily needs to believe the hyperbole being sold to a gullible public. The most egregious futurism of the twentieth century was a product of precisely the same social mechanism, so that we should not be surprised that it was as inaccurate as it was. (On media demand-driven futurism cf. The Human Future in Space.)
. . . . .
. . . . .
. . . . .
. . . . .
26 October 2015
Between the advent of cognitive modernity, perhaps seventy thousand years ago (more or less), and the advent of settled agricultural civilization, about ten thousand years ago, there is a period of fifty thousand years or more of human history — an order of magnitude of history beyond the historical period, sensu stricto, i.e., the period of written records formerly presumed coextensive with civilization — that we have only recently begun to recover by the methods of scientific historiography. This pre-Holocene world was a world of the “ice age” and of “cave men.” These ideas have become so confused in popular culture that I must put them in scare quotes, but in some senses they are accurate, if occasionally misleading.
One way in which the idea of an “Ice Age” is misleading is that it implies that our warmer climate today is the norm and an ice age is a passing exception to that norm. This is the reverse of the case. For the past two and a half million years the planet has been passing through the Quaternary Period, which mostly consists of long (about 100,000 year) periods of glaciation punctuated by shorter (about 10,000 year) interglacial periods (also called warming periods) during which the global climate warms and the polar ice sheets retreat. I have pointed out elsewhere that, although human ancestors have been present throughout the entire Quaternary, and so have therefore experienced several cycles of glaciation and interglacials, the present interglacial (the Holocene) is the first warming period since cognitive modernity, and we find the beginnings of civilization as soon as this present warming period begins. Thus the Holocene Epoch is dominated, from an anthropocentric perspective, by civilization; the Quaternary Period before the Holocene Epoch is, again from an anthropocentric perspective, human history before civilization: history before history.
We should remind ourselves that this very alien world is the precursor to our world, and that its inhabitants are our direct ancestors. In other words, this is us. This is our history, even if we have only recently become accustomed to thinking of prehistory as history no less than the historical period sensu stricto. The Upper Paleolithic, with its ice age, cave bears, cave men, painted animals seen in flickering torchlight, and thousands upon thousands of years of a winter that does not end was a human world — the human world of the Upper Paleolithic — that we can only with effort recover as our own and come to feel its formative power to shape what we have become. In technical terms, this human world of the Upper Paleolithic was our environment of evolutionary adaptedness (EEA). It is this world that made us what we are today.
One website has this very evocative passage describing the world of the Upper Paleolithic:
“The longest war ever fought by humans was not fought against other humans, but against another species — Ursus spelaeus, the Cave Bear. For several hundred thousand years our stone age ancestors fought pitched and bloody battles with these denizens of the most precious commodity on earth — habitable caves. Without these shelters homo sapiens would have had little chance of surviving the Ice Ages, the winter storms, and the myriad of predators that lurked in the dark.”
While there isn’t direct scientific evidence for this compellingly dramatic way of thinking about the Upper Paleolithic (though I was very tempted to title this post “The 100,000 Year War”), it can accurately be said that human/cave bear interactions did occur during the most recent glacial maximum, and that, since both human beings and cave bears are warm-blooded mammals, caves would have provided a measure of protection and warmth that endured literally for thousands or tens of thousands of years during this climatological “bottleneck” for mammals, whereas no human-built shelter could have survived these conditions for that period of time. Another species as ill-suited for cold weather as homo sapiens would simply have moved on or gone extinct, but we had our big brains by this time, and this made it possible for early man to fight tenaciously to keep a grip on life even in an environment in which they had to fight cave bears for the few available shelters.
Human beings would have survived elsewhere on the planet in any event, because the equatorial belt was still plenty warm at the time, but the fact that some human beings survived in caves in glaciated Europe is a testament both to their cognitive modernity and their stubbornness. It becomes a little easier to understand how and why early human beings squeezed into caves by passages that cause contemporary archaeologists to experience not a little claustrophobia, when we understand that human beings were routinely inhabiting caves, and probably had to explore them in some depth to make sure they wouldn’t have any unpleasant surprises when a cave bear woke up from its hibernation in the spring.
Unlike human beings, cave bears probably could not have survived elsewhere — they were a species endemic to a particular climate and a particular range and did not have the powers of behavioral adaptation possessed by human beings. The caves of ice age Eurasia were their world, and they spent enough time in these shelters that the walls of caves have a distinctive sheen that is called “Bärenschliffe”:
The “Bärenschliffe” are smooth, polished and often shining surfaces, thought to be caused by passing bears, rubbing their fur along the walls. These surfaces do not only occur in narrow passages, where the bear would come into contact with the walls, but also at corners or rocks in wider passages.
“Trace fossils from bears in caves of Germany and Austria” by Wilfried Rosendahl and Doris Döppes, Scientific Annals, School of Geology Aristotle University of Thessaloniki, Special volume 98, p. 241-249, Thessaloniki, 2006.
Some of these caves are said to be polished “like marble” (I haven’t visited any of these caves myself, so I am reporting what I have read in the literature), so that one must imagine cave bears passing through the narrow passages of their caves for thousands of years, brushing against the wall with their fur until the rough stone is made smooth. The human beings who later took over these caves would have run their hand along these smooth walls, noted the niches where the bears hibernated, and wondered if another bear would come to claim the cave they had claimed.
There is a particularly interesting cave in Switzerland, Drachenloch (which means “dragon’s lair,” as cave bear skulls were once thought to have been the skulls of dragons), in which early human beings seem to have stacked cave bear skulls in a stone “vault” in the floor of the cave. Certainly these two mammal species — ursus spelaeus and homo sapiens — would have known each other by all their shared signs of cave habitation. Indeed, they would have smelled each other.
Mythology scholar Joseph Campbell many times pointed out the fundamental mythological differences between hunter-gatherer peoples and settled agricultural peoples; in the case of the Upper Paleolithic, we have hunter-gatherers and only hunter-gatherers — that is to say, tens of thousands of years of a belief system emergent from a hunting culture with virtually no alternatives. Given the tendency of hunting peoples to animism, and of viewing other species as spiritually significant — metaphysical peers, as it were — one would expect that hunters who fought and killed cave bears in order to take over their shelters would have revered these animals in a religious sense, and this religious reverence for the slain foe (of any species) could explain the prevalence of apparent cave bear altars in caves inhabited by human beings during the Upper Paleolithic.
The human world of the Upper Paleolithic would also have been a world shared with other hominid species — an experience we do not have today, being the sole surviving hominid (perhaps as the result of being a genocidal species) — and most especially shared with Neanderthals. Recent genetic research has demonstrated that there was limited interbreeding between homo sapiens and Neanderthals (cf., e.g., Neanderthals had outsize effect on human biology), but it is likely that these communities were mostly separate. If we reflect on the still powerful effect of in-group bias in our cosmopolitan world, how much stronger must in-group bias have been among these small communities of homo sapiens, homo neanderthalensis, and Denisova hominins? One suspects that strong taboos were associated with other species, and rivals in hunting.
It is likely that Neanderthals evolved in the Levant or Europe from human ancestors who left Africa prior to the speciation of Homo sapiens. Neanderthals were specifically adapted to life in the cold climates of Eurasia during the last glacial maximum. However, such is the power of intelligence as an adaptive tool that the modern human beings who left Africa were able to displace Neanderthals in their own environment, much as homo sapiens displaced a great many other species (and much as they displaced cave bears from their caves). While Neanderthals had larger brains than Homo sapiens, made tools, and wore clothing after a fashion, they did not pass through a selective filter that would have resulted in the Neanderthal equivalent of cognitive modernity.
Homo sapiens made better tools and better clothing, and, in the depths of the last glacial maximum, better tools and better clothing constituted the margin between survival and extinction. Perhaps the most significant invention in hominid history after the control of fire was the bone needle, which allowed for the sewing of form-fitting clothing. With form-fitting clothing our prehistoric ancestors were able to make their way through the world of the last glacial maximum and to occupy every biome and every continent on the planet (with the exception of Antarctica).
While “lost worlds” and inexplicable mysteries are a favorite feature of historical popularization, the lost human world of the Upper Paleolithic is being recovered for us by scientific historiography. We are, as a result, reclaiming a part of our identity lost for the ten thousand years of civilization since the advent of the Holocene. The mystery of human origins is gradually becoming less mysterious, and will become less mysterious still, the more that we learn.
. . . . .
. . . . .
. . . . .
. . . . .
15 August 2015
In a series of posts I started last summer, A Century of Industrialized Warfare, I reflected on some of the significant 100 year anniversaries of the First World War. There are many more centennials yet to come. There is, in fact, almost a century of centennials from a century of almost continuous warfare.
Many have made the claim that the First and Second World Wars were one war with a twenty year hiatus (to rearm and regroup), ever since Marshal Ferdinand Foch, upon seeing the terms of the Treaty of Versailles, summarily announced, “This is not a peace. It is an armistice for twenty years.” (Foch was not one of those, like Keynes, who saw the terms as too harsh; Foch was disturbed that Germany was not completely dismembered as a nation-state.) This reasoning can be extrapolated beyond the First and Second World Wars, which were followed immediately by the Cold War, and so on. If we make this extrapolation, we have a period of armed conflict rivaled in its duration only by the Hundred Years’ War.
The Hundred Years’ War was a construction of later historians: no one in the fourteenth and fifteenth centuries called the series of conflicts between the English and the French the “Hundred Years’ War,” and no one today calls the series of conflicts triggered by the First World War the “Second Hundred Years’ War,” though we can use the second term with as much justification as the first. Our periodizations are devices that we employ to attempt to help us better understand the past. While our metaphysical ambition is to carve nature at the joints, it is not clear that we can do this with history, i.e., that there is an intrinsic metaphysical structure to history. And we might understand the past century better if we understood our time as the Second Hundred Years’ War.
As the Hundred Years’ War is divided into a periodization of the Edwardian Era War (1337–1360), the Caroline War (1369–1389), and the Lancastrian War (1415–1453), so too we can divide the Second Hundred Years’ War into World War One, World War Two, The Cold War… and then whatever historians will eventually call our present stage of instability consisting of a series of Balkan wars, Persian Gulf wars, Central Asian wars, and the “War on Terror.” In both cases — that is to say, in both Hundred Year wars — the outcome of each major conflict created the conditions for the conflict to follow, and follow they did, with a dreary inevitability.
If the First Hundred Years’ War was about who would control the largest kingdom on the European continent (i.e., France), the Second Hundred Years’ War is about a political settlement in the context of industrial-technological civilization, when civilization is global. In other words, the Second Hundred Years’ War is about who will control the planet. This was already implicit in the geopolitics that led up to the First and Second World Wars, specifically, in Mackinder’s doctrine (sometimes called The Geographical Pivot of History) that, “Who rules East Europe commands the Heartland; who rules the Heartland commands the World-Island; who rules the World-Island commands the world.” (Mackinder, Democratic Ideals and Reality, p. 150)
I am not defending Mackinder’s view, which is still today discussed by geostrategists; I have observed elsewhere that Mackinder’s focus on land power was balanced by Alfred Thayer Mahan’s focus on sea power. The world-island, after all, is situated in the world-sea, and either can be a pathway to global dominion. But, really, this is not very interesting any more. No one talks about world dominion in explicit terms these days (except for villains in James Bond films), while the practical and pragmatic approaches to global power projection no longer look like Mackinder (or Mahan).
Nevertheless, there is a sense in which the global political system, which cannot avoid being global today because of the way all civilizations are crowded up against each other, seeks an equilibrium, and an equilibrium would be some global settlement of power relationships that would allow for an internal security regime in each nation-state and an external security regime that minimized conflict and facilitated trade and commerce. If this is what “global dominion” means today, so be it. Perhaps you would prefer to call it peace. Whatever you call it, this is what it will take to end the Second Hundred Years’ War.
. . . . .
. . . . .
A Century of Industrialized Warfare
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
27 May 2015
Is it possible for human beings to care about the fate of strangers? This is at once a profound philosophical question and an immediately practical question. The most famous response to this question is perhaps that of John Donne:
“No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.”
John Donne, Devotions upon Emergent Occasions, XVII. Nunc lento sonitu dicunt, morieris. Now, this bell tolling softly for another, says to me: Thou must die.
Emmanuel Levinas spoke of “the community of those with nothing in common,” in an attempt to get at the human concern for other human beings of whom we know little or nothing. More recently, there is this from Bill Gates:
“When I talk to friends about global health, I often run into a strange paradox. The idea of saving one person’s life is profound and thrilling. But I’ve found that when you talk about saving millions of lives — it sounds almost meaningless. The higher the number, the harder it is to wrap your head around.”
Bill Gates, opening paragraph of An AIDS Number That’s Almost Too Big to Believe
Gates presents this as a paradox, but in social science it is a well-known and well-studied cognitive bias known as the Identifiable victim effect. One researcher who has studied this cognitive bias is Paul Slovic, whose work was discussed by Sam Harris in the following passage:
“…when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.”
“Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this ‘psychic numbing’ explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed ‘genocide neglect’ — our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering — represents one of the more perplexing and consequential failures of our moral intuition.”
“Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.”
Sam Harris, The Moral Landscape, Chapter 2
Skip down another paragraph and Harris adds this:
“The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention.”
While Harris has not hesitated to court controversy, and speaks the truth plainly enough as he sees it, by failing to place what he characterizes as norms of moral reasoning in an evolutionary context he presents us with a paradox (the above section of the book is subtitled “Moral Paradox”). Really, this kind of cognitive bias only appears paradoxical when compared to a relatively recent conception of morality liberated from parochial in-group concerns.
For our ancestors, focusing on a single individual whose face is known had a high survival value for a small nomadic band, whereas a broadly humanitarian concern for all human beings would have been disastrous in equal measure. Today, in the context of industrial-technological civilization we can afford to love humanity; if our ancestors had loved humanity rather than particular individuals they knew well, they likely would have gone extinct.
Our evolutionary past has ill prepared us for the perplexities of population ethics in which the lives of millions may rest on our decisions. On the other hand, our evolutionary past has well prepared us for small group dynamics in which we immediately recognize everyone in our in-group and with equal immediacy identify anyone who is not part of our in-group and who therefore belongs to an out-group. We continue to behave as though our decisions were confined to a small band of individuals known to us, and the ability of contemporary telecommunications to project particular individuals into our personal lives as though we knew them, as if they were part of our in-group, plays into this cognitive bias.
While the explicit formulation of the identifiable victim effect is recent, the principle has been well known for hundreds of years at least, and has been as compellingly described in historical literature as in recent social science, as, for example, in Adam Smith:
“Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”
Adam Smith, Theory of Moral Sentiments, Part III, chapter 3, paragraph 4
And immediately after Hume made his famous claim that, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them,” he illustrated the claim with an observation similar to Smith’s:
“Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.”
David Hume, A Treatise of Human Nature, Book II, Part III, section 3
Bertrand Russell has well described how the expression of this cognitive bias can take on the conceit of moral superiority in the context of romanticism:
“Cultivated people in eighteenth-century France greatly admired what they called la sensibilité, which meant a proneness to emotion, and more particularly to the emotion of sympathy. To be thoroughly satisfactory, the emotion must be direct and violent and quite uninformed by thought. The man of sensibility would be moved to tears by the sight of a single destitute peasant family, but would be cold to well-thought-out schemes for ameliorating the lot of peasants as a class. The poor were supposed to possess more virtue than the rich; the sage was thought of as a man who retires from the corruption of courts to enjoy the peaceful pleasures of an unambitious rural existence.”
Bertrand Russell, A History of Western Philosophy, Part II. From Rousseau to the Present Day, CHAPTER XVIII “The Romantic Movement”
Russell’s account of romanticism provides some of the missing rationalization whereby a cognitive bias clearly at variance with norms of moral reasoning is justified as being the “higher” moral ground. Harris seems to suggest that, as soon as this violation of moral reasoning is pointed out to us, we will change. But we don’t change, for the most part. Our rationalizations change, but our behavior rarely does. And indeed studies of cognitive bias have revealed that even when experimental subjects are informed of cognitive biases that should be obvious ex post facto, most will continue to defend choices that unambiguously reflect cognitive bias.
I have personally experienced the attitude described by Russell (despite the fact that I have not lived in eighteenth-century France) more times than I care to recall, though I find myself temperamentally on the side of those formulating well-thought-out schemes for the amelioration of the lot of the destitute as a class, rather than those moved to tears by the sight of a single destitute family. From these personal experiences of mine, anecdotal evidence suggests to me that if you attempt to live by the quasi-utilitarianism advocated by Russell and Harris, others will regard you as cold, unfeeling, and lacking in the milk of human kindness.
The cognitive bias challenge to presumptive norms of moral reasoning is also a profound challenge to existential risk mitigation, since existential risk mitigation deals in the largest numbers of human lives saved, but is a well-thought-out scheme for ameliorating the lot of human beings as a class, and may therefore have little emotional appeal compared to putting an individual’s face on a problem and then broadcasting that face repetitively.
We have all heard that the past is a foreign country, and that they do things differently there. (This line comes from the 1953 novel The Go-Between by L. P. Hartley.) We are the past of some future that has yet to occur, and we will in turn be a foreign country to that future. And, by the same token, the future is a foreign country, and they do things differently there. Can we care about these foreigners with their foreign ways? Can we do more than care about them, and actually change our behavior in the present in order to ensure an ongoing future, however foreign that future is from our parochial concerns?
In Bostrom’s paper “Existential Risk Prevention as Global Priority” (Global Policy, Volume 4, Issue 1, February 2013) the author gives a lower bound of 10^16 potential future lives saved by existential risk mitigation (though he also gives “a lower bound of 10^54 human-brain-emulation subjective life-years” as a possibility), but if the “collapse of compassion” is a function of the numbers involved, the higher the numbers we cite for individuals saved as a result of existential risk mitigation, the less will the average individual of today care.
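The force of Bostrom’s lower bound can be made concrete with a back-of-the-envelope expected-value calculation. The sketch below is my own illustration, not a calculation from Bostrom’s paper; only the 10^16 figure comes from the text.

```python
# Back-of-the-envelope expected value of existential risk mitigation,
# using Bostrom's conservative lower bound of 10**16 potential future lives.
potential_future_lives = 1e16

def expected_lives_saved(risk_reduction: float) -> float:
    """Expected number of future lives preserved by reducing the
    probability of extinction by `risk_reduction`."""
    return risk_reduction * potential_future_lives

# Even a one-in-a-million reduction in extinction risk has an enormous
# expected payoff: ten billion lives.
print(f"{expected_lives_saved(1e-6):.0e}")  # prints 1e+10
```

The irony the paragraph above points out is that the very magnitude of this number may weaken, rather than strengthen, its emotional appeal.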
Would it be possible to place an identifiable victim in the future? This is difficult, but we are all familiar with appeals to the world we leave to our children, and these are attempts to connect identifiable victims with actions that may prejudice the ability of human beings in the future to live lives of value commensurate with our own. It would be possible to construct some grand fiction, like Plato’s “noble lie,” in order to interest the mass of the public in existential risk mitigation, but this would not be successful unless it became some kind of quasi-religious belief, exempted from falsification, serving as the receptacle of our collective hopes. This does not seem very plausible (or sustainable) to me.
Are we left, then, to take the high road? To try to explain in painstaking (and off-putting) detail the violation of moral norms involved in our failure to adequately address existential risks, thereby putting our descendants in mortal danger? Certainly if an attempt to place an identifiable victim in the future is doomed to failure, we have no remaining option but the attempt at a moral intervention and relentless moral education that could transform the moral lives of humanity.
I do not think either of the above approaches to resolving the identifiable victim challenge to existential risk mitigation would be likely to be successful. I can put this more strongly yet: I think both approaches would almost certainly result in a backlash and would therefore be counter-productive to existential risk mitigation efforts. The only way forward that I can see is to present existential risk mitigation under the character of the adventure and exploration made possible by a spacefaring civilization that would, almost as an unintended consequence, secure the redundancy and autonomy of extraterrestrial centers of human civilization.
Human beings (at least as I know them) have a strong distaste for moral lectures and do not care to be told to do anything for their own good, but if you present them with the possibility of adventure and excitement that promises new experiences to every individual and possibly even the prospect of the extraterrestrial equivalent of a buried treasure, or even a pot of gold at the end of the rainbow, you might enlist the selfishness and greed of individuals in a great cause on behalf of Earth and all its inhabitants, so that each individual is moved, as it were, by an invisible hand to promote an end which was no part of his intention.
. . . . .
. . . . .
Existential Risk: The Philosophy of Human Survival
13. Existential Risk and Identifiable Victims
. . . . .
. . . . .
. . . . .
. . . . .
30 January 2015
Introduction: Periodization in Cosmology
Recently Paul Gilster posted my Who will read the Encyclopedia Galactica? on his Centauri Dreams blog. In this post I employ the framework of Fred Adams and Greg Laughlin from their book The Five Ages of the Universe: Inside the Physics of Eternity, who distinguish the Primordial Era, before stars have formed, the Stelliferous Era, which is populated by stars, the Degenerate Era, when only the degenerate remains of stars are to be found, the Black Hole Era, when only black holes remain, and finally the Dark Era, when even black holes have evaporated. These major divisions of cosmological history allow us to partition the vast stretches of cosmological time, but they also invite us to subdivide each era into smaller increments (such is the historian’s passion for periodization).
The Stelliferous Era is the most important to us, because we find ourselves living in the Stelliferous Era, and moreover everything that we understand in terms of life and civilization is contingent upon a biosphere on the surface of a planet warmed by a star. When stellar formation has ceased and the last star in the universe burns out, planets will go dark (unless artificially lighted by advanced civilizations) and any remaining biospheres will cease to function. Life and civilization as we know it will be over. I have called this the End-Stelliferous Mass Extinction Event.
It will be a long time before the end of the Stelliferous Era — in human terms, unimaginably long. Even in scientific terms, the time scale of cosmology is long. It would make sense for us, then, to break up the Stelliferous Era into smaller periodizations that can be dealt with each in turn. Adams and Laughlin constructed a logarithmic time scale based on powers of ten, calling each of these powers of ten a “cosmological decade.” The Stelliferous Era comprises cosmological decades 7 to 15, so we can further break it down into three divisions of three cosmological decades each: cosmological decades 7-9 will be the Early Stelliferous, cosmological decades 10-12 will be the Middle Stelliferous, and cosmological decades 13-15 will be the Late Stelliferous.
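The arithmetic of cosmological decades is simple enough to sketch in a few lines of code. The function names below are my own; the scheme itself (decade n spanning 10^n to 10^(n+1) years after the big bang, and the threefold subdivision of the Stelliferous Era) is as described above.

```python
import math

# Adams and Laughlin's "cosmological decade": decade n spans
# 10**n to 10**(n+1) years after the big bang.
def cosmological_decade(years_after_big_bang: float) -> int:
    return math.floor(math.log10(years_after_big_bang))

# The threefold subdivision of the Stelliferous Era (decades 7-15)
# proposed above:
def stelliferous_division(decade: int) -> str:
    if 7 <= decade <= 9:
        return "Early Stelliferous"
    if 10 <= decade <= 12:
        return "Middle Stelliferous"
    if 13 <= decade <= 15:
        return "Late Stelliferous"
    return "outside the Stelliferous Era"

# The present, roughly 1.38e10 years after the big bang, falls in the
# tenth cosmological decade, i.e., the Middle Stelliferous:
print(cosmological_decade(1.38e10))   # prints 10
print(stelliferous_division(10))      # prints Middle Stelliferous
```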
The Early Stelliferous
Another Big History periodization that has been employed, other than that of Adams and Laughlin, is Eric Chaisson’s tripartite distinction between the Energy Era, the Matter Era, and the Life Era. The Primordial Era and the Energy Era coincide until the transition point (or, if you like, the phase transition) when the energies released by the big bang coalesce into matter. This phase transition is the transition from the Energy Era to the Matter Era in Chaisson; for Adams and Laughlin this transition is wholly contained within the Primordial Era and may be considered one of the major events of the Primordial Era. This phase transition occurs at about the fifth cosmological decade, so that there is one cosmological decade of matter prior to that matter forming stars.
At the beginning of the Early Stelliferous the first stars coalesce from matter, which has now cooled to the point that this becomes possible for the first time in cosmological history. The only matter available at this time to form stars is hydrogen and helium produced by the big bang. The first generation of stars to light up after the big bang are called Population III stars, and their existence can only be hypothesized because no certain observations exist of Population III stars. The oldest known star, HD 140283, sometimes called the Methuselah Star, is believed to be a Population II star, and is said to be metal poor, or of low metallicity. To an astrophysicist, any element other than hydrogen or helium is a “metal,” and the spectra of stars are examined for the “metals” present to determine their order of appearance in galactic ecology.
The youngest stars, like our sun and other stars in the spiral arms of the Milky Way, are Population I stars and are rich in metals. The whole history of the universe up to the present is necessary to produce the high metallicity younger stars, and these younger stars form from dust and gas that coalesce into a protoplanetary disk surrounding the young star of similarly high metal content. We can think of the stages of Population III, Population II, and Population I stars as the evolutionary stages of galactic ecology that have produced structures of greater complexity. Repeated cycles of stellar nucleosynthesis, catastrophic supernovae, and new star formation from these remnants have produced the later, younger stars of high metallicity.
It is the high relative proportion of heavier elements that makes possible the formation of small rocky planets in the habitable zone of a stable star. The minerals that form these rocky planets are the result of what Robert Hazen calls mineralogical evolution, which we may consider to be an extension of galactic ecology on a smaller scale. These planets, in turn, have heavier elements distributed throughout their crust, which, in the case of Earth, human civilization has dug out of the crust and put to work manufacturing the implements of industrial-technological civilization. If Population II and Population III stars had planets (this is an open area of research in planet formation and without a definite answer as yet), it is conceivable that these planets might have harbored life, but the life on such worlds would not have had access to heavier elements, so any civilization that resulted would have had a difficult time of it creating an industrial or electrical technology.
The Middle Stelliferous
In the Middle Stelliferous, the processes of galactic ecology that produced and which now sustain the Stelliferous Era have come to maturity. There is a wide range of galaxies consisting of a wide range of stars, running the gamut of the Hertzsprung–Russell diagram. It is a time of both galactic and stellar profusion, diversity, and fecundity. But even as the processes of galactic ecology reach their maturity, they begin to reveal the dissipation and dissolution that will characterize the Late Stelliferous Era and even the Degenerate Era to follow.
The Milky Way, which is a very old galaxy, carries with it the traces of the smaller galaxies that it has already absorbed in its earlier history — as, for example, the Helmi Stream — and for the residents of the Milky Way and Andromeda galaxies one of the most spectacular events of the Middle Stelliferous Era will be the merging of these two galaxies in a slow-motion collision taking place over billions of years, throwing some star systems entirely clear of the newly merged galaxies, and eventually resulting in the merging of the supermassive black holes that anchor the centers of each of these elegant spiral galaxies. The result is likely to be an elliptical galaxy not clearly resembling either predecessor (and sometimes called Milkomeda).
Eventually the Triangulum galaxy — the other large spiral galaxy in the local group — will also be absorbed into this swollen mass of stars. In terms of the cosmological time scales here under consideration, all of this happens rather quickly, as does also the isolation of each of these merged local groups, which persist as lone galaxies, each suspended like an island universe with no other galaxies available to observational cosmology. The vast majority of the history of the universe will take place after these events have transpired and are left in the long distant past — hopefully not forgotten, but possibly lost and unrecoverable.
The Tenth Decade
The tenth cosmological decade, comprising the years from 10^10 to 10^11 (10,000,000,000 to 100,000,000,000 years, or 10 Ga to 100 Ga) since the big bang, is especially interesting to us, like the Stelliferous Era on the whole, because this is where we find ourselves. Because of this we are subject to observation selection effects, and we must be particularly on guard for cognitive biases that grow out of the observational selection effects we experience. Just as it seems, when we look out into the universe, that we are in the center of everything, and all the galaxies are racing away from us as the universe expands, so too it seems that we are situated in the center of time, with a vast eternity preceding us and a vast eternity following us.
Almost everything that seems of interest to us in the cosmos occurs within the tenth decade. It is arguable (though not definitive) that no advanced intelligence or technological civilization could have evolved prior to the tenth decade. This is in part due to the need to synthesize the heavier elements — we could not have developed nuclear technology had it not been for naturally occurring uranium, and it is radioactive decay of uranium in Earth’s crust that contributes significantly to the temperature of Earth’s core and hence to Earth being a geologically active planet. By the end of the tenth decade, all galaxies will have become isolated as “island universes” (once upon a time the cosmological model for our universe today) and the “end of cosmology” (as Krauss and Scherrer put it) will be upon us, because observational cosmology will no longer be able to study the large scale structures of the universe.
The tenth decade, thus, is not only when it becomes possible for an intelligent species to evolve, to establish an industrial-technological civilization on the basis of heavier elements built up through nucleosynthesis and supernova explosions, and to employ these resources to launch itself as a spacefaring civilization, but also this is the only period in the history of the universe when such a spacefaring civilization can gain a true foothold in the cosmos to establish an intergalactic civilization. After local galactic groups coalesce into enormous single galaxies, and all other similarly coalesced galaxies have passed beyond the cosmological horizon and can no longer be observed, an intergalactic civilization is no longer possible on principles of science and technology as we understand them today.
It is sometimes said that, for astronomers, galaxies are the basic building blocks of the universe. The uniqueness of the tenth decade, then, can be expressed as its being the only time in cosmological history during which a spacefaring civilization can emerge and then go on to assimilate and unify the basic building blocks of the universe. It may well happen that, by the time of million-year-old supercivilizations and even billion-year-old supercivilizations, sciences and technologies will have been developed far beyond anything we can understand today, and some form of intergalactic relationship may continue after the end of observational cosmology, but, if this is the case, the continued intergalactic organization must be on principles not known to us today.
The Late Stelliferous
In the Late Stelliferous Era, after the end of cosmology, each isolated local galactic group, now merged into a single giant assemblage of stars, will continue its processes of star formation and evolution, ever so slowly using up all the hydrogen produced in the big bang. The Late Stelliferous Era is the era of a universe that has passed “peak hydrogen” and can therefore only look forward to the running down of the processes of galactic ecology that have sustained the universe up to this time.
The end of cosmology will mean a changed structure of galactic ecology. Even if civilizations can find a way around their cosmological isolation through advanced technology, the processes of nature will still be bound by familiar laws of nature, which, being highly rigid, will not have changed appreciably even over billions of years of cosmological evolution. Where light cannot travel, matter cannot travel either, and so any tenuous material connection between galactic groups will cease to play any role in galactic ecology.
The largest scale structures that we know of in the universe today — superclusters and filaments — will continue to expand and cool and to dissipate. We can imagine a bird’s eye view of the future universe (if only a bird could fly over the universe entire), with its large scale structures no longer in touch with one another but still constituting the structure, rarified by expansion, stretched by gravity, and subject to the evolutionary processes of the universe. This future universe (which we may have to stop calling the universe, as it has lost its unity) stands in relation to its current structure as the isolated and strung out continents of Earth today stand in relation to earlier continental structures (such as the last supercontinent, Pangaea), preceding the present disposition of continents (though keep in mind that there have been at least five supercontinent cycles since the formation of Earth and the initiation of its tectonic processes).
Near the end of the Stelliferous Era, there is no longer any free hydrogen to be gathered together by gravity into new suns. Star formation ceases. At this point, the fate of the brilliantly shining universe of stars and galaxies is sealed; the Stelliferous Era has arrived at functional extinction, i.e., the population of late Stelliferous Era stars continues to shine but is no longer viable. Galactic ecology has shut down. Once star formation ceases, it is only a matter of time before the last of the stars to form burn themselves out. Stars can be very large, very bright and short lived, or very small, scarcely a star at all, very dim, cool, and consequently very long lived. Red dwarf stars will continue to burn dimly long after all the main sequence stars like the sun have burned themselves out, but eventually even the dwarf stars, burning through their available fuel at a miserly rate, will burn out also.
The Post-Stelliferous Era
After the Stelliferous Era comes the Degenerate Era, with the two eras separated by what I have called the Post-Stelliferous Mass Extinction Event. What the prospects are for continued life and intelligence in the Degenerate Era is something that I have considered in Who will read the Encyclopedia Galactica? and Addendum on Degenerate Era civilization, inter alia.
Our enormous and isolated galaxy will not be immediately plunged into absolute darkness. Adams and Laughlin (referred to above) estimate that our galaxy may have about a hundred small stars shining — the result of the collision of two or more brown dwarfs. Brown dwarf stars, at this point in the history of the cosmos, contain what little hydrogen remains, since brown dwarf stars were not large enough to initiate fusion during the Stelliferous Era. However, if two or more brown dwarfs collide — a rare event, but in the vast stretches of time in the future of the universe rare events will happen eventually — they may form a new small star that will light up like a dim candle in a dark room. There is a certain melancholy grandeur in attempting to imagine a hundred or so dim stars strewn through the galaxy, providing a dim glow by which to view this strange and unfamiliar world.
Our ability even to outline the large scale structures — spatial, temporal, biological, technological, intellectual, etc. — of the extremely distant future is severely constrained by our paucity of knowledge. However, if terrestrial industrial-technological civilization successfully makes the transition to being a viable spacefaring civilization (what I might call extraterrestrial-spacefaring civilization) our scientific knowledge of the universe is likely to experience an exponential inflection point surpassing the scientific revolution of the early modern period.
An exponential improvement in scientific knowledge (supported on an industrial-technological base broader than the surface of a single planet) will help to bring the extremely distant future into better focus and will give to our existential risk mitigation efforts both the knowledge that such efforts require and the technological capability needed to ensure the ongoing extrapolation of complexity driven by intelligent, conscious, and purposeful intervention in the world. And if not us, if not terrestrial civilization, then some other civilization will take up the mantle, and the far future will belong to them.
. . . . .
. . . . .
. . . . .
. . . . .
12 December 2014
An Exercise in Techno-Philosophy
Quite some time ago in Fear of the Future I employed the phrase “the technological frontier,” but I did not follow up on this idea in a systematic way. In the popular mind, the high-technology futurism of the technological singularity has largely replaced the futurism of rocketships and jetpacks, so the idea of a technological frontier has particular resonance for us today, as technology seems to dominate our lives to an increasing degree, and this trend may only accelerate in the future. If our lives are shaped by technology today, how much more profoundly will they be shaped by technology in ten, twenty, fifty, or a hundred years? We would seem to be poised like pioneers on a technological frontier.
How are we to understand the human condition in the age of the technological frontier? The human condition is no longer merely the human condition, but it is the human condition in the context of technology. This was not always the case. Let me try to explain.
While humanity emerged from nature and lived entirely within the context of nature, our long prehistory integrated into nature was occluded and utterly lost after the emergence of civilization, and the origins of civilization were attended by the formulation of etiological mythologies that attributed supernatural causes to the manifold natural causes that shape our lives. We continued to live at the mercy of nature, but posited ourselves as outside nature. This led to a strangely conflicted conception of nature and a fraught relationship with the world from which we emerged.
The fraught human relationship to nature has been characterized by E. O. Wilson in terms of biophilia; the similarly fraught human relationship to technology might be similarly characterized in terms of technophilia, which I posited in The Technophilia Hypothesis (and further elaborated in Technophilia and Evolutionary Psychology). And as with biophilia and biophobia, so, too, while there is technophilia, there is also technophobia.
Today we have so transformed our world that the context of our lives is the technological world; we have substituted technology for nature as the framework within which we conduct the ordinary business of life. And whereas we once asked about humanity’s place in nature, we now ask, or ought to ask, what humanity’s place is or ought to be in this technological world with which we have surrounded ourselves. We ask these questions out of need, existential need, as there is both pessimism and optimism about a human future increasingly dominated by the technology we have created.
I attach considerable importance to the fact that we have literally surrounded ourselves with our technology. Technology began as isolated devices that appeared within the context of nature. A spear, a needle, a comb, or an arrow was set against the background of omnipresent nature. And the relationship of these artifacts to their sources in nature was transparent: the spear was made of wood, the needle and the comb of bone, the arrowhead of flint. Technological artifacts, i.e., individual instances of technology, were interpolations into the natural world. Over a period of more than ten thousand years, however, technological artifacts accumulated until they displaced nature and came to constitute the background against which nature is seen. Nature then became an interpolation within the context of the technological innovations of civilizations. We have gardens and parks and zoos that interpolate plants and animals into the built environment, which is the environment created by technology.
With technology as the environment and the background of our lives, and not merely constituted by objects within our lives, technology now has an ontological dimension — it has its own laws, its own features, its own properties — and it has a frontier. We ourselves are objects within a technological world (hence the feeling of anomie from being cogs within an enormous machine); we populate an environment defined and constituted by technology, and as such bear some relationship to the ontology of technology as well as to its frontier. Technology conceived in this way, as a totality, suggests ways of thinking about technology parallel to our conceptions of humanity and civilization, inter alia.
One way to think about the technological frontier is as the human exploration of the technium. The idea of the technium accords well with the conception of the technological world as the context of human life that I described above. The “technium” is a term introduced by Kevin Kelly to denote the totality of technology. Here is the passage in which Kelly introduces the term:
“I dislike inventing new words that no one else uses, but in this case all known alternatives fail to convey the required scope. So I’ve somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the technium. The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. It includes intangibles like software, law, and philosophical concepts. And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections. For the rest of this book I will use the term technium where others might use technology as a plural, and to mean a whole system (as in “technology accelerates”). I reserve the term technology to mean a specific technology, such as radar or plastic polymers.”
Kevin Kelly, What Technology Wants
The concept of the technium can be extended in parallel to the schema I have applied to civilization in Eo-, Eso-, Exo-, Astro-, so that we have the concepts of the eotechnium, the esotechnium, the exotechnium, and the astrotechnium. (Certainly no one is going to employ this battery of unlovely terms I have coined — neither the words nor the concepts are immediately accessible — but I keep these ideas in the back of my mind and hope to extend them further, perhaps in a formal context in which symbols can be substituted for awkward words and the ideas can be presented more clearly.)
● Eotechnium: the origins of technology, wherever and whenever it occurs, terrestrial or otherwise
● Esotechnium: our terrestrial technology
● Exotechnium: the extraterrestrial technium, exclusive of the terrestrial technium
● Astrotechnium: the technium in its totality throughout the universe; the terrestrial and extraterrestrial technium taken together in their cosmological context
I previously formulated these permutations of technium in Civilization and the Technium. In that post I wrote:
The esotechnium corresponds to what has been called the technosphere, mentioned above. I have pointed out that the concept of the technosphere (like other -spheres such as the hydrosphere and the sociosphere, etc.) is essentially Ptolemaic in conception, i.e., geocentric, and that to make the transition to fully Copernican conceptions of science and the world we need to transcend our Ptolemaic ideas and begin to employ Copernican ideas. Thus to recognize that the technosphere corresponds to the esotechnium constitutes conceptual progress, because on this basis we can immediately posit the exotechnium, and beyond both the esotechnium and the exotechnium we can posit the astrotechnium.
We can already glimpse the astrotechnium, in so far as human technological artifacts have already reconnoitered the solar system and, in the case of the Voyager space probes, have left the solar system and passed into interstellar space. The technium then, i.e., from the eotechnium originating on Earth, now extends into space, and we can conceive the whole of this terrestrial technology together with our extraterrestrial technology as the astrotechnium.
It is a larger question yet whether there are other technological civilizations in the universe — it is the remit of SETI to discover if this is the case — and, if there are, there is an astrotechnium much greater than that we have created by sending our probes through our solar system. A SETI detection of an extraterrestrial signal would mean that the technology of some other species had linked up with our technology, and by their transmission and our reception an interstellar astrotechnium comes into being.
The astrotechnium is both a technological frontier and, since it extends throughout extraterrestrial space, a physical frontier. The exploration of the astrotechnium would be at once an exploration of the technological frontier and an exploration of an actual physical frontier. This is surely the frontier in every sense of the term. But there are other senses as well.
We can go my taxonomy of the technium one better and also include the endotechnium, where the prefix “endo-” means “inside” or “interior.” The endotechnium is that familiar motif of contemporary thought of virtual reality becoming indistinguishable from the reality of nature. Virtual reality is immersion in the endotechnium.
I have noted (in An Idea for the Employment of “Friendly” AI) that one possible employment of friendly AI would be the on-demand production of virtual worlds for our entertainment (and possibly also our education). One would presumably instruct one’s AI interface (which already has all human artistic and intellectual accomplishments stored in its databanks) that one wishes to enter into a particular story. The AI generates the entire world virtually, and one employs one’s preferred interface to step into the world of the imagination. Why would one so immersed choose to emerge again?
One of the responses to the Fermi paradox is that any sufficiently advanced civilization that had developed to the point of being able to generate virtual reality of a quality comparable to ordinary experience would thereafter devote itself to the exploration of virtual worlds, turning inward rather than outward, forsaking the wider universe outside for the universe of the mind. In this sense, the technological frontier represented by virtual reality is the exploration of the human imagination (or, for some other species, the exploration of the alien imagination). This exploration was formerly carried out in literature and the arts, but we seem poised to enact this exploration in an unprecedented way.
There are, then, many senses of the technological frontier. Is there any common framework within which we can grasp the significance of these several frontiers? The most famous representative of the role of the frontier in history is of course Frederick Jackson Turner, for whom the Turner Thesis is named. At the end of his famous essay on the frontier in American life, Turner wrote:
“From the conditions of frontier life came intellectual traits of profound importance. The works of travelers along each frontier from colonial days onward describe certain common traits, and these traits have, while softening down, still persisted as survivals in the place of their origin, even when a higher social organization succeeded. The result is that to the frontier the American intellect owes its striking characteristics. That coarseness and strength combined with acuteness and inquisitiveness; that practical, inventive turn of mind, quick to find expedients; that masterful grasp of material things, lacking in the artistic but powerful to effect great ends; that restless, nervous energy; that dominant individualism, working for good and for evil, and withal that buoyancy and exuberance which comes with freedom — these are traits of the frontier, or traits called out elsewhere because of the existence of the frontier.”
Frederick Jackson Turner, “The Significance of the Frontier in American History,” which constitutes the first chapter of The Frontier in American History
Turner is not widely cited today, and his work has fallen into disfavor (especially targeted by the “New Western Historians”), but much that Turner observed about the frontier is not only true, but also more generally applicable beyond the American experience of the frontier. I think many readers will recognize in the attitudes of those today on the technological frontier the qualities that Turner described in the passage quoted above, attributing them specifically to the American frontier, which for Turner was “an area of free land, its continuous recession, and the advance of American settlement westward.”
The technological frontier, too, is an area of free space — the abstract space of technology — the continuous recession of this free space as frontier technologies migrate into the ordinary business of life even while new frontiers are opened, and the advance of pioneers into the technological frontier.
One of the attractions of a frontier is that it is distant from the centers of civilization, and in this sense represents an escape from the disciplined society of mature institutions. The frontier serves as a refuge; the most marginal elements of society naturally seek the margins of society, at the periphery, far from the centers of civilization. (When I wrote about the center and periphery of civilization in The Farther Reaches of Civilization I could just as well have expressed myself in terms of the frontier.)
In the past, the frontier was defined in terms of its (physical) distance from the centers of civilization, but the world of high technology being created today is a product of the most technologically advanced centers of civilization, so that the technological frontier is defined by its proximity to the centers of civilization, understood as the centers of innovation and production for industrial-technological civilization.
The technological frontier nevertheless exists on the periphery of many of the traditional symbols of high culture that were once definitive of civilizational centers; in this sense, the technological frontier may be defined as the far periphery of the traditional center of civilization. If we identify civilization with the relics of high culture — painting, sculpture, music, dance, and even philosophy, all understood in their high-brow sense (and everything that might have featured symbolically in a seventeenth century Vanitas painting) — we can see that the techno-philosophy of our time has little sympathy for these traditional markers of culture.
The frontier has been the antithesis of civilization — civilization’s other — and the further one penetrates the frontier, moving always away from civilization, the nearer one approaches the absolute other of civilization: wildness and wilderness. The technological frontier offers to the human sense of adventure a kind of wildness distinct from that of nature as well as from the intellectual adventure of traditional culture. Although the technological frontier is in one sense antithetical to post-apocalyptic visions of formerly civilized individuals transformed into noble savages (visions usually marked by technological rejectionism), there is also a sense in which the technological frontier is like the post-apocalyptic frontier in its radical rejection of bourgeois values.
If we take the idea of the technological frontier in the context of the STEM cycle, we would expect that the technological frontier would have parallels in science and engineering — a scientific frontier and an engineering frontier. In fact, the frontier of scientific knowledge has been a familiar motif since at least the middle of the twentieth century. With the profound disruptions of scientific knowledge represented by relativity and quantum theory, the center of scientific inquiry has been displaced into an unfamiliar periphery populated by strange and inexplicable phenomena of the kind that would have been dismissed as anomalies by classical physics.
The displacement of traditional values of civilization, and even of traditional conceptions of science, gives the technological frontier its frontier character even as it emerges within the centers of industrial-technological civilization. In The Interstellar Imperative I asserted that the central imperative of industrial-technological civilization is the propagation of the STEM cycle. It is at least arguable that the technological frontier is both a result and a cause of the ongoing STEM cycle, which experiences its most unexpected advances when its scientific, technological, and engineering innovations seem to be at their most marginal and peripheral. A civilization that places itself within its own frontier in this way is a frontier society par excellence.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
2 November 2014
The word “precarity” is quite recent, and does not appear in the Oxford English Dictionary, but has appeared in the titles of several books. The term mostly derives from left-leaning organized labor, and has come into use to describe the lives of workers in precarious circumstances. Wikipedia defines precarity as “a condition of existence without predictability or security, affecting material or psychological welfare.”
Dorothy Day, writing in The Catholic Worker (coming from a context of both Catholic monasticism and labor activism), May 1952 (“Poverty and Precarity”), cites a certain “saintly priest… from Martinique,” now known to be Léonce Crenier, who is quoted as saying:
“True poverty is rare… Nowadays communities are good, I am sure, but they are mistaken about poverty. They accept, admit on principle, poverty, but everything must be good and strong, buildings must be fireproof. Precarity is rejected everywhere, and precarity is an essential element of poverty. That has been forgotten. Here we want precarity in everything except the church.”
Crenier had so absorbed and accepted the ideal of monastic poverty, like the Franciscans and the Poor Clares (or their modern equivalents such as Simone Weil and Christopher McCandless), that he didn’t merely tolerate poverty, he embraced and celebrated poverty. Elsewhere Father Crenier wrote, “I noticed that real poverty, where one misses so many things, attracts singular graces amongst the monks, and in particular spiritual peace and joy.” Given the ideal of poverty and its salutary effect upon the spiritual life, Crenier not only celebrated poverty, but also the condition in which the impoverished live, and this is precarity.
Recent studies have retained this leftist interest in the existential precarity of the lives of marginalized workers, but the monastic interest in poverty for the sake of an enhanced spiritual life has fallen away, and only the misery of precarity remains. Not only has the spiritual virtue of poverty been abandoned as an ideal, but it has, in a sense, been turned on its head, as the spiritual focus of poverty turns from its cultivation to its eradication. In this tradition, the recent sociology of Pippa Norris and Ronald Inglehart is especially interesting, as they have bucked the contemporary trend and given a new argument for secularization, which was once in vogue but has been very much out of favor since the rise of Islamic militancy as a political force in global politics. (I have myself argued that secularization had been too readily and quickly abandoned, and discussed the problem of secularization in relation to the confirmation and disconfirmation of ideas in history.)
Pippa Norris and Ronald Inglehart are perhaps best known for their book Sacred and Secular: Religion and Politics Worldwide. Their paper, “Are high levels of existential security conducive to secularization? A response to our critics,” is available online. They make the case that, despite the apparent rise of fundamentalist religious belief in the past several decades, and the anomalous instance of the US, which is wealthy and highly religious, it is not wealth itself that is a predictor of secularization, but rather what they call existential security (which may be considered the economic aspect of ontological security).
While Norris and Inglehart do not use the term “precarity,” clearly their argument is that existential precarity pushes individuals and communities toward the comforts of religion in the face of a hostile and unforgiving world: “…the public’s demand for transcendent religion varies systematically with levels of vulnerabilities to societal and personal risks and threats.” This really isn’t a novel thesis, as Marx pointed out long ago that societies created ideal worlds of justice when justice was denied them in this world, implying that when conditions in this world improve, there would be no need for imagined worlds of perfect justice. Being comfortably well off in the real world means there is little need to imagine comforts in another world.
Speaking on a purely personal (and anecdotal) basis, Norris and Inglehart’s thesis rings true in my experience. I have relatives in Scandinavia and have visited the region many times. There, where secularization has gone the furthest and the greater proportion of the population enjoys a high level of existential security, you can quite literally see the difference in people’s faces. In the US, people are hard-driving and always seemingly on the edge; there is an underlying anxiety that I find very off-putting. But there is a good reason for this: people know that if they lose their jobs, they may well lose their homes and end up on the street. In Scandinavia, people look much more relaxed in their facial expressions, and they are not continually on the verge of flying into a rage. People are generally very confident about their lives and don’t worry much about the future.
One might think of the existential precarity of individuals as an ontogenic precarity, and this suggests the possibility of what might be called phylogenic precarity, or the existential precarity of social wholes. Fragile states exist in a condition of existential precarity. In such cases, there is a clear linkage between social precarity and individual precarity. In other cases, there may be no such linkage. It is possible for great individual precarity to coexist with social stability, and for social precarity to coexist with individual security. An example of the former is the contemporary US; an example of the latter would be some future society in which people are wealthy and comfortable but fail to see that their society is on the verge of collapse — like the Romans, say, in the second and third centuries AD.
The ultimate form of social precarity is the existential precarity of civilization. In some contexts it might be better to discuss the vulnerability and fragility of civilization in terms of existential precarity rather than existential risk or existential threat. I have previously observed that every existential risk is at the same time an existential opportunity, and vice versa (cf. Existential Risk and Existential Opportunity), so that the attempt to limit and contain existential risk may have the unintended consequence of limiting and containing existential opportunity. Thus the selfsame policies instituted for the sake of mitigating existential risk may contribute to the stagnation of civilization and therefore become a source of existential risk. The idea of existential precarity stands outside the dialectic of risk and opportunity, and therefore can provide us with an alternative formulation of existential risk.
How precarious is the life of civilized society? In some cases, social order seems to be balanced on a knife edge. During the 1981 Toxteth riots in Liverpool, which occurred in the wake of recession and high unemployment, as well as tension between the police and residents, Margaret Thatcher memorably said, “The veneer of civilization is very thin.” But this is misleading. Urban riots are not a sign of the weakness of civilization, but are intrinsic to civilization itself, in the same way that war is intrinsic to civilization: it is not possible to have an urban riot without large-scale urban communities, in the same way that it is not possible to have a war without the large-scale organizational resources of a state. Riots even occur in societies as stable as Sweden.
We can distinguish between the superficial precarity of a tense city that might erupt in riots at any time, which is the sort of precarity to which Margaret Thatcher referred, and a deeper, underlying precarity that does not manifest itself in terms of riots, overturned cars, and burned buildings, but in the sudden and inexplicable collapse of a social order that is not followed by immediate recovery. In considering the possibility of the existential precarity of civilization, what we really want to know is whether there is a social equivalent of the passenger pigeon’s population collapse and subsequent extinction.
In the 19th century, the passenger pigeon was the most common bird in North America. Following hunting and habitat loss, the species experienced a catastrophic population collapse between 1870 and 1890, finally going extinct in 1914. Less than fifty years before the species went extinct, there was no reason to suspect that the species was endangered, or even seriously reduced in numbers. When the end came, it came quickly; somehow the entire species reached a tipping point and could not recover from its collapse. Could this happen to our own species? Could this happen to our civilization? Despite our numbers and our apparent resilience, might we have some existential Achilles’ heel, some essential precarity, incorporated into the human condition of which we are blissfully unaware? And, if we do have some essential vulnerability, is there a way to address this?
I have argued elsewhere that civilization is becoming more robust over time, and I have not changed my mind about this, but neither is it the entire story about the existential security of civilization. In comparison to the precarity of the individual life, civilization is robust in the extreme. Civilization only betrays its existential precarity on time scales several orders of magnitude beyond the human experience of time, which at most encompasses several decades. As we ascend in temporal comprehensiveness, civilization steadily diminishes until it appears as a mere anomaly in the vast stretches of time contemplated in cosmology. At this scale, the longevity of civilization is no longer in question only because its brevity is all too obvious.
At the human time scale, civilization is as certain as the ground beneath our feet; at the cosmological time scale, civilization is as irrelevant as a mayfly. An appraisal of the existential precarity of civilization must take place at some time scale between the human and the cosmological. This brings me to an insight that I had after attending the 2014 IBHA conference last summer. On day 3 of the conference I attended a talk by futurist Joseph Voros that provided much food for thought, and while driving home I thought about a device he employed to discuss future forecasts, the future cone.
This was my first exposure to the future cone, and I immediately recognized the possibility for conceptual clarification that this offers in thinking about the future. If we depict the future as an extension of a timeline indefinitely, the line itself is the most likely future, while progressively larger cones concentric with the line, radiating out from the present, become increasingly less likely forecasts. Within the classes of forecasts defined by the spaces included within progressively larger cones, preferred or unwelcome futures can be identified by further subdivisions of the space defined by the cones. Voros offered an alliterative mnemonic device to differentiate the conceptual spaces defined by the future cone, from the center outward: the projected future, the probable future, the plausible future, the possible future, and the preposterous future.
When I was reflecting on this on the drive home, I realized that, in the short term, the projected future is almost always correct. We can say with a high degree of accuracy what tomorrow will be like. Yet in the long-term future, the projected future is almost always wrong. Here when I speak of the projected future I mean the human future. We can project future events in cosmology with a high degree of accuracy — for example, the coming collision of the Milky Way and Andromeda galaxies — but we cannot say anything of significance about what human civilization will be like at that time, or indeed whether there will be any human civilization or any successor institution to human civilization. Futurist forecasting, in other words, goes off the rails in the mid-term future, though exactly where it does so is difficult to say. And it is precisely in this mid-term future — somewhere between human time scales and cosmological time scales — that the existential precarity of civilization becomes clear. Sometime between tomorrow and four billion years from now, when a swollen sun swallows up Earth, human civilization will be subject to unpredictable and unprecedented selection pressures that will mean either the permanent ruination of that civilization, or its transformation into something utterly unexpected.
With this in mind, we can focus our conceptual exploration of the existential precarity, existential security, existential threat, and existential risk that bears upon civilization in the period of the mid-term future. How far can we narrow the historico-temporal window of the mid-term future of precarity? What are the selection pressures to which civilization will be subject during this period? What new selection pressures might emerge? Is it more important to focus on existential risk mitigation, or to focus on our civilization making the transition to a post-civilizational institution that will carry with it the memory of its human ancestry? These and many other related questions must assume the central place in our research.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .