The Blank Slate of Outer Space
3 February 2020
Monday
In A Theory of Justice, John Rawls presented several doctrines that have proven highly influential and widely discussed. Rawls holds that justice is fairness, and his method for arriving at fairness in social structures is that these structures should be formulated from behind a “veil of ignorance,” that ignorance being the individual’s ignorance of their place in the society in which they would live. The method Rawls formulated for this is a thought experiment called the “original position.” Here’s how he stated it:
“In justice as fairness the original position of equality corresponds to the state of nature in the traditional theory of the social contract. This original position is not, of course, thought of as an actual historical state of affairs, much less as a primitive condition of culture. It is understood as a purely hypothetical situation characterized so as to lead to a certain conception of justice. Among the essential features of this situation is that no one knows his place in society, his class position or social status, nor does any one know his fortune in the distribution of natural assets and abilities, his intelligence, strength, and the like. I shall even assume that the parties do not know their conceptions of the good or their special psychological propensities. The principles of justice are chosen behind a veil of ignorance. This ensures that no one is advantaged or disadvantaged in the choice of principles by the outcome of natural chance or the contingency of social circumstances. Since all are similarly situated and no one is able to design principles to favor his particular condition, the principles of justice are the result of a fair agreement or bargain.”
John Rawls, A Theory of Justice, Cambridge: Harvard University Press, 1999, p. 11
As noted in the above quote, this stands in the tradition of “state of nature” thought experiments familiar from Hobbes, Locke, Rousseau, and Pufendorf. State of nature or original position thought experiments posit a time before human societies existed — a blank slate upon which human beings were free to write as they pleased — so as to try to imagine how the existing societies with which we are familiar came into existence.
Many formulations of visions of the human future (including spacefaring futures) implicitly incorporate something like Rawls’ thought experiment without even realizing that this is what they are doing. Outer space as a place for human activity and achievement gives us the same opportunity as state of nature thought experiments to reflect on counterfactual human societies against a backdrop that has been putatively purged of contemporary social presuppositions, though, with outer space, applied symmetrically to the future instead of the past. In so far as we perceive outer space as a blank slate, and in so far as we attempt to project a future upon outer space as though it were a blank slate for future human spacefaring societies, then outer space takes on the properties (or, rather, the lack of properties) of a blank slate.
In the case of outer space, we try to imagine how the existing societies with which we are now familiar will pass out of existence and be replaced by some future society. Since the advent of spacefaring futurism (probably traceable to the Golden Age of Science Fiction), outer space has been a place where human beings could project what was best in themselves in order to cultivate a hopeful future. Early science fiction, such as that of Mary Shelley and H. G. Wells, was markedly dystopian, but the genre rapidly transformed into an optimistic and expansive vision of the future. With this minimal framework of spacefaring, optimism, and progress toward a better future, outer space became a blank slate for human hopes and dreams. For a time, utopianism reigned, but when the simple utopianism faded, it was replaced sometimes by dystopianism and, more interestingly to me, sometimes by the idea of outer space as a blank slate for a better tomorrow in spite of the problems we have today.
This latter conception is something I have personally encountered many times. The idea seems to be that human beings have made many mistakes on Earth so that outer space is a “second chance” for humanity and we consequently have a moral obligation to only establish human societies away from Earth when we are fully prepared to do a better job than we have done on Earth — hence the waiting gambit. Sometimes this position is presented such that humanity does not deserve to establish a presence beyond Earth, because we have made such a mess of things here.
Here the “original position” has been transposed into a “final position” for human society: the ultimate form that human society is to take, projected onto a spacefaring future in which outer space is the setting for a perfect society that has not been realized on Earth, and which cannot be realized on Earth because of our history. Thus the “final position” takes the form that it does because the “original position” was corrupt. This is very much in the spirit of Rousseau, who saw the original foundations of human society as corrupt, a corruption (not of the individual, who, according to Rousseau, is naturally good, but of social institutions) that has been passed down to all subsequent human societies. Thus outer space presents itself as a domain free from this inherited corruption, where a “final position” can be brought into being and humanity can, for the first time in its history, realize a righteous society.
It is interesting to note that this is in no sense a revolutionary view, as it imagines human beings confined to the purgatory that is Earth for an indefinite period of time until the sins of the youth of our species have been burned and purged away. Only when we have fully completed our penance — a gradual and excruciatingly incremental process — and become perfect, do we deserve to take the next step and inhabit the blank slate of outer space as newly innocent beings, a humanity that has made itself innocent, and thus worthy of the final position, through great moral effort.
A revolutionary view, in contradistinction to this moral incrementalism, would be that the final position is there before us, suspended in the air like Macbeth’s dagger, which we can reach out and grasp at any time. We can, in this view, attain the final position simply by taking a sequence of revolutionary steps that will transform us and our world because we have the boldness to take these steps. This revolutionary moralism vis-à-vis the final position would fit well with what I have called an early spacefaring breakout in The Spacefaring Inflection Point (and further elaborated in Bound in Shallows: Space Exploration and Institutional Drift).
An early and sudden inflection point in the development of spacefaring civilization could be both the revolutionary step required to put in place the final position and the advent of a new kind of civilization. This view seems more historiographically justified than the more prevalent incrementalist view, but I have only ever heard the incrementalist view — though, I ought to say, I know of no one who has formulated the incrementalist view in its full sweep. One usually gets only bits and pieces of the vision, which leaves its advocates with a certain plausible deniability in regard to the future final position implicit in their conception of human destiny in outer space.
It may sound like I am here advocating one conception of the final position over another, but I find both to be as profoundly mistaken as original position or state of nature thought experiments. While I think there is something to be learned from both thought experiments — the original position and the final position — I regard them as being as Rawls has characterized them, “…a purely hypothetical situation characterized so as to lead to a certain conception of justice.” Hypotheticals have a value, but that value is not absolute. Moreover, we know that hypotheticals, whether past or future, are not actual depictions of the past or accurate predictions of the future; they are scenarios that allow us to rehearse certain ideas as they might play out in practice.
In practice, there are no blank slates. Human societies don’t arise from nothing; they arise from prior societies, and these societies can be traced backward in time to long before human beings existed. The same is true of the brain and the intelligence that we use to shape our social order: both have a deep history in the biosphere, and both bear the lowly imprint of their origins. We can’t even say, as Nietzsche said, that all this is human, all-too-human, because it all precedes humanity. It is terrestrial, all-too-terrestrial.
There was no paradisaical state of nature to which we might dream of returning (if only we could deconstruct the injustices of human social institutions, which seems to have been Rousseau’s position), and there will be no final position of a perfectly just human society in the future, whether on Earth or in space. Both ideas are painfully naïve, and if we are ever to make real progress, and not the imaginary progress of utopias safely compartmentalized in the distant past or the distant future, we must disabuse ourselves of the idea that humanity is ever going to be anything other than human, all-too-human. Justice and morality did not reign in the past, and they are not going to reign in the future, whether on Earth or in space, world without end. Amen.
. . . . .
Note added Friday 28 February 2020: A perfect example of the blank slate of outer space can be found in the recently announced contest for a design for a city of a million persons on Mars, Mars City State Design Competition Announced. The text of the announcement includes this: “How, given a fresh start, can life on Mars be made better than life on Earth?”
. . . . .
I have previously written a number of blog posts on the idea of the blank slate, including:
● The Metaphysical Blank Slate: Positivism and Metaphysical Neutrality
● Addendum on the Metaphysical Blank Slate
. . . . .
A Macro-Implosion Weapon of Mass Destruction
14 April 2018
Saturday

The truncated icosahedron geometry employed for the symmetrical shockwave compression of fission implosion devices.
The simplest nuclear weapon is commonly known as a gun-type device, because it achieves critical mass by forcing together two sub-critical masses of uranium through a mechanism very much like a gun, which shoots a smaller wedge-shaped sub-critical mass into a larger sub-critical mass. This was the design of the “Little Boy” Hiroshima atomic bomb. The next level of complexity in nuclear weapon design was the implosion device, which relied upon conventional explosives to symmetrically compress a larger reflector/tamper sphere of U-238 onto a smaller sphere of Pu-239, with a polonium-beryllium “Urchin” initiator at the very center. The scientists of the Manhattan Project were so certain that the gun-type device would work that they didn’t even bother to test it, so the first nuclear device to be tested, and indeed the first nuclear explosion on the planet, was the Gadget device, designed as the proof of concept of the more sophisticated implosion design. It worked, and this design was used for the “Fat Man” atomic bomb dropped on Nagasaki.
These early nuclear weapon designs (conceptually familiar, though the engineering details are still very secret) are usually called First Generation nuclear weapons. The two-stage thermonuclear devices (fission primaries to trigger fusion secondaries, though most of the explosive yield still derives from fission) designed and tested a few years later, known as the Teller-Ulam design (and tested with the Ivy Mike device), were called Second Generation nuclear weapons. A number of ideas were floated for Third Generation nuclear weapon designs, and probably many were tested before the Nuclear Test Ban Treaty came into effect (and for all practical purposes brought an end to the rapid development of nuclear weapon design). One of the design concepts for Third Generation nuclear weapons was that of a shaped charge that could direct the energy of the explosion, rather than dissipating the blast in an omnidirectional explosion. There are also many concepts for Fourth Generation nuclear weapons, though these ideas are both on the cutting edge of technology and impossible to test legally, so it is likely that little will come of them as long as the current test ban regime remains in place.
According to Kosta Tsipis, “Nuclear weapons designed to maximize certain of their properties and to suppress others are considered to constitute a third generation in the sense that their design goes beyond the basic, even though sophisticated, design of modern thermonuclear weapons.” These are sometimes also referred to as “tailored effects.” Examples of tailored effects include enhanced radiation warheads (the “neutron bomb”), so-called “salted” nuclear weapons like the proposed cobalt bomb, electromagnetic pulse (EMP) weapons, and the X-ray laser. We will here be primarily interested in enhancing the directionality of a nuclear detonation, as in the case of the Casaba-Howitzer, shaped nuclear charges, and the X-ray laser.
What I would like to propose as a WMD is the use of multiple shaped nuclear charges directing their blast at a common center. This is like a macroscopic implementation of the implosion employed in first generation nuclear weapons. The implosion of the Gadget device and the Fat Man bomb employed 32 simultaneous high explosive charges, arranged according to the geometry of a truncated icosahedron, which resulted in a nicely symmetrical convergence on the central trigger without having to scale up to an unrealistic number of high explosive charges for an even more symmetrical implosion. (The actual engineering is a bit more complicated, as a combination of fast-burning and slow-burning explosives was needed for the optimal convergence of the implosion on the trigger.) This could be employed at a macroscopic scale by directional nuclear charges arranged around a central target. I call this a macro-implosion device. In a “conventional” nuclear strike, the explosive force is dissipated outward from ground zero. With a macro-implosion device, the explosive force would be focused inward toward ground zero, which would experience a sharply higher blast pressure than elsewhere as a result of the constructive interference of multiple converging shockwaves.
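To make the geometry concrete, here is a minimal sketch in Python (purely standard solid geometry, nothing drawn from weapons engineering) showing where the figure of 32 comes from: a truncated icosahedron has 12 pentagonal and 20 hexagonal faces, and the outward directions of those faces can be generated from the vertices and triangular faces of an ordinary icosahedron.

```python
import itertools
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# The 12 vertices of an icosahedron: cyclic permutations of (0, ±1, ±phi).
# These vertex directions are also the centers of the truncated
# icosahedron's 12 pentagonal faces.
verts = []
for a in (-1.0, 1.0):
    for b in (-PHI, PHI):
        verts += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]
pentagon_dirs = [normalize(v) for v in verts]

# The centroids of the icosahedron's 20 triangular faces (triples of
# vertices all separated by the edge length, 2.0, in these coordinates)
# give the directions of the truncated icosahedron's 20 hexagonal faces.
hexagon_dirs = []
for u, v, w in itertools.combinations(verts, 3):
    if all(abs(math.dist(p, q) - 2.0) < 1e-9
           for p, q in ((u, v), (u, w), (v, w))):
        centroid = tuple(sum(c) / 3 for c in zip(u, v, w))
        hexagon_dirs.append(normalize(centroid))

# 12 + 20 = 32 directions, matching the 32 explosive lenses of the design.
print(len(pentagon_dirs), len(hexagon_dirs))  # -> 12 20
```

Aiming each charge inward along the negative of its face direction gives the symmetrical convergence on the center described above.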

A partially assembled implosion device of a first generation nuclear weapon.
The reader may immediately think of the Casaba-Howitzer as a similar idea, but what I am suggesting is a bit different. You can read a lot about the Casaba-Howitzer at The Nuclear Spear: Casaba Howitzer, which is contextualized in even more information on Winchell Chung’s Atomic Rockets site. If you were to surround a target with multiple Casaba-Howitzers and fire at a common center at the same time you would get something like the effect I am suggesting, but this would require far more infrastructure. What I am suggesting could be assembled as a deliverable weapons system engineered as an integrated package.

A cruise missile would be a good way to deliver a macro-implosion device to its target.
There are already weapons designs that release multiple bomblets near a target, with each individual bomblet precision targeted (the CBU-103 Combined Effects Munition, more commonly known as a cluster bomb). This could be scaled up in a cruise missile package, so that a cruise missile approaching its target could open up and release 12 to 16 miniaturized short-range cruise missiles, which could then, by means of GPS or similar precision location technology, arrange themselves around the target in a hemisphere and simultaneously detonate their directed charges toward ground zero. Both precision timing and precision location would be necessary to optimize shockwave convergence, but with technologies like atomic clocks and dual frequency GPS (and quantum positioning in the future) such performance is possible.
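To give a rough sense of the timing problem, here is a minimal sketch in Python with hypothetical numbers (a single constant effective shock speed is assumed, which is a deliberate simplification; real blast fronts decelerate as they expand). It computes the per-charge firing delays needed so that all blast waves reach ground zero at the same instant.

```python
import math

def firing_delays(charge_positions, ground_zero, shock_speed=3000.0):
    """Return per-charge firing delays (seconds) so that all blast
    waves arrive at ground_zero at the same instant.

    Assumes every blast wave expands at one constant effective speed
    (m/s) -- a simplification, since real shock fronts decelerate.
    """
    distances = [math.dist(p, ground_zero) for p in charge_positions]
    t_arrival = max(distances) / shock_speed  # farthest charge fires at t=0
    return [t_arrival - d / shock_speed for d in distances]

# Hypothetical deployment: six charges in a rough hemisphere,
# coordinates in meters relative to ground zero.
charges = [
    (500.0, 0.0, 50.0), (0.0, 500.0, 50.0), (-450.0, 100.0, 60.0),
    (100.0, -480.0, 40.0), (300.0, 300.0, 200.0), (-300.0, -300.0, 250.0),
]
for i, dt in enumerate(firing_delays(charges, (0.0, 0.0, 0.0))):
    print(f"charge {i}: fire at t = +{dt * 1e3:.2f} ms")
```

Even on these generous assumptions the required offsets are of millisecond order, and errors in timing or position translate directly into misalignment of the converging fronts, hence the need for the precision technologies mentioned above.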
A similar effect could be obtained, albeit a bit more slowly but also more quietly and subtly, with the use of drones. A dozen or so drones could be released either from the air or launched from the ground, arrange themselves around the target, and then detonate simultaneously. Where it would be easier to approach a target with a small truck or even an ordinary delivery van (perhaps disguised as some local business) than with a cruise missile, which could set off air defense warnings, this would be the preferred method of deployment, although the drones would have to be relatively large, because each would have to carry a miniaturized nuclear weapon along with precision timing and precision location devices. There are a few commercially available drones today that can lift 20 kg, which is probably just about the lower limit of a miniaturized package such as I have described.
The most elegant deployment of a macro-implosion device would be against a hardened target in exoatmospheric space. Currently there isn’t anything flying that is large enough or hardened enough to merit being the target of such a device, but in a future war in space macro-implosion could be deployed against a hard target with a full spherical implosion converging on the target. For ground-based targets, a hemisphere with the target at the center would be the preferred deployment.
In the past, a nation-state pursuing a counter-force strategy, i.e., a nuclear strategy based on eliminating the enemy’s nuclear forces, hence the targeting of nuclear missiles, had to employ very large and very destructive bombs because nuclear missile silos were hardened to survive all but a near miss with a nuclear weapon. Now the age of land-based ICBMs is over for the most advanced industrialized nation-states, and there is no longer any reason to build silos for land-based missiles, therefore no reason to pursue this particular kind of counter-force strategy. SLBMs and ALCMs are now sufficiently sophisticated that they are more accurate than the most accurate land-based ICBMs of the past, and they are far more difficult to find and to destroy because they are small and mobile and can be hidden.
However, hardened, high-value targets like the missile silos of the past would be precisely the kind of target one would employ a macro-implosion device to destroy. And while ICBM silos are no longer relevant, there are plenty of hardened, high-value targets out there. A decapitation strike against a leadership target where the location of the bunker is known (as in the case of Cheyenne Mountain Complex or Kosvinsky Kamen) is such an example.
This is, of course, what “bunker buster” bombs like the B61 were designed to do. However, earth penetrating bunker buster bombs, while less indiscriminate than above ground bursts, are still nuclear explosions in the ground that release their energy in an omnidirectional burst (or perhaps along an axis). The advantage of a macro-implosion device would be that the focused blast pressures would collapse any weak spots in a target area, and, when you’re talking about a subterranean bunker, even an armored door would constitute a weak spot.
I haven’t seen any discussion anywhere of a device such as I have described above, though I have no doubt that the idea has been studied already.
. . . . .
What Happens Next in the Fable of the Sparrows
24 July 2017
Monday
In my Centauri Dreams post Where Do We Come From? What Are We? Where Are We Going? I noted that it has become a contemporary commonplace that the emergence of superintelligent artificial intelligence represents the greatest existential risk of our time and the near future. I do not share this view, but I understand why it is common. Testimony to the view of superintelligence as an existential risk is the book Superintelligence by Nick Bostrom, who has been instrumental in the exposition both of existential risk and of superintelligence.
Bostrom prefaces his book on superintelligence with a fable, “The Unfinished Fable of the Sparrows.” In the fable, a flock of sparrows decides that they would benefit from having an owl to help them. One member of the flock, Scronkfinkle, objects, saying, “Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?” The other sparrows disregard the warning, on the premise that they will first obtain an owlet or an owl egg and only then concern themselves with the control of the owl. As the other sparrows leave to find an owl, the fable ends:
“Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found. It is not known how the story ends, but the author dedicates this book to Scronkfinkle and his followers.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford, 2016
Bostrom leaves the fable unfinished; I will provide one account of what happens next.
. . . . .
The few sparrows who remained behind, despite their difficulties, settled on the plan that the best way to approach owl taming and domestication was by not allowing the owl to understand that he is an owl. They would raise any owl obtained by the sparrows to maturity as a sparrow, so that the owl would believe itself to be a sparrow, and so would naturally identify with the flock of sparrows, would desire to use its greater strength to build better nests for the sparrows, would want to help with the care of both young and old sparrows, and would advise the sparrows even while protecting them from the cat. “This owl will be as sparrow-like as an owl can possibly be,” they asserted, and set about formulating a detailed plan to raise the owl as one of their own.
When the other sparrows returned with the enormous egg of a tawny owl, many times the size of a sparrow egg, the owl tamers were confident in their plan, and the returning sparrows with their owl egg rejoiced to know that the most advanced owl researchers had settled upon a plan that they were sure would work to the benefit of all sparrows. Several sparrows sat on the egg at the same time in order to incubate it evenly, and once the young owlet broke out of its shell, it immediately imprinted on its sparrow mothers, who brought it seeds and small insects to eat. This was a challenge, as the large owlet ate much more than several sparrow chicks, and many sparrows had to be tasked with feeding their owlet.
The owlet grew, though it grew slowly, and certainly was not the most impressive specimen of a tawny owl, fed as it was on small seeds and small insects that were scarcely enough to satisfy its hunger. As the owlet grew, all the sparrows, overseen by the owl researchers, sought to teach the owl to be a good sparrow. Wanting to please his sparrow parents, the owlet tried to chirp cheerfully like a sparrow, to dust bathe with the other sparrows, and to hop around on the ground looking for seeds and insects to eat.
The plan appeared to exceed all expectations, and the owlet counted himself one of the flock of sparrows, never questioning his place among the sparrows, and already beginning to use his growing strength to aid his “fellow” sparrows. Until one day. The sparrows were together in a large flock looking for seeds when an enormous adult tawny owl suddenly descended upon them. The sparrows panicked and scattered, all of them flying off in different directions. Except for the owlet, for he, too, was a tawny owl, though he did not know it. He stood his ground as the great, magnificent tawny owl settled down, folded his feathers smoothly and seamlessly to his body, and looked quizzically at the little tawny owlet, who stood alone where moments before there had been hundreds of sparrows.
“And what is this?” asked the large tawny owl, “An owl living with sparrows?” And then he gave a large, piercing hoot of the kind that tawny owls use as their call. The little owlet, a bit frightened but still standing his ground, replied with a subdued, “Chirp, chirp.” The large owl tilted his head to one side, perplexed with the little fellow, and also a bit put-out that one of his kind should behave in such a manner and be living with sparrows.
The large owl said to the little owlet, “I will show you your true nature,” so he picked up the owlet carefully but firmly in his powerful beak and flew the little owlet to a branch that hung low over a still pond. There he set the owlet down on the branch, and indicated for him to look down into the water. The still, smooth surface of the pond reflected the perfect likeness of the two tawny owls, one large, one small, so that as both looked down into the water they saw themselves, and for the first time the little owlet saw that he was an owl, and that he was not a sparrow. “You see now that you are like me,” said the large owl to the owlet, “Now be like me!”
“Now,” said the large owl, “I will show you how an owl lives.” He took the owlet to his nest in the hollow of a tree as the sun was setting, and as the little owl flew behind the big owl he saw how beautiful the forest was in the low light of dusk. He perched at the edge of the hollow, and the large owl said, “Wait here,” then dived down into the deepening darkness below. The little owlet realized that even in the dim light he could see the large owl swoop down and fly purposefully, but to some purpose the owlet did not yet understand.
Soon the large owl returned, and he held in his claws a freshly killed bird, about the size of a sparrow (he had spared the owlet the agony of beginning with a sparrow). The little owlet felt sick to his stomach. He said to the big owl, “I’m hungry and I would like some seeds and insects please.” The large owl looked at him disdainfully. He held the dead bird down with one talon and ripped the body open with his beak. “This is owl food!” he said to the owlet as he gulped down a chunk of fresh meat. The big owl tore off another chunk of meat and said to the owlet, “Open your beak!” The little owlet shook his head from side to side (finding that he could almost rotate his head all the way around when he did so) and tried to flatten himself against the wall of the tree behind him.
“No, I want to eat seeds,” said the little owlet. The large owl would have none of it, and he forced the chunk of fresh meat down the maw of the little owl, who gagged on the bloody feast (as all gag upon attempting to swallow an unwelcome truth) but eventually choked it down. Gagging and frightened, the little owlet slowly began to understand that he had now, for the first time in his life, encountered his true food, the food of owls, the only food that could nourish him and sustain him as an owl. For he had seen himself in the still water of the pond, and now knew himself to be an owl.
The little owlet attempted to hoot like a tawny owl, and though his first owl-utterance was a weak and sickly sort of hoot, it was the right kind of sound for an owl to make. The big owl looked down on him with growing satisfaction and said, “Today you are an owl. Now I will take you into the depths of the forest at night and we will hunt like owls and eat owl food.” While the little owl did not understand all that this meant, he nodded uncertainly and followed as the larger owl leapt into the darkness again.
What happens next in the Fable of the Sparrows has not been recorded, but one can conjecture that the owl researchers among the sparrows returned to their notes and their calculations, trying to understand where they had gone wrong, and attempting to form a new plan, now that their sparrow-like owl had been taken under the wing of a true owl.
. . . . .
Readers familiar with the work of Joseph Campbell will immediately recognize that the myth I have here made use of is the Indian myth of the tiger and the goats from Campbell’s “The Occult in Myth and Literature” in The Mythic Dimension: Selected Essays 1959-1987.
. . . . .
Our Terrestrial Heritage
27 February 2017
Monday
In my previous post, Do the clever animals have to die?, I considered the “ultimate concern” (to borrow a phrase from Paul Tillich) of existential risk mitigation: the survival of life and other emergent complexities beyond the habitability of its homeworld or home planetary system. While a planetary system could be inhabited for hundreds of millions of years in most cases, and possibly for billions or tens of billions of years (the latter in the case of red dwarf stars, as in the recently discovered planetary system at TRAPPIST-1, which appears to be a young star with a long history ahead of it), there are yet many events that could render a homeworld or an entire planetary system uninhabitable, or that could be sufficiently catastrophic that a civilization clustered in the vicinity of a single star would almost certainly be extirpated by them (e.g., a sufficiently large gamma ray burst, GRB, from outside our solar system, or a sufficiently large coronal mass ejection, CME, from within our solar system).
Because any civilization that endures for cosmologically significant periods of time must have established multiple independent centers of civilization, and will probably have survived its homeworld becoming uninhabitable, mature advanced civilizations may view this condition as definitive of a mature civilization. Having insured themselves against existential threats by establishing multiple independent centers of civilization, these advanced civilizations may not regard as a “peer” (i.e., as a fellow advanced civilization) any civilization that still remains tightly-coupled to its homeworld.
It nevertheless may be the case (if there are, or will be, multiple examples of advanced civilizations) that some civilizations choose to remain tightly-coupled to their homeworlds. We can posit this as the condition of a certain kind of civilization. In the question and answer segment following my 2015 talk, What kind of civilizations build starships? a member of the audience, Alex Sherwood, suggested, in contradistinction to the expansion hypothesis, a constancy hypothesis, according to which a civilization does not expand and does not contract, but rather remains constant; I would prefer to call this the equilibrium hypothesis. One way in which a civilization might exemplify the constancy hypothesis would be for it to remain tightly-coupled to its homeworld.
Some subset of homeworld-coupled civilizations will probably experience extinction due to this choice. Such a homeworld-coupled civilization might choose, instead of establishing multiple independent centers of civilization as existential risk mitigation, to establish de-extinction and backup measures that would allow civilization to be restored on its homeworld despite any realized existential risks. However, while this approach to civilizational longevity may ensure the existence of a civilization over the billions of years of the life of its parent star, if a civilization does not want the historical accident of the age of its parent star to determine its ongoing viability, then such a civilization must abandon its homeworld and eventually also its home planetary system.
A civilization might continue to exemplify the equilibrium hypothesis by maintaining the unity and distinctiveness of its civilization despite needing to pursue megastructure-scale projects in order to ensure its ongoing existential viability. The idea of constructing a Shkadov thruster to move a star was partly inspired by this particular conception of the equilibrium hypothesis, as a star might, by this method, be moved to another, younger star, and the homeworld transferred into the orbit of that younger star. In this way, the relationship to the parent star is de-coupled, but the relationship to homeworld remains exclusive. At yet another remove, an entire civilization might simply choose to pick up from its homeworld and transfer itself to another chosen world. (As an historical analogy, consider the ancient city of Knidos, which was founded on the Datça Peninsula, but as the city grew in size and wealth, the city fathers decided that they needed to start again, so they built themselves a new and grander city nearby, and moved the entire city to this new location.) This conception of the equilibrium hypothesis would de-couple a civilization from both parent star and homeworld, but could still maintain the civilization as a unique and distinctive whole, thus continuing that civilization in its equilibrium condition.
A civilization that establishes multiple independent centers of civilization (and thus, to some degree, exemplifies the expansion hypothesis) might still retain strong connections to its homeworld — only not the connection of dependency. Such civilizations fully independent of a homeworld might be said to be loosely-coupled to their homeworld, in contradistinction to civilizations tightly-coupled to their homeworld and exemplifying the equilibrium hypothesis. Expansionary civilizations might remain in close contact with a homeworld for as long as the homeworld was habitable, only to fully abandon it when the homeworld could no longer support life.
Eventually, as the climate changes and the continents move and the surface of Earth is entirely rearranged, as would be experienced by a billion-year-old civilization, almost all terrestrial cities and monuments will disappear, and even the familiar look of Earth will change until it eventually becomes unrecognizable. The heritage of terrestrial civilization might be preserved in part by moving entire monuments to other worlds, or to no world at all, but perhaps to a permanent artificial habitat that is not a planet. Terrestrial places might be recreated on other worlds (or, again, on no world at all) in a grand gesture of historical reconstruction.
There might be other surprising ways of preserving our terrestrial heritage, such as building projects that were never realized on Earth. For example, some future civilization might choose to build Étienne-Louis Boullée’s design for an enormous cenotaph commemorating Isaac Newton, or Antoni Gaudí’s unbuilt skyscraper, or indeed any number of countless projects conceived but never built. An entire city of unbuilt buildings could be constructed on other worlds, which would be new cities, cities never before built, but cities in the tradition of our terrestrial heritage, maintaining the connection to our homeworld even while looking to a future de-coupled from that homeworld.
A civilization that outlasts its homeworld could be said to be de-coupled from its homeworld, though the homeworld will always be the origin of the intelligent agent that is the progenitor of a civilization, and hence a touchstone and a point of reference — like a hometown that one has left in order to pursue a career in the wider world. One would expect historical reconstruction and reenactment in order to maintain our intimacy with the past, which is, at the same time, our intimacy with our homeworld, should we become de-coupled from Earth. If humanity goes on to expand into the universe, establishing multiple independent centers of civilization and making gestures of respect to our terrestrial past in the form of reconstruction, the eventual loss of the Earth to habitability may not come as such a devastating blow, since some trace of Earth will have been preserved.
When the uninhabitability of the Earth does become a definite prospect, and should civilization endure up to that time, that future civilization’s opportunities for historical preservation and conservation will be predicated upon the technological resources available at that time, and upon what conception of authenticity prevails in that future age. A civilization of sufficiently advanced technology might simply preserve its homeworld entire, as a kind of museum, moving it to wherever would be convenient in order to maintain it in some form in which it could be visited by antiquaries and eccentrics. Or such a future civilization might deem such preservation to be undesirable, and only certain artifacts would be removed before the planet entire was consumed by the sun as it expands into a red giant star. In an emergency abandonment of Earth, what could be evacuated would be limited, and the principles of selection therefore more rigorous — but also constrained by opportunity. In the event of emergency abandonment, there might also be the possibility of returning for salvage after the emergency had passed.
. . . . .

Antonio Sant’Elia’s La Città Nuova, or Frank Lloyd Wright’s Broadacre City, or even Le Corbusier’s Voisin plan for Paris might yet be built on other worlds.
. . . . .
Intra-Civilizational Macro-Historical Transitions
28 November 2015
Saturday

A future science of civilization will want to map out the macro-historical divisions of human history, but it needs evidence in order to do so.
As yet we have too little evidence of civilization to understand civilizational processes. This sounds like a mere platitude, but it is a platitude to which we can give content by pointing out the relative lack of content of our conception of civilization.
On a scale below that of macro-historical transitions (which latter I previously called macro-historical revolutions), we have many examples: many examples of the origins of civilizations, many examples of the ends of civilizations, and many examples of the transitions that occur within the development and evolution of civilization. In other words, we have a great deal of evidence when it comes to individual civilizations, but we have very little evidence — insufficient evidence to form a judgment — when it comes to civilization as such (what I previously, very early in the history of this blog, called The Phenomenon of Civilization).
On the scale of macro-historical change, we have only a single instance in the history of terrestrial civilization, i.e., only a single data point on which to base any theory about macro-historical intra-civilizational change, and that is the shift from agricultural civilization (agrarian-ecclesiastical civilization) to industrial civilization (industrial-technological civilization). Moreover, the transition from agricultural to industrial civilization is still continuing today, and is not yet complete, as in many parts of the world industrialization is marginal at best and subsistence agriculture is still the economic mainstay.
Prior to this there was a macro-scale transition with the advent of civilization itself — the transition from hunter-gatherer nomadism to agrarian-ecclesiastical civilization — but this was not an intra-civilizational change, i.e., this was not a fundamental change in the structure of civilization, but the origins of civilization itself. Thus we can say that we have had multiple macro-scale transitions in human history, but human history is much longer than the history of civilization. When civilization emerges within human history it is a game-changer, and we are forced to re-conceptualize human history in terms of civilization.
Parallel to agrarian-ecclesiastical civilization, but a little later in emergence and development, was pastoral-nomadic civilization, which proved to be the greatest challenge to face agrarian-ecclesiastical civilization until the advent of industrialization (cf. The Pastoralist Challenge to Agriculturalism). Pastoral-nomadic civilization seems to have emerged independently in central Asia shortly after the domestication of the horse (and then, again independently, in the Great Plains of North America when horses were re-introduced), probably among peoples practicing subsistence agriculture without having produced the kinds of civilization found in centers of civilization in the Old World — the Yellow River Valley, the Indus Valley, and Mesopotamia.
Pastoral-nomadic civilization, as it followed its developmental course, was not derived from any great civilization, so there was no intra-civilizational transition at its advent, and when it ultimately came to an end it did not end with a transition into a new kind of civilization, but was rather supplanted by agricultural civilization, which slowly encroached on the great grasslands that were necessary for the pasturage of the horses of pastoral-nomadic peoples. So while pastoral-nomadic civilization was a fundamentally different kind of civilization — as different from agricultural civilization as agricultural civilization is different from industrial civilization — the particular circumstances of the emergence and eventual failure of pastoral-nomadic civilization in human history did not yield additional macro-historical transitions that could have provided evidence for the study of intra-civilizational macro-historical change (though it certainly does provide evidence for the study of intra-civilizational conflict).
We would be right to be extremely skeptical of any predictions about the future transition of our civilization into some other form of civilization when we have so little information to go on. All of this is civilization beyond the prediction wall. The view from within a civilization (i.e., the view that we have of ourselves in our own civilization) places too much emphasis upon slight changes to basic civilizational structures. We see this most clearly in mass media publications that present every new fad as a “sea change” heralding a new age in the history of the world; of course, newspapers and magazines (and now their online equivalents) must adopt this shrill strategy in order to pay the bills, and no one employed at these publications necessarily needs to believe the hyperbole being sold to a gullible public. The most egregious futurism of the twentieth century was a product of precisely the same social mechanism, so we should not be surprised that it was as inaccurate as it was. (On media demand-driven futurism, cf. The Human Future in Space.)
. . . . .
Existential Risk and Identifiable Victims
27 May 2015
Wednesday
Is it possible for human beings to care about the fate of strangers? This is at once a profound philosophical question and an immediately practical question. The most famous response to this question is perhaps that of John Donne:
“No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.”
John Donne, Devotions upon Emergent Occasions, XVII. Nunc lento sonitu dicunt, morieris. Now, this bell tolling softly for another, says to me: Thou must die.
Emmanuel Levinas spoke of “the community of those with nothing in common,” in an attempt to get at the human concern for other human beings of whom we know little or nothing. More recently, there is this from Bill Gates:
“When I talk to friends about global health, I often run into a strange paradox. The idea of saving one person’s life is profound and thrilling. But I’ve found that when you talk about saving millions of lives — it sounds almost meaningless. The higher the number, the harder it is to wrap your head around.”
Bill Gates, opening paragraph of An AIDS Number That’s Almost Too Big to Believe
Gates presents this as a paradox, but in social science it is a well-known and well-studied cognitive bias known as the identifiable victim effect. One researcher who has studied this cognitive bias is Paul Slovic, whose work was discussed by Sam Harris in the following passage:
“…when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.”
“Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this ‘psychic numbing’ explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed ‘genocide neglect’ — our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering — represents one of the more perplexing and consequential failures of our moral intuition.”
“Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.”
Sam Harris, The Moral Landscape, Chapter 2
Skip down another paragraph and Harris adds this:
“The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention.”
While Harris has not hesitated to court controversy, and speaks the truth plainly enough as he sees it, by failing to place what he characterizes as norms of moral reasoning in an evolutionary context he presents us with a paradox (the above section of the book is subtitled “Moral Paradox”). Really, this kind of cognitive bias only appears paradoxical when compared to a relatively recent conception of morality liberated from parochial in-group concerns.
For our ancestors, focusing on a single individual whose face is known had a high survival value for a small nomadic band, whereas a broadly humanitarian concern for all human beings would have been disastrous in equal measure. Today, in the context of industrial-technological civilization we can afford to love humanity; if our ancestors had loved humanity rather than particular individuals they knew well, they likely would have gone extinct.
Our evolutionary past has ill prepared us for the perplexities of population ethics in which the lives of millions may rest on our decisions. On the other hand, our evolutionary past has well prepared us for small group dynamics in which we immediately recognize everyone in our in-group and with equal immediacy identify anyone who is not part of our in-group and who therefore belongs to an out-group. We continue to behave as though our decisions were confined to a small band of individuals known to us, and the ability of contemporary telecommunications to project particular individuals into our personal lives as though we knew them, as if they were part of our in-group, plays into this cognitive bias.
While the explicit formulation of the identifiable victim effect is recent, the principle has been well known for hundreds of years at least, and has been as compellingly described in historical literature as in recent social science, as, for example, in Adam Smith:
“Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”
Adam Smith, Theory of Moral Sentiments, Part III, chapter 3, paragraph 4
And immediately after Hume made his famous claim that, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them,” he illustrated the claim with an observation similar to Smith’s:
“Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.”
David Hume, A Treatise of Human Nature, Book II, Part III, section 3
Bertrand Russell has well described how the expression of this cognitive bias can take on the conceit of moral superiority in the context of romanticism:
“Cultivated people in eighteenth-century France greatly admired what they called la sensibilité, which meant a proneness to emotion, and more particularly to the emotion of sympathy. To be thoroughly satisfactory, the emotion must be direct and violent and quite uninformed by thought. The man of sensibility would be moved to tears by the sight of a single destitute peasant family, but would be cold to well-thought-out schemes for ameliorating the lot of peasants as a class. The poor were supposed to possess more virtue than the rich; the sage was thought of as a man who retires from the corruption of courts to enjoy the peaceful pleasures of an unambitious rural existence.”
Bertrand Russell, A History of Western Philosophy, Book Three, Part II, “From Rousseau to the Present Day,” Chapter XVIII, “The Romantic Movement”
Russell’s account of romanticism provides some of the missing rationalization whereby a cognitive bias clearly at variance with norms of moral reasoning is justified as being the “higher” moral ground. Harris seems to suggest that, as soon as this violation of moral reasoning is pointed out to us, we will change. But we don’t change, for the most part. Our rationalizations change, but our behavior rarely does. And indeed studies of cognitive bias have revealed that even when experimental subjects are informed of cognitive biases that should be obvious ex post facto, most will continue to defend choices that unambiguously reflect cognitive bias.
I have personally experienced the attitude described by Russell (despite the fact that I have not lived in eighteenth-century France) more times than I care to recall, though I find myself temperamentally on the side of those formulating well-thought-out schemes for the amelioration of the lot of the destitute as a class, rather than those moved to tears by the sight of a single destitute family. From these personal experiences of mine, anecdotal evidence suggests to me that if you attempt to live by the quasi-utilitarianism advocated by Russell and Harris, others will regard you as cold, unfeeling, and lacking in the milk of human kindness.
The cognitive bias challenge to presumptive norms of moral reasoning is also a profound challenge to existential risk mitigation, since existential risk mitigation deals in the largest numbers of human lives saved, but is a well-thought-out scheme for ameliorating the lot of human beings as a class, and may therefore have little emotional appeal compared to putting an individual’s face on a problem and then broadcasting that face repetitively.
We have all heard that the past is a foreign country, and that they do things differently there. (This line comes from the 1953 novel The Go-Between by L. P. Hartley.) We are the past of some future that has yet to occur, and we will in turn be a foreign country to that future. And, by the same token, the future is a foreign country, and they do things differently there. Can we care about these foreigners with their foreign ways? Can we do more than care about them, and actually change our behavior in the present in order to ensure an ongoing future, however foreign that future is from our parochial concerns?
In Bostrom’s paper “Existential Risk Prevention as Global Priority” (Global Policy, Volume 4, Issue 1, February 2013) the author gives a lower bound of 10^16 potential future lives saved by existential risk mitigation (though he also gives “a lower bound of 10^54 human-brain-emulation subjective life-years” as a possibility), but if the “collapse of compassion” is a function of the numbers involved, the higher the numbers we cite for individuals saved as a result of existential risk mitigation, the less will the average individual of today care.
Would it be possible to place an identifiable victim in the future? This is difficult, but we are all familiar with appeals to the world we leave to our children, and these are attempts to connect identifiable victims with actions that may prejudice the ability of human beings in the future to live lives of value commensurate with our own. It would be possible to construct some grand fiction, like Plato’s “noble lie,” in order to interest the mass of the public in existential risk mitigation, but this would not be successful unless it became some kind of quasi-religious belief exempted from falsification, the receptacle of our collective hopes. This does not seem very plausible (or sustainable) to me.
Are we left, then, to take the high road? To try to explain in painstaking (and off-putting) detail the violation of moral norms involved in our failure to adequately address existential risks, thereby putting our descendants in mortal danger? Certainly if an attempt to place an identifiable victim in the future is doomed to failure, we have no remaining option but the attempt at a moral intervention and relentless moral education that could transform the moral lives of humanity.
I do not think either of the above approaches to resolving the identifiable victim challenge to existential risk mitigation would be likely to be successful. I can put this more strongly yet: I think both approaches would almost certainly result in a backlash and would therefore be counter-productive to existential risk mitigation efforts. The only way forward that I can see is to present existential risk mitigation under the character of the adventure and exploration made possible by a spacefaring civilization that would, almost as an unintended consequence, secure the redundancy and autonomy of extraterrestrial centers of human civilization.
Human beings (at least as I know them) have a strong distaste for moral lectures and do not care to be told to do anything for their own good, but if you present them with the possibility of adventure and excitement that promises new experiences to every individual, and possibly even the prospect of the extraterrestrial equivalent of a buried treasure or a pot of gold at the end of the rainbow, you might enlist the selfishness and greed of individuals in a great cause on behalf of Earth and all its inhabitants, so that each individual is moved, as it were, by an invisible hand to promote an end which was no part of his intention.
. . . . .
. . . . .
Existential Risk: The Philosophy of Human Survival
1. Moral Imperatives Posed by Existential Risk
2. Existential Risk and Existential Uncertainty
3. Addendum on Existential Risk and Existential Uncertainty
4. Existential Risk and the Death Event
6. What is an existential philosophy?
7. An Alternative Formulation of Existential Risk
8. Existential Risk and Existential Opportunity
9. Conceptualization of Existential Risk
10. Existential Risk and Existential Viability
11. Existential Risk and the Developmental Conception of Civilization
12. Developing an Existential Perspective
13. Existential Risk and Identifiable Victims
. . . . .
. . . . .
. . . . .
. . . . .
A Brief History of the Stelliferous Era
30 January 2015
Friday
Introduction: Periodization in Cosmology
Recently Paul Gilster posted my Who will read the Encyclopedia Galactica? on his Centauri Dreams blog. In this post I employ the framework of Fred Adams and Greg Laughlin from their book The Five Ages of the Universe: Inside the Physics of Eternity, who distinguish the Primordial Era, before stars have formed, the Stelliferous Era, which is populated by stars, the Degenerate Era, when only the degenerate remains of stars are to be found, the Black Hole Era, when only black holes remain, and finally the Dark Era, when even black holes have evaporated. These major divisions of cosmological history allow us to partition the vast stretches of cosmological time, but they also invite us to subdivide each era into smaller increments (such is the historian’s passion for periodization).
The Stelliferous Era is the most important to us, because we find ourselves living in the Stelliferous Era, and moreover everything that we understand in terms of life and civilization is contingent upon a biosphere on the surface of a planet warmed by a star. When stellar formation has ceased and the last star in the universe burns out, planets will go dark (unless artificially lighted by advanced civilizations) and any remaining biospheres will cease to function. Life and civilization as we know it will be over. I have called this the End-Stelliferous Mass Extinction Event.
It will be a long time before the end of the Stelliferous Era — in human terms, unimaginably long. Even in scientific terms, the time scale of cosmology is long. It would make sense for us, then, to break up the Stelliferous Era into smaller periodizations that can be dealt with each in turn. Adams and Laughlin constructed a logarithmic time scale based on powers of ten, calling each of these powers of ten a “cosmological decade.” The Stelliferous Era comprises cosmological decades 7 to 15, so we can further break down the Stelliferous Era into three divisions of three cosmological decades each: cosmological decades 7-9 will be the Early Stelliferous, cosmological decades 10-12 will be the Middle Stelliferous, and cosmological decades 13-15 will be the Late Stelliferous.
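The arithmetic of cosmological decades is simple enough to state exactly; here is a minimal sketch of my own, with the decade boundaries and the threefold subdivision just proposed:

import math

def cosmological_decade(years_since_big_bang: float) -> int:
    # Cosmological decade n (Adams and Laughlin) spans 10^n to 10^(n+1) years.
    return math.floor(math.log10(years_since_big_bang))

def stelliferous_division(decade: int) -> str:
    # The threefold subdivision of the Stelliferous Era proposed above.
    if 7 <= decade <= 9:
        return "Early Stelliferous"
    if 10 <= decade <= 12:
        return "Middle Stelliferous"
    if 13 <= decade <= 15:
        return "Late Stelliferous"
    return "outside the Stelliferous Era"

# The present, roughly 1.38 x 10^10 years after the big bang, falls in decade 10:
now = 1.38e10
print(cosmological_decade(now), stelliferous_division(cosmological_decade(now)))
# prints: 10 Middle Stelliferous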
The Early Stelliferous
Another big history periodization that has been employed, besides that of Adams and Laughlin, is Eric Chaisson’s tripartite distinction between the Energy Era, the Matter Era, and the Life Era. The Primordial Era and the Energy Era coincide until the transition point (or, if you like, the phase transition) when the energies released by the big bang coalesce into matter. This phase transition is the transition from the Energy Era to the Matter Era in Chaisson; for Adams and Laughlin this transition is wholly contained within the Primordial Era and may be considered one of the major events of the Primordial Era. This phase transition occurs at about the fifth cosmological decade, so that there is one cosmological decade of matter prior to that matter forming stars.
At the beginning of the Early Stelliferous the first stars coalesce from matter, which has now cooled to the point that this becomes possible for the first time in cosmological history. The only matter available at this time to form stars is hydrogen and helium produced by the big bang. The first generation of stars to light up after the big bang are called Population III stars, and their existence can only be hypothesized because no certain observations exist of Population III stars. The oldest known star, HD 140283, sometimes called the Methuselah Star, is believed to be a Population II star, and is said to be metal poor, or of low metallicity. To an astrophysicist, any element other than hydrogen or helium is a “metal,” and the spectra of stars are examined for the “metals” present to determine their order of appearance in galactic ecology.
The youngest stars, like our sun and other stars in the spiral arms of the Milky Way, are Population I stars and are rich in metals. The whole history of the universe up to the present is necessary to produce the high-metallicity younger stars, and these younger stars form from dust and gas that coalesce into a protoplanetary disk, surrounding the young star, of similarly high metal content. We can think of the stages of Population III, Population II, and Population I stars as the evolutionary stages of galactic ecology that have produced structures of greater complexity. Repeated cycles of stellar nucleosynthesis, catastrophic supernovae, and new star formation from these remnants have produced the later, younger stars of high metallicity.
It is the high relative proportion of heavier elements that makes possible the formation of small rocky planets in the habitable zone of a stable star. The minerals that form these rocky planets are the result of what Robert Hazen calls mineral evolution, which we may consider to be an extension of galactic ecology on a smaller scale. These planets, in turn, have heavier elements distributed throughout their crust, which, in the case of Earth, human civilization has dug out of the crust and put to work manufacturing the implements of industrial-technological civilization. If Population II and Population III stars had planets (an open question in planet formation research, without a definite answer as yet), it is conceivable that these planets might have harbored life, but the life on such worlds would not have had access to heavier elements, so any civilization that resulted would have had a difficult time creating an industrial or electrical technology.
The Middle Stelliferous
In the Middle Stelliferous, the processes of galactic ecology that produced the Stelliferous Era, and which now sustain it, have come to maturity. There is a wide range of galaxies consisting of a wide range of stars, running the gamut of the Hertzsprung–Russell diagram. It is a time of both galactic and stellar profusion, diversity, and fecundity. But even as the processes of galactic ecology reach their maturity, they begin to reveal the dissipation and dissolution that will characterize the Late Stelliferous Era and even the Degenerate Era to follow.
The Milky Way, which is a very old galaxy, carries with it the traces of the smaller galaxies that it has already absorbed in its earlier history — as, for example, the Helmi Stream — and for the residents of the Milky Way and Andromeda galaxies one of the most spectacular events of the Middle Stelliferous Era will be the merging of these two galaxies in a slow-motion collision taking place over hundreds of millions of years, throwing some star systems entirely clear of the newly merged galaxies, and eventually resulting in the merging of the supermassive black holes that anchor the centers of each of these elegant spiral galaxies. The result is likely to be an elliptical galaxy not clearly resembling either predecessor (sometimes called Milkomeda).
Eventually the Triangulum galaxy — the other large spiral galaxy in the local group — will also be absorbed into this swollen mass of stars. In terms of the cosmological time scales here under consideration, all of this happens rather quickly, as does the isolation of each of these merged local groups, which will persist as lone galaxies, each suspended like an island universe with no other galaxies available to observational cosmology. The vast majority of the history of the universe will take place after these events have transpired and are left in the long distant past — hopefully not forgotten, but possibly lost and unrecoverable.
The Tenth Decade
The tenth cosmological decade, comprising the years from 10^10 to 10^11 (10,000,000,000 to 100,000,000,000 years, or 10 Ga to 100 Ga) since the big bang, is especially interesting to us, like the Stelliferous Era on the whole, because this is where we find ourselves. Because of this we are subject to observation selection effects, and we must be particularly on guard for cognitive biases that grow out of the observation selection effects we experience. Just as it seems, when we look out into the universe, that we are in the center of everything, and all the galaxies are racing away from us as the universe expands, so too it seems that we are situated in the center of time, with a vast eternity preceding us and a vast eternity following us.
Almost everything that seems of interest to us in the cosmos occurs within the tenth decade. It is arguable (though not definitive) that no advanced intelligence or technological civilization could have evolved prior to the tenth decade. This is in part due to the need to synthesize the heavier elements — we could not have developed nuclear technology had it not been for naturally occurring uranium, and it is the radioactive decay of uranium in Earth’s crust that contributes significantly to the temperature of Earth’s core and hence to Earth being a geologically active planet. By the end of the tenth decade, all galaxies will have become isolated as “island universes” (once upon a time the cosmological model for our universe today) and the “end of cosmology” (as Krauss and Scherrer put it) will be upon us, because observational cosmology will no longer be able to study the large scale structures of the universe.
The tenth decade, thus, is not only the period when it becomes possible for an intelligent species to evolve, to establish an industrial-technological civilization on the basis of heavier elements built up through nucleosynthesis and supernova explosions, and to employ these resources to launch itself as a spacefaring civilization; it is also the only period in the history of the universe when such a spacefaring civilization can gain a true foothold in the cosmos and establish an intergalactic civilization. After local galactic groups coalesce into enormous single galaxies, and all other similarly coalesced galaxies have passed beyond the cosmological horizon and can no longer be observed, an intergalactic civilization is no longer possible on principles of science and technology as we understand them today.
It is sometimes said that, for astronomers, galaxies are the basic building blocks of the universe. The uniqueness of the tenth decade, then, can be expressed thus: it is the only time in cosmological history during which a spacefaring civilization can emerge and then go on to assimilate and unify the basic building blocks of the universe. It may well happen that, by the time there are million-year-old and even billion-year-old supercivilizations, sciences and technologies will have been developed far beyond anything we can understand today, and some form of intergalactic relationship may continue after the end of observational cosmology, but, if this is the case, the continued intergalactic organization must rest on principles not known to us today.
The Late Stelliferous
In the Late Stelliferous Era, after the end of cosmology, each isolated local galactic group, now merged into a single giant assemblage of stars, will continue its processes of star formation and evolution, ever so slowly using up all the hydrogen produced in the big bang. The Late Stelliferous Era is a universe that has passed “peak hydrogen” and can therefore only look forward to the running down of the processes of galactic ecology that have sustained the universe up to this time.
The end of cosmology will mean a changed structure of galactic ecology. Even if civilizations can find a way around their cosmological isolation through advanced technology, the processes of nature will still be bound by familiar laws of nature, which, being highly rigid, will not have changed appreciably even over billions of years of cosmological evolution. Where light cannot travel, matter cannot travel either, and so any tenuous material connection between galactic groups will cease to play any role in galactic ecology.
The largest scale structures that we know of in the universe today — superclusters and filaments — will continue to expand and cool and to dissipate. We can imagine a bird’s eye view of the future universe (if only a bird could fly over the universe entire), with its large scale structures no longer in touch with one another but still constituting the structure, rarefied by expansion, stretched by gravity, and subject to the evolutionary processes of the universe. This future universe (which we may have to stop calling the universe, as it has lost its unity) stands in relation to its current structure as the isolated and strung out continents of Earth today stand in relation to earlier continental structures (such as the last supercontinent, Pangaea), preceding the present disposition of continents (though keep in mind that there have been at least five supercontinent cycles since the formation of Earth and the initiation of its tectonic processes).
Near the end of the Stelliferous Era, there is no longer any free hydrogen to be gathered together by gravity into new suns. Star formation ceases. At this point, the fate of the brilliantly shining universe of stars and galaxies is sealed; the Stelliferous Era has arrived at functional extinction, i.e., the population of late Stelliferous Era stars continues to shine but is no longer viable. Galactic ecology has shut down. Once star formation ceases, it is only a matter of time before the last of the stars to form burn themselves out. Stars can be very large, very bright, and short-lived, or very small, scarcely stars at all, very dim, cool, and consequently very long-lived. Red dwarf stars will continue to burn dimly long after all the main sequence stars like the sun have burned themselves out, but eventually even the dwarf stars, burning through their available fuel at a miserly rate, will burn out also.
The Post-Stelliferous Era
After the Stelliferous Era comes the Degenerate Era, with the two eras separated by what I have called the End-Stelliferous Mass Extinction Event. What the prospects are for continued life and intelligence in the Degenerate Era is something that I have considered in Who will read the Encyclopedia Galactica? and Addendum on Degenerate Era civilization, inter alia.
Our enormous and isolated galaxy will not be immediately plunged into absolute darkness. Adams and Laughlin (referred to above) estimate that our galaxy may have about a hundred small stars shining — the result of the collision of two or more brown dwarfs. Brown dwarf stars, at this point in the history of the cosmos, contain what little hydrogen remains, since brown dwarf stars were not large enough to initiate fusion during the Stelliferous Era. However, if two or more brown dwarfs collide — a rare event, but in the vast stretches of time in the future of the universe rare events will happen eventually — they may form a new small star that will light up like a dim candle in a dark room. There is a certain melancholy grandeur in attempting to imagine a hundred or so dim stars strewn through the galaxy, providing a dim glow by which to view this strange and unfamiliar world.
Our ability even to outline the large scale structures — spatial, temporal, biological, technological, intellectual, etc. — of the extremely distant future is severely constrained by our paucity of knowledge. However, if terrestrial industrial-technological civilization successfully makes the transition to being a viable spacefaring civilization (what I might call extraterrestrial-spacefaring civilization), our scientific knowledge of the universe is likely to pass through an inflection point of exponential growth surpassing even the scientific revolution of the early modern period.
An exponential improvement in scientific knowledge (supported on an industrial-technological base broader than the surface of a single planet) will help to bring the extremely distant future into better focus, and will give to our existential risk mitigation efforts both the knowledge that such efforts require and the technological capability needed to ensure the perpetual ongoing extrapolation of complexity driven by intelligent, conscious, and purposeful intervention in the world. And if not us, if not terrestrial civilization, then some other civilization will take up the mantle, and the far future will belong to them.
. . . . .
. . . . .
. . . . .
. . . . .
The Technological Frontier
12 December 2014
Friday
An Exercise in Techno-Philosophy
Quite some time ago in Fear of the Future I employed the phrase “the technological frontier,” but I did not follow up on this idea in a systematic way. In the popular mind, the high technology futurism of the technological singularity has largely replaced the futurism of rocketships and jetpacks, so the idea of a technological frontier has particular resonance today, as technology seems to dominate our lives to an increasing degree, a trend that may only accelerate in the future. If our lives are shaped by technology today, how much more profoundly will they be shaped by technology in ten, twenty, fifty, or a hundred years? We would seem to be poised like pioneers on a technological frontier.
How are we to understand the human condition in the age of the technological frontier? The human condition is no longer merely the human condition, but it is the human condition in the context of technology. This was not always the case. Let me try to explain.
Humanity emerged from nature and lived entirely within the context of nature, but our long prehistory of integration into nature was occluded and utterly lost after the emergence of civilization, and the origins of civilization were attended by the formulation of etiological mythologies that attributed supernatural causes to the manifold natural causes that shape our lives. We continued to live at the mercy of nature, but posited ourselves as outside nature. This led to a strangely conflicted conception of nature and a fraught relationship with the world from which we emerged.
The fraught human relationship to nature has been characterized by E. O. Wilson in terms of biophilia; the similarly fraught human relationship to technology might be similarly characterized in terms of technophilia, which I posited in The Technophilia Hypothesis (and further elaborated in Technophilia and Evolutionary Psychology). And as with biophilia and biophobia, so, too, while there is technophilia, there is also technophobia.
Today we have so transformed our world that the context of our lives is the technological world; we have substituted technology for nature as the framework within which we conduct the ordinary business of life. And whereas we once asked about humanity’s place in nature, we now ask, or ought to ask, what humanity’s place is or ought to be in this technological world with which we have surrounded ourselves. We ask these questions out of need, existential need, as there is both pessimism and optimism about a human future increasingly dominated by the technology we have created.
I attach considerable importance to the fact that we have literally surrounded ourselves with our technology. Technology began as isolated devices that appeared within the context of nature. A spear, a needle, a comb, or an arrow was set against the background of omnipresent nature. And the relationship of these artifacts to their sources in nature was transparent: the spear was made of wood, the needle and the comb of bone, the arrowhead of flint. Technological artifacts, i.e., individual instances of technology, were interpolations into the natural world. Over a period of more than ten thousand years, however, technological artifacts accumulated until they displaced nature and came to constitute the background against which nature is seen. Nature then became an interpolation within the context of the technological innovations of civilizations. We have gardens and parks and zoos that interpolate plants and animals into the built environment, which is the environment created by technology.
With technology as the environment and the background of our lives, and not merely constituted by objects within our lives, technology now has an ontological dimension — it has its own laws, its own features, its own properties — and it has a frontier. We ourselves are objects within a technological world (hence the feeling of anomie from being cogs within an enormous machine); we populate an environment defined and constituted by technology, and as such bear some relationship to the ontology of technology as well as to its frontier. Technology conceived in this way, as a totality, suggests ways of thinking about technology parallel to our conceptions of humanity and civilization, inter alia.
One way to think about the technological frontier is as the human exploration of the technium. The idea of the technium accords well with the conception of the technological world as the context of human life that I described above. The “technium” is a term introduced by Kevin Kelly to denote the totality of technology. Here is the passage in which Kelly introduces the term:
“I dislike inventing new words that no one else uses, but in this case all known alternatives fail to convey the required scope. So I’ve somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the technium. The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. It includes intangibles like software, law, and philosophical concepts. And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections. For the rest of this book I will use the term technium where others might use technology as a plural, and to mean a whole system (as in “technology accelerates”). I reserve the term technology to mean a specific technology, such as radar or plastic polymers.”
Kevin Kelly, What Technology Wants
I previously wrote about the technium in Civilization and the Technium and The Genealogy of the Technium.
The concept of the technium can be extended in parallel to the schema I have applied to civilization in Eo-, Eso-, Exo-, Astro-, so that we have the concepts of the eotechnium, the esotechnium, the exotechnium, and the astrotechnium. (Certainly no one is going to employ this battery of unlovely terms I have coined, since neither the words nor the concepts are immediately accessible, but I keep these ideas in the back of my mind and hope to extend them further, perhaps in a formal context in which symbols can be substituted for awkward words; a first gesture in that direction appears in the sketch after the list below.)
● Eotechnium: the origins of technology, wherever and whenever it occurs, terrestrial or otherwise
● Esotechnium: our terrestrial technology
● Exotechnium: the extraterrestrial technium, exclusive of the terrestrial technium
● Astrotechnium: the technium in its totality throughout the universe; the terrestrial and extraterrestrial technium taken together in their cosmological context
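As a first, merely illustrative gesture toward that formalization, the four coinages can be encoded as symbols and their one containment relation stated explicitly. This is a sketch of my own; nothing here is standard:

from enum import Enum

class Technium(Enum):
    EOTECHNIUM = "the origins of technology, terrestrial or otherwise"
    ESOTECHNIUM = "our terrestrial technology"
    EXOTECHNIUM = "the extraterrestrial technium, exclusive of the terrestrial"
    ASTROTECHNIUM = "the terrestrial and extraterrestrial technium together"

def subsumes(whole: Technium, part: Technium) -> bool:
    # The astrotechnium subsumes both the esotechnium and the exotechnium.
    return whole is Technium.ASTROTECHNIUM and part in (
        Technium.ESOTECHNIUM, Technium.EXOTECHNIUM)

print(subsumes(Technium.ASTROTECHNIUM, Technium.EXOTECHNIUM))  # True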
I previously formulated these permutations of technium in Civilization and the Technium. In that post I wrote:
The esotechnium corresponds to what has been called the technosphere, mentioned above. I have pointed out that the concept of the technosphere (like other -spheres such as the hydrosphere and the sociosphere, etc.) is essentially Ptolemaic in conception, i.e., geocentric, and that to make the transition to fully Copernican conceptions of science and the world we need to transcend our Ptolemaic ideas and begin to employ Copernican ideas. Thus to recognize that the technosphere corresponds to the esotechnium constitutes conceptual progress, because on this basis we can immediately posit the exotechnium, and beyond both the esotechnium and the exotechnium we can posit the astrotechnium.
We can already glimpse the astrotechnium, in so far as human technological artifacts have already reconnoitered the solar system and, in the case of the Voyager space probes, have left the solar system and passed into interstellar space. The technium, then, beginning from the eotechnium on Earth, now extends into space, and we can conceive the whole of this terrestrial technology together with our extraterrestrial technology as the astrotechnium.
It is a larger question yet whether there are other technological civilizations in the universe — it is the remit of SETI to discover if this is the case — and, if there are, there is an astrotechnium much greater than that we have created by sending our probes through our solar system. A SETI detection of an extraterrestrial signal would mean that the technology of some other species had linked up with our technology, and by their transmission and our reception an interstellar astrotechnium comes into being.
The astrotechnium is at once a technological frontier and, because it extends throughout extraterrestrial space, a physical frontier as well. The exploration of the astrotechnium would be at once an exploration of the technological frontier and an exploration of an actual physical frontier. This is surely the frontier in the fullest sense of the term. But there are other senses as well.
We can go my taxonomy of the technium one better and also include the endotechnium, where the prefix “endo-” means “inside” or “interior.” The endotechnium is that familiar motif of contemporary thought, virtual reality becoming indistinguishable from the reality of nature. Virtual reality is immersion in the endotechnium.
I have noted (in An Idea for the Employment of “Friendly” AI) that one possible employment of friendly AI would be the on-demand production of virtual worlds for our entertainment (and possibly also our education). One would presumably instruct one’s AI interface (which already has all human artistic and intellectual accomplishments stored in its databanks) that one wishes to enter into a particular story. The AI generates the entire world virtually, and one employs one’s preferred interface to step into the world of the imagination. Why would one so immersed choose to emerge again?
One of the responses to the Fermi paradox is that any sufficiently advanced civilization that had developed to the point of being able to generate virtual reality of a quality comparable to ordinary experience would thereafter devote itself to the exploration of virtual worlds, turning inward rather than outward, forsaking the wider universe outside for the universe of the mind. In this sense, the technological frontier represented by virtual reality is the exploration of the human imagination (or, for some other species, the exploration of the alien imagination). This exploration was formerly carried out in literature and the arts, but we seem poised to enact this exploration in an unprecedented way.
There are, then, many senses of the technological frontier. Is there any common framework within which we can grasp the significance of these several frontiers? The most famous representative of the role of the frontier in history is of course Frederick Jackson Turner, for whom the Turner Thesis is named. At the end of his famous essay on the frontier in American life, Turner wrote:
“From the conditions of frontier life came intellectual traits of profound importance. The works of travelers along each frontier from colonial days onward describe certain common traits, and these traits have, while softening down, still persisted as survivals in the place of their origin, even when a higher social organization succeeded. The result is that to the frontier the American intellect owes its striking characteristics. That coarseness and strength combined with acuteness and inquisitiveness; that practical, inventive turn of mind, quick to find expedients; that masterful grasp of material things, lacking in the artistic but powerful to effect great ends; that restless, nervous energy; that dominant individualism, working for good and for evil, and withal that buoyancy and exuberance which comes with freedom — these are traits of the frontier, or traits called out elsewhere because of the existence of the frontier.”
Frederick Jackson Turner, “The Significance of the Frontier in American History,” which constitutes the first chapter of The Frontier In American History
Turner is not widely cited today, and his work has fallen into disfavor (especially targeted by the “New Western Historians”), but much that Turner observed about the frontier is not only true, but more generally applicable beyond the American experience of the frontier. I think many readers will recognize in the attitudes of those today on the technological frontier the qualities that Turner described in the passage quoted above, attributing them specially to the American frontier, which for Turner was, “an area of free land, its continuous recession, and the advance of American settlement westward.”
The technological frontier, too, is an area of free space — the abstract space of technology — the continuous recession of this free space as frontier technologies migrate into the ordinary business of life even while new frontiers are opened, and the advance of pioneers into the technological frontier.
One of the attractions of a frontier is that it is distant from the centers of civilization, and in this sense represents an escape from the disciplined society of mature institutions. The frontier serves as a refuge; the most marginal elements of society naturally seek the margins of society, at the periphery, far from the centers of civilization. (When I wrote about the center and periphery of civilization in The Farther Reaches of Civilization I could just as well have expressed myself in terms of the frontier.)
In the past, the frontier was defined in terms of its (physical) distance from the centers of civilization, but the world of high technology being created today is a product of the most technologically advanced centers of civilization, so that the technological frontier is defined by its proximity to the centers of civilization, understood as the centers of innovation and production for industrial-technological civilization.
The technological frontier nevertheless exists on the periphery of many of the traditional symbols of high culture that were once definitive of civilizational centers; in this sense, the technological frontier may be defined as the far periphery of the traditional center of civilization. If we identify civilization with the relics of high culture — painting, sculpture, music, dance, and even philosophy, all understood in their high-brow sense (and everything that might have featured symbolically in a seventeenth century Vanitas painting) — we can see that the techno-philosophy of our time has little sympathy for these traditional markers of culture.
The frontier has been the antithesis of civilization — civilization’s other — and the further one penetrates the frontier, moving always away from civilization, the nearer one approaches the absolute other of civilization: wildness and wilderness. The technological frontier offers to the human sense of adventure a kind of wildness distinct from that of nature, as well as the intellectual adventure of traditional culture. Although the technological frontier is in one sense antithetical to post-apocalyptic visions of formerly civilized individuals transformed into noble savages (visions usually marked by technological rejectionism), there is also a sense in which the technological frontier is like the post-apocalyptic frontier in its radical rejection of bourgeois values.
If we take the idea of the technological frontier in the context of the STEM cycle, we would expect that the technological frontier would have parallels in science and engineering — a scientific frontier and an engineering frontier. In fact, the frontier of scientific knowledge has been a familiar motif since at least the middle of the twentieth century. With the profound disruptions of scientific knowledge represented by relativity and quantum theory, the center of scientific inquiry has been displaced into an unfamiliar periphery populated by strange and inexplicable phenomena of the kind that would have been dismissed as anomalies by classical physics.
The displacement of traditional values of civilization, and even of traditional conceptions of science, gives the technological frontier its frontier character even as it emerges within the centers of industrial-technological civilization. In The Interstellar Imperative I asserted that the central imperative of industrial-technological civilization is the propagation of the STEM cycle. It is at least arguable that the technological frontier is both a result and a cause of the ongoing STEM cycle, which experiences its most unexpected advances when its scientific, technological, and engineering innovations seem to be at their most marginal and peripheral. A civilization that places itself within its own frontier in this way is a frontier society par excellence.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
The Existential Precarity of Civilization
2 November 2014
Sunday
The word “precarity” is quite recent, and does not appear in the Oxford English Dictionary, but has appeared in the titles of several books. The term mostly derives from left-leaning organized labor, and has come into use to describe the lives of workers in precarious circumstances. Wikipedia defines precarity as “a condition of existence without predictability or security, affecting material or psychological welfare.”
Dorothy Day, writing in The Catholic Worker (coming from a context of both Catholic monasticism and labor activism), May 1952 (“Poverty and Precarity”), cites a certain “saintly priest… from Martinique,” now known to be Léonce Crenier, who is quoted as saying:
“True poverty is rare… Nowadays communities are good, I am sure, but they are mistaken about poverty. They accept, admit on principle, poverty, but everything must be good and strong, buildings must be fireproof. Precarity is rejected everywhere, and precarity is an essential element of poverty. That has been forgotten. Here we want precarity in everything except the church.”
Crenier had so absorbed and accepted the ideal of monastic poverty, like the Franciscans and the Poor Clares (or their modern equivalents such as Simone Weil and Christopher McCandless), that he didn’t merely tolerate poverty, he embraced and celebrated poverty. Elsewhere Father Crenier wrote, “I noticed that real poverty, where one misses so many things, attracts singular graces amongst the monks, and in particular spiritual peace and joy.” Given the ideal of poverty and its salutary effect upon the spiritual life, Crenier not only celebrated poverty, but also the condition in which the impoverished live, and this is precarity.

Pope John XXII receives the transcripts of the interrogation of Gui de Corvo. Fifteenth-century manuscript. Biblioteca Nazionale Braidense, Milan, Italy.
Recent studies have retained this leftist interest in the existential precarity of the lives of marginalized workers, but the monastic interest in poverty for the sake of an enhanced spiritual life has fallen away, and only the misery of precarity remains. Not only has the spiritual virtue of poverty been abandoned as an ideal, but it has, in a sense, been turned on its head, as the spiritual focus of poverty turns from its cultivation to its eradication. In this tradition, the recent sociology of Pippa Norris and Ronald Inglehart is especially interesting, as they have bucked the contemporary trend and given a new argument for secularization, which was once in vogue but has been very much out of favor since the rise of Islamic militancy as a political force in global politics. (I have myself argued that secularization was too readily and quickly abandoned, and have discussed the problem of secularization in relation to the confirmation and disconfirmation of ideas in history.)
Pippa Norris and Ronald Inglehart are perhaps best known for their book Sacred and Secular: Religion and Politics Worldwide. Their paper, Are high levels of existential security conducive to secularization? A response to our critics, is available online. They make the case that, despite the apparent rise of fundamentalist religious belief in the past several decades, and the anomalous instance of the US, which is wealthy and highly religious, it is not wealth itself that is a predictor of secularization, but rather what they call existential security (which may be considered the economic aspect of ontological security).
While Norris and Inglehart do not use the term “precarity,” clearly their argument is that existential precarity pushes individuals and communities toward the comforts of religion in the face of a hostile and unforgiving world: “…the public’s demand for transcendent religion varies systematically with levels of vulnerabilities to societal and personal risks and threats.” This really isn’t a novel thesis, as Marx pointed out long ago that societies created ideal worlds of justice when justice was denied them in this world, implying that when conditions in this world improve, there would be no need for imagined worlds of perfect justice. Being comfortably well off in the real world means there is little need to imagine comforts in another world.
Speaking on a purely personal (and anecdotal) basis, Norris and Inglehart’s thesis rings true in my experience. I have relatives in Scandinavia and have visited the region many times. There, where secularization has gone the furthest, and the greater proportion of the population enjoys a high level of existential security, you can quite literally see the difference in people’s faces. In the US, people are hard-driving and always seemingly on edge; there is an underlying anxiety that I find very off-putting. But there is a good reason for this: people know that if they lose their jobs, they may lose their homes and end up on the street. In Scandinavia, people look much more relaxed in their facial expressions, and they are not continually on the verge of flying into a rage. People are generally very confident about their lives and don’t worry much about the future.
One might think of the existential precarity of individuals as an ontogenic precarity, and this suggests the possibility of what might be called phylogenic precarity, or the existential precarity of social wholes. Fragile states exist in a condition of existential precarity. In such cases, there is a clear linkage between social precarity and individual precarity. In other cases, there may be no such linkage. It is possible for great individual precarity to coexist with social stability, and for social precarity to coexist with individual security. An example of the former is the contemporary US; an example of the latter would be some future society in which people are wealthy and comfortable but fail to see that their society is on the verge of collapse — like the Romans, say, in the second and third centuries AD.
The ultimate form of social precarity is the existential precarity of civilization. In some contexts it might be better to discuss the vulnerability and fragility of civilization in terms of existential precarity rather than existential risk or existential threat. I have previously observed that every existential risk is at the same time an existential opportunity, and vice versa (cf. Existential Risk and Existential Opportunity), so that the attempt to limit and contain existential risk may have the unintended consequence of limiting and containing existential opportunity. Thus the selfsame policies instituted for the sake of mitigating existential risk may contribute to the stagnation of civilization and therefore become a source of existential risk. The idea of existential precarity stands outside the dialectic of risk and opportunity, and therefore can provide us with an alternative formulation of existential risk.
How precarious is the life of civilized society? In some cases, social order seems to be balanced on a knife edge. During the 1981 Toxteth riots in Liverpool, which occurred in the wake of recession and high unemployment, as well as tension between the police and residents, Margaret Thatcher memorably said that, “The veneer of civilization is very thin.” But this is misleading. Urban riots are not a sign of the weakness of civilization, but are intrinsic to civilization itself, in the same way that war is intrinsic to civilization: it is not possible to have an urban riot without large-scale urban communities in the same way that it is not possible to have a war without the large-scale organizational resources of a state. Riots even occur in societies as stable as Sweden.
We can distinguish between the superficial precarity of a tense city that might erupt in riots at any time, which is the sort of precarity to which Margaret Thatcher referred, and a deeper, underlying precarity that does not manifest itself in terms of riots, overturned cars, and burned buildings, but in the sudden and inexplicable collapse of a social order that is not followed by immediate recovery. In considering the possibility of the existential precarity of civilization, what we really want to know is whether there is a social equivalent of the passenger pigeon population collapse and then extinction.
In the 19th century, the passenger pigeon was the most common bird in North America. Following hunting and habitat loss, the species experienced a catastrophic population collapse between 1870 and 1890, finally going extinct in 1914. Less than fifty years before the species went extinct, there was no reason to suspect that the species was endangered, or even seriously reduced in numbers. When the end came, it came quickly; somehow the entire species reached a tipping point and could not recover from its collapse. Could this happen to our own species? Could this happen to our civilization? Despite our numbers and our apparent resilience, might we have some existential Achilles’ heel, some essential precarity, incorporated into the human condition of which we are blissfully unaware? And, if we do have some essential vulnerability, is there a way to address this?

Zoological illustration from a volume of articles, The Passenger Pigeon, 1907 (Mershon, editor). Engraving from painting by John James Audubon in Pennsylvania, 1824.
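The qualitative dynamics of such a collapse can be captured in a toy model: logistic growth with an Allee threshold, below which growth turns negative and decline feeds on itself. This is a minimal sketch with assumed parameters, not a model of the actual passenger pigeon data:

def step(n: float, r: float = 0.2, K: float = 1.0, A: float = 0.2) -> float:
    # One year of growth; the (n/A - 1) factor makes growth negative below
    # the Allee threshold A, so a depleted population cannot recover.
    return max(n + r * n * (n / A - 1.0) * (1.0 - n / K), 0.0)

def run(n0: float, years: int = 100) -> float:
    n = n0
    for _ in range(years):
        n = step(n)
    return n

print(f"start at 0.25 of carrying capacity: {run(0.25):.3f}")  # recovers toward 1.000
print(f"start at 0.15 of carrying capacity: {run(0.15):.3f}")  # collapses to 0.000

A population pushed just below the threshold goes extinct even if the hunting that pushed it there has entirely ceased, which is just the pattern the passenger pigeon presents.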
I have argued elsewhere that civilization is becoming more robust over time, and I have not changed my mind about this, but neither is it the entire story about the existential security of civilization. In comparison to the precarity of the individual life, civilization is robust in the extreme. Civilization only betrays its existential precarity on time scales several orders of magnitude beyond the human experience of time, which at most encompasses several decades. As we ascend in temporal comprehensiveness, civilization steadily diminishes until it appears as a mere anomaly in the vast stretches of time contemplated in cosmology. At this scale, the longevity of civilization is no longer in question only because its brevity is all too obvious.
At the human time scale, civilization is as certain as the ground beneath our feet; at the cosmological time scale, civilization is as irrelevant as a mayfly. An appraisal of the existential precarity of civilization must take place at some time scale between the human and the cosmological. This brings me to an insight that I had after attending the 2014 IBHA conference last summer. On day 3 of the conference I attended a talk by futurist Joseph Voros that provided much food for thought, and while driving home I thought about a device he employed to discuss future forecasts, the future cone.
This was my first exposure to the future cone, and I immediately recognized the possibility for conceptual clarification that it offers in thinking about the future. If we depict the future by extending a timeline indefinitely, the line itself represents the most likely future, while progressively larger cones concentric with that line, radiating out from the present, represent increasingly less likely classes of forecasts. Within the classes of forecasts defined by the spaces included within progressively larger cones, preferred or unwelcome futures can be identified by further subdivisions of the space defined by the cones. Voros offered an alliterative mnemonic device to differentiate the conceptual spaces defined by the future cone, from the center outward: the projected future, the probable future, the plausible future, the possible future, and the preposterous future.
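Voros’s mnemonic can be given a rough schematic form: treat the future cone as nested classes of forecasts ordered by divergence from the baseline projection. This is my own toy encoding, not Voros’s formal apparatus, and the divergence thresholds are arbitrary assumptions:

CONE_CLASSES = [  # (maximum divergence from the projected baseline, label)
    (0.05, "projected"),
    (0.20, "probable"),
    (0.40, "plausible"),
    (0.70, "possible"),
    (1.00, "preposterous"),
]

def classify_forecast(divergence: float) -> str:
    # Divergence 0.0 is the straight-line projection; 1.0 is maximal departure.
    for threshold, label in CONE_CLASSES:
        if divergence <= threshold:
            return label
    return "beyond the cone"

print(classify_forecast(0.03))  # projected
print(classify_forecast(0.55))  # possible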
When I was reflecting on this on the drive home, I realized that, in the short term, the projected future is almost always correct. We can say with a high degree of accuracy what tomorrow will be like. Yet in the long term, the projected future is almost always wrong. Here when I speak of the projected future I mean the human future. We can project future events in cosmology with a high degree of accuracy — for example, the coming collision of the Milky Way and Andromeda galaxies — but we cannot say anything of significance about what human civilization will be like at this time, or indeed whether there will be any human civilization or any successor institution to human civilization. Futurist forecasting, in other words, goes off the rails in the mid-term future, though exactly where it does so is difficult to say. And it is precisely in this mid-term future — somewhere between human time scales and cosmological time scales — that the existential precarity of civilization becomes clear. Sometime between tomorrow and four billion years from now, when a swollen sun swallows up Earth, human civilization will be subject to unpredictable and unprecedented selection pressures that will mean either the permanent ruination of that civilization or its transformation into something utterly unexpected.

What unforeseen forces will shape human life and civilization in the future? (First Contact, by Nikolai Nedbailo)
With this in mind, we can focus our conceptual exploration of the existential precarity, existential security, existential threat, and existential risk that bears upon civilization in the period of the mid-term future. How far can we narrow the historico-temporal window of the mid-term future of precarity? What are the selection pressures to which civilization will be subject during this period? What new selection pressures might emerge? Is it more important to focus on existential risk mitigation, or to focus on our civilization making the transition to a post-civilizational institution that will carry with it the memory of its human ancestry? These and many other related questions must assume the central place in our research.
. . . . .

About four billion years from now, when the sun is swelling into a red giant star, the Milky Way and Andromeda galaxies will merge, perhaps resulting in an elliptical galaxy. The universe will be an interesting place, but will human civilization be around to record the event?
. . . . .
. . . . .
. . . . .
. . . . .
2014 IBHA Conference Day 3
9 August 2014
Saturday
Complexity (2)
Day 3 of the 2014 IBHA conference began with panel 32 in room 201, “Complexity (2).” Three speakers were scheduled, but one canceled so that more time was available to the other two. This turned out to be quite fortunate. This panel was, without question, one of the best I have attended. It began with Ken Baskin on “Sister Disciplines: Bringing Big History and Complexity Theory Together,” and continued with Claudio Maccone with “Entropy as an Evolution Measure (Evo-SETI Theory).”
Ken Baskin, author of the forthcoming The Axial Ages of World History: Lessons for the 21st Century, said that big history and complexity theory are “post-Newtonian disciplines that complement each other.” His subsequent exposition made a persuasive case to this end. He used the now-familiar concepts of complexity — complex adaptive systems (CAS), non-linearity, and attractors, strange and otherwise — to give an exposition of big history periodization. He presented historical changes as being “thick” — that is to say, not as brief transitional periods, but as extended transitional periods that led to even longer-term states of relative stability. According to his periodization, the hunter-gatherer era was stable, and was followed by the disruption of the agricultural revolution; this eventually issued in a stable “pre-axial” age, which was in turn disrupted by the Axial Age. The Axial Age transition lasted for several hundred years but gave way to somewhat stable post-Axial societies, and this in turn has been disrupted by a second axial age. According to Baskin, we have been in this second axial transition since about 1500 and have not yet settled down into a new, stable social regime.
Claudio Maccone is an Italian SETI astronomer who has written a range of technical books, including Mathematical SETI: Statistics, Signal Processing, Space Missions and Deep Space Flight and Communications: Exploiting the Sun as a Gravitational Lens. His presentation was nothing less than phenomenal. My response is partly due to the fact that he addressed many of my interests. Before the IBHA conference a friend asked me what I would have talked about if I had given a presentation. I said that I would have talked about big history in relation to astrobiology, and specifically that I would like to point out the similarities between the emergent complexity schema of big history to the implicit levels of complexity in the Drake equation. This is exactly what Maccone did, and he did so brilliantly, with equations and data to back up his argument. Also, Maccone spoke like a professor giving a lecture, with an effortless mastery of his subject.
Maccone said that, for him, big history was simply an extension of the Drake equation — the Drake equation goes back some ten million years or so, and by adding some additional terms to the beginning of the Drake equation we can expand it to comprise the whole 13.7 billion years of cosmic history. I think that this was one of the best concise statements of big history that I heard at the entire conference, notwithstanding its deviation from most of the other definitions offered. The Drake equation is a theoretical framework that is limited only by the imagination of the researcher in revising its terms and expression. And Maccone has taken it much further yet.
Maccone has worked out a revision of the Drake equation that plugs probability distributions into the variables of the Drake equation (which he published as “The Statistical Drake Equation” in Acta Astronautica, 2010, doi:10.1016/j.actaastro.2010.05.003). His work is the closest thing that I have seen to a mathematical model of civilization. All I can say is: get all his books and papers and study them carefully. It will be worth the effort.
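The underlying idea can be conveyed by a simple Monte Carlo sketch of my own devising, not Maccone’s closed-form treatment: replace each point estimate in the Drake equation with a probability distribution, and the number of civilizations N becomes a random variable. The ranges below are assumptions for illustration only:

import random

random.seed(42)

FACTORS = {  # (low, high) bounds for uniform sampling; illustrative values only
    "R_star": (1.0, 10.0),   # star formation rate per year
    "f_p":    (0.2, 1.0),    # fraction of stars with planets
    "n_e":    (0.5, 3.0),    # habitable planets per such system
    "f_l":    (0.01, 1.0),   # fraction of those developing life
    "f_i":    (0.001, 1.0),  # fraction of those developing intelligence
    "f_c":    (0.01, 1.0),   # fraction developing detectable technology
    "L":      (100.0, 1e6),  # longevity of the communicating phase, years
}

def sample_N() -> float:
    # One Monte Carlo draw: the product of one sample from each factor.
    n = 1.0
    for low, high in FACTORS.values():
        n *= random.uniform(low, high)
    return n

samples = sorted(sample_N() for _ in range(100_000))
median = samples[len(samples) // 2]
mean = sum(samples) / len(samples)
print(f"median N = {median:.1f}, mean N = {mean:.1f}")

Because N is a product of many independent factors, its distribution is approximately lognormal (this is Maccone’s central result), which is why the mean lands far above the median in the output.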
Big History and the Future
The next panel was the most difficult decision to make of the conference, because in one room were David Christian, Cynthia Brown, and others discussing “Meaning in Big History: A Naturalistic Perspective,” but I chose instead to go to panel 39 in room 301, “Big History and the Future,” which was concerned with futurism, or, as is now said, “future studies.”
The session started out with J. Daniel May reviewing past visions of the future through a discussion of twentieth century science fiction films, including Metropolis, Forbidden Planet, Lost in Space, Star Trek, and 2001. I have seen all these films and television programs, and, as was evident from the discussion following the talk, many others had as well, citing arcane details from the films in their comments.
Joseph Voros then presented “On the transition to ‘Threshold 9’: examining the implications of ‘sustainability’ for human civilization, using the lens of big history.” The present big history schematization of the past that is most common (but not universal, as evidenced by this conference) recognizes eight thresholds of emergent complexity. This immediately suggests the question of what the next threshold of emergent complexity will be, which would be the ninth threshold, thus making the “ninth threshold” a kind of cipher among big historians and a framework for discussing the future in the context of big history. Given that the current threshold of emergent complexity is fossil-fueled civilization (what I call industrial-technological civilization), and given that fossil fuels are finite, an obvious projection for the future concerns the nature of a post-fossil-fuel civilization.
Voros claimed that all scenarios for the future fall into four categories: 1) continuation, 2) collapse (also called “descent”), 3) disciplined society (presumably what Bostrom would call “flawed realization”), and 4) transformational society, in which the transformation might be technological or spiritual. Since Voros was focused on post-fossil-fuel civilization, his talk was throughout related to “peak oil” concerns, though at a much more sophisticated level. He noted that the debate over “peak oil” is almost irrelevant from a big history perspective, because whether oil runs out now or later doesn’t alter the fact that it will run out, being a finite resource renewable only over a period of time much greater than the time horizon of civilization. With this energy focus, he proposed that one of the forms of a “disciplined society” that could come about would be an “energy disciplined society.” Of the transformational possibilities he outlined four scenarios: 1) energy bonanza, 2) spiritual awakening, 3) brain/mind upload, and 4) childhood’s end.
After Voros, Cadell Last of the Global Brain Institute presented “The Future of Big History: High Intelligence to Developmental Singularity.” He began by announcing his “heretical” view that cultural evolution can be predicted. His subsequent talk revealed additional heresies without further trigger warnings. Last spoke of a coming era of cultural evolution in which the unit of selection is the idea (I was happy that he used “idea” instead of “meme”), and said that this future would largely be determined by “idea flows” — presumably analogous to the “energy flows” of Eric Chaisson that have played such a large role in this conference, as well as to the gene flows of biological evolution. (“Idea flows” may be understood as a contemporary reformulation of “idea diffusion.”) This era of cultural evolution will differ from biological evolution in that idea flows, unlike gene flow in biological evolution, are not individual (they are cultural) and are not blind (conscious agents can plan ahead).
Last gave a wonderfully intuitive presentation of his ideas, and though it was the sort of thing that futurists recognize as familiar, I suspect much of what he said would strike the average listener as something akin to moral horror. Last said that, in the present world, biological and linguistic codes are in competition with each other, and gave the familiar example of having to choose whether to invest time and effort in biological descendants or cultural descendants. Scarcity of our personal resources means that we are likely to focus on one or the other. Eventually, biological evolution will cease and all energies will be poured into cultural evolution. At that time, we will be free from the “tyranny of biology,” which requires that we engage in non-voluntary activities.
Reconceptualizations of Big History
For the final sessions divided into separate rooms I attended panel 44, “Reconceptualizations of Big History.” I came to this session primarily to hear Camelo Castillo speak on “Mind as a Major Transition in Big History.” Castillo, the author of Origins of Mind: A History of Systems, critiqued previous periodizations of big history, noting that they conflate changes in structure and changes in function. He then went on to define major transitions as “transitions from individuals to groups that utilize novel processes to maintain novel structures.” With this definition, he went back to the literature and produced a periodization of six major transitions in big history. Not yet finished, he hypothesized that in looking for mind in the brain we are looking in the wrong place. Since all earlier major transitions involved both structures and processes, and moved from individuals to groups, he argued that we should be looking for mind in the social groups of human beings. The brain, he allowed, is implicated in the development of human social life, but social life is not reducible to the brain, and mind should be sought in theories of social intelligence.
Castillo’s work is quite rigorous and he defends it well, but I asked myself why we should not have different kinds of transitions at different stages of history and development, especially given that the kinds of entities involved in the transitions may be fundamentally distinct. Just as new or distinctive orders of existence require new or distinctive metrics for their measurement, so too new or distinctive orders of existence may come into being or pass out of being according to a transition specific to that kind of existent.
Final Plenary Sessions
After the individual sessions were finished, there was a series of plenary sessions. There was a presentation of ChronoZoom, Fred Spier presented “The Future of Big History,” and finally there was a panel discussion entirely devoted to questions and answers, with Walter Alvarez, Craig Benjamin, Cynthia Brown, David Christian, Fred Spier, and Joseph Voros fielding the questions.
After the intellectual intensity of the sessions, it was not a surprise that these plenary sessions came to be mostly about funding, outreach, teaching, and the practical infrastructure of scholarship.
And with that the conference was declared to be closed.
. . . . .
. . . . .
. . . . .