Is it possible for human beings to care about the fate of strangers? This is at once a profound philosophical question and an immediately practical question. The most famous response to this question is perhaps that of John Donne:

“No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.”

John Donne, Devotions upon Emergent Occasions, XVII. Nunc lento sonitu dicunt, morieris. Now, this bell tolling softly for another, says to me: Thou must die.

Emmanuel Levinas spoke of “the community of those with nothing in common,” in an attempt to get at the human concern for other human beings of whom we know little or nothing. More recently, there is this from Bill Gates:

“When I talk to friends about global health, I often run into a strange paradox. The idea of saving one person’s life is profound and thrilling. But I’ve found that when you talk about saving millions of lives — it sounds almost meaningless. The higher the number, the harder it is to wrap your head around.”

Bill Gates, opening paragraph of An AIDS Number That’s Almost Too Big to Believe

Gates presents this as a paradox, but in social science it is a well-known and well-studied cognitive bias known as the identifiable victim effect. One researcher who has studied this cognitive bias is Paul Slovic, whose work was discussed by Sam Harris in the following passage:

“…when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.”

“Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this ‘psychic numbing’ explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed ‘genocide neglect’ — our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering — represents one of the more perplexing and consequential failures of our moral intuition.”

“Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.”

Sam Harris, The Moral Landscape, Chapter 2

Skip down another paragraph and Harris adds this:

“The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention.”

Harris has not hesitated to court controversy, and he speaks the truth plainly as he sees it, but by failing to place what he characterizes as norms of moral reasoning in an evolutionary context he presents us with a paradox (the section of the book quoted above is subtitled “Moral Paradox”). This kind of cognitive bias appears paradoxical only when compared to a relatively recent conception of morality liberated from parochial in-group concerns.

For our ancestors in small nomadic bands, focusing on a single individual whose face was known had high survival value, whereas a broadly humanitarian concern for all human beings would have been disastrous in equal measure. Today, in the context of industrial-technological civilization, we can afford to love humanity; if our ancestors had loved humanity rather than the particular individuals they knew well, they likely would have gone extinct.

Our evolutionary past has ill prepared us for the perplexities of population ethics in which the lives of millions may rest on our decisions. On the other hand, our evolutionary past has well prepared us for small group dynamics in which we immediately recognize everyone in our in-group and with equal immediacy identify anyone who is not part of our in-group and who therefore belongs to an out-group. We continue to behave as though our decisions were confined to a small band of individuals known to us, and the ability of contemporary telecommunications to project particular individuals into our personal lives as though we knew them, as if they were part of our in-group, plays into this cognitive bias.

While the explicit formulation of the identifiable victim effect is recent, the principle has been well known for hundreds of years at least, and has been described as compellingly in historical literature as in recent social science, as, for example, in Adam Smith:

“Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”

Adam Smith, Theory of Moral Sentiments, Part III, chapter 3, paragraph 4

And immediately after Hume made his famous claim that, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them,” he illustrated the claim with an observation similar to Smith’s:

“Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.”

David Hume, A Treatise of Human Nature, Book II, Part III, section 3

Bertrand Russell has well described how the expression of this cognitive bias can take on the conceit of moral superiority in the context of romanticism:

“Cultivated people in eighteenth-century France greatly admired what they called la sensibilité, which meant a proneness to emotion, and more particularly to the emotion of sympathy. To be thoroughly satisfactory, the emotion must be direct and violent and quite uninformed by thought. The man of sensibility would be moved to tears by the sight of a single destitute peasant family, but would be cold to well-thought-out schemes for ameliorating the lot of peasants as a class. The poor were supposed to possess more virtue than the rich; the sage was thought of as a man who retires from the corruption of courts to enjoy the peaceful pleasures of an unambitious rural existence.”

Bertrand Russell, A History of Western Philosophy, Part II. From Rousseau to the Present Day, Chapter XVIII, “The Romantic Movement”

Russell’s account of romanticism provides some of the missing rationalization whereby a cognitive bias clearly at variance with norms of moral reasoning is justified as being the “higher” moral ground. Harris seems to suggest that, as soon as this violation of moral reasoning is pointed out to us, we will change. But we don’t change, for the most part. Our rationalizations change, but our behavior rarely does. And indeed studies of cognitive bias have revealed that even when experimental subjects are informed of cognitive biases that should be obvious ex post facto, most will continue to defend choices that unambiguously reflect cognitive bias.

I have personally experienced the attitude described by Russell (despite the fact that I have not lived in eighteenth-century France) more times than I care to recall, though I find myself temperamentally on the side of those formulating well-thought-out schemes for the amelioration of the lot of the destitute as a class, rather than those moved to tears by the sight of a single destitute family. These experiences suggest, if only anecdotally, that if you attempt to live by the quasi-utilitarianism advocated by Russell and Harris, others will regard you as cold, unfeeling, and lacking in the milk of human kindness.

The cognitive bias challenge to presumptive norms of moral reasoning is also a profound challenge to existential risk mitigation. Existential risk mitigation deals in the largest numbers of human lives saved, but it is a well-thought-out scheme for ameliorating the lot of human beings as a class, and may therefore have little emotional appeal compared to putting an individual’s face on a problem and then broadcasting that face repetitively.

We have all heard that the past is a foreign country, and that they do things differently there. (This line comes from the 1953 novel The Go-Between by L. P. Hartley.) We are the past of some future that has yet to occur, and we will in turn be a foreign country to that future. And, by the same token, the future is a foreign country, and they do things differently there. Can we care about these foreigners with their foreign ways? Can we do more than care about them, and actually change our behavior in the present in order to ensure an ongoing future, however foreign that future is from our parochial concerns?

In Bostrom’s paper “Existential Risk Prevention as Global Priority” (Global Policy, Volume 4, Issue 1, February 2013) the author gives a lower bound of 10^16 potential future lives saved by existential risk mitigation (though he also gives “a lower bound of 10^54 human-brain-emulation subjective life-years” as a possibility), but if the “collapse of compassion” is a function of the numbers involved, then the higher the numbers we cite for individuals saved as a result of existential risk mitigation, the less the average individual of today will care.

Would it be possible to place an identifiable victim in the future? This is difficult, but we are all familiar with appeals to the world we leave to our children, and these are attempts to connect identifiable victims with actions that may prejudice the ability of human beings in the future to live lives of value commensurate with our own. It would be possible to construct some grand fiction, like Plato’s “noble lie,” in order to interest the mass of the public in existential risk mitigation, but this would not be successful unless it became some kind of quasi-religious belief, exempted from falsification, that serves as the receptacle of our collective hopes. This does not seem very plausible (or sustainable) to me.

Are we left, then, to take the high road? To try to explain in painstaking (and off-putting) detail the violation of moral norms involved in our failure to adequately address existential risks, thereby putting our descendants in mortal danger? Certainly if an attempt to place an identifiable victim in the future is doomed to failure, we have no remaining option but the attempt at a moral intervention and relentless moral education that could transform the moral lives of humanity.

I do not think either of the above approaches to resolving the identifiable victim challenge to existential risk mitigation would be likely to succeed. I can put this more strongly yet: I think both approaches would almost certainly result in a backlash and would therefore be counter-productive to existential risk mitigation efforts. The only way forward that I can see is to present existential risk mitigation in terms of the adventure and exploration made possible by a spacefaring civilization, which would, almost as an unintended consequence, secure the redundancy and autonomy of extraterrestrial centers of human civilization.

Human beings (at least as I know them) have a strong distaste for moral lectures and do not care to be told to do anything for their own good. But if you present them with the possibility of adventure and excitement that promises new experiences to every individual, and possibly even the prospect of the extraterrestrial equivalent of a buried treasure, or a pot of gold at the end of the rainbow, you might enlist the selfishness and greed of individuals in a great cause on behalf of Earth and all its inhabitants, so that each individual is moved, as it were, by an invisible hand to promote an end which was no part of his intention.

. . . . .


Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

9. Conceptualization of Existential Risk

10. Existential Risk and Existential Viability

11. Existential Risk and the Developmental Conception of Civilization

12. Developing an Existential Perspective

13. Existential Risk and Identifiable Victims

. . . . .

Fred Adams and Greg Laughlin’s five ages of the universe

Introduction: Periodization in Cosmology

Recently Paul Gilster posted my Who will read the Encyclopedia Galactica? on his Centauri Dreams blog. In this post I employ the framework of Fred Adams and Greg Laughlin from their book The Five Ages of the Universe: Inside the Physics of Eternity, who distinguish the Primordial Era, before stars have formed, the Stelliferous Era, which is populated by stars, the Degenerate Era, when only the degenerate remains of stars are to be found, the Black Hole Era, when only black holes remain, and finally the Dark Era, when even black holes have evaporated. These major divisions of cosmological history allow us to partition the vast stretches of cosmological time, but they also invite us to subdivide each era into smaller increments (such is the historian’s passion for periodization).

The Stelliferous Era is the most important to us, because we find ourselves living in the Stelliferous Era, and moreover everything that we understand in terms of life and civilization is contingent upon a biosphere on the surface of a planet warmed by a star. When stellar formation has ceased and the last star in the universe burns out, planets will go dark (unless artificially lighted by advanced civilizations) and any remaining biospheres will cease to function. Life and civilization as we know it will be over. I have called this the End-Stelliferous Mass Extinction Event.

It will be a long time before the end of the Stelliferous Era — in human terms, unimaginably long. Even in scientific terms, the time scale of cosmology is long. It would make sense for us, then, to break up the Stelliferous Era into smaller periodizations that can be dealt with each in turn. Adams and Laughlin constructed a logarithmic time scale based on powers of ten, calling each of these powers of ten a “cosmological decade.” The Stelliferous Era comprises cosmological decades 7 to 15, so we can further break it down into three divisions of three cosmological decades each: cosmological decades 7-9 will be the Early Stelliferous, cosmological decades 10-12 will be the Middle Stelliferous, and cosmological decades 13-15 will be the Late Stelliferous.
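
To make the arithmetic of this periodization concrete, here is a minimal sketch (mine, not Adams and Laughlin’s; the function names are purely illustrative) that maps an age in years to its cosmological decade and maps a decade onto the threefold division proposed above:

```python
import math

def cosmological_decade(years_since_big_bang: float) -> int:
    """A cosmological decade n spans 10**n to 10**(n+1) years after the big bang."""
    return math.floor(math.log10(years_since_big_bang))

def stelliferous_division(decade: int) -> str:
    """Map a cosmological decade onto the threefold division of the Stelliferous Era used above."""
    if 7 <= decade <= 9:
        return "Early Stelliferous"
    if 10 <= decade <= 12:
        return "Middle Stelliferous"
    if 13 <= decade <= 15:
        return "Late Stelliferous"
    return "outside the Stelliferous Era"

# The universe is currently about 1.38e10 years old, which places us in the
# tenth cosmological decade, i.e., at the start of the Middle Stelliferous.
print(cosmological_decade(1.38e10))   # -> 10
print(stelliferous_division(10))      # -> Middle Stelliferous
```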

The Early Stelliferous

Another Big History periodization that has been employed, other than that of Adams and Laughlin, is Eric Chaisson’s tripartite distinction between the Energy Era, the Matter Era, and the Life Era. The Primordial Era and the Energy Era coincide until the transition point (or, if you like, the phase transition) when the energies released by the big bang coalesce into matter. This phase transition is the transition from the Energy Era to the Matter Era in Chaisson; for Adams and Laughlin this transition is wholly contained within the Primordial Era and may be considered one of the major events of the Primordial Era. This phase transition occurs at about the fifth cosmological decade, so that there is one cosmological decade of matter prior to that matter forming stars.

At the beginning of the Early Stelliferous the first stars coalesce from matter, which has now cooled to the point that this becomes possible for the first time in cosmological history. The only matter available at this time to form stars is the hydrogen and helium produced by the big bang. The first stars to light up after the big bang are called Population III stars, and their existence can only be hypothesized, because there are no confirmed observations of Population III stars. The oldest known star, HD 140283, sometimes called the Methuselah Star, is believed to be a Population II star, and is said to be metal-poor, or of low metallicity. To an astrophysicist, any element other than hydrogen or helium is a “metal,” and the spectra of stars are examined for the “metals” present to determine their order of appearance in galactic ecology.

The youngest stars, like our sun and other stars in the spiral arms of the Milky Way, are Population I stars and are rich in metals. The whole history of the universe up to the present is necessary to produce the high-metallicity younger stars, and these younger stars form from dust and gas that coalesce into a protoplanetary disk, of similarly high metal content, surrounding the young star. We can think of the stages of Population III, Population II, and Population I stars as the evolutionary stages of a galactic ecology that has produced structures of greater complexity. Repeated cycles of stellar nucleosynthesis, catastrophic supernovae, and new star formation from these remnants have produced the later, younger stars of high metallicity.

It is the high relative proportion of heavier elements that makes possible the formation of small rocky planets in the habitable zone of a stable star. The minerals that form these rocky planets are the result of what Robert Hazen calls mineralogical evolution, which we may consider to be an extension of galactic ecology on a smaller scale. These planets, in turn, have heavier elements distributed throughout their crust, which, in the case of Earth, human civilization has dug out of the crust and put to work manufacturing the implements of industrial-technological civilization. If Population II and Population III stars had planets (this is an open area of research in planet formation, without a definite answer as yet), it is conceivable that these planets might have harbored life, but the life on such worlds would not have had access to heavier elements, so any civilization that resulted would have had a difficult time creating an industrial or electrical technology.

The Middle Stelliferous

In the Middle Stelliferous, the processes of galactic ecology that produced and now sustain the Stelliferous Era have come to maturity. There is a wide range of galaxies consisting of a wide range of stars, running the gamut of the Hertzsprung–Russell diagram. It is a time of both galactic and stellar profusion, diversity, and fecundity. But even as the processes of galactic ecology reach their maturity, they begin to reveal the dissipation and dissolution that will characterize the Late Stelliferous Era and even the Degenerate Era to follow.

The Milky Way, which is a very old galaxy, carries with it the traces of the smaller galaxies that it has already absorbed in its earlier history — as, for example, the Helmi Stream. For the residents of the Milky Way and Andromeda galaxies, one of the most spectacular events of the Middle Stelliferous Era will be the merging of these two galaxies in a slow-motion collision taking place over millions of years, throwing some star systems entirely clear of the newly merged galaxies, and eventually resulting in the merging of the supermassive black holes that anchor the centers of each of these elegant spiral galaxies. The result is likely to be an elliptical galaxy not clearly resembling either predecessor (and sometimes called Milkomeda).

Eventually the Triangulum galaxy — the other large spiral galaxy in the Local Group — will also be absorbed into this swollen mass of stars. In terms of the cosmological time scales here under consideration, all of this happens rather quickly, as does the isolation of each of these merged local groups, which will persist as lone galaxies, each suspended like an island universe with no other galaxies available to observational cosmology. The vast majority of the history of the universe will take place after these events have transpired and are left in the long distant past — hopefully not forgotten, but possibly lost and unrecoverable.

The Tenth Decade

The tenth cosmological decade, comprising the years from 10^10 to 10^11 (10,000,000,000 to 100,000,000,000 years, or 10 Ga. to 100 Ga.) since the big bang, is especially interesting to us, like the Stelliferous Era as a whole, because this is where we find ourselves. Because of this we are subject to observation selection effects, and we must be particularly on guard for cognitive biases that grow out of the observation selection effects we experience. Just as it seems, when we look out into the universe, that we are in the center of everything, and all the galaxies are racing away from us as the universe expands, so too it seems that we are situated in the center of time, with a vast eternity preceding us and a vast eternity following us.

Almost everything that seems of interest to us in the cosmos occurs within the tenth decade. It is arguable (though not definitive) that no advanced intelligence or technological civilization could have evolved prior to the tenth decade. This is in part due to the need to synthesize the heavier elements — we could not have developed nuclear technology had it not been for naturally occurring uranium, and it is the radioactive decay of uranium in Earth’s crust that contributes significantly to the temperature of Earth’s interior and hence to Earth being a geologically active planet. By the end of the tenth decade, all galaxies will have become isolated as “island universes” (once upon a time the cosmological model for our universe) and the “end of cosmology” (as Krauss and Scherrer put it) will be upon us, because observational cosmology will no longer be able to study the large-scale structures of the universe.

The tenth decade, thus, is not only the period when it becomes possible for an intelligent species to evolve, to establish an industrial-technological civilization on the basis of heavier elements built up through nucleosynthesis and supernova explosions, and to employ these resources to launch itself as a spacefaring civilization; it is also the only period in the history of the universe when such a spacefaring civilization can gain a true foothold in the cosmos and establish an intergalactic civilization. After local galactic groups coalesce into enormous single galaxies, and all other similarly coalesced galaxies have passed beyond the cosmological horizon and can no longer be observed, an intergalactic civilization is no longer possible on principles of science and technology as we understand them today.

It is sometimes said that, for astronomers, galaxies are the basic building blocks of the universe. The uniqueness of the tenth decade, then, can be expressed as its being the only time in cosmological history during which a spacefaring civilization can emerge and then go on to assimilate and unify the basic building blocks of the universe. It may well happen that, by the time of million-year-old and even billion-year-old supercivilizations, sciences and technologies will have been developed far beyond any understanding possible for us today, and some form of intergalactic relationship may continue after the end of observational cosmology; but, if this is the case, the continued intergalactic organization must rest on principles not known to us today.

The Late Stelliferous

In the Late Stelliferous Era, after the end of cosmology, each isolated local galactic group, now merged into a single giant assemblage of stars, will continue its processes of star formation and evolution, ever so slowly using up all the hydrogen produced in the big bang. In the Late Stelliferous Era the universe will have passed “Peak Hydrogen,” and can therefore only look forward to the running down of the processes of galactic ecology that have sustained it up to this time.

The end of cosmology will mean a changed structure of galactic ecology. Even if civilizations can find a way around their cosmological isolation through advanced technology, the processes of nature will still be bound by familiar laws of nature, which, being highly rigid, will not have changed appreciably even over billions of years of cosmological evolution. Where light cannot travel, matter cannot travel either, and so any tenuous material connection between galactic groups will cease to play any role in galactic ecology.

The largest-scale structures that we know of in the universe today — superclusters and filaments — will continue to expand, cool, and dissipate. We can imagine a bird’s-eye view of the future universe (if only a bird could fly over the universe entire), with its large-scale structures no longer in touch with one another but still constituting a structure, rarefied by expansion, stretched by gravity, and subject to the evolutionary processes of the universe. This future universe (which we may have to stop calling the universe, as it will have lost its unity) stands in relation to its current structure as the isolated and strung-out continents of Earth today stand in relation to earlier continental structures, such as the last supercontinent, Pangaea, that preceded the present disposition of continents (though keep in mind that there have been at least five supercontinent cycles since the formation of Earth and the initiation of its tectonic processes).

Near the end of the Stelliferous Era, there is no longer any free hydrogen to be gathered together by gravity into new suns. Star formation ceases. At this point, the fate of the brilliantly shining universe of stars and galaxies is sealed; the Stelliferous Era has arrived at functional extinction, i.e., the population of late Stelliferous Era stars continues to shine but is no longer viable. Galactic ecology has shut down. Once star formation ceases, it is only a matter of time before the last of the stars to form burn themselves out. Stars can be very large, very bright and short lived, or very small, scarcely a star at all, very dim, cool, and consequently very long lived. Red dwarf stars will continue to burn dimly long after all the main sequence stars like the sun have burned themselves out, but eventually even the dwarf stars, burning through their available fuel at a miserly rate, will burn out also.

The Post-Stelliferous Era

After the Stelliferous Era comes the Degenerate Era, with the two eras separated by what I have called the End-Stelliferous Mass Extinction Event. What the prospects are for continued life and intelligence in the Degenerate Era is something that I have considered in Who will read the Encyclopedia Galactica? and Addendum on Degenerate Era civilization, inter alia.

Our enormous and isolated galaxy will not be immediately plunged into absolute darkness. Adams and Laughlin (referred to above) estimate that our galaxy may have about a hundred small stars shining — the result of the collision of two or more brown dwarfs. Brown dwarf stars, at this point in the history of the cosmos, contain what little hydrogen remains, since brown dwarf stars were not large enough to initiate fusion during the Stelliferous Era. However, if two or more brown dwarfs collide — a rare event, but in the vast stretches of time in the future of the universe rare events will happen eventually — they may form a new small star that will light up like a dim candle in a dark room. There is a certain melancholy grandeur in attempting to imagine a hundred or so dim stars strewn through the galaxy, providing a dim glow by which to view this strange and unfamiliar world.

Our ability even to outline the large scale structures — spatial, temporal, biological, technological, intellectual, etc. — of the extremely distant future is severely constrained by our paucity of knowledge. However, if terrestrial industrial-technological civilization successfully makes the transition to being a viable spacefaring civilization (what I might call extraterrestrial-spacefaring civilization) our scientific knowledge of the universe is likely to experience an exponential inflection point surpassing the scientific revolution of the early modern period.

An exponential improvement in scientific knowledge (supported on an industrial-technological base broader than the surface of a single planet) will help to bring the extremely distant future into better focus, and will give to our existential risk mitigation efforts both the knowledge that such efforts require and the technological capability needed to ensure the perpetual ongoing extrapolation of complexity driven by intelligent, conscious, and purposeful intervention in the world. And if not us, if not terrestrial civilization, then some other civilization will take up the mantle, and the far future will belong to them.

. . . . .

The Technological Frontier

12 December 2014


An Exercise in Techno-Philosophy

Quite some time ago, in Fear of the Future, I employed the phrase “the technological frontier,” but I did not follow up on this idea in a systematic way. In the popular mind, the high-technology futurism of the technological singularity has largely replaced the futurism of rocketships and jetpacks, so the idea of a technological frontier has particular resonance for us today. The idea is especially compelling in our time because technology seems to dominate our lives to an increasing degree, and this trend may only accelerate in the future. If our lives are shaped by technology today, how much more profoundly will they be shaped by technology in ten, twenty, fifty, or a hundred years? We would seem to be poised like pioneers on a technological frontier.

How are we to understand the human condition in the age of the technological frontier? The human condition is no longer merely the human condition, but it is the human condition in the context of technology. This was not always the case. Let me try to explain.

Humanity emerged from nature and lived entirely within the context of nature, but our long prehistory of integration into nature was occluded and all but lost after the emergence of civilization, and the origins of civilization were attended by the formulation of etiological mythologies that attributed supernatural causes to the manifold natural phenomena that shape our lives. We continued to live at the mercy of nature, but posited ourselves as outside nature. This led to a strangely conflicted conception of nature and a fraught relationship with the world from which we emerged.

The fraught human relationship to nature has been characterized by E. O. Wilson in terms of biophilia; the similarly fraught human relationship to technology might be similarly characterized in terms of technophilia, which I posited in The Technophilia Hypothesis (and further elaborated in Technophilia and Evolutionary Psychology). And as with biophilia and biophobia, so, too, while there is technophilia, there is also technophobia.

Today we have so transformed our world that the context of our lives is the technological world; we have substituted technology for nature as the framework within which we conduct the ordinary business of life. And whereas we once asked about humanity’s place in nature, we now ask, or ought to ask, what humanity’s place is or ought to be in this technological world with which we have surrounded ourselves. We ask these questions out of need, existential need, as there is both pessimism and optimism about a human future increasingly dominated by the technology we have created.

I attach considerable importance to the fact that we have literally surrounded ourselves with our technology. Technology began as isolated devices that appeared within the context of nature. A spear, a needle, a comb, or an arrow was set against the background of omnipresent nature. And the relationship of these artifacts to their sources in nature was transparent: the spear was made of wood, the needle and the comb of bone, the arrowhead of flint. Technological artifacts, i.e., individual instances of technology, were interpolations into the natural world. Over a period of more than ten thousand years, however, technological artifacts accumulated until they displaced nature and came to constitute the background against which nature is seen. Nature then became an interpolation within the context of the technological innovations of civilizations. We have gardens and parks and zoos that interpolate plants and animals into the built environment, which is the environment created by technology.

With technology as the environment and background of our lives, and not merely a collection of objects within our lives, technology now has an ontological dimension — it has its own laws, its own features, its own properties — and it has a frontier. We ourselves are objects within a technological world (hence the feeling of anomie that comes from being cogs within an enormous machine); we populate an environment defined and constituted by technology, and as such we bear some relationship to the ontology of technology as well as to its frontier. Technology conceived in this way, as a totality, suggests ways of thinking about technology parallel to our conceptions of humanity and civilization, inter alia.

One way to think about the technological frontier is as the human exploration of the technium. The idea of the technium accords well with the conception of the technological world as the context of human life that I described above. The “technium” is a term introduced by Kevin Kelly to denote the totality of technology. Here is the passage in which Kelly introduces the term:

“I dislike inventing new words that no one else uses, but in this case all known alternatives fail to convey the required scope. So I’ve somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the technium. The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. It includes intangibles like software, law, and philosophical concepts. And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections. For the rest of this book I will use the term technium where others might use technology as a plural, and to mean a whole system (as in “technology accelerates”). I reserve the term technology to mean a specific technology, such as radar or plastic polymers.”

Kevin Kelly, What Technology Wants

I previously wrote about the technium in Civilization and the Technium and The Genealogy of the Technium.

The concept of the technium can be extended in parallel to the schema I have applied to civilization in Eo-, Eso-, Exo-, Astro-, so that we have the concepts of the eotechnium, the esotechnium, the exotechnium, and the astrotechnium. (Certainly no one is going to employ this battery of unlovely terms I have coined — neither the words nor the concepts are immediately accessible — but I keep these ideas in the back of my mind and hope to develop them further, perhaps in a formal context in which symbols can be substituted for awkward words and the ideas can be presented.)

● Eotechnium: the origins of technology, wherever and whenever it occurs, terrestrial or otherwise

● Esotechnium: our terrestrial technology

● Exotechnium: the extraterrestrial technium exclusive of the terrestrial technium

● Astrotechnium: the technium in its totality throughout the universe; the terrestrial and extraterrestrial technium taken together in their cosmological context

I previously formulated these permutations of technium in Civilization and the Technium. In that post I wrote:

The esotechnium corresponds to what has been called the technosphere, mentioned above. I have pointed out that the concept of the technosphere (like other -spheres such as the hydrosphere and the sociosphere, etc.) is essentially Ptolemaic in conception, i.e., geocentric, and that to make the transition to fully Copernican conceptions of science and the world we need to transcend our Ptolemaic ideas and begin to employ Copernican ideas. Thus to recognize that the technosphere corresponds to the esotechnium constitutes conceptual progress, because on this basis we can immediately posit the exotechnium, and beyond both the esotechnium and the exotechnium we can posit the astrotechnium.

We can already glimpse the astrotechnium, in so far as human technological artifacts have already reconnoitered the solar system and, in the case of the Voyager space probes, have left the solar system and passed into interstellar space. The technium, then, beginning from the eotechnium that originated on Earth, now extends into space, and we can conceive the whole of this terrestrial technology together with our extraterrestrial technology as the astrotechnium.

It is a larger question yet whether there are other technological civilizations in the universe — it is the remit of SETI to discover if this is the case — and, if there are, there is an astrotechnium much greater than the one we have created by sending our probes through our solar system. A SETI detection of an extraterrestrial signal would mean that the technology of some other species had linked up with our technology, and by their transmission and our reception an interstellar astrotechnium would come into being.

The astrotechnium is both a technological frontier in itself and, because it extends throughout extraterrestrial space, a physical frontier as well. The exploration of the astrotechnium would be at once an exploration of the technological frontier and an exploration of an actual physical frontier. This is surely the frontier in every sense of the term. But there are other senses as well.

We can go one better than the taxonomy of the technium given above and also include the endotechnium, where the prefix “endo-” means “inside” or “interior.” The endotechnium names that familiar motif of contemporary thought in which virtual reality becomes indistinguishable from the reality of nature. Virtual reality is immersion in the endotechnium.

I have noted (in An Idea for the Employment of “Friendly” AI) that one possible employment of friendly AI would be the on-demand production of virtual worlds for our entertainment (and possibly also our education). One would presumably instruct one’s AI interface (which already has all human artistic and intellectual accomplishments stored in its databanks) that one wishes to enter into a particular story. The AI generates the entire world virtually, and one employs one’s preferred interface to step into the world of the imagination. Why would one so immersed choose to emerge again?

One of the responses to the Fermi paradox is that any sufficiently advanced civilization that had developed to the point of being able to generate virtual reality of a quality comparable to ordinary experience would thereafter devote itself to the exploration of virtual worlds, turning inward rather than outward, forsaking the wider universe outside for the universe of the mind. In this sense, the technological frontier represented by virtual reality is the exploration of the human imagination (or, for some other species, the exploration of the alien imagination). This exploration was formerly carried out in literature and the arts, but we seem poised to enact this exploration in an unprecedented way.

There are, then, many senses of the technological frontier. Is there any common framework within which we can grasp the significance of these several frontiers? The most famous representative of the role of the frontier in history is of course Frederick Jackson Turner, for whom the Turner Thesis is named. At the end of his famous essay on the frontier in American life, Turner wrote:

“From the conditions of frontier life came intellectual traits of profound importance. The works of travelers along each frontier from colonial days onward describe certain common traits, and these traits have, while softening down, still persisted as survivals in the place of their origin, even when a higher social organization succeeded. The result is that to the frontier the American intellect owes its striking characteristics. That coarseness and strength combined with acuteness and inquisitiveness; that practical, inventive turn of mind, quick to find expedients; that masterful grasp of material things, lacking in the artistic but powerful to effect great ends; that restless, nervous energy; that dominant individualism, working for good and for evil, and withal that buoyancy and exuberance which comes with freedom — these are traits of the frontier, or traits called out elsewhere because of the existence of the frontier.”

Frederick Jackson Turner, “The Significance of the Frontier in American History,” which constitutes the first chapter of The Frontier In American History

Turner is not widely cited today, and his work has fallen into disfavor (especially targeted by the “New Western Historians”), but much that Turner observed about the frontier is not only true, but more generally applicable beyond the American experience of the frontier. I think many readers will recognize, in the attitudes of those today on the technological frontier, the qualities that Turner described in the passage quoted above, attributing them specially to the American frontier, which for Turner was “an area of free land, its continuous recession, and the advance of American settlement westward.”

The technological frontier, too, is an area of free space — the abstract space of technology — the continuous recession of this free space as frontier technologies migrate into the ordinary business of life even while new frontiers are opened, and the advance of pioneers into the technological frontier.

One of the attractions of a frontier is that it is distant from the centers of civilization, and in this sense represents an escape from the disciplined society of mature institutions. The frontier serves as a refuge; the most marginal elements of society naturally seek the margins of society, at the periphery, far from the centers of civilization. (When I wrote about the center and periphery of civilization in The Farther Reaches of Civilization I could just as well have expressed myself in terms of the frontier.)

In the past, the frontier was defined in terms of its (physical) distance from the centers of civilization, but the world of high technology being created today is a product of the most technologically advanced centers of civilization, so that the technological frontier is defined by its proximity to the centers of civilization, understood as the centers of innovation and production for industrial-technological civilization.

The technological frontier nevertheless exists on the periphery of many of the traditional symbols of high culture that were once definitive of civilizational centers; in this sense, the technological frontier may be defined as the far periphery of the traditional center of civilization. If we identify civilization with the relics of high culture — painting, sculpture, music, dance, and even philosophy, all understood in their high-brow sense (and everything that might have featured symbolically in a seventeenth century Vanitas painting) — we can see that the techno-philosophy of our time has little sympathy for these traditional markers of culture.

The frontier has been the antithesis of civilization — civilization’s other — and the further one penetrates the frontier, moving always away from civilization, the nearer one approaches the absolute other of civilization: wildness and wilderness. The technological frontier offers to the human sense of adventure a kind of wildness distinct from that of nature, as well as the intellectual adventure of traditional culture. Although the technological frontier is in one sense antithetical to post-apocalyptic visions of formerly civilized individuals transformed into noble savages (visions usually marked by technological rejectionism), there is also a sense in which the technological frontier is like the post-apocalyptic frontier in its radical rejection of bourgeois values.

If we take the idea of the technological frontier in the context of the STEM cycle, we would expect that the technological frontier would have parallels in science and engineering — a scientific frontier and an engineering frontier. In fact, the frontier of scientific knowledge has been a familiar motif since at least the middle of the twentieth century. With the profound disruptions of scientific knowledge represented by relativity and quantum theory, the center of scientific inquiry has been displaced into an unfamiliar periphery populated by strange and inexplicable phenomena of the kind that would have been dismissed as anomalies by classical physics.

The displacement of traditional values of civilization, and even of traditional conceptions of science, gives the technological frontier its frontier character even as it emerges within the centers of industrial-technological civilization. In The Interstellar Imperative I asserted that the central imperative of industrial-technological civilization is the propagation of the STEM cycle. It is at least arguable that the technological frontier is both a result and a cause of the ongoing STEM cycle, which experiences its most unexpected advances when its scientific, technological, and engineering innovations seem to be at their most marginal and peripheral. A civilization that places itself within its own frontier in this way is a frontier society par excellence.

. . . . .


Léonce Crenier

The word “precarity” is quite recent, and does not appear in the Oxford English Dictionary, but has appeared in the titles of several books. The term mostly derives from left-leaning organized labor, and has come into use to describe the lives of workers in precarious circumstances. Wikipedia defines precarity as “a condition of existence without predictability or security, affecting material or psychological welfare.”

Dorothy Day

Dorothy Day, writing in The Catholic Worker of May 1952 (“Poverty and Precarity”), and coming from a context of both Catholic monasticism and labor activism, cites a certain “saintly priest… from Martinique,” now known to be Léonce Crenier, who is quoted as saying:

“True poverty is rare… Nowadays communities are good, I am sure, but they are mistaken about poverty. They accept, admit on principle, poverty, but everything must be good and strong, buildings must be fireproof. Precarity is rejected everywhere, and precarity is an essential element of poverty. That has been forgotten. Here we want precarity in everything except the church.”

Crenier had so absorbed and accepted the ideal of monastic poverty, like the Franciscans and the Poor Clares (or their modern equivalents, such as Simone Weil and Christopher McCandless), that he didn’t merely tolerate poverty; he embraced and celebrated poverty. Elsewhere Father Crenier wrote, “I noticed that real poverty, where one misses so many things, attracts singular graces amongst the monks, and in particular spiritual peace and joy.” Given the ideal of poverty and its salutary effect upon the spiritual life, Crenier celebrated not only poverty but also the condition in which the impoverished live, and this is precarity.

Pope John XXII receives the transcripts of the interrogation of Gui de Corvo. Fifteenth-century manuscript, Biblioteca Nazionale Braidense, Milan, Italy.

Recent studies have retained this leftist interest in the existential precarity of the lives of marginalized workers, but the monastic interest in poverty for the sake of an enhanced spiritual life has fallen away, and only the misery of precarity remains. Not only has the spiritual virtue of poverty been abandoned as an ideal, but it has, in a sense, been turned on its head, as the spiritual focus on poverty turns from its cultivation to its eradication. In this tradition, the recent sociology of Pippa Norris and Ronald Inglehart is especially interesting, as they have bucked the contemporary trend and given a new argument for secularization, which was once in vogue but has been very much out of favor since the rise of Islamic militancy as a political force in global politics. (I have myself argued that secularization was too readily and quickly abandoned, and have discussed the problem of secularization in relation to the confirmation and disconfirmation of ideas in history.)

Pippa Norris and Ronald Inglehart

Pippa Norris and Ronald Inglehart are perhaps best known for their book Sacred and Secular: Religion and Politics Worldwide. Their paper, Are high levels of existential security conducive to secularization? A response to our critics, is available online. They make the case that, despite the apparent rise of fundamentalist religious belief in the past several decades, and the anomalous instance of the US, which is wealthy and highly religious, it is not wealth itself that is a predictor of secularization, but rather what they call existential security (which may be considered the economic aspect of ontological security).

While Norris and Inglehart do not use the term “precarity,” clearly their argument is that existential precarity pushes individuals and communities toward the comforts of religion in the face of a hostile and unforgiving world: “…the public’s demand for transcendent religion varies systematically with levels of vulnerabilities to societal and personal risks and threats.” This really isn’t a novel thesis, as Marx pointed out long ago that societies created ideal worlds of justice when justice was denied them in this world, implying that when conditions in this world improve, there would be no need for imagined worlds of perfect justice. Being comfortably well off in the real world means there is little need to imagine comforts in another world.

Speaking on a purely personal (and anecdotal) basis, I find that Norris and Inglehart’s thesis rings true in my experience. I have relatives in Scandinavia and have visited the region many times. There, where secularization has gone the furthest, and the greater proportion of the population enjoys a high level of existential security, you can quite literally see the difference in people’s faces. In the US, people are hard-driving and always seemingly on edge; there is an underlying anxiety that I find very off-putting. But there is a good reason for this: people know that if they lose their jobs, they may well lose their homes and end up on the street. In Scandinavia, people look much more relaxed in their facial expressions, and they are not continually on the verge of flying into a rage. People are generally very confident about their lives and don’t worry much about the future.

One might think of the existential precarity of individuals as an ontogenic precarity, and this suggests the possibility of what might be called phylogenic precarity, or the existential precarity of social wholes. Fragile states exist in a condition of existential precarity. In such cases, there is a clear linkage between social precarity and individual precarity. In other cases, there may be no such linkage. It is possible for great individual precarity to coexist with social stability, and for social precarity to coexist with individual security. An example of the former is the contemporary US; an example of the latter would be some future society in which people are wealthy and comfortable but fail to see that their society is on the verge of collapse — like the Romans, say, in the second and third centuries AD.

The ultimate form of social precarity is the existential precarity of civilization. In some contexts it might be better to discuss the vulnerability and fragility of civilization in terms of existential precarity rather than existential risk or existential threat. I have previously observed that every existential risk is at the same time an existential opportunity, and vice versa (cf. Existential Risk and Existential Opportunity), so that the attempt to limit and contain existential risk may have the unintended consequence of limiting and containing existential opportunity. Thus the selfsame policies instituted for the sake of mitigating existential risk may contribute to the stagnation of civilization and therefore become a source of existential risk. The idea of existential precarity stands outside the dialectic of risk and opportunity, and therefore can provide us with an alternative formulation of existential risk.

Toxteth riot in Liverpool

How precarious is the life of civilized society? In some cases, social order seems to be balanced on a knife edge. During the 1981 Toxteth riots in Liverpool, which occurred in the wake of recession and high unemployment, as well as tension between the police and residents, Margaret Thatcher memorably said that, “The veneer of civilization is very thin.” But this is misleading. Urban riots are not a sign of the weakness of civilization, but are intrinsic to civilization itself, in the same way that war is intrinsic to civilization: it is not possible to have an urban riot without large-scale urban communities in the same way that it is not possible to have a war without the large-scale organizational resources of a state. Riots even occur in societies as stable as Sweden.

Margaret Thatcher

We can distinguish between the superficial precarity of a tense city that might erupt in riots at any time, which is the sort of precarity to which Margaret Thatcher referred, and a deeper, underlying precarity that does not manifest itself in riots, overturned cars, and burned buildings, but in the sudden and inexplicable collapse of a social order that is not followed by immediate recovery. In considering the possibility of the existential precarity of civilization, what we really want to know is whether there is a social equivalent of the passenger pigeon’s population collapse and subsequent extinction.

1981 Toxteth riot in Liverpool


In the 19th century, the passenger pigeon was the most common bird in North America. Following hunting and habitat loss, the species experienced a catastrophic population collapse between 1870 and 1890, finally going extinct in 1914. Less than fifty years before the species went extinct, there was no reason to suspect that the species was endangered, or even seriously reduced in numbers. When the end came, it came quickly; somehow the entire species reached a tipping point and could not recover from its collapse. Could this happen to our own species? Could this happen to our civilization? Despite our numbers and our apparent resilience, might we have some existential Achilles’ heel, some essential precarity, incorporated into the human condition of which we are blissfully unaware? And, if we do have some essential vulnerability, is there a way to address this?
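
The dynamics behind such a tipping point can be made concrete with a toy model. The sketch below is my own illustration, not a model fitted to actual passenger pigeon data: a population subject to a strong Allee effect recovers when it starts above a critical threshold, and the very same dynamics drive it to extinction when it starts below that threshold. All parameter values are hypothetical.

```python
# A minimal sketch (not a model of the actual passenger pigeon data) showing how an Allee
# effect produces a tipping point: a population starting above the threshold recovers toward
# carrying capacity, while the same dynamics drive a population starting below the threshold
# to extinction. All parameter values are hypothetical.

def step(n, r=0.2, k=1_000_000, a=200_000):
    """One generation of logistic growth with a strong Allee effect.

    n : current population
    r : intrinsic growth rate (hypothetical)
    k : carrying capacity (hypothetical)
    a : Allee threshold below which growth turns negative (hypothetical)
    """
    return max(n + r * n * (1 - n / k) * (n / a - 1), 0)

def simulate(n0, generations=100):
    population = n0
    for _ in range(generations):
        population = step(population)
    return population

if __name__ == "__main__":
    print(round(simulate(250_000)))  # starts above the threshold: recovers toward 1,000,000
    print(round(simulate(150_000)))  # starts below the threshold: collapses toward 0
```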

Zoological illustration from a volume of articles, The Passenger Pigeon, 1907 (Mershon, editor). Engraving from painting by John James Audubon in Pennsylvania, 1824.


I have argued elsewhere that civilization is becoming more robust over time, and I have not changed my mind about this, but neither is it the entire story about the existential security of civilization. In comparison to the precarity of the individual life, civilization is robust in the extreme. Civilization only betrays its existential precarity on time scales several orders of magnitude beyond the human experience of time, which at most encompasses several decades. As we ascend in temporal comprehensiveness, civilization steadily diminishes until it appears as a mere anomaly in the vast stretches of time contemplated in cosmology. At this scale, the longevity of civilization is no longer in question only because its brevity is all too obvious.

Joseph Voros discussing disciplined societies.


At the human time scale, civilization is as certain as the ground beneath our feet; at the cosmological time scale, civilization is as irrelevant as a mayfly. An appraisal of the existential precarity of civilization must take place at some time scale between the human and the cosmological. This brings me to an insight that I had after attending the 2014 IBHA conference last summer. On day 3 of the conference I attended a talk by futurist Joseph Voros that provided much food for thought, and while driving home I thought about a device he employed to discuss future forecasts, the future cone.

From The Future and Accessibility, OZeWAI Conference 2011, Jacqui van Teulingen, Director, Web Policy

This was my first exposure to the future cone, and I immediately recognized the possibility for conceptual clarification that it offers in thinking about the future. If we depict the future as a timeline extended indefinitely from the present, the line itself represents the most likely future, while progressively larger cones concentric with the line, radiating out from the present, contain increasingly less likely forecasts. Within the classes of forecasts defined by the spaces included within progressively larger cones, preferred or unwelcome futures can be identified by further subdivisions of the space defined by the cones. Voros offered an alliterative mnemonic device to differentiate the conceptual spaces defined by the future cone, from the center outward: the projected future, the probable future, the plausible future, the possible future, and the preposterous future.
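
To make the nesting of these classes concrete, here is a minimal sketch of my own (not anything proposed by Voros) in which each class is a wider cone containing the narrower ones; the numeric thresholds, and the notion of a scenario’s “deviation” from the projected baseline, are hypothetical stand-ins.

```python
# A minimal sketch of the nesting implied by the future cone: each of Voros's classes is a
# wider cone containing the narrower ones. The numeric thresholds are arbitrary illustrations,
# and "deviation" is a hypothetical stand-in for how far a scenario departs from the baseline.

FUTURE_CONE = [            # (label, maximum deviation from the projected baseline)
    ("projected", 0.05),
    ("probable", 0.25),
    ("plausible", 0.50),
    ("possible", 0.80),
    ("preposterous", float("inf")),
]

def classify(deviation: float) -> str:
    """Return the innermost cone that still contains a scenario with this deviation."""
    for label, limit in FUTURE_CONE:
        if deviation <= limit:
            return label
    return "preposterous"

if __name__ == "__main__":
    for d in (0.01, 0.3, 0.9, 5.0):
        print(d, "->", classify(d))
```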


When I was reflecting on this on the drive home, I realized that, in the short term, the projected future is almost always correct. We can say with a high degree of accuracy what tomorrow will be like. Yet in the long term future, the projected future is almost always wrong. Here when I speak of the projected future I mean the human future. We can project future events in cosmology with a high degree of accuracy — for example, the coming collision of the Milky Way and Andromeda galaxies — but we cannot say anything of significance about what human civilization will be like at this time, or indeed whether there will be any human civilization or any successor institution to human civilization. Futurist forecasting, in other words, goes off the rails in the mid-term future, though exactly where it does so is difficult to say. And it is precisely in this mid-term future — somewhere between human time scales and cosmological time scales — that the existential precarity of civilization becomes clear. Sometime between tomorrow and four billion years from now, when a swollen sun swallows up Earth, human civilization will be subject to unpredictable and unprecedented selection pressures that will either mean the permanent ruination of that civilization, or its transformation into something utterly unexpected.

What unforeseen forces will shape human life and civilization in the future? (First Contact, by Nikolai Nedbailo)


With this in mind, we can focus our conceptual exploration of the existential precarity, existential security, existential threat, and existential risk that bear upon civilization in the period of the mid-term future. How far can we narrow the historico-temporal window of the mid-term future of precarity? What are the selection pressures to which civilization will be subject during this period? What new selection pressures might emerge? Is it more important to focus on existential risk mitigation, or to focus on our civilization making the transition to a post-civilizational institution that will carry with it the memory of its human ancestry? These and many other related questions must assume the central place in our research.

. . . . .

About four billion years from now, when the sun is swelling into a red giant star, the Milky Way and Andromeda galaxies will merge, perhaps resulting in an elliptical galaxy. The universe will be an interesting place, but will human civilization be around to record the event?

. . . . .




Ken Baskin talking about big history and complexity theory.


Complexity (2)

Day 3 of the 2014 IBHA conference began with panel 32 in room 201, “Complexity (2).” Three speakers were scheduled, but one canceled so that more time was available to the other two. This turned out to be quite fortunate. This panel was, without question, one of the best I have attended. It began with Ken Baskin on “Sister Disciplines: Bringing Big History and Complexity Theory Together,” and continued with Claudio Maccone with “Entropy as an Evolution Measure (Evo-SETI Theory).”

Ken Baskin, author of the forthcoming The Axial Ages of World History: Lessons for the 21st Century, said that big history and complexity theory are “post-Newtonian disciplines that complement each other.” His subsequent exposition made a convincing case for this claim. He used the now-familiar concepts of complexity — complex adaptive systems (CAS), non-linearity, and attractors, strange and otherwise — to give an exposition of big history periodization. He presented historical changes as being “thick” — that is to say, not as brief transitional periods, but as extended transitional periods that led to even longer-term states of relative stability. According to his periodization, the hunter-gatherer era was stable, and was followed by the disruption of the agricultural revolution; this eventually issued in a stable “pre-axial” age, which was in turn disrupted by the Axial Age. The Axial Age transition lasted for several hundred years but gave way to somewhat stable post-Axial societies, and this in turn has been disrupted by a second axial age. According to Baskin, we have been in this second axial transition since about 1500 and have not yet settled down into a new, stable social regime.

Claudio Maccone on big history and SETI.


Claudio Maccone is an Italian SETI astronomer who has written a range of technical books, including Mathematical SETI: Statistics, Signal Processing, Space Missions and Deep Space Flight and Communications: Exploiting the Sun as a Gravitational Lens. His presentation was nothing less than phenomenal. My response is partly due to the fact that he addressed many of my interests. Before the IBHA conference a friend asked me what I would have talked about if I had given a presentation. I said that I would have talked about big history in relation to astrobiology, and specifically that I would like to point out the similarities between the emergent complexity schema of big history and the implicit levels of complexity in the Drake equation. This is exactly what Maccone did, and he did so brilliantly, with equations and data to back up his argument. Also, Maccone spoke like a professor giving a lecture, with an effortless mastery of his subject.

Maccone said that, for him, big history was simply an extension of the Drake equation — the Drake equation goes back some ten million years or so, and by adding some additional terms to the beginning of the Drake equation we can expand it to comprise the whole 13.7 billion years of cosmic history. I think that this was one of the best concise statements of big history that I heard at the entire conference, notwithstanding its deviation from most of the other definitions offered. The Drake equation is a theoretical framework that is limited only by the imagination of the researcher in revising its terms and expression. And Maccone has taken it much further yet.

Maccone has worked out a revision of the Drake equation that plugs probability distributions into the variables of the Drake equation (which he published as “The Statistical Drake Equation” in Acta Astronautica, 2010, doi:10.1016/j.actaastro.2010.05.003). His work is the closest thing that I have seen to being a mathematical model of civilization. All I can say is: get all his books and papers and study them carefully. It will be worth the effort.
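
To give a sense of what it means to plug probability distributions into the variables of the Drake equation, here is a minimal Monte Carlo sketch in the spirit of that approach. It is not Maccone’s formulation: his paper proceeds analytically, showing that the logarithm of a product of many independent random factors sums toward a normal distribution, so that N tends toward a lognormal distribution. The distributions and parameter values below are hypothetical placeholders chosen only for illustration.

```python
# A minimal Monte Carlo sketch in the spirit of the statistical Drake equation: each factor
# becomes a random variable rather than a point estimate, and N is the product of samples.
# All distributions and parameter values are hypothetical placeholders, not Maccone's.

import random

def sample_N(rng: random.Random) -> float:
    """Draw one sample of N = R* * fp * ne * fl * fi * fc * L (all values hypothetical)."""
    r_star = rng.lognormvariate(0.7, 0.5)   # star formation rate, stars per year
    f_p    = rng.uniform(0.5, 1.0)          # fraction of stars with planets
    n_e    = rng.uniform(0.1, 2.0)          # habitable planets per such system
    f_l    = rng.uniform(0.0, 1.0)          # fraction on which life arises
    f_i    = rng.uniform(0.0, 1.0)          # fraction developing intelligence
    f_c    = rng.uniform(0.0, 1.0)          # fraction producing detectable signals
    L      = rng.lognormvariate(7.0, 1.5)   # signal-producing lifetime, years
    return r_star * f_p * n_e * f_l * f_i * f_c * L

if __name__ == "__main__":
    rng = random.Random(42)
    samples = sorted(sample_N(rng) for _ in range(100_000))
    median = samples[len(samples) // 2]
    print(f"median N over 100,000 draws: {median:,.0f}")
```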

J. Daniel May looking at past futurism through science fiction films.


Big History and the Future

The next panel presented the most difficult decision of the conference, because in one room David Christian, Cynthia Brown, and others were discussing “Meaning in Big History: A Naturalistic Perspective,” but I chose instead to go to panel 39 in room 301, “Big History and the Future,” which was concerned with futurism, or, as is now said, “future studies.”

The session started out with J. Daniel May reviewing past visions of the future through a discussion of twentieth-century science fiction films and television programs, including Metropolis, Forbidden Planet, Lost in Space, Star Trek, and 2001. I have seen all of these, and, as was evident from the discussion following the talk, many others had as well, citing arcane details from the films in their comments.

Joseph Voros discussing disciplined societies.


Joseph Voros then presented “On the transition to ‘Threshold 9’: examining the implications of ‘sustainability’ for human civilization, using the lens of big history.” The present big history schematization of the past that is most common (but not universal, as evidenced by this conference) recognizes eight thresholds of emergent complexity. This immediately suggests the question of what the next threshold of emergent complexity will be, which would be the ninth threshold, thus making the “ninth threshold” a kind of cipher among big historians and a framework for discussing the future in the context of big history. Given that the current threshold of emergent complexity is fossil-fueled civilization (what I call industrial-technological civilization), and given that fossil fuels are finite, an obvious projection for the future concerns the nature of a post-fossil-fuel civilization.

Voros claimed that all scenarios for the future fall into four categories: 1) continuation, 2) collapse (which is also called “descent”), 3) disciplined society (presumably what Bostrom would call “flawed realization”), and 4) transformational society, in which the transformation might be technological or spiritual. Since Voros was focused on post-fossil-fuel civilization, his talk was throughout related to “peak oil” concerns, though at a much more sophisticated level. He noted that the debate over “peak oil” is almost irrelevant from a big history perspective, because whether oil runs out now or later doesn’t alter the fact that it will run out, being a finite resource renewable only over a period of time much greater than the time horizon of civilization. With this energy focus, he proposed that one of the forms of a “disciplined society” that could come about would be that of an “energy disciplined society.” Of the transformational possibilities he outlined four scenarios: 1) energy bonanza, 2) spiritual awakening, 3) brain/mind upload, and 4) childhood’s end.

After Voros, Cadell Last of the Global Brain Institute presented “The Future of Big History: High Intelligence to Developmental Singularity.” He began by announcing his “heretical” view that cultural evolution can be predicted. His subsequent talk revealed additional heresies without further trigger warnings. Last spoke of a coming era of cultural evolution in which the unit of selection is the idea (I was happy that he used “idea” instead of “meme”), and said that this future would largely be determined by “idea flows” — presumably analogous to the “energy flows” of Eric Chaisson that have played such a large role in this conference, as well as the gene flows of biological evolution. (“Idea flows” may be understood as a contemporary reformulation of “idea diffusion.”) This era of cultural evolution will differ from biological evolution in that idea flows, unlike gene flow in biological evolution, are not individual (they are cultural) and are not blind (conscious agents can plan ahead).

Last gave a wonderfully intuitive presentation of his ideas, and though it was the sort of thing that futurists recognize as familiar, I suspect much of what he said would strike the average listener as something akin to moral horror. Last said that, in the present world, biological and linguistic codes are in competition with each other, and gave the example, familiar to everyone, of having to choose whether to invest time and effort in biological descendants or in cultural descendants. Scarcity of our personal resources means that we are likely to focus on one or the other. Eventually, biological evolution will cease and all energies will be poured into cultural evolution. At this time, we will be free from the “tyranny of biology,” which requires that we engage in non-voluntary activities.

Camelo Castillo discussed major transitions in big history.


Reconceptualizations of Big History

For the final sessions divided into separate rooms I attended panel 44, “Reconceptualizations of Big History.” I came to this session primarily to hear Camelo Castillo speak on “Mind as a Major Transition in Big History.” Castillo, the author of Origins of Mind: A History of Systems, critiqued previous periodizations of big history, noting that they conflate changes in structure and changes in function. He then went on to define major transitions as “transitions from individuals to groups that utilize novel processes to maintain novel structures.” With this definition, he went back to the literature and produced a periodization of six major transitions in big history. Not yet finished, he hypothesized that by looking for mind in the brain we are looking in the wrong place. Since all early major transitions involved both structures and processes, and a movement from individuals to groups, he argued that we should be looking for mind in social groups of human beings. The brain, he allowed, was implicated in the development of human social life, but social life is not reducible to the brain, and mind should be sought in theories of social intelligence.

Castillo’s work is quite rigorous and he defends it well, but I asked myself why we should not have different kinds of transitions at different stages of history and development, especially given that the kind of entities involved in the transition may be fundamentally distinct. Just as new or distinctive orders of existence require new or distinctive metrics for their measurement, so too new or distinctive orders of existence may come into being or pass out of being according to a transition specific to that kind of existent.

Guzman Hall, where most of the 2014 IBHA events took place.


Final Plenary Sessions

After the individual sessions were finished, there was a series of plenary sessions. There was a presentation of Chronozoom, Fred Spier presented “The Future of Big History,” and finally there was a panel discussion entirely devoted to questions and answers, with Walter Alvarez, Craig Benjamin, Cynthia Brown, David Christian, Fred Spier, and Joseph Voros fielding the questions.

After the intellectual intensity of the sessions, it was not a surprise that these plenary sessions came to be mostly about funding, outreach, teaching, and the practical infrastructure of scholarship.

And with that the conference was declared to be closed.

. . . . .



A distinction often employed in historiography is that between the diachronic and the synchronic. I have written about this distinction in several posts including Axes of Historiography, Ecological Temporality and the Axes of Historiography, Synchronic and Diachronic Geopolitical Theories, and Synchronic and Diachronic Approaches to Civilization.

It is common for this distinction to be explained by saying that the diachronic perspective is through time and the synchronic perspective is across time. I don’t find this explanation to be helpful or intuitively insightful. I prefer to say that the diachronic perspective is concerned with succession while the synchronic perspective is concerned with interaction within a given period of time. Sometimes I try to drive this point home by using the phrases “diachronic succession” and “synchronic interaction.”

In several posts I have emphasized that futurism is the historiography of the future, and history the futurism of the past. In this spirit, it is obvious that the future, like the past, can also be approached diachronically or synchronically. That is to say, we can think of the future in terms of a succession of events, one following upon another — what Shakespeare called “such a dependency of thing on thing, as e’er I heard in madness” — or in terms of the interaction of events within a given period of future time. Thus we can distinguish diachronic futurism and synchronic futurism. This is a difference that makes a difference.

One of the rare points at which futurism touches upon public policy and high finance is in planning for the energy needs of power-hungry industrial-technological civilization. If planners are convinced that the future of energy production lies in a particular power source, billions of dollars may follow, so real money is at stake. And sometimes real money is lost. When the Washington Public Power Supply System (abbreviated as WPPSS, and which came to be pronounced “whoops”) thought that nuclear power was the future for the growing energy needs of the Pacific Northwest, they started to build no fewer than five nuclear power facilities. For many reasons, this turned out to be a bad bet on the future, and WPPSS defaulted on 2.25 billion dollars of bonds.

The energy markets provide a particularly robust demonstration of synchrony, so that within the broadly defined “present” — that is to say, in the months or years that constitute the planning horizon for building major power plants — we can see a great number of interactions within the economy that resemble nothing so much as the checks and balances that the writers of the US Constitution built into the structure of the federal government. But while the founders sought political checks and balances to disrupt the possibility of any one part of the government becoming disproportionately powerful, the machinations of the market (what Adam Smith called the “invisible hand”) constitute economic checks and balances that often frustrate the best laid schemes of mice and men.

Energy markets are not only a concrete and pragmatic exercise in futurism, they are also a sector prone to great oversimplification and vulnerable to bubbles and panics that have contributed to a boom-and-bust cycle in the industry, with disastrous consequences. The captivity of energy markets to public perceptions has led to a lot of diachronic extrapolation of present trends in the overall economy and in the energy sector in particular. I’ve written some posts on diachronic extrapolation — The Problem with Diachronic Extrapolation and Diachronic Extrapolation and Uniformitarianism — in an attempt to point out some of the problems with straight line extrapolations of current trends (not to mention the problems with exponential extrapolation).

An example of diachronic extrapolation carried out in great detail is the book $20 Per Gallon: How the Inevitable Rise in the Price of Gasoline Will Change Our Lives for the Better by Christopher Steiner, which I discussed in Are Happy Days Here Again?. The book speculates on how the economy will change as gasoline prices continue to climb, and it is written as though nothing else would happen at the same time that gas prices are going up. If we could treat one energy source — like gasoline — in ideal isolation, this might be a useful exercise, but this isn’t the case.
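
The contrast can be made concrete with a toy calculation. The sketch below is purely illustrative and uses hypothetical numbers: the first function extrapolates a price trend in isolation (with parameters chosen so that it lands near the book’s titular figure), while the second damps the same trend with a crude feedback term standing in for the synchronic responses described in the next paragraph.

```python
# A minimal sketch of why straight-line (diachronic) extrapolation misleads: a price trend
# projected in isolation versus the same trend damped by a simple feedback standing in for
# synchronic interactions (substitution, new supply, efficiency gains). All numbers are
# hypothetical and purely illustrative.

def linear_extrapolation(price: float, annual_increase: float, years: int) -> float:
    """Project the trend forward as if nothing else changes."""
    return price + annual_increase * years

def with_feedback(price: float, annual_increase: float, years: int,
                  damping: float = 0.15) -> float:
    """Each year's increase is partly offset as higher prices call forth responses."""
    for _ in range(years):
        price += annual_increase
        price -= damping * (price - 3.0)   # 3.0 = hypothetical long-run anchor price
    return price

if __name__ == "__main__":
    print(linear_extrapolation(3.0, 0.85, 20))     # the straight-line projection
    print(round(with_feedback(3.0, 0.85, 20), 2))  # the same trend with countervailing forces
```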

When the price of fossil fuels increases, several things happen simultaneously. More investment comes into the industry, sources that had been uneconomical to tap start to become commercially viable, and other sources of energy that had been expensive relative to fossil fuels become more affordable relative to the increasing price of their alternatives. Also, with the passage of time, new technologies become available that make it both more efficient and more cost effective to extract fossil fuels previously not worth the effort to extract. These new technologies not only affect production, but also consumption: the extracted fossil fuels will be used much more efficiently than in the past. And any fossil fuels that lie untapped — such as, for example, the oil presumed to be under ANWR — are essentially banked in the ground for a future time when their extraction will be efficient, effective, and can be conducted in a manner consistent with the increasingly stringent environmental standards that apply to such resources.

Energy industry executives have in the past had difficulty in concealing their contempt for alternative and renewable resources, and for decades the mass media aided and abetted this by not taking these sources seriously. But that is changing now. The efficiency of solar electric and wind turbines has been steadily improving, and many European nation-states have proved that these technologies can be scaled up to supply an energy grid on an industrial scale. For those who look at the big picture and the long term, there is no question that solar electric will be a dominant form of energy; the only problem is that of storage, we are told. But the storage problem for solar electricity is a lot like the “eyesore” problem for wind turbines: it has only been an effective objection because the alternatives are not taken seriously, and propaganda rather than research has driven the agenda. The Earth is bathed in sunlight at all times, but one side is always dark; a global energy grid — well within contemporary technological means — could readily supply energy from the lighted side to the dark side.

Even this discussion is too limited. The whole idea of a “national grid” is predicated upon an anarchic international system of nation-states in conflict, and the national energy grid becomes in turn a way for nation-states to defend their geographical territory by asserting control of energy resources within that territory. There is no need for a national energy grid, or for each nation-state to have a proprietary grid. We possess the technology today for decentralized energy production and consumption that could move away from the current paradigm of a national energy grid of widely distributed consumption and centralized production.

But it is not my intention in this context to write about alternative energy, although this is relevant to the idea of synchrony in energy markets. I cite alternative energy sources because this is a particular blind spot for conventional thinking about energy. Individuals — especially individuals in positions of power and influence — get trapped in energy groupthink no less than strategic groupthink, and as a result of being virtually unable to conceive of any energy solution that does not conform to the present paradigm, those who make public energy policy are often blindsided by developments they did not anticipate. Unfortunately, they make this policy with public money, picking winners and losers, and are wrong much of the time, meaning losses to the public treasury.

When an economy, or a sector of the economy, is subject to stresses, that economy or sector may experience failure — whether localized and containable, or catastrophic and contagious. In the wake of the late financial crisis, we have heard about “stress testing” banks. Volatility in energy markets stress tests the components of the energy markets. Since this is a real-world event and not a test, different individuals respond differently. Individuals representing institutional interests respond as one would expect institutions to respond, but in a market as complex and as diversified as the energy market, there are countless small actors who will experiment with alternatives. Usually this experimentation does not amount to much, as the kind of resources that institutions possess are not invested in them, but this can change incrementally over time. The experimental can become a marginal sector, and a marginal sector can grow until it becomes too large to ignore.

All of these events in the energy sector — and more and better besides — are occurring simultaneously, and the actions of any one agent influence the actions of all other agents. It is a fallacy to consider any one energy source in isolation from others, but it is a necessary fallacy because no one can understand or anticipate all the factors that will enter into future production and consumption. Energy is the lifeblood of industrial-technological civilization, and yet it is beyond the capacity of that civilization to plan its energy future, which means that industrial-technological civilization cannot plan its own future, or foresee the form that it will eventually take.

Synchrony in energy markets occurs at an order of magnitude that defies all prediction, no matter how hard-headed or stubbornly utilitarian in conception the energy futurism involved. The big picture reveals patterns — that fossil fuels dominate the present, and solar electric is likely to dominate the future — but it is impossible to say in detail how we will get from here to there.

. . . . .






Why be concerned about the future? Will not the future take care of itself? After all, have we not gotten along just fine without being explicitly concerned with the future? The record of history is not an encouraging one, and suggests that we might do much better if only provisions were made for the future and problems were addressed before they became unmanageable. But are provisions being made for the future? Mostly, no. And there is a surprisingly simple reason that provisions are rarely made for the future, and that is because the future does not get funded.

The present gets funded, because the present is here with us to plead its case and to tug at our heart strings directly. Unfortunately, the past is also often too much with us, and we find ourselves funding the past because it is familiar and comfortable, not realizing that this works against our interests more often than it serves our interests. But the future remains abstract and elusive, and it is all too easy to neglect what we must face tomorrow in light of present crises. But the future is coming, and it can be funded, if only we will choose to do so.


Money, money, everywhere…

The world today is awash in money. Despite the aftereffects of the subprime mortgage crisis, the Great Recession, and the near breakup of the European Union, there has never been so much capital in the world seeking advantageous investment, nor has capital ever been so concentrated as it is now. The statistics are readily available to anyone who cares to do the research: a relatively small number of individuals and institutions own and control the bulk of the world’s wealth. What are they doing with this money? Mostly, they are looking for a safe place to invest it, and it is not easy to find a place to securely stash so much money.

The global availability of money is parallel to the global availability of food: there is plenty of food in the world today, notwithstanding the population now at seven billion and rising, and the only reason that anyone goes without food is due to political (and economic) impediments to food distribution. Still, even in the twenty-first century, when there is food sufficient to feed everyone on the planet, many go hungry, and famines still occur. Similarly, despite the world being awash in capital seeking investment and returns, many worthy projects are underfunded, and many projects are never funded at all.


What gets funded?

What does get funded? Predictable, institutional projects usually get funded (investments of the kind we formerly called “as safe as houses”). Despite the fact of sovereign debt defaults, nation-states are still a relatively good credit risk, but above all they are large enough to be able to soak up the massive amounts of capital now looking for a place to go. Major industries are also sufficiently large and stable to attract significant investment. And a certain amount of capital finds itself invested as venture capital in smaller projects.

Venture capital is known to be the riskiest of investments, and the venture capitalist expects that most of his ventures will fail and yield no returns whatever. The reward comes from the exceptional and unusual venture that, against all odds and out of proportion to the capital invested in it, becomes an enormous success. This rare success is so profitable that it more than makes up for all the other losses, and it has made venture capital one of the most intensively capitalized industries in the world.
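
The arithmetic of this logic is easy to sketch. The figures below are hypothetical and chosen only for illustration; the point is that a portfolio in which most stakes go to zero can still return a healthy multiple if a single outlier is large enough.

```python
# A minimal arithmetic sketch of the venture portfolio logic described above: most stakes go
# to zero, a few return modest multiples, and a single outlier pays for everything else.
# The portfolio composition and return multiples are hypothetical, for illustration only.

def portfolio_multiple(outcomes: list[tuple[int, float]]) -> float:
    """outcomes: (number of investments, return multiple on each), equal-sized stakes assumed."""
    total_invested = sum(count for count, _ in outcomes)
    total_returned = sum(count * multiple for count, multiple in outcomes)
    return total_returned / total_invested

if __name__ == "__main__":
    hypothetical = [
        (14, 0.0),   # fourteen failures: capital lost
        (4, 1.5),    # four modest exits
        (1, 4.0),    # one solid exit
        (1, 60.0),   # the rare outsized success
    ]
    print(round(portfolio_multiple(hypothetical), 2))  # 3.5x on the whole fund
```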


Risk for risk’s sake?

With the risk already so high in any venture capital project, the venture capitalist does not court additional, unnecessary risks; so, from among the small projects that receive venture funding, it is not the riskiest ventures but the least risky that get funded. That is to say, among the marginal investments available to capital, the investor tries to pick the ones that look as close to being a sure thing as anything can be, notwithstanding the fact that most of these ventures will fail and lose money. No one is seeking risk for risk’s sake; if risk is courted, it is only courted as a means to the end of a greater return on capital.

The venture capitalists have a formula. They invest a certain amount of money at what is seen to be a critical stage in the early development of a project, which is then set on a timetable of delivering its product to market and taking the company public at the earliest possible opportunity so that the venture capital investors can get their money out again in two to five years.

Given the already tenuous nature of the investments that attract venture capital, many ideas for investment are rejected out of hand on the flimsiest of pretexts, with scarcely any serious consideration, because they are thought to be impractical or too idealistic, or because they are not likely to yield a return quickly enough to justify a venture capital infusion.


Entrepreneurs, investors, and the spectrum of temperament

Why do the funded projects get funded, while other projects do not get funded? The answer to this lies in the individual psychology of the successful investor. The few individuals who accumulate enough capital to become investors in new enterprises largely become wealthy because they had one good idea and they followed through with relentless focus. The focus is necessary to success, but it usually comes at the cost of wearing blinders.

Every human being has both impulses toward adventure and experimentation, and desires for stability and familiarity. From the impulse to adventure comes entrepreneurship, the questioning of received wisdom, a willingness to experiment and take risks (often including thrill-seeking activities), and a readiness to roll with the punches. From the desire for stability comes discipline, focus, diligence, and all of the familiar, stolid virtues of the industrious. With some individuals, the impulse to adventure predominates, while in others the desire for stability is the decisive influence on a life.

With entrepreneurs, the impulse to adventure outweighs the desire for stability, while for financiers the desire for stability outweighs the impulse to adventure. Thus entrepreneurs and the investors who fund them constitute complementary personality types. But neither exemplifies the extreme end of either spectrum. Adventurers and poets are the polar representatives of the imaginative end of the spectrum, while the hidebound traditionalist exemplifies the polar extreme of the stable end of the spectrum.

It is the rare individual who possesses both adventurous imagination and discipline in equal measures; this is genius. For most, either imagination or discipline predominates. Those with an active imagination but little discipline may entertain flights of fancy but are likely to accomplish little in the real world. Those in whom discipline predominates are likely to be unimaginative in their approach to life, but they are also likely to be steady, focused, and predictable in their behavior.

Most people who start out with a modest stake in life yearn for greater adventures than an annual return of six percent. Because of the impulse to adventure, they are likely to take risks that are not strictly financially justified. Such an individual may be rewarded with unique experiences, but would likely have been more financially successful if they could have overcome the desire in themselves for adventure and focused on a disciplined plan of investment coupled with delayed gratification. If you can overcome this desire for adventure, you can make yourself reasonably wealthy (at the very least, comfortable) without too much effort. Despite the paeans we hear endlessly celebrating novelty and innovation, in fact discipline is far more important than creativity or innovation.

The bottom line is that the people who have a stranglehold on the world’s capital are not intellectually adventuresome or imaginative; on the contrary, their financial success is a selective result of their lack of imagination.


A lesson from institutional largesse

The lesson of the MacArthur fellowships is worth citing in this connection. When the MacArthur Foundation fellowships were established, the radical premise was to give money away to individuals who could then be freed to do whatever work they desired. When the initial fellowships were awarded, some in the press and some experiencing sour grapes ridiculed the fellowships as “genius grants,” implying that the foundation was being a little too loose and free in its largesse. Apparently the criticism hit home, as in successive rounds of naming MacArthur fellows the grants became more and more conservative, and critics mostly ceased to call them “genius grants” while sniggering behind their hands.

Charitable foundations, like businesses, function in an essentially conservative, if not reactionary, social milieu, in which anything new is immediately suspect and the tried and true is favored. No one wants to court controversy; no one wants to be mentioned in the media for the wrong reason or in an unflattering context, so that anyone who can stir up a controversy, even where none exists, can hold this risk averse milieu hostage to their ridicule or even to their snide laughter.

Who serves on charitable boards? The same kind of unimaginative individuals who serve on corporate boards, and who make their fortunes through the kind of highly disciplined yet largely unimaginative and highly tedious investment strategies favored by those who tend toward the stable end of the spectrum of temperament.

Handing out “genius grants” proved to be too adventuresome and socially risky, and left those in charge of the grants open to criticism. A reaction followed, and conventionality came to dominate over imagination; institutional ossification set in. It is this pervasive institutional ossification that made the MacArthur awards so radical in the early days of the fellowships, when the MacArthur Foundation itself was young and adventuresome, but the institutional climate caught up with the institution and brought it to heel. It now comfortably reclines in respectable conventionality.


Preparing for the next economy

One of the consequences of a risk averse investment class (that nevertheless always talks about its “risk tolerance”) is that it tends to fund familiar technologies, and to fund businesses based on familiar technologies. Yet, in a technological economy the one certainty is that old technologies are regularly replaced by new technologies (a process that I have called technological succession). In some cases there is a straight-forward process of technological succession in which old technologies are abandoned (as when cars displaced horse-drawn carriages), but in many cases what we see instead is that new technologies build on old technologies. In this way, the building of an electricity grid was once a cutting edge technological accomplishment; now it is simply part of the infrastructure upon which the economy is dependent (technologies I recently called facilitators of change), and which serves as the basis of new technologies that go on to become the next cutting edge technologies in their turn (technologies I recently called drivers of change).

What ought to concern us, then, is not the established infrastructure of technologies, which will continue to be gradually refined and improved (a process likely to yield profits proportional to the incremental nature of the progress), but the new technologies that will be built using the infrastructure of existing technologies. Technologies, when introduced, have the capability of providing a competitive advantage when one business enterprise has mastered them while other business enterprises have not yet mastered them. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all, and also ceases to be a driver of change. Thus a distinction can be made between technologies that are drivers of change and established technologies that are facilitators of change, driven by other technologies, that is to say, technologies that are tools for the technologies that are in the vanguard of economic, social, and political change.

From the point of view both of profitability and social change, the art of funding visionary business enterprises is to fund those that will focus on those technologies that will be drivers of change in the future, rather than those that have been drivers of change in the past. This can be a difficult art to master. We have heard that generals always prepare for the last war that was just fought rather than preparing for the next war. This is not always true — we can name a list of visionary military thinkers who saw the possibilities for future combat and bent every effort to prepare for it, such as Giulio Douhet, Billy Mitchell, B. H. Liddell Hart, and Heinz Guderian — but the point is well taken, and is equally true in business and industry: financiers and businessmen prepare for the economy that was rather than the economy that will be.

The prevailing investment climate now favors investment in new technology start ups, but the technology in question is almost always implicitly understood to be some kind of electronic device to add to the growing catalog of electronic devices routinely carried about today, or some kind of software application for such an electronic device.

The very fact of risk averse capital coupled with entrepreneurs shaping their projects in such a way as to appeal to investors and thereby to gain access to capital for their enterprises suggests the possibility of the path not taken, and this path would be an enterprise constituted with the particular aim of building the future by funding its sciences, technology, engineering, and even its ideas, that is to say, by funding those developments that are yet to become drivers of change in the economy, rather than those that already are drivers of change in the economy, and therefore will slip into second place as established facilitators of the economy.


What is possible?

If there were more imagination on the part of those in control of capital, what might be funded? What are the possibilities? What might be realized by large scale investments into science, technology, and engineering, not to mention the arts and the best of human culture generally speaking? One possibility is that of explicitly funding a particular vision of the future by funding enterprises that are explicitly oriented toward the realization of aims that transcend the present.

Business enterprises explicitly oriented toward the future might be seen as the riskiest of risky investments, but there is another sense in which they are the most conservative of conservative investments: we know that the future will come, whether bidden or unbidden, although we don’t know what this inevitable future holds. Despite our ignorance as to what the future holds, we at least have the power — however limited and uncertain that power — to shape events in the future. We have no real power to shape events in the past, though many spin doctors try to conceal this impotency.

Those who think in explicit terms about the future are likely to seem like dreamers to an investor, and no one wants to be labeled a “dreamer,” as this is tantamount to being dismissed as a crank or a fool. Nevertheless, we need dreamers to give us a sense as to what might be possible in the future that we can shape, but of which we are as yet ignorant. The dreamer is one who has at least a partial vision of the future, and however imperfect this vision, it is at least a glimpse, and represents the first attempt to shape the future by imagining it.

Everyone who has ever dreamed big dreams knows what it is like to attempt to share these dreams and have them dismissed out of hand. Those who dismiss big dreams for the future are usually not content merely to ignore or to dismiss the dreamer; they seem to feel compelled to go beyond dismissal to ridicule, if not to shame, those who dream their dreams in spite of social disapproval.

The tactics of discouragement are painfully familiar, and are as unimaginative as they are unhelpful: that the idea is unworkable, that it is a mere fantasy, or it is “science fiction.” One also hears that one is wasting one’s time, that one’s time could be better spent, and there is also the patronizing question, “Don’t you want to have a real influence?”

There is no question that the attempt to surpass the present economic paradigm involves much greater risk than seeking to find a safe place for one’s money with the stable and apparent certainty of the present economic paradigm, but greater risks promise commensurate rewards. And the potential rewards are not limited to the particular vision of a particular business enterprise, however visionary or oriented toward the future. The large scale funding of an unconventional enterprise is likely to have unconventional economic outcomes. These outcomes will be unprecedented and therefore unpredictable, but they are far more likely to be beneficial than harmful.

There is a famous passage from Keynes’ General Theory of Employment, Interest and Money that is applicable here:

“If the Treasury were to fill old bottles with banknotes, bury them at suitable depths in disused coalmines which are then filled up to the surface with town rubbish, and leave it to private enterprise on well-tried principles of laissez-faire to dig the notes up again (the right to do so being obtained, of course, by tendering for leases of the note-bearing territory), there need be no more unemployment and, with the help of the repercussions, the real income of the community, and its capital wealth also, would probably become a good deal greater than it actually is. It would, indeed, be more sensible to build houses and the like; but if there are political and practical difficulties in the way of this, the above would be better than nothing.”

John Maynard Keynes, General Theory of Employment, Interest and Money, Book III, Chapter 10, VI

For Keynes, doing something is better than doing nothing, although it would be better still to build houses than to dig up banknotes buried for the purpose of stimulating economic activity. But if it is better to do something than to do nothing, and if it is better to do something constructive like building houses rather than to do something pointless like digging holes in the ground, how much better must it not be to build a future for humanity?

If some of the capital now in search of an investment were to be systematically directed into projects that promised a larger, more interesting, more exciting, and more comprehensive future for all human beings, the eventual result would almost certainly not be that which was originally intended, but whatever came out of an attempt to build the future would be an unprecedented future.

The collateral effect of funding a variety of innovative technologies is likely to be that, as Keynes wrote, “…the real income of the community, and its capital wealth also, would probably become a good deal greater than it actually is.” Even for the risk averse investor, this ought to be too good of a prospect to pass up.


Where there is no vision, the people perish

What is the alternative to funding the future? Funding the past. It sounds vacuous to say so, but there is not much of a future in funding the past. Nevertheless, it is the past that gets funded in the present socioeconomic investment climate.

Why should the future be funded? Despite our fashionable cynicism, even the cynical need a future in which they can believe. Funding a hopeful vision of the future is the best antidote to hopeless hand-wringing and despair.

Who could fund the future if they wanted to? Any of the risk averse investors who have been looking for returns on their capital and imagining that the world can continue as though nothing were going to change as the future unfolds.

What would it take to fund the future? A large scale investment in an enterprise conceived from its inception as concerned both to be a part of the future as it unfolds, and focused on a long term future in which humanity and the civilization it has created will be an ongoing part of the future.

. . . . .



Technologies may be drivers of change or facilitators of change, the latter employed by the former as the technologies that enable the development of technologies that are drivers of change; that is to say, technologies that are facilitators of change are tools for the technologies that are in the vanguard of economic, social, and political change. Technologies, when introduced, have the capability of providing a competitive advantage when one business enterprise has mastered them while other business enterprises have not yet mastered them. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all. At that point of its mature development, a technology also ceases to be a driver of change and becomes a facilitator of change.

Any technology that has become a part of the infrastructure may be considered a facilitator of change rather than a driver of change. Civilization requires an infrastructure; industrial-technological civilization requires an industrial-technological infrastructure. We are all familiar with infrastructure such as roads, bridges, ports, railroads, schools, and hospitals. There is also the infrastructure that we think of as “utilities” — water, sewer, electricity, telecommunications, and now computing — which we build into our built environment, retrofitting old buildings and sometimes entire older cities in order to bring them up to the standards of technology assumed by the industrialized world today.

All of the technologies that now constitute the infrastructure of industrial-technological civilization were once drivers of change. Before the industrial revolution, the building of ports and shipping united coastal communities in many regions of the world; the Romans built a network of roads and bridges; in medieval Europe, schools and hospitals became a routine part of the structure of cities; early in the industrial revolution railroads became the first mechanized form of rapid overland transportation. Consider how the transcontinental railroad in North America and the trans-Siberian railway in Russia knitted together entire continents, and their role as transformative technologies should be clear.

Similarly, the technologies we think of as utilities were once drivers of change. Hot and cold running water and indoor plumbing, still absent in much of the world, did not become common in the industrialized world until the past century, but early agricultural and urban centers only came into being with the management of water resources, which reached a height in the most sophisticated cities of classical antiquity, with water supplied by aqueducts and sewage taken away by underground drainage systems that were superior to many in existence today. With the advent of natural gas and electricity as fuels for home and industry, industrial cities were retrofitted for these services, and have since been retrofitted again for telecommunications, and now computers.

The most recent technology to have a transformative effect on socioeconomic life was computing. In the past several decades — since the Second World War, when the first digital, programmable electronic computers were built for code breaking (the Colossus in the UK) — computer technology grew exponentially and eventually affected almost every aspect of life in industrialized nation-states. During this period of time, computing has been a driver of change across socioeconomic institutions. Building a faster and more sophisticated computer has been an end in itself for technologists and computer science researchers. While this will continue to be the case for some time, computing has begun to make the transition from being a driver of change in and of itself to being a facilitator of change in other areas of technological innovation. In other words, computers are becoming a part of the infrastructure of industrial-technological civilization.

The transformation of the transformative technology of computing from a driver of change into a facilitator of change for other technologies has been recognized for more than ten years. In 2003 an article by Nicholas G. Carr, Why IT Doesn’t Matter Anymore, stirred up a significant controversy when it was published. More recently, Mark R. DeLong, in Research computing as substrate, calls computing a substrate instead of an infrastructure, though the idea is much the same. DeLong writes of computing: “It is a common base that supports and nurtures research work and scholarly endeavor all over the university.” Although computing is also a focus of research work and scholarly endeavor in and of itself, it also serves a larger supporting role, not only in the university, but also throughout society.

Although today we still fall far short of computational omniscience, the computer revolution has happened, as evidenced by the pervasive presence of computers in contemporary socioeconomic institutions. Computers have been rapidly integrated into the fabric of industrial-technological civilization, to the point that those of us born before the computer revolution, and who can remember a world in which computers were a negligible influence, can nevertheless only with difficulty remember what life was like without computers.

Despite, then, what technology enthusiasts tell us, computers are not going to revolutionize our world a second time. We can imagine faster computers, smaller computers, better computers, computers with more storage capacity, and computers running innovative applications that make them useful in unexpected ways, but the pervasive use of computers that has already been achieved gives us a baseline for predicting future computer capacities, and these capacities will be different in degree from earlier computers, but not different in kind. We already know what it is like to see exponential growth in computing technology, and so we can account for this; computers have ceased to be a disruptive technology, and will not become a disruptive technology a second time.

Recently quantum computing made the cover of TIME magazine, together with a number of hyperbolic predictions about how quantum computing will change everything (the quantum computer is called “the infinity machine”). There have been countless articles about how “big data” is going to change everything also. Similar claims are made for artificial intelligence, and especially for “superintelligence.” An entire worldview has been constructed — the technological singularity — in which computing remains an indefinitely disruptive technology, the development of which eventually brings about the advent of the Millennium — the latter suitably re-conceived for a technological age.

Predictions of this nature are made precisely because a technology has become widely familiar, which is almost a guarantee that the technology in question is now part of the infrastructure of the ordinary business of life. One can count on being understood when one makes predictions about the future of the computer, in the same way that one might have been understood in the late nineteenth or early twentieth century if making predictions about the future of railroads. But in so far as this familiarity marks the transition in the life of a technology from being a driver of change to being a facilitator of change, such predictions are misleading at best, and flat out wrong at worst. The technologies that are going to be drivers of change in the coming century are not those that have devolved to the level of infrastructure; they are (or will be) unfamiliar technologies that can only be understood with difficulty.

The distinction between technologies that are drivers of change and technologies that are facilitators of change (like almost all distinctions) admits of a certain ambiguity. In the present context, one of these ambiguities is that of what constitutes a computing technology. Are computing applications distinct from computing? What of technologies for which computing is indispensable, and which could not have come into being without computers? This line of thought can be pursued backward: computers could not exist without electricity, so should computers be considered anything new, or merely an extension of electrical power? And electrical power could not have come about without the steam-powered and fossil-fueled industry that preceded it. This can be pursued back to the first stone tools, and the argument can be made that nothing new has happened, in essence, since the first chipped flint blade.

Perhaps the most obvious point of dispute in this analysis is the possibility of machine consciousness. I will acknowledge without hesitation that the emergence of machine consciousness would be a potentially revolutionary development, and it would constitute a disruptive technology. Machine consciousness, however, is frequently conflated with artificial intelligence and with superintelligence, and we must distinguish machine consciousness from both. Artificial intelligence of a rudimentary form is already crucial to the automation of industry; machine consciousness would be the artificial production, in a machine substrate, of the kind of consciousness that we personally experience as our own identity, and which we infer to be at the basis of the action of others (what philosophers call the problem of other minds).

What makes the possibility of machine consciousness interesting to me, and potentially revolutionary, is that it would constitute a qualitatively novel emergent from computing technology, and not merely another application of computing. Computers stand in the same relationship to electricity that machine consciousness would stand in relation to computing: a novel and transformational technology emergent from an infrastructural technology, that is to say, a driver of change that emerges from a facilitator of change.

The computational infrastructure of industrial-technological civilization is more or less in place at present, a familiar part of our world, like the early electrical grids that appeared in the industrialized world once electricity became sufficiently commonplace to become a utility. Just as the electrical grid has been repeatedly upgraded, and will continue to be upgraded for the foreseeable future, so too the computational infrastructure of industrial-technological civilization will be continually upgraded. But the upgrades to our computational infrastructure will be incremental improvements that will no longer be major drivers of change either in the economy or in sociopolitical institutions. Other technologies will emerge that will take that role, and they will emerge from an infrastructure that is no longer driving socioeconomic change, but is rather the condition of the possibility of this change.

. . . . .


. . . . .


. . . . .

Grand Strategy Annex

. . . . .


Søren Aabye Kierkegaard, 05 May 1813 – 11 November 1855.

Kierkegaard’s Concluding Unscientific Postscript is an impassioned paean to subjectivity, which follows logically (if Kierkegaard will forgive me for saying so) from Kierkegaard’s focus on the individual. The individual experiences subjectivity, and, as far as we know, nothing else in the world experiences subjectivity, so that if the individual is the central ontological category of one’s thought, then the subjectivity that is unique to the individual will be uniquely central to one’s thought, as it is to Kierkegaard’s thought.

Another way to express Kierkegaard’s interest in the individual is to identify his thought as consistently ideographic, to the point of ignoring the nomothetic (on the ideographic and the nomothetic cf. Axes of Historiography). Kierkegaard’s account of the individual and his subjectivity as an individual falls within an overall ontology of individuals, and therefore within a continuum of contingency. Thus, in a sense, Kierkegaard represents a kind of object-oriented historiography (as a particular expression of an object-oriented ontology). From this point of view, one can easily see Kierkegaard’s resistance to Hegel’s lawlike, i.e., nomothetic, account of history, in which individuals are mere pawns at the mercy of the cunning of Reason.

At the present time, however, I will not discuss the implications of Kierkegaard’s implicit historiography, but rather his implicit futurism, though the two — historiography and futurism — are mirror images of each other, and I have elsewhere quoted Friedrich von Schlegel that, “The historian is a prophet facing backwards.” The same concern for the individual and his subjectivity is present in Kierkegaard’s implicit futurism as in his implicit historiography.

In Kierkegaard’s Concluding Unscientific Postscript, written under the pseudonym Johannes Climacus, we find the following way to distinguish the objective approach from the subjective approach:

The objective accent falls on WHAT is said, the subjective accent on HOW it is said.

Søren Kierkegaard, Concluding Unscientific Postscript, Translated from the Danish by David F. Swenson, completed after his death and provided with Introduction and Notes by Walter Lowrie, Princeton: Princeton University Press, 1968, p. 181

A few pages prior to this in the text, Kierkegaard tells us a story about the importance of the subjective accent upon how something is said:

The objective truth as such, is by no means adequate to determine that whoever utters it is sane; on the contrary, it may even betray the fact that he is mad, although what he says may be entirely true, and especially objectively true. I shall here permit myself to tell a story, which without any sort of adaptation on my part comes direct from an asylum. A patient in such an institution seeks to escape, and actually succeeds in effecting his purpose by leaping out of a window, and prepares to start on the road to freedom, when the thought strikes him (shall I say sanely enough or madly enough?): “When you come to town you will be recognized, and you will at once be brought back here again; hence you need to prepare yourself fully to convince everyone by the objective truth of what you say, that all is in order as far as your sanity is concerned.” As he walks along and thinks about this, he sees a ball lying on the ground, picks it up, and puts it into the tail pocket of his coat. Every step he takes the ball strikes him, politely speaking, on his hinder parts, and every time it thus strikes him he says: “Bang, the earth is round.” He comes to the city, and at once calls on one of his friends; he wants to convince him that he is not crazy, and therefore walks back and forth, saying continually: “Bang, the earth is round!” But is not the earth round? Does the asylum still crave yet another sacrifice for this opinion, as in the time when all men believed it to be flat as a pancake? Or is a man who hopes to prove that he is sane, by uttering a generally accepted and generally respected objective truth, insane? And yet it was clear to the physician that the patient was not yet cured; though it is not to be thought that the cure would consist in getting him to accept the opinion that the earth is flat. But all men are not physicians, and what the age demands seems to have a considerable influence upon the question of what madness is.

Søren Kierkegaard, Concluding Unscientific Postscript, Translated from the Danish by David F. Swenson, completed after his death and provided with Introduction and Notes by Walter Lowrie, Princeton: Princeton University Press, 1968, p. 174

These themes of individuality and subjectivity occur throughout Kierkegaard’s work, always expressed with humor and imagination — Kierkegaard’s writing itself is a testament to the individuality he so valued — as especially illustrated in the passage above. Kierkegaard engages in philosophy by telling a joke; would that more philosophy were written with similar panache.

From Kierkegaard we can learn that how the future is presented can mean the difference between a vision that inspires the individual and a vision that sounds like madness — and this is important. Implicit Kierkegaardian futurism forces us to see the importance of the individual in a schematic conception of the future that is often impersonal and without a role for the individual that the individual would care to assume. Worse yet, there are often aspects of futurism that seem to militate against the individual.

One of the great failings of the communist vision of the future — which inspired many in the twentieth century, and was a paradigm of European manifest destiny such as I described in The Idea and Destiny of Europe — was its open contempt for the individual, which is a feature of most collectivist thought. Not only is it true that, “Where there is no vision, the people perish,” but one might also say that without a personal vision, the people perish.

One of the ways in which futurism has been presented in such a manner that almost seems contrived to deny and belittle the role of the individual is the example of the “twin paradox” in relativity theory. I have discussed this elsewhere (cf. Stepping Stones Across the Cosmos) because I find it so interesting. The twin paradox is used to explain one of the oddities of special relativity: a clock that is accelerated on a round trip records less elapsed time than a clock that remains stationary.

In the twin paradox, it is postulated that two twins on Earth say their goodbyes; one remains on Earth while the other travels a great distance (perhaps to another star) at relativistic velocities. When the traveling twin returns to Earth, he finds that his twin has aged beyond recognition and the two scarcely know each other. This already poignant story can be made all the more poignant by postulating an even longer journey in which an individual leaves Earth and returns to find everyone he knew long dead, and perhaps even the places, the cities, and the monuments once familiar to him now long vanished.
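To make the asymmetry concrete, here is a back-of-the-envelope calculation using the special-relativistic time dilation formula; this is a minimal sketch in which the speed and distance are illustrative assumptions and the acceleration and turnaround phases are idealized away:

\[
\Delta\tau = \Delta t \sqrt{1 - \frac{v^{2}}{c^{2}}}
\]

For a round trip to a star nine light-years away at nine-tenths the speed of light, the stay-at-home twin measures an elapsed time of twenty years, while the traveler’s own clock records only about 20 × √(1 − 0.81) ≈ 8.7 years; on longer voyages at higher speeds the discrepancy grows without limit.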

The twin paradox, as it is commonly told, is a story, and, moreover, is a parable of cosmic loneliness. We would probably question the sanity of any individual who undertook a journey of space exploration under these conditions, and rightly so. If we imagine this story set within a larger story, the only kind of character who would undertake such a journey would be the villain of the piece, or an outcast, like a crazed scientist maddened by his lack of human contact and obsessed exclusively with his work (a familiar character from fiction).

The twin paradox was formulated to relate the objective truth of our universe, but it sounds more like Kierkegaard’s story of a madman reciting an obvious truth: no one is fooled by the madman. As long as a human future in space is presented in such terms, it will sound like madness to most. What we need in order to present existential risk mitigation to the public are stories of space exploration that touch the heart in a way that anyone can understand. We need new stories of the far future and of the individual’s role in the future in order to bring home such matters in a way that makes the individual respond on a personal level.

A subjective experience is always presented in a personal context. This personal context is important to the individual. Indeed, we know this from many perspectives on human life, whether it be the call to heroic personal self-sacrifice for the good of the community that is found in collectivist thought, or the celebration of enlightened self-interest found in individualistic thought. Just as it is possible to paint either approach as a form of selfishness rooted in a personal context, it is possible to paint either as heroic for the same reason. In so far as a conception of history can be made real to the individual, and incorporates a personal context suggestive of subjective experiences, that conception of history will animate effective social action far more readily than even the most seductive vision of a sleek and streamlined future which nevertheless has no obvious place for the individual and his subjective experience.

The ultimate lesson here — and it is a profoundly paradoxical lesson, worthy of the perversity of human nature — is this: the individual life serves as the “big picture” context by which the individual’s isolated experiences derive their value.

When we think of “big picture” conceptions of history, humanity, and civilization, we typically think in impersonal terms. This is a mistake. The big picture can be equally formulated in personal or impersonal terms, and it is the vision that is formulated in personal terms that speaks to the individual. In so far as the individual accepts this personal vision of the big picture, the vision informs the individual’s subjective experiences.

The narratives of existential risk would do well to learn this lesson.

. . . . .


. . . . .


. . . . .

Grand Strategy Annex

. . . . .



In my previous post, Akhand Bharat and Ghazwa-e-hind: Conflicting Destinies in South Asia, I discussed the differing manifest destinies of Pakistan and India in South Asia. I also placed this discussion in the context of Europe’s wars of the twentieth century and the Cold War. The conflicting destinies imagined by ideological extremists in Pakistan and India are more closely parallel to Europe’s wars of the twentieth century than to the Cold War, because while Europe’s wars escalated into global conflagrations, it was, at heart, conflicting manifest destinies in Europe that brought about these wars.

A manifest destiny is a vision for a people, that is to say, an imagined future, perhaps inevitable, for a particular ethnic or national community. Thus manifest destinies are produced by visionaries, or communities of visionaries. The latter, communities of visionaries, typically include religious organizations, political parties, professional protesters and political agitators, inter alia. We have become too accustomed to assuming that “visionary” is a good thing, but vision, like attempted utopias, goes wrong much more frequently than it goes well.

Perhaps the last visionary historical project to turn out well was that of the United States, which is essentially an Enlightenment-era thought experiment translated into the real world — supposing we could rule ourselves without kings, starting de novo, how would we do it? — and of course there would be many to argue that the US did not turn out well at all, and that whatever sociopolitical gains have been realized as a result of the implementation of popular sovereignty, the price has been too high. Whatever narrative one employs to understand the US, and however one values this political experiment, the US is like an alternative history of Europe that Europe itself did not explore, i.e., the US is the result of one of many European ideas that had a brief period of influence in Europe but which was supplanted by later ideas.

Utopians are not nice people who wish only to ameliorate the human condition; utopians are the individuals and movements who place their vision above the needs, and even the lives, of ordinary human beings engaged in the ordinary business of life. Utopians are idealists, who wish to see an ideal put into practice — at any cost. The great utopian movements of the twentieth century were identical to the greatest horrors of the twentieth century: Soviet communism, Nazi Germany, Mao’s Great Leap Forward and the Cultural Revolution, and the attempt by the Khmer Rouge to create an agrarian communist society in Cambodia. It was one of the Khmer Rouge slogans that, “To keep you is no benefit, to destroy you is no loss.”

The Second World War — that is to say, the most destructive conflict in human history — was a direct consequence of the Nazi vision for a utopian Europe. The ideals of a Nazi utopia are not widely shared today, but this is how the Nazis themselves understood their attempt to bring about a Judenrein slave empire in the East, with Nazi overlords ruling illiterate Slav peasants. Nazism is one of the purest exemplars in human history of the attempt to place the value of a principle above the value of individual lives. It could also be said that the Morgenthau Plan for post-war Germany (which I discussed in The Stalin Doctrine) was almost as visionary as the Nazi vision itself, though certainly less brutal and not requiring any genocide to be put into practice. Visionary adversaries sometimes inspire visionary responses, although the Morgenthau Plan was not ultimately adopted.

In the wake of the unprecedented destruction of the Second World War, the destiny of Europe has been widely understood to lie in European integration and unity. The attempt to unify Europe in our time — the European Union — is predicated upon an implicit idea of Europe, which is again predicated upon an implicit shared vision of the future. What is this shared vision of the future? I could maliciously characterize the contemporary European vision of the future as Fukuyama’s “end of history,” in which, “economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands,” constitute the only remaining social vision, and, “The struggle for recognition, the willingness to risk one’s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism,” have long since disappeared. …

After the horrors of the twentieth century, such a future might not sound too bad, and while it may constitute a kind of progress, this can no longer be understood as a manifest destiny; no one imagines that a unified Europe is one people with one vision; unified Europe is, rather, a conglomerate, and its vision is no more coherent or moving than the typical mission statement of a conglomerate. Indeed, we must view it as an open question whether a truly democratic society can generate or sustain a manifest destiny — and Europe today is, if anything, a truly democratic society. There are, of course, the examples of Athens at the head of the Delian League and the United States in the nineteenth century. I invite the reader to consider whether these societies were as thoroughly democratic as Europe today, and I leave the question open for the moment.

But Europe did not come to its democratic present easily or quickly. Europe has represented both manifest destinies and conflicting manifest destinies throughout its long history. Europe’s unusual productivity of ideas has given the world countless ideologies that other peoples have adopted as their own, even as the Europeans took them up for a time, only to later cast them aside. Europe for much of its history represented Christendom, that is to say, Christian civilization. In its role as Christian civilization, Europe resisted the Vikings, the Mongols, Russian Orthodox civilization after the Great Schism, Islam during the Crusades, later the Turk, another manifestation of Islam, and eventually Europeans fell on each other during the religious wars that followed the Protestant Reformation, with Catholics and Protestants representing conflicting manifest destinies that tore Europe apart with an unprecedented savagery and bloodthirstiness.

After Europe exhausted itself with fratricidal war inspired by conflicting manifest destinies, Europe came to represent science, and progress, and modernity, and this came to be a powerful force in the world. But modernity has more than one face, and by the time Europe entered the twentieth century, Europe hosted two mortal enemies that held out radically different visions of the future, the truly modern manifest destinies of fascism and communism. Europe again exhausted itself in fratricidal conflict, and it was left to the New World to sort out the peace and to provide the competing vision to the surviving communist vision that emerged from the mortal conflict in Europe. Now communism, too, has ceded its place as a vision for the future and a manifest destiny, leaving Russia again as the representative of Orthodox civilization, and Europe as the representative of democracy.

On the European periphery, Russia continues to exercise an influence in a direction distinct from that of the idea of Europe embodied in the European Union. Even as I write this, protesters and police are battling in Ukraine, primarily as a result of Russian pressure on the leaders of Ukraine not to more closely associate itself with Europe (cf. Europe’s Crisis in Ukraine by Swedish Foreign Minister Carl Bildt). Ukraine is significant in this connection, because it is a nation-state split between a western portion that shares the European idea and wants to be a part of Europe, and an eastern part that looks to Russia.

What does a nation-state on the European periphery look toward when it looks toward Russia? Does Russia represent an ideology or a destiny, if only on the European periphery and not properly European? As the leading representative of Orthodox civilization, Russia should represent some kind of vision, but what vision exactly? As I have attempted to explain previously in The Principle of Autocracy and Spheres of Influence, I remain puzzled by autocracy and forms of authoritarianism, and I don’t see that Russia has anything to offer other than a kinder, gentler form of autocracy than what the Tsars offered in earlier centuries.

Previously in The Evolution of Europe I wrote that, “The idea of Europe will not go away,” and, “The saga of Europe is far from over.” I would still say the same now, but I would qualify these claims. The idea of Europe remains strong for the Europeans, but it represents little in the way of a global vision, and while many seek to join Europe, as barbarians sought to join the Roman Empire, Europe represents a manifest destiny as little as the later Roman Empire represented anything. But Europe displaced into the New World, where its Enlightenment prodigy, the United States, continues its political experiment, still represents something, however tainted the vision.

The idea of Europe remains in Europe, but the destiny of Europe lies in the Western Hemisphere.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .

