12 July 2020


Pre-civilization non-civilization

Civilization between Non-Civilizations

Some time ago (almost ten years ago) I wrote a post about what I call proto-civilization, a term I use for those social and economic institutions that were the immediate precursors of actual civilization. A proto-civilization is something less than a civilization, but at the same time something more than hunter-gatherer nomadism. Now I would like to introduce a complementary conception for that which follows after, but is distinct from, civilization sensu stricto, and which I will call para-civilization. I find the need to introduce this concept as I continue my analysis of civilization and refine my formulations: I am increasingly using the term in my other writings on civilization, and I will need it for further ideas to follow (hopefully soon) in subsequent posts.


In some earlier work I had occasion to refer to post-civilizational institutions (e.g., in Civilization Beyond the Prediction Wall, inter alia). This is essentially the same idea as a para-civilization. A para-civilization is not quite a civilization, but it appears in the aftermath of civilization. Whether a para-civilization is more than civilization or less than civilization depends on the historical context in which it appears. The only para-civilizations that have existed on Earth to date have been the remnants of former civilizations that have collapsed or failed, have been conquered and submerged under another civilization, or have otherwise been reduced from a functional civilization to something less than a functional civilization. For example, in the wake of the Spanish conquest of the pre-Columbian civilizations of the New World, and the later collapse of the pristine civilizations of North America (more from Old World diseases than from conquest), many institutions of these civilizations continued in vestigial form, and some continue to the present day. The Mayan daykeepers are vestiges of Mayan civilization. But since Mayan civilization is now long defunct, these vestiges cannot be considered civilization proper, so they can be called para-civilization. In this historical case, para-civilization is less than civilization.

Civilization in the strict sense

When we consider the possibilities for and fate of civilization in the future, para-civilization might be something less than civilization sensu stricto or something more than civilization. If civilization as we have known it is overtaken and pre-empted by some post-civilizational institution, or by some kind of non-civilization that is more powerful than civilization (say, an intelligence explosion and technological singularity, which is not a social institution at all, but a post-biological replacement of all social institutions), then para-civilization may be something more than civilization, and not less, as has been the case with historical examples of para-civilization.


Given the definition of civilization that I employ — an economic infrastructure joined to a conceptual framework by a central project — I can then define a para-civilization as an institution following after a civilization that fulfills this definition, but which itself possesses an institutional structure distinct from that of civilization so defined. In other words, a para-civilization is an institution with a simpler institutional structure, a changed institutional structure, or a more complex institutional structure than that of civilization. A simpler institutional structure could result from any of the constituent institutions that comprise civilization failing while one or some of the other constituent institutions continue in existence after a fashion. A changed institutional structure could result from one of the constituent elements of civilization being replaced by some other institution. A more complex institutional structure could result from the addition of a novel institution to the existing institutions that comprise civilization.
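The three structural possibilities just enumerated can be made concrete with a small sketch. This representation is my own illustrative assumption, not a formalism from the text: a civilization is modeled as the set of its constituent institutions, and a successor institution is classified by comparing its set against the civilizational triple.

```python
# Illustrative sketch: a civilization modeled as the set of its
# constituent institutions. The triple follows the definition in the
# text; the classification mirrors the three cases of para-civilization
# (simpler, changed, or more complex institutional structure).

CIVILIZATION = {"economic infrastructure", "conceptual framework", "central project"}

def classify_successor(institutions: set[str]) -> str:
    """Classify a successor institution relative to civilization proper."""
    if institutions == CIVILIZATION:
        return "civilization"
    if institutions < CIVILIZATION:       # proper subset: institutions lost
        return "para-civilization (simpler structure)"
    if institutions > CIVILIZATION:       # proper superset: institution added
        return "para-civilization (more complex structure)"
    return "para-civilization (changed structure)"  # some institution replaced

# A remnant retaining only a conceptual framework (e.g. Mayan daykeepers):
print(classify_successor({"conceptual framework"}))

# Civilization plus a novel, hypothetical post-biological institution:
print(classify_successor(CIVILIZATION | {"machine intelligence"}))

# Economic infrastructure replaced by some other (hypothetical) institution:
print(classify_successor({"resource allocation system", "conceptual framework", "central project"}))
```

The set comparisons do the work here: a proper subset corresponds to institutional failure, a proper superset to institutional addition, and incomparable sets to institutional replacement.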

Post-civilization non-civilization

The above qualifications are made because civilization has repeatedly mutated over its history, in some cases changing so dramatically that one could plausibly argue that civilization as such had come to an end and a post-civilizational order was now a fact of life. This is most obviously the case with the industrial revolution, which transformed agricultural civilization, and is still transforming civilization today as I write this. However, as I analyze contemporary industrial civilization I still see the institutional structure of economic infrastructure, conceptual framework, and central project. It could be argued that the replacement of an agricultural infrastructure by an industrial infrastructure is an instance of a changed institutional structure such as described in the above paragraph, but from a sufficiently abstract point of view, both agricultural and industrial infrastructures are engaged in extracting energy from the biosphere for human ends, so that this civilizational function is invariant over time.

Civilization in an extended sense, including civilization, its precursor proto-civilization, and its subsequent para-civilization

The institutions themselves have changed — the economic infrastructure has changed most dramatically, and it has dragged the other constituent institutions of civilization along with it — but the institutional structure, and the inter-relationships among these institutions, have remained essentially invariant. Therefore I argue that civilization is continuous from its earliest appearance on Earth up through the present day of industrialized civilization. However, something could conceivably occur in the future, even in the near future, that would so transform one or several of the constituent institutions of civilization, or add to these institutions, that we could no longer call the resulting social institution a civilization. It would then be a para-civilization.

Civilization between non-civilizations

The historical periodization consisting of the sequence proto-civilization–civilization–para-civilization constitutes an historical idealization not likely reflected in any actual historical civilizations, because it is an intentionally simplified abstraction for the study of civilization. A further idealized periodization would be constituted by recognizing the historical periods before and after any large scale social institutions whatever: non-civilization–proto-civilization–civilization simpliciter–para-civilization–non-civilization. One can distinguish the case in which the non-civilization preceding civilization and the non-civilization following it are identical from the case in which they differ, though I will not discuss this distinction at the moment.

As non-civilizations, proto-civilization and para-civilization are distinct from suboptimal civilizations. These terms describe conditions that obtain prior to the advent of, or after the extinction of, civilization sensu stricto, and must therefore be analyzed in terms of the distinct institutional structures that they may exhibit. Therefore we distinguish between non-civilizations that are similar to civilizations, being either the precursor to civilization or the descendant of civilization, and suboptimal civilizations, having passed at least the initial threshold of civilization before experiencing conditions detrimental to the development of civilization. Suboptimal civilizations are civilizations, though ruined, fallen, flawed, thwarted, or otherwise falling short of flourishing.

. . . . .

Grand Strategy Annex

. . . . .

Technologies may be drivers of change or facilitators of change, the latter employed by the former as the technologies that enable the development of technologies that are drivers of change; that is to say, technologies that are facilitators of change are tools for the technologies in the vanguard of economic, social, and political change. A technology, when introduced, can provide a competitive advantage when one business enterprise has mastered it while other business enterprises have not yet mastered it. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all. At that point of its mature development, a technology also ceases to be a driver of change and becomes a facilitator of change.

Any technology that has become a part of the infrastructure may be considered a facilitator of change rather than a driver of change. Civilization requires an infrastructure; industrial-technological civilization requires an industrial-technological infrastructure. We are all familiar with infrastructure such as roads, bridges, ports, railroads, schools, and hospitals. There is also the infrastructure that we think of as “utilities” — water, sewer, electricity, telecommunications, and now computing — which we build into our built environment, retrofitting old buildings and sometimes entire older cities in order to bring them up to the standards of technology assumed by the industrialized world today.

All of the technologies that now constitute the infrastructure of industrial-technological civilization were once drivers of change. Before the industrial revolution, the building of ports and shipping united coastal communities in many regions of the world; the Romans built a network of roads and bridges; in medieval Europe, schools and hospitals became a routine part of the structure of cities; early in the industrial revolution railroads became the first mechanized form of rapid overland transportation. Consider how the transcontinental railroad in North America and the trans-Siberian railway in Russia knitted together entire continents, and their role as transformative technologies should be clear.

Similarly, the technologies we think of as utilities were once drivers of change. Hot and cold running water and indoor plumbing, still absent in much of the world, did not become common in the industrialized world until the past century, but early agricultural and urban centers only came into being with the management of water resources, which reached a height in the most sophisticated cities of classical antiquity, with water supplied by aqueducts and sewage taken away by underground drainage systems that were superior to many in existence today. With the advent of natural gas and electricity as fuels for home and industry, industrial cities were retrofitted for these services, and have since been retrofitted again for telecommunications, and now computers.

The most recent technology to have a transformative effect on socioeconomic life was computing. In the past several decades — since the end of the Second World War, when the first digital, programmable electronic computers were built for code breaking (the Colossus in the UK) — computer technology grew exponentially and eventually affected almost every aspect of life in industrialized nation-states. During this period of time, computing has been a driver of change across socioeconomic institutions. Building a faster and more sophisticated computer has been an end in itself for technologists and computer science researchers. While this will continue to be the case for some time, computing has begun to make the transition from being a driver of change in and of itself to being a facilitator of change in other areas of technological innovation. In other words, computers are becoming a part of the infrastructure of industrial-technological civilization.

The transformation of the transformative technology of computing from a driver of change into a facilitator of change for other technologies has been recognized for more than ten years. In 2003 an article by Nicholas G. Carr, Why IT Doesn’t Matter Anymore, stirred up a significant controversy when it was published. More recently, Mark R. DeLong, in Research computing as substrate, calls computing a substrate instead of an infrastructure, though the idea is much the same. DeLong writes of computing: “It is a common base that supports and nurtures research work and scholarly endeavor all over the university.” Although computing is also a focus of research work and scholarly endeavor in and of itself, it also serves a larger supporting role, not only in the university, but also throughout society.

Although today we still fall far short of computational omniscience, the computer revolution has happened, as evidenced by the pervasive presence of computers in contemporary socioeconomic institutions. Computers have been rapidly integrated into the fabric of industrial-technological civilization, to the point that those of us born before the computer revolution, and who can remember a world in which computers were a negligible influence, can nevertheless only with difficulty remember what life was like without computers.

Despite, then, what technology enthusiasts tell us, computers are not going to revolutionize our world a second time. We can imagine faster computers, smaller computers, better computers, computers with more storage capacity, and computers running innovative applications that make them useful in unexpected ways, but the pervasive use of computers that has already been achieved gives us a baseline for predicting future computer capacities, and these capacities will be different in degree from earlier computers, but not different in kind. We already know what it is like to see exponential growth in computing technology, and so we can account for this; computers have ceased to be a disruptive technology, and will not become a disruptive technology a second time.

Recently quantum computing made the cover of TIME magazine, together with a number of hyperbolic predictions about how quantum computing will change everything (the quantum computer is called “the infinity machine”). There have been countless articles about how “big data” is going to change everything also. Similar claims are made for artificial intelligence, and especially for “superintelligence.” An entire worldview has been constructed — the technological singularity — in which computing remains an indefinitely disruptive technology, the development of which eventually brings about the advent of the Millennium — the latter suitably re-conceived for a technological age.

Predictions of this nature are made precisely because a technology has become widely familiar, which is almost a guarantee that the technology in question is now part of the infrastructure of the ordinary business of life. One can count on being understood when one makes predictions about the future of the computer, in the same way that one might have been understood in the late nineteenth or early twentieth century if making predictions about the future of railroads. But in so far as this familiarity marks the transition in the life of a technology from being a driver of change to being a facilitator of change, such predictions are misleading at best, and flat out wrong at worst. The technologies that are going to be drivers of change in the coming century are not those that have devolved to the level of infrastructure; they are (or will be) unfamiliar technologies that can only be understood with difficulty.

The distinction between technologies that are drivers of change and technologies that are facilitators of change (like almost all distinctions) admits of a certain ambiguity. In the present context, one of these ambiguities is that of what constitutes a computing technology. Are computing applications distinct from computing? What of technologies for which computing is indispensable, and which could not have come into being without computers? This line of thought can be pursued backward: computers could not exist without electricity, so should computers be considered anything new, or merely an extension of electrical power? And electrical power could not have come about without the steam- and fossil-fueled industry that preceded it. This can be pursued back to the first stone tools, and the argument can be made that nothing new has happened, in essence, since the first chipped flint blade.

Perhaps the most obvious point of dispute in this analysis is the possibility of machine consciousness. I will acknowledge without hesitation that the emergence of machine consciousness is a potentially revolutionary development, and it would constitute a disruptive technology. Machine consciousness, however, is frequently conflated with artificial intelligence and with superintelligence, and we must distinguish among these. Artificial intelligence of a rudimentary form is already crucial to the automation of industry; machine consciousness would be the artificial production, in a machine substrate, of the kind of consciousness that we personally experience as our own identity, and which we infer to be at the basis of the actions of others (what philosophers call the problem of other minds).

What makes the possibility of machine consciousness interesting to me, and potentially revolutionary, is that it would constitute a qualitatively novel emergent from computing technology, and not merely another application of computing. Computers stand in the same relationship to electricity that machine consciousness would stand in relation to computing: a novel and transformational technology emergent from an infrastructural technology, that is to say, a driver of change that emerges from a facilitator of change.

The computational infrastructure of industrial-technological civilization is more or less in place at present, a familiar part of our world, like the early electrical grids that appeared in the industrialized world once electricity became sufficiently commonplace to become a utility. Just as the electrical grid has been repeatedly upgraded, and will continue to be upgraded for the foreseeable future, so too the computational infrastructure of industrial-technological civilization will be continually upgraded. But the upgrades to our computational infrastructure will be incremental improvements that will no longer be major drivers of change either in the economy or in sociopolitical institutions. Other technologies will emerge that will take that role, and they will emerge from an infrastructure that is no longer driving socioeconomic change, but is rather the condition of the possibility of this change.

. . . . .

Learning to Love the Wisdom of Industrial-Technological Civilization

A confession of enthusiasm

Allow me to give free rein to my enthusiasm and to proclaim that there has never been a more exciting time in human history to be a philosopher than today. It is ironic that, at the same time, philosophers are probably held in lower esteem today than in any other period of human history. I have recently come to the opinion that it is intrinsic to the structure of industrial-technological civilization to devalue philosophy, but I have discussed the contemporary neglect of philosophy in several posts — Fashionable Anti-Philosophy, Further Fashionable Anti-Philosophy, and Beyond Anti-Philosophy among them — so that is not what I am going to write about today.

Today, on the contrary, I want to write about the great prospects that are now opening up to philosophy, despite its neglect in popular culture and its abuse by the enthusiasts of a positivistically-conceived science. And these prospects are not one but many. In some previous posts about object-oriented philosophy (also called object-oriented ontology, or OOO) I mentioned how exciting it was to be alive at a time when a new philosophical school was coming into being, especially at a time when academic philosophy seems to have stalled and relinquished any engagement with the world or any robust relationship to the ordinary lives of ordinary human beings. (As bitterly as the existentialists were denounced in their day, they did engage quite directly with contemporary events and contemporary life. Sartre made a fool of himself by meeting with Che Guevara and by mouthing Maoist claptrap in his later years, but he reached far more people than most philosophers of his generation, and like fellow existentialist Camus, did so through a variety of prose works, plays, and novels.) Now I see that we live in an age of the emergence of not one but of many different philosophical schools, and this is interesting indeed.

Philosophical periodization: schools of thought

Anyone who discusses so-called “schools” in philosophy is likely to run into immediate resistance, usually from those who have been characterized as belonging to a dubiously-conceived school. As soon as Sartre gave an explicit definition of existentialism as being based on the principle that existence precedes essence, Heidegger and Jaspers explicitly and emphatically denied that they were “existentialists.” And if we think of the hundreds of years of philosophical research and the hundreds of philosophers who can be lumped under the label of “scholasticism,” the identification of a school of “scholastic” philosophers would seem to be without any content whatsoever.

Nevertheless, some of these labels remain accurate even when and where they are rejected. While Heidegger and Jaspers rejected the principle that existence precedes essence, there is no question that all three of these great existentialist thinkers were preoccupied with the problematic human condition in the modern world. Similarly, the ordinary language philosophers had their disagreements, but they were unified by a method of the analysis of ordinary language.

The school of techno-philosophy

With this caveat in mind about identifying a philosophical “school” that will almost certainly be rejected by its practitioners, I am going to identify what I will call techno-philosophy. In regard to techno-philosophy I will identify no common goals, aspirations, beliefs, principles, ideas, or ideals that belong to the practitioners of techno-philosophy, but only the common object of philosophical analysis. Techno-philosophy offers an initial exploration of the novel ideas and novel facts of life of industrial society, especially those ideas and facts of life related to technologies that change rapidly within a single lifetime.

What makes the school of techno-philosophy interesting is not the special rigor or creativity of the philosophical thought in question — contemporary Anglo-American academic analytical philosophy is far more rigorous, and contemporary continental philosophy is far more imaginative — but rather the objects taken up by techno-philosophy. What are the objects of techno-philosophy? These objects are the novel productions of industrial-technological civilization, which appear and succeed each other in breathless rapidity. The fact of technological change, or even, if one would be so bold, rapid technological progress, is unprecedented. As an unprecedented aspect of life in industrial-technological civilization, rapid technological progress is an appropriate object for philosophical reflection.

The original position of technical society

The artifacts of technological progress have been produced in almost complete blindness as regards their philosophical significance and consequences. What techno-philosophy represents is the first attempt to make philosophical sense of the artifacts of technology taken collectively, on the whole, and with an eye to their extrapolation across space and through time. In fact, the very idea of technology taken whole may be understood as a conceptual innovation of techno-philosophy, and this very idea has been called the technium by Kevin Kelly. (I wrote about the idea of the technium in Civilization and the Technium and The Genealogy of the Technium.)

Thus we can count Kevin Kelly among techno-philosophers, and even Ray Kurzweil — though Kurzweil does not seem to be interested in philosophy per se, he has pushed the limits of thinking about machine intelligence to the point that he is on the verge of philosophical questions. Thinkers in the newly emerging tradition of the technological singularity and transhumanism belong to techno-philosophy. Academic philosopher David Chalmers, known for his contributions to the philosophy of mind (and especially known for formulating the “hard problem of consciousness,” the chasm between consciousness and attempted physicalistic accounts of mind), was invited to the last singularity conference and tried his hand at an essay in techno-philosophy.

Bostrom and Ćirković as techno-philosophers

The work of Nick Bostrom also represents techno-philosophy, as Professor Bostrom has engaged with a number of contemporary ideas such as superintelligence, the Fermi paradox, extraterrestrial life, transhumanism, posthumanism, the simulation hypothesis (which is a contemporary reformulation of the Cartesian evil demon), and existential risk (which is a contemporary reformulation and secularization of apocalypticism, but with a focus on mitigating apocalyptic scenarios).

Serbian astronomer and physicist Milan M. Ćirković has also dealt with many of the same questions in an admirably daring way (he has co-edited the volume Global Catastrophic Risks with Bostrom). What typifies the work of Bostrom and Ćirković — which definitely constitutes the best work in contemporary techno-philosophy — is their willingness to bring traditional philosophical sensibility to the analysis of contemporary ideas, and especially ideas enabled and facilitated by contemporary technology such as computing and space science.

The branches of industrial-technological philosophy

Industrial-technological civilization is created by practical men who eschew philosophy if they happen to be aware of it, while those with a bent for abstract or theoretical thought, who desire a robust engagement with the world, turn to science or mathematics, where abstract and theoretical ideas can have a direct and nearly immediate impact upon the development of industrial society. Techno-philosophy picks up where these indispensable men of industrial-technological civilization leave off.

Once we understand the relationship between techno-philosophy and industrial-technological civilization (and its disruptions), and knowing the cycle of science, technology and engineering that drives such a civilization, we can posit a philosophical analysis of each stage in the escalating spiral of industrial-technological civilization, with a philosophy of the science of this civilization, a philosophy of the technology of this civilization, and a philosophy of the engineering of this civilization. Techno-philosophy, then, is the philosophy of the technology of industrial-technological civilization.

Philosophy in a time of model drift

In parallel to the emerging school of techno-philosophy, there is a quasi-philosophical field of popular expositions of science by those actively working on the frontiers of the sciences that have been most profoundly transformed by recent developments, and which are still in the process of transformation. This is the philosophy of the science of industrial-technological civilization, and it is distinct from traditional philosophy of science. The rapid developments in cosmology and physics in particular have led to model drift in these fields, and those scientists who are working on these concepts feel the need to give these abstract and theoretical conceptions a connection to ordinary human experience.

Here I have in mind the books of Brian Greene, such as his exposition of string theory, The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, as well as criticisms of string theory such as Peter Woit’s Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Some of these books are more widely ranging and therefore more philosophical, such as David Deutsch’s The Fabric of Reality: The Science of Parallel Universes — and Its Implications, while some appeal to a traditional conception of “natural philosophy” as in David Grinspoon’s Lonely Planets: The Natural Philosophy of Alien Life. While these works do not constitute “techno-philosophy” as I have characterized it above, they stand in a similar relationship to the civilization the thought of which they represent.

The prospects for techno-philosophy

As techno-philosophy grows in scope, rigor, depth, and methodological sophistication, it promises to give to industrial-technological civilization something this civilization never wanted and never desired, but of which it is desperately in need: Depth. Gravitas. Intellectual seriousness. Disciplined reflection on the human condition. In a word: wisdom.

If there is anything the world needs today, it is wisdom.

. . . . .

A Note on the Great Filter

29 October 2012


Are we ourselves, as the sole hominid species, the Great Filter?

Parochialism, ironically, knows no bounds. Our habit of blinkering ourselves — what visionary poet William Blake called “mind-forged manacles” — is nearly universal. Sometimes even the most sophisticated minds miss the simple things that are staring them in the face. Usually, I think this is a function of the absence of a theoretical context that would make it possible to understand the simple truth staring us in the face.

I have elsewhere written that one of the things that makes Marx a truly visionary thinker is that he saw the industrial revolution for what it was — a revolution — even while many who lived through this profound series of events were unaware that they were living through a revolution. So even if one’s theoretical context is almost completely wrong, or seriously flawed, the mere fact of having the more comprehensive perspective bequeathed by a theoretical understanding of contemporary events can be enough to make it possible for one to see the forest for the trees.

Darwin wrote somewhere (I can’t recall where as I write this, but will add the reference later when I run across it) that from his conversations with biologists prior to publishing The Origin of Species he knew how few were willing to think in terms of the mutability of species, but once he had made his theory public it was rapidly adopted as a research program by biologists, and Darwin suggested that countless facts familiar to biologists but hitherto not systematically incorporated into theory suddenly found a framework in which they could be expressed. Obviously, these are my words rather than Darwin’s, and when I can find the actual quote I will include it here, but I think I have remembered the gist of the passage to which I refer.

It would be comical, if it were not so pathetic, that one of the first responses to Darwin’s systematic exposition of evolution was for people to look around for “transitional” evolutionary forms, and, strange to say, they didn’t find any. This failure to find transitional forms was interpreted as a problem for evolution, and expeditions were mounted in order to search for the so-called “missing link.”

The idea that the present consists entirely of life forms having attained a completed and perfected form, and that all previous natural history culminates in these finished forms of the present, therefore placing all transitional forms in the past, is a relic of teleological and equilibrium thinking. Once we dispense with the unnecessary and mistaken idea that the present is the aim of the past and exemplifies a kind of equilibrium in the history of life that can henceforth be iterated to infinity, it becomes immediately obvious that every life form is a transitional form, including ourselves.

A few radical thinkers understood this. Nietzsche, for example, understood this all-too-clearly, and wrote that, “Man is a rope stretched between the beasts and the Superman — a rope over an abyss. A dangerous crossing, a dangerous wayfaring, a dangerous looking-back, a dangerous trembling and halting. What is great in man is that he is a bridge and not a goal.” But assertions as bold as that of Nietzsche were rare. Darwin himself didn’t even mention human evolution in The Origin of Species (though he later came back to human origins in The Descent of Man): Darwin first offered a modest formulation of a radical theory.

So what has all this in regard to Marx and Darwin to do with the great filter, mentioned in the title of this post? I have written many posts about the Fermi paradox recently without ever mentioning the great filter, which is an important part of the way that the Fermi paradox is formulated today. If we ask, if the universe is supposedly teeming with alien life, and possibly also with alien civilizations, why we haven’t met any of them, we have to draw the conclusion that, among all the contingencies that must hold in order for an industrial-technological civilization to arise within our cosmos, at least one of these contingencies has tripped up all previous advanced civilizations, or else they would be here already (and we would probably be their slaves).

The contingency that has prevented any other advanced civilization in the cosmos from beating us to the punch is called the great filter. Many who write on the Fermi paradox, then, ask whether the great filter is in our past or in our future. If it is in our past, we have good reason to hope that our civilization can be a going concern. If it is in our future, we have a very real reason to be concerned, since if no other advanced civilization has made it through the great filter in their development, it would seem unlikely that we would prove the exception to that rule. So a neat way to divide the optimists and the pessimists in regard to the future of human civilization is whether someone places the great filter in the past (optimists) or in the future (pessimists).

I would like to suggest that the great filter is neither in our past nor in our future. The great filter is now; we ourselves are the great filter.

Human beings are the only species (on the only biosphere known to us) known to have created industrial-technological civilization. This is our special claim to intelligence. But before us there were numerous precursor species, and many hominid species that have since gone extinct. Many of these hominids (who cannot all be called human “ancestors” since many of them were dead ends on the evolutionary tree) were tool users, and it is for this reason that I noted in Civilization and the Technium that the technium is older than civilization (and more widely distributed than civilization). But now we are the only remaining hominid species on the planet. So in the past, we can already see a filter that has narrowed down the human experience to a single sentient and intelligent species.

Writers on the technological singularity and on the post-human and even post-biological future have speculated on a wide variety of possible scenarios in which post-human beings, industrial-technological civilization, and the technium will expand throughout the cosmos. If these events come to pass, the narrowing of the human experience to a single biological species will eventually be followed by a great blossoming of sentient and intelligent agents who may not be precisely human in the narrow sense, but in a wider sense will all be our descendants and our progeny. In this eventuality, the narrow bottleneck of humanity will expand exponentially from its present condition.

Looking at the present human condition from the perspective of multiple predecessor species and multiple future species, we see that the history of sentient and intelligent life on Earth has narrowed in the present to a single hominid species. The natural history of intelligence on the Earth has all its eggs in one basket. Our existence as the sole sentient and intelligent species means that we are the great filter.

If we survive ourselves, we will have a right to be optimistic about the future of intelligent life in the universe — but not until then. Not until we have been superseded, not until the human era has ended, ought we to be optimistic.

Man is a narrow strand stretched between pre-human diversity and post-human diversity.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .

The Preemption Hypothesis

20 October 2012


Three Little Words: “Where are they?”

In The Visibility Presumption I examined some issues in relation to the response to the Fermi paradox by those who claim that a technological singularity would likely overtake any technologically advanced civilization. I don’t see how the technological singularity visited upon an alien species makes them any less visible (in the sense of “visible” relevant to SETI) nor any less likely to be interested in exploration, adventure, or the quest for scientific knowledge — and finding us would constitute a major scientific discovery for some xenobiological species that had matured into a peer industrial-technological civilization.

The more I think about the Fermi paradox — and I have been thinking a lot about it lately — and the more I contextualize the Fermi paradox in my own emerging theory of civilization — which is a theory I am attempting to formulate in the purest tradition of Russellian generality so that it is equally applicable to human civilization and to any non-human civilization — the more I have come to think that our civilization is relatively isolated in the cosmos, being perhaps one of the few civilizations, or the only civilization, in the Milky Way, and one among only a handful of civilizations in the local cluster of galaxies or our supercluster.

Having an opinion on the Fermi paradox, and even making an attempt to argue for a particular position, does not however relieve one of the intellectual responsibility of exploring all aspects of the paradox. I have also come to think, while reflecting on the Fermi paradox, that the paradox itself has been fruitful in pushing those who care to think about it toward better formulations of the nature and consequences of industrial-technological civilization and of interstellar civilization — whether that of a supposed xenocivilization, or that of ourselves now and in the future.

The human experience of economic and technological growth in the wake of the industrial revolution has made us aware that if there are other peer species in the universe, and if these peer species undergo a process of the development of civilization anything like our own, then these peer species may also have experienced or will experience the escalating exponential growth of economic organization and technological complexity that we have experienced. Looking at our own civilization, again, it seems that the natural telos of continued economic and technological development — for we see no natural or obvious impediment to such continued development — is for human civilization to extend itself beyond the confines of the Earth and to establish itself throughout the solar system and eventually throughout the galaxy and beyond. This natural teleology has been called “The Expansion Hypothesis” by John M. Smart. Smart credits the expansion hypothesis to Kardashev, and while it is implicit in Kardashev, Kardashev himself does not formulate the idea explicitly and does not use the term “expansion hypothesis.”

Aristotle as depicted by Raphael in the Vatican stanze.

The natural teleology of civilization

I have taken the term “natural teleology” from contemporary philosophical expositions of Aristotle’s distinction between final causes and efficient causes. We can get something of a flavor of Aristotle’s idea of natural teleology (without going too deep into the controversy over final causes) from this paragraph from the second book of Aristotle’s Physics:

We also speak of a thing’s nature as being exhibited in the process of growth by which its nature is attained. The ‘nature’ in this sense is not like ‘doctoring’, which leads not to the art of doctoring but to health. Doctoring must start from the art, not lead to it. But it is not in this way that nature (in the one sense) is related to nature (in the other). What grows qua growing grows from something into something. Into what then does it grow? Not into that from which it arose but into that to which it tends. The shape then is nature.

Aristotle is a systematic philosopher, in which any one doctrine is related to many other doctrines, so that an excerpt really doesn’t do him justice; if the reader cares to, he or she can look into this more deeply by reading Aristotle and his commentators. But I must say this much in elaboration: the idea of natural teleology is problematic because it suggests a teleological conception of the whole of nature and all of its parts, and ever since Darwin we have understood that many claims to natural teleology are simply the expression of anthropic bias.

Still, kittens grow into cats and puppies grow into dogs (if they live to maturity), and it is pointless to deny this. What is important here is to tightly circumscribe the idea of natural teleology so that we don’t throw out the baby with the bathwater. The difficulty comes in distinguishing the baby from the bathwater in which the baby is immersed. Unless we want to end up with the idea of a natural teleology for human beings and the lives they live — this was the “human nature” that Sartre emphatically denied — we must deny final causes to agents, or find some other principle of distinction.

Are civilizations a natural kind for which we can posit a natural teleology, i.e., a form or a nature toward which they naturally tend as they grow and develop? My answer to this is ambiguous, but it is a principled ambiguity: yes and no. Yes, because some aspects of civilization are clearly developmental, when an institution is growing toward its fulfillment, while other aspects of civilization are clearly non-developmental and discontinuous. But civilization is so complex a whole that there is no simple way to separate the developmental and the non-developmental aspects of any one given civilization.

When we examine high points of civilization like Athens under Pericles or Florence during the Renaissance, we can recognize after the fact the slow build up to these cultural heights, which cannot clearly be distinguished from economic, civil, urban, and military development. The natural teleology of a civilization is the attainment of excellence in its particular mode of being, just as Aristotle said that the great-souled man aims at excellence in his life, but the path to that excellence is as varied as the different lives of individuals and the different histories of civilizations (Sam Harris might call them distinct peaks on the moral landscape).

Now, I don’t regard this brief exposition of the natural teleology of civilization as anything like a definitive formulation, but a definitive formulation of something so complex and subtle would require years of work. I will save this for another time, rather, counting on the reader’s charity (if not indulgence) to grant me the idea that at least in some respects civilizations tend toward fulfilling an apparent telos implicit in their developmental histories.

Early industrialization often had an incongruous if not surreal character, as in this painting of traditional houses silhouetted against the Madeley Wood Furnaces at Coalbrookdale; the incongruity and surrealism is a function of historical preemption.

The Preemption Hypothesis

What I am going to suggest here as another response to the Fermi paradox will sound to some like just another version of the technological singularity response, but I want to try to show that what I am suggesting is a more general conception than that — a potential structural failure of civilization, as it were — and as a more comprehensive concept the technological singularity response to the Fermi paradox can be subsumed under it as a particular instance of civilizational preemption.

The more general conception of a response to the silentium universi I call the preemption hypothesis. According to the preemption hypothesis, the ordinary course of development of industrial-technological civilization — which, if extrapolated, would seem to point to a nearly inevitable expansion of that civilization beyond its home planet and eventually across interstellar space as its natural teleology — is preempted by the emergence of a completely different kind of civilization, a radically different kind of civilization, or by post-civilization, so that the expected natural teleology of the preempted civilization is interrupted and never comes to fruition.

Thus “the lights go out” for a given alien civilization not because that civilization destroys itself (the Doomsday argument, Solution no. 27 in Webb’s book), and not because it collapses into permanent stagnation or even catastrophic civilizational failure (existential risks outlined by Nick Bostrom), and not because it completes a natural cycle of growth, maturity, decay, and death, but rather because it moves on to the next stage of social institution that lies beyond civilization. In simplest terms, the preemption hypothesis is that industrial-technological civilization, for which the expansion hypothesis holds, is preempted by post-civilization, for which the expansion hypothesis no longer holds. Post-civilization is a social institution derived from civilization but no longer recognizably civilization.

The idea of a technological singularity is one kind of potential preemption of industrial-technological civilization, but certainly not the only possible kind of preemption. There are many possible forms of civilizational preemption, and any attempted list of possible preemptions is limited only by our imagination and our parochial conception of civilization, the latter being informed exclusively by human civilization. It is entirely possible, as another example of preemption, that once a civilization attains a certain degree of technological development, everyone recognizes the pointlessness of the whole endeavor, all the machines are shut down, and the entire population turns to philosophical contemplation as the only worthy undertaking in life.

Acceleration and Preemption

I have previously argued that civilizations come to maturity in an Axial Age. The Axial Age is a conception due to Karl Jaspers, but I have suggested a generalization that holds for any society that achieves a sufficient degree of development and maturity. What Jaspers postulated for agricultural civilizations, and understood to be a turning point for the world entire, I believe holds for most civilizations, and that each stage in the overall development of civilization may have such a turning point.

Also, the history of human civilization reveals an acceleration. Nomadic hunter-gatherer society required hundreds of thousands of years before it matured into a condition capable of producing the great cave paintings of the upper Paleolithic (which I call the Axialization of the Nomadic Paradigm). The agricultural civilizations that superseded Paleolithic societies with the Neolithic Agricultural Revolution required thousands of years to mature to the point of producing what Jaspers called an Axial Age (The Axial Age for Jaspers).

Industrial civilization has not yet produced an industrialized axialization (though we may look back someday and understand one to have been achieved in retrospect), but the early modern civilization that seemed to be producing a decisively different way of life than the medieval period that preceded it experienced a catastrophic preemption: it did not come to fulfillment on its own terms. In Modernism without Industrialism I argued that modern civilization was effectively overtaken by the sudden and catastrophic emergence of industrialization, which set civilization on an entirely new course.

At each stage of the development of human society the maturation of that society, measured by the ability of that society to give a coherent account of itself in a comprehensive cosmological context (also known as mythology), has come sooner than the last, with the abortive civilization of modernism, Enlightenment, and the scientific revolution derailed and suddenly superseded by a novel and unprecedented development from within civilization. Modernism was preempted by accelerating events, and, specifically, by accelerating technology. It is possible that there are other forms of accelerating development that could derail or preempt that course of development that at present appears to be the natural teleology of industrial-technological civilization.

The Dystopian Hypothesis

Because the most obvious forms of the preemption hypothesis, in terms of the prospects for civilization most widely discussed today, would include the technological singularity, transhumanism, and The Transcension Hypothesis, and also because of the human ability (probably reinforced by the survival value of optimism) to look on the bright side of things, we may lose sight of equally obvious sub-optimal forms of preemption. Suboptimal forms of civilizational preemption, in which civilization does not pass on to developments of greater complexity and more technically difficult achievement, could be separately identified as the dystopian hypothesis.

In Miserable and Unhappy Civilizations I suggested that the distinction Freud made between neurotic misery and ordinary human unhappiness can be extended to encompass a distinction between a civilization in the grip of neurotic misery as distinct from a civilization experiencing ordinary civilizational unhappiness. I cited the example of the religious wars of early modern Europe as an example of civilization experiencing neurotic misery (and later went on to suggest that contemporary Islam is a civilization in the grip of neurotic misery). It is possible that neurotic misery at the civilizational level could be perpetuated across time and space so that neurotic misery became the enduring condition of civilization. (This might be considered an instance of what Nick Bostrom called “flawed realization” in his analysis of existential risk.)

It would likely be the case that a neurotically miserable civilization — which we might also call dystopian civilization, or a suboptimal civilization — would be incapable of anything beyond perpetuating its miserable existence from one day to the next. The dystopian hypothesis could be assimilated to solution no. 23 in Webb’s book, “They have no desire to communicate,” but there may be many reasons that a civilization lacks a desire to communicate over interstellar distances with other civilizations, so I think that the dystopian lack of motivation deserves its own category as a response to the Fermi paradox.

Whether or not chronic and severe dystopianism could be considered a post-civilization institution and therefore a preemption of industrial-technological civilization is open to question. I will think about this.

. . . . .


. . . . .


. . . . .


. . . . .

The Visibility Presumption

19 October 2012


SETI visibility

How “visible” is any given industrial-technological civilization from the perspective of interstellar distances? In this context, “visible” means some technological sign that can be detected by technological means. Most obviously this includes any electromagnetic spectrum emissions, but might also include large scale engineering and industrial projects that could be discerned at interstellar distances.

SETI is based upon what we will here call the visibility presumption. SETI can’t really operate in any other way; if you’re going to conduct a search at the present, there are only so many things you can do with current technology at interstellar distances.

In the future (and not all that long from now — in the next ten to twenty years), as I have mentioned in other posts, we will be able to take the spectrum of the atmospheres of exoplanets and from this information we will be able to conduct a genuine Search for Extra-Terrestrial Life (SETL, presumably) by identifying biochemistry in exoplanet atmospheres. Such techniques might also reveal the activities of a civilization prior to the kind of electromechanical technologies that typify industrial-technological civilization and imply the mastery of electromagnetic spectrum emissions.

For the time being, such investigations are just beyond present technology and, as a result, extraterrestrial life that falls below the threshold of industrial-technological civilization with a mastery of electromagnetic technologies is “invisible” to us. In other words, such sub-technological civilizations, or life without civilization, lacks SETI visibility.

Many have commented that, in light of SETI visibility, what we call the search for extraterrestrial intelligence ought to be called something like the search for extraterrestrial technology or the search for advanced extraterrestrial civilizations — but we can keep the familiar SETI acronym by thinking of it as the Search for Extra-Terrestrial Industrialization.

Employing our technology to search for signs of an alien technology is essentially to search for a peer civilization, i.e., another industrial-technological civilization: we are staring into the heavens and trying to find ourselves in the mirror. Not exactly ourselves, but something that would be identifiable as life, as intelligence, as rationality, as civilization, and as technology. The visibility presumption implicitly incorporates all of these variables and assumes that the parameters of each variable will be just enough to challenge our assumptions without being so profoundly alien as to be unidentifiable by us as species of a familiar genus.

Recent thought concerning the emergence of a post-human future in the wake of a technological singularity has given a great impetus to the discussion of beings or institutions so changed by rapidly evolving technology that either we would not be able to recognize them, or they would not find us sufficiently interesting to communicate with us. In other words, the technological singularity could make xenocivilization invisible to us or make us essentially invisible (in the sense of being beneath notice) to a xenocivilization, thus posing a challenge to the assumptions of the visibility presumption that another industrial-technological civilization in the galaxy would be a peer civilization and visible to us.

Since I have posted quite a bit recently about the Fermi paradox, I have taken the trouble to look up one of the more thorough books on the topic, If the universe is teeming with aliens… where is everybody?: fifty solutions to the Fermi paradox and the problem of extraterrestrial life by Stephen Webb. The author divides up the solutions according to three broad categories, “They Are Here,” “They Exist But Have Not Yet Communicated,” and “They Do Not Exist.” The Wikipedia entry on the Fermi paradox also incorporates a long list of possible responses to the silentium universi.

Solution No. 28 in Webb’s book, and also mentioned in the Wikipedia entry, is that xenocivilizations experience a technological singularity and therefore engage in the cosmic equivalent of Tune in, Turn on, Drop out. Here is what Webb writes:

“Vinge argues that if the Singularity is possible, then it will happen. It has something of the character of a universal law: it will occur whenever intelligent computers learn how to produce even more intelligent computers. If ETCs develop computers — since we routinely assume they will develop radio telescopes, we should assume they will develop computers — then the Singularity will happen to them, too. This, then, is Vinge’s explanation of the Fermi paradox: alien civilizations hit the Singularity and become super-intelligent, transcendent, unknowable beings.”

Stephen Webb, If the universe is teeming with aliens… where is everybody?: fifty solutions to the Fermi paradox and the problem of extraterrestrial life, New York: Praxis Publishing Ltd, 2002, p. 135

This is in itself a complex response to the Fermi paradox, because different people understand different things by the “technological singularity,” and it could just as plausibly be argued that a species experiencing a technological singularity would have its ability to communicate within the known universe exponentially increased and improved, which in turn poses the Fermi paradox in an even stronger form: if alien technological intelligence is so advanced, and has so many technological and intellectual resources at its command, why is it still unable to communicate across interstellar distances? (The protean character of the singularity thesis — anyone seems to be able to make of it what they will — is one reason that I have characterized it as a quasi-theological belief.)

Once the Fermi paradox is posed again in a stronger form, we must have recourse to other familiar responses, such as the singularity makes them lose interest in the outside world, or the technological singularity destroys the civilization in question, and so forth.

Does the idea of a technological singularity or a post-biological future (for ourselves or for some other xenobiological species) fundamentally challenge the visibility presumption?

Recently in Cyberspace and Outer Space I suggested that any civilization expanding beyond its native planet (or other naturally occurring celestial body that is the home of life elsewhere) would almost certainly have some kind of pervasively present radio or EM spectrum communication system — an internet for the solar system, which Heath Rezabek has called a solarnet — and such a network would be highly visible, and perhaps even unintentionally visible, even at interstellar distances.

This can be formulated in even a stronger form: because civilizations that remain exclusively based on their native planets are highly vulnerable to natural disasters, and therefore potentially vulnerable to natural disasters of sufficient scope and scale to result in extinction, such civilizations could be expected to have shorter lifespans and to therefore be less represented in the universe. In other words, exclusively planetary civilizations would be disproportionately selected for extinction.

What we would expect to find in our survey of the cosmos are those long-lived civilizations with the most robust survival mechanisms — redundancy, dispersion, diversity — and robust survival mechanisms of redundancy and dispersion will mean communication between dispersed centers of the civilization in question, and this communication would likely have a high visibility profile — although it could be argued that one survival mechanism would be to go to ground and remain silent so as not to be exterminated by hostile civilizations.

The same considerations of survivability would apply to any civilization that experienced a technological singularity and had subsequently made the transition to post-biological being. While it is fun to imagine mega-engineering projects like a matrioshka brain, a ringworld, an Alderson disk or a Dyson sphere, such massive projects would be very vulnerable, even for an advanced civilization. Horace said that you can drive out Nature with a pitchfork, but she keeps on coming back, and this remains true even at cosmological scales.

One of the arguments made for the Matrioshka brain scenario is that of keeping the whole structure of a massive super-intelligent entity compact in order to reduce communication times between its parts (the speed of light would be where the shoe pinches for a Matrioshka brain), but no super-intelligent entity, biological, post-biological, or non-biological, would put all its eggs in one basket unless its technological hubris had reached the point of considering itself invulnerable. Such hubris would eventually be punished and the brain would go extinct in one fell swoop. Natural selection does not and would not spare technological entities, though it would operate on a cosmological scale rather than at the familiar scale of planetary niches.

It would make much more sense to make the same effort to construct many different megastructures that remain structurally independent but in continuous communication with each other. Since electrical or fiber optic cables strung in space would be even more vulnerable than structures, these independent megastructures would be hard-pressed to find any more robust and survivable form of communication than good old EM spectrum communications, and if multiple megastructures employing massive energy levels were continuously in communication with each other by way of EM spectrum communication, such a xenocivilization would have a very high visibility profile unless it made a conscious effort to suppress its visibility — which latter is a distinct response to the Fermi paradox.

The technological singularity or post-biological beings do not, in and of themselves, apart from distinct assumptions, argue against the visibility presumption.

. . . . .


. . . . .


. . . . .


The Löwenmensch or Lion Man sculpture, about 32,000 years old, is a relic of the Aurignacian culture.

Recently (in Don’t Cry for the Papers) I wrote that, “Books will be a part of human life as long as there are human beings (or some successor species engaged in civilizational activity, or whatever cultural institution is the successor to civilization).” While this was only a single line thrown out as an aside in a discussion of newspapers and magazines, I had to pause over this to think about it and make sure that I would get my phrasing right, and in doing so I realized that there are several ideas implicit in this formulation.

Map of the Aurignacian culture, approximately 47,000 to 41,000 years ago.

Since I make an effort to always think in terms of la longue durée, I have conditioned myself to note that current forms (of civilization, or whatever else is being considered) are always likely to be supplanted by changed forms in the future, so when I said that books, like the poor, will always be with us, for the sake of completeness I had to note that human forms may be supplanted by a successor species and that civilization may be supplanted by a successor institution. Both the idea of the post-human and the post-civilizational are interesting in their own right. I have briefly considered posthumanity and human speciation in Against Natural History, Right and Left (as well as other posts such as Addendum on the Avoidance of Moral Horror), but the idea of a successor to civilization is something that begs further consideration.

Now, in a sense, everything that I have written about futurist scenarios for the successor to contemporary industrial-technological civilization (which I have described in Three Futures, Another Future: The New Agriculturalism, and other posts) can be taken as attempts to outline what comes after civilization in so far as we understand civilization as contemporary industrial-technological civilization. This investigation of post-industrial civilization is an important aspect of an analytic and theoretical futurism, but we must go further in order to gain a yet more comprehensive perspective that places civilization within the longest possible historical context.

I have adopted the convention of speaking of “civilization” as comprising all settled, urbanized cultures that have emerged since the Neolithic Agricultural Revolution. This is not the use that “civilization” has in classic humanistic historiography, but I have discussed this elsewhere; for example, in Jacob Bronowski and Radical Reflection I wrote:

…Bronowski refers to “civilization as we know it” as being 12,000 years old, which means that he is identifying civilization with the Neolithic Agricultural Revolution and the emergence of settled life in villages and eventually cities.

Taking this long and comprehensive view of civilization, we still must contrast civilization with its prehistoric antecedents. When one realizes that the natural sciences have been writing the history of prehistory ever since the methods, the technologies, and the conceptual infrastructure for doing so were developed in the late nineteenth century, and that paleolithic history itself admits of cultures (the Micoquien, the Mousterian, the Châtelperronian, the Aurignacian, and the Gravettian, for example), it becomes clear that “culture” is a more comprehensive category than “civilization,” and that culture is the older category. The cultures of prehistory are the antecedent institutions to the institution of civilization. This immediately suggests, in the context of futurism, that there could be a successor institution to civilization that no longer could be strictly called “civilization” but which still constituted a human culture.

Thus the question, “What comes after civilization?” when understood in an appropriately radical philosophical sense, invites us to consider post-civilizational human cultures that will not only differ profoundly from contemporary industrial-technological civilization, but which will differ profoundly from all human civilization from the Neolithic Agricultural Revolution to the present day.

Human speciation, if it occurs, will profoundly affect the development of post-human, post-civilizational cultural institutions. I have mentioned in several posts (e.g., Gödel’s Lesson for Geopolitics) that Francis Fukuyama felt obligated to add the qualification to his “end of history” thesis that if biotechnology made fundamental changes to human beings, this could result in a change to human nature, and then all bets are off for the future: in this eventuality, history will not end. Changed human beings, possibly no longer human sensu stricto, may have novel conceptions of social organization and therefore also novel conceptions of social and economic justice. From these novel conceptions may arise cultural institutions that are no longer “civilization” as we here understand civilization.

Human speciation could be facilitated by biotechnology in a way not unlike the facilitation of the industrial revolution by the systematic application of science to technological development.

Above I wrote, “human speciation, if it occurs,” and I should mention that my only hesitation here is that social or technological means may be employed in the attempt to arrest human evolution at more-or-less its present stage of development, thus forestalling human speciation. Thus my qualification on human speciation in no way arises from a hesitation to acknowledge the possibility. As far as I am concerned, human being is first and foremost biological being, and biological being is always subject to natural selection. However, technological intervention might possibly overtake natural selection, in which case we will continue to experience selection as a species, but it will be social selection and technological selection rather than natural selection.

In terms of radical scenarios for the near- and middle-term future, the most familiar on offer at present (at least, the most familiar that has some traction in the public mind) is that of the technological singularity. I have recounted in several posts the detailed predictions that have been made, including several writers and futurists who have placed definite dates on the event. For example, Vernor Vinge, who proposed the idea of the technological singularity, wrote that, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” (This is from his original essay on the technological singularity published in 1993, which places the date of the advent of the technological singularity at 2023 or sooner; I understand that Mr. Vinge has since revised his forecast.)

To say that “the human era will be ended” is certainly to predict a radical development, since it postulates a post-human future within the lifetimes of many living today (much like the claim that, “Verily I say unto you, That there be some of them that stand here, which shall not taste of death, till they have seen the kingdom of God come with power.”). If I had to predict a radical post-human future in the near- to middle-term future I would opt not for post-human machine intelligence but for human speciation facilitated by biotechnology. This latter scenario seems to me far more likely and far more plausible than the technological singularity, since we already have the technology in its essentials; it is only a matter of refining and applying existing biotechnology.

I make no predictions and set no dates because the crowding of seven billion (and counting) human beings on a single planet militates against radical changes to our species. Social pressures to avoid speciation would make such a scenario unlikely in the near- to middle-term future. If we couple human speciation with the scenario of extraterrestrialization, however, everything changes, but this pushes the scenario further into the future because we do not yet possess the infrastructure necessary to extraterrestrialization. Again, however, as with human speciation through biotechnology, we have all the technology necessary to extraterrestrialization, and it is only a matter of refining and applying existing technologies.

From this scenario of human speciation coupled with extraterrestrialization there would unquestionably emerge post-human, post-civilizational cultural institutions that would be propagated into the distant future, possibly marginalizing, and possibly entirely supplanting, human beings and human civilization as we know it today. It is to be expected that these institutions will be directly related to the way of life adopted in view of such a scenario, and this way of life will be sufficiently different from our own that its institutions and its values and its norms would be unprecedented from our perspective.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .


6 May 2011


Back in 2004 Foreign Policy magazine invited a number of writers to pen short pieces on ideas that were destined for the dustbin of history. Among these contributions, Francis Fukuyama of “end of history” fame wrote a page about transhumanism. Now, not many people know what transhumanism is, so it is hard to view it as a threat, say, on a level with the Soviets during the Cold War, but that was the target that Fukuyama chose to dispose of. For me, this was a laugh-out-loud moment in the history of ideas, because Fukuyama essentially argued that transhumanism can’t or won’t happen because it poses nearly insuperable moral dilemmas for us. This would be a bit like arguing before the Second World War that the Holocaust couldn’t happen because of the moral implications of such a crime. Well, sheer horror never stopped human beings from doing anything. Or, rather, if it has been a barrier to some, it certainly has not been a barrier to all.

To give you some flavor as to exactly what transhumanism is, and to do so from a sympathetic source, I found a Transhumanist Declaration at the Humanity+ blog, which I reproduce below in its entirety:

1. Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.

2. We believe that humanity’s potential is still mostly unrealized. There are possible scenarios that lead to wonderful and exceedingly worthwhile enhanced human conditions.

3. We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress.

4. Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.

5. Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.

6. Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.

7. We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.

8. We favour allowing individuals wide personal choice over how they enable their lives. This includes use of techniques that may be developed to assist memory, concentration, and mental energy; life extension therapies; reproductive choice technologies; cryonics procedures; and many other possible human modification and enhancement technologies.

To this the response of Francis Fukuyama is as follows:

“…we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence. This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism. But modifying that essence is the core of the transhumanist project. If we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess when compared to those left behind? If some move ahead, can anyone afford not to follow? These questions are troubling enough within rich, developed societies. Add in the implications for citizens of the world’s poorest countries — for whom biotechnology’s marvels likely will be out of reach — and the threat to the idea of equality becomes even more menacing.”

Sure, it’s menacing, and change is frightening. No argument there. But asking the questions that Fukuyama asks — and they are certainly legitimate and interesting questions — is not going to spare us the moral nightmare (if not moral horror) of actually having to find a way to go on living despite menacing developments. And moral horror changes over time. When Malthus said that humanity would have to choose between misery and vice, the vice that horrified him, and which was perhaps no less of a horror to contemplate than mass starvation, was birth control. Now it is Malthus himself who is viewed with horror, not the birth control that inspired Malthus with horror. Only crackpots today attach any social stigma to birth control, and the world goes on its way.

Firstly, I should say — Profess? Declare? Proclaim? — that I don’t in the slightest identify myself as a transhumanist. Like the technological singularitarians, to whom they are closely related, the transhumanists have some interesting ideas and a lot of predictions, but at the present moment transhumanism is as crackpot-ish as moral opposition to birth control. That doesn’t mean that it will always remain so, but only that it is not one of the world’s prominent evils (or even one of the world’s challenges) at the moment. We have much more to worry about when it comes to atrocities and genocide.

Why is transhumanism marginal at the present moment? Here we can return to Fukuyama, for the brief rant he penned against the transhumanists contains a salient and very true observation:

“…we have drawn a red line around the human being and said that it is sacrosanct.”

We have indeed done so. This is what philosophers call a “convention,” which in this context is not a bunch of beer-swilling salesmen staying together at a Holiday Inn, but a decision to adopt a certain standard, much like the metric system or English weights and measures, or indeed to adopt a particular way of thinking about the world. In my Variations on the Theme of Life I said the following about this particular convention:

“We have elaborately constructed conventional distinctions, embodied in law and social practices, that separate man from every other living thing, and so thorough is this contrived divide that even if no qualitative distinction in fact intervened between man and other living things, the distinction would remain absolute in virtue of the established conventions. But the system is imperfect, and breaks down upon close inspection, for just as all cultures construct the distinction between man and everything else that is not man, they construct it differently, and these different constructions cannot be honestly harmonized. Some animal species are deified, some are demonized, some are commodified, some are marginalized, and some are fetishized. The ideal unity of mankind, then, must be based either on dishonesty and dissimulation, or upon some as yet unsuspected human quality that can distinguish man without reference to cultural relativity.” (section 514)

There is another name for this convention, and that is speciesism. The idea that humanity belongs within a charmed circle is an ontological conception, but the convention to act as though this ontological principle were true (whether or not it is true) is the practical consequence of speciesism. As most people do not think abstractly about principles like this, the convention is likely to have a stronger hold on the mind than the principle, which, when stated as a principle in its explicit form, is likely to sound a bit odd and unfamiliar. But leave that aside for the moment.

It is the very speciesism that stands in the way of the technological development of human potential, keeping us within Fukuyama’s red line, isolated and insulated from the rest of life, that will ultimately facilitate the technological development of non-human species. And the perfection of these technologies of biological augmentation and modification in other species will foster an increasing temptation to apply this technology to human beings, despite whatever obstacles are raised, be they moral, legal, practical, or other. Even if initially consummated in secrecy, we can be certain that the temptation will not be avoided forever.

I realized this today when I was thinking about the now widely publicized presence of a dog with the commando team tasked with the raid on Osama Bin Laden’s hideaway. This detail attracted a lot of attention, and Foreign Policy magazine presented the photo essay War Dog, which rapidly became the most viewed story on their webpage.

It is well known that even the most alert soldier on duty is not nearly as aware as a guard dog on duty, and when it comes to specialized tasks like sniffing out explosives or persons, dogs are superior to the highest high technology. Dogs are now trained and valued in the armed forces as never before, and it would be an obvious development to augment the capacities of guard dogs. A dog with better eyesight or a better nose would be a great asset, and a competitive advantage over non-augmented dogs. Most importantly, the barriers to doing so simply don’t exist, or don’t exist in the same way. We don’t surround dogs with the same red line that we draw around human beings, even if we should.

In short, we will see transcanidism before we see transhumanism, and the former will, in the fullness of time, be the slippery slope that leads to the latter. And, yes, I know that the slippery slope is a logical fallacy; it is also a psychological truth, and what we are really discussing here is the psychology of the red line. That red line changes over time, and it changes in response to changed conditions. The red line that Malthus drew around population control still exists for us today, but it exists in a very different way, and it is drawn in a different place and between different alternatives.

There will be red lines in transcanidism too, but not enough, and not sufficiently robust, to prevent the process from starting down the slippery slope. For example, an obvious extension of improving canine senses would be to improve a dog’s mind. I am certain that most people would be deeply uncomfortable with this. There will be laws passed. There will be attempts to enforce a red line. In the long term, however, that line will be crossed. And once we begin to augment the intelligence of dogs and other war animals (or perhaps once we begin to engineer specialized war animals), they might conceivably catch up with us, or, as in the vision of the technological singularity, exponentially surpass us.

The reader should be fully aware that I am fully aware that what I am writing here would be received as anathema to many and as horrific to some. It has become the custom to discuss certain technological developments that touch directly upon human life in the rhetoric of high moral indignation. This is not helpful. In fact, I take it to be counter-productive.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .


extreme academe

Today’s Financial Times has an article about the so-called Singularity University. The FT gave a rather hyperbolic cover blurb to the story: “Extreme academe: Teraflops and killer robots at the university trying to save the world.” Thus, in the eyes of the Financial Times, this absurd exercise is given the status of a legitimate university. So much for journalistic standards.

The article is written by Simon Daniels, CEO of Moixa Energy, who, he tells us, is one of 40 selected from 1,200 applicants. No doubt Daniels views himself as one of those elites who are going to save the world for the rest of us. I’m glad the scholars at the Singularity University enjoy such a high level of self-esteem.

Someone once said, in response to the question as to why corporations hire management consultants, “Because they can.” And why do CEOs and executives go to the Singularity University? Because they can. Despite endemic poverty in the world — and, of course, the poor will always be with us — and despite the recession, there is still a lot of money in the world looking for a place to go. One place for that money to go is into the coffers of the Singularity University. This way Kurzweil and Diamandis get to pal around with even more movers and shakers, and the singularity scholars get to tick off another item on their been there, done that list.

Daniels tells us that the Singularity University is supposed to “educate ‘a cadre of leaders’ about the rapid pace of technology and to address humanity’s grand challenges.” Wonderful. I feel much better already to know that Daniels and his ilk are taking field trips to the National Research Supercomputing Center to see the supercomputer there. I hope they didn’t steal any knobs off the consoles as souvenirs. And, of course, they are engaging in this sacrifice of their valuable time all for my benefit.

I have previously discussed the fallacy of believing that society can educate “leaders” in The Future of Literacy. If anyone thinks that an absurd stunt like the Singularity University is going to educate leaders that will make a difference in the future, then I have a bridge to sell them. And if they have money to burn by attending the Singularity University, I’m sure they can afford a bridge too, and why not?

It is a particular irritant to me when I hear advanced industrialized democracies characterized as being meritocracies. This is simply not true. And it is only in a very loose sense that we can even think of the most advanced industrialized societies of our day as being “democratic.” Sure, we hold mostly clean elections and we can cashier our rulers for misconduct, but we are a long way from anything like a genuine democracy.

Who you are, where you are from, and who you know still have everything to do with how far you get in the world. I wrote about this previously in The Birth Lottery, but it bears repeating because we will understand nothing about how the world works unless we understand that wealth, power, and privilege are almost always kept “in the family” so to speak.

At this point in my rant, someone is sure to point out to me a few cases of self-made men who came up from nearly nothing to be great successes in their industries. Yes, it does happen. But it is rare. Even within the ossified feudal society of the Middle Ages, where social mobility was rare, there were always a few cases of men born into poor or uninfluential circumstances who went on to assume positions of wealth and privilege. Given the fact that our societies today are supposed to be so different from that of the feudal societies of the Middle Ages (and, in fact, they are), it is remarkable how little has changed in terms of social hierarchy.

The exceptions to the rule are exceptions, i.e., outliers. They are played up in the media precisely because they are exceptions: they constitute an amusing human-interest story. The rule remains the rule, and the rule is that the well-connected and the wealthy disproportionately monopolize positions of leadership and influence. These are the kind of people who go (and will go) to the Singularity University, and these are not the people who will “save the world” (as though the world needed saving).

We pretend to have democratic institutions, and we pretend to have a meritocracy, but it is mere pretense. The most brilliant individuals might improve their status in the world incrementally, but they will not come into positions of privilege except in the rarest of circumstances. The people who do come into positions of privilege are the merely passably bright members of the already privileged classes. They have the head start, and those coming from behind cannot realistically hope to pass those with the advantage of birth on their side. At best, the truly knowledgeable can hope to become an adviser of those who possess a de facto hereditary claim to power.

What the world really needs is not the Singularity University, but universal (or as near universal as possible) primary education around the world. It is widely disseminated education that reaches millions, and not elite education for forty people at a time, that makes the biggest difference to society in the long run. This is a non-privileged and non-elite approach that involves no CEOs, no field trips to see supercomputers or to the Joint Bioenergy Institute, and no headlines on the Financial Times.

. . . . .

I’ve written about the Singularity University previously in The Singularity Has No Clothes and several subsequent posts.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .


Recently my attention was brought to a blog that is dedicated to the critical discussion of the Technological Singularity, Blogging Against ‘The Future’. The author of the blog read my posts about artificial intelligence, machine consciousness, and the Technological Singularity from earlier this year and quoted me in one of his posts. I have already received a half dozen referrals from his website, and several of my posts that hadn’t been accessed in some time have shown up as having been read again.

It took John Locke a long time to write a book, and for good reason.

Somewhere, some years ago, I read that John Locke said that he would write a manuscript and then stick it away until he forgot about it. Some time later he would take it out again, and from this later perspective he was able to criticize his own work more effectively. I know what he meant by this, and I have experienced it myself. (However, I have also experienced coming back to something I wrote and not being able to pick up the thread of understanding again.) So it was when I went back and re-read some of my Singularity posts.

In my Blindsided by History (a post I had almost completely forgotten), I wrote, “if and when machine consciousness emerges in history, it will be incomprehensibly alien, perhaps unrecognizable for what it is, because it will have emerged from a different evolutionary process than that from which we emerged.” When I read this again I was reminded of a famous quote from Ortega y Gasset: “Man has not an essence but a history.” Over the years I have thought a lot about this line, and I think it is an exceptionally profound observation. Not only man, but much else in the world, probably most of the world, has not an essence but a history. This, if extrapolated to complete generality, becomes a philosophy that is the antithesis of Platonism, but neither is it constructivism or antirealism or any other familiar doctrine formulated in contradistinction to Platonism. We could, if we liked, call it historical constructivism, and this has a certain intuitive plausibility.

José Ortega y Gasset (May 9, 1883 - October 18, 1955)


Machines, too, have not an essence but a history. Perhaps they have an essence too, in addition to a history, but it is the history that crucially demarcates organically emergent beings from mechanically emergent beings. Man and machine have different histories, and if Ortega y Gasset is correct, and if we may make a valid extrapolation from his observation, because they have different histories they are differentiated on a level that previous history would have mistaken for essence, i.e., an essential difference.

I might also add to what I wrote in Blindsided by History about unpredictability: “Present technologies will stall, and they will eventually be superseded by unpredicted and unpredictable technologies that will emerge to surpass them.” It is precisely because future technologies will be unpredicted and unpredictable that the future itself will be unpredicted and unpredictable. History emerges from the cumulative events of passing time; it is built upon the details of individual lives, specific technologies with their advantages and disadvantages, particular circumstances, and concrete facts. The unpredictable emergence of technologies contributes its measure of instability to the general instability of history.

History is always in tension between equilibrium and instability. Sometimes the slow and steady accumulation of the minutiae of time changes the world so gradually that we don’t notice that anything has changed; it is only in reflection, retrospectively, that we are able to realize that the world is a different world than it was. Sometimes the accumulation of relentless change spills over in a sudden revolution, a punctuation in the equilibrium of history, but in either case the steady rate of background change continues apace.

Evolution is by its nature unpredictable in its outcome. We can predict that certain selection forces will come to bear, that certain selection events will occur, and that certain entities (say, men and machines) will be subject to these forces and events, but we cannot say what will come of it all. But we can say with confidence that the distinct histories of man and machine will issue in distinct and divergent futures.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .
