Review of Part I

In Part I of this series of posts on technological civilization, it was asked, What is technological civilization? In the attempt to answer this question, a model of civilization was applied to the problem of technological civilization, it was asked whether technology can function as the central project of a civilization, and an inquiry was made into the idea of technology as an end in itself. From these inquiries preliminary conclusions were drawn, and the significance of these preliminary conclusions for the study of civilization was considered.

It was asserted in Part I that a technological civilization in the narrowest sense (a properly technological civilization) is a civilization that takes technology as its central project. In such a civilization, the economic infrastructure and intellectual superstructure cannot remain indifferent to technology, so that technology must be assumed to be pervasively present throughout the institutional structure of a properly technological civilization. However, it was also determined that properly technological civilizations are probably rare, and that the common usage of “technological civilizations” covers those cases in which technology is absent from the central project, or only marginally represented in it, but is pervasive in the economic infrastructure and the intellectual superstructure.

In this post, Part II of the series, we will further investigate what it means for technology to be pervasively present throughout the institutional structure of civilization, and how this pervasive presence of technology throughout society distinguishes technological civilizations from civilizations that employ technology but which we do not usually call technological.

Australian firehawks intentionally spread fires by carrying and dropping burning sticks.

The prehistory of technological civilization

Technological civilizations do not appear suddenly and without precedent, but have a deep history that long precedes civilization. Thus we must treat technological civilizations developmentally, and, as we shall see, comparatively; technological development and comparative measures are closely linked.

The prehistory of technological civilization is the history of technology prior to civilization, and this history can be pushed back not only into human prehistory, but into pre-human history, and even into the use of technology by other species. Whereas it was once a commonplace that human beings were the only tool-using species, we now know that many other species use tools. Perhaps the most famous example of this is the observation of chimpanzees in the wild stripping leaves from a branch, and then using this bare branch to extract termites from a termite mound, which are then consumed. Primate tool use (as well as primate modification of the environment that they inhabit) is now sufficiently recognized that there is a growing discipline of primate archaeology, which employs the methods of archaeology developed for studying the human past in order to study the material culture of non-human primates.

Other species have even been observed using fire, another instance of technology previously assumed to be unique to human beings. Australian firehawks have been observed in the “transport of burning sticks in talons or beaks,” intentionally spreading fire for purposes of fire foraging (cf. “Intentional Fire-Spreading by ‘Firehawk’ Raptors in Northern Australia” by Mark Bonta, Robert Gosford, Dick Eussen, Nathan Ferguson, Erana Loveless, and Maxwell Witwer). The deep history of technology in the biosphere, then, recognizes that many species have used tools, and have done so for millions of years; the scope of technology is both larger and older than human history. In this context, the human use of technology is a continuous development of earlier tool use, bringing tools to a level of development and sophistication far beyond that of other species.

One of the unique features of human tool use (in so far as our present knowledge extends) is the production of durable tools that are used repeatedly over time, and, in some cases, continuously modified, as when a chipped stone or flint tool is used until it becomes dull, and then the edge is sharpened by additional chipping. Tool use by other species has not involved the production of durable tools used over time. However, if we interpret shelters as tools, then the nest of the weaver bird or the lodge of the beaver are durable constructions used over time and often repeatedly improved. (Shelter can be understood as a form of niche construction, and it would be an interesting inquiry to examine the relationship between niche construction and technology, but we will not explicitly consider this in the present context.)

Another unique feature of human tool use is the use of tools to make other tools. When a flint cutting edge is used to cut strips of bone and tendon that are then layered together to make a compound bow, this is the use of one tool to make another tool. The iteration of this process has led ultimately to the sophisticated tools that we manufacture today, and nothing like this has been seen in other species, even in other hominid species (though future investigations in archaeology may prove otherwise). Human ancestors used durable stone tools for millions of years, often with little or no change in the design and use of these tools, but the use of tools to make other tools seems to be restricted to Homo sapiens, and perhaps also to the Neanderthals.

The point of this discussion of prehistoric technology is to emphasize that tools and technology are not only older than civilization, but also older than humanity, although humanity does bring tool development and use to a degree of complexity unparalleled elsewhere in terrestrial history. Given this deep history of tools in the biosphere, the late appearance of civilization in the past ten thousand years emerges in a context in which human technology had already reached a threshold of complexity unequaled prior to human beings. At its origin, civilization already involved durable tools of iterated manufacture. If this is what has been meant when we speak of “technological civilization,” then the very first civilizations were technological from their inception; in other words, technology according to this usage would provide no differentiation among civilizations because all civilizations are technological.

Charles Darwin approached the origin of civilization naturalistically, which was, in his time, the exception rather than the rule.

Darwin’s Thesis on the origin of civilization

Civilization, then, begins in medias res with regard to technology. Technology gets its start at the shallow end of an exponential growth curve, incrementally and with the simplest innovations. The emergence of distinctively human technologies represents an inflection point in the development of technology. This inflection point occurs prior to the advent of civilization, but civilization contributes to the acceleration of technological development. With civilization, more time and resources become available for technological development, and, as civilization expands, technology expands and grows in power and sophistication.

The origins of civilization, like the origins of technology, are similarly simple and incremental. In an earlier post I posited what I called Darwin’s Thesis on the origin of civilization, or, more simply, Darwin’s Thesis, based on this passage from Darwin:

“The arguments recently advanced… in favour of the belief that man came into the world as a civilised being and that all savages have since undergone degradation, seem to me weak in comparison with those advanced on the other side. Many nations, no doubt, have fallen away in civilisation, and some may have lapsed into utter barbarism, though on this latter head I have not met with any evidence… The evidence that all civilised nations are the descendants of barbarians, consists, on the one side, of clear traces of their former low condition in still-existing customs, beliefs, language, &c.; and on the other side, of proofs that savages are independently able to raise themselves a few steps in the scale of civilisation, and have actually thus risen.”

Charles Darwin, The Descent of Man, Chapter V (I have left Darwin’s spelling in its Anglicized form.)

It may seem pointless to assert something as apparently obvious as Darwin’s Thesis, but the state in which the study of civilization finds us (i.e., that it does not yet exist in anything like a scientific form) makes it necessary that we begin with the most rudimentary ideas and state them explicitly, so that they can be understood to characterize our theoretical orientation, and can be tested against other similarly rudimentary ideas once we are able to perceive that we are assuming those ideas and therefore need to make them explicit as well. Our understanding of civilization — like the origins of technology and civilization themselves — must begin simply and incrementally.

There is a characteristically amusing passage from Bertrand Russell in which Russell mentions beginning with assumptions apparently too obvious to mention:

“My desire and wish is that the things I start with should be so obvious that you wonder why I spend my time stating them. This is what I aim at because the point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.”

Bertrand Russell, The Philosophy of Logical Atomism, 2, “Particulars, Predicates, and Relations”

Elsewhere, and in this case specifically in relation to history, Russell mentioned the rudimentary beginnings of scientific thought:

“…comparatively small and humble generalizations such as might form a beginning of a science (as opposed to a philosophy) of history.”

Bertrand Russell, Understanding History, New York: Philosophical Library, 1957, pp. 17-18

Perhaps Russell would have distinguished the scientific from the philosophical understanding of history such that philosophical understanding ends in paradox while scientific understanding does not. In any case, whether we take Darwin’s Thesis to be too obvious to state, or to be a small and humble generalization (or both), it is at this level of simplicity that we must begin the scientific study of civilization.

The passage quoted above from Darwin makes reference to “barbarism” and “savagery,” which we today take to be evaluative terms with a strongly condescending connotation, but in Darwin’s time these were technical terms, and, moreover, they were technical terms related to a people’s level of technological development. These terms were very common in the late 19th and early 20th century, and subsequently fell out of use. As they fell out of use, we largely forgot what these terms meant, and so there has been a prochronic misreading of older texts, as though these terms were formerly being used as they are used today.

In my post Savagery, Barbarism, and Civilization I discussed the taxonomy of human development developed by Edward Burnett Tylor and expounded by Lewis Henry Morgan, which distinguished between savagery, barbarism, and civilization. For Tylor and Morgan, savagery extends through pre-pottery developments, barbarism from the invention of pottery to metallurgy, and civilization is reserved for societies that have a written language. This taxonomy is broken down in greater detail into seven stages of technological accomplishment — three stages of savagery, three of barbarism, and one of civilization (cf. Chapter I of Morgan’s Ancient Society).
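Since the Tylor-Morgan taxonomy sorts societies by the presence or absence of diagnostic technologies, it can be read as a simple decision procedure. The following sketch is only my own illustration of the scheme as summarized above (the function name and boolean parameters are mine, not Morgan’s), and it collapses Morgan’s finer subdivisions of savagery and barbarism into the three broad stages:

```python
# A minimal sketch of the Tylor-Morgan taxonomy as a decision procedure.
# The stage boundaries (pottery, metallurgy, writing) follow the summary
# in the text; the finer subdivisions within savagery and barbarism are
# collapsed here for simplicity.

def tylor_morgan_stage(has_pottery: bool, has_metallurgy: bool, has_writing: bool) -> str:
    """Classify a society by the diagnostic technologies it possesses."""
    if has_writing:
        return "civilization"   # writing is the threshold of civilization
    if has_pottery or has_metallurgy:
        return "barbarism"      # pottery through metallurgy
    return "savagery"           # pre-pottery developments

# A society with pottery but neither metallurgy nor writing:
print(tylor_morgan_stage(has_pottery=True, has_metallurgy=False, has_writing=False))
```

On this reading, Darwin’s “savages … able to raise themselves a few steps in the scale of civilisation” is simply a claim about movement through these technologically defined stages.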

Thus when Darwin wrote that savages have raised themselves by their own efforts a few degrees in the scale of civilization, what he meant was that hunter-gatherer nomads have, over time, developed technologies such as pottery, agriculture, herding, and metallurgy — something that most today would not dispute, even if they would not use the particular language that Darwin employed. Indeed, if Darwin were writing today he would himself employ different terminology, as the Tylor and Morgan terminology has been completely abandoned by the social sciences.

Edward Gibbon focused on the decline and fall of Rome, but he also noted that some technological achievements survived the process of decline he detailed.

Gibbon on the Continuity of Technology

Societies thus, following Darwin’s Thesis, begin in an uncivilized condition and raise themselves up through stages of technological development, and, following Tylor and Morgan, these stages can be quantified by the presence or absence of particular technologies. One might disagree concerning which particular technologies ought to be taken as markers of civilizational achievement, and yet still agree with the principle that technological development over time can be used to differentiate stages of development. One might, for instance, choose different representative technologies — say, the use of the bone needle to sew form-fitting clothing, the production of textiles, etc. It would be another matter to throw out the underlying principle.

Darwin also mentioned the possibility that, “Many nations… have fallen away in civilisation,” which implies that technological accomplishments can be lost. Implicit in this claim is the familiar cyclical conception of history. One might maintain that societies rise up in technological accomplishment, only to experience a crisis and be returned to their original state, starting over from scratch in regard to technological development. We find an explicit argument against this in Edward Gibbon.

Gibbon is remembered as the historian of the decline and fall of the Roman Empire, and given Gibbon’s focus on declension it is especially interesting that Gibbon argued for the retention of technological achievement notwithstanding the collapse of social, political, and legal institutions. At the end of Volume 3 of The Decline and Fall of the Roman Empire Gibbon wrote a kind of summary, “General Observations On The Fall Of The Roman Empire In The West,” which includes Gibbon’s thoughts on the technological progress of civilization. Gibbon presents a view that is entirely in accord with common sense, but one that is rarely expressed, though Gibbon has expressed this view in a strong form that probably admits of important qualifications:

“The discoveries of ancient and modern navigators, and the domestic history, or tradition, of the most enlightened nations, represent the human savage, naked both in body and mind and destitute of laws, of arts, of ideas, and almost of language. From this abject condition, perhaps the primitive and universal state of man, he has gradually arisen to command the animals, to fertilize the earth, to traverse the ocean and to measure the heavens. His progress in the improvement and exercise of his mental and corporeal faculties has been irregular and various; infinitely slow in the beginning, and increasing by degrees with redoubled velocity: ages of laborious ascent have been followed by a moment of rapid downfall; and the several climates of the globe have felt the vicissitudes of light and darkness. Yet the experience of four thousand years should enlarge our hopes, and diminish our apprehensions: we cannot determine to what height the human species may aspire in their advances towards perfection; but it may safely be presumed, that no people, unless the face of nature is changed, will relapse into their original barbarism. The improvements of society may be viewed under a threefold aspect. 1. The poet or philosopher illustrates his age and country by the efforts of a single mind; but those superior powers of reason or fancy are rare and spontaneous productions; and the genius of Homer, or Cicero, or Newton, would excite less admiration, if they could be created by the will of a prince, or the lessons of a preceptor. 2. The benefits of law and policy, of trade and manufactures, of arts and sciences, are more solid and permanent: and many individuals may be qualified, by education and discipline, to promote, in their respective stations, the interest of the community. But this general order is the effect of skill and labor; and the complex machinery may be decayed by time, or injured by violence. 3. 
Fortunately for mankind, the more useful, or, at least, more necessary arts, can be performed without superior talents, or national subordination: without the powers of one, or the union of many. Each village, each family, each individual, must always possess both ability and inclination to perpetuate the use of fire and of metals; the propagation and service of domestic animals; the methods of hunting and fishing; the rudiments of navigation; the imperfect cultivation of corn, or other nutritive grain; and the simple practice of the mechanic trades. Private genius and public industry may be extirpated; but these hardy plants survive the tempest, and strike an everlasting root into the most unfavorable soil. The splendid days of Augustus and Trajan were eclipsed by a cloud of ignorance; and the Barbarians subverted the laws and palaces of Rome. But the scythe, the invention or emblem of Saturn, still continued annually to mow the harvests of Italy; and the human feasts of the Læstrigons have never been renewed on the coast of Campania.”

Edward Gibbon, The Decline and Fall of the Roman Empire, “General Observations On The Fall Of The Roman Empire In The West,” end of Chapter XXXVIII: Reign Of Clovis. Part VI.

Gibbon himself had detailed the extirpation of private genius and public industry in the case of the decline and fall of Rome, but he had also observed that, “…the more useful, or, at least, more necessary arts,” can survive on a local level which does not (or perhaps need not) experience dissolution even when larger social and political wholes fail and result in the extirpation of private genius and public industry on a larger scale. Gibbon concluded this summary as follows:

“Since the first discovery of the arts, war, commerce, and religious zeal have diffused, among the savages of the Old and New World, these inestimable gifts: they have been successively propagated; they can never be lost. We may therefore acquiesce in the pleasing conclusion, that every age of the world has increased, and still increases, the real wealth, the happiness, the knowledge, and perhaps the virtue, of the human race.”

Edward Gibbon, ibid.

In making the distinctions he did, Gibbon provided a relatively nuanced historical account of technological development, such that certain developments like the scythe would continue to be used even while more sophisticated manufactures fell out of production, and eventually out of use. Certainly this is what appears to have occurred with the decline of the industries of classical antiquity.

At some point in the ancient world, industry advanced to the point that it could produce artifacts like the Antikythera mechanism, and then at some point this industrial capacity was lost. One can speculate that the Antikythera mechanism was probably produced in the workshop of some city in which science, technology, and engineering had come together in a critical mass of knowledge and expertise to allow for the construction of such a device, and when Roman cities failed, this critical mass was scattered and the capacity to build devices like the Antikythera mechanism was lost. However, at the same time, the manorial estates and small villages to which urbanites fled when their cities ceased to function were able to keep lower levels of technology functioning. An estate or a village would have a forge at which iron sufficient for agricultural purposes could be produced, even if the ability to manufacture more sophisticated technologies was lost.

This idea of certain technologies being preserved in broadly-based human knowledge, in contradistinction to the technological accomplishments of gifted individuals or public institutions, I will call Gibbon’s Thesis on the Persistence of Technology, or, more simply, Gibbon’s Thesis. If contemporary civilization were to fail catastrophically, Gibbon’s Thesis would suggest to us that the heights of our technological accomplishments would be lost, but that technologies and techniques that could be locally produced and maintained, even without any particularly gifted individual or a larger socioeconomic structure, would persist — perhaps electric lights and basic telephone service, for example.

The Antikythera Mechanism

Technological Horizons

Darwin’s Thesis and Gibbon’s Thesis are theses on the origins and development of technological civilization, but the examples employed by Darwin and Gibbon do not bring us up to the level of technological accomplishment that we usually associate with the term “technological civilization,” though we could clearly associate their examples with nascent technological civilization, or embryonic technological civilization.

Gibbon’s Thesis can be used to define what I will call a horizon of technological development. I have previously discussed the archaeological use of the term “horizon” in Horizons of Spacefaring Civilizations, in which I quoted three definitions of horizon in archaeology, including David W. Anthony’s definition: “…a single artifact type or cluster of artifact types that spreads suddenly over a very wide geographic area.” While I have taken the term “horizon” from its use in archaeology, I have adapted it a bit (or more than a bit) for my own purposes. An artifact type may be an artistic style or a particular technology; in the present context we will only consider technologies and classes of technology that become common and hence widely represented in material culture.

The archaeological usage distinguishes horizon from tradition, and tends to view horizons as being of short duration (and traditions as being of long duration). I will use “horizon” to mean any relatively rapid expansion of some cluster of technologies. This may be the initial appearance of the artifact types in question, which may (but need not) remain common from that time forward until a terminal horizon, if they disappear rapidly. For example, if human civilization were suddenly destroyed by a nuclear war, the technosignature of our EM spectrum radiation into space would have a sudden terminal horizon when these EM signals ceased at about the same time.

The commonly used and understood technologies that Gibbon’s Thesis posits will survive the absence of gifted individuals and larger socioeconomic institutions are technological horizons of widely available technology that spread rapidly (though rapidity is relative to historical context) and which, if archaeologists were to excavate the appropriate layer, would be commonly represented in the material culture of a given time. When archaeologists dig up classical sites, they find pottery sherds everywhere; they find oil lamps; they find agricultural implements. To date, only one Antikythera mechanism has been found; it is the exception, and not the rule, so it represents a level of accomplishment, but not a horizon.

If a future archaeologist were to dig up the future remains of the present age, in what were industrialized nation-states there would be a horizon of electronic devices — computers, smart phones, DVD players — although outside the wealthy regions of the contemporary world these devices would be much less in evidence. And perhaps, in some technological enclaves, the ability to produce devices like this might continue even when a wider social order had failed. This is doubtful, however, so it may be necessary to reformulate Gibbon’s Thesis a little. Most of us today use technology that we do not understand, and we do not seem to be converging upon a society of engineers and technologists in which most people would understand (and be able to re-create) most of the technology they employ on a daily basis.

With this reflection, we have one possible way to distinguish proper technological civilizations: they are civilizations in which, because technology is the central project of the civilization, knowledge of technology is so widespread and so enthusiastically received that the technological horizon of the society is maintained at such a high level that even a small, local community could produce and maintain the advanced technologies they use on a daily basis.

If the ancient world had attained this kind of technological horizon, archaeologists would find devices like the Antikythera mechanism in every small town, and this kind of technology would have stayed in use and continued in development, rather than being lost to human memory. Our society today is not at this technological horizon either. Our most advanced technologies would be lost in a great social disruption, rather than continuing in use and development.

Those technologies that do persist in use throughout social disruptions also tend to continue in development, though that development may be very gradual. Gibbon cites the example of the scythe; we might also cite the example of the plow. From the first digging sticks employed at the dawn of agriculture to the mechanized plows of today, the plow has been in continual, gradual development for thousands of years. There is scarcely a period of human history in which plow technology did not experience some slight improvement, because it was a widely used technology, easily understood by those who used the technology, and so subject to continual minor improvement.

The Horizon of Industrialization and Technological Civilization

Agricultural civilization coincides with the horizon of agricultural technology. From a human perspective, the thousands of years of agricultural civilization are in no sense rapid or sudden, but from an archaeological, and even more so from a geological or paleontological perspective, the whole of agricultural civilization would represent a very thin layer in the geological record, a layer that in most cases would be lost due to other geological processes, but which is so widely present in the Earth that it could probably be found (especially if one knew what to look for).

Industrialized civilization coincides with the horizon of industrial technologies, and it is from the industrial technologies that our present advanced technologies are derived. Our present advanced technologies give us a hint of the technologies that might be available to a truly advanced civilization — say, a civilization that experienced the equivalent of our industrial revolution and then continued to develop for thousands of years, i.e., the development of industrial technologies on an historical order of magnitude equivalent to that of our experience of agricultural technologies. And this is probably what we intuitively have in mind when we use a term like “technological civilization.”

When industrialized civilization has endured for thousands of years, possibly with several minor disruptions, but not enough of a disruption to prevent the persistence of basic technologies (as per Gibbon’s Thesis), industrialized civilization, like agricultural civilization, will leave only a very thin and easily expungible layer in the Earth’s geological record. But this thin layer will be the industrial horizon, and, from the point of view of a future archaeologist who is digging up the Anthropocene, there won’t be much differentiation between the earliest part of this layer and the latest part of this layer, which latter is several thousand years beyond us yet. In this compactified history of industrial civilization, we are, for all practical purposes, indistinguishable from an advanced technological civilization.

Looking Ahead to Part III

Part II has been a bit of a detour into the origins and development of technological civilization, a departure from the more theoretical concerns about the institutional structure of technological civilizations introduced in Part I. However, this detour has allowed us to introduce and discuss Darwin’s Thesis, the Tylor-Morgan taxonomy, Gibbon’s Thesis, and the idea of technological horizons, which can then be employed in future installments for the exposition of further theoretical issues in the definition of technological civilization.

In Part III we will introduce more theoretical concepts to complement those of Part I, but which bear upon the development of technological civilization, unlike the theoretical concepts introduced in Part I which could be taken to characterize the structure of a civilization irrespective of its history or development.

. . . . .

What is a technological civilization?

For lack of better terminology and classifications, we routinely refer to “technical civilizations” or “technological civilizations” in discussions of SETI and more generally when discussing the place of civilization in the cosmos. One often sees the phrase advanced technological civilizations (sometimes abbreviated “ATC,” as in the paper “Galactic Gradients, Postbiological Evolution and the Apparent Failure of SETI” by Milan M. Ćirković and Robert J. Bradbury). Martyn J. Fogg has used an alternative phrase, “extraterrestrial technical civilizations (ETTCs)” (in his paper “Temporal Aspects of the Interaction among the First Galactic Civilizations: The ‘Interdict Hypothesis’”) that seems to carry a similar meaning to “advanced technological civilizations.” Thus the usage “technological civilization” is fairly well established, but its definition is not. What constitutes a technological civilization?

A model of civilization applied to the problem of technological civilization

In formulating a model of civilization — an economic infrastructure joined to an intellectual superstructure by a central project — I have a schematism by which a given civilization can be analyzed into constituent parts, and this makes it possible to lay out the permutations of the relationship of some human activity to the constituents of civilization, and the role that the human activity in question plays in the constitution of these constituents. Recently I have done this for spacefaring civilization (in Indifferently Spacefaring Civilizations) and for scientific civilization (in Science in a Scientific Civilization). A parallel formulation for technological civilization yields the following:

0. The null case: technology is not present in any of the elements that constitute a given civilization. This is a non-technological civilization. We will leave the question open as to whether a non-technological civilization is possible or not.

1. Economically technological civilization: technology is integral only to the economic infrastructure, and is absent elsewhere in the structures of civilization; also called intellectually indifferent technological civilization.

2. Intellectually technological civilization: technology is integral only to the intellectual superstructure of civilization, and is absent elsewhere in the structures of civilization; also called economically indifferent technological civilization.

3. Economically and intellectually technological civilization: technology is integral to both the economic infrastructure and the intellectual superstructure of a civilization, but is absent in the central project; also known as morally indifferent technological civilization.

4. Properly technological civilization: technology is integral to the central project of a civilization.

There are three additional permutations not mentioned above:

Technology constitutes the central project but is absent in the economic infrastructure and the intellectual superstructure.

Technology is integral with the central project and economic infrastructure, but is absent in the intellectual superstructure.

Technology is integral with the central project and intellectual superstructure, but is absent in the economic infrastructure.

These latter three permutations are non-viable institutional structures and must be set aside. Because of the role that a central project plays in a civilization, whatever defines the central project is also, of necessity, integral to economic infrastructure and intellectual superstructure.
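The permutations above can be laid out exhaustively. The following sketch (my own illustration; the labels and function names are not the author's) enumerates the eight possible combinations of technology's presence in the three elements of the tripartite model, and applies the viability rule just stated: whatever defines the central project must also be integral to the economic infrastructure and the intellectual superstructure.

```python
from itertools import product

def classify(infra, superstr, central):
    """Label a permutation of technology's presence in the three elements
    of the tripartite model, or return None if the permutation is non-viable.

    Viability rule from the text: if technology defines the central project,
    it must also be integral to infrastructure and superstructure.
    """
    if central and not (infra and superstr):
        return None  # the three non-viable permutations
    if central:
        return "properly technological"
    if infra and superstr:
        return "economically and intellectually technological"
    if infra:
        return "economically technological"
    if superstr:
        return "intellectually technological"
    return "null case (non-technological)"

# Enumerate all 2^3 = 8 permutations of (infrastructure, superstructure, central project).
permutations = list(product([False, True], repeat=3))
labels = [classify(*p) for p in permutations]
viable = [label for label in labels if label is not None]
```

Of the eight permutations, three are excluded by the viability rule, leaving the five cases numbered 0 through 4 above.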

In the case of technology, some of the other permutations I have identified may also be non-viable. As noted above, a non-technological civilization may be impossible, so that the null case would be a non-viable scenario. More troubling (from a technological point of view) is that technology itself may be too limited an aspect of the human condition to function effectively as a central project. If this were the case, there could still be technological civilizations in the 1st, 2nd, and 3rd senses given above, but there would be no properly technological civilization (as I have defined this). Is this the case?

Can technology function as the central project of a civilization?

At first thought technology would seem to be an unlikely candidate for a viable central project, but there are several ways in which technology could be integral in a central project. Spacefaring is a particular technology; virtual reality is also a particular technology. Presumably civilizations that possess these technologies and pursue them as central projects (either or both of them) are properly technological civilizations, even if the two represent vastly different, or in some cases mutually exclusive, forms of social development. Civilizations that take a particular technology as their central project by definition have technology as their central project, and so would be technological civilizations. For that matter, the same can be said of agriculture: agriculture is a particular technology, and so agricultural civilizations are technological civilizations in this sense.

A scientific civilization such as I discussed in Science in a Scientific Civilization would have technology integral with its central project, in so far as contemporary science, especially “big science,” is part of the STEM cycle in which science develops new technologies that are engineered into industries that supply tools for science to further develop new technologies. Technological development is crucial to continuing scientific development, so that a scientific civilization would also be a technological civilization.

In both of these examples — technological civilizations based on a particular technology, and technological civilizations focused on science — technology as an end in itself, technology for technology’s sake, as it were, is not the focus of the central project, even though technology is inseparable from the central project. Within the central project, then, meaningful distinctions can be made in which a particular element that is integral to the central project may or may not be an end in itself.

Technology as an end in itself

For a civilization to be a properly technological civilization in the sense that technology itself was an end in itself — a civilization of the engineers, by the engineers, and for the engineers, you could say — the valuation of technology would have to be something other than the instrumental valuation of technology as integral to the advancement of science or as the conditio sine qua non of some particular human activity that requires some particular technology. Something like this is suggested in Tinkering with Artificial Intelligence: On the Possibility of a Post-Scientific Technology, in which I speculated on technology that works without us having a scientific context for understanding how it works.

If the human interest were there to make a fascination with such post-scientific technologies central to human concerns, then there would be the possibility of a properly technological civilization in the sense of technology as an end in itself. Arguably, we can already see intimations of this in the contemporary fascination with personal electronic devices, which increasingly are the center of attention of human beings, and not only in the most industrialized nation-states. I remember when I was visiting San Salvador de Jujuy (when I traveled to Argentina in 2010), I saw a street sweeper — not a large piece of machinery, but an individual pushing a small garbage can on wheels and sweeping the street with a broom and a dustpan — focused on his mobile phone, and I was struck that mobile electronic technologies had found their way into the hands of a worker in a non-prestigious industry in a nation-state not among the top 20 economies by GDP. (San Salvador de Jujuy is not known as a place for sightseeing, but the city left a real impression on me, and I had some particularly good empanadas there.)

This scenario for a properly technological civilization is possible, but I still do not view it as likely, as most people do not have an engineer’s fascination with technology. However, it would not be difficult to formulate scenarios in which a somewhat richer central project that included technology as an end in itself, along with other elements that would constitute a cluster of related ideas, could draw in the bulk of a society’s population and so function as a coherent social focus of a civilization.

Preliminary conclusions

Having come thus far in our examination of technological civilizations, we can already draw some preliminary conclusions, and I think that these preliminary conclusions again point to the utility of the model of civilization that I am employing. Because a properly technological civilization seems to be at least somewhat unlikely, but indifferently technological civilizations seem to be the rule, and are perhaps necessarily the rule (because technology precedes civilization and all civilizations make use of some technologies), the force of the ordinary usage of “technological civilization” is not to single out those civilizations that I would say are properly technological civilizations, but rather to identify a class of civilizations in which technology has reached some stage of development (usually an advanced stage) and some degree of penetration into society (usually a pervasive degree).

The model of civilization I am employing points to this utility in two ways. Firstly, it allows us to distinguish between properly technological civilizations and indifferently technological civilizations, to know the difference between these two classes, and to identify the ordinary usage of “technological civilization” as the union of the class of all properly technological civilizations and the class of all indifferently technological civilizations. Secondly, the model allows us to identify classes of civilization based not only upon shared properties, but also upon the continuity of shared properties over time, even when this continuity bridges distinct civilizations and may not single out any one civilization.

In the tripartite model of civilization — as above, an economic infrastructure joined to an intellectual superstructure by a central project — technology and technological development may inhere in any one or all three of these elements of civilization. The narrowest and most restrictive definition of civilization is that which follows from the unbroken continuity of all three elements of the tripartite model: a civilization begins when all three identified elements are present, and it ends when one or more elements fail or change. With the understanding that “technological civilization” is not primarily used to identify civilizations that have technology as their central project, but rather is used to identify the scope and scale of technology employed in a given civilization, this usage does not correspond to the narrowest definition of civilization under the tripartite model.

Significance for the study of civilization

We use “technological civilization” much as we may use labels like “western civilization” or “European civilization” or “agricultural civilization,” and these are not narrow definitions that single out particular individual civilizations, but rather broad categories that identify a large number of distinct civilizations, i.e., under the umbrella concept of European civilizations we might include many civilizations in the narrowest sense. For example, Jacob Burckhardt’s famous study The Civilization of the Renaissance in Italy identifies a regional civilization specific to a place and a time. This is a civilization defined in the narrowest sense. There are continuities between the Renaissance civilization in Italy and our own civilization today, but this is a continuity that falls short of the narrowest definition of civilization. Similarly, the continuity of those civilizations we would call “technological” falls short of the narrowest possible definition of a technological civilization (which would be a properly technological civilization), but it is a category of civilization that may involve the continuity of technology in the economic infrastructure, continuity of technology in the intellectual superstructure, or both.

The lesson here for any study of civilization is that “civilization” means different things even though we do not yet have a vocabulary to distinguish the different senses of civilization as we casually employ the term. We may speak of “the civilization of the renaissance in Italy” (the narrowest conception of civilization) in the same breath that we speak of “technological civilization” (a less narrow conception) though we don’t mean the same thing in each case. To preface “civilization” with some modifier — European, western, technological, renaissance — implies that each singles out a class of civilizations in more-or-less the same way, but now we see that this is not the case. The virtue of the tripartite model is that it gives us a systematic method for differentiating the ways in which classes of civilizations are defined. It only remains to formulate an intuitively accessible terminology in order to convey these different meanings.

Looking ahead to Part II

In the case of SETI and its search for technological civilizations (which is the point at which I started this post), the continuity in question would not be that of historical causality, but rather of the shared properties of a category of civilizations. What are these shared properties? What distinguishes the class of technological civilizations? How are technological civilizations related to each other in space and time? We will consider these and other questions in Part II.

. . . . .

This is technological civilization after the industrial revolution, though we don’t think of this as “high” technology; this will be discussed in Part II.

. . . . .

Grand Strategy Annex

. . . . .

The Structure of Hope

20 February 2015


Kant on Hope

Kant famously summed up the concerns of his vast body of philosophical work in three questions:

1) What can I know?

2) What ought I to do? and…

3) What may I hope?

These three questions roughly correspond to his three great philosophical treatises, the Critique of Pure Reason, the Critique of Practical Reason, and the Critique of Judgment, which represent, respectively, rigorous inquiries into knowledge, ethics, and teleology. However much the world has changed since Kant, we can still feel the imperative behind his three questions, and they are still three questions that we can ask today with complete sincerity. This is important, because many men who deceive themselves as to their true motives ask themselves questions and accept answers that they do not truly believe on a visceral level. I am saying that Kant’s questions are not like this.

In other contexts I have considered what we can know, and what we ought to do. (For example, I have just reviewed some aspects of what we can know in Personal Experience and Empirical Knowledge, and in posts like The Moral Imperative of Human Spaceflight I have looked at what we ought to do.) Here I will consider the third of Kant’s questions — what we are entitled to hope. There is no more important study toward understanding the morale of a people than to grasp the structure of hope that prevails in a given society. Kant’s third question — What may I hope? — is perhaps that imperative of human longing that was felt first, has been felt most strongly through the history of our species, and will be the last that continues to be felt even while others have faded. We have all heard that hope springs eternal in the human breast.

It is hope that gives historical viability both to individuals and their communities. In so far as the ideal of historical viability is permanence, and in so far as we agree with Kenneth Clark that a sense of permanence is central to civilization, then hope that aspires to permanence is the motive force that built the great monuments of civilization that Clark identified as such, and which are the concrete expressions of aspirations to permanence. Here hope is a primary source of civilization. More recent thought might call this concrete expression of aspirations to permanence the tendency of civilizations to raise works of monumental architecture (this is, for example, the terminology employed in Big History).

Four conceptions of history -- human nature and human condition

Hope and Conceptions of History

The structure of hope mirrors the conception of history prevalent within a given society. A particular species of historical consciousness gives rise to a particular conception of history, and a particular conception of history in turn defines the parameters of hope. That is to say, the hope that is possible within a given social context is a function of the conception of history; what hope is possible, what hope makes sense, is limited to those forms of hope that are both actualized by and delimited by a conception of history. The function of delimitation puts certain forms of hope out of consideration, while the function of actualization nurtures those possible forms of hope into life-sustaining structures that, under other conceptions of history, would remain stunted and deformed growths, if they were possible forms of hope at all.

In analyzing the structure of hope I will have recourse to the conceptions of history that I have been developing in this forum. Consequently, I will identify political hope, catastrophic hope, eschatological hope, and naturalistic hope. This proves to be a conceptually fertile way to approach hope, since hope is a reflection of human agency, and I have remarked in Cosmic War: An Eschatological Conception that the four conceptions of history I have been developing are based upon a schematic understanding of the possibilities of human agency in the world.

All of these structures of hope — political, catastrophic, eschatological, and naturalistic — have played important roles in human history. Often we find more than one form of hope within a given society, which tells us that no conception of history is total, that it admits of exceptions, and that societies can admit of pluralistic manifestations of historical consciousness.

Hope begins where human agency ends but human desire still presses forward. A man with political hope looks to a better and more just society in the future, as a function of his own agency and the agency of fellow citizens; a man with catastrophic hope believes that he may win the big one, that his ship will come in, that he will be the recipient of great good fortune; a man with eschatological hope believes that he will be rewarded in the hereafter for his sacrifices and sufferings in this world; a man with naturalistic hope looks to the good life for himself and a better life for his fellow man. Each of these personal forms of hope corresponds to a society that both grows out of such personal hopes and reinforces them in turn, transforming them into social norms.


Structure and Scope

While a conception of history governs the structure of hope, the contingent circumstances that are the events of history — the specific details that fill in the general structure of history — govern the scope of hope. The lineaments of hope are drawn jointly by its structure and scope, so that we see the particular visage of hope when we understand the historical structure and scope of a civilization.

Like structure, scope is an expression of human agency. An individual — or a society — blessed with great resources possesses great power, and thus great freedom of action. An individual or a society possessed of impoverished resources has much more limited power and therefore is constrained in freedom of action. In so far as one can act — that is to say, in so far as one is an agent — one acts in accord with the possibilities and constraints defined by the scope of one’s world. The scope of human agency has changed over historical time, largely driven by technology; much of the human condition can be defined in terms of humanity as tool makers.

Technology is incremental and cumulative, and it generally describes an exponential growth curve. We labor at a very low level for very long periods of time, so that our posterity can enjoy the fruits of our efforts in a later age of abundance. Thus our hopes for the future are tied up in our posterity and their agency in turn. And it is technology that systematically extends human agency. To a surprising degree, then, the scope of civilization corresponds to the technology of a civilization. This technology can come in different forms. Early civilizations mastered the technology of bureaucratic organization, and managed to administer great empires even with a very low level of technical expertise in material culture. This has changed over time, and political entities have grown in size and increased in stability as increasing technical mastery makes the administration of the planet entire a realistic possibility.
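The claim that technological growth is incremental, cumulative, and roughly exponential can be given a toy illustration (my own sketch, not the author's model): if each period's increment is proportional to the capability already accumulated, capability compounds, and the long slow early phase is eventually followed by rapid growth enjoyed by posterity.

```python
# Toy model of cumulative, compounding growth in technological capability.
# Each period adds an increment proportional to what has already been
# accumulated, which yields an exponential growth curve.
def capability_over_time(initial=1.0, rate=0.02, periods=100):
    levels = [initial]
    for _ in range(periods):
        levels.append(levels[-1] * (1.0 + rate))
    return levels

levels = capability_over_time()
# Compounding outpaces purely incremental (linear) accumulation:
# after 100 periods at 2%, capability is about 7.2x the starting level,
# whereas linear growth at the same per-period increment gives only 3x.
```

The contrast between the compounded final level and the linear baseline is the whole point of the "cumulative" claim: the early increments look negligible, but they are the base on which later increments multiply.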

The scope of civilization has expanded as our technologically-assisted agency has expanded, and today as we contemplate our emerging planetary civilization such organization is within our reach because our technologies have achieved a planetary scale. Our hopes have grown along with the expanding scope of our civilization, so that justice, luck, salvation, and the good life all reflect the planetary scope of human agency familiar to us today.


Hope in Planetary Civilization

What may we hope in our planetary civilization of today, given its peculiar possibilities and constraints? How may we answer Kant’s third question today? Do we have any answers at all, or is ours an Age of Uncertainty that denies the possibility of any and all answers?

Those of a political frame of mind hope for “a thriving global civilization and, therefore… the greater well-being of humanity.” (Sam Harris, The Moral Landscape) Those with a catastrophic outlook hope for some great and miraculous event that will deliver us from the difficulties in which we find ourselves immersed. Those whose hope is primarily eschatological imagine the conversion of the world entire to their particular creed, and the consequent rule of the righteous on a planetary scale. And those of a naturalistic disposition look to what human beings can do for each other, without the intervention of fortune or otherworldly salvation.

How each of these attitudes is interpreted in the scope of our current planetary civilization is largely contingent upon how an individual or group of individuals with shared interests views the growth of technology over the past century, and this splits fairly neatly into the skeptics of technology and the enthusiasts of technology, with a few sitting on the fence and waiting to see what will happen next. Among those with the catastrophic outlook on history will be the fence sitters, because they will be waiting for some contingent event to occur which will tip us in one direction or the other, into technological catastrophe or technological bonanza. Those of an eschatological outlook tend to view technology in purely instrumental terms, and the efficacy of their grand vision of a spiritually unified and righteous planet will largely depend on the pragmatism of their instrumental conception of technology. The political cast of mind also views technology instrumentally, but primarily in terms of what it can do to advance the cause of large scale social organization (which in the eschatological conception is given over to otherworldly powers).

Perhaps the greatest dichotomy is to be found in the radically different visions of technology held by those of a naturalistic outlook. The naturalistic outlook today is much more common than it appears to be, despite much heated rhetoric to the contrary, since, as I wrote above, many of us deceive ourselves as to our true motives and our true beliefs. The rise of science since the scientific revolution has transformed the world, and many accept a scientific world view without even being aware that they hold such views. Rhetorically they may give pride of place to political ideology or religious faith, but when they act they act in accordance with reason and evidence, remaining open to change if their first interpretations of reason and evidence seem to be contradicted by circumstances and consequences.

The dichotomy of the naturalistic mind today is that between human agency that retreats from technology, as though it were a failed project, and human agency that embraces technology. Each tends to think of their relation to technology in terms of liberation. For the critics of technology, we have become enslaved to The Machine, and either by overthrowing the technological system, or simply by turning our backs on it, people can help each other by living modest lives, transitioning to a sustainable economy, cultivating community gardens, watching over their neighbors, and, generally speaking, living up to (or, if you prefer, down to) the “small is beautiful” and “limits to growth” creed that had already emerged in the early 1970s.

The contrast could not be more stark between this naturalistic form of hope and the technology-embracing naturalistic form of hope. The technological humanist also sees people helping each other, but doing so on an ever grander scale, allowing human beings to realistically strive toward levels of self-actualization and fulfillment not even possible in earlier ages, perhaps not even conceivable. The human condition, for such naturalists, has enslaved us to a biological regime, and it is the efficacy of technology that is going to liberate us from the stunted and limited lives that have been our lot since the species emerged. Ultimately, technology-embracing naturalists look toward transhumanism and all that it potentially promises to human hopes, which in this context can be literally unbounded.


Hope in the Age of Naturalism

Given the state of the world today, with all its pessimism, and the violence of contesting power centers apparently motivated by unchanged and unchanging conceptions of the human condition, the reader may be surprised that I focus on naturalism and the naturalistic conception of history. If we do not destroy ourselves in the short term, the long term belongs to naturalism. Contemporary political hope, in so far as it is pragmatic, is naturalistic, and insofar as it is not pragmatic, it will fail. The hysterical and bloody depredations of religious mania in our time are only as bad as they are because religious mania, as an ideology, is under threat from the success of naturalistically-enabled science and technology. Once the break with the past is made, eschatological hope will no longer be the basis of large-scale social organization, and therefore its ability to cause harm will be greatly limited (though it will not disappear). The catastrophic viewpoint is always limited by its shoulder-shrugging attitude to human agency.

Most people cannot bear to leave their fate to fate, but will take their fate into their own hands if they can. How people take their fate into their hands in the future, and therefore the form of hope they entertain for what they do with the fate held in their hands, will largely be defined by naturalism. Perhaps this is ironic, as it has long been assumed that, of perennial conceptions of the human condition, naturalism had the least to say about hope (and eschatology the most). That is only because the age of naturalism had not yet arrived. But naturalistic despair is just as much a reality as naturalistic hope, so that the coming of the age of naturalism will not bring a millennium of peace, justice, and happiness for all. Human leave-taking of the ideologies of the past is largely a matter of abandoning neurotic misery in favor of ordinary human unhappiness.

. . . . .

The Technological Frontier

12 December 2014



An Exercise in Techno-Philosophy

Quite some time ago in Fear of the Future I employed the phrase “the technological frontier,” but I did not follow up on this idea in a systematic way. In the popular mind, the high technology futurism of the technological singularity has largely replaced the futurism of rocketships and jetpacks, so that the idea of a technological frontier has particular resonance for us today, as technology seems to dominate our lives to an increasing degree, and this trend may only accelerate in the future. If our lives are shaped by technology today, how much more profoundly will they be shaped by technology in ten, twenty, fifty, or a hundred years? We would seem to be poised like pioneers on a technological frontier.

How are we to understand the human condition in the age of the technological frontier? The human condition is no longer merely the human condition, but it is the human condition in the context of technology. This was not always the case. Let me try to explain.

While humanity emerged from nature and lived entirely within the context of nature, our long prehistory integrated into nature was occluded and utterly lost after the emergence of civilization, and the origins of civilization were attended by the formulation of etiological mythologies that attributed supernatural causes to the manifold natural causes that shape our lives. We continued to live at the mercy of nature, but posited ourselves as outside nature. This led to a strangely conflicted conception of nature and a fraught relationship with the world from which we emerged.

The fraught human relationship to nature has been characterized by E. O. Wilson in terms of biophilia; the similarly fraught human relationship to technology might be similarly characterized in terms of technophilia, which I posited in The Technophilia Hypothesis (and further elaborated in Technophilia and Evolutionary Psychology). And as with biophilia and biophobia, so, too, while there is technophilia, there is also technophobia.

Today we have so transformed our world that the context of our lives is the technological world; we have substituted technology for nature as the framework within which we conduct the ordinary business of life. And whereas we once asked about humanity’s place in nature, we now ask, or ought to ask, what humanity’s place is or ought to be in this technological world with which we have surrounded ourselves. We ask these questions out of need, existential need, as there is both pessimism and optimism about a human future increasingly dominated by the technology we have created.

I attach considerable importance to the fact that we have literally surrounded ourselves with our technology. Technology began as isolated devices that appeared within the context of nature. A spear, a needle, a comb, or an arrow were set against the background of omnipresent nature. And the relationship of these artifacts to their sources in nature was transparent: the spear was made of wood, the needle and the comb of bone, the arrowhead of flint. Technological artifacts, i.e., individual instances of technology, were interpolations into the natural world. Over a period of more than ten thousand years, however, technological artifacts accumulated until they displaced nature and came to constitute the background against which nature is seen. Nature then became an interpolation within the context of the technological innovations of civilizations. We have gardens and parks and zoos that interpolate plants and animals into the built environment, which is the environment created by technology.

With technology as the environment and the background of our lives, and not merely constituted by objects within our lives, technology now has an ontological dimension — it has its own laws, its own features, its own properties — and it has a frontier. We ourselves are objects within a technological world (hence the feeling of anomie from being cogs within an enormous machine); we populate an environment defined and constituted by technology, and as such bear some relationship to the ontology of technology as well as to its frontier. Technology conceived in this way, as a totality, suggests ways of thinking about technology parallel to our conceptions of humanity and civilization, inter alia.

One way to think about the technological frontier is as the human exploration of the technium. The idea of the technium accords well with the conception of the technological world as the context of human life that I described above. The “technium” is a term introduced by Kevin Kelly to denote the totality of technology. Here is the passage in which Kelly introduces the term:

“I dislike inventing new words that no one else uses, but in this case all known alternatives fail to convey the required scope. So I’ve somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the technium. The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. It includes intangibles like software, law, and philosophical concepts. And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections. For the rest of this book I will use the term technium where others might use technology as a plural, and to mean a whole system (as in “technology accelerates”). I reserve the term technology to mean a specific technology, such as radar or plastic polymers.”

Kevin Kelly, What Technology Wants

I previously wrote about the technium in Civilization and the Technium and The Genealogy of the Technium.

The concept of the technium can be extended in parallel to the schema I have applied to civilization in Eo-, Eso-, Exo-, Astro-, so that we have the concepts of the eotechnium, the esotechnium, the exotechnium, and the astrotechnium. (Certainly no one is going to employ this battery of unlovely terms I have coined — neither the words nor the concepts are immediately accessible — but I keep these ideas in the back of my mind and hope to further extend them, perhaps in a formal context in which symbols can be substituted for awkward words and the ideas can be presented systematically.)

● Eotechnium: the origins of technology, wherever and whenever it occurs, terrestrial or otherwise

● Esotechnium: our terrestrial technology

● Exotechnium: the extraterrestrial technium exclusive of the terrestrial technium

● Astrotechnium: the technium in its totality throughout the universe; the terrestrial and extraterrestrial technium taken together in their cosmological context

I previously formulated these permutations of technium in Civilization and the Technium. In that post I wrote:

The esotechnium corresponds to what has been called the technosphere, mentioned above. I have pointed out that the concept of the technosphere (like other -spheres such as the hydrosphere and the sociosphere, etc.) is essentially Ptolemaic in conception, i.e., geocentric, and that to make the transition to fully Copernican conceptions of science and the world we need to transcend our Ptolemaic ideas and begin to employ Copernican ideas. Thus to recognize that the technosphere corresponds to the esotechnium constitutes conceptual progress, because on this basis we can immediately posit the exotechnium, and beyond both the esotechnium and the exotechnium we can posit the astrotechnium.

We can already glimpse the astrotechnium, in so far as human technological artifacts have already reconnoitered the solar system and, in the case of the Voyager space probes, have left the solar system and passed into interstellar space. The technium, then, which originated on Earth as the eotechnium, now extends into space, and we can conceive the whole of this terrestrial technology together with our extraterrestrial technology as the astrotechnium.

It is a larger question yet whether there are other technological civilizations in the universe — it is the remit of SETI to discover if this is the case — and, if there are, there is an astrotechnium much greater than the one we have created by sending our probes through our solar system. A SETI detection of an extraterrestrial signal would mean that the technology of some other species had linked up with our technology, and by their transmission and our reception an interstellar astrotechnium would come into being.

The astrotechnium is at once a technological frontier and, because it extends throughout extraterrestrial space, a physical frontier as well. The exploration of the astrotechnium would thus be simultaneously an exploration of the technological frontier and an exploration of an actual physical frontier. This is surely the frontier in every sense of the term. But there are other senses as well.

We can go this taxonomy of the technium one better and also include the endotechnium, where the prefix “endo-” means “inside” or “interior.” The endotechnium is that familiar motif of contemporary thought: virtual reality becoming indistinguishable from the reality of nature. Virtual reality is immersion in the endotechnium.

I have noted (in An Idea for the Employment of “Friendly” AI) that one possible employment of friendly AI would be the on-demand production of virtual worlds for our entertainment (and possibly also our education). One would presumably instruct one’s AI interface (which already has all human artistic and intellectual accomplishments stored in its databanks) that one wishes to enter into a particular story. The AI generates the entire world virtually, and one employs one’s preferred interface to step into the world of the imagination. Why would one so immersed choose to emerge again?

One of the responses to the Fermi paradox is that any sufficiently advanced civilization that had developed to the point of being able to generate virtual reality of a quality comparable to ordinary experience would thereafter devote itself to the exploration of virtual worlds, turning inward rather than outward, forsaking the wider universe outside for the universe of the mind. In this sense, the technological frontier represented by virtual reality is the exploration of the human imagination (or, for some other species, the exploration of the alien imagination). This exploration was formerly carried out in literature and the arts, but we seem poised to enact this exploration in an unprecedented way.

There are, then, many senses of the technological frontier. Is there any common framework within which we can grasp the significance of these several frontiers? The most famous representative of the role of the frontier in history is of course Frederick Jackson Turner, for whom the Turner Thesis is named. At the end of his famous essay on the frontier in American life, Turner wrote:

“From the conditions of frontier life came intellectual traits of profound importance. The works of travelers along each frontier from colonial days onward describe certain common traits, and these traits have, while softening down, still persisted as survivals in the place of their origin, even when a higher social organization succeeded. The result is that to the frontier the American intellect owes its striking characteristics. That coarseness and strength combined with acuteness and inquisitiveness; that practical, inventive turn of mind, quick to find expedients; that masterful grasp of material things, lacking in the artistic but powerful to effect great ends; that restless, nervous energy; that dominant individualism, working for good and for evil, and withal that buoyancy and exuberance which comes with freedom — these are traits of the frontier, or traits called out elsewhere because of the existence of the frontier.”

Frederick Jackson Turner, “The Significance of the Frontier in American History,” which constitutes the first chapter of The Frontier In American History

Turner is not widely cited today, and his work has fallen into disfavor (especially targeted by the “New Western Historians”), but much that Turner observed about the frontier is not only true, but more generally applicable beyond the American experience of the frontier. I think many readers will recognize in the attitudes of those today on the technological frontier the qualities that Turner described in the passage quoted above, attributing them specially to the American frontier, which for Turner was, “an area of free land, its continuous recession, and the advance of American settlement westward.”

The technological frontier, too, is an area of free space — the abstract space of technology — the continuous recession of this free space as frontier technologies migrate into the ordinary business of life even while new frontiers are opened, and the advance of pioneers into the technological frontier.

One of the attractions of a frontier is that it is distant from the centers of civilization, and in this sense represents an escape from the disciplined society of mature institutions. The frontier serves as a refuge; the most marginal elements of society naturally seek the margins of society, at the periphery, far from the centers of civilization. (When I wrote about the center and periphery of civilization in The Farther Reaches of Civilization I could just as well have expressed myself in terms of the frontier.)

In the past, the frontier was defined in terms of its (physical) distance from the centers of civilization, but the world of high technology being created today is a product of the most technologically advanced centers of civilization, so that the technological frontier is defined by its proximity to the centers of civilization, understood as the centers of innovation and production for industrial-technological civilization.

The technological frontier nevertheless exists on the periphery of many of the traditional symbols of high culture that were once definitive of civilizational centers; in this sense, the technological frontier may be defined as the far periphery of the traditional center of civilization. If we identify civilization with the relics of high culture — painting, sculpture, music, dance, and even philosophy, all understood in their high-brow sense (and everything that might have featured symbolically in a seventeenth century Vanitas painting) — we can see that the techno-philosophy of our time has little sympathy for these traditional markers of culture.

The frontier has been the antithesis of civilization — civilization’s other — and the further one penetrates the frontier, moving always away from civilization, the nearer one approaches the absolute other of civilization: wildness and wilderness. The technological frontier offers to the human sense of adventure a kind of wildness distinct from that of nature as well as from the intellectual adventure of traditional culture. Although the technological frontier is in one sense antithetical to post-apocalyptic visions of formerly civilized individuals transformed into noble savages (visions usually marked by technological rejectionism), there is also a sense in which the technological frontier is like the post-apocalyptic frontier in its radical rejection of bourgeois values.

If we take the idea of the technological frontier in the context of the STEM cycle, we would expect that the technological frontier would have parallels in science and engineering — a scientific frontier and an engineering frontier. In fact, the frontier of scientific knowledge has been a familiar motif since at least the middle of the twentieth century. With the profound disruptions of scientific knowledge represented by relativity and quantum theory, the center of scientific inquiry has been displaced into an unfamiliar periphery populated by strange and inexplicable phenomena of the kind that would have been dismissed as anomalies by classical physics.

The displacement of traditional values of civilization, and even of traditional conceptions of science, gives the technological frontier its frontier character even as it emerges within the centers of industrial-technological civilization. In The Interstellar Imperative I asserted that the central imperative of industrial-technological civilization is the propagation of the STEM cycle. It is at least arguable that the technological frontier is both a result and a cause of the ongoing STEM cycle, which experiences its most unexpected advances when its scientific, technological, and engineering innovations seem to be at their most marginal and peripheral. A civilization that places itself within its own frontier in this way is a frontier society par excellence.

. . . . .


Lund Astronomical Clock

An article on NPR about a new atomic clock being developed by NIST scientists, New Clock May End Time As We Know It, immediately intrigued me, and I wrote a post on my other blog in which I suggested that the new clock might be used to update the “Einstein’s box” thought experiment (also known as the clock-in-a-box thought experiment). While I would like to follow up on this idea at some time, today I want to write about advanced chronometry in the context of the STEM cycle.

What if, in the clock-in-a-box thought experiment, we replace the clock with one so sensitive it can also function to measure the height of the box?


Atomic clocks are among the most precise scientific instruments ever developed. As such, precision clocks offer a good illustration of the STEM cycle, which I identified as the definitive feature of industrial-technological civilization. While this illustration is contemporary, there is nothing new about the use of the most advanced science, technology, and engineering available being employed in chronometry.

The Tower of the Winds in Athens held one of the most advanced timekeeping devices in classical antiquity; the tower still stands, but the mechanism is long gone.


The earliest sciences, already developed in classical antiquity, were mathematics and astronomy. These early scientific disciplines were applied to the construction of timekeeping mechanisms. Among the most interesting technological artifacts of the ancient world are the clock once installed in the Tower of the Winds in Athens (which was described in antiquity, but which no longer exists) and the Antikythera mechanism, the corroded remains of which were dredged up from a shipwreck off the Greek island of Antikythera (while discovered by sponge divers in 1900, the site is still yielding finds). A classic paper on the Tower of the Winds compares these two technologies: “This is a field in which ancient literature is curiously meager, as we well know from the complete lack of any literary reference to a technology that could produce the Antikythera Mechanism of the same date.” (“The Water Clock in the Tower of the Winds,” Joseph V. Noble and Derek J. de Solla Price, American Journal of Archaeology, Vol. 72, No. 4, Oct., 1968, pp. 345-355) Both of these artifacts are concerned with chronometry, which demonstrates that the most advanced technologies, then and now, have been employed in the measurement of time.


The advent of high technology as we know it today — unprecedented in human history — has been the result of the advent of a new kind of civilization — industrial-technological civilization — and the use of advanced technologies in chronometry provides a useful lens through which to view one of the unique features of our civilization today, which I call the STEM cycle. The acronym STEM is familiar from educational contexts, where it refers to education and training in science, technology, engineering, and mathematics, so I have taken over this acronym as the name for one of the socioeconomic processes that lies at the heart of our civilization: Science seeks to understand nature on its own terms, for its own sake. Technology is that portion of scientific research that can be developed specifically for the realization of practical ends. Engineering is the industrial implementation of a technology. Mathematics is the common language in which the elements of the cycle are formulated. A feedback loop of science driving technology, driving engineering, driving more science, characterizes industrial-technological civilization. This is the STEM cycle.
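The feedback loop just described can be caricatured in a few lines of code. This is only a toy sketch under invented assumptions — the coupling coefficient and the starting values are illustrative, not drawn from anything in this post — but it shows the one structural point that matters: because the loop is closed, each stage compounds the others.

```python
# Toy model of the STEM cycle: science drives technology, technology drives
# engineering, and engineering drives more science. All coefficients and
# starting values are illustrative assumptions, not empirical quantities.

def stem_cycle(years, s=1.0, t=1.0, e=1.0, coupling=0.1):
    """Iterate the feedback loop and return the (s, t, e) trajectory."""
    history = []
    for _ in range(years):
        s += coupling * e  # engineered instruments enable new science
        t += coupling * s  # new science opens new technologies
        e += coupling * t  # technologies are industrially implemented
        history.append((s, t, e))
    return history

trajectory = stem_cycle(50)
# With coupling=0 the loop is severed and every value stays flat at 1.0;
# with any positive coupling, each stage grows faster than it could alone.
```

The design choice worth noticing is that no stage grows on its own: remove the coupling and the whole system stagnates, which is the sketch's analogue of a loosely-coupled, pre-industrial relationship among science, technology, and engineering.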

Ammonia maser frequency standard built 1949 at the US National Bureau of Standards (now National Institute of Standards and Technology) by Harold Lyons and associates. (Wikipedia)


The distinctions between science, technology, and engineering are not absolute — far from it. To employ a terminology I developed elsewhere, I would say that science is only weakly distinct from technology, technology is only weakly distinct from engineering, and engineering is only weakly distinct from science. In some contexts any two elements of the STEM cycle are identical, while in other contexts of the STEM cycle they are starkly contrasted. This is not due to inconsistency, but rather to the fact that science, technology, and engineering are open-textured concepts; we could adopt conventional distinctions that would make them strongly distinct, but this would be contrary to usage in ordinary language and would only result in confusion. Given the lack of clear distinctions among science, technology, and engineering, where we draw the dividing lines within the STEM cycle is to some degree arbitrary — we could describe this cycle in different terms, employing different distinctions — but the cycle itself is not arbitrary. By any other name, it drives industrial-technological civilization.


The clock that was the inspiration for this post — the new strontium atomic clock, described in JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability, and the subject of a scientific paper, An optical lattice clock with accuracy and stability at the 10^−18 level, by B. J. Bloom, T. L. Nicholson, J. R. Williams, S. L. Campbell, M. Bishof, X. Zhang, W. Zhang, S. L. Bromley, and J. Ye (a preprint of the article is available at Arxiv) — is instructive in several respects. In so far as we consider atomic clocks to be a generic “technology,” the strontium clock represents the most advanced instance of this technology yet constructed; more specifically, it is an optical lattice clock, a particular form within the generic division of atomic clocks. The sciences involved in the conceptualization of atomic clocks are fundamental: atomic physics, quantum theory, relativity theory, thermodynamics, and optics. Atomic clocks are a technology built from other technologies, including advanced materials, lasers, masers, vacuum chambers, refrigeration, and computers. Building the technology into an optimal device involves engineering for dependability, economy, miniaturization, portability, and refinements of design.

JILA's experimental atomic clock based on strontium atoms held in a lattice of laser light is the world's most precise and stable atomic clock. The image is a composite of many photos taken with long exposure times and other techniques to make the lasers more visible. (Ye group and Baxley/JILA)


The NIST web page notes that, “NIST invests in a number of atomic clock technologies because the results of scientific research are unpredictable, and because different clocks are suited for different applications.” (For further background on atomic clocks at NIST cf. A New Era for Atomic Clocks.) The new record-breaking clocks in terms of stability and accuracy are experimental devices; the current standard for timekeeping is the NIST-F2 “cesium fountain” atomic clock. The transition from the previous timekeeping standard, NIST-F1, to the present standard, NIST-F2, is largely a result of engineering refinements of the earlier atomic clock. Even the experimental strontium clock is likely to be soon surpassed. JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability quotes Jun Ye as saying, “We already have plans to push the performance even more, so in this sense, even this new Nature paper represents only a ‘mid-term’ report. You can expect more new breakthroughs in our clocks in the next 5 to 10 years.”


The engineering refinement of high technology has two important consequences:

1) inexpensive, widely available devices (which I will call the ubiquity function), and…

2) improved, cutting edge devices that improve the precision of measurement (which I will call the meliorative function), sometimes improved by an order of magnitude (or several orders of magnitude).

These latter devices, those that represent greater precision, are not likely to be inexpensive or widely available, but as the STEM cycle continues to advance science, technology, and engineering in a regular and predictable manner, the older generation of technology becomes widely available and inexpensive as new technologies take their place on the expensive cutting edge. However, these cutting edge technologies are in turn displaced by newer technologies, and the cycle continues. Thus there is a relationship — an historical relationship — between the two consequences of the engineering refinement of technology. Both of these phases in the life of a technology affect the practice of science. NIST Launches a New U.S. Time Standard: NIST-F2 Atomic Clock quotes NIST physicist Steven Jefferts, lead designer of NIST-F2, as saying, “If we’ve learned anything in the last 60 years of building atomic clocks, we’ve learned that every time we build a better clock, somebody comes up with a use for it that you couldn’t have foreseen.”
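The historical relationship between the two functions can be sketched as a toy succession of device generations. Everything here is invented for illustration — the generation numbers stand in for any technology, atomic clocks included — but the structure is the one described above: each new cutting-edge device (the meliorative function) displaces its predecessor down into the inexpensive, widely available tier (the ubiquity function).

```python
# Toy sketch of the ubiquity and meliorative functions. Each generation of a
# technology debuts at the expensive cutting edge (meliorative function) and
# is later displaced into cheap ubiquity (ubiquity function). Illustrative only.

def advance_generations(n):
    """Run n generations; return the current cutting edge and the ubiquitous tier."""
    cutting_edge = None
    ubiquitous = []
    for generation in range(1, n + 1):
        if cutting_edge is not None:
            # ubiquity function: the displaced generation becomes cheap and common
            ubiquitous.append(cutting_edge)
        # meliorative function: a new generation sets the precision record
        cutting_edge = generation
    return cutting_edge, ubiquitous

edge, ubiquitous_tier = advance_generations(4)
# After four generations, only the latest (4) sits at the cutting edge;
# its three predecessors (1, 2, 3) have migrated into the ubiquitous tier.
```

The point of the sketch is that the two functions are not alternatives but successive phases in the life of a single device generation.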


Widely available precision measurement devices (the ubiquity function) bring down the cost of scientific research, and we begin to see science cropping up in all kinds of interesting and unexpected places. The development of computer technology and then the miniaturization of computers had the unintended result of making computers inexpensive and widely available. This, in turn, has meant that everyone doing science carries a portable computer with them, and this widely available computational power (which I have elsewhere called the computational infrastructure of civilization) has transformed how science is done. NIST Atomic Devices and Instrumentation (ADI) now builds “chip-scale” atomic clocks, which both commercializes and thereby democratizes atomic clock technology in a form factor so small that it could be included in a cell phone (or whatever mobile device form factor you prefer). This is a perfect illustration of the ubiquity function in an engineering application of atomic clock technology.

New cutting edge precision measurement devices (the meliorative function), employed only by the governments and industries that can afford to push the envelope with the latest technology, are scientific instruments of great sensitivity; increasing the precision of the measurement of time by an order of magnitude opens up new possibilities the consequences of which cannot be predicted. What can be predicted, however, is that the present generation of high precision measurement devices makes it possible to construct the next generation of precision measurement devices, which exceed the precision of the previous generation of devices. A clock built to a new design that is far more precise than its predecessors (like the strontium atomic clock) may not necessarily find its cutting edge scientific application exclusively in the measurement of time (though, again, it might do that also), but as a scientific instrument of great sensitivity it suggests uses throughout the sciences. A further distinction can be made, then, between instruments used for the purposes they were intended to serve, and instruments that are exapted for unintended uses.

A loosely-coupled STEM cycle is characterized primarily by the ubiquity function, while a tightly-coupled STEM cycle is characterized primarily by the meliorative function. Human civilization has always involved a loosely-coupled STEM cycle, sometimes operating over thousands of years, with no apparent relationship between science, technology, and engineering. Technological progress was slow and intermittent under these conditions. However, the productivity of industrial-technological civilization is such that its STEM cycle yields both the ubiquity function and the meliorative function, which means that there are in fact multiple STEM cycles running concurrently, both loosely-coupled and tightly-coupled.

The research and development branch of a large business enterprise is the conscious constitution of a limited, tightly-coupled STEM cycle in which only that science is pursued that is expected to generate specific technologies, and only those technologies are developed that can be engineered into marketable products. An open loop STEM cycle, a loosely-coupled STEM cycle, or exaptations of the STEM cycle are seen as wasteful, but in some cases the unintended consequences from commercial enterprises can be profound. When Arno Penzias and Robert Wilson were hired by Bell Labs, it was with the promise that they could use the Holmdel Horn Antenna for pure science once they had done the work that Bell Labs would pay them for. As it turned out, the actual work of tracking down interference resulted in the discovery of cosmic microwave background radiation (CMBR), earning Penzias and Wilson the Nobel prize. An engineering problem became a science problem: how do you explain the background interference that cannot be eliminated from electronic devices?

. . . . .



Technologies may be drivers of change or facilitators of change, the latter employed by the former as the technologies that enable the development of technologies that are drivers of change; that is to say, technologies that are facilitators of change are tools for the technologies that are in the vanguard of economic, social, and political change. Technologies, when introduced, can provide a competitive advantage while one business enterprise has mastered them and other business enterprises have not. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all. At that point in its mature development, a technology also ceases to be a driver of change and becomes a facilitator of change.

Any technology that has become a part of the infrastructure may be considered a facilitator of change rather than a driver of change. Civilization requires an infrastructure; industrial-technological civilization requires an industrial-technological infrastructure. We are all familiar with infrastructure such as roads, bridges, ports, railroads, schools, and hospitals. There is also the infrastructure that we think of as “utilities” — water, sewer, electricity, telecommunications, and now computing — which we build into our built environment, retrofitting old buildings and sometimes entire older cities in order to bring them up to the standards of technology assumed by the industrialized world today.

All of the technologies that now constitute the infrastructure of industrial-technological civilization were once drivers of change. Before the industrial revolution, the building of ports and shipping united coastal communities in many regions of the world; the Romans built a network of roads and bridges; in medieval Europe, schools and hospitals became a routine part of the structure of cities; early in the industrial revolution railroads became the first mechanized form of rapid overland transportation. Consider how the transcontinental railroad in North America and the trans-Siberian railway in Russia knitted together entire continents, and their role as transformative technologies is clear.

Similarly, the technologies we think of as utilities were once drivers of change. Hot and cold running water and indoor plumbing, still absent in much of the world, did not become common in the industrialized world until the past century, but early agricultural and urban centers only came into being with the management of water resources, which reached a height in the most sophisticated cities of classical antiquity, with water supplied by aqueducts and sewage taken away by underground drainage systems that were superior to many in existence today. With the advent of natural gas and electricity as fuels for home and industry, industrial cities were retrofitted for these services, and have since been retrofitted again for telecommunications, and now computers.

The most recent technology to have a transformative effect on socioeconomic life was computing. In the past several decades — since the end of the Second World War, when the first digital, programmable electronic computers were built for code breaking (the Colossus in the UK) — computer technology grew exponentially and eventually affected almost every aspect of life in industrialized nation-states. During this period of time, computing has been a driver of change across socioeconomic institutions. Building a faster and more sophisticated computer has been an end in itself for technologists and computer science researchers. While this will continue to be the case for some time, computing has begun to make the transition from being a driver of change in and of itself to being a facilitator of change in other areas of technological innovation. In other words, computers are becoming a part of the infrastructure of industrial-technological civilization.

The transformation of the transformative technology of computing from a driver of change into a facilitator of change for other technologies has been recognized for more than ten years. In 2003 an article by Nicholas G. Carr, Why IT Doesn’t Matter Anymore, stirred up a significant controversy when it was published. More recently, Mark R. DeLong, in Research computing as substrate, calls computing a substrate instead of an infrastructure, though the idea is much the same. DeLong writes of computing: “It is a common base that supports and nurtures research work and scholarly endeavor all over the university.” Although computing is a focus of research work and scholarly endeavor in and of itself, it also serves a larger supporting role, not only in the university, but also throughout society.

Although today we still fall far short of computational omniscience, the computer revolution has happened, as evidenced by the pervasive presence of computers in contemporary socioeconomic institutions. Computers have been rapidly integrated into the fabric of industrial-technological civilization, to the point that those of us born before the computer revolution, and who can remember a world in which computers were a negligible influence, can nevertheless only with difficulty remember what life was like without computers.

Despite, then, what technology enthusiasts tell us, computers are not going to revolutionize our world a second time. We can imagine faster computers, smaller computers, better computers, computers with more storage capacity, and computers running innovative applications that make them useful in unexpected ways, but the pervasive use of computers that has already been achieved gives us a baseline for predicting future computer capacities, and these capacities will be different in degree from those of earlier computers, but not different in kind. We already know what it is like to see exponential growth in computing technology, and so we can account for it; computers have ceased to be a disruptive technology, and will not become a disruptive technology a second time.

Recently quantum computing made the cover of TIME magazine, together with a number of hyperbolic predictions about how quantum computing will change everything (the quantum computer is called “the infinity machine”). There have been countless articles about how “big data” is going to change everything also. Similar claims are made for artificial intelligence, and especially for “superintelligence.” An entire worldview has been constructed — the technological singularity — in which computing remains an indefinitely disruptive technology, the development of which eventually brings about the advent of the Millennium — the latter suitably re-conceived for a technological age.

Predictions of this nature are made precisely because a technology has become widely familiar, which is almost a guarantee that the technology in question is now part of the infrastructure of the ordinary business of life. One can count on being understood when one makes predictions about the future of the computer, in the same way that one might have been understood in the late nineteenth or early twentieth century if making predictions about the future of railroads. But in so far as this familiarity marks the transition in the life of a technology from being a driver of change to being a facilitator of change, such predictions are misleading at best, and flat out wrong at worst. The technologies that are going to be drivers of change in the coming century are not those that have devolved to the level of infrastructure; they are (or will be) unfamiliar technologies that can only be understood with difficulty.

The distinction between technologies that are drivers of change and technologies that are facilitators of change (like almost all distinctions) admits of a certain ambiguity. In the present context, one of these ambiguities concerns what constitutes a computing technology. Are computing applications distinct from computing? What of technologies for which computing is indispensable, and which could not have come into being without computers? This line of thought can be pursued backward: computers could not exist without electricity, so should computers be considered anything new, or merely an extension of electrical power? And electrical power could not have come about without the steam- and fossil-fueled industry that preceded it. This can be pursued back to the first stone tools, and the argument can be made that nothing new has happened, in essence, since the first chipped flint blade.

Perhaps the most obvious point of dispute in this analysis is the possibility of machine consciousness. I will acknowledge without hesitation that the emergence of machine consciousness is a potentially revolutionary development, and it would constitute a disruptive technology. Machine consciousness, however, is frequently conflated with artificial intelligence and with superintelligence, and machine consciousness must be distinguished from both. Artificial intelligence of a rudimentary form is already crucial to the automation of industry; machine consciousness would be the artificial production, in a machine substrate, of the kind of consciousness that we personally experience as our own identity, and which we infer to be at the basis of the action of others (what philosophers call the problem of other minds).

What makes the possibility of machine consciousness interesting to me, and potentially revolutionary, is that it would constitute a qualitatively novel emergent from computing technology, and not merely another application of computing. Computers stand in the same relationship to electricity in which machine consciousness would stand to computing: a novel and transformational technology emergent from an infrastructural technology, that is to say, a driver of change that emerges from a facilitator of change.

The computational infrastructure of industrial-technological civilization is more or less in place at present, a familiar part of our world, like the early electrical grids that appeared in the industrialized world once electricity became sufficiently commonplace to become a utility. Just as the electrical grid has been repeatedly upgraded, and will continue to be upgraded for the foreseeable future, so too the computational infrastructure of industrial-technological civilization will be continually upgraded. But the upgrades to our computational infrastructure will be incremental improvements that will no longer be major drivers of change either in the economy or in sociopolitical institutions. Other technologies will emerge that will take that role, and they will emerge from an infrastructure that is no longer driving socioeconomic change, but is rather the condition of the possibility of this change.

. . . . .


. . . . .


. . . . .

Grand Strategy Annex

. . . . .

The Technology of Living

28 August 2013



Variations on a Theme of Le Corbusier

Le Corbusier famously (or notoriously, depending upon your point of view) said that a house is a machine for living in (“Une maison est une machine-à-habiter”). This appears in his manifesto of modern architecture Vers une architecture of 1923 (which has been translated as Towards a New Architecture and more recently as Toward an Architecture), and it would be worthwhile to consider the context in which Le Corbusier made this assertion. It appears at least three times in Le Corbusier’s book, as follows, first in the opening “Argument” of the book:

“The airplane is the product of close selection. The lesson of the airplane lies in the logic which governed the statement of the problem and its realization. The problem of the house has not yet been stated. Nevertheless there do exist standards for the dwelling house. Machinery contains in itself the factor of economy, which makes for selection. The house is a machine for living in.” (p. 4)

In the section, “Eyes which Do Not See” (elaborating on the “argument” given above), Le Corbusier wrote:

“A house is a machine for living in. Baths, sun, hot-water, cold-water, warmth at will, conservation of food, hygiene, beauty in the sense of good proportion. An armchair is a machine for sitting in and so on.” (p. 95)

And again in the last essay, “Mass Production Houses,” Le Corbusier wrote:

“‘Citrohan’ (not to say Citroën). That is to say, a house like a motor-car, conceived and carried out like an omnibus or a ship’s cabin. The actual needs of the dwelling can be formulated and demand their solution. We must fight against the old-world house, which made a bad use of space. We must look upon the house as a machine for living in or as a tool.” (p. 240)

What Le Corbusier was reacting against in his manifesto was the traditional European house, the old-world house, as he calls it. It is probably pointless to ask if a manifesto is right or wrong, as it is the nature of a manifesto to be polemical, i.e., rhetorical, and therefore not meant to be held to standards of logic or reason applicable elsewhere. It is probably more helpful to go into the detail of what Le Corbusier was condemning in the traditional house: citing his litany of “Baths, sun, hot-water, cold-water, warmth at will, conservation of food, hygiene” we can obtain, by way of the via negativa, his image of the traditional house. In many respects, Le Corbusier was completely justified. Let me try to explain.

I have mentioned in past posts my interest in seeking out open-air museums in Europe. Last year I mentioned the Hardanger open-air museum at Utne and the Sogn open-air museum near Sogndal. Today I visited an open-air museum in the north of Gotland at Bunge, the Bungemuseet, which not only collects many traditional houses and rural industrial buildings together, but also includes many picture stones, as I mentioned yesterday.

The traditional houses preserved in open-air museums have a certain kind of rustic beauty, though this may not correspond to Le Corbusier’s canon of “beauty in the sense of good proportion.” I admit I am fascinated by these old houses, and take any opportunity I have to visit them. But as much as I am enthralled by them, I can see that Le Corbusier was right. If you have never lived in an old house you may not understand what Le Corbusier is talking about when he writes of “warmth at will,” but I can assure you from personal experience that older, drafty houses heated by woodstoves do not give warmth at will. Most houses today do give warmth at will, so people have forgotten what a great advance over the past this is.

As for the rest of Le Corbusier’s litany, these houses had no running water, much less hot and cold running water. They had no indoor bathrooms, showers, or bathtubs. The windows were small and dim, letting in little light. Their kitchens had no modern conveniences or appliances, so there was no conservation of food. Le Corbusier focused on the needs of the body, but the needs of the mind were equally wanting. When I look around these cramped homes in which people like my ancestors lived, I realize how little intellectual stimulation they had. Even in the midst of civilization, it seems, having entered into a social contract, life can be “poor, nasty, brutish, and short” — in Hobbes’ famous phrase — but it was not likely solitary. People had to live closely packed together just to survive.

It is always humbling to me to see the conditions under which our ancestors lived, and to reflect how far we have come, and how quickly. But I also observe the remarkable level of technology involved in even the most rudimentary dwelling, and the way of life it implies. If a house is a machine for living in, as Le Corbusier said, then different houses are different machines, and each housing mechanism is integrated into a particular technology of living.

In my many visits to museums I have, for example, seen many traditional spinning wheels. Some of these are very rudimentary and easy to understand, but the later ones from the 19th century, before the industrial revolution rendered them all obsolete, are quite complex and could only be operated by someone with a significant level of skill and knowledge in this particular technology. I suspect that if a person started with the simplest spinning wheel and used it for a while, its limitations would become obvious over time, and one might begin to see how and why the additional complexities were introduced; one might, in this fashion, ontogenetically reconstruct the phylogeny of a technology.

An entire house, even a traditional house, as a machine for living in, is like the spinning wheel, and to live in a house according to the way of life for which it was designed is to understand why it was built in the way it was built. But we don’t get to live in the houses and rooms we see in museums; we observe them briefly, and so we do not really understand them.

The Gotland open-air museum also displayed a large number of structures associated with rural industries, including an unusual wind-driven saw. Most of these mechanisms were beyond being brought back into service, although I turned the crank on one old mechanism and its wooden teeth and gears still meshed perfectly, and I suspect the machine was still usable. But I didn’t know what it was for; I didn’t understand its function. These several literally “cottage” industries all involved the production of the most basic necessities of life — the production of food and clothing — and the industrial processes behind them were surprisingly complex, involving many stages of production and specialized workers. The lives of these workers, in turn, would have reflected their involvement with the industries they had mastered. Rural characters such as the butcher, the baker, and the candlestick maker — not to mention the blacksmith, the carpenter, and the miller — are as familiar in the stories we still retain from those times as the social roles of today that represent industrialized society (banker, salesman, clerk, mechanic, etc.).

This made me think of the Vasa warship that I recently saw in Stockholm, which was not only enormous, but also a highly specialized and intricate piece of technology. If you took a few hundred intelligent and educated persons of today and put them on the Vasa as its crew, they literally would not even know where to begin to get the ship underway. Our advanced technology and engineering knowledge does not replace or supersede the technological and engineering knowledge of our ancestors; we could no more cope with their world than they could cope with ours — though either, given the time, could learn the life of the other.

The technologies of living are many and various; the lives of individuals are integrated into a technology of living that is adapted to their place and time, and the houses in which individuals live are both technologies in and of themselves as well as being integrated into a wider technological context. What is this wider technological context? Adam Smith’s famous example of the woolen coat furnishes us with the perfect example of technological synchrony.

Here is a typically longish paragraph from Smith, which I have not quoted in its entirety, but I have quoted at sufficient length to give a proper appreciation of Smith’s conception:

“The woollen coat, for example, which covers the day-labourer, as coarse and rough as it may appear, is the produce of the joint labour of a great multitude of workmen. The shepherd, the sorter of the wool, the wool-comber or carder, the dyer, the scribbler, the spinner, the weaver, the fuller, the dresser, with many others, must all join their different arts in order to complete even this homely production. How many merchants and carriers, besides, must have been employed in transporting the materials from some of those workmen to others who often live in a very distant part of the country? How much commerce and navigation in particular, how many ship-builders, sailors, sail-makers, rope-makers, must have been employed in order to bring together the different drugs made use of by the dyer, which often come from the remotest corners of the world? What a variety of labour, too, is necessary in order to produce the tools of the meanest of those workmen! To say nothing of such complicated machines as the ship of the sailor, the mill of the fuller, or even the loom of the weaver, let us consider only what a variety of labour is requisite in order to form that very simple machine, the shears with which the shepherd clips the wool. The miner, the builder of the furnace for smelting the ore, the feller of the timber, the burner of the charcoal to be made use of in the smelting-house, the brickmaker, the bricklayer, the workmen who attend the furnace, the millwright, the forger, the smith, must all of them join their different arts in order to produce them. 
Were we to examine, in the same manner, all the different parts of his dress and household furniture, the coarse linen shirt which he wears next his skin, the shoes which cover his feet, the bed which he lies on, and all the different parts which compose it, the kitchen-grate at which he prepares his victuals, the coals which he makes use of for that purpose, dug from the bowels of the earth, and brought to him, perhaps, by a long sea and a long land-carriage, all the other utensils of his kitchen, all the furniture of his table, the knives and forks, the earthen or pewter plates upon which he serves up and divides his victuals, the different hands employed in preparing his bread and his beer, the glass window which lets in the heat and the light, and keeps out the wind and the rain, with all the knowledge and art requisite for preparing that beautiful and happy invention, without which these northern parts of the world could scarce have afforded a very comfortable habitation, together with the tools of all the different workmen employed in producing those different conveniencies…”

Adam Smith, The Wealth of Nations, Book I, Chap. I

If we think through Smith’s imaginative litany of craftsmen, and reflect on the fact that such a list could be made much longer and with much greater detail, we can better understand how technological change introduced within this complex synchronic web of inter-dependencies must of necessity only slowly make its impact felt throughout the whole system of production. However, all of these innovations are occurring in the same parallel, synchronic fashion, and these collected innovations, each incrementally affecting the whole, slowly lead to changes in the whole, though it is difficult in the extreme to indicate any one point of transition. The temptation is to identify and name a decisive point of transition, but this is a falsification of history.

Our lives, and the mechanisms by which we live them — our technology of living, as it were — are as integrated into a technological context as were the lives of our ancestors. These technologies are very different, so different in fact that it is difficult to discern the underlying continuity that led from the one to the other, but it was countless small changes that added up to the transition from the subsistence agriculture of agrarian-ecclesiastical civilization to the escalating production powers of industrial-technological civilization.




Fusion: nature got there first

Fusion came very early in the history of the universe, and consciousness came very late in the history of the universe — these two natural technologies come so early and so late, respectively, that one could say that they “bookend” cosmological history as the Alpha and Omega of cosmic evolution.


After an initial period of big bang nucleosynthesis in the first twenty minutes of the life of the cosmos, the universe did little in the way of producing more baryonic matter until gravity took over, and the baryonic matter condensed into early stars. Stars began to “light up” about 100 million years after the big bang, which in cosmological terms is not a terribly long time. This “lighting up” of the stars has been said to mark the advent of the stelliferous era.


In the almost 14 billion years of the universe’s history, stars have been shining for all but the first 100 million years — the vast majority of the age of the universe. What this means is that fusion has been around for the vast majority of the history of the universe. Nature innovated fusion technology early on, and fusion has continued to be central to the natural processes of the universe up to the present time and for the foreseeable future.
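The “vast majority” claim is easy to quantify. A minimal sketch, using the round figures cited above (an age of the universe of roughly 13.8 billion years, with stars lighting up about 100 million years in):

```python
# Fraction of cosmic history during which stars have been shining,
# using the round figures cited above (in billions of years).
age_universe = 13.8   # approximate age of the known universe
first_stars = 0.1     # stars "light up" ~100 million years after the big bang

stelliferous_fraction = (age_universe - first_stars) / age_universe
print(f"{stelliferous_fraction:.1%}")  # ~99.3% of the age of the universe
```

On these figures, the stelliferous era spans more than ninety-nine percent of cosmic history to date.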

It has been said that human beings are a solar species. I wrote about this in my post Human Beings: A Solar Species. To say that human beings are a solar species is to say that we are a species dependent upon fusion. All life, and not only our species, is dependent upon the energy generated by fusion, so that fusion is responsible for all (or almost all) subsequent emergent complexity.

Fusion is a basic technology of the universe, a conditio sine qua non of cosmological order and its history. As such, fusion is a robust and durable technology proved over billions of years. Fusion as a natural source of energy is achieved through gravitational containment, and while human technology is not yet in a position to exploit the technology of gravitational containment, we have a very clear idea of its mechanism, as we have sophisticated physical theories to account for it. In other words, we have a good understanding of a technology that is one of the early building blocks of the universe.

Other technologies of nature

It is interesting, in this context, to consider other natural technologies and their place in cosmological natural history. We know, for example, from a 1972 discovery in Gabon, Africa, that fission, like fusion, is a natural technology. At Oklo in Gabon, about 1.7 billion years ago, just the right elements came together with a critical mass of fissionables to produce self-sustaining nuclear chain reactions.


Fissionables are relatively rare, and we know that these heavier elements are created by supernovae, so that natural fission reactors could not come about until after (at the very minimum) the first generation of stars (Population III) had gone supernova and flung their radioactive remnants into the universe. The date of the natural reactor at Gabon makes it quite old, but still not half as old as the earth itself, and nowhere near as old as fusion. It has been proposed that there was a “paleo-reactor” on Mars in the distant past, and it is interesting to speculate how widely spread, or how rare, fission technology is in the universe. We will not know until we explore in detail.

Another natural technology of note is life itself. Current biological thought suggests that life emerged on earth not long after the planet began to cool. The Earth is thought to be about 4.54 billion years old, and life may have arisen as much as 3.9 billion years ago; in other words, the Earth has hosted life for far longer than it was sterile. The earth has, in turn, existed for almost a third as long as the entire universe, which means that life (at very least on earth, if nowhere else) has been around for roughly a quarter of the age of the known universe. That makes life a well-established and robust natural technology.
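The proportions just cited can be checked in a few lines; the ages below are the approximate round figures used in this post:

```python
# Back-of-the-envelope check of the proportions cited above.
# All ages are in billions of years, approximate round figures.
age_universe = 13.8  # age of the known universe
age_earth = 4.54     # age of the Earth
age_life = 3.9       # earliest plausible origin of life on Earth

print(f"Earth / universe: {age_earth / age_universe:.2f}")  # ~0.33, almost a third
print(f"life / universe:  {age_life / age_universe:.2f}")   # ~0.28, roughly a quarter
print(f"sterile period:   {age_earth - age_life:.2f} billion years")
```

The sterile period of roughly 0.64 billion years is indeed much shorter than the nearly four billion years during which the Earth has hosted life.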

A recent paper, Life Before Earth by Alexei A. Sharov and Richard Gordon, suggests that if the complexity of life is extrapolated backward in time we must posit an origin of life about 9.7 billion years ago, almost twice the age of the earth; this suggests in turn that the earth was “seeded” with life as soon as it was cool enough to support life, rather than life having arisen independently on Earth. While this thesis is, in my judgment, rather tenuous, it cannot be dismissed out of hand, and if it is correct, it shows life to be an even longer-lived and more durable technology than we now suspect it to be.
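The shape of the Sharov and Gordon extrapolation can be sketched in a few lines. The doubling time and present-day complexity figures below are illustrative assumptions of mine, not the paper’s fitted values; the point is only to show how an exponential regression on genome complexity, run backward to a one-base-pair origin, can land well before the formation of the Earth:

```python
import math

# Backward extrapolation in the spirit of Sharov and Gordon's "Life Before
# Earth": assume functional genome complexity doubles at a fixed rate, and
# ask how long ago a single-base-pair origin would then lie.
# Both figures below are illustrative assumptions, not the paper's fit.
doubling_time_gy = 0.35       # billions of years per doubling (assumed)
present_complexity_bp = 6e7   # functional, non-redundant base pairs (assumed)

doublings = math.log2(present_complexity_bp)  # doublings since a 1 bp origin
origin_gya = doublings * doubling_time_gy
print(f"{doublings:.1f} doublings -> origin ~{origin_gya:.1f} billion years ago")
# With these assumed figures the origin falls ~9 billion years ago,
# roughly twice the age of the Earth.
```

Everything here turns on the assumption that the exponential trend can be projected backward unchanged, which is precisely the point on which the thesis is tenuous.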

Just as we are curious whether there have been other naturally occurring fission reactors in the universe, we are intensely interested in the possibility of life elsewhere in the universe: the robust and durable technology of life on earth suggests that this technology may well be replicated elsewhere, as pervasive, where conditions are right, as fusion technology is pervasive in the universe. The existence of life elsewhere in the cosmos is one of the great scientific questions of our time.

Consciousness: nature got there first, too

In contradistinction to fusion, the technology of consciousness arrives late in the history of the universe. There were likely rudimentary forms of consciousness prior to the particular forms of mammalian consciousness familiar to us, both in ourselves and in the other mammals with whom we often share our lives, and mammalian consciousness is a robust natural technology about 160 million years old (interestingly, not so much more distant from the present than the lighting up of the stars was distant from the big bang). But the intelligent, self-reflective consciousness of human beings seems to be even younger than the bodies of anatomically modern human beings.

The late emergence of consciousness in the history of the universe is interesting in so far as it demonstrates that the universe, even at its present advanced age, is still capable of technological innovation.

In regard to consciousness, we are closing in on the mechanisms of the brain that enable the emergence of consciousness from a material substrate, but, unlike the case with fusion, we have no idea whatsoever what consciousness is and have no theory to account for it. Of course I am aware that many will disagree with me on this — even, if not especially, those scientifically-oriented readers who found themselves nodding over what I wrote above about fusion, and who have convinced themselves of the truth of some reductivist or eliminativist theory of consciousness.

Hugo de Garis, who appeared in the film about Ray Kurzweil, Transcendent Man, said in an interview (Interview with Hugo de Garis: Approaches to AI, Neuroscience, Engineering, Intelligence Theory, Cyborgs interviewed, filmed and edited by Adam A. Ford) that, “…we have ourselves as the existence proof that nature has found a way to [build] a conscious, intelligent creature.” (We could, in the same spirit, say that stars are the existence proof of fusion energy.) This is a perfect evocation of the weak anthropic principle as applied to consciousness and intelligence: we’re here, and we’re conscious, therefore consciousness is possible and the universe is consistent with the emergence of conscious life.

The possibility of conscious knowledge of consciousness

These natural technologies are not just randomly jumbled together, but are in fact closely related. The fusion technology of stars enabled energy production that was exploited by life, which latter grew in complexity until it made possible the even more subtle and complex technology of conscious intelligence. The earliest of these technologies, fusion, we understand well; the latest of these technologies, not surprisingly, still eludes us.

And in saying that a full understanding of consciousness still eludes us, what we are saying is that consciousness so far understands the natural technologies that made it possible, but it does not yet understand itself in the same way. We may yet attain the full measure of reflexive self-awareness when consciousness knows itself in the same way that it understands fusion technology. This will take time, since, as we have noted, consciousness is a youthful technology of nature.

Consciousness, too, may someday become as pervasive in the universe as fusion. Indeed, fusion, which we can see operating everywhere in the known universe, is the first precondition of life, and if life too has been made pervasive by pervasive fusion energy sources, the technology of life may, in the fullness of time, give rise to the technology of conscious intelligence. But consciousness is a late-comer in the cosmological order, and has not yet shown itself to be a technology of nature as robust and as durable as fusion. Only the test of time can demonstrate this.




Since posting Automation and the Human Future a few days ago, a reader has directed my attention to Technological Unemployment Amidst Stagnation at All Systems Need A Little Disorder by Ashwin Parameswaran. I have previously mentioned Ashwin Parameswaran’s blog, Macroeconomic Resilience, in my post Self-Dissimilarity.

While my last post credited the fear of technogenic unemployment primarily to recession-induced pessimism, Parameswaran takes technogenic unemployment very seriously, and anticipates “Transitioning To The Near-Automated Economy,” even considering the changes that must come about in education as this transition is made. What Parameswaran writes is so wonderfully sane and reasonable, and I agree with so much of it (indeed, it warmed my heart to see him refer to our economy today as “neo-feudal,” as this is a point that I have made many times), that I hesitate to differ with him; and I don’t need to differ with Parameswaran too much if we adjust our expectations to la longue durée and make it clear that we are not talking about what is going to happen within 25 years or so.

I am certainly not beyond speculating on the possibility of very different employment structures. In my post Counterfactual Conditionals of the Industrial Revolution, I suggested the possibility of an industrial revolution of a different sort — an industrial revolution resulting in a society in which the supply of and the demand for labor were not nearly so close to being in equilibrium as they are today. For despite the problems of unemployment that plague advanced industrialized societies, the astonishing thing is not that there is unemployment, but rather that the supply and demand of labor are so nearly identical. In a different kind of society, a different kind of industrial civilization, this approximation of employment demand to employment supply might not obtain.

As long as we take a sufficiently long time-horizon, I am willing to agree that we will eventually be transitioning to a near-automated economy. In a comment on the Los Angeles Times article L.A. 2013 — about an article of 03 April 1988 from the Los Angeles Times Magazine that sought to predict a quarter century into the future, to 2013 — Yves Rubin wrote:

“In general, such futuristic articles should multiply time spans by at least 10. Downtown Los Angeles “may” look like in this article’s cover photo in 250 years!”

I largely agree with this. In 25 years we see little change, but in 250 years we are likely to see significant change. Think back to the world 250 years before the present — the world of 1763, when the Treaty of Paris was signed, ending the Seven Years’ War — and if we compare that world, without electricity, without the internal combustion engine, before the industrial revolution, and before the United States existed, with our world today, we can see how radical the changes to the familiar world can be in a future an order of magnitude beyond the modest 25 years of the 1988 article about LA.

I am willing to admit without hesitation that, 250 years from now, we may well have realized a near-automated economy, and that this automation of the economy will have truly profound and far-reaching socioeconomic consequences. However, the original problem then becomes a different problem, because so many other things, unanticipated and unprecedented things, have changed in the intervening years that the problem of labor and employment is likely to look completely different at this future date. If the near-automated economy becomes a reality in 250 years — a scenario that I will not dispute — I don’t think that this will be much of a problem, because we will need machines producing the goods we need to expand the human presence in the Milky Way. Seven billion people is a lot on the surface of the Earth — and there will be even more people by that time — but when spread out in the galaxy, seven billion human beings isn’t even enough to scratch the surface, as it were.

The transition to a near-automated economy (contemplated in isolation from parallel synchronous changes) would require adjustments so radical that it would be an open question, once these changes were in place and the near-automated economy was up and running, whether we would still be living in the same old industrial-technological civilization we have come to know and love, or whether this historical discontinuity was sufficient to cause a rupture that results in the constitution of an entirely new civilization — perhaps even constituting a preemption event that ends industrial-technological civilization by replacing it with whatever comes next. Over time, these adjustments will happen more or less naturally, but contemplated in one fell swoop the necessary adjustments seem incomprehensibly radical.

In the article Real Robot Talk in The Economist that I quoted in my last post, Automation and the Human Future, the author wrote that, “modern economies continue to use wages as the primary means by which purchasing power is distributed.” What mechanism other than wages can be employed as a means for the distribution of purchasing power? How could goods and services be allocated within an economy without the quantification that wages effect? (The problem is similar to that of allocating capital and resources within a socialist economy: how is capital to be allocated to enterprises without a pricing mechanism?)

This is another example of thinking in conventional terms about a time in the future when conventional assumptions will no longer hold. By the time the automated economy will seriously alter social relationships, so many other things will have happened, and will be happening, that terms like “labor” and “capital” and “goods” and “services” will have come to take on such different meanings that to formulate things in the old way would be nothing but an anachronism.

It is to be expected that measures will be taken in the attempt to preserve the present structure of civilization as long as possible (and in so doing to preserve the familiar meanings of familiar terms), and some of these measures may seem quite drastic in their attempts to preserve certain institutions. For example, we may see mass mobility of labor across nation-state boundaries allowing technogenically superfluous labor to seek opportunities for work in regions of the world not yet transformed by the technologies of automated production. As entrenched as the nation-state is in our contemporary thought, it is not as entrenched as our idea of civilization, and we would sooner compromise the nation-state and the international order based upon the nation-state than we would allow our civilization to lapse.

Yet, in the fullness of time, not only will our nation-states lapse, but our distinctive form of civilization will lapse also, and it will be replaced by another form of civilization, as yet unknown to us.

It is one of the distinctive features of civilization that the problems intrinsic to a given form of civilization emerge simultaneously with the civilization and disappear with the disappearance of that civilization; that is to say, for the most part, the problems of a particular form of civilization are not passed along to new forms of civilization, which have their own problems. I take this to be one of the most fascinating features of civilization, and I don’t think that it receives sufficient attention in the study of civilization. What it implies is that, like an artist’s work, a civilization’s problems are never resolved, only abandoned.

The problem of royal legitimacy, for example, scarcely exists today, and in so far as it exists at all it only exists as a holdover from an earlier form of civilization that no longer exists, as is the case with the constitutional monarchies of Europe. But the intense debates over the divine right of kings simply don’t exist any more. The problem was never “solved” but was intrinsic to the form of civilization in which royal authority was central, and once royal authority was no longer the central organizing principle of civilization, the “problem” of royal authority, its source and its legitimacy, simply disappears.

Of course, one of the ways in which one kind of civilization succeeds another is through a radical innovation that “solves” (in a sense) the problems of the earlier civilization, but in so “solving” the problem another kind of civilization is created, and so the solution does not obtain within the previous civilizational paradigm; it defines a new civilizational paradigm, with its own problems (to become manifest in the fullness of time) awaiting a solution that will initiate yet another civilizational paradigm.

Automated production issuing in maximized abundance, and the demise of employment as we know it, would constitute a transition to a form of civilization distinct from the industrial-technological civilization that we know today. The emergence of a future industrial-technological civilization in which maximized abundance had become an established fact, and human labor superfluous to that abundance, would also constitute a changed socioeconomic context, one that would interact with all other synchronous historical events transpiring in parallel and therefore standing in mutual relations of influence.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .


Pulp-O-Mizer: job-stealing robots

During the early years of the industrial revolution, people (including young children) worked the kind of hours in factories that they had been accustomed to working on farms during agrarian civilization. That meant a lot of 14- and 16-hour days. After the initial misery of the “factory system,” things got sorted out and the hours of the work day fell precipitously. Eventually, the work week fell to a standard 40 hours, though in the most productive economies in the world today many if not most people routinely put in overtime hours.

Futurists, however, instead of seeing this declining workweek in historical context as a one-time transition from one kind of social organization to another, forecast that the work week would go on shrinking, from 40 hours to 30 hours, from 30 hours to 20 hours, and eventually automation would make human labor unnecessary. Given this forecast, one of the great social problems that industrial civilization would have to face would be that of what everyone would do in a society of maximized abundance and scarce employment.

It was widely thought by “progressive” thinkers that Europe was on the cutting edge of this revolution in labor and employment, as many European countries statutorily limited the work week to a certain number of hours. In far more recent predictions it was suggested that the vast common market created by the European Union would come to dominate the world economy. (Up until the recent financial crisis, Parag Khanna was predicting the ascendancy of Europe as a global force.) Yet European economies proved stagnant, and not a vibrant source of innovation and growth, whether economic or technological.

The optimistic futurism of the 1970s is especially easy to ridicule (though it is often no more wide of the mark than more recent futurist predictions), and I think that this is due to the fact that the early Space Age of the 1960s significantly raised hopes and expectations; when these hopes and expectations were not swiftly gratified with jetpacks, flying cars, and vacations to the moon, the whole enterprise of technological futurism fell into disrepute.

Many supposedly “failed” predictions of futurism — such as the political triumph of a given economic system, or secularization, supposedly falsified by history — may yet come true, but on a time scale that lies beyond the brief attention spans of the mass media. Given the fact that big ideas move very slowly through history, like the passage of large prey through the gut of a snake, and given the tendency of the mass media to build up the idea of the moment into a kind of hysteria, only to see interest in that idea collapse soon after, it is nearly inevitable that the same ideas will come up time and again as they continue their passage through contemporary history, going through periods of being considered prescient alternating with periods of being believed to have been “disproved” by history.

Recently the once-discredited futurist idea of widespread automation leading to maximized material abundance issuing in sharply increased and persistent unemployment has been making a significant comeback in the popular press. Let us briefly review how the idea appeared in mid-twentieth century futurism.

In a book intended to be a non-hyped, non-flashy exercise in futurism, Stuart Chase made the case for automation and posed the problem of persistent unemployment for a mass society:

“Computers and automatic mechanism have already taken over a great deal of routine work, such as bank bookkeeping, and they are expected to take over a great deal more. Not only large plants and offices will be computerized, but also small organizations, as the hardware becomes less costly. What then will happen to people? …If people have no jobs, how can they buy the products made by the workers who remain? If, on the other hand, it is possible to subsidize the jobless as consumers, what happens to their nervous systems, self-confidence, and character? Most of us would rather be occupied than not… but in what form?”

Stuart Chase, The Most Probable World, 1968, Chapter 10, “Is man a working animal?,” p. 136

The idea of automation even plays a central role in Valerie Solanas’ S.C.U.M. Manifesto, where the benefits of automation are accepted uncritically:

“There is no human reason for money or for anyone to work more than two or three hours a week at the very most. All non-creative jobs (practically all jobs now being done) could have been automated long ago, and in a moneyless society everyone can have as much of the best of everything as she wants. But there are non-human, male reasons for wanting to maintain the money system…”

Others saw further and thought more critically. Only a year after Stuart Chase’s book, Victor C. Ferkiss had a much more grounded understanding of what technology would mean in the workplace, and his account gives a sense of technological dystopia à la Metropolis, in contradistinction to the wide-eyed technological utopianism that mostly prevailed when he wrote the following:

“Automation has seemingly done little to reduce the drudgery of work. Where the assembly line exists, it is still irksome… Where the old centralized rigid processes have been automated with machines taking over routine tasks, working conditions, especially psychological ones, have not improved. Such evidence as exists indicates that the watchers of dials — the checkers and maintainers — are likely to be lonely, bored, and alienated, often feeling less the machine’s master than its servant. Dealing with computers can be as frustrating for the worker as for the client-consumer, with data on a print-out even more difficult to check and rectify than that in human accounts or reports.”

Victor C. Ferkiss, Technological Man: The Myth and the Reality, 1969 (Signet Mentor 1970), Chapter 6, “Technological Change and Economic Inertia,” pp. 122-123

Such quotes and observations could be multiplied at will; I took my quotes from books that I happened to have on hand, but, as I wrote above, it was a prominent feature in mid-twentieth century futurism to ask what would become of the working masses once automation deprived them of labor, and therefore — presumably for the privileged few writing about the problem — the content and meaning of working class lives.

Now the problem of job loss due to automation is being posed again, and almost in precisely the same terms, notwithstanding the computing and telecommunications revolution that has occurred in the meantime. I cannot help but speculate that these elite worries over restive, unemployed masses are almost entirely due to the stagnant if not depressed condition of the global economy since the financial crisis that started unfolding with the sub-prime mortgage debacle in the US, and subsequently moved on to other unemployment-inducing crises around the world.

An article in The Economist, Real robot talk (from 01 March 2013), revisits the theme of technologically-induced (we might even say technogenic) unemployment from automation and robotics. The article finishes with these wise observations:

“Technological progress sufficient to cause these kinds of dislocations should also generate overall economic gains large enough to make everyone better off. But just because everyone could be made better off by progress doesn’t mean that everyone will be made better off. There must be an institutional framework in place to ensure that the gains from growth are shared.”

However, the rest of the article is not nearly so enlightening. The Economist article offers three possible responses to technogenic unemployment:

1. more education for less skilled workers

2. protecting less skilled jobs through regulation

3. direct wealth transfers

I am struck by the utter lack of imagination in these three proposals. If this is all that an elite publication like The Economist has to offer, clearly we are in serious trouble. The whole idea of trying to educate everyone to the level that the elites believe themselves to have attained begs so many questions that it is difficult to know where to start. Therefore I will limit myself to the comment that many if not most entrepreneurs are dropouts, and that the highly educated work for the entrepreneurs who create companies, and who thereby create jobs and opportunities and increase productivity. As for protecting low skilled jobs, this is perhaps the worst possible suggestion, since it would directly impact the increase in productivity that could potentially free those in wage-slave drudgery from their mechanizable tasks. And direct wealth transfers have been tried, almost always with disastrous results.

A similar recent article that is a sign of the times is The Rise of the Robots by Robert Skidelsky. (I won’t quote Skidelsky, since his website says, “Reprinting material from this Web site without written consent from Project Syndicate is a violation of international copyright law.”) A Manichean contrast between optimists and pessimists runs through Skidelsky’s piece, as though the parties to the argument had nothing on their side except temperamental inclinations.

This isn’t about optimists or pessimists, except in so far as present-day commentators are pessimistic because their banker and journalist friends are feeling the pinch, too. That’s what happens when a persistent recession takes a chunk out of contemporary economic history. When the present downturn has passed — it hasn’t passed yet, and by the time it’s over I suspect many will come to speak of a global “lost decade” — I predict that talk of technogenic unemployment will also pass until the next crisis.

In the longer term of industrial-technological civilization, abundance may yet become a problem, and meaningful work a privilege, but we are a very long way from this being the case. The industrial revolution is only now transforming Asia, and it has yet to transform Africa. The problem of global technogenic unemployment cannot be a persistent economic blight until the global economy entire has been technologically transformed by industrialization — incidentally, the same conditions that must obtain for the experimentum crucis of Marxism (another supposedly disconfirmed idea from history).

