It cannot be pointed out too often that by far the most extensive period of human history is prehistory. In the past it was possible to evade this fact and its problematic consequences for conventional historiography, because prehistory could be safely set aside as not being history at all. The subsequent rise of scientific historiography, which allows us to read texts other than written language — geological texts, genetic texts, the texts of material culture uncovered by archaeologists, and so on — has been progressively chipping away at the facile distinction between history and prehistory, so that the boundary between the two can no longer be maintained, and any distinction between history and prehistory must be merely conventional, such as the convention of identifying history sensu stricto with the advent of written language.
The evolutionary psychology of human beings carries the imprint of this long past, until recently unknown to us: a past lost during the earliest period of civilization, effaced as the events of more recent history wiped clean the slate of the earlier history that preceded them. Scientific historiography provides us with the ability to recover lost histories once effaced, and, like a recovered memory, we recognize ourselves in this recovered past because it is true to what we are, still today.
From the perspective of illuminating contemporary human society, we may begin with the historical recovery of the relatively complex societies that emerged from the Upper Paleolithic, the communities that were the context from which the Neolithic Agricultural Revolution emerged. But from the perspective of the evolutionary psychology that shaped our minds, we must go back to the origins of the brain in natural history, and follow it forward in time, for each stage in the evolution of the brain left its traces in our behavior. The brainstem that we share with reptiles governs autonomous functions and the most rudimentary drives; the limbic system that we share with other mammals, and which is implicated in our sentience-rich biosphere, is responsible for our emotions and a higher grade of consciousness than the brainstem alone can support; and the cerebral cortex enables more advanced cognitive functions that include reflexive self-awareness and historical consciousness (awareness of the past and the future in relation to the immediacy of the present).
Each of these developments in terrestrial brain evolution carries with it its own suite of behaviors, with each new set of behaviors superimposed on previous behaviors much as each new layer of the brain is superimposed upon older layers. Over the longue durée of evolution these developments in brain evolution were also coupled with the evolution of our bodies, which enact the behaviors in question. As we descended from the trees and hunted and killed for food, our stomachs shrank and our brains grew. We have the record of this transition preserved in the bones of our ancestors; we can still see today the cone-shaped ribcage of a gorilla, over the large stomach of a species that has remained primarily vegetarian; we can see in almost every other mammal, almost every other vertebrate, the flat skull with nothing above the eyes, compared to which the domed cranium of hominids seems strange and out of place.
As I wrote in Survival Beyond the EEA, “Evolution means that human beings are (or were) optimized for survival and reproduction in the Environment of Evolutionary Adaptedness (EEA).” (Also on the EEA cf. Existential Threat Narratives) The long history of the formation of our cognitive abilities has refined and modified survival and reproduction behaviors, but it has not replaced them. Our hunter-gatherer ancestors of the Upper Paleolithic were already endowed with the full cognitive power that we continue to enjoy today, though admittedly without the concepts we have formulated over the past hundred thousand years, which have allowed us to make better use of our cognitive endowment in the context of civilization. Everything essential to the human mind was in place long before the advent of civilization, and civilization has not endured for a period of time sufficient to make any essential change to the constitution of the human mind.
The most difficult aspects of the human point of view to grasp objectively are those that have been perfectly consistent and unchanging over the history of our species. And so it is that we do not know ourselves as dwellers on the surface of a planet, shaped by the perspective afforded by a planetary surface, looking up to the stars through the distorting lens of the atmosphere, and held tight to the ground beneath our feet by gravity. At least, we have not known ourselves as such until very recently, and this knowledge has endured for a much shorter period of time than civilization, and hence has had even less impact on the constitution of our minds than has civilization, however much impact it has had upon our thoughts. Our conceptualization of ourselves as beings situated in the universe as understood by contemporary cosmology takes place against the background of the EEA, a background preserved in our evolutionary psychology.
To understand ourselves aright, then, we need to understand ourselves as beings with the minds of hunter-gatherers who have come into a wealth of scientific knowledge and technological power over an historically insignificant period of time. How did hunter-gatherers conceive and experience their world? What was the Weltanschauung of hunter-gatherers? Or, if you prefer, what was the worldview of hunter-gatherers?
Living in nature as a part of nature, only differentiated in the slightest degree from the condition of prehuman prehistory, the hunter-gatherer lives always in the presence of the sublime, overwhelmed by an environment of a scale that early human beings had no concepts to articulate. And yet the hunter-gatherer learns to bring down sublimely large game — an empowering experience that must have contributed to a belief in human efficacy and agency in spite of vulnerability to a variable food supply, not yet under human control. Always passing through this sublime setting for early human life, moving on to find water, to locate game, to gather nuts and berries, or to escape the depredations of some other band of hunter-gatherers, our ancestors' way of life was rooted in the landscape without being settled. The hunter-gatherer is rewarded for his curiosity, which occasionally reveals new sources of food, as he is rewarded for his technological innovations that allow him to more easily hunt or to build a fire. The band never has more children than can be carried by the adults, until the children can themselves escape, by running or hiding, the many dangers the band faces.
As settled agriculturalism began to displace hunter-gatherers, first from the fertile lowlands and river valleys where riparian civilizations emerged, new behaviors appeared that were entirely dependent upon the historical consciousness enabled by the cerebral cortex (that is to say, enabled by the ability to explicitly remember the past and to plan for the future). Here we find fatalism in the vulnerability of agriculture to the weather; humanism in this newfound power over life; a consciousness of human power in its command of productive forces; and the emergence of soteriology and eschatology, the propitiation of fickle gods, as human compensations for the insecurity inherent in the unknowns and uncertainties of integrating human life cycles with the life cycles of domesticated plants and animals, and in the establishment of cities, with their social differentiation and political hierarchies, all unprecedented in the history of the world.
The Weltanschauung of hunter-gatherers, which laid the foundations for the emergence of agrarian and pastoral civilizations, I call the homeworld effect in contradistinction to what Frank White has called the overview effect. The homeworld effect is our understanding of ourselves and of our world before we have experienced the overview effect, and before the overview effect has transformed our understanding of ourselves and our world, as it surely will if human beings are able to realize a spacefaring civilization.
The homeworld effect — that our species emerged on a planetary surface and knows the cosmos initially only from this standpoint — allows us to assert the uniqueness of the overview effect for human beings. The overview effect is an unprecedented historical event that cannot be repeated in the history of a civilization. (If a civilization disappears and all memory of its having attained the overview effect is effaced, then the overview effect can be repeated for a species, but only in the context of a distinct civilization.) A corollary of this is that each and every intelligent species originating on a planetary surface (which I assume fulfills the principle of mediocrity for intelligent species during the Stelliferous Era) experiences a unique overview effect upon the advent of spacefaring, should the cohort of emergent complexities on the planet in question include a technologically competent civilization.
The homeworld effect is a consequence of planetary surfaces being a locus of material resources and energy flows where emergent complexities can appear during the Stelliferous Era (this is an idea I have been exploring in my series on planetary endemism, on which cf. Part I, Part II, Part III, Part IV, and Part V). We can say that the homeworld effect follows from this planetary standpoint of intelligent beings emerging on the surface of a planet, subject to planetary constraints, just as the overview effect follows from an extraterrestrial standpoint.
We can generalize from this observation and arrive at the principle that an effect such as the overview effect or the homeworld effect is contingent upon the experience of some standpoint (or, if you prefer, some perspective) that an embodied being experiences in the first person (and in virtue of being embodied). This first level of generalization makes it obvious that there are many standpoints and many effects that result from standpoints. Standing on the surface of a planet is a standpoint, and it yields the homeworld effect, which when formulated theoretically becomes something like Ptolemaic cosmology — a Weltanschauung or worldview that was implicit and informal for our hunter-gatherer ancestors, but which was explicitly formulated and formalized after the advent of civilization. A standpoint in orbit yields a planetary overview effect, with the standpoint being the conditio sine qua non of the effect, and this converges upon a generalization of Copernican cosmology — what Frank White has called the Copernican Perspective. (We could, in the same spirit, posit a Terrestrial Perspective that is an outgrowth of the homeworld effect.) If a demographically significant population attains a particular standpoint and experiences an effect as a result of this standpoint, and the perspective becomes the perspective of a community, a worldview emerges from the community.
Further extrapolation yields classes of standpoints, classes of effects, classes of perspectives, and classes of worldviews, each member of a class possessing an essential property in common. The classes of planetary worldviews and spacefaring worldviews will be different in detail, but all will share important properties. Civilization(s) emerging on planetary surfaces at the bottom of a gravity well constitute a class of homeworld standpoints. Although each homeworld is different in detail, the homeworld effect and the perspective it engenders will be essentially the same. Initial spacefaring efforts by any civilization will yield a class of orbital standpoints, again, each different in detail, but yielding an overview effect and a Copernican perspective. Further overview effects will eventually (if a civilization does not stagnate or collapse) converge upon a worldview of a spacefaring civilization, but this has yet to take shape for human civilization.
A distinctive aspect of the overview effect, which follows from an orbital standpoint, is the suddenness of the revelation. It takes a rocket only a few minutes to travel from the surface of Earth, the home of our species since its inception, into orbit, which no human being saw until the advent of spacefaring. The suddenness of the revelation not only furnishes a visceral counter-example to what our senses have been telling us all throughout our lives, but also stands in stark contrast to the slow and gradual accumulation of knowledge that today makes it possible to understand our position in the universe before we experience this position viscerally by having attained an orbital standpoint, i.e., an extraterrestrial perspective on all things terrestrial.
With the sudden emergence in history of the overview effect (no less suddenly than it emerges in the experience of the individual), we find ourselves faced with a novel sublime, the sublime represented by the cosmos primeval, a wilderness on a far grander scale than any wilderness we once faced on our planet, and, once again, as with our ancestors before the vastness of the world, the thundering thousands of game animals on the hoof, oceans that could not be crossed and horizons that could not be reached, we lack the conceptual infrastructure at present to fully make sense of what we have seen. The experience is sublime, it moves us, precisely because we do not fully understand it. The human experience of the homeworld effect eventually culminated in the emergence of scientific civilization, which in turn made it possible for human beings to understand their world, if not fully, at least adequately. Further extrapolation suggests that the human experience of the overview effect could someday culminate in an adequate understanding of the cosmos, as our hunter-gatherer drives for locating and exploiting resources wherever they can be found, and the reward for technological innovations that serve this end, continue to serve us as a spacefaring species.
. . . . .
I am indebted to my recent correspondence with Frank White and David Beaver, which has influenced the development and formulation of the ideas above. Much of the material above appeared first in this correspondence.
. . . . .
. . . . .
. . . . .
. . . . .
26 October 2015
Between the advent of cognitive modernity, perhaps seventy thousand years ago (more or less), and the advent of settled agricultural civilization, about ten thousand years ago, there is a period of fifty thousand years or more of human history — an order of magnitude of history beyond the historical period, sensu stricto, i.e., the period of written records formerly presumed coextensive with civilization — that we have only recently begun to recover by the methods of scientific historiography. This pre-Holocene world was a world of the “ice age” and of “cave men.” These ideas have become so confused in popular culture that I must put them in scare quotes, but in some senses they are accurate, if occasionally misleading.
One way in which the idea of an “Ice Age” is misleading is that it implies that our warmer climate today is the norm and an ice age is a passing exception to that norm. This is the reverse of the case. For the past two and a half million years the planet has been passing through the Quaternary Period, which mostly consists of long (about 100,000 year) periods of glaciation punctuated by shorter (about 10,000 year) interglacial periods (also called warming periods) during which the global climate warms and the polar ice sheets retreat. I have pointed out elsewhere that, although human ancestors have been present throughout the entire Quaternary, and so have therefore experienced several cycles of glaciation and interglacials, the present interglacial (the Holocene) is the first warming period since cognitive modernity, and we find the beginnings of civilization as soon as this present warming period begins. Thus the Holocene Epoch is dominated, from an anthropocentric perspective, by civilization; the Quaternary Period before the Holocene Epoch is, again from an anthropocentric perspective, human history before civilization: history before history.
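The arithmetic of this claim can be made concrete with a toy timeline. The sketch below uses only the round figures given above — roughly 100,000-year glacials alternating with roughly 10,000-year interglacials over the Quaternary, and cognitive modernity at roughly 70,000 years before present; all of these numbers are approximations from the text, not precise paleoclimatic data.

```python
# Toy timeline of Quaternary glacial cycles, using the round numbers from the
# text. All figures are illustrative approximations, not precise data.

GLACIAL = 100_000             # years per glaciation (approximate)
INTERGLACIAL = 10_000         # years per warming period (approximate)
COGNITIVE_MODERNITY = 70_000  # years before present (rough estimate)
QUATERNARY = 2_500_000        # span of the Quaternary Period, in years

def interglacials_before_present(span=QUATERNARY):
    """Yield (older bound, younger bound) of each interglacial, in years
    before present, counting backward from the present interglacial."""
    cycle = GLACIAL + INTERGLACIAL
    t = 0
    while t < span:
        yield (t + INTERGLACIAL, t)
        t += cycle

# Which interglacials fall entirely after cognitive modernity (~70,000 ya)?
recent = [ig for ig in interglacials_before_present()
          if ig[0] <= COGNITIVE_MODERNITY]
print(recent)  # only the present interglacial, the Holocene: [(10000, 0)]
```

On these stylized numbers, cognitively modern humans lived through several full glacial cycles, yet only one warming period — the Holocene — falls after cognitive modernity, which is the point made above.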
We should remind ourselves that this very alien world is the precursor to our world, and that its inhabitants are our direct ancestors. In other words, this is us. This is our history, even if we have only recently become accustomed to thinking of prehistory as history no less than the historical period sensu stricto. The Upper Paleolithic, with its ice age, cave bears, cave men, painted animals seen in flickering torchlight, and thousands upon thousands of years of a winter that does not end was a human world — the human world of the Upper Paleolithic — that we can only with effort recover as our own and come to feel its formative power to shape what we have become. In technical terms, this human world of the Upper Paleolithic was our environment of evolutionary adaptedness (EEA). It is this world that made us what we are today.
One website has this very evocative passage describing the world of the Upper Paleolithic:
“The longest war ever fought by humans was not fought against other humans, but against another species — Ursus spelaeus, the Cave Bear. For several hundred thousand years our stone age ancestors fought pitched and bloody battles with these denizens of the most precious commodity on earth — habitable caves. Without these shelters homo sapiens would have had little chance of surviving the Ice Ages, the winter storms, and the myriad of predators that lurked in the dark.”
While there isn’t direct scientific evidence for this compellingly dramatic way of thinking about the Upper Paleolithic (though I was very tempted to title this post “The 100,000 Year War”), it can accurately be said that human/cave bear interactions did occur during the most recent glacial maximum, and that both human beings and cave bears are warm-blooded mammals for whom caves would have provided a measure of protection and warmth enduring literally for thousands or tens of thousands of years during this climatological “bottleneck” for mammals, whereas no human-built shelter could have survived these conditions for this period of time. Another species as ill-suited for cold weather as homo sapiens would have simply moved on or gone extinct, but we had our big brains by this time, and this made it possible for early man to fight tenaciously to keep a grip on life even in an environment in which they had to fight cave bears for the few available shelters.
Human beings would have survived elsewhere on the planet in any event, because the equatorial belt was still plenty warm at the time, but the fact that some human beings survived in caves in glaciated Europe is a testament both to their cognitive modernity and their stubbornness. It becomes a little easier to understand how and why early human beings squeezed into caves through passages that cause contemporary archaeologists to experience not a little claustrophobia, when we understand that human beings were routinely inhabiting caves, and probably had to explore them in some depth to make sure they wouldn’t have any unpleasant surprises when a cave bear woke up from its hibernation in the spring.
Unlike human beings, cave bears probably could not have survived elsewhere — they were a species endemic to a particular climate and a particular range and did not have the powers of behavioral adaptation possessed by human beings. The caves of ice age Eurasia were their world, and they spent enough time in these shelters that the walls of caves have a distinctive sheen that is called “Bärenschliffe”:
The “Bärenschliffe” are smooth, polished and often shining surfaces, thought to be caused by passing bears, rubbing their fur along the walls. These surfaces do not only occur in narrow passages, where the bear would come into contact with the walls, but also at corners or rocks in wider passages.
“Trace fossils from bears in caves of Germany and Austria” by Wilfried Rosendahl and Doris Döppes, Scientific Annals, School of Geology Aristotle University of Thessaloniki, Special volume 98, p. 241-249, Thessaloniki, 2006.
Some of these caves are said to be polished “like marble” (I haven’t visited any of these caves myself, so I am reporting what I have read in the literature), so that one must imagine cave bears passing through the narrow passages of their caves for thousands of years, brushing against the wall with their fur until the rough stone is made smooth. The human beings who later took over these caves would have run their hand along these smooth walls, noted the niches where the bears hibernated, and wondered if another bear would come to claim the cave they had claimed.
There is a particularly interesting cave in Switzerland, Drachenloch (which means “dragon’s lair,” as cave bear skulls were once thought to have been the skulls of dragons), in which early human beings seem to have stacked cave bear skulls in a stone “vault” in the floor of the cave. Certainly these two mammal species — ursus spelaeus and homo sapiens — would have known each other by all their shared signs of cave habitation. Indeed, they would have smelled each other.
Mythology scholar Joseph Campbell many times pointed out the fundamental mythological differences between hunter-gatherer peoples and settled agricultural peoples; in the case of the Upper Paleolithic, we have hunter-gatherers and only hunter-gatherers — that is to say, tens of thousands of years of a belief system emergent from a hunting culture with virtually no alternatives. Given the tendency of hunting peoples to animism, and of viewing other species as spiritually significant — metaphysical peers, as it were — one would expect that hunters who fought and killed cave bears in order to take over their shelters would have revered these animals in a religious sense, and this religious reverence for the slain foe (of any species) could explain the prevalence of apparent cave bear altars in caves inhabited by human beings during the Upper Paleolithic.
The human world of the Upper Paleolithic would also have been a world shared with other hominid species — an experience we do not have today, being the sole surviving hominid (perhaps as the result of being a genocidal species) — and most especially shared with Neanderthals. Recent genetic research has demonstrated that there was limited interbreeding between homo sapiens and Neanderthals (cf., e.g., Neanderthals had outsize effect on human biology), but it is likely that these communities were mostly separate. If we reflect on the still powerful effect of in-group bias in our cosmopolitan world, how much stronger must in-group bias have been among these small communities of homo sapiens, homo neanderthalensis, and Denisova hominins? One suspects that strong taboos were associated with other species, and rivals in hunting.
It is likely that Neanderthals evolved in the Levant or Europe from human ancestors who left Africa prior to the speciation of Homo sapiens. The Neanderthals were specifically adapted to life in the cold climates of Eurasia during the last glacial maximum. However, such is the power of intelligence as an adaptive tool that the modern human beings who left Africa were able to displace Neanderthals in their own environment, much as homo sapiens displaced a great many other species (and much as they displaced cave bears from their caves). Although Neanderthals had larger brains than Homo sapiens, made tools, and wore clothing after a fashion, they did not pass through a selective filter that (would have) resulted in the Neanderthal equivalent of cognitive modernity.
Homo sapiens made better tools and better clothing, and, in the depths of the last glacial maximum, better tools and better clothing constituted the margin between survival and extinction. Perhaps the most significant invention in hominid history after the control of fire was the bone needle, which allowed for the sewing of form-fitting clothing. With form-fitting clothing our prehistoric ancestors were able to make their way through the world of the last glacial maximum and to occupy every biome and every continent on the planet (with the exception of Antarctica).
While “lost worlds” and inexplicable mysteries are a favorite feature of historical popularization, the lost human world of the Upper Paleolithic is being recovered for us by scientific historiography. We are, as a result, reclaiming a part of our identity lost for the ten thousand years of civilization since the advent of the Holocene. The mystery of human origins is gradually becoming less mysterious, and will become less so still, the more that we learn.
. . . . .
. . . . .
. . . . .
. . . . .
26 October 2013
In my last post, The Retrodiction Wall, I introduced several ideas that I think were novel, among them:
● A retrodiction wall, complementary to the prediction wall, but in the past rather than the future
● A period of effective history lying between the retrodiction wall in the past and the prediction wall in the future; beyond the retrodiction and prediction walls lies inaccessible history that is not a part of effective history
● A distinction between diachronic and synchronic prediction walls, that is to say, a distinction between the prediction of succession and the prediction of interaction
● A distinction between diachronic and synchronic retrodiction walls, that is to say, a distinction between the retrodiction of succession and the retrodiction of interaction
I also implicitly formulated a principle, though I didn’t give it any name, parallel to Einstein’s principle (also without a name) that mathematical certainty and applicability stand in inverse proportion to each other: historical predictability and historical relevance stand in inverse proportion to each other. When I can think of a good name for this I’ll return to this idea. For the moment, I want to focus on the prediction wall and the retrodiction wall as the boundaries of effective history.
In The Retrodiction Wall I made the assertion that, “Effective history is not fixed for all time, but expands and contracts as a function of our knowledge.” An increase in knowledge allows us to push the prediction and retrodiction walls outward, just as a diminution of knowledge means the contraction of the prediction and retrodiction boundaries of effective history.
We can go farther than this if we incorporate a more subtle and sophisticated conception of knowledge and prediction, and we can find this more subtle and sophisticated understanding in the work of Frank Knight, whom I previously cited in Existential Risk and Existential Uncertainty. Knight made a tripartite distinction between prediction (or certainty), risk, and uncertainty. Here is the passage from Knight that I quoted in Addendum on Existential Risk and Existential Uncertainty:
1. A priori probability. Absolutely homogeneous classification of instances completely identical except for really indeterminate factors. This judgment of probability is on the same logical plane as the propositions of mathematics (which also may be viewed, and are viewed by the writer, as “ultimately” inductions from experience).
2. Statistical probability. Empirical evaluation of the frequency of association between predicates, not analyzable into varying combinations of equally probable alternatives. It must be emphasized that any high degree of confidence that the proportions found in the past will hold in the future is still based on an a priori judgment of indeterminateness. Two complications are to be kept separate: first, the impossibility of eliminating all factors not really indeterminate; and, second, the impossibility of enumerating the equally probable alternatives involved and determining their mode of combination so as to evaluate the probability by a priori calculation. The main distinguishing characteristic of this type is that it rests on an empirical classification of instances.
3. Estimates. The distinction here is that there is no valid basis of any kind for classifying instances. This form of probability is involved in the greatest logical difficulties of all, and no very satisfactory discussion of it can be given, but its distinction from the other types must be emphasized and some of its complicated relations indicated.
Frank Knight, Risk, Uncertainty, and Profit, Chap. VII
This passage from Knight’s book (as the entire book) is concerned with applications to economics, but the kernel of Knight’s idea can be generalized beyond economics to represent different stages in the acquisition of knowledge: Knight’s a priori probability corresponds to certainty, or that which is so exhaustively known that it can be predicted with precision; Knight’s statistical probability corresponds to risk, or partial and incomplete knowledge, that region of human knowledge where the known and unknown overlap; Knight’s estimates correspond to unknowns, or uncertainty.
Knight formulates his tripartite distinction between certainty, risk, and uncertainty exclusively in the context of prediction, and just as Knight’s results can be generalized beyond economics, so too Knight’s distinction can be generalized beyond prediction to also embrace retrodiction. In The Retrodiction Wall I generalized John Smart’s exposition of a prediction wall in the future to include a retrodiction wall in the past, both of which together define the boundaries of effective history. These two generalizations can be brought together.
A prediction wall in the future or a retrodiction wall in the past are, as I noted, functions of knowledge. That means we can understand this “boundary” not merely as a threshold that is crossed, but also as an epistemic continuum that stretches from the completely unknown (the inaccessible past or future that lies utterly beyond the retrodiction or prediction wall) through an epistemic region of prediction risk or retrodiction risk (where predictions or retrodictions can be made, but are subject to at least as many uncertainties as certainties), to the completely known, in so far as anything can be completely known to human beings, and therefore well understood by us and readily predictable.
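The epistemic continuum described here can be sketched as a toy model. In the sketch below, a single "reach of knowledge" parameter stands in for the state of our knowledge, and the thresholds dividing certainty from risk from uncertainty are arbitrary illustrative assumptions, not anything asserted in the text; the model is also symmetric about the present, whereas real prediction and retrodiction walls need not be.

```python
# A toy model of effective history as an epistemic continuum: a point in past
# or future time falls into one of Knight's three regions depending on how far
# our knowledge reaches. The thresholds below are illustrative assumptions.

def epistemic_region(years_from_present: float, knowledge: float) -> str:
    """Classify a moment in time relative to the present.

    knowledge -- how far our models reach, in years; growing knowledge
                 pushes the prediction/retrodiction walls outward.
    """
    d = abs(years_from_present)  # symmetric here; real history is not
    if d < 0.1 * knowledge:
        return "certainty"       # well understood, readily predictable
    if d < knowledge:
        return "risk"            # retrodiction/prediction under risk
    return "uncertainty"         # beyond the wall: inaccessible history

# Expanding knowledge widens effective history for the same moment in the past:
print(epistemic_region(-50_000, knowledge=10_000))   # uncertainty
print(epistemic_region(-50_000, knowledge=100_000))  # risk
```

The point of the sketch is only that the walls are not thresholds crossed at a single step but boundaries of graded regions, and that the same date can migrate from uncertainty into risk as knowledge grows.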
Introducing and integrating distinctions between prediction and retrodiction walls, and among prediction, risk and uncertainty gives a much more sophisticated and therefore epistemically satisfying structure to our knowledge and how it is contextualized in the human condition. The fact that we find ourselves, in medias res, living in a world that we must struggle to understand, and that this understanding is an acquisition of knowledge that takes place in time, which is asymmetrical as regards the past and future, are important features of how we engage with the world.
This process of making our model of knowledge more realistic by incorporating distinctions and refinements is not yet finished (nor is it ever likely to be). For example, the unnamed principle alluded to above — that of the inverse relation between historical predictability and relevance — suggests that the prediction and retrodiction walls can be penetrated unevenly, and that our knowledge of the past and future is not consistent across space and time, but varies considerably. An inquiry that could demonstrate this in any systematic and schematic way would be more complicated than the above, so I will leave this for another day.
. . . . .
. . . . .
. . . . .
23 October 2013
Prediction in Science
One of the distinguishing features of science as a system of thought is that it makes testable predictions. The fact that scientific predictions are testable suggests a methodology of testing, and we call the scientific methodology of testing experiment. Hypothesis formation, prediction, experimentation, and resultant modification of the hypothesis (confirmation, disconfirmation, or revision) are all essential elements of the scientific method, which constitutes an escalating spiral of knowledge as the scientific method systematically exposes predictions to experiment and modifies its hypotheses in the light of experimental results, which leads in turn to new predictions.
The escalating spiral of knowledge that science cultivates naturally pushes that knowledge into the future. Sometimes scientific prediction is even formulated in reference to “new facts” or “temporal asymmetries” in order to emphasize that predictions refer to future events that have not yet occurred. In constructing an experiment, we create a new set of facts in the world, and then interpret these facts in the light of our hypothesis. It is this testing of hypotheses by experiment that establishes the concrete relationship of science to the world, and this is also a source of limitation, for experiments are typically designed in order to focus on a single variable and to that end an attempt is made to control for the other variables. (A system of thought that is not limited by the world is not science.)
Alfred North Whitehead captured this artificial feature of scientific experimentation in a clever line that points to the difference between scientific predictions and predictions of a more general character:
“…experiment is nothing else than a mode of cooking the facts for the sake of exemplifying the law. Unfortunately the facts of history, even those of private individual history, are on too large a scale. They surge forward beyond control.”
Alfred North Whitehead, Adventures of Ideas, New York: The Free Press, 1967, Chapter VI, “Foresight,” p. 88
There are limits to prediction, and not only those pointed out by Whitehead. The limits to prediction have been called the prediction wall. Beyond the prediction wall we cannot penetrate.
The Prediction Wall
John Smart has formulated the idea of a prediction wall in his essay, “Considering the Singularity,” as follows:
With increasing anxiety, many of our best thinkers have seen a looming “Prediction Wall” emerge in recent decades. There is a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex and yet amazingly stable technological systems, erected like vast beehives, systems tended to by large swarms of only partially aware human beings, each of which has only a very limited conceptualization of the new technological environment that we have constructed.
Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn’t technologically dependent these days?), where the ten-year business plans of the 1950’s have been replaced with ten-week (quarterly) plans of the 2000’s, and where planning beyond two years in some fields may often be unwise speculation. But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In “Science Fiction Without the Future,” 2001, Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It’s as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.) © 2003 by John Smart (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)
I would like to suggest that there are at least two prediction walls: synchronic and diachronic. The prediction wall formulated above by John Smart is a diachronic prediction wall: it is the onward-rushing pace of events, one following the other, that eventually defeats our ability to see any recognizable order or structure of the future. The kind of prediction wall to which Whitehead alludes is a synchronic prediction wall, in which it is the outward eddies of events in the complexity of the world’s interactions that make it impossible for us to give a complete account of the consequences of any one action. (Cf. Axes of Historiography)
Retrodiction and the Historical Sciences
Science does not live by prediction alone. While some philosophers of science have questioned the scientificity of the historical sciences because they could not make testable (and therefore falsifiable) predictions about the future, it is now widely recognized that the historical sciences don’t make predictions, but they do make retrodictions. A retrodiction is a prediction about the past.
The Oxford Dictionary of Philosophy by Simon Blackburn (p. 330) defines retrodiction as follows:
retrodiction The hypothesis that some event happened in the past, as opposed to the prediction that an event will happen in the future. A successful retrodiction could confirm a theory as much as a successful prediction.
As with predictions, there is also a limit to retrodiction, and this is the retrodiction wall. Beyond the retrodiction wall we cannot penetrate.
I haven’t been thinking about this idea for long enough to fully understand the ramifications of a retrodiction wall, so I’m not yet clear about whether we can distinguish diachronic retrodiction from synchronic retrodiction. Or, rather, it would be better to say that the distinction can certainly be made, but that I cannot think of good contrasting examples of the two at the present time.
We can define a span of accessible history that extends from the retrodiction wall in the past to the prediction wall in the future as what I will call effective history (by analogy with effective computability). Effective history can be defined in a way that is closely parallel to effectively computable functions, because all of effective history can be “reached” from the present by means of finite, recursive historical methods of inquiry.
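The analogy with effective computability can be made concrete in a toy sketch (the moments, inquiry links, and walls below are entirely illustrative, not drawn from any actual chronology): if moments of time are nodes and each method of inquiry is a directed edge, then effective history is simply the set of moments reachable from the present by finitely many recursive applications of those methods.

```python
from collections import deque

# Illustrative model of "effective history": each directed edge stands
# for a finite method of inquiry (a document, a stratum, an observation)
# that lets us reach one moment from another. The retrodiction and
# prediction walls appear as nodes with no outgoing inquiry links; a
# moment beyond a wall exists but has no path from the present.
inquiry = {
    "present":         ["written_history", "near_future"],
    "written_history": ["neolithic"],    # archaeology
    "neolithic":       ["paleolithic"],  # scientific historiography
    "paleolithic":     ["big_bang"],     # cosmology
    "big_bang":        [],               # retrodiction wall
    "near_future":     [],               # prediction wall
    "pre_big_bang":    [],               # beyond the retrodiction wall
}

def effective_history(start="present"):
    """Moments reachable from the present by finite, recursive inquiry."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in inquiry[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(effective_history()))
# "pre_big_bang" never appears: no finite chain of inquiry reaches it.
```

The point of the sketch is only the structural parallel: just as an effectively computable function is one whose values can be reached by finitely many mechanical steps, effective history is bounded by what finitely many steps of inquiry can reach, and the walls are where the edges run out.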
Effective history is not fixed for all time, but expands and contracts as a function of our knowledge. At present, the retrodiction wall is the Big Bang singularity. If anything preceded the Big Bang singularity we are unable to observe it, because the Big Bang itself effectively obliterates any observable signs of any events prior to itself. (Testable theories have been proposed that suggest the possibility of some observable remnant of events prior to the Big Bang, as in conformal cyclic cosmology, but this must at present be regarded as only an early attempt at such a theory.)
Prior to the advent of scientific historiography as we know it today, the retrodiction wall was fixed at the beginning of the historical period narrowly construed as written history, and at times the retrodiction wall has been quite close to the present. When history experiences one of its periodic dark ages that cuts it off from its historical past, little or nothing may be known of a past that was once familiar to everyone in a given society.
The emergence of agrarian-ecclesiastical civilization effectively obliterated human history before itself, in a manner parallel to the Big Bang. We know that there were caves that prehistorical peoples visited generation after generation for time out of mind, over tens of thousands of years — much longer than the entire history of agrarian-ecclesiastical civilization, and yet all of this was forgotten as though it had never happened. This long period of prehistory was entirely lost to human memory, and was not recovered again until scientific historiography discovered it through scientific method and empirical evidence, and not through the preservation of human memory, from which prehistory had been eradicated. And this did not occur until after agrarian-ecclesiastical civilization had lapsed and entirely given way to industrial-technological civilization.
We cannot define the limits of the prediction wall as readily as we can define the limits of the retrodiction wall. Predicting the future in terms of overall history has been more problematic than retrodicting the past, and equally subject to ideological and eschatological distortion. The advent of modern science compartmentalized scientific predictions and made them accurate and dependable — but at the cost of largely severing them from overall history, i.e., human history and the events that shape our lives in meaningful ways. We can make predictions about the carbon cycle and plate tectonics, and we are working hard to be able to make accurate predictions about weather and climate, but, for the most part, our accurate predictions about the future dispositions of the continents do not shape our lives in the near- to mid-term future.
I have previously quoted a famous line from Einstein: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We might paraphrase this Einstein line in regard to the relation of mathematics to the world, and say that as far as scientific laws of nature predict events, these events are irrelevant to human history, and in so far as predicted events are relevant to human beings, scientific laws of nature cannot predict them.
Singularities Past and Future
As the term “singularity” is presently employed — as in the technological singularity — the recognition of a retrodiction wall in the past complementary to the prediction wall in the future provides a literal connection between the historiographical use of “singularity” and the use of the term “singularity” in cosmology and astrophysics.
Theorists of the singularity hypothesis place a “singularity” in the future which constitutes an absolute prediction wall beyond which history is so transformed that nothing beyond it is recognizable to us. This future singularity is not the singularity of astrophysics.
If we recognize the actual Big Bang singularity in the past as the retrodiction wall for cosmology — and hence, by extension, for Big History — then an actual singularity of astrophysics is also at the same time an historical singularity.
. . . . .
I have continued my thoughts on the retrodiction wall in Addendum on the Retrodiction Wall.
. . . . .
. . . . .
. . . . .
2 February 2013
In my last post, The Science of Time, I discussed the possibility of taking an absolutely general perspective on time and how this can be done in a way that denies time or in a way that affirms time, after the manner of big history.
David Christian, whose books on big history and his Teaching Company lectures on Big History have been seminal in the field, by way of introduction to his final lectures, in which he switches from history to speculation on the future, relates that in his early big history courses his students felt as though they were cut off rather abruptly when he had brought them through 13.7 billion years of cosmic history only to drop them unceremoniously in the present without making any effort to discuss the future. It was this reaction that prompted him to continue beyond the present and to try to say something about what comes next.
Another way to understand this reaction of Christian’s students is that they wanted to see the whole of the history they have just been through placed in an even larger, more comprehensive context, and to do this requires going beyond history in the sense of an account of the past. To put the whole of history into a larger context means placing it within a cosmology that extends beyond our strict scientific knowledge of past and future — that which can be observed and demonstrated — and comprises a framework in the same scientific spirit but which looks beyond the immediate barriers to observation and demonstration.
Elsewhere in David Christian’s lectures (if my memory serves) he mentioned how some traditionalist historians, when they encounter the idea of big history, reject the very idea because history has always been about documents and eponymously confined to the historical period when documents were kept after the advent of literacy. According to this reasoning, anything that happened prior to the invention of written language is, by definition, not history. I have myself encountered similar reasoning as, for example, when it is claimed that prehistory is not history at all because it happened prior to the existence of written records, which latter define history.
This is a sadly limited view of history, but apparently it is a view with some currency because I have encountered it in many forms and in different contexts. One way to discredit any intellectual exercise is to define it so narrowly that it cannot benefit from the most recent scientific knowledge, and then to impugn it precisely for its narrowness while not allowing it to change and expand as human knowledge expands. The explosion in scientific knowledge in the last century has made possible a scientific historiography that simply did not exist previously; to deny that this is history on the basis of traditional humanistic history being based on written records means that we must then define some new discipline, with all the characteristics of traditional history, but expanded to include our new knowledge. This seems like a perverse attitude to me, but for some people the label of their discipline is important.
Call it what you will then — call it big history, or scientific historiography, or the study of human origins, or deny that it is history altogether, but don’t try to deny that our knowledge of the past has expanded exponentially since the scientific method has been applied to the past.
In this same spirit, we need to recognize that a greatly expanded conception of history needs to reach into the future, that a scientific futurism needs to be part of our expanded conception of the totality of time and history, or whatever it is that results when we apply Russell’s generalization imperative to time. Once again, it would be unwise to be overly concerned with what we call this emerging discipline, whether it be the totality of time or the whole of time or temporal infinitude or ecological temporality or what Husserl called omnitemporality or even absolute time.
Part of this grand (historical) effort will be a future science of civilizations, as the long term and big picture conception of civilization is of central human interest in this big picture of time and history. We not only want to know the naturalistic answers to traditional eschatological questions — Where did we come from? Where are we going? — but we also want to know the origins and destiny of what we have ourselves contributed to the universe — our institutions, our ideas, civilization, the technium, and all the artifacts of human endeavor.
. . . . .
. . . . .
. . . . .
19 February 2012
Recently (in Don’t Cry for the Papers) I wrote that, “Books will be a part of human life as long as there are human beings (or some successor species engaged in civilizational activity, or whatever cultural institution is the successor to civilization).” While this was only a single line thrown out as an aside in a discussion of newspapers and magazines, I had to pause over this to think about it and make sure that I would get my phrasing right, and in doing so I realized that there are several ideas implicit in this formulation.
Since I make an effort to always think in terms of la longue durée, I have conditioned myself to note that current forms (of civilization, or whatever else is being considered) are always likely to be supplanted by changed forms in the future, so when I said that books, like the poor, will always be with us, for the sake of completeness I had to note that human forms may be supplanted by a successor species and that civilization may be supplanted by a successor institution. Both the idea of the post-human and the idea of the post-civilizational are interesting in their own right. I have briefly considered posthumanity and human speciation in Against Natural History, Right and Left (as well as other posts such as Addendum on the Avoidance of Moral Horror), but the idea of a successor to civilization is something that begs further consideration.
Now, in a sense, everything that I have written about futurist scenarios for the successor to contemporary industrial-technological civilization (which I have described in Three Futures, Another Future: The New Agriculturalism, and other posts) can be taken as attempts to outline what comes after civilization in so far as we understand civilization as contemporary industrial-technological civilization. This investigation of post-industrial civilization is an important aspect of an analytic and theoretical futurism, but we must go further in order to gain a yet more comprehensive perspective that places civilization within the longest possible historical context.
I have adopted the convention of speaking of “civilization” as comprising all settled, urbanized cultures that have emerged since the Neolithic Agricultural Revolution. This is not the use that “civilization” has in classic humanistic historiography, but I have discussed this elsewhere; for example, in Jacob Bronowski and Radical Reflection I wrote:
…Bronowski refers to “civilization as we know it” as being 12,000 years old, which means that he is identifying civilization with the Neolithic Agricultural Revolution and the emergence of settled life in villages and eventually cities.
Taking this long and comprehensive view of civilization, we still must contrast civilization with its prehistoric antecedents. When one realizes that the natural sciences have been writing the history of prehistory ever since the methods, the technologies, and the conceptual infrastructure for doing so were developed in the late nineteenth century, and that paleolithic history itself admits of cultures (the Micoquien, the Mousterian, the Châtelperronian, the Aurignacian, and the Gravettian, for example), it becomes clear that “culture” is a more comprehensive category than “civilization,” and that culture is the older category. The cultures of prehistory are the antecedent institutions to the institution of civilization. This immediately suggests, in the context of futurism, that there could be a successor institution to civilization that no longer could be strictly called “civilization” but which still constituted a human culture.
Thus the question, “What comes after civilization?” when understood in an appropriately radical philosophical sense, invites us to consider post-civilizational human cultures that will not only differ profoundly from contemporary industrial-technological civilization, but which will differ profoundly from all human civilization from the Neolithic Agricultural Revolution to the present day.
Human speciation, if it occurs, will profoundly affect the development of post-human, post-civilizational cultural institutions. I have mentioned in several posts (e.g., Gödel’s Lesson for Geopolitics) that Francis Fukuyama felt obligated to add the qualification to his “end of history” thesis that if biotechnology made fundamental changes to human beings, this could result in a change to human nature, and then all bets are off for the future: in this eventuality, history will not end. Changed human beings, possibly no longer human sensu stricto, may have novel conceptions of social organization and therefore also novel conceptions of social and economic justice. From these novel conceptions may arise cultural institutions that are no longer “civilization” as we here understand civilization.
Above I wrote, “human speciation, if it occurs,” and I should mention that my only hesitation here is that social or technological means may be employed in the attempt to arrest human evolution at more-or-less its present stage of development, thus forestalling human speciation. Thus my qualification on human speciation in no way arises from a hesitation to acknowledge the possibility. As far as I am concerned, human being is first and foremost biological being, and biological being is always subject to natural selection. However, technological intervention might possibly overtake natural selection, in which case we will continue to experience selection as a species, but it will be social selection and technological selection rather than natural selection.
In terms of radical scenarios for the near- and middle-term future, the most familiar on offer at present (at least, the most familiar that has some traction in the public mind) is that of the technological singularity. I have recounted in several posts the detailed predictions that have been made, including several writers and futurists who have placed definite dates on the event. For example, Vernor Vinge, who proposed the idea of the technological singularity, wrote that, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” (This is from his original essay on the technological singularity published in 1993, which places the date of the advent of the technological singularity at 2023 or sooner; I understand that Mr. Vinge has since revised his forecast.)
To say that “the human era will be ended,” is certainly to predict a radical development, since it postulates a post-human future within the lifetime of many now living today (much like the claim that, “Verily I say unto you, That there be some of them that stand here, which shall not taste of death, till they have seen the kingdom of God come with power.”). If I had to predict a radical post-human future in the near- to middle-term future I would opt not for post-human machine intelligence but for human speciation facilitated by biotechnology. This latter scenario seems to me far more likely and far more plausible than the technological singularity, since we already have the technology in its essentials; it is only a matter of refining and applying existing biotechnology.
I make no predictions and set no dates because the crowding of seven billion (and counting) human beings on a single planet militates against radical changes to our species. Social pressures to avoid speciation would make such a scenario unlikely in the near- to middle-term future. If we couple human speciation with the scenario of extraterrestrialization, however, everything changes, but this pushes the scenario further into the future because we do not yet possess the infrastructure necessary to extraterrestrialization. Again, however, as with human speciation through biotechnology, we have all the technology necessary to extraterrestrialization, and it is only a matter of refining and applying existing technologies.
From this scenario of human speciation coupled with extraterrestrialization there would unquestionably emerge post-human, post-civilizational cultural institutions that would be propagated into the distant future, possibly marginalizing, and possibly entirely supplanting, human beings and human civilization as we know it today. It is to be expected that these institutions will be directly related to the way of life adopted in view of such a scenario, and this way of life will be sufficiently different from our own that its institutions and its values and its norms would be unprecedented from our perspective.
. . . . .
. . . . .
. . . . .
31 January 2012
A revaluation of agricultural civilization
In several posts I have made a tripartite distinction in human history between hunter-gatherer nomadism, agriculturalism, and industrialism. There is a sense, then, from the perspective of la longue durée, that the macro-historical division of agriculturalism constitutes the “middle ages” of human social development. Prior to agriculturalism, nothing like this settled way of life even existed; now, later, from the perspective of industrialized civilization, agriculture is an enormous industry that can feed seven billion people, but it is a demographically marginal activity that occupies only a small fragment of our species. During those “middle ages” of agriculturalism (comprising maybe fifteen thousand years of human society) the vast bulk of our species was engaged in agricultural production. A very small class of elites oversaw agricultural production and its distribution, and small career military and priestly classes facilitated the work of elites in overseeing agricultural production. This civilizational focus is perhaps unparalleled by any other macro-historical epoch of human social development (and I have elsewhere implicitly referred to this focus in Pure Agriculturalism).
The advent of agricultural civilization was simultaneously the advent of settled civilization, and the transition from agriculturalism to industrialism left the institution of settled civilization in place. Other continuities are also still in place, and many of these continuities from agriculturalism to industrialism are simply the result of the youth of industrial civilization. When industrial civilization is ten thousand years old — should it survive so long, which is not at all certain — I suspect that it will preserve far fewer traces of its agricultural past. For the present, however, we live in a milieu of agricultural institutions held over from the long macro-historical division of agriculturalism and emergent institutions of a still-inchoate industrialism.
The institutions of agricultural civilization are uniquely macabre, and it is worthwhile to inquire as to how an entire class of civilizations (all the civilizations that belong within the macro-historical division of settled agriculturalism) could come to embody a particular (and, indeed, a peculiar) moral-aesthetic tenor. What do I mean by “macabre”? The online Merriam-Webster dictionary defines “macabre” as follows:
1: having death as a subject: comprising or including a personalized representation of death
2: dwelling on the gruesome
3: tending to produce horror in a beholder
All of the above characterize settled agricultural civilization, which has death as its subject, dwells upon the gruesome, and as a consequence tends to produce horror in the beholder.
The thousand years of medieval European society, which approximated pure agriculturalism perhaps more closely than many other agricultural civilizations (and which we might call a little bit of civilization in its pure form), stands as a monument to the macabre, especially after the experience of the Black Death (bubonic plague), which gave the culture of Europe a decidedly death-obsessed aspect still to be seen in graphically explicit painting and sculpture. But medieval Europe is not unique in this respect; all settled agricultural civilization, to a greater or a lesser extent, has a macabre element at its core. The Agricultural Apocalypse that I wrote about in my previous post constitutes a concrete expression of the horrors that agricultural civilization has inflicted upon itself. What makes agricultural civilization so horrific? What is the source of the macabre Weltanschauung of agriculturalism?
Both the lives of nomadic hunter-gatherers and the lives of settled agriculturalists are bound up with a daily experience of death: human beings must kill in order to live, and other living beings must die so that human beings can live. Occasionally a human being dies so that another species may live, and while this still happens in our own time when someone is eaten by a bear or a mountain lion, it happens much less often than the alternative, which explains why there are seven billion human beings on the planet while no other vertebrate predator comes close to these numbers. The only vertebrate species that flourish are those that we allow to flourish (there are, for example, about sixteen billion chickens in the world), with the exception of a few successful parasitic species such as rats and seagulls. (Even then, there are about five billion rats on the planet, and each rat weighs only a fraction of the mass of a human being, so that total human biomass is disproportionately great.)
Although nomadic hunter-gatherers and settled agriculturalists both confront pervasive experiences of death, the experience of death is different in each case, and this difference in the experience and indeed in the practice of death informs everything about human life that is bound up in this relationship to death. John Stuart Mill wrote in his The Utility of Religion:
“Human existence is girt round with mystery: the narrow region of our experience is a small island in the midst of a boundless sea, which at once awes our feelings and stimulates our imagination by its vastness and its obscurity. To add to the mystery, the domain of our earthly existence is not only an island in infinite space, but also in infinite time. The past and the future are alike shrouded from us: we neither know the origin of anything which is, nor, its final destination. If we feel deeply interested in knowing that there are myriads of worlds at an immeasurable, and to our faculties inconceivable, distance from us in space; if we are eager to discover what little we can about these worlds, and when we cannot know what they are, can never satiate ourselves with speculating on what they may be; is it not a matter of far deeper interest to us to learn, or even to conjecture, from whence came this nearer world which we inhabit; what cause or agency made it what it is, and on what powers depend its future fate?”
While Mill wrote that human existence is girt round with mystery, he might well have said that human existence is girt round with death, and in many religious traditions death and mystery are synonymous. The response to the death that surrounds human existence, and the kind of death that surrounds human existence, shapes the mythological traditions of the people so girt round.
Joseph Campbell explicitly recognized the striking difference in mythologies between nomadic hunter-gatherers and settled agricultural peoples. This is a theme to which Campbell returns time and again in his books and lectures. The mythologies of hunting peoples, Campbell maintained, revolved around placating the spirits of killed prey, while the mythologies of agricultural peoples revolved around sacrifice, according to the formula that, since life grows out of death, in order to create more life, one must create more death. Hence sacrifice. Campbell clearly explains a link between the mythologies peculiar to macro-historically distinct peoples, but why should peoples respond so strongly (and so differently) to distinct experiences of death? And, perhaps as importantly, why should peoples respond mythologically to death? To answer this question demands a more fundamental perspective upon human life in its embeddedness in socio-cultural milieux, and we can find such a perspective in a psychoanalytic interpretation of history derived from Freud.
It is abundantly obvious, in observing the struggle for life, that organisms are possessed of a powerful instinct to preserve the life of the individual at all costs and to reproduce that life (sometimes called eros or libido), but Freud theorized that, in addition to the survival instinct, there is also a “death drive” (sometimes called thanatos). Here is Freud’s account of the death drive:
“At one time or another, by some operation of force which still completely baffles conjecture, the properties of life were awakened in lifeless matter. Perhaps the process was a prototype resembling that other one which later in a certain stratum of living matter gave rise to consciousness. The tension then aroused in the previously inanimate matter strove to attain an equilibrium; the first instinct was present, that to return to lifelessness. The living substance at that time had death within easy reach; there was probably only a short course of life to run, the direction of which was determined by the chemical structure of the young organism. So through a long period of time the living substance may have been constantly created anew, and easily extinguished, until decisive external influences altered in such a way as to compel the still surviving substance to ever greater deviations from the original path of life, and to ever more complicated and circuitous routes to the attainment of the goal of death. These circuitous ways to death, faithfully retained by the conservative instincts, would be neither more nor less than the phenomena of life as we now know it. If the exclusively conservative nature of the instincts is accepted as true, it is impossible to arrive at any other suppositions with regard to the origin and goal of life.”
Sigmund Freud, Beyond the Pleasure Principle, authorized translation from the second German edition by C. J. M. Hubback, London and Vienna: The International Psycho-Analytical Press, 1922, pp. 47-48
The death drive, or thanatos, does not appear to be as urgent as the drive to live and to reproduce, but according to Freud it is equally implicated in society and culture. Moreover, given the emergence of war from the same settled agricultural societies that practiced a mythology of sacrifice (according to Campbell), there has been a further “production” of death by the social organization made possible by settled societies. It is to be expected that the production of death by sacrifice in order to ensure a good harvest would become entangled with the production of death in order to ensure the continuity of the community, and indeed in societies in which war became highly ritualized (e.g., Aztec civilization and Japanese civilization) there is a strong element of sacrifice in combat.
Freud’s explanation of the death drive may strike the reader as a bit odd and perhaps unlikely, but the mechanism that Freud is proposing is not all that different from Sartre’s contention that being-for-itself seeks to become being-in-itself (to put it simply, everyone wants to be God): life — finite life, human life — is problematic, unstable, uncertain, subject to calamity, and pregnant with every kind of danger. Why would such a contingent, finite being not desire to possess the quiescence and security of being-in-itself, to be free of all contingencies — free of what Shakespeare called “the thousand natural shocks that flesh is heir to”? The mythologies that Campbell describes as being intrinsic to nomadic and settled peoples are mechanisms that attempt to restore the equilibrium to the world that has been disturbed by human activity.
Agricultural civilization is the institutionalization of the death drive. The mythology of sacrifice institutionalizes death as the norm and even the ideal of agricultural civilizations. As such, settled agricultural civilization is (has been) a pathological permutation of human society that has resulted in the social equivalent of neurotic misery. That is to say, agricultural civilization is a civilization of neurotic misery, but all civilization need not be neurotically miserable. The Industrial Revolution has accomplished part of the work of overcoming the institutions of settled agriculturalism, but we still retain much of its legacy. To make the complete transition from the neurotic misery of settled agricultural civilization to ordinary civilizational unhappiness will require an additional effort above and beyond industrialization.
Despite the explicit recognition of a Paleolithic Golden Age prior to settled agriculturalism, there is a strong bias in contemporary civilization against nomadism and in favor of settled civilization. Both Kenneth Clark’s Civilisation: A Personal View and Jacob Bronowski’s The Ascent of Man (both of which I have cited with approval in many posts) make broad evaluative judgments to the detriment of nomadic societies — an entirely superfluous judgment, as though the representatives of settled civilization felt that they needed to defend the existential orientation of their civilization by condemning the way of life of uncivilized peoples, who are called savages and barbarians. The contempt that has been shown for the world’s surviving nomadic peoples — the Sami, the Gypsies, and others — as well as programs of forced sedentarization — e.g., among the Kyrgyz — show the high level of emotional feeling that still attaches to the difference between fundamentally distinct forms of life, even when one pattern of life has become disproportionately successful and no longer needs to defend itself against the depredations of the other.
Given the low esteem in which existential alternatives are held, it is important to see settled agricultural civilization, as well as its direct descendent, settled industrial civilization, in their true colors and true dimensions, and to explicitly recognize the pathological and explicitly macabre elements of the civilization that we have called our own in order to see it for what it is, and therefore to see its overcoming as an historical achievement for the good of the species.
We are not yet free of the institutions of settled agricultural civilization, which means that we are not yet free of a Weltanschauung constructed around macabre rituals focused on death. And despite the far-reaching changes to life that have come with the Industrial Revolution, there is no certainty that the developments that separate us from the settled agricultural macabre will continue. I wrote above that, given the consolidation of industrial civilization, we will probably have institutions far less agricultural in character, but it remains possible that industrialism may falter, may collapse, or may even, after consolidating itself as a macro-historical division, give way to a future macro-historical division in which the old ways of agriculturalism will be reasserted.
I count among the alternatives of future macro-historical developments the possibility of pastoralization and neo-agriculturalism. In any civilization largely constituted by either the historical processes of pastoralization or neo-agriculturalism, agriculture would once again play a central and perhaps a dominant role in the life of the people. In a future macro-historical division in which agriculture was once again the dominant feature of human experience, I would expect that the macabre character of agricultural civilization would once again reassert itself in a new mythology eventually consolidated in the axialization of a future historical paradigm centered on agriculture.
. . . . .
. . . . .
. . . . .
19 October 2011
I have been listening to Cro-Magnon: How the Ice Age Gave Birth to the First Modern Humans by Brian Fagan and am thoroughly enjoying the book. Professor Fagan has written a great many books about prehistory and climatology, and I have previously recommended his lectures for The Teaching Company, Human Prehistory and the First Civilizations. In fact, I was so enthusiastic about this set of lectures that I urged my mother to listen to them also, since she shares my interest in prehistory and anthropology. This led to an interesting coincidence. My mother was listening to these lectures before she took a cruise to Alaska with one of my sisters. When she got on the cruise ship she heard the distinctive voice of the on-board naturalist for the cruise, and asked him, “Are you Brian Fagan?” And indeed it was Brian Fagan.
In any case, I have derived a lot of value (and a lot of enjoyment) from the works of Professor Fagan, and I heartily recommend them. In this recent (2011) book on Cro-Magnons, Fagan is in fine form, delivering both anecdote and research results that enliven the human condition in its earliest iteration. Fagan places particular emphasis on two events, although I hesitate to call them “events” since they have more to do with the longue durée than with any ephemeral or momentary event.
He references the Mount Toba eruption, thought to have happened between 69,000 and 77,000 years ago, and which may have had a major impact upon our early ancestors. Although Fagan emphasizes (as do most anthropologists) that human beings were anatomically modern from the emergence of Homo sapiens (between 200,000 and 120,000 years ago), he implies without explicitly stating that a kind of cognitive modernity emerged during the period of privation following the Mount Toba eruption. I would suggest that the climatological winter following the eruption may have provided an opportunity for cognitive competition, and therefore triggered the emergence of cognitive modernity through evolutionary escalation. In short, only the cleverest of the at-that-time very small population of Homo sapiens in Africa would have survived.
Professor Fagan also places great emphasis upon the last glacial maximum (between 26,500 and 19,000–20,000 years ago) of the last ice age, when the human population, by that time having passed into Europe and Asia, was again under great climatological pressure, and came through a very difficult time. Most human beings alive today would possess neither the skills nor the knowledge to survive during the last glacial maximum, were they set down in Europe or Asia 25,000 years ago. And it is important to emphasize that it is skills and knowledge that make the difference: we are not talking about hulking, cold-weather-adapted Neanderthals, we are talking about fully anatomically modern human beings, indistinguishable from ourselves.
Both of these events — the Mount Toba eruption and the last glacial maximum — were cold weather events. Our ancestors survived and even thrived because of the skills that they developed to carry them through extreme cold weather conditions. Professor Fagan mentions winters lasting nine months of the year, and temperatures routinely colder than our relatively balmy inter-glacial temperatures. And they mastered these skills without the sort of high technology that we would imagine would be necessary to survive such conditions.
Professor Fagan emphasizes the importance of the eyed needle, which he compares to the domestication of fire as an event of the first importance in the history of human technology. It was the eyed needle, cut from bone or antler with a very fine flint blade, that made possible the sewing of close-fitting clothes, and it was close-fitting clothes, mostly made of reindeer hides, that made survival through the last glacial maximum possible.
We are all the descendants of these hardy ancestors who found ingenious ways to live in a hostile climate — and not only a hostile climate, but a climate that changed dramatically in the course of a lifetime, and even more dramatically over a few generations. And while these ice age ancestors were not Eskimos sensu stricto, the closest thing to their lives is preserved by the remnants of the peoples of the far northern polar regions who still survive in extreme cold, who still wear the skins of the animals they eat, who still cling to the nomadic hunter-gatherer way of life, and who still have the migration of the reindeer at the center of their lives and their culture. The reindeer is perhaps the central animal in human experience taken over the longue durée.
It is interesting to note in this connection that Toynbee, in his attempt at a systematic survey of civilizations, did identify an “Eskimo civilization,” but he identified it as an “arrested civilization” and in this capacity classed it with Polynesian civilization and with nomads generally speaking. These “arrested” civilizations are not to be confused with the “abortive” civilizations of Viking Scandinavia and Far Western Christendom (the “Celtic Fringe”).
The “arrested” civilizations actually play a central role in Toynbee’s “challenge and response” argument for the vitality of civilizations. Toynbee regarded Eskimo civilization as arrested because it confronted a challenge that was too great to overcome and thus to produce a civilization that would not be characterized as arrested. Here is how Toynbee puts it:
“In addition to the two classes already noticed, developed civilizations and abortive civilizations, there is a third, which we must call arrested civilizations. It is the existence of civilizations which have kept alive but failed to grow that compels us to study the problem of growth; and our first step will be to collect and study the available specimens of civilizations of this category.”
Arnold Toynbee, A Study of History, Vol. 1., p. 164
“All these arrested civilizations have been immobilized in consequence of having achieved a tour de force. They are responses to challenges of an order of severity on the very borderline between the degree that affords stimulus to further development and the degree that entails defeat… we may add that four out of the five we have mentioned were in the end compelled to accept defeat. Only one of them, the Eskimo culture, is still maintaining itself.”
Arnold Toynbee, A Study of History, Vol. 1., pp. 164-165
I do not necessarily disagree with this, though I wouldn’t formulate the idea of arrested civilizations in exactly the same way; in any case, it is instructive and interesting. Where Toynbee goes seriously wrong is a couple of paragraphs further along:
“As for the Eskimos, their culture was a development of the North American Indian way of life specifically adapted to the conditions of life round the shores of the Arctic Ocean.”
Arnold Toynbee, A Study of History, Vol. 1., p. 165
Here Toynbee has gotten it exactly backward: it is not that the Eskimos were a development of North American Indian cultures, but that North American Indian cultures were a development of Eskimo culture — or, to be more precise, a development from a way of life that is the direct cultural ancestor of contemporary Eskimo life.
The order of derivation is important here because it refers to what is most fundamental in human experience over the longue durée, and this is largely the experience of Eskimo life, taking the latter in its broadest signification. For this reason I would not call Eskimo civilization “arrested” civilization but rather proto-civilization.
Eskimo civilization is the ancestor of all human civilization, and in a world in which glaciation is the norm (as has been the case throughout the Quaternary Glaciation, which comprises the better part of the duration of human evolution) and brief, balmy inter-glacial periods have been the exception to the climatological rule, Eskimo life also represents the robust and sustainable way of human life to which Homo sapiens can return time and again as the ice sheets advance and retreat.
Settled civilizations in an inter-glacial temperate zone — that climatological region most friendly to the growth of civilization, which for Toynbee represents the norm for human society — would be fatally threatened by the arrival of a glacial maximum, but Homo sapiens can always return to the ways of Eskimo life to weather the storm of severe climatological conditions. This makes of Eskimo civilization the source and the root of human life.
Had it not been for the game-changing emergence of industrial-technological civilization, this calculus would still hold good, but this unpredictable and unprecedented historical contingency has thrown everything into question, including verities that have shaped human life from its beginnings up until the Industrial Revolution only two hundred years ago.
. . . . .
. . . . .
. . . . .
9 May 2011
As I have come to realize how much I enjoy documentaries of large scope, rather than waiting for them to fall to me in a moment of serendipity, I now go looking for them. And so it is that I ordered Jacob Bronowski’s famous documentary, The Ascent of Man, from the library, and am now watching it.
The description of the series on the box, from Booklist, says that the series is “an excellent resource for high-school and college students.” Well, it is much more than this. Like Clark’s Civilisation: A Personal View, Ronald Eyre’s The Long Search, and Gwynne Dyer’s War, this work by Bronowski is not merely educative, as the reference to high-school and college students seems to suggest, but it takes a definite point of view, a personal point of view. This stands in the best tradition of engaged scholarship and academic freedom.
I recently wrote of Clark’s Civilisation that he maintains a number of theses throughout his presentation, and the same is true of Bronowski. These documentaries are not merely showing us pretty pictures and telling us what once happened long ago; they are making arguments, and if you aren’t aware that an argument is being made, and that others have argued to the contrary, you will miss a lot that is valuable in these presentations. It is only thoughtlessness — and thoughtlessness is the intellectual equivalent of carelessness — that would condescend to identify these arguments as a mere “resource for high-school and college students.”
I am reminded, in this context, of Bertrand Russell’s A History of Western Philosophy. There are a few institutionalized philosophers who use this book as a classroom text. If you already know a good deal of philosophy, Russell’s survey is a very funny review, but if you do not know philosophy and attempt to learn it from Russell’s book, you will end up with some serious misapprehensions of the discipline. Russell’s history is a wonderful book, but it adopts a definite point of view, and as such sets itself up in opposition to other points of view. Copleston’s history is much more detailed and closer to being objective, and Passmore’s history, though it only covers the nineteenth and twentieth centuries, is better than either, though it too has a point of view.
So it is with Bronowski on the history of science and Clark on the history of civilization: these are not “textbooks,” though there is much to be learned from them. With justification, both television series are subtitled, “A Personal View.” I find much of value in these personal views; that is in fact why I seek them out. It is interesting to know that Bronowski’s science series was conceived in conscious counterpoint to Clark’s series on civilization. In an interview with Sir David Attenborough that is included on the Civilisation DVD, the latter reveals how BBC 2 controller Aubrey Singer was castigated for putting the arts “first” in having Clark’s Civilisation as the first large, multi-part documentary on the network.
While Bronowski’s series was undertaken in conscious contradistinction to Clark’s Civilisation, as a science counterpoint to Clark’s arts perspective, there are in fact many similarities and continuities between the two, and both maintain similar theses, though Bronowski’s are formulated in a much longer time frame. I sharply disagree with many of Bronowski’s central theses, but I will leave any criticisms for another time. For the moment I would like to focus on what I find most valuable in Bronowski’s perspective.
Bronowski is very much a practitioner of the longue durée, though he is not an historian per se, and not a structuralist. But Bronowski’s long-term perspective on time adds up to more than the sum of its extended parts. The absolute quantity of history taken in by Bronowski’s perspective ultimately has qualitative consequences for the conception of history given exposition by Bronowski. I enthusiastically approve of this, as this is exactly what I have been trying to get at in what I have come to call metaphysical history (and which I formerly called Integral History).
Comparing Bronowski and Clark on history, Bronowski’s reflections are the more radical. Let me try to explain what I mean by that, and it won’t be easy. We are so familiar with the political use of “radical” that we have lost the larger, more comprehensive meaning of the term. The sense of “radical” to which I am appealing has nothing to do with waving signs in the street or calling for the overthrow of government. “Radical” in a philosophical sense is something beyond the most intemperate demands of the political radical, and therefore in a sense even more radical.
To be radical is to go to the root. It is to get at the fons et origo of things. The political radical wants to get at the root of a political problem by eliminating an old and compromised society and effecting a root-and-branch reconstruction of society. Thus the political sense of being radical is a specialized sense of “radical.” A comprehensive conception of what it is to be radical, an extended sense of radical not restricted to any one concern or discipline, is ultimately metaphysical. To be metaphysically radical is to attempt to get at the root of the world.
Husserl, throughout his works of phenomenology, emphasized the radical nature of phenomenological reflection. Husserl is trying to get at the root of things, and that is what makes his thought radical. Throughout his Cartesian Meditations, for example, he calls for philosophical radicalism. This is a consistent theme in Husserl, and here is a characteristic passage:
“In recent times the longing for a fully alive philosophy has led to many a renaissance. Must not the only fruitful renaissance be the one that reawakens the impulse of the Cartesian Meditations: not to adopt their content but, in not doing so, to renew with greater intensity the radicalness of their spirit, the radicalness of self-responsibility, to make that radicalness true for the first time by enhancing it to the last degree, to uncover thereby for the first time the genuine sense of the necessary regress to the ego, and consequently to overcome the hidden but already felt naïveté of earlier philosophizing?”
Edmund Husserl, Cartesian Meditations, The Hague: Martinus Nijhoff, 1960, Introduction, section 2, “The necessity of a radical new beginning of philosophy,” p. 6
I see Bronowski’s perspective as a scientific radicalism that is closely parallel to Husserl’s philosophical radicalism. Both forms of radicalism are metaphysical, and seek to get at the root of reality itself.
One practical example of this is that early in his television series (in the second episode, I think — I don’t yet know this as well as Clark’s Civilisation, which I nearly have memorized) Bronowski refers to “civilization as we know it” as being 12,000 years old, which means that he is identifying civilization with the Neolithic Agricultural Revolution and the emergence of settled life in villages and eventually cities. I have made this claim myself, in this forum, though it is not typical to date civilization in this way. Also early in the series Bronowski says, “nature, i.e., evolution.” This reminds me of the evolutionary bluntness of JFK in his “sea” speech, which I mentioned in my post Inaugural Special Edition.
The evolutionary honesty of Bronowski is presently under attack by anti-scientific crackpots from the right. But Bronowski is equally honest about human adaptive radiation, openly discussing the adaptation of various ethnic groups (formerly called “races”) to the different environments into which they migrated, and this is a position that is presently under attack by anti-scientific crackpots on the left. It is, in part, this admirable indifference to political implications that makes Bronowski’s position radical in the best sense of the term.
New forms of dishonesty and humbug are always emerging from human history, as men twist and turn their minds in an anti-heroic effort not to see things as they are, and it often requires considerable effort — a radical effort like that of Husserl or Bronowski — to transcend these ingenious efforts at self-deception. Such an effort is difficult, but the reward is correspondingly great.
. . . . .
. . . . .
. . . . .
2 January 2011
In many posts in which I have discussed prehistory, especially those concerned with the period of time starting with the Neolithic Agricultural Revolution — a period of time that I call the Agricultural Paradigm — I have often referred to the societies of the Agricultural Paradigm as civilizations, and this is a usage that is atypical at best; I don’t know what it might be called at worst. “Civilization” is usually reserved to refer to larger-scale societies with cities, and may be further reserved for the emergence of historical consciousness and its explicit expression in written language, i.e., the historical period sensu stricto. Before the advent of cities and written language, it is more typical to refer to “cultures” rather than to “civilizations.”
From my point of view, the emergence of settled agricultural societies, if not coextensive with civilization proper, is certainly the beginning of civilization, will eventually become civilization, and represents something distinctly different from human life during the nomadic paradigm that preceded it.
No rational person without some particular agenda would attempt to reduce the complexity of civilization to any one property or artifact — for example, the stirrup, the wheel, written language, or cities of a given size. All of these things emerge gradually in history. The earliest societies of settled agriculturalism did not have written languages, but they did have monuments such as megaliths that preserved certain kinds of knowledge and served a symbolic function. And while these early settled societies did not have cities as we know them today, they did have interconnected villages and probably interconnected populations equal to the cities that would emerge later.
It occurred to me today that I could introduce the term “proto-civilization” to distinguish the transitional period — or perhaps what we might call an incipient period — from clearly non-civilized conditions to clearly civilized conditions. In An unnamed principle and an unnamed fallacy (which I later called the truncation principle) I made this observation: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases in which the distinction is not problematic. This holds for the distinction between civilization and non-civilization as for other distinctions. The fact that there are problematic cases does not render the non-problematic cases irrelevant, and, vice versa, the fact of non-problematic cases does not render problematic cases irrelevant. Proto-civilization is the problematic case of civilization, and is the bridge between civilization and non-civilization.
We typically invoke the prefix “proto-” when we want to indicate an idea that is used before it is made explicit, that is to say, before it is formalized. I considered this in Putting Ideas First, in which I distinguished between ideas that precede their factual realization on the one hand, and on the other hand ideas that are suggested by an actually existing state-of-affairs. In the case of civilization, a state-of-affairs existed long before the idea of civilization was made explicit. But in projecting the idea of civilization backward in history, we already have the idea suggested by a particular cultural milieu, and the question becomes whether this idea can be applied further than the context in which it was initially proposed. (It would be worthwhile to formulate this in greater detail and rigor, but I will save this for another time.)
Since the many properties and artifacts that jointly constitute civilization emerge gradually, I choose to identify as civilizations those societies that first begin to exhibit these properties and artifacts, and I see this first in the settled agricultural societies of the Neolithic Agricultural Revolution. As a matter of disambiguation vis-à-vis more conventional expositions of history, I will try to use the term proto-civilization for this period in future expositions.
If we make a comparison not between historically sequential cultures but between periodically distinct cultures from different traditions, the extent to which so-called “stone age” cultures of settled agriculturalism are already fully developed civilizations becomes more obvious. Civilization in the Western hemisphere developed according to a slightly different pattern than in the Eastern hemisphere, though there is much in common among civilizations all over the world. However, the historical record preserves in significant detail an encounter between a stone age culture and a “civilized” culture, and that is the arrival of Europeans in the Western hemisphere.
The records kept by Europeans reveal to us the civilizations of the Western hemisphere in a way that they could not yet document themselves. When we study the political complexity of these societies, their degree of organization, the art and architecture, and the surviving fragments of life preserved in museums, we do not hesitate to call these cultures of the Western hemisphere civilizations. When we compare them to the civilizations of the Old World, there are family resemblances between the two, but also failures of resemblance. There was writing, but I think the glyphs on Mayan temples are more like the Runes of Scandinavia — an incantatory language more than a utilitarian language — than a pragmatic writing system for record keeping.
Examples such as this can easily be multiplied. The use of the wheel was unknown in the Western hemisphere for anything other than toys before the arrival of Europeans. So the civilizations of the Western hemisphere had some of the artifacts and properties that we usually attribute to civilization, while they lacked others. But since their level of development was recorded by a people with a long-established tradition of documentary record keeping, we know a great deal about these cultures — much more than we know about the Europeans’ own stone age cultures. And I think that if we could go back in time and document the cultures of the Old World in the period of proto-civilization, they would look a lot like the civilizations of the Western hemisphere.
The achievement of European civilization hid its own origins from itself (the origins of civilization were effaced by the later stages of civilization), and it was not until the late nineteenth century that European civilization began to understand its own prehistoric origins. Yet these same developments made it possible for Europeans to recognize as civilizations the cultures they encountered, even if this encounter was violent and resulted in the annihilation of much of the culture encountered. There are many lessons to be learned from this clash of civilizations, and not least is the lesson that these stone age cultures were civilizations; once we have learned this lesson we can see that the stone age cultures of the Old World were also civilizations. Though, as I noted above, I will try to remember to refer to them as proto-civilizations in order to reduce the confusion over identifying as civilizations cultures formerly identified as not yet civilized.
. . . . .
. . . . .
. . . . .
. . . . .