14 February 2017
Nietzsche’s Big History
One of the most succinct formulations of Big History of which I am aware is a brief paragraph from Nietzsche:
“In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the highest and most mendacious minute of ‘world history’ — yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die.”
“On Truth and Lie in an Extra-Moral Sense,” Friedrich Nietzsche, 1873, a fragment from the Nachlass, translated by Walter Kaufmann
…and in the original German:
In irgend einem abgelegenen Winkel des in zahllosen Sonnensystemen flimmernd ausgegossenen Weltalls gab es einmal ein Gestirn, auf dem kluge Tiere das Erkennen erfanden. Es war die hochmütigste und verlogenste Minute der “Weltgeschichte”: aber doch nur eine Minute. Nach wenigen Atemzügen der Natur erstarrte das Gestirn, und die klugen Tiere mußten sterben.
Über Wahrheit und Lüge im außermoralischen Sinne, Friedrich Nietzsche, 1873, aus dem Nachlaß
This passage has been translated several times, so, for purposes of comparison, here is another translation:
“In some remote corner of the universe that is poured out in countless flickering solar systems, there once was a star on which clever animals invented knowledge. That was the most arrogant and the most untruthful moment in ‘world history’ — yet indeed only a moment. After nature had taken a few breaths, the star froze over and the clever animals had to die.”
On Truth and Lying in an Extra-Moral Sense (1873), edited and translated with a critical introduction by Sander L. Gilman, Carole Blair, and David J. Parent, New York and Oxford: Oxford University Press, 1989
Bertrand Russell, who rarely passed over an opportunity to criticize Nietzsche in the harshest terms, expressed a tragic interpretation of human endeavor that is quite similar to Nietzsche’s capsule big history:
“That Man is the product of causes which had no prevision of the end they were achieving; that his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms; that no fire, no heroism, no intensity of thought and feeling, can preserve an individual life beyond the grave; that all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man’s achievement must inevitably be buried beneath the debris of a universe in ruins — all these things, if not quite beyond dispute, are yet so nearly certain, that no philosophy which rejects them can hope to stand. Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul’s habitation henceforth be safely built.”
Bertrand Russell, “A Free Man’s Worship”
Even closer to Nietzsche, in both style and spirit, is the passage that immediately precedes this in the same essay by Russell, told, as with Nietzsche, in the form of a parable:
“For countless ages the hot nebula whirled aimlessly through space. At length it began to take shape, the central mass threw off planets, the planets cooled, boiling seas and burning mountains heaved and tossed, from black masses of cloud hot sheets of rain deluged the barely solid crust. And now the first germ of life grew in the depths of the ocean, and developed rapidly in the fructifying warmth into vast forest trees, huge ferns springing from the damp mould, sea monsters breeding, fighting, devouring, and passing away. And from the monsters, as the play unfolded itself, Man was born, with the power of thought, the knowledge of good and evil, and the cruel thirst for worship. And Man saw that all is passing in this mad, monstrous world, that all is struggling to snatch, at any cost, a few brief moments of life before Death’s inexorable decree. And Man said: ‘There is a hidden purpose, could we but fathom it, and the purpose is good; for we must reverence something, and in the visible world there is nothing worthy of reverence.’ And Man stood aside from the struggle, resolving that God intended harmony to come out of chaos by human efforts. And when he followed the instincts which God had transmitted to him from his ancestry of beasts of prey, he called it Sin, and asked God to forgive him. But he doubted whether he could be justly forgiven, until he invented a divine Plan by which God’s wrath was to have been appeased. And seeing the present was bad, he made it yet worse, that thereby the future might be better. And he gave God thanks for the strength that enabled him to forgo even the joys that were possible. And God smiled; and when he saw that Man had become perfect in renunciation and worship, he sent another sun through the sky, which crashed into Man’s sun; and all returned again to nebula.

“‘Yes,’ he murmured, ‘it was a good play; I will have it performed again.’”
Here Russell, unlike Nietzsche, gives theological meaning to the spectacle, however heterodox that meaning may be; I can easily imagine someone preferring Russell’s theological version to Nietzsche’s secular version, though both highlight the meaninglessness of human endeavor in a thermodynamic universe.
Our sun — a star among stars — will be a relatively early casualty in the heat death of the universe. While the life of the sun is orders of magnitude beyond the life of the individual human being, as soon as we understood that the sun’s life will pass through predictable stages of stellar evolution, we understood that the sun, like any human being, was born, will shine for a time, and then will die, and, when the sun dies, everything that is dependent upon the light of the sun for life will die also. It is only if we can make ourselves independent of the sun that we will not inevitably share the fate of the sun.
The idea that the sun is a star among stars, and that any star will do in terms of supporting human life, is embodied in a quote attributed to Wernher von Braun by Tom Wolfe and reported in Bob Ward’s book about von Braun:
“The importance of the space program is not surpassing the Soviets in space. The importance is to build a bridge to the stars, so that when the Sun dies, humanity will not die. The Sun is a star that’s burning up, and when it finally burns up, there will be no Earth… no Mars… no Jupiter.”
quoted in Dr. Space: The Life of Wernher von Braun, Bob Ward, Chapter 22, p. 218, with a footnote giving as the source, “Transcript, NBC’s Today program, New York, November 11, 1998”
Wernher von Braun had seized upon the essential insight of existential risk mitigation, as had many involved in the space program from its inception. As soon as one adopts a naturalistic understanding of the place of humanity in the universe, and when technology develops to a point at which its extrapolation offers human beings options and alternatives within the universe, anyone will draw the same conclusion. Another quote from von Braun makes the same point in another way:
“…man’s newly acquired capability to travel through outer space provides us with a way out of our evolutionary dead alley.”
Bob Ward, Dr. Space: The Life of Wernher von Braun, Annapolis, US: Naval Institute Press, 2013.
I have previously written about the idea that humanity is a solar species, but the fact that humanity and the biosphere from which we derive have been utterly dependent upon solar insolation has been an accident of history. Any sun will do. We can, accordingly, re-conceive humanity as a stellar species, the kind of species that requires a star and its planetary system in order to make a home for itself. In this sense, all species of planetary endemism are stellar species.
Even this idea of migration to another star, and of any other star being as good as the sun, is ultimately too narrow. Our sun, or any star, can be the source of energy that powers our civilization, but it can easily be seen that substitute forms of energy could equally well power the future of our civilization, and that it has merely been an historical contingency — a matter of our planetary endemism — that we have been dependent upon a single star, or upon any star, for our energy needs.
This more radical and farther-reaching vision is embodied in a quote attributed to Ray Bradbury by Oriana Fallaci:
“Don’t let us forget this: that the Earth can die, explode, the Sun can go out, will go out. And if the Sun dies, if the Earth dies, if our race dies, then so will everything die that we have done up to that moment. Homer will die. Michelangelo will die, Galileo, Leonardo, Shakespeare, Einstein will die, all those will die who now are not dead because we are alive, we are thinking of them, we are carrying them within us. And then every single thing, every memory, will hurtle down into the void with us. So let us save them, let us save ourselves. Let us prepare ourselves to escape, to continue life and rebuild our cities on other planets: we shall not be long of this Earth! And if we really fear the darkness, if we really fight against it, then, for the good of all, let us take our rockets, let us get well used to the great cold and heat, the no water, the no oxygen, let us become Martians on Mars, Venusians on Venus, and when Mars and Venus die, let us go to the other solar systems, to Alpha Centauri, to wherever we manage to go, and let us forget the Earth. Let us forget our solar system and our body, the form it used to have, let us become no matter what, lichens, insects, balls of fire, no matter what, all that matters is that somehow life should continue, and the knowledge of what we were and what we did and learned: the knowledge of Homer and Michelangelo, of Galileo, Leonardo, Shakespeare, of Einstein! And the gift of life will continue.”
Oriana Fallaci, If the Sun Dies, New York: Atheneum, 1966, pp. 14-15
Fallaci refers to this as a “prayer,” and indeed we might see this as a prayer or a catechism of the Space Age — not merely a belief, but an imperative ever-present in the hearts and minds of those who have fully imbibed the spirit of the age and who seek to carry that spirit forward with evangelical fervor, proselytizing to the masses and bringing them to the True Faith through purity of will and vision — another way of saying naïveté.
Do the clever animals have to die? No, not yet. Not if they are clever enough to move on to another planet, another star, another galaxy. Not if they are clever enough to change themselves so that, when the changed conditions of the universe in which they exist no longer allow the lives of clever animals to continue, what the clever animals have achieved can be preserved in some other way, and they themselves can be preserved in another form.
. . . . .
2 May 2016
Darwin’s Thesis on the Origin of Civilization
and its extrapolation to exocivilizations
In the scientific study of civilization we are beginning at the beginning because there is no established body of scientific knowledge about civilization — much historical knowledge, to be sure, but no science of civilization, sensu stricto, and therefore no scientific knowledge sensu stricto — and this demands that we begin with the simplest and most obvious propositions about civilization. The simplest and most obvious propositions about civilization are those that most discussions of civilization would simply pass over in silence as necessary presuppositions, or which would be dismissed by hand-waving and the assertion, “It is obvious that…” We will take a different point of view. Only a mathematician would think that the Jordan curve theorem was an idea in need of proof, and only someone engaged in attempting to formulate a science of civilization would think that the proposition that civilization originates in a pre-civilized condition requires discussion.
Our point of departure in this discussion will be what I call Darwin’s Thesis on the origins of civilization, or, more simply, Darwin’s Thesis. I call this Darwin’s Thesis (and called it such in my presentation “What kind of civilizations build starships?”) because of the following passage from Darwin about the origins of civilization:
“The arguments recently advanced… in favour of the belief that man came into the world as a civilised being and that all savages have since undergone degradation, seem to me weak in comparison with those advanced on the other side. Many nations, no doubt, have fallen away in civilisation, and some may have lapsed into utter barbarism, though on this latter head I have not met with any evidence… The evidence that all civilised nations are the descendants of barbarians, consists, on the one side, of clear traces of their former low condition in still-existing customs, beliefs, language, &c.; and on the other side, of proofs that savages are independently able to raise themselves a few steps in the scale of civilisation, and have actually thus risen.”
Charles Darwin, The Descent of Man, Chapter V (I have left Darwin’s spelling in its Anglicized form.)
Darwin was here taking the same naturalistic stance in regard to civilization that he had earlier taken in regard to biology. Darwin made biology scientific by making it a domain of research approached by way of methodological naturalism; prior to Darwin there was biology of a kind, but not any study of biology that could be reconciled with methodological naturalism. Darwin applied this same reasoning to civilization, and this is the reasoning we must apply to civilization if we are to formulate a science of civilization that can be reconciled with methodological naturalism.
As far as ideas about civilization go, this is extremely basic. However, I will again stress the need to begin a science of civilization with the most basic and rudimentary propositions possible. While this is a proposition so rudimentary as to be mundane, there can be no more interesting question for the science of civilization than that of the origin of civilization (the question of the end of civilization is equally interesting, but I wouldn’t say it is more interesting).
While the simplest theses on civilization seem so mundane as to be uninteresting, they can nevertheless be deductively powerful in their application. We can only address the longevity of a civilization, for example, once we have established a point in time at which civilization begins, and counting forward in whatever temporal units we care to employ up to its demise (which also must be defined, if the civilization in question has come to an end), or up to the present day (if the civilization in question is still in existence).
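The arithmetic here is trivial, but it is worth making explicit that everything turns on the two definitional choices the text identifies: fixing the origin and, where applicable, fixing the demise. A minimal sketch in Python, with wholly hypothetical dates (negative years standing for BCE, and ignoring calendrical subtleties such as the absence of a year zero):

```python
def longevity(origin_year, demise_year=None, present_year=2017):
    """Longevity in years, counted from a defined origin up to a
    defined demise, or up to the present day if the civilization
    is still in existence. Negative years denote BCE."""
    end = demise_year if demise_year is not None else present_year
    return end - origin_year

# A civilization dated from the Neolithic Agricultural Revolution
# (~8000 BCE, an illustrative figure) and still in existence:
print(longevity(-8000))
# A hypothetical extinct civilization, origin to demise:
print(longevity(-3100, demise_year=-30))
```

The subtraction is the easy part; the deductive work lies in the definitions of origin and demise that feed it.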
According to Darwin’s Thesis, then, civilization is descended from a prior savage or barbaric condition (not terms we would likely employ today, but certainly terms we still understand). How are we to characterize this pre-civilized condition of humanity? What constitutes the non-civilization that preceded civilization?
A somewhat discerning distinction, albeit one with moral overtones, was made between savagery, barbarism, and civilization. Like the “three age” system of prehistory — stone age, bronze age, iron age — we still find traces of these distinctions in contemporary thought. Here is how I described it previously:
“Edward Burnett Tylor proposed that human cultures developed through three basic stages consisting of savagery, barbarism, and civilization. The leading proponent of this savagery-barbarism-civilization scale came to be Lewis Henry Morgan, who gave a detailed exposition of it in his 1877 book Ancient Society… A quick sketch of the typology can be found at Anthropological Theories: Cross-Cultural Analysis. One of the interesting features of Morgan’s elaboration of Tylor’s idea is his concern to define his stages in terms of technology. From the ‘lower status of savagery’ with its initial use of fire, through a middle stage at which the bow and arrow is introduced, to the ‘upper status of savagery’ which includes pottery, each stage of human development is marked by a definite technological achievement. Similarly with barbarism, which moves through the domestication of animals, irrigation, metal working, and a phonetic alphabet.”
Elsewhere I suggested that the non-civilization prior to civilization could be called proto-civilization. I just re-read my post on proto-civilization and now I find it inadequate, but I still endorse at least this much of what I said there:
“In the case of civilization, a state-of-affairs existed long before the idea of civilization was made explicit. But in projecting the idea of civilization backward in history, we already have the idea suggested by a particular cultural milieu, and the question becomes whether this idea can be applied further than the context in which it was initially proposed.”
This would be one methodology to employ: take the concept of civilization as it has been elaborated and seek to apply it to past social structures; determining at what point this concept no longer applies gives a point in time for the origin of civilization. This could be called the “retroactive method.”
Given that we now possess far more archaeological data than was available when the concept of civilization was first formulated, this method has new information to work with that it did not have at the time of its formulation. This is one of the points that I attempted to make, however poorly I did so, in my post on proto-civilization: we have an enormous amount of archaeological data on the Upper Paleolithic and Early Neolithic in the Old World, which is usually described in terms of “cultures” rather than “civilizations.” But when European explorers of the Early Modern period came to the New World, they encountered peoples that had social institutions that we today call civilizations, though these civilizations were closer to the “Stone Age” of the Old World than to the early civilizations of Egypt and Mesopotamia (to take two paradigm cases of civilization).
An alternative to the retroactive method would be to study the artifacts of the past on their own merits, to construct a definition of civilization on the basis of the earliest known human societies (on the basis of their material culture), and then apply this conception of civilization forward in time (for lack of a better term I will call this the proactive method, simply to contrast it to the retroactive method). It is arguable that some archaeologists do in fact follow this method, but I don’t know of anyone who has explicitly advanced this procedure as desirable (much less as necessary), although it does bear some resemblance to the implicit formalism of the cultural processual school in archaeological thought.
Both retroactive and proactive methods incorporate obvious problems that derive from parachronic distortions of evidence (the most obvious parachronism is the familiar idea of an anachronism, i.e., a survival from the past preserved into the present, where it is obviously out of place; the contrary parachronic distortion is that of projecting the present into the past).
To pull back from the provincial considerations of civilization studied by archaeology to date — that is to say, exclusively terrestrial civilizations — we can further develop the idea of Darwin’s Thesis in a cosmological context. Once we do this, we immediately understand that we have been asking questions focused on a particular set of conditions that are characteristic of civilizations during the Stelliferous Era, and our ideas worked out for terrestrial civilization (civilizations of planetary endemism during the Stelliferous Era) may not apply more generally to the largest scales of civilization achieved (or which may yet be achieved) in the cosmos.
Civilizations during the Degenerate Era may possess a different character due to their need to derive energy flows from sources other than stellar flux, which latter defines the conditions of the origins of civilization from intelligent biological agents during the Stelliferous Era, which might also be called the Age of Planetary Endemism. If the Degenerate Era begins with the universe having been exhaustively settled or inhabited by life and civilization, this densely inhabited universe not only would prevent the emergence of new civilizations, but also would mean an end to this living cosmos of starlight. In this case the Degenerate Era begins with what I have called the End-Stelliferous Mass Extinction Event (ESMEE), when widely distributed life and civilization of the Stelliferous Era, primarily supported by energy flows from stellar flux (and concentrated on planetary surfaces), comes to an end as the stars wink out one by one.
The cohort of emergent complexity that survives this transition is likely to be a post-civilization successor institution that is (by this time in the evolution of the universe) further removed from the origins of civilization than we are today removed from the origin of the universe. At this point, the origins of emergent complexity will be a distant question, largely inapplicable to contemporaneous concerns, and the central question will be what of the Stelliferous Era can survive into the Degenerate Era, and how it can perpetuate itself in a universe converging on heat death.
Would these civilizations of the Degenerate Era be newly originating civilizations, or would they be derivative from civilizations of the Stelliferous Era? The obvious answer would seem to be that these civilizations would be derivative, except that over such cosmological spans of time the concept of civilization (and the threshold of what constitutes a civilization) is likely to evolve as much as, if not more than, civilization itself. As civilization develops, and a greater degree of science, technology, and intellectual achievement is believed to be indispensable to what constitutes civilization, civilization may be redefined as something close to prevailing conditions, and everything prior to this is redefined as proto-civilization. For example, civilization today might be considered unimaginable without the conveniences of modern life, and everything prior is consigned to barbarism. This reasoning can be extended to hold that civilization is unimaginable without fusion energy, without strong AI, without interstellar travel, and so on. All of this is entirely consistent with Darwin’s Thesis, which holds regardless of whether we consider the Upper Paleolithic to be utter savagery, or 2016 to be utter savagery.
If we consciously make an effort to formulate and to retain a comprehensive conception of civilization — one that is not continually revised forward in time in the light of the later developments of civilization — we can avoid the above problem, and it is this approach that gives us longer ages for our civilization today. I have often mentioned that it was once commonplace, and perhaps is still commonplace, to fix the origins of civilization with the origins of written languages (i.e., the origins of the “historical period” sensu stricto), but scientific historiography has been slowly chipping away at the distinction between history and prehistory until it is no longer tenable. Hence I identify the origins of civilization with the emergence of cities during or shortly after the Neolithic Agricultural Revolution, which makes our civilization about ten thousand years old, rather than five thousand years old.
As our archaeological knowledge of the past improves, we may be able to set quantifiable conditions for the origins of civilization (say, a number of cities with a given population size, or a particular degree of sophistication in metallurgy, which latter seems to me to mark the ultimate origins of technological civilization). Again, Darwin’s Thesis is entirely in accord with this method also. Moreover, I think that this method gives a greater degree of independence to the determination of the origins of civilization, as it would also give us metrics by which we could determine the independent origin of a new civilization, say, even in the Degenerate Era, if this were to prove possible (which we really don’t know at present).
Beyond these concerns, and beyond the immediate scope of this post, we may need to posit a condition for the continuity of civilization — say, e.g., that metallurgical technology never lapses below a certain threshold — so that, once given Darwin’s Thesis and some definition of civilization, we can determine when a civilization has originated de novo, and when a civilization is an evolutionary mutation of an earlier civilization, or a developmental achievement of an earlier civilization, rather than something new in history. This applies whether we take the threshold of achievement to be the smelting of copper or the building of starships. For example, if a civilization can smelt copper (or better), and never loses this technological capacity, it retains a minimal degree of continuity with the first civilization capable of this achievement, when an unbroken continuity of this capacity can be shown from the origins of this technology forward to some arbitrary date in the future.
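The continuity condition just proposed can be given a schematic form. In the following minimal sketch, the timeline, the century-level sampling, and the choice of copper smelting as the threshold capacity are all illustrative assumptions, not established criteria:

```python
def continuous_capacity(record, start, end, step=100):
    """True if a threshold capacity (e.g. copper smelting) is attested
    at every sampled point from start to end inclusive; a single lapse
    breaks continuity with the earlier civilization. Negative years
    denote BCE; the record is sampled per century."""
    return all(record.get(year, False) for year in range(start, end + 1, step))

# Hypothetical record: the capacity held from 5000 BCE onward.
record = {year: True for year in range(-5000, 2001, 100)}
print(continuous_capacity(record, -5000, 2000))   # unbroken lineage

record[-2000] = False                             # a lapse in the record
print(continuous_capacity(record, -5000, 2000))   # continuity is broken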
. . . . .
2 August 2015
For some philosophers, naturalism is simply an extension of physicalism, which was in turn an extension of materialism. Narrow conceptions of materialism had to be extended to account for physical phenomena not reducible to material objects (like theoretical terms in science), and we can similarly view naturalism as a broadening of physicalism in order to more adequately account for the world. (I have quoted definitions of materialism and physicalism in Materialism, Physicalism, and… What?.) But, coming from this perspective, naturalism is approached from a primarily reductivist or eliminativist point of view that places an emphasis upon economy rather than adequacy in the description of nature (on reductivism and eliminativism cf. my post Reduction, Emergence, Supervenience). Here the principle of parsimony is paramount.
One target of eliminativism and reductionism is a class of concepts sometimes called “folk” concepts. The identification of folk concepts in the exposition of philosophy of science can be traced to philosopher Daniel Dennett. Dennett introduced the term “folk psychology” in The Intentional Stance and thereafter employed the term throughout his books. Here is part of his original introduction of the idea:
“We learn to use folk psychology — as a vernacular social technology, a craft — but we don’t learn it self-consciously as a theory — we learn no meta-theory with the theory — and in this regard our knowledge of folk psychology is like our knowledge of the grammar of our native tongue. This fact does not make our knowledge of folk psychology entirely unlike human knowledge of explicit academic theories, however; one could probably be a good practising chemist and yet find it embarrassingly difficult to produce a satisfactory textbook definition of a metal or an ion.”
Daniel Dennett, The Intentional Stance, Chap. 3, “Three Kinds of Intentional Psychology”
Earlier (in the same chapter of the same book) Dennett had posited “folk physics”:
“In one sense people knew what magnets were — they were things that attracted iron — long before science told them what magnets were. A child learns what the word ‘magnet’ means not, typically, by learning an explicit definition, but by learning the ‘folk physics’ of magnets, in which the ordinary term ‘magnet’ is embedded or implicitly defined as a theoretical term.”
Daniel Dennett, The Intentional Stance, Chap. 3, “Three Kinds of Intentional Psychology”
Here is another characterization of folk psychology:
“Philosophers with a yen for conceptual reform are nowadays prone to describe our ordinary, common sense, Rylean description of the mind as ‘folk psychology,’ the implication being that when we ascribe intentions, beliefs, motives, and emotions to others we are offering explanations of those persons’ behaviour, explanations which belong to a sort of pre-scientific theory.”
Scott M. Christensen and Dale R. Turner, editors, Folk Psychology and the Philosophy of Mind, Chap. 10, “The Very Idea of a Folk Psychology” by Robert A. Sharpe, University of Wales, United Kingdom
There is now quite a considerable literature on folk psychology, and many positions in the philosophy of mind are defined by their relationship to folk psychology — eliminativism is largely the elimination of folk psychology; reductionism is largely the reduction of folk psychology to cognitive science or scientific psychology, and so on. Others have gone on to identify other folk concepts, as, for example, folk biology:
Folk biology is the cognitive study of how people classify and reason about the organic world. Humans everywhere classify animals and plants into species-like groups as obvious to a modern scientist as to a Maya Indian. Such groups are primary loci for thinking about biological causes and relations (Mayr 1969). Historically, they provided a transtheoretical base for scientific biology in that different theories — including evolutionary theory — have sought to account for the apparent constancy of “common species” and the organic processes centering on them. In addition, these preferred groups have “from the most remote period… been classed in groups under groups” (Darwin 1859: 431). This taxonomic array provides a natural framework for inference, and an inductive compendium of information, about organic categories and properties. It is not as conventional or arbitrary in structure and content, nor as variable across cultures, as the assembly of entities into cosmologies, materials, or social groups. From the vantage of EVOLUTIONARY PSYCHOLOGY, such natural systems are arguably routine “habits of mind,” in part a natural selection for grasping relevant and recurrent “habits of the world.”
Robert Andrew Wilson and Frank C. Keil, The MIT Encyclopedia of the Cognitive Sciences
We can easily see that the idea of folk concepts as pre-scientific concepts is applicable throughout all branches of knowledge. This has already been made explicit:
“…there is good evidence that we have or had folk physics, folk chemistry, folk biology, folk botany, and so on. What has happened to these folk endeavors? They seem to have given way to scientific accounts.”
William Andrew Rottschaefer, The Biology and Psychology of Moral Agency, 1998, p. 179.
The simplest reading of the above is that in a pre-scientific state we use pre-scientific concepts, and as the scientific revolution unfolds and begins to transform traditional bodies of knowledge, these pre-scientific folk concepts are replaced with scientific concepts and knowledge becomes scientific knowledge. Thereafter, folk concepts are abandoned (eliminated) or formalized so that they can be systematically located in a scientific body of knowledge. All of this is quite close to the theory of the three stages of knowledge of the 19th century positivist Auguste Comte, according to which theological explanations gave way to metaphysical explanations, which in turn gave way to positive scientific explanations — which demonstrates the continuity of positivist thought, even that philosophical thought that does not recognize itself as being positivist. In each case, an earlier non-scientific mode of thought is gradually replaced by a mature scientific mode of thought.
While this simple replacement model of scientific knowledge has certain advantages, it has a crucial weakness, and this is a weakness shared by all theories that, implicitly or explicitly, assume that the mind and its concepts are static and stagnant. Allow me to once again quote one of my favorite passages from Kurt Gödel, the importance of which I cannot stress enough:
“Turing… gives an argument which is supposed to show that mental procedures cannot go beyond mechanical procedures. However, this argument is inconclusive. What Turing disregards completely is the fact that mind, in its use, is not static, but is constantly developing, i.e., that we understand abstract terms more and more precisely as we go on using them, and that more and more abstract terms enter the sphere of our understanding. There may exist systematic methods of actualizing this development, which could form part of the procedure. Therefore, although at each stage the number and precision of the abstract terms at our disposal may be finite, both (and, therefore, also Turing’s number of distinguishable states of mind) may converge toward infinity in the course of the application of the procedure.”
“Some remarks on the undecidability results” (Italics in original) in Gödel, Kurt, Collected Works, Volume II, Publications 1938-1974, New York and Oxford: Oxford University Press, 1990, p. 306.
Not only does the mind refine its concepts and arrive at more abstract formulations; the mind also introduces wholly new concepts in order to attempt to understand new or hitherto unknown phenomena. In this context, what this means is that we are always introducing new “folk” concepts as our experience expands and diversifies, so that there is not a one-time transition from unscientific folk concepts to scientific concepts, but a continual and ongoing evolution of scientific thought in which folk concepts are introduced, their want of rigor is felt, and more refined and scientific concepts are eventually introduced to address the problem of the folk concepts. But this process can result in the formulation of entirely new sciences, and we must then in turn hazard new “folk” concepts in the attempt to get a handle on this new discipline, however inadequate our first attempts may be to understand some unfamiliar body of knowledge.
For example, before the work of Georg Cantor and Richard Dedekind there was no science of set theory. In formulating set theory, 19th century mathematicians had to introduce a great many novel concepts (set, element, mapping) and mathematical procedures (one-to-one correspondence, diagonalization). These early concepts of set theory are now called “naïve set theory,” which we might also call “folk” set theory, and they have largely been replaced by (several distinct) axiomatizations of set theory, which have either formalized or eliminated the naïve concepts. Nevertheless, many “folk” concepts of set theory persist, and Gödel spent much of his later career attempting to produce better formalizations of the concepts of set theory than those employed in the now accepted axiomatizations.
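Cantor’s diagonalization, one of the novel procedures mentioned above, is simple enough to sketch concretely. The sketch below is my own illustration (the function names are invented for the purpose): given any attempted enumeration of infinite binary sequences, we construct a sequence that differs from the n-th listed sequence in its n-th digit, and so cannot appear anywhere in the enumeration.

```python
# A finite illustration of Cantor's diagonal construction:
# flip the n-th digit of the n-th enumerated sequence, producing
# a sequence guaranteed to differ from every sequence in the list.

def diagonal_complement(enumeration, n):
    """Return the first n digits of a sequence absent from `enumeration`.

    `enumeration` maps an index i to the i-th listed sequence, itself
    represented as a function from positions to digits 0 or 1.
    """
    return [1 - enumeration(i)(i) for i in range(n)]

# An example enumeration: the i-th sequence has a 1 only at position i.
identity_like = lambda i: (lambda j: 1 if i == j else 0)

first_digits = diagonal_complement(identity_like, 5)
# The diagonal of `identity_like` is all 1s, so its complement is all 0s.
print(first_digits)  # [0, 0, 0, 0, 0]
```

By construction, the resulting sequence disagrees with the i-th enumerated sequence at position i for every i, which is the whole of the diagonal argument: no enumeration of such sequences can be complete.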
As civilization has changed, and indeed as civilization emerged, we have had occasion to introduce new terms and concepts in order to describe and explain newly emergent forms of life. The domestication of plants and animals necessitated the introduction of concepts of plant and animal husbandry. The industrial revolution and the macroeconomic forces it loosed upon the world necessitated the introduction of terms and concepts of industry and economics. In each case, non-scientific folk concepts preceded the introduction of scientific concepts explained within a comprehensive theoretical framework. In many cases, our theoretical framework is not yet fully formulated and we are still in a stage of conceptual development that involves the overlapping of folk and scientific concepts.
Given the idea of folk concepts and their replacement by scientific concepts, a mature science could be defined as a science in which all folk concepts have been either formalized, transcended, or eliminated. The infinitistic nature of scientific mystery (which is discussed in Scientific Curiosity and Existential Need), however, suggests that there will always be sciences in an early and therefore immature stage of development. Our knowledge of the scientific method and the development of science means that we can anticipate scientific developments and understand when our intuitions are inadequate and are therefore, in a sense, folk concepts. We have an advantage over the unscientific past that knew nothing of the coming scientific revolution and how it would transform knowledge. But we cannot entirely eliminate folk concepts from the early stages of scientific development, and in so far as our scientific civilization results in continuous scientific development, we will always have sciences in the early stages of development.
Scientific progress, then, does not eliminate folk concepts, but generates ever more new folk concepts even as it eliminates old and outdated folk concepts.
. . . . .
. . . . .
. . . . .
. . . . .
20 February 2015
Kant on Hope
Kant famously summed up the concerns of his vast body of philosophical work in three questions:
1) What can I know?
2) What ought I to do? and…
3) What may I hope?
These three questions roughly correspond to his three great philosophical treatises, the Critique of Pure Reason, the Critique of Practical Reason, and the Critique of Judgment, which represent, respectively, rigorous inquiries into knowledge, ethics, and teleology. However much the world has changed since Kant, we can still feel the imperative behind his three questions, and they are still three questions that we can ask today with complete sincerity. This is important, because many men, deceiving themselves as to their true motives, ask themselves questions and accept answers that they do not truly believe on a visceral level. I am saying that Kant’s questions are not like this.
In other contexts I have considered what we can know, and what we ought to do. (For example, I have just reviewed some aspects of what we can know in Personal Experience and Empirical Knowledge, and in posts like The Moral Imperative of Human Spaceflight I have looked at what we ought to do.) Here I will consider the third of Kant’s questions — what we are entitled to hope. There is no more important study toward understanding the morale of a people than to grasp the structure of hope that prevails in a given society. Kant’s third question — What may I hope? — is perhaps that imperative of human longing that was felt first, has been felt most strongly through the history of our species, and will be the last that continues to be felt even while others have faded. We have all heard that hope springs eternal in the human breast.
It is hope that gives historical viability both to individuals and their communities. In so far as the ideal of historical viability is permanence, and in so far as we agree with Kenneth Clark that a sense of permanence is central to civilization, then hope that aspires to permanence is the motive force that built the great monuments of civilization that Clark identified as such, and which are the concrete expressions of aspirations to permanence. Here hope is a primary source of civilization. More recent thought might call this concrete expression of aspirations to permanence the tendency of civilizations to raise works of monumental architecture (this is, for example, the terminology employed in Big History).
Hope and Conceptions of History
The structure of hope mirrors the conception of history prevalent within a given society. A particular species of historical consciousness gives rise to a particular conception of history, and a particular conception of history in turn defines the parameters of hope. That is to say, the hope that is possible within a given social context is a function of the conception of history; what hope is possible, what hope makes sense, is limited to those forms of hope that are both actualized by and delimited by a conception of history. The function of delimitation puts certain forms of hope out of consideration, while the function of actualization nurtures those possible forms of hope into life-sustaining structures that, under other conceptions of history, would remain stunted and deformed growths, if they were possible forms of hope at all.
In analyzing the structure of hope I will have recourse to the conceptions of history that I have been developing in this forum. Consequently, I will identify political hope, catastrophic hope, eschatological hope, and naturalistic hope. This proves to be a conceptually fertile way to approach hope, since hope is a reflection of human agency, and I have remarked in Cosmic War: An Eschatological Conception that the four conceptions of history I have been developing are based upon a schematic understanding of the possibilities of human agency in the world.
All of these structures of hope — political, catastrophic, eschatological, and naturalistic — have played important roles in human history. Often we find more than one form of hope within a given society, which tells us that no conception of history is total, that it admits of exceptions, and that societies can admit of pluralistic manifestations of historical consciousness.
Hope begins where human agency ends but human desire still presses forward. A man with political hope looks to a better and more just society in the future, as a function of his own agency and the agency of fellow citizens; a man with catastrophic hope believes that he may win the big one, that his ship will come in, that he will be the recipient of great good fortune; a man with eschatological hope believes that he will be rewarded in the hereafter for his sacrifices and sufferings in this world; a man with naturalistic hope looks to the good life for himself and a better life for his fellow man. Each of these personal forms of hope corresponds to a society that both grows out of such personal hopes and reinforces them in turn, transforming them into social norms.
Structure and Scope
While a conception of history governs the structure of hope, the contingent circumstances that are the events of history — the specific details that fill in the general structure of history — govern the scope of hope. The lineaments of hope are drawn jointly by its structure and scope, so that we see the particular visage of hope when we understand the historical structure and scope of a civilization.
Like structure, scope is an expression of human agency. An individual — or a society — blessed with great resources possesses great power, and thus great freedom of action. An individual or a society possessed of impoverished resources has much more limited power and is therefore constrained in freedom of action. In so far as one can act — that is to say, in so far as one is an agent — one acts in accord with the possibilities and constraints defined by the scope of one’s world. The scope of human agency has changed over historical time, largely driven by technology; much of the human condition can be defined in terms of human beings as tool makers.
Technology is incremental and cumulative, and it generally describes an exponential growth curve. We labor at a very low level for very long periods of time, so that our posterity can enjoy the fruits of our efforts in a later age of abundance. Thus our hopes for the future are tied up in our posterity and their agency in turn. And it is technology that systematically extends human agency. To a surprising degree, then, the scope of civilization corresponds to the technology of a civilization. This technology can come in different forms. Early civilizations mastered the technology of bureaucratic organization, and managed to administer great empires even with a very low level of technical expertise in material culture. This has changed over time, and political entities have grown in size and increased in stability as increasing technical mastery makes the administration of the planet entire a realistic possibility.
The scope of civilization has expanded as our technologically-assisted agency has expanded, and today as we contemplate our emerging planetary civilization such organization is within our reach because our technologies have achieved a planetary scale. Our hopes have grown along with the expanding scope of our civilization, so that justice, luck, salvation, and the good life all reflect the planetary scope of human agency familiar to us today.
Hope in Planetary Civilization
What may we hope in our planetary civilization of today, given its peculiar possibilities and constraints? How may we answer Kant’s third question today? Do we have any answers at all, or is ours an Age of Uncertainty that denies the possibility of any and all answers?
Those of a political frame of mind, hope for, “a thriving global civilization and, therefore… the greater well-being of humanity.” (Sam Harris, The Moral Landscape) Those with a catastrophic outlook hope for some great and miraculous event that will deliver us from the difficulties in which we find ourselves immersed. Those whose hope is primarily eschatological imagine the conversion of the world entire to their particular creed, and the consequent rule of the righteous on a planetary scale. And those of a naturalistic disposition look to what human beings can do for each other, without the intervention of fortune or otherworldly salvation.
How each of these attitudes is interpreted in the scope of our current planetary civilization is largely contingent upon how an individual or group of individuals with shared interests views the growth of technology over the past century, and this splits fairly neatly into the skeptics of technology and the enthusiasts of technology, with a few sitting on the fence and waiting to see what will happen next. Among those with the catastrophic outlook on history will be the fence sitters, because they will be waiting for some contingent event to occur which will tip us in one direction or the other, into technological catastrophe or technological bonanza. Those of an eschatological outlook tend to view technology in purely instrumental terms, and the efficacy of their grand vision of a spiritually unified and righteous planet will largely depend on the pragmatism of their instrumental conception of technology. The political cast of mind also views technology instrumentally, but primarily in terms of what it can do to advance the cause of large scale social organization (which in the eschatological conception is given over to otherworldly powers).
Perhaps the greatest dichotomy is to be found in the radically different visions of technology held by those of a naturalistic outlook. The naturalistic outlook today is much more common than it appears to be, despite much heated rhetoric to the contrary, since, as I wrote above, many of us deceive ourselves as to our true motives and our true beliefs. The rise of science since the scientific revolution has transformed the world, and many accept a scientific world view without even being aware that they hold such views. Rhetorically they may give pride of place to political ideology or religious faith, but when they act they act in accordance with reason and evidence, remaining open to change if their first interpretations of reason and evidence seem to be contradicted by circumstances and consequences.
The dichotomy of the naturalistic mind today is that between human agency that retreats from technology, as though it were a failed project, and human agency that embraces technology. Each tends to think of its relation to technology in terms of liberation. For the critics of technology, we have become enslaved to The Machine, and either by overthrowing the technological system, or simply by turning our backs on it, people can help each other by living modest lives, transitioning to a sustainable economy, cultivating community gardens, watching over their neighbors, and, generally speaking, living up to (or, if you prefer, down to) the “small is beautiful” and “limits to growth” creed that had already emerged in the early 1970s.
The contrast could not be more stark between this naturalistic form of hope and the technology-embracing naturalistic form of hope. The technological humanist also sees people helping each other, but doing so on an ever grander scale, allowing human beings to realistically strive toward levels of self-actualization and fulfillment not even possible in earlier ages, perhaps not even conceivable. The human condition, for such naturalists, has enslaved us to a biological regime, and it is the efficacy of technology that is going to liberate us from the stunted and limited lives that have been our lot since the species emerged. Ultimately, technology embracing naturalists look toward transhumanism and all that it potentially promises to human hopes, which in this context can be literally unbounded.
Hope in the Age of Naturalism
Given the state of the world today, with all its pessimism, and the violence of contesting power centers apparently motivated by unchanged and unchanging conceptions of the human condition, the reader may be surprised that I focus on naturalism and the naturalistic conception of history. If we do not destroy ourselves in the short term, the long term belongs to naturalism. Contemporary political hope, in so far as it is pragmatic, is naturalistic, and in so far as it is not pragmatic, it will fail. The hysterical and bloody depredations of religious mania in our time are only as bad as they are because, as an ideology, religious mania is under threat from the success of naturalistically-enabled science and technology. Once the break with the past is made, eschatological hope will no longer be the basis of large-scale social organization, and therefore its ability to cause harm will be greatly limited (though it will not disappear). The catastrophic viewpoint is always limited by its shoulder-shrugging attitude to human agency.
Most people cannot bear to leave their fate to fate, but will take their fate into their own hands if they can. How people take their fate into their hands in the future, and therefore the form of hope they entertain for what they do with the fate held in their hands, will largely be defined by naturalism. Perhaps this is ironic, as it has long been assumed that, of the perennial conceptions of the human condition, naturalism had the least to say about hope (and eschatology the most). That is only because the age of naturalism had not yet arrived. But naturalistic despair is just as much a reality as naturalistic hope, so that the coming of the age of naturalism will not bring a millennium of peace, justice, and happiness for all. Humanity’s leave-taking of the ideologies of the past is largely a matter of abandoning neurotic misery in favor of ordinary human unhappiness.
. . . . .
. . . . .
. . . . .
. . . . .
26 June 2014
Once upon a time it was believed that the world was eternal and unchanging. The inconvenient truth of life and death on Earth was accommodated by a distinction between the sublunary and the superlunary: in Ptolemaic astronomy, the “sublunary” was everything in the cosmos below the sphere of the moon, and this was subject to time and change and suffering; the superlunary was everything in the cosmos beyond the sphere of the moon, which was eternal, perfect, unchanging, and permanent. Thus it was a major problem when Galileo turned his telescope on the moon and saw craters, and when he looked at the sun he saw spots. This wasn’t supposed to happen.
As a result of Galileo and the scientific revolution, we are still re-thinking the world, and each time we think that we have the world caught in a net of concepts, it escapes once again. Up until 1999 it was widely believed that the universe was expanding at a decreasing rate, and the only question was whether there was enough mass for this expansion to eventually grind to a halt, and then perhaps the universe would contract again, or if the universe would just keep coasting along in its expansion. Now it seems that the expansion of the universe is speeding up, and it is widely thought that, in a very early stage of the universe’s existence, it underwent an extremely rapid phase of expansion (called inflation).
When the scientific revolution at long last came to biology, Darwin and evolution and natural selection exploded in the scientific imagination, and suddenly a human history that had seemed neat and compact and easily circumscribed became very old, large, and messy. We recognize today that all life on the planet evolved, and that in the short interval of human life, the human mind has evolved, language has evolved, social institutions have evolved, civilization has evolved, and technology has evolved perhaps more rapidly than anything else.
The evolution of human social institutions has meant the evolution of human meanings, values, and purposes: precisely those aspects of human life that were once invested with permanency and unchangeableness in an earlier paradigm of human knowledge. Human knowledge evolves also. Science as the systematic pursuit of knowledge (since the scientific revolution, and especially since the advent of industrial-technological civilization, which is driven forward by science) has pushed the evolution of human knowledge beyond all precedent and expectation. As I recently noted in The Moral Truth of Science, science is a method and not a body of knowledge, and even the method itself changes as it is refined over time and adapted to different fields of study.
Slowly, painfully slowly, we are becoming accustomed to an evolving world in which all things are subject to change. The process does not necessarily get easier, though one might easily suppose we get numbed by change. In fact, when all our previous assumptions are forced to huddle down in a single relict of archaic thought, it can be extraordinarily difficult to get past this last stubborn knot of human thought that has attached itself passionately to the past.
I think that it will be like this with our moral ideas, which are likely to be sheltered for some time to come, and in so far as they are sheltered, they will conceal more prejudices than we would like to admit. Even those among us who are considered progressive, if not radical, can take a position that essentially protects our moral prejudices of the past. John Stuart Mill was among the most reasonable of men, and it is difficult to disagree with his claims. While in his day utilitarianism was considered radical by some, now Mill is understood to be an early proponent of the political liberalism that is taken for granted today. But the quasi-logical form that Mill gave to his ultimate moral assumptions is entirely consistent with the fideism of radical Ockhamists or Kierkegaard.
Here is a classic passage from a classic work by Mill:
Questions of ultimate ends are not amenable to direct proof. Whatever can be proved to be good, must be so by being shown to be a means to something admitted to be good without proof. The medical art is proved to be good by its conducing to health; but how is it possible to prove that health is good? The art of music is good, for the reason, among others, that it produces pleasure; but what proof is it possible to give that pleasure is good? If, then, it is asserted that there is a comprehensive formula, including all things which are in themselves good, and that whatever else is good, is not so as an end, but as a mean, the formula may be accepted or rejected, but is not a subject of what is commonly understood by proof. We are not, however, to infer that its acceptance or rejection must depend on blind impulse, or arbitrary choice. There is a larger meaning of the word proof, in which this question is as amenable to it as any other of the disputed questions of philosophy. The subject is within the cognisance of the rational faculty; and neither does that faculty deal with it solely in the way of intuition. Considerations may be presented capable of determining the intellect either to give or withhold its assent to the doctrine; and this is equivalent to proof.
John Stuart Mill, Utilitarianism, Chapter 1
Formulating his moral thought in the context of proof, Mill appeals to the logical tradition of western philosophy, going back to Aristotle. We can already find this dilemma of logical thought explicitly formulated in classical antiquity. Commenting on a passage from Aristotle’s Physics (193a3) that reads: “…to try to prove the obvious from the unobvious is the mark of a man incapable of distinguishing what is self-evident and what is not,” Simplicius wrote:
“…the words ‘the mark of a man incapable of distinguishing between what is self-evident and what is not’ typify the man who is anxious to prove by means of other things that nature, which is self-evident, is not self-evident. And it is even worse if they are to be proved by means of what is less knowable, which is what must happen in the case of things that are all too obvious. The man who wants to employ proof for everything eventually destroys proof. For if the evident must be the starting point of proof, the man who thinks that the evident needs proof no longer agrees that anything is evident, nor does he leave any basis of proof, and so he leaves no proof either.”
Simplicius: On Aristotle Physics 2, translated by Barrie Fleet, London and New York: Bloomsbury Academic, 1997, p. 25
The axiological equivalent of self-evidence is intrinsic value, that is to say, self-value. The tradition of intrinsic value in English moral thought arguably reaches its apogee in G. E. Moore’s Principia Ethica, in which intrinsic value is a theme that occurs throughout the work:
“We must know both what degree of intrinsic value different things have, and how these different things may be obtained. But the vast majority of questions which have actually been discussed in Ethics—all practical questions, indeed—involve this double knowledge; and they have been discussed without any clear separation of the two distinct questions involved. A great part of the vast disagreements prevalent in Ethics is to be attributed to this failure in analysis. By the use of conceptions which involve both that of intrinsic value and that of causal relation, as if they involved intrinsic value only, two different errors have been rendered almost universal. Either it is assumed that nothing has intrinsic value which is not possible, or else it is assumed that what is necessary must have intrinsic value. Hence the primary and peculiar business of Ethics, the determination of what things have intrinsic value and in what degrees, has received no adequate treatment at all.”
G. E. Moore, Principia Ethica, section 17
The English, for the most part, had little affinity for Bergson, but it was Bergson who opened up moral philosophy to its temporal reality embedded in changing human experience. In several posts — Epistemic Space: Mapping Time and Object Disoriented Axiology among them — I have discussed Bertrand Russell’s antipathy to Bergson, even though Russell himself was one of the most powerful and passionate advocates of science, and it has been science that has forced us to put aside our equilibrium assumptions and to engage with a dynamic world that forces change upon us even if we would deny it.
The world as we understand it today, from the smallest quantum fluctuations to the evolution of the universe entire, is a dynamic world in which change is the only constant. In such a world, which our traditional eschatologies have invested with eternal moral significance, we would be better served by also abandoning equilibrium assumptions in ethics. There are trivial ways in which this occurs, as when we recognize that different objects have different moral values at different times; there are also more radical ways to think of a morally dynamic world, such as a world in which moral principles themselves must change.
In Bostrom’s qualitative categories of risk, the risks of greatest scope are identified as trans-generational and pan-generational (with the possibility of risks of cosmic scope also noted). Both the idea of the trans-generational and that of the pan-generational are essentially categories of intrinsic value over time. When existential risks of smaller scope are considered, they are limited to personal, local, or global circumstances. These smaller, local risks, when understood in contradistinction to the trans-generational and the pan-generational, can also be seen as instances of intrinsic value over time, though through shorter periods of time appropriate to personal time, social time, or global time.
While it is gratifying to see this recognition of intrinsic value over time, we can go farther by considering the natural history of value. The simple and fundamental lesson of the natural history of value is that value changes over time, and that particular objects may be the bearers of intrinsic value for a temporary period of time, taking on this value and then ultimately surrendering it. Moreover, intrinsic value itself changes over time, as do the forms in which it is manifested and embodied.
When Sartre gave his famous lecture “Existentialism is a Humanism,” he took the bull by the horns and faced straight on the claims that had been made that existentialism was a gloomy philosophy of despair, quietism, and pessimism. Of his critics Sartre said, “what is annoying them is not so much our pessimism, but, much more likely, our optimism. For at bottom, what is alarming in the doctrine that I am about to try to explain to you is — is it not? — that it confronts man with a possibility of choice.” For Sartre, existentialism is, at bottom, an optimistic philosophy because it affirms the reality of choice and human agency. And so, too, the recognition of the natural history of value — that value is not a fixed and unchanging feature of the world — is an optimistic doctrine preferable to any and all false hopes.
Questioning ancient moral prejudices, as Sartre often did, almost always results in claims on behalf of traditionalists that the sky is falling, and that by opening Pandora’s Box we have unleashed evils into the world that cannot be contained. But to observe that intrinsic value changes over time is no counsel of despair, as when Bertrand Russell (as I recently quoted in Developing an Existential Perspective) said that, “…only on the firm foundation of unyielding despair, can the soul’s habitation henceforth be safely built.” That intrinsic value is subject to change means that the intrinsic value of the world may increase or decrease, and if it may increase, we ourselves may be the agents of this change.
. . . . .
. . . . .
. . . . .
8 February 2014
The Three Eras of Life on Earth
The Earth, it would seem, has been regularly reduced to biological penury throughout its long history, which has been punctuated by mass extinctions that have very nearly reduced biodiversity to zero. It is possible that, in the earliest history of life on Earth, when our planet was regularly bombarded by objects from space, and exposed to especially harsh conditions, life may have emerged multiple times, only to be wiped out again in short order. There would have been plenty of time for this to occur during the 550 million years prior to the emergence of the earliest life known to be continuous with our own.
The repeated denudation of the planet by mass extinctions constituted a kind of ecological succession on a grand scale. Each time life had to recover anew, and, in recovering, the surviving species (the “weeds” that were the most robust and which went on to colonize the denuded landscape and seascape) underwent dramatic periods of adaptive radiation until, in the global climax ecosystems prior to a mass extinction event, almost every niche for life has been filled — possibly several times over, leading to contested niches where multiple species compete for the same limited resources.
The history of life is such a reliable indicator of geological time that there is an entire discipline — biostratigraphy — given over to the dating of rocks by the fossils they contain. Once life becomes sufficiently complex to leave a record of itself in the rocks of our planet, the development of life is a sure guide to the age of the rocks that contain traces of this past life. Contemporary scientific geology largely got its start through biostratigraphy in the work of William Smith (called “strata Smith” by his contemporaries), whom I have previously mentioned in The Transplanetary Perspective.
Three of the major divisions of geological time are named for the eras of life that they comprise: Paleozoic (old life), Mesozoic (middle life), and Cenozoic (common, or recent, life). These divisions of geological time give a “big picture” view of the history of life on Earth. The mass extinction events at the end of the Permian and at the K-T boundary were so catastrophic that, in the case of the end-Permian extinction, the Earth came perilously close to being sterilized, and while the K-T event (now known as the Cretaceous–Paleogene or K–Pg extinction event) was not as disastrous, it ended the dominion of the dinosaurs over most ecological niches and thereby gave mammals the opportunity to experience an explosive adaptive radiation.
Million Year Old Civilizations
We know that intelligent life on Earth arose in the late Cenozoic era, but how clement were these earlier eras of life on Earth to intelligent life? If intelligent life had arisen in the Paleozoic, founded a civilization, and survived to the present, that civilization would be in excess of 250 million years old. If, again, intelligent life had arisen in the Mesozoic, founded a civilization, and survived to the present, that civilization would be in excess of 65 million years old. However, both of these counterfactual civilizations would almost certainly have been destroyed by the catastrophic mass extinctions that separated these eras of terrestrial life (unless they had taken adequate measures to mitigate existential risk, which would seem to be a necessary condition for any truly long-lived civilization).
The idea of a civilization a million or more years old was a theme discussed by Carl Sagan on several occasions. Here is an explicit formulation of the million-year-old civilization theme from Chapter XII, “Encyclopaedia Galactica,” from Sagan’s book Cosmos:
“What does it mean for a civilization to be a million years old? We have had radio telescopes and spaceships for a few decades; our technical civilization is a few hundred years old, scientific ideas of a modern cast a few thousand, civilization in general a few tens of thousands of years; human beings evolved on this planet only a few million years ago. At anything like our present rate of technical progress, an advanced civilization millions of years old is as much beyond us as we are beyond a bush baby or a macaque. Would we even recognize its presence? Would a society a million years in advance of us be interested in colonization or interstellar spaceflight? People have a finite lifespan for a reason. Enormous progress in the biological and medical sciences might uncover that reason and lead to suitable remedies. Could it be that we are so interested in spaceflight because it is a way of perpetuating ourselves beyond our own lifetimes? Might a civilization composed of essentially immortal beings consider interstellar exploration fundamentally childish?”
Carl Sagan, Cosmos, Chapter XII, “Encyclopaedia Galactica”
Human civilization could be considered as being more than ten thousand years old if we date the advent of civilization to the Neolithic Agricultural Revolution. This is an atypical way to think about civilization, but I have seen it in a few sources (Jacob Bronowski, I think, takes this view, more or less), and it is how I myself think about civilization. A civilization ten thousand years old or more is nothing to dismiss; persisting for ten thousand years is a non-trivial accomplishment. Yet the history of terrestrial civilization may be compared to the history of terrestrial life: there is a long period that is nearly stagnant, with painfully slow innovations, and then an event occurs — the Cambrian explosion for life, the industrial revolution for civilization — and what it means to be “alive” or “civilized” is radically altered.
Dating to the Neolithic Agricultural revolution is consistent with my recent suggestion in From Biocentric Civilization to Post-biological Post-Civilization that civilization could be minimally defined as a coevolutionary cohort of species. However, our industrial-technological civilization is barely more than two hundred years old. To consider the geologically insignificant period of one hundred years is to contemplate a period of time half as long as the entire history of industrial-technological civilization. The kind of technological gains that industrial-technological civilization could experience over a period of a hundred years can be quite remarkable, as our experience of the past hundred years suggests.
This year, 2014, marks the one hundred year anniversary of global industrialized warfare. Not long after, we will mark the hundred year anniversaries of digital computers, jet propulsion, rocketry, and nuclear technology. Some of these technologies have improved by orders of magnitude. Some have improved very little. If the coming century brings commensurate technological innovations (not to mention innovations in science that would drive these technological innovations), even if not all these developments experience exponential development, and many languish in a state of stagnation, our world and our understanding of the world will nevertheless be repeatedly revolutionized.
Given what we know about the rapidity of technological change — bequeathed to our industrial-technological civilization as a consequence of the STEM cycle — we ought to conclude that we can know almost nothing about what a million year civilization would be like, except in so far as we might be able to imagine only the most stagnant aspects of such a civilization. It would be beyond our ability to understand advanced technologies ten thousand years hence, just as our ancestors, only beginning to lay the foundations of agrarian-ecclesiastical civilization ten thousand years ago, could not have understood our advanced technologies today. Understanding across these orders of developmental magnitude lies beyond the human zone of proximal development.
I have written previously that there is an earliest bound in the history of our universe for life, for intelligent life, and for civilization. It would not be possible to produce an industrial-technological civilization as we know it (i.e., a peer civilization) without heavier metallic elements, so that the emergence of industrial-technological civilization must minimally wait for the formation of Population I stars and their planetary systems. That being said, many Population I stars have been around for billions of years, and there have consequently been billions of years for industrial-technological civilizations to emerge and to attain great age.
Are there other constraints upon the emergence of life, intelligence, and civilization that move the boundary for the earliest possible emergence of these phenomena nearer to the present? Is there any reason to suppose, from our knowledge of the natural history of Earth and the complexity of the human brain, that intelligent life and civilization could not have arisen in earlier eras of life — Paleozoic intelligent life or Mesozoic intelligent life, which would, in turn, according to Civilization-Intelligence Covariance, give rise to Paleozoic civilization or Mesozoic civilization? Or, if not here on Earth, why not some other planet orbiting a Population I star where life begins 550 million years after the formation of the planet?
Octopi, cuttlefish, and other cephalopods with large brains and highly sophisticated nervous systems — it takes a lot of raw neural processing power to do what some cephalopods do with their skin color — would seem to be ideal candidates for early terrestrial intelligent life. Octopi date back to the Devonian Period, more than 360 million years ago, during the Paleozoic Era, so that ancestors of this life form survived both the End Permian extinction and the K-T extinction (cf. Fossil Octopuses). Why didn’t cephalopods establish a counterfactual civilization during the Permian? There was certainly time enough to do so before the End Permian extinction.
Is a backbone, or something that can serve a similar function like an exoskeleton, a necessary condition for intelligence to issue in the production of civilization? Multicellular life forms without a backbone, or confined to an aquatic environment, might well develop intelligence, but would have a difficult time building a technological civilization — difficult, but not impossible. This is a question I considered previously in The Place of Bilateral Symmetry in the History of Life and Counterfactuals Implicit in Naturalism.
If we should find life in the oceans below the icy surface of Europa, or any of the other moons in our solar system internally heated by gravitational forces, it would consist of life forms peculiarly constrained by their environment — possibly more constrained than terrestrial life, and therefore more likely to favor extremophiles. Oceanic lifeforms beneath a crust of ice many kilometers thick would not only have the technological disadvantage faced by any intelligent aquatic species, but would face the additional disadvantage of being cut off from the stars. Unable to physically see their place in the universe, such lifeforms might have an even more difficult time than we had in coming to understand the world. The mythology of such a life form would have to be very different from the mythologies created by early human societies, in which the stars typically played a prominent role. Any civilization that might be conjoined with such a mythology might constitute an extremophile civilization.
Inside the Charmed Circle
Many of the questions that I have posed above are variations on ancient themes of anthropocentrism, and from within the charmed circle of anthropocentrism it is difficult for us to see outside that circle. Our minds are quite literally defined by that circle, being the product of human biology, and our imagination is largely circumscribed by the limitations of our minds. But our minds are also capable, with effort, of passing beyond the charmed circle of anthropocentrism, identifying anthropic bias as such and transcending it.
For us, the third time life got a chance on Earth was the charm. Paleozoic life came and (largely) went without producing intelligence or civilization, as did Mesozoic life. It was not until Cenozoic life that intelligence and civilization emerged. But was this the result of mere contingency, or a function of some operative constraint — possibly even a constraint no one has even noticed because of its pervasive presence — that prevented intelligence and civilization from arising in earlier geological eras?
There might be reason to believe that other forms of life will have something like a DNA structure, or that something like the transition from prokaryotic cells to eukaryotic cells will have taken place, but there is no particular reason to believe that the large scale structure of life on other worlds would have the terrestrial tripartite structure, since this big picture view of life on Earth was a result of particular mass extinction events that seem too contingent to characterize any possible emergence of life. However, there is reason to believe that there will be some mass extinction events afflicting life on other worlds, and at least some of these mass extinction events will result from large scale cosmological events. If solar systems form elsewhere in a process like the formation of our solar system, life elsewhere would also be exposed to asteroid impacts, comets, solar flares, and the like. This is one of the lessons of astrobiology.
That there will be constraints and contingencies that bear upon life we can be certain; but we cannot (yet) know exactly what these constraints and contingencies will be. This is a non-constructive observation: invoking the existence of constraints and contingencies without saying what they will be. What would a constructive approach to life’s constraints and contingencies look like? Is it necessary to adopt a non-constructive perspective where our knowledge is so lacking? As knowledge of the conditions of astrobiology and astrocivilization grows, may we yet adopt a constructive conception of them?
. . . . .
. . . . .
. . . . .
6 October 2013
What is astrobiology?
I suppose that “astrobiology” could be called one of those “ten dollar” words, but despite being a long word of six syllables and a dozen letters, it can be defined quite simply.
Astrobiology has been called, “The study of life in space” (Mix, Life in Space: Astrobiology for Everyone, 2009) and that, “Astrobiology… removes the distinction between life on our planet and life elsewhere.” (Plaxco and Gross, Astrobiology: A Brief Introduction, 2006). Taking these sententious formulations of astrobiology as the study of life in space, which removes the distinction between life on our planet and life elsewhere, together gives us a new perspective with which to view life on Earth (and beyond).
There are, of course, longer and more detailed definitions of astrobiology. There are two in particular that I have cited in previous posts:
“The study of the living universe. This field provides a scientific foundation for a multidisciplinary study of (1) the origin and distribution of life in the universe, (2) an understanding of the role of gravity in living systems, and (3) the study of the Earth’s atmospheres and ecosystems.”
from the NASA strategic plan of 1996, quoted in Steven J. Dick and James E. Strick, The Living Universe: NASA and the Development of Astrobiology, 2005
“Astrobiology is the study of the origin, evolution, distribution, and future of life in the universe. This multidisciplinary field encompasses the search for habitable environments in our Solar System and habitable planets outside our Solar System, the search for evidence of prebiotic chemistry and life on Mars and other bodies in our Solar System, laboratory and field research into the origins and early evolution of life on Earth, and studies of the potential for life to adapt to challenges on Earth and in space.”
from the NASA astrobiology website
I cited these two definitions of astrobiology from NASA in Eo-, Eso-, Exo-, Astro- and other posts in which I used parallel formulations to define astrocivilization.
Learning to take the astrobiological point of view
Astrobiology is island biogeography writ large.
This is one of the few “tweets” I’ve written that was “re-tweeted” multiple times (I’m not very popular on Twitter). After I wrote this I began a more extensive blog post on this theme, but didn’t finish it; the topic rapidly became too large and started to look like a book rather than a post. Then last month I posted this on Twitter:
In the same way that Darwin provided a new perspective on life, astrobiology provides a novel perspective that allows us to see life anew.
Recently I’ve also been referring to astrobiology with increasing frequency in my blog posts, and I referenced astrobiology in my 2012 presentation at the 100YSS symposium in Houston and just last month in my presentation at the Icarus Interstellar Starship Congress in Dallas.
It will be apparent to the reader, then, that the idea of astrobiology has been slowly growing on me for the past few years, and the more I think about it, the more I come to realize the fundamentally new perspective that astrobiology offers on life and its evolution. Moreover, astrobiology is also suggestive for the future of life, and what we will discover about life the more we explore the cosmos.
Astrobiology: the Fourth Revolution in the Life Sciences
The more I think about astrobiology, the more I realize that, like earlier revolutions in the life sciences, the astrobiological point of view gives a novel perspective on familiar facts, and in so doing it potentially orients science in a new direction. For this reason I now see astrobiology as the fourth of four revolutions that instantiated the life sciences in their present form and continue to shape the way that we think about biology and the living world.
Here is my list of the four major revolutions in biological thought that have shaped the life sciences:
● Natural selection Independently discovered by Charles Darwin and Alfred Russel Wallace, natural selection gave sharpness of focus to many vague evolutionary ideas that were being circulated in the nineteenth century. With natural selection, biology had a theory by which to work, that could unify biological thought in a way that had not previously been possible. Of the Darwinian revolution Harald Brüssow wrote, “How can biologists cope conceptually and technically with this enormous species number? A deep sigh of relief came for biologists already in 1859 with the publication of Charles Darwin’s book ‘On the Origin of Species’. Suddenly, biologists had a unifying theory for their branch of science. One could even argue that the holy grail of a great unifying theory was achieved by Darwin and Wallace at a time when Maxwell was unifying physics, the older sister of biology, at the level of the electromagnetic field theory.” (“The not so universal tree of life or the place of viruses in the living world” Phil. Trans. R. Soc. B, 2009, 364, 2263–2274)
● Genetics After Darwin and Wallace came Gregor Mendel, who solved fundamental problems in the theory of inheritance and so greatly strengthened the Darwinian theory of descent with modification. As Darwin had provided the mechanism for the overall structure of life, Mendel provided the mechanism that made natural selection possible. Mendel’s work, contemporaneous with Darwin’s, was forgotten and not rediscovered until the early twentieth century. It was not until the middle of the twentieth century that Crick and Watson were able to delineate the structure of DNA, which made it possible to describe Mendelian genetics on a molecular level, thus making possible molecular biology.
● Evo-devo Evo-devo, which is a contraction of evolutionary developmental biology, once again went back to the roots of biology (as Darwin had done by formulating a fundamental theory, and as Mendel had done by his careful study of inheritance in pea plants), and returned the study of embryology to the center of attention of evolutionary biology. Studying the embryology of organisms with the tools of molecular biology gave (and continues to give) new insights into the fine structure of life’s evolution. Before evo-devo, few if any suspected that the homology that Darwin and others noted on a macro-biological scale (the structural similarity of the hand of a man, the wing of a bat, and the flipper of a dolphin) would be reducible to homology on a genetic level, but evo-devo has demonstrated this in remarkable ways, and in so doing has further underlined the unity of all terrestrial life.
● Astrobiology Astrobiology now lifts life out of its exclusively terrestrial context and studies life in its cosmological context. We have known for some time that climate is a major driver of evolution, and that climatology is in turn largely driven by the vicissitudes of the Earth as the Earth orbits the sun, exchanges material with other bodies in our solar system, and as the entire solar system bobs up and down in the plane of the Milky Way galaxy. Our understanding of life gains immensely by being placed in the cosmological context, which forces us both to think big, in terms of the place of life in the universe, as well as to think small, in terms of the details of the origins of life on Earth and its potential relation to life elsewhere in the universe.
This is obviously a list of revolutions in biological thought compiled by an outsider, i.e., by someone who is not a biologist. Others might well compile different lists. For example, I can easily imagine someone putting the Woesean revolution on a short list of revolutions in biological thought. Woese was largely responsible for replacing the tripartite division of animals, plants, and fungi with the tripartite division of the biological domains of Bacteria, Archaea and Eukarya. (There remains the question of where viruses fit into this scheme, as discussed in the Brüssow paper cited above.)
Since I have included molecular phylogeny among the developments of evo-devo (in the graphic at the bottom of this post), I have implicitly placed Woese’s work within the evo-devo revolution, since it was the method of molecular phylogeny that made it possible to demonstrate that plants, animals and fungi are all closely related biologically, while the truly fundamental division in terrestrial life is between the eukarya (which includes plants, animals, and fungi, which are all multicellular organisms), bacteria, and archaea. If any biologists happen to read this, I hope you will be a bit indulgent toward my efforts, though I certainly encourage you to leave a comment if I have made any particularly egregious errors.
Toward a Radical Biology
Darwin mentioned the origins of life only briefly and in passing. There is the famous reference to, “some warm little pond with all sorts of ammonia and phosphoric salts, — light, heat, electricity &c. present” in his letter to Joseph Hooker, and there is the famous passage at the end of his Origin of Species which I discussed in Darwin’s Cosmology:
“Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”
Darwin, of course, had nothing to go on at this point. Trying to understand or explain the origins of life without molecular biology would be like trying to explain the nature of water without the atomic and molecular theory of matter: the conceptual infrastructure to circumscribe the most basic elements of life did not yet exist. (The example of trying to define water without the atomic theory of matter is employed by Robert M. Hazen in his lectures on the Origins of Life.)
Just as Darwin pressed biology beyond the collecting and comparison of beetles in the backyard, and opened up deep time to biology (and, vice versa, biology to deep time), so astrobiology presses forward with the project of evolutionary biology, pursuing the natural origins of life to its chemical antecedents. Astrobiology is a radical biology in the same way that Darwin’s was a radical biology in his time: both go to the root of the matter to the extent possible given the theoretical, scientific, and technological parameters of thought. It is in this radical sense that astrobiology is integral with origins of life research; it is in this sense in which the two are one.
The humble origins of radical ideas
The radical biology of Darwin did not start out as such. In his early life, Darwin considered becoming a country parson, and when Darwin left on his voyage on the Beagle as Captain Fitzroy’s gentleman companion, he held mostly conventional views. It is easy to imagine an alternative history in which Darwin retained his conventional views, went on to become a country parson, and gave Sunday sermons that were mostly moral homilies punctuated by the occasional quote from scripture to illustrate the moral lesson with a story from the tradition he nominally represented. Such a Darwin from an alternative history would have continued to collect beetles during the week and would have maintained his interest in natural history.
Just as Darwin came out of the context of English natural history (which, before Darwin, gave us those classic works of teleology, Paley’s Natural Theology and Chambers’ Vestiges of the Natural History of Creation — a work that the young Darwin greatly admired), so too astrobiology comes out of the context of a later development of natural history — the scientific search for the origins of life and for extraterrestrial life. While the search for extraterrestrial life is “big science” of an order of magnitude only possible for an institution like NASA, in this respect it stands in the humble tradition of natural history, since we must send robots to Mars and the other planets until we can go there ourselves with a shovel and rock hammer. From such humble beginnings sometimes emerge radical consequences.
I think we are already beginning to see the potentially radical character of astrobiology, and that this development in biology promises a paradigm shift almost of the scope and magnitude of natural selection. Indeed, both natural selection and astrobiology can be understood as further (and radical) contextualizations of the theme of man’s place in nature. When Darwin wrote, he contextualized human history in the most comprehensive conception of nature then possible; today astrobiology must contextualize not only human history but also the totality of life on Earth in a much more comprehensive cosmological context.
As our knowledge of the world (which was once very small, and very parochial) steadily expands, we are eventually forced to extend and refine our concepts in order to adequately account for the world that we now know. Natural selection and astrobiology are steps in the extension and refinement of our conception of life, and of the place of life in the world. Life simpliciter is, after all, a “folk” concept. Indeed, “life” is folk biology and “world” is folk cosmology. Astrobiology brings together these folk concepts and attempts to bring scientific rigor to them.
The biology of the future
Astrobiology is laying the foundations for the biology of the future. Here and now on Earth, without having surveyed life on other worlds, astrobiologists are attempting to formulate concepts adequate to understanding life at the largest and the smallest scales. Once we take these conceptions along with us when we eventually explore alien worlds — including alien worlds close to home, such as Mars and the ocean beneath the ice of Europa — it is to be expected that further revolutions in the life sciences will come about as a result of attempting to understand what we eventually find in the light of the concepts we have preemptively developed in order to understand biology beyond the surface of the Earth.
Future revolutions in biology will likely have the same radical character as natural selection, genetics, evo-devo, and astrobiology. Future naturalists will do what naturalists do best: they will spend their time in the field finding new specimens and describing them for science, and in the process of the slow and incremental accumulation of scientific knowledge new ideas will suggest themselves. Perhaps someone laid low by some alien fever, like Wallace tossing and turning as he suffered from a fever in the Indonesian archipelago, will, in a moment of insight, rise from their sick bed long enough to dash off a revolutionary paper, sending it off to another naturalist, now settled and meditating over his own experiences of new and unfamiliar forms of life.
The naturalists of alien forms of life will not necessarily have the same point of view as that of astrobiologists — and that is all to the good. Science thrives when it is enriched by new perspectives. At present, the revolutionary new perspective is astrobiology, but that will not likely remain true indefinitely.
. . . . .
. . . . .
. . . . .
. . . . .
7 May 2013
The Transcendental Aesthetic and the Finding of
Other Minds in Other Species
An extrapolation of the “problem of other minds” to other species
What philosophers call “the problem of other minds” is closely related to what philosophers call the “mind-body problem” (both fall within philosophy of mind), and both are paradigmatic metaphysical questions that have been with philosophy from the beginning. Lately I’ve written a good deal about the mind-body problem on my other blog (e.g., in Naturalism and the Mind, Of Distinctions Weak and Strong, Of Distinctions, Principled and Otherwise, Cartesian Formalism, etc.), and this has got me to thinking about the problem of other minds.
I have never found the idea of other minds in other species to be in the least problematic. When you look into the eyes of another living being, whether human being or other being, you are well aware of the moment of mutual recognition, and you are equally well aware at that moment of mutual recognition that you are sharing that moment with another consciousness (that is to say, you experience a social temporality).
In The Eye of the Other I wrote:
It is when we look into the eye of the other that we recognize the consciousness of the other. Even if we feel that the reality of other minds is beyond philosophical demonstration, even if we are skeptics of other minds, it would be extraordinarily difficult to look into the eyes of another and not experience that immediate reaction of recognition of another mind. When we look not only into the eyes of another being but also into the eyes of another species, there is simultaneously the recognition of the awareness of the other and of the alien nature of that awareness.
Some people feel obliged to deny this inter-species recognition of common consciousness on ideological grounds, although few ever think of speciesism as an ideology. As I have recently observed in relation to geopolitics, which I characterized as an ideology that does not know itself to be an ideology, so too with speciesism: for many it is simply an unexamined presupposition and is never formalized as an explicit article of belief.
While I myself don’t find anything in the least problematic about consciousness in other species, and I think that anyone that takes a naturalistic point of view would be hard-pressed to deny it, I cannot deny that there are some persons who feel a real sense of moral horror in recognizing the consciousness of other species. I am fully aware of this moral horror, and I am utterly unsympathetic to it. To paraphrase Freud on the “oceanic” feeling, I am unable to discover this moral horror in myself.
Some of those who are uncomfortable with the ascription of consciousness to other species simply don’t like animals, and some of those similarly disposed are just completely uninterested in animals and find it peculiar that some human beings seem to be closer to their dogs and cats than they are to other human beings. Such persons sometimes become visibly discomfited at any mention of Johnson’s Hodge or Greyfriars Bobby or Hachikō, all memorialized by statues. I have personally heard individuals of this particular temperament indignantly lecture others (myself included) on the dangers of anthropomorphizing our companion animals. If I were to be so lectured today, I would lecture right back on anthropic bias in the philosophy of mind, which is utterly out of place and unbecoming of a philosopher (which in this instance includes anyone who makes, or who implies, philosophical assertions about mind, specifically, denying mind to certain classes of existents).
Such persons often live in an exclusively human world, and to them the animal world seems inexplicably alien. This in itself is an implicit recognition of an animal world, that is to say, a world constituted by animal consciousness. But, of course, not all who deny consciousness to other species can be so pigeon-holed. Some who have completely succumbed to anthropic bias in the philosophy of mind are in no sense living in an exclusively human world, and certainly when the dogma of human exceptionalism in consciousness gained currency, long before our industrial-technological civilization freed us from animal muscle power as the motive force of civilization, almost everyone lived intimately with animals.
In this latter context, prior to industrialization, there was always a theological overlay to the denial of consciousness to other species. Indeed, it is very likely that, if the terms of the philosophical problem of other minds were carefully explained, those with a theological world view might well grant consciousness to other species without hesitation, and simply deny that other species possess a “soul,” which is simply a theologically-legitimized devalorization. In practice, this comes to much the same thing as the denial of consciousness to other species and a sedulous distinction between the human and the animal realms.
I observed in The Origins of Physicalism that Cartesianism was the original “mechanical philosophy,” and while Cartesianism in the time of Descartes and immediately afterward incorporated human exceptionalism into the philosophy (i.e., it institutionalized anthropic bias in the philosophy of mind), the logical extrapolation of the theory was evident, and what the Cartesians practised upon other species later philosophers in the mechanistic tradition came to practise also upon human beings: the denial of consciousness.
Today we have a school of thought that is not exactly the denial of consciousness but rather its revaluation, or, better, its devaluation: in techno-philosophy, consciousness thus devalued is called the “user illusion.” In traditional philosophy, the denial of the existence of consciousness is called “eliminativism,” since instead of seeking to reduce consciousness to something else that is not consciousness (and thereby exemplifying reductivism), eliminativism cuts the Gordian Knot and simply denies that there is any such thing as consciousness — meaning that there is nothing to be “explained away.” I am sure that I am not the only one who finds this to be a thoroughly unsatisfying “solution” to a perennial philosophical problem.
How then are we to understand the minds of other species, i.e., the problem of other minds as generalized to include non-human species? What philosophical framework exists that can provide a conceptual infrastructure for such an understanding? There are many possibilities, but today I would like to consider a Kantian approach.
If we take as the lesson of Kant’s transcendental aesthetic that the mind is being continually bombarded by a riot of sensations from all the various bodily sensory organs, and that the mind then constitutes a kind of conceptual sieve that shapes, channels and directs the mass of sensory experience into something coherent upon which an organism can act, we can recognize that much the same process occurs in other species. All mammals have more or less similar bodies and similar sensory endowments, so that all living mammals are constantly being bombarded by a riot of sensations which each creature must sort into coherent experience. The fact that we can play fetch with a dog, and both successfully interact in one and the same world, simultaneously recognizing the stick at the center of the game as an object that passes between two or more organisms involved in a game of fetch, suggests that we and the dog constitute and cognize the world in a remarkably similar fashion.
The dog, like us, is receiving sensory signals from his eyes, ears, nose, and so forth, as well as experiencing kinesthetic sensations from the movement of his body as he exerts himself in lunging after the stick. From all of this sensation the dog successfully distills a world, and that world is remarkably similar to our world.
A few years ago I had an interesting experience that bears directly on games of fetch and shared experience, when I had an opportunity to feel what it was like to be a dog among dogs. I was at a vacation house on a river, and had brought my wetsuit along so I could swim. The river is fed by snow melt from Mt. Hood and it is one of the coldest rivers in which I have ever been swimming. I put on my wetsuit and got into the water just as others were beginning to play fetch with a large black lab that they had brought along. They threw a stick into the frigid waters of the river, and the lab plunged in to fetch the stick. The next time the stick was thrown I started swimming toward it at the same time that the lab started swimming toward it. The lab looked at me and instantly saw me as a competitor for the stick. He swam all the harder and made it to the stick before me, with an obvious sense of triumph.
Of course, most people have had experiences like this in life, and some people will dismiss such experiences as readily as Descartes dismissed his correspondent’s stories attempting to prove that animals are not mere mechanisms. However we interpret such experiences, we share and interact in a common world. Although this is utterly contrary to the spirit of Kant, I have to observe that any animal that could not distill coherent experience of the world out of its mass of sensation would never survive. Evolution selects for those organisms that can best hunt or avoid being prey in the common world in which predator and prey interact. This is a naturalistic point of view, whereas Kant’s point of view was decidedly that of idealism.
Even if one rejects Kant’s idealism, as I do, there seems to me to be some residual value in the idea of the mind being involved in the constitution of experience. I think that Kant was right that we have certain a priori intuitions that order our experience, but I think that this is much more fluid and pluralistic than Kant’s exposition of the transcendental aesthetic allows. While I wrote above that mammals all have a relatively similar experience of the world, a function of similar sensory and cognitive endowments, I would allow that there is some important variation. Sight plays a very large role in how human beings cognize the world; smell plays a disproportionate role in how dogs cognize the world; sound plays a disproportionate role in how dolphins cognize the world.
All terrestrial critters of a given level of cognitive complexity have to distill coherent experience of one and the same world out of a mass of sensation, but that mass of sensation differs among different species. I suspect that this sensory difference means that different species also have different a priori conceptions that help them to organize their experience into a coherent whole, and that, just as sensory experience differs from species to species while admitting of degrees of greater or lesser similarity, so too the a priori ideas of distinct species differ from species to species but also admit of greater or lesser similarity. That is to say, smell may shape the world of a dog far more than it shapes our world, but we probably share far more in terms of sensory experience and organizing ideas with a dog than with a marine mammal, and probably we share much more with a marine mammal than with an octopus or other cephalopod. This is a function and an illustration of a point I recently tried to make about the relationship between mind and embodiment.
I tried to make this point in my above-referenced post, The Eye of the Other: when I unexpectedly looked into the eyes of a sea lion, a marine mammal, we immediately recognized each other, and in the same moment of recognition also recognized the profound differences between the two of us. Common mammalian minds, differently embodied and living in profoundly different environments, will involve different sensory stimulation, different kinesthetic sensations, and different a priori concepts for organizing experience. But not too different. A shark, with a mind very different from a mammalian mind, can predate marine mammals, so that both sharks and marine mammals interact in the same marine environment just as human beings and tigers interact in the same terrestrial environment.
I suspect that, at least in some senses, the tiger’s mind and the human mind share concepts derived from their common terrestrial environment, while the shark and the marine mammal share concepts derived from the common marine environment, so that a tiger’s mind is more like a human mind than a sea lion’s mind is like a human mind, and, vice versa, a sea lion’s mind is more like a shark’s mind than it is like a human mind. Nevertheless, the human mind and the sea lion mind will share some concepts due to their common mammalian constitution. To employ a Wittgensteinian turn of phrase, the different sensations, concepts, and minds of distinct species overlap and intersect.
The recognition of consciousness in other species is no marginal and recondite inquiry. If, in the fullness of time, we encounter other intelligent species of extraterrestrial origin, we will need a philosophical framework in which we can integrate the idea of consciousness among other organic species; and if research into artificial intelligence and machine consciousness ever issues in a self-aware mechanism, fashioned by human hands in the same way that we might build a car or a house, we will again require a philosophical framework in which we can integrate the idea of consciousness even more generally, comprehending both the naturally-emergent consciousness of organic substrates and the artificially-emergent consciousness of non-organic substrates.
We need a robust philosophy of mind that does not stagnate in questions of whether there is mind or whether minds can be reduced to other phenomena or eliminated altogether. Such doctrines are — would be — utterly unhelpful in coming to understand what Husserl called the “structures of consciousness.” It is likely that the structures of consciousness vary incrementally among individuals of the same species, vary a little more across distinct species, and will vary even more among minds derived from different sources — different ecosystems and biospheres in the case of organically-originating extraterrestrial minds, and different mechanisms of implementation in the case of inorganically-originating minds of machine consciousness.
. . . . .
. . . . .
. . . . .
3 May 2013
Fourth in a Series on Existential Risk
“The human race’s prospects of survival were considerably better when we were defenceless against tigers than they are today, when we have become defenceless against ourselves.” Arnold Toynbee, “Man and Hunger” (Speech to the World Food Congress, 04 January 1963, quoted on the Anthropocene Blog)
Readers, I trust, will be aware of existential risks (as well as global catastrophic risks) since I’ve written several recent posts on this topic, including Research Questions on Existential Risk, Six Theses on Existential Risk, Existential Risk Reminder, Moral Imperatives Posed by Existential Risk, Existential Risk and Existential Uncertainty, and Addendum on Existential Risk and Existential Uncertainty. The idea of the “Death Event” is likely to be much less familiar, so I will try to sketch out the idea itself and its relationship to existential risk.
The idea of the “death event” is due to philosopher Edith Wyschogrod, and given exposition in her book Spirit in Ashes: Hegel, Heidegger, and Man-Made Mass Death. Wyschogrod took the title of her book from an aphorism of Wittgenstein’s from 1930: “I once said, perhaps rightly: The earlier culture will become a heap of rubble and finally a heap of ashes, but spirits will hover over the ashes.”
In defining the scope of the “death event” Wyschogrod wrote:
“I shall define the scope of the event to include three characteristic expressions: recent wars which deploy weapons in the interest of maximum destruction of persons, annihilation of persons, through techniques designed for this purpose (for example, famine, scorched earth, deportation), after the aims of war have been achieved or without reference to war, and the creation of death-worlds, a new and unique form of social existence in which vast populations are subjected to conditions of life simulating imagined conditions of death, conferring upon their inhabitants the status of the living dead.”
Edith Wyschogrod, Spirit in Ashes: Hegel, Heidegger, and Man-Made Mass Death, New Haven and London: Yale University Press, 1985, p. 15.
Wyschogrod’s conception of the “death world,” also given exposition in the text, is introduced in conscious contradistinction to the late Husserlian conception of the “Lifeworld” (Lebenswelt). (Cf. Chapter 1, Kingdoms of Death) I cannot do justice to Wyschogrod’s excellent book in a few quotes, so I will simply encourage the reader to look up the book for himself, but I will give a couple more quotes to locate the “death event” in relation to the larger picture of our civilization. Wyschogrod sees a relation between the “death event” and the peculiar character of industrial-technological civilization:
“The procedures and instruments of death which depend upon the quantification of the qualitied world are innovations deriving from technological society and, to that extent, extend its point of view.”
Op. cit., p. 25
“…the world of the camps is both distinct from and tied to technological society, so too the nuclear void is embedded in the matrix of technological society but not related to it in simple cause and effect fashion.”
Op. cit., p. 29
Perhaps at some future time I will consider Wyschogrod’s “death event” thesis in relation to what I have called Agriculture and the Macabre, which is the particular relationship between agricultural civilization and death, but whether the reader agrees with me or not (or with Wyschogrod, for that matter) I will acknowledge without hesitation that the character of the macabre in agricultural civilization is very different from the place of the death event and the death world in industrial-technological civilization.
Wyschogrod focuses on death camps and industrialized warfare, but of course what shocked the world more than anything were the nuclear bombs that ended the war. A considerable bibliography could be compiled of the books exclusively devoted to the anguished reflection that followed the atomic explosions at Hiroshima and Nagasaki, many of them written by and about the scientists who worked on the Manhattan Project and made the bomb possible. Many of the most eminent philosophers of the time immediately began to think about the consequences — both contemporaneously and for the longer term human future — of human beings being in possession of nuclear weapons.
Bertrand Russell wrote two books on the possibility of nuclear war, Common Sense and Nuclear Warfare (1959) and Has Man a Future? (1961). Recently, in Bertrand Russell as Futurist, I discussed Russell’s views on the need for world government in order to prevent the annihilation of human life due to nuclear weapons — a view shared by Albert Einstein.
In 1958 Karl Jaspers published Die Atombombe und die Zukunft des Menschen, later translated into English as The Future of Mankind. What all of these works have in common is struggling with what Jaspers called “the new fact.” Of this new fact Jaspers wrote:
“The atom bomb of today is a fact novel in essence, for it leads mankind to the brink of self-destruction.”
Karl Jaspers, The Future of Mankind, Chap. I, p. 1
“the atom bomb is today the greatest of all menaces to the future of mankind… The possible reality which we must henceforth reckon with — and reckon with, at the increasing pace of developments, in the near future — is no longer a fictitious end of the world. It is no world’s end at all, but the extinction of life on the surface of the planet.”
Op. cit., p. 4
The fact that fear of nuclear Armageddon was felt viscerally as an all-too-real possibility for our world points to the fact that this was not merely the appearance of a new idea in human history — new ideas appear every day — but a fundamental shift in feeling. When the awful reality of the Second World War, which saw man-made mass death on an unprecedented scale, received its finale in the form of the atomic blasts at Hiroshima and Nagasaki, we had acquired a new object for our instinctual fear of annihilation.
The larger meaning of the “death event” — testified not only in Edith Wyschogrod’s explicit formulation, but also in the work of Bertrand Russell, Karl Jaspers, and a hundred others — is that of formal, reflexive consciousness of anthropogenic existential risk. We not only know that we are vulnerable to existential risk, we also know that we know. It is this formal, reflexive self-consciousness of existential risk that is the differentia between human history before the “death event” and human history after the “death event.” The “death event” was a crystallizing event, a particular moment in history that was a watershed for human suffering that placed that suffering in the naturalistic context.
Earlier catastrophes in human experience did not have this character — or, if they did have this character for a few individuals who realized the larger meaning of events, this formal, reflexive consciousness of human vulnerability did not achieve general recognition. Partly this was a consequence of the non-naturalistic and teleological assumptions that were integral with the outlook of earlier epochs of human civilization, before science made a naturalistic conception of the world entire conceivable. If one believes that a supernatural force will intervene to continue to maintain human beings in existence, there is no reason to be concerned with the possibility of human extinction.
Prior to industrial-technological civilization (made possible by the scientific revolution, which is particularly relevant in this context), the “end of the world” could only be understood in eschatological terms because eschatologies derived from theological cosmogonies were the only “big picture” accounts of the cosmos that had been formulated and which had achieved any degree of currency. (There have always been non-theological philosophical cosmogonies, but these have remained marginal throughout human history.)
The situation in regard to “big picture” conceptions of the world is closely parallel to that of biology prior to Darwin’s theory of natural selection: there were no strictly biological theories of life prior to Darwin, only theological theories that were employed to “explain” biological facts. With no alternative to a theological account of biology, it is to be expected that this sole point of view was the universal point of reference, just as, where there is no alternative to the theological account of history, this theological account is the sole point of reference in history.
In regard to traditional eschatologies, it would be just as apposite to point out that a supernatural agent might intervene to bring about the end of civilization or the extinction of all human beings (in contradistinction to supernatural interventions intended to be to our benefit), regardless of all human efforts made to preserve themselves and their civilization in existence. The point here is that once we recognize the efficacy of supernatural agents in human history, human agency in shaping the human future cannot be assumed, and in fact the idea of “destiny” (especially in the form of predestination) may come to prevail over conceptions of the future that allow a greater scope to human agency. This is why, in my post The Naturalistic Conception of History, I defined naturalism as “non-human non-agency,” i.e., the absence of supernatural agency.
To formulate this from the opposite point of view, we could say that it was only the essentially naturalistic assumptions of our own time, assumptions built into the structure of industrial-technological civilization (because it is dependent upon science, and science cannot systematically expand in the way that science has expanded in recent history without the working philosophical presupposition of methodological naturalism), that made it possible for human beings to understand that no deus ex machina was going to emerge at the end of the human drama to save us in spite of our failure to secure our own future.
Once human beings realized with fearful clarity that they possessed the power to annihilate civilization and possibly also all human life, it was only a small step from this consciousness of human vulnerability to a similar consciousness of human vulnerability whether the existential threat is anthropogenic or non-anthropogenic. A sufficient number of ill-advised and irreversible choices (choices that result in action or inaction, as the case may be) could mean the extinction of human beings, or the reduction of human activity to a level of insignificance. That is what we now know to be the case, and it shifts a heavy burden of responsibility onto human beings for their own future — a burden that had once been carried on the shoulders of gods.
It is only in the past few decades of contemporary science that we have begun to look at the long antiquity of man with the thought of our existential vulnerability in mind, retrospectively placing our fingers at the nodal points of our past, for there have been many times when we might all have been extirpated before we reached any of the many thresholds of development that have brought us to our present state, in which we can adequately conceptualize our existential risk.
In this way, existential risk mitigation efforts not only provide a kind of clarity in conceptualizing the human future, especially in so far as we abide by the moral imperatives imposed by existential risk, but also give us a novel perspective on the human past.
One of the guiding principles of contemporary thought on existential risk is to focus on those risks that human beings have no record of surviving. In order to make good on this principle, we need to understand what existential risks human beings have survived in the past, and to this end we must acquire a better knowledge of human evolution in a cosmological context, which is, in a sense, the particular concern of astrobiology.
. . . . .
Grand Strategy and Existential Risk: A Series:
4. Existential Risk and the Death Event
. . . . .
. . . . .
. . . . .