2 February 2013
In my last post, The Science of Time, I discussed the possibility of taking an absolutely general perspective on time and how this can be done in a way that denies time or in a way that affirms time, after the manner of big history.
David Christian, whose books on big history and his Teaching Company lectures on Big History have been seminal in the field, relates, by way of introduction to his final lectures, in which he switches from history to speculation on the future, that in his early big history courses his students felt as though they were cut off rather abruptly when he had brought them through 13.7 billion years of cosmic history only to drop them unceremoniously in the present without making any effort to discuss the future. It was this reaction that prompted him to continue beyond the present and to try to say something about what comes next.
Another way to understand this reaction of Christian’s students is that they wanted to see the whole of the history they have just been through placed in an even larger, more comprehensive context, and to do this requires going beyond history in the sense of an account of the past. To put the whole of history into a larger context means placing it within a cosmology that extends beyond our strict scientific knowledge of past and future — that which can be observed and demonstrated — and comprises a framework in the same scientific spirit but which looks beyond the immediate barriers to observation and demonstration.
Elsewhere in David Christian’s lectures (if my memory serves) he mentioned how some traditionalist historians, when they encounter the idea of big history, reject the very idea because history has always been about documents and eponymously confined to the historical period, when documents were kept after the advent of literacy. According to this reasoning, anything that happened prior to the invention of written language is, by definition, not history. I have myself encountered similar reasoning as, for example, when it is claimed that prehistory is not history at all because it happened prior to the existence of written records, which latter define history.
This is a sadly limited view of history, but apparently it is a view with some currency because I have encountered it in many forms and in different contexts. One way to discredit any intellectual exercise is to define it so narrowly that it cannot benefit from the most recent scientific knowledge, and then to impugn it precisely for its narrowness while not allowing it to change and expand as human knowledge expands. The explosion in scientific knowledge in the last century has made possible a scientific historiography that simply did not exist previously; to deny that this is history on the basis of traditional humanistic history being based on written records means that we must then define some new discipline, with all the characteristics of traditional history, but expanded to include our new knowledge. This seems like a perverse attitude to me, but for some people the label of their discipline is important.
Call it what you will then — call it big history, or scientific historiography, or the study of human origins, or deny that it is history altogether, but don’t try to deny that our knowledge of the past has expanded exponentially since the scientific method has been applied to the past.
In this same spirit, we need to recognize that a greatly expanded conception of history needs to reach into the future, that a scientific futurism needs to be part of our expanded conception of the totality of time and history — or whatever it is that results when we apply Russell’s generalization imperative to time. Once again, it would be unwise to be overly concerned with what we call this emerging discipline, whether it be the totality of time or the whole of time or temporal infinitude or ecological temporality or what Husserl called omnitemporality or even absolute time.
Part of this grand (historical) effort will be a future science of civilizations, as the long term and big picture conception of civilization is of central human interest in this big picture of time and history. We not only want to know the naturalistic answers to traditional eschatological questions — Where did we come from? Where are we going? — but we also want to know the origins and destiny of what we have ourselves contributed to the universe — our institutions, our ideas, civilization, the technium, and all the artifacts of human endeavor.
. . . . .
. . . . .
. . . . .
30 January 2013
F. H. Bradley in his classic treatise Appearance and Reality: A Metaphysical Essay, made this oft-quoted comment:
“If you identify the Absolute with God, that is not the God of religion. If again you separate them, God becomes a finite factor in the Whole. And the effort of religion is to put an end to, and break down, this relation — a relation which, none the less, it essentially presupposes. Hence, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him. It is this difficulty which appears in the problem of the religious self-consciousness.”
I think many commentators have taken this passage as emblematic of what they believe to be Bradley’s religious sentimentalism, and in fact the yearning for religious belief (no longer possible for rational men) that characterized much of the school of thought that we now call “British Idealism.”
This is not my interpretation. I’ve read enough Bradley to know that he was no sentimentalist, and while his philosophy diverges radically from contemporary philosophy, he was committed to a philosophical, and not a religious, point of view.
Bradley was an elder contemporary of Bertrand Russell, and Russell characterized Bradley as the grand old man of British idealism. This is from Russell’s Our Knowledge of the External World:
“The nature of the philosophy embodied in the classical tradition may be made clearer by taking a particular exponent as an illustration. For this purpose, let us consider for a moment the doctrines of Mr Bradley, who is probably the most distinguished living representative of this school. Mr Bradley’s Appearance and Reality is a book consisting of two parts, the first called Appearance, the second Reality. The first part examines and condemns almost all that makes up our everyday world: things and qualities, relations, space and time, change, causation, activity, the self. All these, though in some sense facts which qualify reality, are not real as they appear. What is real is one single, indivisible, timeless whole, called the Absolute, which is in some sense spiritual, but does not consist of souls, or of thought and will as we know them. And all this is established by abstract logical reasoning professing to find self-contradictions in the categories condemned as mere appearance, and to leave no tenable alternative to the kind of Absolute which is finally affirmed to be real.”
Bertrand Russell, Our Knowledge of the External World, Chapter I, “Current Tendencies”
Although Russell rejected what he called the classical tradition, and distinguished himself in contributing to the origins of a new philosophical school that would come (in time) to be called analytical philosophy, the influence of figures like F. H. Bradley and J. M. E. McTaggart (whom Russell knew personally) can still be found in Russell’s philosophy.
In fact, the above quote from F. H. Bradley — especially the portion most quoted, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him — is a perfect illustration of a principle found in Russell, and something on which I have quoted Russell many times, as it has been a significant influence on my own thinking.
I have come to refer to this principle as Russell’s generalization imperative. Russell didn’t call it this (the terminology is mine), and he didn’t in fact give any name at all to the principle, but he implicitly employs this principle throughout his philosophical method. Here is how Russell himself formulated the imperative (which I last quoted in The Genealogy of the Technium):
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
One of the distinctive features that Russell identifies as constitutive of the classical tradition, and in fact one of the few explicit commonalities between the classical tradition and Russell’s own thought, was the denial of time. The British idealists denied the reality of time outright, in the best Platonic tradition; Russell did not deny the reality of time, but he was explicit about not taking time too seriously.
Despite Russell’s hostility to mysticism as expressed in his famous essay “Mysticism and Logic,” when it comes to the mystic’s denial of time, Russell softens a bit and shows his sympathy for this particular aspect of mysticism:
“Past and future must be acknowledged to be as real as the present, and a certain emancipation from slavery to time is essential to philosophic thought. The importance of time is rather practical than theoretical, rather in relation to our desires than in relation to truth. A truer image of the world, I think, is obtained by picturing things as entering into the stream of time from an eternal world outside, than from a view which regards time as the devouring tyrant of all that is. Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
“…impartiality of contemplation is, in the intellectual sphere, that very same virtue of disinterestedness which, in the sphere of action, appears as justice and unselfishness. Whoever wishes to see the world truly, to rise in thought above the tyranny of practical desires, must learn to overcome the difference of attitude towards past and future, and to survey the whole stream of time in one comprehensive vision.”
Bertrand Russell, Mysticism and Logic, and Other Essays, Chapter I, “Mysticism and Logic”
While Russell and the classical tradition in philosophy both perpetuated the devalorization of time, this attitude is slowly disappearing from philosophy, and contemporary philosophers are more and more treating time as another reality to be given philosophical exposition rather than denying its reality. I regard this as a salutary development and a riposte to all who claim that philosophy makes no advances. Contemporary philosophy of time is quite sophisticated, and embodies a much more honest attitude to the world than the denial of time. (For those looking at philosophy from the outside, the denial of the reality of time simply sounds like a perverse waste of time, but I won’t go into that here.)
In any case, we can bring Russell’s generalization imperative to time and history even if Russell himself did not do so. That is to say, we ought to generalize to the utmost in our conception of time, and if we do so, we come to a principle parallel to Bradley’s that I think both Russell and Bradley would have endorsed: short of the absolute time cannot rest, and, having reached that goal, time is lost and history with it.
Since I don’t agree with this, though it would be one logical extrapolation of Russell’s generalization imperative as applied to time, this suggests to me that there is more than one way to generalize about time. One way would be the kind of generalization that I formulated above, presumably consistent with Russell’s and Bradley’s devalorization of time. Time generalized in this way becomes a whole, a totality, that ceases to possess the distinctive properties of time as we experience it.
The other way to generalize time is, I think, in accord with the spirit of Big History: here Russell’s generalization imperative takes the form of embedding all times within larger, more comprehensive times, until we reach the time of the entire universe (or beyond). The science of time, as it is emerging today, demands that we seek the most comprehensive temporal perspective, placing human action in an evolutionary context, placing evolution in a biological context, placing biology in a geomorphological context, placing terrestrial geomorphology in a planetary context, and placing this planetary perspective in a cosmological context. This, too, is a kind of generalization, and a generalization that fully feels the imperative that to stop at any particular “level” of time (which I have elsewhere called ecological temporality) is arbitrary.
On my other blog I’ve written several posts related directly or obliquely to Big History as I try to define my own approach to this emerging school of historiography: The Place of Bilateral Symmetry in the History of Life, The Archaeology of Cosmology, and The Stars Down to Earth.
The more we pursue the rapidly growing body of knowledge revealed by scientific historiography, the more we find that we are part of the larger universe; our connections to the world expand as we pursue them outward in pursuit of Russell’s generalization imperative. I think it was Hans Blumenberg in his enormous book The Genesis of the Copernican World, who remarked on the significance of the fact that we can stand with our feet on the earth and look up at the stars. As I remarked in The Archaeology of Cosmology, we now find that by digging into the earth we can reveal past events of cosmological history. As a celestial counterpart to this digging in the earth (almost as though concretely embodying the contrast to which Blumenberg referred), we know that by looking up at the stars, we are also looking back in time, because the light that comes to us ages after it has been produced. Thus is astronomy a kind of luminous archaeology.
In Geometrical Intuition and Epistemic Space I wrote, “…we have no science of time. We have science-like measurements of time, and time as a concept in scientific theories, but no scientific theory of time as such.” Scientists have tried to think scientifically about time, but, as with the case of consciousness, a science of time eludes us as a science of consciousness eludes us. Here a philosophical perspective remains necessary because there are so many open questions and no clear indication of how these questions are to be answered in a clearly scientific spirit.
Therefore I think it is too early to say exactly what Big History is, because we aren’t logically or intellectually prepared to say exactly what the Russellian generalization imperative yields when applied to time and history. I think that we are approaching a point at which we can clarify our concepts of time and history, but we aren’t quite there yet, and a lot of conceptual work is necessary before we can produce a definitive formulation of time and history that will make of Big History the science that it aspires to be.
. . . . .
. . . . .
. . . . .
. . . . .
25 December 2012
Prior to the advent of civilization, the human condition was defined by nature. Evolutionary biologists call this initial human condition the environment of evolutionary adaptedness (or EEA). The biosphere of the Earth, with all its diverse flora and fauna, was the predominant fact of human experience. Very little that human beings did could have an effect on the human condition beyond the most immediate effects an individual might cause in the environment, such as gathering or hunting for food. Nothing was changed by the passage of human beings through an environment that was, for them, their home. Human beings had to conform themselves to this world or die.
Since the advent of civilization, it has been civilization and not nature that determines the human condition. As one civilization has succeeded another, and, more importantly, as one kind of civilization has succeeded another kind of civilization — which latter happens far less frequently, since like kinds of civilization tend to succeed each other except when this process of civilizational succession is preempted by the emergence of an historical anomaly on the order of the initial emergence of civilization itself — the overwhelming fact of human experience has been shaped by civilization and the products of civilization, rather than by nature. This transformation from being shaped by nature to being shaped by civilization is what makes the passage from hunter-gatherer nomadism to settled agrarian civilization such a radical discontinuity in human experience.
This transformation has been gradual. In the earliest period of human civilizations, an entire civilization might grow up from nothing, spread regionally, assimilating local peoples not previously included in the project of civilization, and then die out, all without coming into contact with another civilization. The growth of human civilization has meant a gradual and steady increase in the density of human populations. It has already been thousands of years since a civilization could flourish and fail without encountering another civilization. It has been, moreover, hundreds of years since all human communities were bound together through networks of trade and communication.
Civilization is now continuous across the surface of the planet. The world-city — Doxiadis’ Ecumenopolis, which I discussed in Civilization and the Technium — is already an accomplished fact (though it is called by another name, or no name at all). We retain our green spaces and our nature reserves, but all human communities ultimately are contiguous with each other, and there is no direction that you can go on the surface of the Earth without encountering another human community.
The civilization of the present, which I call industrial-technological civilization, is as distinct from the agricultural civilization (which I call agrarian-ecclesiastical civilization) that preceded it as agricultural civilization was distinct from the nomadic hunter-gatherer paradigm that preceded it in turn. In other words, the emergence of industrialization interpolated a discontinuity in the human condition on the order of the emergence of civilization itself. One of the aspects of industrial-technological civilization that distinguishes it from earlier agricultural civilization is the effective regimentation and indeed rigorization of the human condition.
The emergence of organized human activity, which corresponds to the emergence of the species itself, and which is therefore to be found in hunter-gatherer nomadism as much as in agrarian or industrial civilization, meant the emergence of institutions. At first, these institutions were as unsystematic and implicit as everything else in human experience. When civilizations began to abut each other in the agrarian era, it became necessary to make these institutions explicit and to formulate them in codes of law and regulation. At first, this codification itself was unsystematic. It was the emergence of industrialization that forced human civilizations to make their institutions not only explicit, but also systematic.
This process of systematization and rigorization is most clearly seen in the most abstract realms of thought. In the nineteenth century, when industrialization was beginning to transform the world, we see at the same time a revolution in mathematics that went beyond all the earlier history of mathematics. While Euclid famously systematized geometry in classical antiquity, it was not until the nineteenth century that mathematical thought grew to a point of sophistication that outstripped and exceeded Euclid.
From classical antiquity up to industrialization, it was frequently thought, and frequently asserted, that Euclid was the perfection of human reason in mathematics and that Aristotle was the perfection of human reason in logic, and there was simply nothing more to be done in these fields beyond learning to repeat the lessons of the masters of antiquity. In the nineteenth century, during the period of rapid industrialization, people began to think about mathematics and logic in a way that was more sophisticated and subtle than even the great achievements of Euclid and Aristotle. Separately, yet almost simultaneously, three different mathematicians (Bolyai, Lobachevski, and Riemann) formulated systems of non-Euclidean geometry. Similarly revolutionary work transformed logic from its Aristotelian syllogistic origins into what is now called mathematical logic, the result of the work of George Boole, Frege, Peano, Russell, Whitehead, and many others.
At the same time that geometry and logic were being transformed, the rest of mathematics was also being profoundly transformed. Many of these transformational forces have roots that go back hundreds of years in history. This is also true of the industrial revolution itself. The growth of European society as a result of state competition within the European peninsula, the explicit formulation of legal codes and the gradual departure from a strictly peasant subsistence economy, the similarly gradual yet steady spread of technology in the form of windmills and watermills, ready to be powered by steam when the steam engine was invented, are all developments that anticipate and point to the industrial revolution. But the point here is that the anticipations did not come to fruition until the nineteenth century.
And so with mathematics. Descartes had made the calculus possible by his earlier innovation of analytical geometry, and Newton and Leibniz independently invented the calculus, but it was left on unsure foundations for centuries. These developments anticipated and pointed to the rigorization of mathematics, but that development did not come to fruition until the nineteenth century. The fruition is sometimes called the arithmetization of analysis, and involved the substitution of the limit method for fluxions in Newton and infinitesimals in Leibniz. This rigorous formulation of the calculus made possible engineering in its contemporary form, and rigorous engineering made it possible to bring the most advanced science of the day to the practical problems of industry. Intrinsically arithmetical realities could now be given a rigorous mathematical exposition.
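The substitution effected by the arithmetization of analysis can be stated in a single formula. Where Newton’s fluxions and Leibniz’s infinitesimals invoked infinitely small quantities, the Weierstrassian limit definition of the derivative appeals only to ordinary finite numbers:

```latex
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```

where the limit is itself defined arithmetically: for every $\varepsilon > 0$ there is a $\delta > 0$ such that $\bigl| \tfrac{f(x+h)-f(x)}{h} - f'(x) \bigr| < \varepsilon$ whenever $0 < |h| < \delta$. No infinitesimal quantity appears anywhere in the definition, which is what at last made the foundations of the calculus secure.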
Historians of mathematics and industrialization would probably cringe at my potted sketch of history, but here it is in sententious outline:
● Rigorization of mathematics (also called the arithmetization of analysis)
● Mathematization of science
● Scientific systematization of technology
● Technological rationalization of industry
I have discussed part of this cycle in my writings on industrial-technological civilization and the disruption of the industrial-technological cycle. The origins of this cycle involve the additional steps that made the cycle possible, and many of these are the steps that made logic, mathematics, and science rigorous in the nineteenth century.
The reader should also keep in mind the parallel rigorization of social institutions that occurred, including the transformation of the social sciences after the model of the hard sciences. Economics, which is particularly central to the considerations of industrial-technological civilization, has been completely transformed into a technical, mathematicized science.
With the rigorization of social institutions, and especially the economic institutions that shape human life from cradle to grave, it has been inevitable that the human condition itself should be made rigorous. Foucault was instrumental in pointing out salient aspects of this, which he called biopower, and which, I suggest, will eventually issue in technopower.
I am not suggesting that this has been a desirable, pleasant, or welcome development. On the contrary, industrial-technological civilization is beset in its most advanced quarters by a persistent apocalypticism and declensionism as industrialized populations fantasize about the end of the social regime that has come to control almost every aspect of life.
I wrote about the social dissatisfaction that issues in apocalypticism in Fear of the Future. I’ve been thinking more about this recently, and I hope to return to this theme when I can formulate my thoughts with the appropriate degree of rigor. I am seeking a definitive formulation of apocalypticism and how it is related to industrialization.
. . . . .
. . . . .
. . . . .
17 October 2012
It is ironic, though not particularly paradoxical, that the earth sciences as we know them today only came into being as the result of the emergence of space science, and space science was a consequence of the advent of the Space Age. We had to leave the Earth and travel into space in order to see the Earth for what it is. Why was this the case, and what do I mean by this?
It has often been commented that we had to go into space in order to discover the Earth, which is to say, to understand that the Earth is a blue oasis in the blackness of space. The early images of the space program had a profound effect on human self-understanding. Photographs (as much as or more than any theory) provided the theoretical context that allowed us to have a unified perspective on the Earth as part of a system of worlds in space. Once we saw the Earth for what it was, what Carl Sagan called a “pale blue dot” in the blackness of space, a new perspective on the human condition was driven home that could not be forgotten once it had been glimpsed.
To learn that our sun was a star among stars, and that the stars were suns in their own right, that the Earth is a planet among planets, and perhaps other planets are other Earths, has been a long epistemic struggle for humanity. That the Milky Way is a galaxy among galaxies, a point that has been particularly driven home by recent observational cosmology as with the Hubble Ultra-Deep Field (UDF) image (and now the Hubble eXtreme-Deep Field (XDF) image), is an idea that we still today struggle to comprehend. The planethood of the Earth, the stellarhood of the sun, the galaxyhood of the Milky Way are all exercises in contextualizing our place in the universe, and therefore an exercise in Copernicanism.
But I am getting ahead of myself. I wanted to discuss the earth sciences, and to try to understand what they are and how they have become what they are. What are the Earth sciences? The Biology Online website has this brief and concise definition of the earth sciences:
The Earth Sciences, investigating the way our planet works and the mechanisms of nature that drive it.
Earth Science is the study of the Earth and its neighbors in space… Many different sciences are used to learn about the earth, however, the four basic areas of Earth science study are: geology, meteorology, oceanography and astronomy.
For a more detailed overview of the earth sciences, the Earth Science Literacy Initiative (ESLI), funded by the National Science Foundation, has formulated nine “big ideas” of earth science that it has published in its pamphlet Earth Science Literacy Principles. Here are the nine big ideas taken from their pamphlet:
1. Earth scientists use repeatable observations and testable ideas to understand and explain our planet.
2. Earth is 4.6 billion years old.
3. Earth is a complex system of interacting rock, water, air, and life.
4. Earth is continuously changing.
5. Earth is the water planet.
6. Life evolves on a dynamic Earth and continuously modifies Earth.
7. Humans depend on Earth for resources.
8. Natural hazards pose risks to humans.
9. Humans significantly alter the Earth.
Each of these “big ideas” is further elaborated in subheadings that frequently bring out the planethood of the Earth. For example, section 2.2 reads:
Our Solar System formed from a vast cloud of gas and dust 4.6 billion years ago. Some of this gas and dust was the remains of the supernova explosion of a previous star; our bodies are therefore made of “stardust.” This age of 4.6 billion years is well established from the decay rates of radioactive elements found in meteorites and rocks from the Moon.
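The “well established” age mentioned in the passage just quoted rests on elementary decay arithmetic. A minimal sketch in Python of the standard decay-equation form, assuming no daughter isotope was present initially; the uranium-238 half-life is illustrative and the numbers are mine, not the pamphlet’s:

```python
import math

def radiometric_age(parent, daughter, half_life):
    """Age of a sample from its parent/daughter isotope ratio,
    assuming a closed system with no initial daughter isotope."""
    decay_const = math.log(2) / half_life        # lambda = ln 2 / t_half
    return math.log(1 + daughter / parent) / decay_const

# U-238 decays to Pb-206 with a half-life of about 4.468 billion years.
t_half_u238 = 4.468e9

# Equal parts parent and daughter imply exactly one half-life has elapsed.
age = radiometric_age(parent=1.0, daughter=1.0, half_life=t_half_u238)
```

With equal parts parent and daughter isotope, one half-life has elapsed; ratios measured in meteorites and lunar rocks, across several independent decay systems, are what anchor the 4.6-billion-year figure.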
Intuitively, we would say that the earth sciences are those sciences that study the Earth and its natural processes, but the rapid expansion of scientific knowledge has made us realize that the Earth is not a closed system that can be studied in isolation. The Earth is part of a system — the solar system, and beyond that a galactic system, etc. — and must be studied as part of this system. But we didn’t always know this, and this comprehensive conception of earth science is still in the process of formulation.
The realization that the processes of the Earth and the sciences that study these processes must ultimately be placed in a cosmological context means that contemporary earth science is now, like astrobiology, which seeks to place biology in a cosmological context, a fully Copernican science, though not perhaps quite as explicitly as in the case of astrobiology. The very idea of Earth science as it is understood today, like planetary science and space science, is essentially Copernican; Copernicanism is now the telos of all the sciences. Copernican civilization needs Copernican sciences. As I said in my presentation to this year’s 100YSS symposium, the scope of an industrial-technological civilization corresponds to the scope of the science that enables this civilization.
What this means is that the sciences that generations of Earth-bound scientists have labored to create in order to describe the planet upon which they have lived, which was the only planet that they could know prior to the advent of space science, are all planetary sciences in embryo — all potentially Copernican sciences that can be extended beyond the Earth that was their inspiration and origin. Before space science, all science was geocentric and therefore essentially Ptolemaic. Space science changed that, and now all the sciences are gradually becoming Copernican.
In the case of earth science, this is a powerful scientific model because the earth sciences have been, by definition, geocentric. That even geocentric sciences can become Copernican is a powerful lesson and provides a model for other sciences to follow. I have often quoted Foucault as saying that “A real science recognizes and accepts its own history without feeling attacked.” I think it can be honestly said that the geosciences recognize and accept their history as geocentric sciences and this in no way inhibits their ability to transcend their geocentric origins and become Copernican sciences no longer exclusively tied to the Earth. I find this rather hopeful for the future of science.
Another way to conceptualize earth science is to think of the earth sciences as those sciences that have come to recognize the planethood of the Earth. This places the Earth in its planetary context among other planets of our solar system, and it also places these planets (as well as the growing roster of exoplanets) in the context of planetary history that we have learned first-hand from the Earth.
To a certain extent, earth science and planetary science (or planetology) are convertible: each is increasingly formulated and refined in reference to the other. What is planetary science? Here is the Wikipedia definition of planetary science:
Planetary science (rarely planetology) is the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them. It studies objects ranging in size from micrometeoroids to gas giants, aiming to determine their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, originally growing from astronomy and earth science, but which now incorporates many disciplines, including planetary astronomy, planetary geology (together with geochemistry and geophysics), atmospheric science, oceanography, hydrology, theoretical planetary science, glaciology, and the study of extrasolar planets. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology.
The Division for Planetary Sciences of the American Astronomical Society doesn’t give us the convenience of a definition of planetary science, but its offerings on A Planet Orbiting Two Suns, A Thousand New Planets, Buried Mars Carbonates, The Lunar Core, Propeller Moons of Saturn, A Six-Planet System, Carbon Dioxide Gullies on Mars, and many others give us concrete examples of planetary science, and these examples may, in certain ways, be more helpful than an explicit definition.
The “aims and scope” of the journal Earth and Planetary Science Letters also give something of a sense of what planetary science is:
Earth and Planetary Science Letters (EPSL) is the journal for researchers, policymakers and practitioners from the broad Earth and planetary sciences community. It publishes concise, highly cited articles (“Letters”) focusing on physical, chemical and mechanical processes as well as general properties of the Earth and planets — from their deep interiors to their atmospheres. Extensive data sets are included as electronic supplements and contribute to the short publication times. EPSL also includes a Frontiers section, featuring high-profile synthesis articles by leading experts to bring cutting-edge topics to the broader community.
A recent (2006) controversy over the status of Pluto as a planet led to an attempt by The International Astronomical Union (IAU) to formulate a more precise definition of what a planet is. The definition upon which they settled demoted Pluto from being a planet to being a dwarf planet. While this decision does not enjoy complete unanimity, it is gaining ground in the literature. Here is the IAU definition of planets, dwarf planets, and small solar system bodies:
(1) A planet is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit.
(2) A “dwarf planet” is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, (c) has not cleared the neighbourhood around its orbit, and (d) is not a satellite.
(3) All other objects, except satellites, orbiting the Sun shall be referred to collectively as “Small Solar System Bodies.”
With this greater precision of definition than had previously been the case in regard to planets, we could easily define planetary science as the study of celestial bodies that (a) are in orbit around the Sun, (b) have sufficient mass for their self-gravity to overcome rigid body forces so that they assume a hydrostatic equilibrium (nearly round) shape, and (c) have cleared the neighbourhood around their orbits. Of course, this ultimately won’t do, because a comprehensive planetary science will want to study all three classes of celestial bodies detailed above, and will especially want to study the mechanisms of planet formation, dwarf planet formation, and small object formation for the light that each shines on the others. Like the Earth, which is part of a larger system, all the planets are part of larger systems, and how they relate to those systems will have much to teach us about solar system formation.
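The IAU taxonomy quoted above is essentially a decision procedure, and it can be sketched as one. The boolean predicates below are stand-ins (my assumptions for the sake of illustration) for the astronomical judgments the definition actually requires — determining whether a body has "cleared its neighbourhood" is itself a substantial scientific question:

```python
# A toy classifier for the IAU (2006) taxonomy of Sun-orbiting bodies.
# The boolean inputs are assumptions standing in for real astronomical
# determinations; the branching mirrors clauses (1)-(3) quoted above.

def classify(orbits_sun: bool, round_shape: bool,
             cleared_orbit: bool, is_satellite: bool) -> str:
    """Return the IAU category for a celestial body."""
    if is_satellite:
        return "satellite"            # satellites are excluded from all three classes
    if not orbits_sun:
        return "not classified"       # the definition covers Sun-orbiting bodies only
    if round_shape and cleared_orbit:
        return "planet"               # clause (1)
    if round_shape:
        return "dwarf planet"         # clause (2)
    return "small solar system body"  # clause (3)

# Pluto: orbits the Sun, is round, but has not cleared its neighbourhood.
print(classify(True, True, False, False))  # → dwarf planet
```

The demotion of Pluto is visible in the structure: it satisfies clauses (a) and (b) of the planet definition but fails (c), which routes it into the dwarf planet class.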
This more comprehensive perspective brings us to the space sciences. What is space science? The Wikipedia entry on space sciences characterizes them in this way:
The term space science may mean:
●The study of issues specifically related to space travel and space exploration, including space medicine.
●Science performed in outer space (see space research).
●The study of everything in outer space; this is sometimes called astronomy, but more recently astronomy can also be regarded as a division of broader space science, which has grown to include other related fields.
It is interesting that this definition of space science does not mention cosmology, which is increasingly assuming the role of the master category of the sciences, since it is ultimately cosmology that is the context for everything else; but we could easily modify the last of the above three stipulations to read “cosmology” in place of “astronomy.” As the definition notes, the space sciences have grown to include other related fields, and in the future it may well be that the space sciences become the most comprehensive scientific category, providing the conceptual infrastructure in which all other scientific enterprises must be contextualized.
Since the Earth is a planet, and planets are to be found in space, one might readily assume that the Earth sciences, planetary sciences, and space sciences might be arranged in a nested hierarchy, with the Earth sciences contained within the planetary sciences, and the planetary sciences in turn contained within the space sciences.
Conceptually this is correct, but genetically, i.e., in terms of historical descent, it is obvious that the sciences we have created to study our home planet are the sciences that, when generalized and applied beyond the surface of the Earth, become planetary science and space science.
Before space science and planetary science, there were of course the familiar sciences of geology (later geomorphology), atmospheric science or meteorology (later climatology), oceanography, paleontology, and so forth, but it was only when the emergence of space science and planetary science placed these terrestrial sciences into a cosmological context that we came to see that the sciences that study the planet we call home together constitute the Earth sciences in contrast to, and really in the context of, space science and planetary science. Great strides have been made in this direction, but further work remains to be done.
We know that the Earth and its solar system are about 4.6 billion years old, and the most recent estimates for the age of the known universe put it at about 13.7 billion years. This means that the Earth has been around for almost exactly a third of the age of the entire universe, which is not an inconsiderable length of time. Our sun and its solar system stand in relation to other stars of a similar age, and these stars and solar systems with significant traces of heavier elements stand in certain relationships to earlier populations of stars. The whole history of the universe is present in the rocks of the Earth, and we have to keep this in mind in the expanding knowledge base of the earth sciences.
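The "almost exactly a third" claim is easy to check with the figures used in the text (the 2012-era estimates of 4.6 and 13.7 billion years):

```python
# Back-of-the-envelope check of the age fraction claimed above.
age_earth = 4.6e9       # years: age of the Earth and its solar system
age_universe = 13.7e9   # years: age of the known universe (estimate used in the text)

fraction = age_earth / age_universe
print(f"{fraction:.3f}")  # → 0.336, i.e. almost exactly one third
```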
While geological time scales are essentially geocentric, it would be possible to formulate an astrogeography and an astrogeographical time scale, extrapolating earth science to planetary science and thence to space science, that not only placed Earth’s geological history into cosmological context but also placed all planetary bodies and planetary systems and their geology in a cosmological context. For such an undertaking the generations of stars and planetary formation would be of central concern, and we could expect to see patterns across stars and solar systems of the same generations, and across planets within a given solar system.
This work has already begun, as can be seen in the above table laying out the geological histories of the Earth, the Moon, and Mars in parallel. Since one of the major theories for the formation of the Moon is that most of its substance was ripped out of the Earth by an enormous collision, the geological histories of the Earth and the Moon may ultimately be shown to coincide.
Stars and planets formed from the same dust and debris clouds, filled with the remnants of the nucleosynthesis of earlier populations of stars. This is now familiar to everyone. Galaxies, in turn, formed from stars, and thus also carry a generational index reflecting a galaxy’s position in the natural history of the universe.
Since we now also believe that all or almost all spiral galaxies (and perhaps also other non-spiral or irregular galaxies) have a supermassive black hole at their centers, I have lately come to think of entire galaxies as the vast “solar systems” of supermassive black holes. In other words, a supermassive black hole is to a galaxy as a star is to a solar system. As planetary systems formed around newly born stars, galaxies formed around newly born black holes (if their gravity was sufficiently strong to form such a system). This way of thinking about galaxies introduces another parallelism between the microcosm of the solar system and the macrocosm of the universe at large, the structure of which is defined by galaxies, clusters of galaxies, and superclusters.
All of this falls within a single natural history of which we are a part.
Our history and the history of the universe are one and the same.
. . . . .
. . . . .
. . . . .
. . . . .
26 September 2012
An Hypothesis in the Theory of Civilization
Not long ago in Eo-, Eso-, Exo-, Astro- I discussed how Joshua Lederberg’s distinctions between eobiology, esobiology, and exobiology can be used as a model for the concepts of eocivilization, esocivilization, and exocivilization, all of which are anterior to the more comprehensive conception of astrocivilization (like the more comprehensive conception of astrobiology).
My post on Eo-, Eso-, Exo-, Astro- was in part a correction to my earlier post Eo-, Eso-, Astro-, in which I had contrasted eobiology to exobiology, when I should have been contrasting esobiology to exobiology.
I had derived the contrast of eobiology and exobiology from Steven J. Dick and James E. Strick’s excellent book The Living Universe: NASA and the Development of Astrobiology, in which they cite Lederberg’s contrast of these terms. I had initially drawn the wrong contrast between the two concepts. When I started to read Lederberg’s writings, I realized that Lederberg was making a dramatic contrast between the scientific study of origins and the scientific study of destiny, rather than the contrast I expected. However, the contrast I originally drew remains a valid schema for understanding the comprehensive conception of astrobiology — and, by extension, the comprehensive conception of astrocivilization.
Astrobiology may be understood as the integration of esobiology — our biology, terrestrial biology — and exobiology — biology not of the Earth — into a comprehensive whole that places life in a cosmological context. Parallel to this, I define astrocivilization as the integration of esocivilization — our civilization, terrestrial civilization — and exocivilization — civilization not of the Earth — into a comprehensive whole that places civilization in a cosmological context. These concepts are not merely parallel, but the parallel between concepts of biology and concepts of civilization follows from a naturalistic conception of civilization as an extension of biology.
Civilization can be understood as a greatly elaborated result of behavioral adaptation. Just as evolutionary gradualism takes us imperceptibly over countless generations from the simple origins of life to the complexity of life we know today, so too evolutionary gradualism in the development of civilization takes us imperceptibly over countless generations from the simplest behavioral adaptations to the complexity of behavioral adaptation that culminates in civilization — and which may well culminate in some further post-civilizational social institution. (We must add this last proviso so as not to be mistaken for advocating some kind of teleological conception of civilization, as one might expect, for example, from strong formulations of the anthropic cosmological principle — something I had tried to address in Formulating an Anthropic Cosmological Principle Worthy of the Name.)
In reformulating my contrast of eocivilization and exocivilization as the contrast between esocivilization and exocivilization, the term “eocivilization” is freed up to assume its more etymologically accurate meaning, which properly should be “early civilization” (“eo-” coming from the Greek means “early”). This turns out to be a very useful concept, but it also points to an additional thesis in the theory of civilization.
As in astrobiology, in which we study life on Earth as a clue to life in the cosmos, so too in astrocivilization we study civilization on Earth as a clue to civilization in the universe. Life on Earth is the only life that we know of, and civilization on the Earth is the only civilization that we know of, but in so far as we approach life and civilization from the scientific perspective of methodological naturalism, we do not assume that these are necessarily the only instances of life or of civilization in the cosmos. There may be other instances of life and civilization of which we simply know nothing.
In light of the possibility of life and civilization elsewhere in the universe, but our only knowledge of civilization being terrestrial civilization, I will call the terrestrial eocivilization hypothesis the position that identifies early civilization, i.e., eocivilization, with terrestrial civilization. In other words, our terrestrial civilization is the earliest civilization to emerge in the cosmos. Thus the terrestrial eocivilization hypothesis is the civilizational parallel to the rare earth hypothesis, which maintains, contrary to the Copernican principle, that complex life like that on Earth is rare in the universe. I could call it the “rare civilization hypothesis” but I prefer “terrestrial eocivilization hypothesis.”
It is possible to further distinguish between the position that terrestrial civilization is the first and earliest civilization in the cosmos, and the position that terrestrial civilization is unique and the sole source of civilization in the cosmos. There may be exocivilizations that have emerged or will emerge after terrestrial civilization, meaning that there are several sources of civilization in the cosmos, but that terrestrial civilization is the earliest to emerge. Thus the terrestrial eocivilization thesis can be distinguished from the uniqueness of terrestrial civilization. We might call the non-uniqueness of industrial-technological civilization on the Earth the “multi-regional hypothesis” in astrocivilization (to borrow a term from hominid evolutionary biology), but I would prefer to simply call it the “Non-Uniqueness Thesis.”
In the event that human civilization expands cosmologically and is ultimately the source of civilization on exoplanets that are part of other solar systems and perhaps even other galaxies, the terrestrial eocivilization thesis will have more substantive content than it does at present, when (if the thesis is true) eocivilization is simply identical to all civilization in the cosmos. All we can say at present, however, is that terrestrial civilization is identical to all known civilization in the cosmos. To assert more than this is to assert the terrestrial eocivilization hypothesis, which is underdetermined and goes well beyond available evidence.
. . . . .
. . . . .
. . . . .
. . . . .
10 September 2012
When writing about civilization I have started using the term “industrial-technological civilization” as I believe this captures more accurately the sense of what is unique about contemporary civilization. In Modernism without Industrialism: Europe 1500-1800 I argued that there is a sense in which this early modern variety of civilization was an abortive civilization (a term used by Toynbee), the development of which was cut short by the sudden and unprecedented emergence of industrial-technological civilization (an instance of preemption). I also discussed this recently in Temporal Structures of Civilization.
What I am suggesting is that the industrial revolution inaugurated a novel form of civilization that overtook modernism and essentially replaced it through the overwhelming rapidity and totality of industrial-technological development. And while the industrial revolution began in England, it was in nineteenth century Germany that industrial-technological civilization proper got its start, because it was in Germany that the essential elements that drive industrial-technological civilization came together for the first time in a mutually-reinforcing feedback loop.
The essential elements of industrial-technological civilization are science, technology, and engineering. Science seeks to understand nature on its own terms, for its own sake. Technology is that portion of scientific research that can be developed specifically for the realization of practical ends. Engineering is the actual industrial implementation of a technology. I realize that I am introducing conventional definitions here, and others have established other conventions for these terms, but I think that this much should be pretty clear and not controversial. If you’d like to parse the journey from science to industry differently, you’ll still come to more or less the same mutually-reinforcing feedback loop.
The important thing to understand about the forces that drive industrial-technological civilization is that this cycle is not only self-reinforcing but also that each stage is selective. Science produces knowledge, but technology selects only that knowledge from the scientific enterprise that can be developed for practical uses; of the many technologies that are developed, engineering selects those that are most robust, reproducible, and effective to create an industrial infrastructure that supplies the mass consumer society of industrial-technological civilization. The process does not stop here. The achievements of technology and engineering are in turn selected by science in order to produce novel and more advanced forms of scientific instrumentation, with which science can produce further knowledge, thus initiating another generation of science followed by technology followed by engineering.
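The escalating character of this cycle can be made vivid with a toy numerical sketch. All quantities and selection rates below are illustrative assumptions chosen for the example, not empirical measurements; the point is only the structure: each stage selects a fraction of the previous stage's output, and the engineered remainder feeds back to enlarge the next round of science.

```python
# A toy model of the selective feedback loop of industrial-technological
# civilization: science -> technology -> engineering -> science.
# All rates are illustrative assumptions, not measurements.

def run_cycle(generations, findings=100.0,
              tech_rate=0.2,  # fraction of scientific findings with practical uses
              eng_rate=0.5,   # fraction of technologies robust enough to deploy
              boost=1.5):     # gain in scientific output per unit of deployed instrumentation
    """Return scientific output per generation under the feedback loop."""
    history = []
    for _ in range(generations):
        technologies = findings * tech_rate  # technology selects from science
        deployed = technologies * eng_rate   # engineering selects from technology
        findings += deployed * boost         # instruments enlarge the next round of science
        history.append(findings)
    return history

for generation, output in enumerate(run_cycle(5), start=1):
    print(f"generation {generation}: scientific output {output:.1f}")
```

With these (hypothetical) rates the output compounds by fifteen percent per generation even though each stage discards most of what it receives; the escalation comes from the feedback itself, not from any single stage.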
Because of this unique self-perpetuating cycle of industrial-technological civilization, continuous scientific, technological, and engineering development is the norm. It is very tempting to call this development “progress,” but as soon as we mention “progress” it gets us into trouble. Progress is problematic because it is ambiguous; different people mean different things when they talk about progress. As soon as someone points out the relentless growth of industrial-technological civilization, someone else will point out some supposed depravity that has flourished along with industrial-technological civilization in order to disprove the idea that such civilization involves “progress.” The ambiguity here is the conflation of technological progress and moral progress.
It is often said that poets only hope to produce poetry as good as that of past poets, and few imagine that they will create something better than Homer, Dante, Chaucer, or Shakespeare. The standards of poetry and art were set high early in the history of civilization, so much so that contemporary poets and sculptors do not imagine progress to be possible. One can give voice to the authentic spirit of one’s time, but one is not likely to do better than artists of the past did in their effort to give voice to the spirit of a different civilization. Thus it would be difficult to argue for aesthetic progress as a feature of civilization, much less industrial-technological civilization, any more than one would be likely to attribute moral progress to it.
Contemporary thinkers are also very hesitant to use the term “progress” because of its abuse in the recent past. When a history is written so that the whole of previous history seems to point to some present state of perfection as the culmination of the whole of history, we call this Whiggish history, and everyone today is contemptuous of Whiggish history because we know that history is not an inevitable progression toward greater rationality, freedom, enlightenment, and happiness. Whiggish history is usually traced to Sir James Mackintosh’s The History of England (1830–1832, 3 vols.), which was thought to inaugurate a particular nineteenth century fondness for progressive history, so much so that one often hears the phrase, “the nineteenth century cult of progress.”
Alternatively, the origins of Whiggish history can be attributed to the Marquis de Condorcet’s Outlines of an historical view of the progress of the human mind (1795), and especially its last section, “TENTH EPOCH. Future Progress of Mankind.”
Given the dubiousness of moral progress, the absence of aesthetic progress, and the bad reputation of history written to illustrate progress, historians have become predictably skittish about saying anything that even suggests progress. This has created an historiographical climate in which any progress is simply dismissed as impossible or illusory, but we know this is not true. Even while some dimensions of civilization may remain static, and some may become retrograde, there are some dimensions of civilization that have progressed, and we need to say so explicitly or we will misunderstand the central fact of life in industrial-technological civilization.
Thus I will assert as the Industrial-Technological Thesis that technological progress is intrinsic to industrial-technological civilization. (I could call this the “fundamental theorem of industrial-technological civilization” or, if I wanted to be even more tendentious, “the technological-industrial complex.”) I wish to be understood as making a rather strong (but narrow) claim in so asserting the industrial-technological thesis.
More particularly, I wish to be understood as asserting that industrial-technological civilization is uniquely characterized by the escalating feedback loop of science, technology, and engineering, and that if this cycle should fail or shudder to a halt, the result will not be a stagnant industrial-technological civilization, but a wholly distinct form of civilization. Given the scope and scale of contemporary industrial-technological civilization, which possesses massive internal momentum, even if the cycle that characterizes technological progress should begin to fail, the whole of industrial-technological civilization will continue in existence in its present form for quite some time to come. Transitions between distinct forms of civilization are usually glacially slow, and this would likely be the case with the end of industrial-technological civilization; the advent of industrial-technological civilization is the exception due to its rapidity, thus we must acknowledge at least the possibility that another rapid advent is possible (by way of another instance of preemption), even if unlikely.
Because of pervasive contemporary irony and skepticism, which is often perceived as being sufficient in itself to serve as the basis for the denial of the technological-industrial thesis, one expects to hear casual denials of progress. By asserting the technological-industrial thesis, and noting the pervasive nature of technological progress within it (and making no claims whatsoever regarding other forms of progress — moral, aesthetic, or otherwise), I want to point out the casual and dismissive nature of most denials of technological progress. The point here is that if someone is going to assert that technological progress cannot continue, or will not continue, or plays no essential role in contemporary civilization, it is not enough merely to assert this claim; if one denies the industrial-technological thesis, one is obligated to maintain an alternative thesis and to argue the case for the absence of technological progress now or in the future. (We might choose to call this alternative thesis Ecclesiastes’ Thesis, because Ecclesiastes famously maintained that, “The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.”)
The industrial-technological thesis has significant consequences. Since civilizations ordinarily develop over a long time frame (i.e., la longue durée), and industrial-technological civilization is very young, we can likely expect that it will last for quite some time, and that means that escalating progress in science, technology, and engineering will continue apace. The wonders that futurists have predicted are still to come, if we will be patient. As I observed above, even if the feedback loop of technological progress is interrupted, the momentum of industrial-technological civilization is likely to continue for some time — perhaps even long enough for novel historical developments to emerge from the womb of a faltering industrial-technological civilization and overtake it in its decline with innovations beyond even the imagination of futurists.
. . . . .
. . . . .
. . . . .
. . . . .
7 September 2012
Some time ago in The Pleasures of Model Drift I discussed how contemporary cosmology is challenged by the accelerating expansion of the universe, and that there are no really good explanations of this yet in terms of the received cosmological models. The resulting state of cosmological theories, then, is called model drift. This is a Kuhnian term. Almost exactly a year ago, when it was reported that some neutrinos may have traveled faster than light, it looked like we might also have had to face model drift in particle physics. Since these results haven’t been replicated, the standard model not only continues to stand, but has recently been fortified by the announcement of the discovery (sort of discovery) of the Higgs Boson.
But theoretical physics isn’t over yet. Some time ago in The limits of my language are the limits of my world I took up Wittgenstein’s famous aphorism from the perspective of recent work in particle physics that had “bent” the rules of quantum theory. Further work by at least one member of the same scientific team at the Centre for Quantum Information and Quantum Control and Institute for Optical Sciences, Department of Physics, University of Toronto, Aephraim M. Steinberg, has continued this line of research, which has been reported by the same BBC Science and technology reporter, Jason Palmer, who wrote up the earlier results (cf. Quantum test pricks uncertainty). This story covers research reported in Physical Review Letters, “Violation of Heisenberg’s Measurement-Disturbance Relationship by Weak Measurements.”
The abstract of this most recent research reads as follows:
While there is a rigorously proven relationship about uncertainties intrinsic to any quantum system, often referred to as “Heisenberg’s uncertainty principle,” Heisenberg originally formulated his ideas in terms of a relationship between the precision of a measurement and the disturbance it must create. Although this latter relationship is not rigorously proven, it is commonly believed (and taught) as an aspect of the broader uncertainty principle. Here, we experimentally observe a violation of Heisenberg’s “measurement-disturbance relationship”, using weak measurements to characterize a quantum system before and after it interacts with a measurement apparatus. Our experiment implements a 2010 proposal of Lund and Wiseman to confirm a revised measurement-disturbance relationship derived by Ozawa in 2003. Its results have broad implications for the foundations of quantum mechanics and for practical issues in quantum measurement.
Experimentalists are chipping away at Heisenberg’s Uncertainty Principle. They aren’t presenting their research as something especially radical — one might even think of this recent work as an instantiation of radical theories, modest formulations — but this is theoretically and even philosophically quite important.
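The distinction the abstract draws can be put in symbols. The notation below is my gloss, following the standard presentation in this literature rather than the paper itself: $\sigma$ denotes the standard deviation of an observable in the prepared state, $\epsilon$ the error of a measurement, and $\eta$ the disturbance it imparts.

```latex
% The rigorously proven preparation uncertainty (Kennard/Robertson form),
% which is untouched by the experiment:
\sigma(Q)\,\sigma(P) \;\geq\; \frac{\hbar}{2}

% Heisenberg's original heuristic measurement-disturbance relation,
% commonly taught but never rigorously proven:
\epsilon(Q)\,\eta(P) \;\geq\; \frac{\hbar}{2}

% Ozawa's revised relation (2003), which the weak-measurement
% experiment confirms:
\epsilon(Q)\,\eta(P) + \epsilon(Q)\,\sigma(P) + \sigma(Q)\,\eta(P)
  \;\geq\; \frac{\hbar}{2}
```

It is the middle relation that the experiment reports violating; the first and third stand, which is why the result chips away at one reading of "Heisenberg's uncertainty principle" without overturning quantum uncertainty as such.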
We recall that despite himself making crucial early contributions to quantum theory, Einstein eventually came to reject quantum theory, offering searching and subtle critiques of the theory. In his time Einstein was isolated among physicists for coming to reject quantum theory at the very time of its greatest triumphs. Quantum theory has gone on to become one of the most verified theories — and verified to the most exacting standards — in the history of physics, notwithstanding Einstein’s criticisms. Einstein primarily fell out with quantum theory over the notion of quantum entanglement, though Einstein, himself a staunch determinist, was also greatly troubled by Heisenberg’s uncertainty principle. Many, perhaps including Einstein himself, conflated physical determinism with scientific realism, so that a denial of determinism came to be associated with a rejection of realism. Heisenberg’s uncertainty principle is Exhibit “A” when it comes to the denial of determinism. So I think that if Einstein had lived to see this most recent work, he would have been both fascinated and intrigued by its implications for the uncertainty principle, and indeed its philosophical implications for physics.
Einstein was a uniquely philosophical physicist — the very antithesis of what recent physics has become, and which I have called Fashionable Anti-Philosophy (and which I elaborated in Further Fashionable Anti-Philosophy). From his earliest years, Einstein carefully studied philosophical works. He is said to have read Kant’s three critiques in his early teens. And Einstein’s rejection of quantum theory, which he modestly and humorously characterized as saying that something in his little finger told him that it couldn’t be right, was a philosophical rejection of quantum theory.
The recent research into Heisenberg’s uncertainty principle is not couched in philosophical terms, but it is philosophically significant. The very fact that this research is going on suggests that others, not only Einstein, have been dissatisfied with the uncertainty principle as it is usually interpreted, and that scientists have continued to think critically about it even as the uncertainty principle has been taught for decades as orthodox physics. This is a perfect example of what I have called Science Behind the Scenes.
The uncertainty of quantum theory, given formal expression in Heisenberg’s uncertainty principle, came to be interpreted not only epistemically, as placing limits on what we can know, but it was also interpreted ontologically, as placing limits on the constituents of the world. In so far as Heisenberg encouraged an ontological interpretation of the uncertainty principle, which I believe to be the case, he was advancing an underdetermined theory, i.e., an ontological interpretation of the uncertainty principle goes beyond — I think it goes far beyond — the epistemic uncertainty that we must posit in order to do quantum theory.
It seems to me that it is pretty easy to interpret the recent research cited above as questioning the ontological interpretation of the uncertainty principle while leaving an epistemic interpretation untouched. The limits of human knowledge are often poignantly brought home to us in our daily lives in a thousand ways, but we need not make the unnecessary leap from limitations on human knowledge to limitations on the world. On the other hand, we also need not make any connection between realism and determinism. It is entirely consistent (even if it seems odd to some of us) that there should be an objective world existing apart from human experience of and knowledge of that world, and that that objective world should not be deterministic. It may well be essentially random, and only stochastically describable, when a given radioactive substance decays, but the radioactive substance and the event of decay are as real as Einstein’s little finger. If I could have a conversation with Einstein, I would try to convince him of precisely this.
Indeterminate realism is also an underdetermined theory, and it is to be expected that there are non-realistic theories that are empirically equivalent to indeterminate realism. It is for this reason that I believe there are other arguments, distinct from those above, that favor realism over anti-realism, or even realism over some of the more extravagant interpretations of quantum theory. But I won’t go into that now.
We aren’t about to return to classical theories and their assumptions of continuity such as we had prior to quantum theory, any more than we are about to give up relativistic physics and return to strictly Newtonian physics. That much is clear. Nevertheless, it is important to remember that we are not saddled with any one interpretation of relativity or quantum theory, and we are especially not limited to the philosophical theories formulated by those scientists who originally formulated these physical theories, even if the philosophical theories were part of the “original intent” (if you will forgive me) of the physical theory. Another way to put this is that we are not limited to the original model of a theory, hence model drift.
. . . . .
. . . . .
. . . . .
27 May 2012
In the painfully slow process of the formulation of a secular world view having started from civilizations that, throughout the world, have been permeated by religious significance — so much so that each of the world’s major religions roughly corresponds to each of the world’s major civilizations — one of the walls against which we repeatedly crack our heads is that of the traditional sense of grandeur that is so perfectly embodied in the religious rituals of ecclesiastical civilization.
For many if not most human beings, this grandeur of ritual translates into intellectual grandeur, and, again, for many if not most, this equation of religious grandeur with human honor and dignity has meant that any deviation from the traditions of ecclesiastical civilization has been treated as a deviation from the intrinsic respect due to human beings as human beings. That is to say, many Westerners (and possibly also many elsewhere in the world) express indignation, outrage, and anger over a naturalistic account of human origins. The whole legacy of Copernicus is seen as invidious to human dignity.
Among those in the sciences and philosophy, it has become commonplace to attribute the strongly negative reaction to naturalism (especially as it touches upon human origins) to the re-contextualization of humanity’s place in nature in view of a naturalistic cosmology. Anthropocentric cosmology is here treated as an expression of overweening human pride, and the need to re-conceptualize the cosmos in terms that make human beings and human concerns no longer central is seen not only as a necessary adjustment to scientific understanding but also as a stern lesson to human hubris.
In other words, the scientific demonstration of the peripheral position of humanity in a naturalistic cosmos is understood to be a moral good because it, “brings most men’s characters to a level with their fortunes” (to quote Thucydides). Science is a rough master, and by formulating scientific cosmology in these unforgiving terms I have made it sound harsh and unsympathetic. This was intentional, because this formulation comes closer to doing justice to the visceral intuitions of the indignant anthropocentric than the usual formulation in terms of a necessary correction to human pride.
Seen in this way, anthropocentric-ecclesiastical civilization and Copernican-scientific civilization are both related in an essential way to a conception of human pride. Both conceptions of humanity and of civilization have a fundamentally conflicted conception of pride. In ecclesiastical civilization, human pride in species-being (to employ a Marxist term) is magnified while individualistic pride is the sin of Satan and central to the fallen nature of the world. In Copernican civilization, human pride in human knowledge is magnified — and I note that human knowledge is often an individualistic undertaking, but see below for more on this — but pride in species-being is called into question.
In ecclesiastical civilization, pride in species-being is raised to the status of metaphysical pride and is postulated as the organizational principle of the world. But, of course, pride in species-being is identified with humility, and the whole of humanity is dismissed as sinners. In Copernican civilization, pride in knowledge — epistemic pride — is raised to the status of metaphysical pride and is postulated as the organizational principle of the world. But, of course, the epistemic pride of science is often identified with epistemic humility. As Socrates once said to Antisthenes, “I can see your pride through the holes in your cloak.”
Individualistic pride is closely connected to the heroic conception of civilization, and as civilization continues its relentless consolidation of social institutions integrated within a larger whole of human endeavor, the role (even the possibility) of individual heroic action is abridged. Individualistic pride in this context is even more closely connected with the heroic conception of science, which is (as I have pointed out elsewhere) already an antiquated notion.
When civilization was young and scientific research was the province of individuals, not institutions and their communities of researchers, almost all scientific discoveries were the result of heroic individual efforts. Science, like civilization, is now a collective enterprise, and just as the story of civilization was once told as the deeds of kings, so the story of science was once told as the deeds of discoverers. Such authentic efforts could still be found in the nineteenth century (in the person of Darwin) and even in the early twentieth century (in the person of Einstein). But it is rarely the case today, and will become rarer and possibly extinct in the future.
Pride in species-being (in contradistinction to individualistic pride) is something that I have not spent much time thinking about, but when I think about it now in the present context it seems to me that this represents a heroic conception of the career of humanity — a kind of collective heroism of a biological community striving to overcome adverse selection. Thus, if the world is magnified, how much greater is the glory of the species that triumphs over the deselective obstacles thrown up by the world? Religion magnifies the anthropocentrically-organized world in order to magnify the species-being that has been made the principle of the world; science magnifies the Copernican decentralized world in order to magnify the knower whose knowledge has been made the principle of the world.
As ecclesiastical civilization slowly, gradually, and incrementally gives way before Copernican civilization, novel ways will need to be found to supply the apparent human need for a heroic conception of the career of humanity as a whole. It will not be enough to insist upon the grandeur of the scientifically understood universe. We have seen that religion, science, and philosophy can all appeal to the grandeur of the world in making the case for a unification of the world around a particular principle. The Psalmist wrote, “The heavens declare the glory of God; and the firmament proclaims his handiwork.” Darwin wrote, “There is grandeur in this view of life.” Nietzsche wrote even as he was losing his mind, “Sing me a new song: the world is transfigured and all the heavens are filled with joy.”
Scientific knowledge is now a production of species-being, but I don’t think that science as an institution can bear the heavy burden of human hopes and dreams and expectations. Perhaps civilization, which is also collective and a production of species-being, could be channeled into a heroic conception of species-being that could serve an eschatological function. This seems like a real possibility to me, but it is not something that is yet a palpable reality.
If those who will someday formulate a future science of civilizations also see themselves as engineers of the human soul, i.e., that they conceive of the science of civilization not only descriptively but also prescriptively, they will want to not only formulate a doctrine of what civilization is, but also what civilization will be, can be, and ought to be. If civilization is to be a home for human hopes, then it must become something that is capable of sustaining and nurturing such hopes.
. . . . .
. . . . .
. . . . .
19 May 2012
We can make a distinction among distinctions between ad hoc and principled distinctions. The former category — ad hoc distinctions — may ultimately prove to be based on a principle, but that principle is unknown as long as the distinction remains an ad hoc distinction. This suggests a further distinction among distinctions between ad hoc distinctions that really are ad hoc, and which are based on no principle, and ad hoc distinctions that are really principled distinctions but the principle in question is not yet known, or not yet formulated, at the time the distinction is made. So there you have a principled distinction between distinctions.
A perfect evocation of ad hoc distinctions is to be found in the opening paragraph of the Preface to Foucault’s The Order of Things:
This book first arose out of a passage in Borges, out of the laughter that shattered, as I read the passage, all the familiar landmarks of my thought — our thought, the thought that bears the stamp of our age and our geography — breaking up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things, and continuing long afterwards to disturb and threaten with collapse our age-old distinction between the Same and the Other. This passage quotes a ‘certain Chinese encyclopedia’ in which it is written that ‘animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’. In the wonderment of this taxonomy, the thing we apprehend in one great leap, the thing that, by means of the fable, is demonstrated as the exotic charm of another system of thought, is the limitation of our own, the stark impossibility of thinking that.
Such distinctions are comic, though Foucault recognizes that our laughter is uneasy: even as we immediately recognize the ad hoc character of these distinctions, we realize that the principled distinctions we routinely employ may not be so principled as we supposed.
Foucault continues this theme for several pages, and then gives another formulation — perhaps, given his interest in mental illness, an illustration that is closer to reality than Borges’ Chinese encyclopedia:
“It appears that certain aphasiacs, when shown various differently coloured skeins of wool on a table top, are consistently unable to arrange them into any coherent pattern; as though that simple rectangle were unable to serve in their case as a homogeneous and neutral space in which things could be placed so as to display at the same time the continuous order of their identities or differences as well as the semantic field of their denomination. Within this simple space in which things are normally arranged and given names, the aphasiac will create a multiplicity of tiny, fragmented regions in which nameless resemblances agglutinate things into unconnected islets; in one corner, they will place the lightest-coloured skeins, in another the red ones, somewhere else those that are softest in texture, in yet another place the longest, or those that have a tinge of purple or those that have been wound up into a ball. But no sooner have they been adumbrated than all these groupings dissolve again, for the field of identity that sustains them, however limited it may be, is still too wide not to be unstable; and so the sick mind continues to infinity, creating groups then dispersing them again, heaping up diverse similarities, destroying those that seem clearest, splitting up things that are identical, superimposing different criteria, frenziedly beginning all over again, becoming more and more disturbed, and teetering finally on the brink of anxiety.”
Foucault here writes that, “the sick mind continues to infinity,” in other words, the process does not terminate in a definite state-of-affairs. This implies that the healthy mind does not continue to infinity: rational thought must make concessions to human finitude. While I find the use of the concept of the pathological in this context questionable, and I have to wonder if Foucault was unwittingly drawn into the continental anti-Cantorian tradition (Brouwerian intuitionism and the like, though I will leave this aside for now), there is some value to the idea that a scientific process (such as classification) must terminate in a finite state-of-affairs, even if only tentatively. I will try to show, moreover, that there is an implicit principle in this attitude, and that it is in fact a principle that I have discussed previously.
The quantification of continuous data requires certain compromises. Two of these compromises include finite precision errors (also called rounding errors) and finite dimension errors (also called truncation). Rounding errors should be pretty obvious: finite parameters cannot abide infinite decimal expansions, and so we set a limit of six decimal places, or twenty, or more — but we must set a limit. The difference between an actual figure and the limited decimal expansion of the same figure is called a finite precision error. Finite dimension errors result from the need to arbitrarily introduce gradations into a continuum. Using the real number system, any continuum can be faithfully represented, but this representation would require infinite decimal expansions, so we see that there is a deep consonance between finite precision errors and finite dimension errors. Thus, for example, we measure temperature by degrees, and the arbitrariness of this measure is driven home to us by the different scales we can use for this measurement. And if we could specify temperature using real numbers (including transcendental numbers) we would not have to compromise. But engineering and computers and even human minds need to break things up into manageable finite quantities, so we speak of 3 degrees C, or even 3.14 degrees C, but we don’t try to work with pi degrees C. Thus the increments of temperature, or of any other measurement, involve both finite precision errors and finite dimension errors.
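Both compromises can be made concrete in a few lines of code. The following sketch (the sensor reading is a made-up value, chosen purely for illustration) shows a finite precision error arising from rounding pi to six decimal places, and a finite dimension error arising from forcing a continuous temperature onto a scale of whole degrees:

```python
import math

# Finite precision error: pi has an infinite decimal expansion,
# so any stored representation must cut it off somewhere.
pi_exact = math.pi
pi_rounded = round(pi_exact, 6)          # limit of six decimal places
precision_error = abs(pi_exact - pi_rounded)

# Finite dimension error: a continuous quantity is forced onto
# an arbitrary discrete scale (here, whole degrees Celsius).
reading = 21.7358                        # hypothetical continuous reading
quantized = round(reading)               # reported simply as "22 degrees C"
dimension_error = abs(reading - quantized)

print(precision_error)                   # small but nonzero
print(dimension_error)                   # the cost of the discrete scale
```

Neither error is a mistake in the ordinary sense; each is the structural price of representing a continuum with finite means, which is the point of the paragraph above.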
In so far as quantification is necessary to the scientific method, finite dimension errors are necessary to the scientific method. In several posts (e.g., Axioms and Postulates in Strategy) I have cited Carnap’s tripartite distinction among scientific concepts, the three being classificatory, comparative, and quantitative concepts. Carnap characterizes the emergence of quantitative scientific concepts as the most sophisticated form of scientific thought, but in reviewing Carnap’s scientific concepts in the light of finite precision errors and finite dimension errors, it is immediately obvious that classificatory concepts and comparative concepts do not necessarily involve finite precision errors and finite dimension errors. It is only with the introduction of quantitative concepts that science becomes sufficiently precise that its precision forces compromises upon us. However, I should point out that classificatory concepts routinely force us to accept finite dimension errors, although they do not involve finite precision errors. The example given by Foucault, quoted above, illustrates the inherent tension in classificatory concepts.
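Carnap’s three kinds of concept can be sketched in code. The toy temperature example below is my own illustration (the threshold of 25 degrees is arbitrary, which is itself an instance of a finite dimension error): a classificatory concept sorts a thing into a kind, a comparative concept orders two things without assigning magnitudes, and a quantitative concept assigns a numerical magnitude, bringing finite precision errors with it.

```python
def classificatory(temp_c: float) -> str:
    """Sort a reading into a kind. The cutoff at 25.0 is arbitrary:
    a finite dimension error built into the classification itself."""
    return "hot" if temp_c >= 25.0 else "cold"

def comparative(a: float, b: float) -> bool:
    """Order two readings without assigning either a magnitude."""
    return a > b

def quantitative(temp_c: float, places: int = 2) -> float:
    """Assign a numerical magnitude, and with it a finite precision
    error, since the expansion must be cut off at some place."""
    return round(temp_c, places)

print(classificatory(30.0))      # classificatory: 'hot'
print(comparative(30.0, 20.0))   # comparative: True
print(quantitative(3.14159, 2))  # quantitative: 3.14
```

Note that only the third function forces a rounding compromise, while the first quietly forces a truncation of the continuum into two bins, matching the observation above that classificatory concepts involve finite dimension errors but not finite precision errors.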
We accept finite precision errors and finite dimension errors as the price of doing science, and indeed as the price of engaging in rational thought. As Foucault implied in the above quote, the healthy and sane mind must draw lines and define limits and call a halt to things. Sometimes these limits are close to being arbitrary. We retain the ambition of “carving nature at the joints,” but we accept that we can’t always locate the joint but at times must cleave the carcass of nature regardless.
For this willingness to draw lines and establish limits and to call a halt to proceedings I will give the name The Truncation Principle, since it is in virtue of cutting off some portion of the world and treating it as though it were a unified whole that we are able to reason about the world.
As I mentioned above, I have discussed this problem previously, and in my discussion I noted that I wanted to give an exposition of a principle and a fallacy, but that I did not have a name for it yet, so I called it An Unnamed Principle and an Unnamed Fallacy. Now I have a name for it, and I will use this name, i.e., the truncation principle, from now on.
Note: I was tempted to call this principle the “baby retention principle” or even the “hang on to your baby principle” since it is all about the commonsense notion of not throwing out the baby with the bathwater.
In An Unnamed Principle and an Unnamed Fallacy I initially formulated the principle as follows:
The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.
What I most want to highlight is that when someone points out there are gray areas that seem to elude classification by any clear cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.
A distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
I hope to be able to elaborate on this formulation as I continue to think about the truncation principle and its applications in philosophical, formal, and scientific thought.
Usually when we hear “truncation” we immediately think of the geometrical exercise of regularly cutting away parts of the regular (Platonic) solids, yielding truncated polyhedra and converging on rectified polyhedra. This is truncation in space. Truncation in time, on the other hand, is what is more commonly known as historical periodization. How exactly one historical period is to be cut off from another is always problematic, not least due to the complexity of history and the sheer number of outliers that seem to falsify any attempt at periodization. And yet, we need to break history up into comprehensible chunks. When we do so, we engage in temporal truncation.
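Temporal truncation can itself be expressed as code. The periodization below is a deliberately crude, conventional one, and the boundary years (476, 1453, 1789) are exactly the sort of contestable cutoffs the truncation principle licenses; they are offered here as illustration, not as a defensible scheme:

```python
import bisect

# Hypothetical period boundaries: every one of these cutoff years
# is conventional and contestable, which is precisely the point.
boundaries = [476, 1453, 1789]
periods = ["Antiquity", "Middle Ages", "Early Modern", "Modern"]

def period_of(year: int) -> str:
    """Truncate continuous historical time into a named chunk
    by finding which interval the year falls into."""
    return periods[bisect.bisect_right(boundaries, year)]

print(period_of(1066))   # 'Middle Ages'
print(period_of(1492))   # 'Early Modern'
```

The outliers that falsify any periodization correspond here to years sitting at or near a boundary: the function must still return exactly one period, just as the historian must still draw the line somewhere.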
All the problems of philosophical logic that present themselves to the subtle and perceptive mind when contemplating a spatial truncation, as, for example, in defining the Pacific Ocean — where exactly does it end in relation to the Indian Ocean? — occur in spades in making a temporal truncation. Yet if rational inquiry is to begin (and here we do not even raise the question of where rational inquiry ends) we must make such truncations, and our initial truncations are crude and mostly ad hoc concessions to human finitude. Thus I introduce the truncation principle as an explicit justification of truncations as we employ them throughout reasoning.
And, as if we hadn’t already laid up enough principles and distinctions for today, here is a principle of principles of distinctions: every principled distinction implies a fallacy that takes the form of neglecting this distinction. With an ad hoc distinction there is no question of fallacy, because there is no principle to violate. Where there is a principle involved, however, the violation of the principle constitutes a fallacy.
Contrariwise, every fallacy implies a principled distinction that ought to have been made. If we observe the appropriate principled distinctions, we avoid fallacies, and if we avoid fallacies we appropriately distinguish that which ought to be distinguished.
. . . . .
. . . . .
. . . . .