25 March 2013
In my last post, The Problem with Diachronic Extrapolation, I attempted to show how diachronic extrapolation, while the most familiar form of futurism, is often misleading because it fails to adequately account for synchronic interactions as a diachronic strategic trend develops. In other posts concerned with unintended consequences I have emphasized that, in the long term, unintended consequences often outweigh intended consequences. Unintended consequences are the result of synchronic interactions that were not foreseen and that were no part of diachronic agency, and in those cases in which unintended consequences swamp intended consequences, the synchronic interactions have proved more decisive in shaping the future than diachronic causality.
In my post on The Problem with Diachronic Extrapolation I made several assertions that clearly imply the limitation of inferences from the present to the future, which also implies the limitation of inferences from the present to the past. This brings up issues that go far beyond futurism.
In that post I wrote:
“…diachrony over significant periods of time cannot be pursued in isolation, since any diachronic extrapolation will interact with changed conditions over time, and this interaction will eventually come to constitute the consequences as much as the original trend diachronically extrapolated.”
“…the most frequent form of failed futurism is to take a trend in the present and to project it into the future, but any futurism worthy of the name must understand events in both their synchronic and diachronic context; isolation from succession in time is just as invidious as isolation from interaction across time…”
The reader may have noticed the resemblance of this species of failed futurism to uniformitarianism: instead of taking a strategic trend acting at present and extrapolating it into the future, uniformitarianism takes a physical force acting in the present and extrapolates it into the future (or, as is more likely the case in geology, into the past). This idea of uniformitarianism is usually expressed as, “the present is key to the past,” and we might similarly express the parallel form of futurism as being, “the present is key to the future.” These two claims — the present is the key to the past and the present is the key to the future — are logically equivalent since, as I pointed out previously, every present is the future of some past, and the past of some future.
Since these interpretations of uniformitarianism involve uniformity across past and future, these formulations closely resemble formulations of induction also stated in terms of past and future, as when the logical problem of induction is formulated, “Will the future be like the past?” It is at this point that the philosophy of time, the philosophy of history, the philosophy of science, and futurism all coincide, because it concerns a problem that all have in common.
Stephen Jay Gould noticed this similarity of uniformitarianism and induction in his first published paper, “Is uniformitarianism necessary?” Gould, of course, became famous for his critique of uniformitarianism, and for his alternative to it, punctuated equilibrium (for which he shares the credit with Niles Eldredge). In this early paper Gould distinguished between substantive uniformitarianism and methodological uniformitarianism. He tried to show that the former is simply false, and that the latter, methodological uniformitarianism, is now subsumed under the scientific status of geology and paleontology. Here is how Gould put it:
“…we see that methodological uniformitarianism amounts to an affirmation of induction and simplicity. But since these principles belong to the modern definition of empirical science in general, uniformitarianism is subsumed in the simple statement: ‘geology is a science’. By specifically invoking methodological uniformitarianism, we do little more than affirm that induction is procedurally valid in geology.”
Stephen Jay Gould, “Is uniformitarianism necessary?” American Journal of Science, Vol. 263, March 1965, p. 227
That is to say, the earth sciences use the scientific method, which Gould characterizes in terms of inductive logic and the principle of parsimony (I would argue that Gould is also assuming methodological naturalism) — therefore everything that is worth saving in uniformitarianism is already secured by the scientific status of geology, and therefore uniformitarianism is dispensable. Having once served an important function in science, uniformitarianism has now, Gould contends, become an obstacle to progress.
As I noted above, Gould didn’t merely assert that uniformitarianism was no longer necessary, but devoted his career to arguing for an alternative, punctuated equilibrium, which asserts that long periods of stasis are interrupted by catastrophic discontinuities. While much has been written about uniformitarianism vs. punctuated equilibrium, I see this as the thin end of the wedge for considering all kinds of alternatives to strict uniformitarianism, and to this end I think we would do well to explore all possible patterns of development, whether uniform (slow, gradual, incremental), punctuated (sudden, catastrophic, discontinuous), or otherwise.
Of course, we could easily produce more sophisticated formulations of uniformitarianism that would avoid the subsequent problems that have been raised, but this is the path that leads to Ptolemaic epicycles and attempts to “save the appearances,” whereas what we want is a rich mixture of theoretical innovation from which we can try many different models and select for further development those that are most true to the world.
Since the philosophy of time, the philosophy of history, the philosophy of science, and futurism all coincide at the point represented by the problem of the relationship of parts of time to other parts of time (and the idea of temporal parts is itself philosophically contested), all of these disciplines stand to learn something of value from exploring alternatives to uniformitarianism. In so far as futurism is dominated by nomothetic diachrony, and constitutes a kind of historical uniformitarianism, very different forms of futurism might emerge from a careful study of the alternatives to uniformitarianism, or merely from a recognition that, as Gould put it, uniformitarianism is no longer necessary and something of an anachronism. If there is anything of which futurists ought to beware, being an anachronism must be close to the top of the list.
. . . . .
. . . . .
. . . . .
17 March 2012
One of the greatest contributions to science in the twentieth century was Jane Goodall’s study of chimpanzees in the wild at Gombe, Tanzania. Although Goodall’s work represents a major advance in ethology, it did not come without criticism. Here is how Adrian G. Weiss described some of this criticism:
Jane received her Ph.D. from Cambridge University in 1965. She is one of only eight other people to earn a Ph.D. without a bachelor’s (Montgomery 1991). Her adviser, Robert Hinde, said her methods were not professional, and that she was doing her research wrong. Jane’s major mistake was naming her “subjects”. The animals should be given numbers. Jane also used descriptive, narrative writing in her observations and calculations. She anthropomorphized her animals. Her colleagues and classmates thought she was “doing all wrong”. Robert Hinde did approve her thesis, even though she returned with all of his corrections with the original names and anthropomorphizing.
Most innovative science breaks the established rules of the time. If the innovative science is eventually accepted, it eventually also becomes the basis of a new orthodoxy. Given time, that orthodoxy will be displaced as well, as more innovative work demonstrates new ways of acquiring knowledge. As the old orthodoxy passes out of fashion it often falls either into neglect or may become the target of criticism as vicious as that directed at new and innovative research.
I have to imagine that it was this latter phenomenon of formerly accepted scientific discourses falling out of favor and becoming the target of ridicule that inspired one of Foucault’s most famous quotes (which I have cited previously on numerous occasions): “A real science recognizes and accepts its own history without feeling attacked.” Here is the same quote with more context:
Each of my works is a part of my own biography. For one or another reason I had the occasion to feel and live those things. To take a simple example, I used to work in a psychiatric hospital in the 1950s. After having studied philosophy, I wanted to see what madness was: I had been mad enough to study reason; I was reasonable enough to study madness. I was free to move from the patients to the attendants, for I had no precise role. It was the time of the blooming of neurosurgery, the beginning of psychopharmacology, the reign of the traditional institution. At first I accepted things as necessary, but then after three months (I am slow-minded!), I asked, “What is the necessity of these things?” After three years I left the job and went to Sweden in great personal discomfort and started to write a history of these practices. Madness and Civilization was intended to be a first volume. I like to write first volumes, and I hate to write second ones. It was perceived as a psychiatricide, but it was a description from history. You know the difference between a real science and a pseudoscience? A real science recognizes and accepts its own history without feeling attacked. When you tell a psychiatrist his mental institution came from the lazar house, he becomes infuriated.
Truth, Power, Self: An Interview with Michel Foucault — October 25th, 1982, Martin, L. H. et al (1988) Technologies of the Self: A Seminar with Michel Foucault, London: Tavistock. pp.9-15
It remains true that many representatives of even the most sophisticated contemporary sciences react as though attacked when reminded of their discipline’s history. This is true not least because much of science has an unsavory history — at least, by contemporary standards, a lot of scientific history is unsavory, and this gives us reason to believe that many of our efforts today will, in the fullness of time, be consigned to the unsavory inquiries of the past which carry with them norms, evaluations, and assumptions that are no longer considered to be acceptable in polite society. This is, of course, deeply ironic (I could say hypocritical if I wanted to be tendentious) since the standard of acceptability in polite society is one of the most stultifying norms imaginable.
It has long been debated within academia whether history is a science, or an art, or perhaps even a sui generis literary genre with a peculiar respect for evidence. There is no consensus on this question, and I suspect it will continue to be debated so long as the Western intellectual tradition persists. History, at least, is a recognized discipline. I know of no recognized discipline of the study of civilizations, which in part is why I recently wrote The Future Science of Civilizations.
There is, at present, no science of civilization, though there are many scientists who have written about civilization. I don’t know if there are any university departments of “Civilization Studies,” but if there aren’t, there should be. We can at least say that there is an established literary genre, partly scientific, that is concerned with the problems of civilization (including figures as diverse as Toynbee and Jared Diamond). Even among philosophers, who have a great love of writing “The philosophy of x,” there are very few works on “the philosophy of civilization” — some, yes, but not many — and, I suspect, few if any departments devoted to the philosophy of civilization. This is a regrettable ellipsis.
When, in the future, we do have a science of civilization, and perhaps also a philosophy of civilization (or, at very least, a philosophy of the science of civilization), this science will have to come to terms with its past as every science has had to (or eventually will have to). The prehistory of the science of civilization is already fairly well established, and there are several known classics of the genre. Many of these classics of the study of civilization are as thoroughly unsavory by contemporary standards as one could possibly hope. The history of pronouncements on civilization is filled with short-sighted, baldly prejudiced, privileged, ethnocentric, and thoroughly anthropocentric formulations. For all that, they still may have something of value to offer.
A technological typology of human societies that is no longer in favor is the tripartite distinction between savagery, barbarism, and civilization. This belongs to the prehistory of the prehistory of civilization, since it establishes the natural history of civilization and its antecedents.
Edward Burnett Tylor proposed that human cultures developed through three basic stages consisting of savagery, barbarism, and civilization. The leading proponent of this savagery-barbarism-civilization scale came to be Lewis Henry Morgan, who gave a detailed exposition of it in his 1877 book Ancient Society (the entire book is conveniently available online for your reading pleasure). A quick sketch of the typology can be found at ANTHROPOLOGICAL THEORIES: Cross-Cultural Analysis.
One of the interesting features of Morgan’s elaboration of Tylor’s idea is his concern to define his stages in terms of technology. From the “lower status of savagery” with its initial use of fire, through a middle stage at which the bow and arrow is introduced, to the “upper status of savagery” which includes pottery, each stage of human development is marked by a definite technological achievement. Similarly with barbarism, which moves through the domestication of animals, irrigation, metal working, and a phonetic alphabet. This breakdown is, in its own way, more detailed than many contemporary decompositions of human social development, as well as being admirably tied to material culture and therefore amenable to confirmation and disconfirmation through archaeological research.
Today, of course, we are much too sophisticated to use terms like “savagery” or “barbarism.” These terms are now held in ill repute, as they are thought to suggest strongly negative evaluations. A friend of mine who studied anthropology told me that the word “primitive” is now referred to as “the P-word” within the discipline, so unacceptable has it become. To call a people (even an historical people now extinct) “savage” is similarly considered beyond the pale. We don’t call people “savage” or “primitive” any more. But the danger of these terminological obsessions is that we get hung up on the terms and no longer consider theories on their theoretical merits. Jane Goodall’s theoretical work was eventually accepted despite her use of proper names in ethology, and now it is not at all uncommon for researchers to name their subjects that belong to other species.
Some theoreticians, moreover, have come to recognize that there are certain things that can be learned through sympathizing with one’s subject that simply cannot be learned in any other way (score one posthumously for Bergson’s conception of “intellectual sympathy”). Of course, science need not limit itself to a single paradigm of valid research. We can have a “big tent” of science with ample room for many methodologies, and hopefully also with plenty of room for disagreements.
It would be an interesting exercise to take a “dated” work like Lewis Henry Morgan’s book Ancient Society, leave the theoretical content intact, and change only the names. In fact, we could formalize Morgan’s gradations, using numbers instead of names just as Jane Goodall was urged to do. I suspect that Morgan’s work would be treated rather better in this case in comparison to the contemporary reception of its original terminology. We ought to ask ourselves why this is the case. Perhaps it is too much to hope for a “big tent” of science so capacious that it could hold Lewis Henry Morgan’s terminology alongside that of contemporary anthropology, but we have arrived at a big tent of science large enough to hold Jane Goodall’s proper names alongside tagged and numbered specimens.
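That suggested exercise can be sketched directly. A minimal formalization of Morgan's gradations as numbered stages, each tied to the technological marker paraphrased above; the marker shown for civilization itself is a placeholder of my own, since the text does not specify one:

```python
# Morgan's named gradations replaced by ordinal stage numbers, as the
# exercise above proposes. Markers are paraphrased from the summary in
# the text; the marker for stage 7 is assumed for illustration only.
MORGAN_STAGES = [
    (1, "lower savagery",   "use of fire"),
    (2, "middle savagery",  "bow and arrow"),
    (3, "upper savagery",   "pottery"),
    (4, "lower barbarism",  "domestication of animals"),
    (5, "middle barbarism", "irrigation and metal working"),
    (6, "upper barbarism",  "phonetic alphabet"),
    (7, "civilization",     "writing and recorded history"),  # assumed marker
]

def stage_number(marker_keyword: str) -> int:
    """Return the ordinal stage whose technological marker mentions the keyword."""
    for number, _name, marker in MORGAN_STAGES:
        if marker_keyword in marker:
            return number
    raise KeyError(marker_keyword)
```

With the names stripped away, the typology reduces to an ordered sequence of material-culture markers, which is exactly the numbered, affect-free form in which Goodall's critics wanted her chimpanzees recorded.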
. . . . .
. . . . .
. . . . .
19 November 2009
Yesterday’s meditation upon The Fungibility of the Biome led me to think in very general terms about scientific knowledge. It is one of the remarkable things about contemporary natural science — following rigorously, as it does, the methodological naturalism toward which it has struggled over the past several hundred years since the advent of the Scientific Revolution — that the more complex and sophisticated it becomes, the more closely science is in touch with the details of ordinary experience. This is almost precisely the opposite of what one finds with most intellectual traditions. As an intellectual tradition develops it often becomes involuted and self-involved, veering off in oddball directions and taking unpredictable tangents that take us away from the world and our immediate experience of it, not closer to it. The history of human reason is mostly a history of wild goose chases.
In fact, Western science began exactly in this way, and in so doing gave us the most obvious example of an involuted, self-referential intellectual tradition that was more interested in building on a particular cluster of ideas than in learning about the world. This we now know as scholasticism, when the clerics and monks of medieval Europe read and re-read, studied and commented upon, the works of Aristotle. For a thousand years, Aristotle was synonymous with natural science.
Aristotle is not to be held responsible for the non-science that was done in his name and, to add insult to injury, was called science. If Aristotle had been treated as a point of departure rather than as dogma to be defended and upheld as doctrine, medieval history would have been very different. But at that time Western history was not yet prepared for the wrenching change that science, when properly pursued, forces upon us, both in terms of our understanding of the world and the technology it makes possible (and the industry made possible in turn by technology).
Science forces wrenching change upon us because it plays havoc with some of the more absurd notions that we have inherited from our earlier, pre-scientific history. Pre-scientific beliefs suffer catastrophic failure when confronted with their scientific alternatives, however gently the science is presented in the attempt to spare the feelings of those still wedded to the beliefs of the past.
Once we get past our inherited absurdities, as I implied above, we can see the world for what it is, and science puts us ever more closely in touch with the world as it is. Allow me to mention two examples of things that I have recently learned:
Example 1) We know now that not only does the earth circle the sun, and the sun spins with the Milky Way, but we know that this circling and spinning is irregular and imperfect. The earth wobbles in its orbit, and in fact the sun bobs up and down in the plane of the Milky Way as the galaxy spins. This wobbling and bobbing has consequences for life on earth because it changes the climate, sometimes predictably and sometimes unpredictably. But regularity is at least partly a function of the length of time we consider. The impact of extraterrestrial objects on the earth seems like a paradigmatic instance of catastrophism, and the asteroid impact that likely contributed to the demise of the dinosaurs is thought of as a catastrophic punctuation in the history of life, but we now also know that the earth is subject to periods of greater bombardment by extraterrestrial bodies when it is passing through the galactic plane. Viewed from a perspective of cosmological time, asteroid impacts are regular and statistically predictable. And it happens that about 65 million years ago we were passing through the galactic plane and we caught a collision as a result. All of this makes eminently good sense. Matter is present at greater density in the galactic plane, so we are far more likely to experience collisions at this time. All of this accords with ordinary experience.
Example 2) We have had several decades to get used to the idea that the continents and oceans of the earth are not static and unchanging, but dynamic and dramatically different over time. A great many things that remain consistent during the course of one human lifetime have been mistakenly thought to be eternal and unchanging. Now we know that the earth changes and in fact the whole cosmos changes. Even Einstein had to correct himself on this account. His first formulation of general relativity included the cosmological constant in order to maintain the cosmos according to its presently visible structure. Now cosmological evolution is recognized and we detail the lives of stars as carefully as we detail the natural history of a species. Now that we know something of the natural history of our planet, and we know that it changes, we find that it changes according to our ordinary experience. In the midst of an ice age, when much of the world’s water is frozen and burdening the continental plates as ice, it turns out that the weight of the ice forces the continents lower as they float in the magma beneath them. During the interglacial periods, when much or most of the ice melts, unburdened of the weight the continents bob up again and rise relative to the oceanic plates that have not been weighted down with ice. And, in fact, this is how things behave in our ordinary experience. It is perhaps also possible (though I don’t know if this is the case) that the weight of ice, melted and now run into the oceans, becomes additional water weight pressing down on the oceanic plates, which could sink a little as a result.
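The floating-body reasoning in this example can be made quantitative with Archimedes' principle: the added weight of an ice sheet must be balanced by extra displaced mantle. A minimal sketch, assuming round-number densities for ice and mantle (illustrative figures, not measurements):

```python
# Isostatic depression of a floating plate loaded with ice, by Archimedes'
# principle: the ice load rho_ice * h_ice is balanced by the extra
# displaced mantle rho_mantle * d, so d = (rho_ice / rho_mantle) * h_ice.
# Densities are illustrative round numbers.
RHO_ICE = 900.0      # kg/m^3, roughly the density of glacial ice
RHO_MANTLE = 3300.0  # kg/m^3, a commonly assumed upper-mantle figure

def isostatic_depression(ice_thickness_m: float) -> float:
    """Depth (m) by which a floating plate sinks under an ice sheet of given thickness."""
    return (RHO_ICE / RHO_MANTLE) * ice_thickness_m

# A 3 km ice sheet depresses the crust by a bit over 800 m on this account:
print(round(isostatic_depression(3000.0)))  # prints 818
```

This is the same arithmetic as a weighted boat riding lower in the water, which is the point of the example: the behavior of continents under ice accords with the most ordinary experience of floating bodies.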
Last night I was reading A Historical Introduction to the Philosophy of Science by John Losee (an excellent book, by the way, that I heartily recommend) and happened across this quote from Larry Laudan (p. 213):
…the degree of adequacy of any theory of scientific appraisal is proportional to how many of the [preferred intuitions] it can do justice to. The more of our deep intuitions a model of rationality can reconstruct, the more confident we will be that it is a sound explication of what we mean by ‘rationality’.
Contemporary Anglo-American analytical philosophers seem to love to employ the locution “deep intuitions” and similar formulations in the way that a few years ago (or a few decades ago) phenomenologists never tired of writing about the “richness of experience.” Certainly experience is rich, and certainly there are deep intuitions, but to have to call attention to either by way of awkward locutions like these points to a weakness in formulating exactly what it is that is rich about experience, and exactly what it is that is deep about a deep intuition.
And this, of course, is the whole problem in a nutshell: what exactly is a deep intuition? What intuitions ought to be considered to be preferred intuitions? I suggest that our preferred intuitions ought to be those most common and ordinary intuitions that we derive from our common and ordinary experience, things like the fact that floating bodies, when weighted down, float a little lower in the water, or whatever medium in which they happen to float. It is in this spirit that we recall the words that Robert Green Ingersoll attributed to Ferdinand Magellan:
“The church says the earth is flat, but I know that it is round, for I have seen the shadow on the moon, and I have more faith in a shadow than in the church.”
The quote bears exposition. Almost certainly Magellan never said it, or even anything like it. Nevertheless, we ought to be skeptical for reasons other than those cited by the most familiar skeptics, who like to point out that the church never argued for the flatness of the earth. We ought to be skeptical because Magellan was a deeply pious man, who lost his life before his crew completed the circumnavigation because he was so intent upon converting to Catholicism the many peoples he encountered. Eventually he encountered peoples who did not want to be converted, and they took up arms and killed him in an entirely unnecessary engagement. But what remains interesting in the quote, and its implied reference to Galileo’s early observations of the moon, is not so much about flatness as about perfection. Aristotle in particular, and ancient Greek philosophy in general, held that the heavens were a realm of perfection in which all bodies were perfectly spherical and moved in perfectly circular motions through the sky. We now know this to be false, and Galileo was among the first to graphically demonstrate this with his sketches of superlunary mountains.
What does the word “superlunary” refer to? It is a term that derives from pre-Copernican (or, if you will, Ptolemaic) astronomy. When it was believed that the earth was the center of the universe, the closest extraterrestrial body was believed to be the moon (this happened to be correct, even if much in Ptolemaic astronomy was not correct). Everything below the moon, i.e., everything sublunary, was believed to be tainted and imperfect, contaminated with the dirt of lowly things and the stain of Original Sin, while everything above the moon, i.e., everything superlunary, including all other known extraterrestrial bodies, was believed to be free of this taint and therefore perfect and unblemished. Thus it was deeply radical to observe an “imperfection” on the supposedly perfect spheres beyond the earth, as it was equally radical to discover “new” extraterrestrial bodies that had never been seen before, like the moons of Jupiter.
Both of these heresies point to our previous tendency to attribute an eternal and unchanging status to things beyond the earth. It was believed impossible to discover “new” extraterrestrial bodies because the heavens, after all, were complete, perfect, and unchanging. For the same reason, one should not be able to view anything as irregular as mountains or shadows on extraterrestrial bodies. Once we get beyond the absurd postulate of extraterrestrial perfection, we can see the world with our own eyes, and for what it is. And when we begin to do so, we do not negate the properties of perfection once attributed to the superlunary world so much as we find them to be simply irrelevant. The heavens, like the earth, are neither perfect nor imperfect. They simply are, and they are what they are. To attribute evaluative or normative content or significance to them, such as believing in their perfection, is only to send us off on one of those oddball directions or unpredictable tangents that I mentioned in the first paragraph.
. . . . .
. . . . .
. . . . .