2 May 2016
Darwin’s Thesis on the Origin of Civilization
and its extrapolation to exocivilizations
In the scientific study of civilization we are beginning at the beginning, because there is no established body of scientific knowledge about civilization — much historical knowledge, to be sure, but no science of civilization, sensu stricto, and therefore no scientific knowledge sensu stricto — and this demands that we begin with the simplest and most obvious propositions about civilization. The simplest and most obvious propositions about civilization are those that most discussions of civilization would simply pass over in silence as necessary presuppositions, or would dismiss by hand-waving and the assertion, “It is obvious that…” We will take a different point of view. Only a mathematician would think that the Jordan curve theorem was an idea in need of proof, and only someone engaged in attempting to formulate a science of civilization would think that the assertion that civilization originates in a pre-civilized condition is a claim requiring discussion.
Our point of departure in this discussion will be what I call Darwin’s Thesis on the origins of civilization, or, more simply, Darwin’s Thesis. I call this Darwin’s Thesis (and called it such in my presentation “What kind of civilizations build starships?”) because of the following passage from Darwin about the origins of civilization:
“The arguments recently advanced… in favour of the belief that man came into the world as a civilised being and that all savages have since undergone degradation, seem to me weak in comparison with those advanced on the other side. Many nations, no doubt, have fallen away in civilisation, and some may have lapsed into utter barbarism, though on this latter head I have not met with any evidence… The evidence that all civilised nations are the descendants of barbarians, consists, on the one side, of clear traces of their former low condition in still-existing customs, beliefs, language, &c.; and on the other side, of proofs that savages are independently able to raise themselves a few steps in the scale of civilisation, and have actually thus risen.”
Charles Darwin, The Descent of Man, Chapter V (I have left Darwin’s spelling in its Anglicized form.)
Darwin was here taking the same naturalistic stance in regard to civilization that he had earlier taken in regard to biology. Darwin made biology scientific by making it a domain of research approached by way of methodological naturalism; prior to Darwin there was biology of a kind, but not any study of biology that could be reconciled with methodological naturalism. Darwin applied this same reasoning to civilization, and this is the reasoning we must apply to civilization if we are to formulate a science of civilization that can be reconciled with methodological naturalism.
As far as ideas about civilization go, this is extremely basic. However, I will again stress the need to begin a science of civilization with the most basic and rudimentary propositions possible. While this is a proposition so rudimentary as to be mundane, there can be no more interesting question for the science of civilization than that of the origin of civilization (the question of the end of civilization is equally interesting, but I wouldn’t say it is more interesting).
While the simplest theses on civilization seem so mundane as to be uninteresting, they can nevertheless be deductively powerful in their application. We can only address the longevity of a civilization, for example, once we have established a point in time at which civilization begins, and counting forward in whatever temporal units we care to employ up to its demise (which also must be defined, if the civilization in question has come to an end), or up to the present day (if the civilization in question is still in existence).
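The counting itself is trivial, but writing it out makes the definitional dependencies explicit: nothing can be computed until an origin (and, where applicable, a demise) has been fixed. A minimal sketch in code — the function name and the example date are illustrative assumptions, not claims from the text:

```python
from typing import Optional

def civilization_longevity(origin_year: int, demise_year: Optional[int] = None,
                           present_year: int = 2016) -> int:
    """Longevity in years, counted from a defined origin either to a
    defined demise or, if the civilization still exists, to the present."""
    end = demise_year if demise_year is not None else present_year
    if end < origin_year:
        raise ValueError("demise cannot precede origin")
    return end - origin_year

# Terrestrial civilization dated from the Neolithic emergence of cities:
print(civilization_longevity(origin_year=-8000))  # 10016, i.e. about ten thousand years
```

The entire burden of the calculation falls on the two definitional parameters, which is exactly the point made above.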
According to Darwin’s Thesis, then, civilization is descended from a prior savage or barbaric condition (not terms we would likely employ today, but certainly terms we still understand). How are we to characterize this pre-civilized condition of humanity? What constitutes the non-civilization that preceded civilization?
A somewhat discerning distinction, albeit one with moral overtones, was made between savagery, barbarism, and civilization. Like the “three age” system of prehistory — stone age, bronze age, iron age — we still find traces of these distinctions in contemporary thought. Here is how I described it previously:
“Edward Burnett Tylor proposed that human cultures developed through three basic stages consisting of savagery, barbarism, and civilization. The leading proponent of this savagery-barbarism-civilization scale came to be Lewis Henry Morgan, who gave a detailed exposition of it in his 1877 book Ancient Society… A quick sketch of the typology can be found at Anthropological Theories: Cross-Cultural Analysis. One of the interesting features of Morgan’s elaboration of Tylor’s idea is his concern to define his stages in terms of technology. From the ‘lower status of savagery’ with its initial use of fire, through a middle stage at which the bow and arrow is introduced, to the ‘upper status of savagery’ which includes pottery, each stage of human development is marked by a definite technological achievement. Similarly with barbarism, which moves through the domestication of animals, irrigation, metal working, and a phonetic alphabet.”
Elsewhere I suggested that the non-civilization prior to civilization could be called proto-civilization. I just re-read my post on proto-civilization and now I find it inadequate, but I still endorse at least this much of what I said there:
“In the case of civilization, a state-of-affairs existed long before the idea of civilization was made explicit. But in projecting the idea of civilization backward in history, we already have the idea suggested by a particular cultural milieu, and the question becomes whether this idea can be applied further than the context in which it was initially proposed.”
This would be one methodology to employ: take the concept of civilization as it has been elaborated and seek to apply it to past social structures; determining at what point this concept no longer applies gives a point in time for the origin of civilization. This could be called the “retroactive method.”
Since we now possess far more archaeological data than was available when the concept of civilization was first formulated, the retroactive method has new information to work with that it did not have at its inception. This is one of the points that I attempted to make, however poorly I did so, in my post on proto-civilization: we have an enormous amount of archaeological data on the Upper Paleolithic and Early Neolithic in the Old World, which is usually described in terms of “cultures” rather than “civilizations.” But when European explorers of the Early Modern period came to the New World, they encountered peoples with social institutions that we today call civilizations, though these civilizations were closer to the “Stone Age” of the Old World than to the early civilizations of Egypt and Mesopotamia (to take two paradigm cases of civilization).
An alternative to the retroactive method would be to study the artifacts of the past on their own merits, to construct a definition of civilization on the basis of the earliest known human societies (on the basis of their material culture), and then apply this conception of civilization forward in time (for lack of a better term I will call this the proactive method, simply to contrast it to the retroactive method). It is arguable that some archaeologists do in fact follow this method, but I don’t know of anyone who has explicitly advanced this procedure as desirable (much less as necessary), although it does bear some resemblance to the implicit formalism of the cultural processual school in archaeological thought.
Both retroactive and proactive methods incorporate obvious problems that derive from parachronic distortions of evidence (the most obvious parachronism is the familiar idea of an anachronism, i.e., a survival from the past preserved into the present, where it is obviously out of place; the contrary parachronic distortion is that of projecting the present into the past).
To pull back from the provincial considerations of civilization studied by archaeology to date — that is to say, exclusively terrestrial civilizations — we can further develop the idea of Darwin’s Thesis in a cosmological context. Once we do this, we immediately understand that we have been asking questions focused on a particular set of conditions that are characteristic of civilizations during the Stelliferous Era, and our ideas worked out for terrestrial civilization (civilizations of planetary endemism during the Stelliferous Era) may not apply more generally to the largest scales of civilization achieved (or which may yet be achieved) in the cosmos.
Civilizations during the Degenerate Era may possess a different character due to their need to derive energy flows from sources other than stellar flux, which latter defines the conditions of the origins of civilization from intelligent biological agents during the Stelliferous Era, which might also be called the Age of Planetary Endemism. If the Degenerate Era begins with the universe having been exhaustively settled or inhabited by life and civilization, this densely inhabited universe not only would prevent the emergence of new civilizations, but also would mean an end to this living cosmos of starlight. In this case the Degenerate Era begins with what I have called the End-Stelliferous Mass Extinction Event (ESMEE), when widely distributed life and civilization of the Stelliferous Era, primarily supported by energy flows from stellar flux (and concentrated on planetary surfaces), comes to an end as the stars wink out one by one.
The cohort of emergent complexity that survives this transition is likely to be a post-civilization successor institution that is (by this time in the evolution of the universe) further removed from the origins of civilization than we are today removed from the origin of the universe. At this point, the origins of emergent complexity will be a distant question, largely inapplicable to contemporaneous concerns, and the central question will be what of the Stelliferous Era can survive into the Degenerate Era, and how it can perpetuate itself in a universe converging on heat death.
Would these civilizations of the Degenerate Era be newly originating civilizations, or would they be derivative from civilizations of the Stelliferous Era? The obvious answer would seem to be that these civilizations would be derivative, except that over such cosmological spans of time the concept of civilization (and the threshold of what constitutes a civilization) is likely to evolve as much as, if not more than, civilization itself. As civilization develops, and a greater degree of science, technology, and intellectual achievement is believed to be indispensable to what constitutes civilization, civilization may be redefined as something close to prevailing conditions, and everything prior to this is redefined as proto-civilization. For example, civilization today might be considered unimaginable without the conveniences of modern life, and everything prior is consigned to barbarism. This reasoning can be extended to hold that civilization is unimaginable without fusion energy, without strong AI, without interstellar travel, and so on. All of this is entirely consistent with Darwin’s Thesis, which holds regardless of whether we consider the Upper Paleolithic to be utter savagery, or 2016 to be utter savagery.
If we consciously make an effort to formulate and to retain a comprehensive conception of civilization, one that is not continually revised forward in time in the light of the later developments of civilization, we can avoid the above problem, and it is this approach that gives us longer ages for our civilization today. I have often mentioned that it was once commonplace, and perhaps still is, to fix the origins of civilization at the origins of written language (i.e., the origins of the “historical period” sensu stricto), but scientific historiography has been slowly chipping away at the distinction between history and prehistory until it is no longer tenable. Hence I identify the origins of civilization with the emergence of cities during or shortly after the Neolithic Agricultural Revolution, which makes our civilization about ten thousand years old, rather than five thousand years old.
As our archaeological knowledge of the past improves, we may be able to set quantifiable conditions for the origins of civilization (say, a number of cities with a given population size, or a particular degree of sophistication in metallurgy, which latter seems to me to mark the ultimate origins of technological civilization). Again, Darwin’s Thesis is entirely in accord with this method also. Moreover, I think that this method gives a greater degree of independence to the determination of the origins of civilization, as it would also give us metrics by which we could determine the independent origin of a new civilization, say, even in the Degenerate Era, if this were to prove possible (which we really don’t know at present).
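Such quantifiable conditions could be stated as an explicit predicate. The following sketch is purely illustrative — the thresholds, the metallurgy scale, and the function name are my assumptions, not values argued for above:

```python
def is_civilization(settlement_populations, metallurgy_level,
                    min_cities=1, min_city_population=10_000,
                    min_metallurgy=1):
    """Toy predicate for the origin of civilization: enough settlements
    large enough to count as cities, plus a minimum degree of
    metallurgical sophistication (0 = none, 1 = copper, 2 = bronze, 3 = iron)."""
    cities = [pop for pop in settlement_populations
              if pop >= min_city_population]
    return len(cities) >= min_cities and metallurgy_level >= min_metallurgy

is_civilization([12_000, 3_000], metallurgy_level=1)  # True on this toy scale
is_civilization([3_000, 4_000], metallurgy_level=2)   # False: no city-sized settlement
```

Notice that raising the metallurgy threshold from copper to iron would simply redate the origin of civilization later — precisely the definitional sensitivity discussed above, but now made explicit and adjustable.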
Beyond these concerns, and beyond the immediate scope of this post, we may need to posit a condition for the continuity of civilization — say, e.g., that metallurgical technology never lapses below a certain threshold — so that, once given Darwin’s Thesis and some definition of civilization, we can determine when a civilization has originated de novo, and when a civilization is an evolutionary mutation of an earlier civilization, or a developmental achievement of an earlier civilization, rather than something new in history. This applies whether we take the threshold of achievement to be the smelting of copper or the building of starships. For example, if a civilization can smelt copper (or better), and never loses this technological capacity, it retains a minimal degree of continuity with the first civilization capable of this achievement, when an unbroken continuity of this capacity can be shown from the origins of this technology forward to some arbitrary date in the future.
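The continuity condition can be put in the same schematic form: a capability record that never lapses below a threshold. Again a hedged sketch only, with the scale and names being illustrative assumptions:

```python
def continuous_capacity(record, threshold):
    """True if a capability level (e.g., 1 = copper smelting, 2 = bronze,
    3 = iron) never lapses below the threshold across a chronological
    record of a civilization's history — the proposed criterion for
    counting a later civilization as continuous with an earlier one."""
    return all(level >= threshold for level in record)

# Unbroken copper-or-better continuity across four periods:
continuous_capacity([1, 2, 2, 3], threshold=1)   # True
# A lapse below the threshold breaks continuity:
continuous_capacity([1, 2, 0, 3], threshold=1)   # False
```

On this sketch the same test works whether the threshold capability is copper smelting or starship construction; only the scale changes.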
. . . . .
. . . . .
. . . . .
. . . . .
A Wittgensteinian Approach to Civilization
One of my most frequently accessed posts is titled following Wittgenstein’s Tractatus Logico-Philosophicus section 5.6, “The limits of my language are the limits of my world” (“Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt”). I noted in Contextualizing Wittgenstein that this earlier post on Wittgenstein was posted on Reddit and as a result gained a large number of views — a larger number, at least, than my posts usually receive.
If there is a general principle that can be derived from Tractatus 5.6, one application of this general principle would be the idea that the limits of science are the limits of scientific civilization. In the same vein we could assert that the limits of agriculture are the limits of agrarian civilization (or even, “the limits of agriculture are the limits of biocentric civilization”), and the limits of technology are the limits of technological civilization, and so forth. Another way to express this idea would be to say, the limits of science are the limits of industrial-technological civilization, in so far as our industrial-technological civilization belongs to the genus of scientific civilizations.
Recently I have taken up the problem of scientific civilizations in Folk Concepts of Scientific Civilization, Types of Scientific Civilization, Suboptimal Civilizations, Addendum on Suboptimal Civilizations, David Hume and Scientific Civilization, The Relevance of Philosophy of Science to Scientific Civilization, and Addendum on the Stages of Civilization, inter alia. None of this, as yet, is a systematic treatment of the idea of scientific civilization, though there are many ideas here that can some day be integrated into a more comprehensive synthesis.
What does it mean to live in a scientific civilization constrained by the limits of science? One of the points that I sought to make in my earlier post on Tractatus 5.6 was a scientific interpretation of Wittgenstein’s aphorism, acknowledging that the different idioms we employ to think about the world involve different conceptions of the world. In that post I wrote, “…scientific theories often broaden our horizons and allow us to see and to understand things of which we were previously unaware. But a scientific theory, being a particular idiom as it is, may also limit us, and limit the way we see the world.” This is part of what it means to be constrained by the limits of science: our scientific idioms constrain the conceptual framework we use to understand ourselves and our civilization.
Significantly in this context, different scientific idioms are possible. Indeed, distinct sciences are possible. We have had an historical succession of scientific idioms, which could also be called a succession of distinct sciences — something that could be presented as a Wittgensteinian formulation of Thomas Kuhn — according to which one scientific paradigm has replaced another over time. There is also the unrealized possibility of different origins of science, and different developmental pathways of science, in different civilizations. This is an idea I explored in Types of Scientific Civilization.
A civilization might develop science in a different way than science emerged in terrestrial history. A civilization might begin with a different mathematical formalism or a different logic. Perhaps logic itself might begin with the kind of logical pluralism we know today, which would contrast sharply with the logical monism that has marked most of human history. Different sciences might develop in a different order. The ancient Greeks developed an axiomatic geometry, but no scientific biology. But the idea of natural selection is, in itself, no more difficult than the idea of axiomatic geometry, and could have developed first.
A civilization might fail to develop axiomatic geometry and instead develop a scientific biology in its earliest history — its equivalent of our classical antiquity — and this kind of early biological knowledge would probably take agricultural civilization in a profoundly different direction. There may be (somewhere in the universe) scientific agrarian civilizations that are qualitatively distinct from both agrarian-ecclesiastical civilization and industrial-technological civilization. Thus the developmental sequence of sciences in a civilization — which sciences are developed in what order, and to what extent — will shape the scientific civilization that eventually emerges from this sequence (if it does in fact emerge). Is this sequence an historical accident? That is a difficult question that I will not attempt to answer at present.
There are, then, many possibilities for scientific civilizations, and we have not, with the history of terrestrial civilizations, fully explored (much less exhausted) these possibilities. But scientific civilizations also come with limitations that are intrinsic to scientific knowledge. In my Centauri Dreams post, “The Scientific Imperative of Human Spaceflight,” I argued that the science of industrial-technological civilization, essentially narrowed by its participation in the STEM cycle that drives our civilization, is riddled with blind spots, and these blind spots mean that the civilization built on this science is riddled with blind spots.
This should not be a surprising conclusion, though I suspect few will agree with me. There is a comment on my Centauri Dreams post that implies I am arguing for the role of mystical experiences in civilization; this is not my purpose or my intention. This is simply a misunderstanding. But, in fact, the better I am understood probably the less likely it will be that others will agree with me. In another context, in A Fly in the Ointment, I argued that science is a particular branch of philosophy — that philosophy also known as methodological naturalism — which subverts the view (predictably prevalent in industrial-technological civilization) that if philosophy has any legitimacy at all, it is because it is a kind of marginal science in its own right. More often, philosophy is simply viewed as a kind of failed science.
Philosophy is not a kind of science. Science, on the contrary, is a kind of philosophy. This is not a common view today, but that is my framework for interpreting and understanding scientific civilization. It follows from this that a philosophical civilization would not necessarily be a kind of scientific civilization (the philosophy of such a civilization might or might not be the philosophy that we identify as science), but that our scientific civilization is a kind of philosophical civilization.
Philosophy is a much wider field of study, and it is from philosophy that we can expect to address the blind spots of science and the scientific civilization that has grown from science. So the limits both of science and scientific civilization can be addressed, but only from a more comprehensive perspective, and that more comprehensive perspective is not possible from within scientific civilization.
. . . . .
. . . . .
. . . . .
. . . . .
2 August 2015
For some philosophers, naturalism is simply an extension of physicalism, which was in turn an extension of materialism. Narrow conceptions of materialism had to be extended to account for physical phenomena not reducible to material objects (like theoretical terms in science), and we can similarly view naturalism as a broadening of physicalism in order to more adequately account for the world. (I have quoted definitions of materialism and physicalism in Materialism, Physicalism, and… What?.) But, coming from this perspective, naturalism is approached from a primarily reductivist or eliminativist point of view that places an emphasis upon economy rather than adequacy in the description of nature (on reductivism and eliminativism cf. my post Reduction, Emergence, Supervenience). Here the principle of parsimony is paramount.
One target of eliminativism and reductionism is a class of concepts sometimes called “folk” concepts. The identification of folk concepts in the exposition of philosophy of science can be traced to philosopher Daniel Dennett. Dennett introduced the term “folk psychology” in The Intentional Stance and thereafter employed the term throughout his books. Here is part of his original introduction of the idea:
“We learn to use folk psychology — as a vernacular social technology, a craft — but we don’t learn it self-consciously as a theory — we learn no meta-theory with the theory — and in this regard our knowledge of folk psychology is like our knowledge of the grammar of our native tongue. This fact does not make our knowledge of folk psychology entirely unlike human knowledge of explicit academic theories, however; one could probably be a good practising chemist and yet find it embarrassingly difficult to produce a satisfactory textbook definition of a metal or an ion.”
Daniel Dennett, The Intentional Stance, Chap. 3, “Three Kinds of Intentional Psychology”
Earlier (in the same chapter of the same book) Dennett had posited “folk physics”:
“In one sense people knew what magnets were — they were things that attracted iron — long before science told them what magnets were. A child learns what the word ‘magnet’ means not, typically, by learning an explicit definition, but by learning the ‘folk physics’ of magnets, in which the ordinary term ‘magnet’ is embedded or implicitly defined as a theoretical term.”
Daniel Dennett, The Intentional Stance, Chap. 3, “Three Kinds of Intentional Psychology”
Here is another characterization of folk psychology:
“Philosophers with a yen for conceptual reform are nowadays prone to describe our ordinary, common sense, Rylean description of the mind as ‘folk psychology,’ the implication being that when we ascribe intentions, beliefs, motives, and emotions to others we are offering explanations of those persons’ behaviour, explanations which belong to a sort of pre-scientific theory.”
Scott M. Christensen and Dale R. Turner, editors, Folk Psychology and the Philosophy of Mind, Chap. 10, “The Very Idea of a Folk Psychology” by Robert A. Sharpe, University of Wales, United Kingdom
There is now quite a considerable literature on folk psychology, and many positions in the philosophy of mind are defined by their relationship to folk psychology — eliminativism is largely the elimination of folk psychology; reductionism is largely the reduction of folk psychology to cognitive science or scientific psychology, and so on. Others have gone on to identify other folk concepts, as, for example, folk biology:
“Folk biology is the cognitive study of how people classify and reason about the organic world. Humans everywhere classify animals and plants into species-like groups as obvious to a modern scientist as to a Maya Indian. Such groups are primary loci for thinking about biological causes and relations (Mayr 1969). Historically, they provided a transtheoretical base for scientific biology in that different theories — including evolutionary theory — have sought to account for the apparent constancy of ‘common species’ and the organic processes centering on them. In addition, these preferred groups have ‘from the most remote period… been classed in groups under groups’ (Darwin 1859: 431). This taxonomic array provides a natural framework for inference, and an inductive compendium of information, about organic categories and properties. It is not as conventional or arbitrary in structure and content, nor as variable across cultures, as the assembly of entities into cosmologies, materials, or social groups. From the vantage of EVOLUTIONARY PSYCHOLOGY, such natural systems are arguably routine ‘habits of mind,’ in part a natural selection for grasping relevant and recurrent ‘habits of the world.’”
Robert Andrew Wilson and Frank C. Keil, The MIT Encyclopedia of the Cognitive Sciences
We can easily see that the idea of folk concepts as pre-scientific concepts is applicable throughout all branches of knowledge. This has already been made explicit:
“…there is good evidence that we have or had folk physics, folk chemistry, folk biology, folk botany, and so on. What has happened to these folk endeavors? They seem to have given way to scientific accounts.”
William Andrew Rottschaefer, The Biology and Psychology of Moral Agency, 1998, p. 179.
The simplest reading of the above is that in a pre-scientific state we use pre-scientific concepts, and as the scientific revolution unfolds and begins to transform traditional bodies of knowledge, these pre-scientific folk concepts are replaced with scientific concepts and knowledge becomes scientific knowledge. Thereafter, folk concepts are abandoned (eliminated) or formalized so that they can be systematically located in a scientific body of knowledge. All of this is quite close to the theory of the three stages of knowledge advanced by the 19th century positivist Auguste Comte, according to which theological explanations gave way to metaphysical explanations, which in turn gave way to positive scientific explanations. This demonstrates the continuity of positivist thought, even in philosophical thought that does not recognize itself as positivist. In each case, an earlier non-scientific mode of thought is gradually replaced by a mature scientific mode of thought.
While this simple replacement model of scientific knowledge has certain advantages, it has a crucial weakness, and this is a weakness shared by all theories that, implicitly or explicitly, assume that the mind and its concepts are static and stagnant. Allow me to once again quote one of my favorite passages from Kurt Gödel, the importance of which I cannot stress enough:
“Turing… gives an argument which is supposed to show that mental procedures cannot go beyond mechanical procedures. However, this argument is inconclusive. What Turing disregards completely is the fact that mind, in its use, is not static, but is constantly developing, i.e., that we understand abstract terms more and more precisely as we go on using them, and that more and more abstract terms enter the sphere of our understanding. There may exist systematic methods of actualizing this development, which could form part of the procedure. Therefore, although at each stage the number and precision of the abstract terms at our disposal may be finite, both (and, therefore, also Turing’s number of distinguishable states of mind) may converge toward infinity in the course of the application of the procedure.”
“Some remarks on the undecidability results” (Italics in original) in Gödel, Kurt, Collected Works, Volume II, Publications 1938-1974, New York and Oxford: Oxford University Press, 1990, p. 306.
Not only does the mind refine its concepts and arrive at more abstract formulations; the mind also introduces wholly new concepts in order to attempt to understand new or hitherto unknown phenomena. In this context, what this means is that we are always introducing new “folk” concepts as our experience expands and diversifies, so that there is not a one-time transition from unscientific folk concepts to scientific concepts, but a continual and ongoing evolution of scientific thought in which folk concepts are introduced, their want of rigor is felt, and more refined and scientific concepts are eventually introduced to address the problem of the folk concepts. But this process can result in the formulation of entirely new sciences, and we must then in turn hazard new “folk” concepts in the attempt to get a handle on this new discipline, however inadequate our first attempts may be to understand some unfamiliar body of knowledge.
For example, before the work of Georg Cantor and Richard Dedekind there was no science of set theory. In formulating set theory, 19th century mathematicians had to introduce a great many novel concepts (set, element, mapping) and mathematical procedures (one-to-one correspondence, diagonalization). These early concepts are now called “naïve set theory” — what we might also call “folk” set theory — and they have largely been replaced by (several distinct) axiomatizations of set theory, which have either formalized or eliminated the naïve concepts. Nevertheless, many “folk” concepts of set theory persist, and Gödel spent much of his later career attempting to produce better formalizations of the concepts of set theory than those employed in the now accepted axiomatizations.
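Diagonalization, one of the procedures mentioned, can be made concrete. Given any list of binary sequences (finite here, for illustration), flipping the i-th digit of the i-th sequence produces a sequence that differs from every sequence on the list — the core of Cantor’s proof that the reals are uncountable. A minimal sketch:

```python
def diagonal_complement(rows):
    """Cantor's diagonal construction: flip the i-th digit of the i-th
    sequence, yielding a sequence that differs from each row at position i."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonal_complement(rows)  # [1, 0, 1, 1]
assert all(d != row for row in rows)  # differs from every listed sequence
```

The construction was a genuinely novel “folk” procedure when Cantor introduced it, only later absorbed into the formalized apparatus of axiomatic set theory.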
As civilization has changed, and indeed as civilization emerged, we have had occasion to introduce new terms and concepts in order to describe and explain newly emergent forms of life. The domestication of plants and animals necessitated the introduction of concepts of plant and animal husbandry. The industrial revolution and the macroeconomic forces it loosed upon the world necessitated the introduction of terms and concepts of industry and economics. In each case, non-scientific folk concepts preceded the introduction of scientific concepts explained within a comprehensive theoretical framework. In many cases, our theoretical framework is not yet fully formulated and we are still in a stage of conceptual development that involves the overlapping of folk and scientific concepts.
Given the idea of folk concepts and their replacement by scientific concepts, a mature science could be defined as a science in which all folk concepts have been either formalized, transcended, or eliminated. The infinitistic nature of scientific mystery (which is discussed in Scientific Curiosity and Existential Need), however, suggests that there will always be sciences in an early and therefore immature stage of development. Our knowledge of the scientific method and the development of science means that we can anticipate scientific developments and understand when our intuitions are inadequate and therefore, in a sense, folk concepts. We have an advantage over the unscientific past that knew nothing of the coming scientific revolution and how it would transform knowledge. But we cannot entirely eliminate folk concepts from the early stages of scientific development, and in so far as our scientific civilization results in continuous scientific development, we will always have sciences in the early stages of development.
Scientific progress, then, does not eliminate folk concepts; it generates new folk concepts even as it eliminates old and outdated ones.
. . . . .
. . . . .
. . . . .
. . . . .
3 July 2015
Traditional units of measure
Quite some time ago in Linguistic Rationalization I discussed how the adoption of the metric system throughout much of the world meant the loss of traditional measuring systems that were intrinsic to the life of the people, part of the local technology of living, as it were. In that post I wrote:
“The gains that were derived from the standardization of weights and measures… did not come without a cost. Traditional weights and measures were central to the lives and the localities from which they emerged. These local systems of weights and measures were, until they were obliterated by the introduction of the metric system, a large part of local culture. With the metric system supplanting these traditional weights and measures, the traditional culture of which they were a part was dealt a decisive blow. This was not the kind of objection that men of the Enlightenment would have paused over, but with our experience of subsequent history it is the kind of thing that we think of today.”
Perhaps it is not the kind of thing many think of today; most people do not mourn the loss of traditional systems of measurement, but it should be recalled that these traditional systems of measurement were not arbitrary — they were based on the typical experience of individuals in a certain milieu, and they reflected the life and economy of a people, who measured the things that they needed to measure.
It is often noted that languages have an immediate relation to the life of a people — the most common example cited is that of the number of words for snow in the languages of the native peoples of the far north. Weights and measures — in a sense, the language of commerce — also reflect the life of a people in the same immediate way as their vocabulary. Language and measurement are linked: much of the earliest writing preserved from the Fertile Crescent consists of simple accounting of warehouse stores.
A particular example can illustrate what I have in mind. It is common to give the measurement of horses in hands. The hand as a unit of measurement has been standardized as four inches, but the unit obviously derives from the human hand. Everyone has an admittedly vague idea of the average size of a human hand, and this gives an anthropocentric measurement of horses, which have been crucial to many if not most human economies. The unit of a hand is intuitive and practical, and it continues to be used by individuals who work with horses. It is, indeed, part of the “lore” of horsemanship. Many traditional units of measurement are like this: derived from the human body — as Protagoras said, man is the measure of all things — they are intuitive and part of the lore of a tradition. To replace these traditional units has a certain economic rationale, but there is a loss if that replacement is successful. More often (as in measuring horses today), both traditional and SI units are employed.
Units of measure unique to a discipline
One response to the loss of traditional units is to define new units in terms of a system of weights and measures — today, usually the metric system — which reflect the particular concerns of a particular discipline. Having a unit of measurement peculiar to a discipline creates a jargon peculiar to a discipline, which is not necessarily a good thing. However, a unit of measurement unique to a discipline makes it possible to think in terms peculiar to the discipline. This “thinking one’s way into” some mode of thought is probably insufficiently appreciated, but it is quite common in the sciences. There are, for example, many different units that are used to measure energy. In principle, only one unit is necessary, and all units of measuring energy can be given a metric equivalent today, but it is not unusual for the energy of a furnace to be measured in BTUs while the energy of a particle accelerator is measured in electronvolts (eV).
For a science of civilization there must be quantifiable measurements, and quantifiable measurements imply a unit of measure. It is a relatively simple matter to employ (or, if you like, to exapt) existing units of measurement for an unanticipated field of research, but it is also possible to formulate new units of measurement specific to a scientific research program — units that are explicitly conceived and applied with the peculiar object of study of the science in view. It is arguable that the introduction of a unit of measurement specific to civilization would contribute to the formulation of a conceptual framework that allows one to think in terms of civilization in a way not possible, for example, in the borrowed terminology of historiography or some other discipline.
Thinking our way into civilization
With this in mind, I would like to suggest the possibility of a unit of time specific to civilization. We already have terms for ten years (a decade), a hundred years (a century), and a thousand years (a millennium), so that it would make sense to employ a metric of years for the quantification of civilization. The basic unit of time in the metric system is the second, and we can of course define the year in terms of the number of seconds in a year. The measurement of time in terms of a year derives from natural cosmological cycles, like the measurement of time in terms of days. With the increase in the precision of atomic clocks, it became necessary to abandon the calibration of the second in terms of celestial events, and this calibration is now done in terms of nuclear physics. Nevertheless, the year, like the day, remains an anthropocentric unit of time that we all understand and that we are likely to continue to use.
Suppose we posit a period of a thousand years as the basic temporal unit for the measurement of civilization, and we call this unit the chronom. In other words, suppose we think of civilization in increments of 1,000 years. In the spirit of a decimal system we can define a series of units derived from the chronom by powers of ten. The chronom is 1,000 years or 10³ years; 1 decichronom is 100 years or 10² years (a century), 1 centichronom is 10 years or 10¹ years (a decade), and 1 millichronom is 1.0 year or 10⁰ years. In the other direction, in increasing size, 1 decachronom is 10 chronom or 10,000 years (10⁴ years), 1 hectochronom is 100 chronom or 100,000 years (10⁵ years), 1 kilochronom is 1,000 chronom or 1,000,000 years (10⁶ years or 1.0 Ma, or mega-annum), and thus we have arrived at the familiar motif of a million year old supercivilization. Continuing upward we eventually would come to the megachronom, which is 1,000,000 chronom or 10⁹ years or 1.0 Ga, i.e., giga-annum, at which point we reach the billion year old supercivilizations discussed by Ray Norris (cf. How old is ET?).
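To make the arithmetic of the proposal concrete, the prefix scheme can be sketched in a few lines of Python. The chronom is, of course, a proposed unit rather than an established one, so everything here is purely illustrative:

```python
# Illustrative sketch of the proposed "chronom" time units
# (1 chronom = 1,000 years), with standard metric prefixes as powers of ten.

CHRONOM_IN_YEARS = 10 ** 3  # base unit: 1,000 years

PREFIX_POWERS = {
    "milli": -3,  # 1 millichronom = 1 year
    "centi": -2,  # 1 centichronom = 10 years (a decade)
    "deci":  -1,  # 1 decichronom  = 100 years (a century)
    "":       0,  # 1 chronom      = 1,000 years
    "deca":   1,  # 1 decachronom  = 10,000 years
    "hecto":  2,  # 1 hectochronom = 100,000 years
    "kilo":   3,  # 1 kilochronom  = 10^6 years (1.0 Ma)
    "mega":   6,  # 1 megachronom  = 10^9 years (1.0 Ga)
}

def to_years(quantity, prefix=""):
    """Convert a quantity of (prefixed) chronoms into years."""
    return quantity * CHRONOM_IN_YEARS * 10 ** PREFIX_POWERS[prefix]

# A "billion year old supercivilization" measured in chronoms:
print(to_years(1, "mega"))  # 10^9 years
print(to_years(2.5))        # 2,500 years
```

The scheme makes plain how quickly the metric prefixes carry us from the scale of recorded history (a few chronoms) to the scale of the Norris supercivilizations (a megachronom).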
From such a starting point — and I am not suggesting that what I have written above should be the starting point; I have only given an illustration to suggest to the reader what might be possible — it would be possible to extrapolate further coherent units of measure. We would want to do so in terms of non-anthropocentric units, and, moreover, non-geocentric units. While the metric system is a great improvement (in terms of the standardization of scientific practice) over traditional units of measure, it is still a geocentric unit of measure (albeit appealing to geocentrism in an extended sense).
Traditional units of measurement were parochial; the metric system was based on the Earth itself, and so not unique to any nation-state, but still local in a cosmological sense. If we were to extrapolate a metric for civilization according to constants of nature (like the speed of light, or some property of matter such as is now exploited by atomic clocks), we would begin to formulate a non-anthropocentric set of units for civilization. A temporal metric for the quantitative study of civilization suggests the possibility of also having a spatial metric for the quantitative study of civilization. For example, a unit of space could be defined as the volume swept out by light traveling for 1 chronom. A sphere with a radius of one light year (the distance light travels in 1 millichronom) would entirely contain a civilization confined to the region of its star. That could be a useful metric for spacefaring civilizations.
What would be the benefit of such a system to quantify civilization? As I noted above, a system of measurement unique to a discipline allows us to think in terms of the discipline. Units of measurement for the quantification of civilization would allow us to think our way into civilization, and so possibly to avoid some of the traditional prejudices of historiographical thinking which have dominated thinking about civilization so far. Moreover, a non-anthropocentric system of civilization metrics would allow us to think our way into a non-anthropocentric metric for civilization, which would better enable us to recognize other civilizations when we have the opportunity to seek them out.
What I am suggesting here is a process of defamiliarization by way of scientific metrics to take the measure of something so familiar — human civilization — that it is difficult for us to think of it in objective terms. Previously in Kierkegaard and Russell on Rigor I discussed how a defamiliarizing process can be a constituent of rigorous thought. In so far as we aspire to the study of civilization as a rigorous science, the defamiliarization of a scientific set of metrics for quantifying civilization can be a part of that effort.
. . . . .
. . . . .
. . . . .
. . . . .
8 June 2015
In several posts I have discussed the need for a science of civilization (cf., e.g., The Future Science of Civilizations), and this is a theme I intend to continue to pursue in future posts. It is no small matter to constitute a new science where none has existed, and to constitute a new science for an object of knowledge as complex as civilization is a daunting task.
The problem of constituting a science of civilization, de novo for all intents and purposes, may be seen in the light of Husserl’s attempt to constitute (or re-constitute) philosophy as a rigorous science, which was a touchstone of Husserl’s work. Here is a passage from Husserl’s programmatic essay, “Philosophy as Strict Science” (variously translated) in which Husserl distinguishes between profundity and intelligibility:
“Profundity is the symptom of a chaos which true science must strive to resolve into a cosmos, i.e., into a simple, unequivocal, pellucid order. True science, insofar as it has become definable doctrine, knows no profundity. Every science, or part of a science, which has attained finality, is a coherent system of reasoning operations each of which is immediately intelligible; thus, not profound at all. Profundity is the concern of wisdom; that of methodical theory is conceptual clarity and distinctness. To reshape and transform the dark gropings of profundity into unequivocal, rational propositions: that is the essential act in methodically constituting a new science.”
Edmund Husserl, “Philosophy as Rigorous Science” in Phenomenology and the Crisis of Philosophy, edited by Quentin Lauer, New York: Harper, 1965 (originally “Philosophie als strenge Wissenschaft,” Logos, vol. I, 1911)
Recently re-reading this passage from Husserl’s essay I realized that much of what I have attempted in the way of “methodically constituting a new science” of civilization has taken the form of attempting to follow Husserl’s pursuit of “unequivocal, rational propositions” that eschew “the dark gropings of profundity.” I think much of the study of civilization, immersed as it is in history and historiography, has been subject more often to profound meditations (in the sense that Husserl gives to “profound”) than conceptual clarity and distinctness.
The Cartesian demand for clarity and distinctness is especially interesting in the context of constituting a science of civilization given Descartes’ famous disavowal of history (on which cf. the quote from Descartes in Big History and Scientific Historiography); if an historical inquiry is the basis of the study of civilization, and history consists of little more than fables, then a science of civilization becomes rather dubious. The emergence of scientific historiography, however, is relevant in this context.
The structure of Husserl’s essay is strikingly similar to the first lecture in Russell’s Our Knowledge of the External World. Both Russell and Husserl take up major philosophical movements of their time (and although the two were contemporaries, each took different examples — Husserl, naturalism, historicism, and Weltanschauung philosophy; Russell, idealism, which he calls “the classical tradition,” and evolutionism), primarily, it seems, to show how philosophy had gotten off on the wrong track. The two works can profitably be read side-by-side, as Russell is close to being an exemplar of the naturalism Husserl criticized, while Husserl is close to being an exemplar of the idealism that Russell criticized.
Despite the fundamental difference between Husserl and Russell, each had an idea of rigor that he attempted to realize in his philosophical work, and each thought of that rigor as bringing the scientific spirit into philosophy. (In Kierkegaard and Russell on Rigor I discussed Russell’s conception of rigor and its surprising similarity to Kierkegaard’s thought.) Interestingly, however, the two did not criticize each other directly, though they were contemporaries and each knew of the other’s work.
The new science Russell was involved in constituting was mathematical logic, which Roman Ingarden explicitly tells us that Husserl found inadequate for the task of a scientific philosophy:
“It is maybe unexpected and surprising that Husserl who was trained as a mathematician did not seek salvation for philosophy in the mathematical method which had from time to time stood out like a beacon as an ideal worthy of imitation by philosophers. But mathematical logic could not satisfy him… above all he fought for responsibility in philosophical research and devoted many years to the elaboration of a method which, according to him, was to secure for philosophy the status of a science.”
Roman Ingarden, On the Motives which Led Husserl to Transcendental Idealism, Translated from the Polish by Arnor Hannibalsson, Den Haag: Martinus Nijhoff, 1975, p. 9.
Ingarden’s discussion of Husserl is instructive, in so far as he notes the influence of mathematical method upon Husserl’s thought, but also that Husserl did not try to employ a mathematical method directly in philosophy. Rather, Husserl invested his philosophical career in the formulation of a new methodology that would allow the values of rigorous scientific practice to be expressed in philosophy and through a philosophical method — a method that might be said to be parallel to or mirroring the mathematical method, or derived from the same thematic motives as those that inform mathematical methodology.
The same question is posed in considering the possibility of a rigorously scientific method in the study of civilization. If civilization is sui generis, is a sui generis methodology necessary to the formulation of a rigorous theory of civilization? Even if that methodology is not what we today know as the methodology of science, or even if that methodology does not precisely mirror the rigorous method of mathematics, there may be a way to reason rigorously about civilization, though it has yet to be given an explicit form.
The need to think rigorously about civilization I took up implicitly in Thinking about Civilization, Suboptimal Civilizations, and Addendum on Suboptimal Civilizations. (I considered the possibility of thinking rigorously about the human condition in The Human Condition Made Rigorous.) Ultimately I would like to make my implicit methodology explicit and so to provide a theoretical framework for the study of civilization.
Since theories of civilization have been, for the most part, either implicit or vague or both, there has been little theoretical framework to give shape or direction to the historical studies that have been central to the study of civilization to date. Thus the study of civilization has been a discipline adrift, without a proper research program, and without an explicit methodology.
There are at least two sides to the rigorous study of civilization: theoretical and empirical. The empirical study of civilization is familiar to us all in the form of history, but history studied as history and history studied for what it can contribute to the theory of civilization are two different things. One of the initial fundamental problems of the study of civilization is to disentangle civilization from history, which involves a formal rather than a material distinction, because both the study of civilization and the study of history draw from the same material resources.
How do we begin to formulate a science of civilization? It is often said that, while science begins with definitions, philosophy culminates in definitions. There is some truth to this, but when one is attempting to create a new discipline one must be both philosopher and scientist simultaneously, practicing a philosophical science or a scientific philosophy that approaches a definition even as it assumes a definition (admittedly vague) in order for the inquiry to begin. Husserl, clearly, and Russell also, could be counted among those striving for a scientific philosophy, while Einstein and Gödel could be counted as among those practicing a philosophical science. All were engaged in the task of formulating new and unprecedented disciplines.
This division of labor between philosophy and science points to what Kant would have called the architectonic of knowledge. Husserl conceived this architectonic categorically, while we would now formulate the architectonic in hypothetico-deductive terms, and it is Husserl’s categorical conception of knowledge that ties him to the past and at times gives his thought an antiquated cast, but this is merely an historical contingency. Many of Husserl’s formulations are dated and openly appeal to a conception of science that no longer accords with what we would likely today think of as science, but in some respects Husserl grasps the perennial nature of science and what distinguishes the scientific mode of thought from non-scientific modes of thought.
Husserl’s conception of science is rooted in the conception of science already emergent in the ancient world in the work of Aristotle, Euclid, and Ptolemy, and which I described in Addendum on the Agrarian-Ecclesiastical Thesis. Russell’s conception of science is that of industrial-technological civilization, jointly emergent from the scientific revolution, the political revolutions of the eighteenth century, and the industrial revolution. With the overthrow of scholasticism as the basis of university curricula (which took hundreds of years following the scientific revolution before the process was complete), a new paradigm of science was to emerge and take shape. It was in this context that Husserl and Russell, Einstein and Gödel, pursued their research, employing a mixture of established traditional ideas and radically new ideas.
In a thorough re-reading of Husserl we could treat his conception of science as an exercise to be updated as we went along, substituting an hypothetico-deductive formulation for each and every one of Husserl’s categorical formulations, ultimately converging upon a scientific conception of knowledge more in accord with contemporary conceptions of scientific knowledge. At the end of this exercise, Husserl’s observation about the difference between science and profundity would still be intact, and would still be a valuable guide to the transformation of a profound chaos into a pellucid cosmos.
This ideal, and even more so the realization of this ideal, ultimately may not prove to be possible. Husserl himself in his later writings famously said, “Philosophy as science, as serious, rigorous, indeed apodictically rigorous, science — the dream is over.” (It is interesting to compare this metaphor of a dream to Kant’s claim that he was awoken from his dogmatic slumbers by Hume.) The impulse to science returns, eventually, even if the idea of an apodictically rigorous science has come to seem a mere dream. And once the impulse to science returns, the impulse to make that science rigorous will reassert itself in time. Our rational nature asserts itself in and through this impulse, which is complementary to, rather than contradictory of, our animal nature. To pursue a rigorous science of civilization is ultimately as human as the satisfaction of any other impulse characteristic of our species.
. . . . .
. . . . .
. . . . .
. . . . .
4 April 2015
Curiosity does not have an especially good reputation, and one often finds the word coupled with “mere” so that “mere curiosity” can be elegantly dismissed as though beneath the dignity of the speaker, who can then go about his much more grand and august pursuits without the distraction of the petty, grubbing motivation of mere curiosity. There may be some connection between this disdainful attitude toward curiosity and the prevalent anti-intellectualism of western civilization, notwithstanding the fact that most of what is unique in this tradition is derived from the scientific spirit; it is no surprise that any driving force in human affairs eventually provokes an equal and opposite reaction.
Many civilizations that publicly value intellectuals do not value the contributions of intellectuals, so that this social prestige is indistinguishable from a kind of feudal regard for special classes of persons. This is not what happened in western civilization, in which scientific knowledge bestowed real wealth and power — in our own day no less than in the past — and so provoked a reaction. One of the most famous stories from classical antiquity was how Thales, predicting an especially good olive harvest, hired all the olive presses at a low rate out of season, and then let them out at inflated rates during the peak season, proving that philosophers could earn money if they wanted to do so.
There are a great many interesting quotes that invoke curiosity, for better or worse — Thomas Hobbes: “…this hope and expectation of future knowledge from anything that happeneth new and strange, is that passion which we commonly call ADMIRATION; and the same considered as appetite, is called CURIOSITY, which is appetite of knowledge.” Edmund Burke: “The first and simplest emotion which we discover in the human mind, is curiosity.” Albert Einstein: “I have no special talent. I am only passionately curious.” — which highlight both the admirable and the disreputable side of curiosity. That curiosity has both admirable and disreputable aspects suggests that one might be admirably curious or disreputably curious, and certainly all of us know individuals who are curious in the best sense of the term and others who are curious in the worst sense of the term.
Human beings are adventurers of the spirit. We must count among the attributes of human nature some basal drive toward questioning. This drive could be given an exposition in purely intellectual terms or in purely emotional terms; I think that the intellectual and emotional manifestations of human curiosity are two sides of the same coin, and that is why I suggest positing some basal drive that lies at the root of both. And it isn’t quite right to reduce this drive to curiosity, as we can formulate it in terms of curiosity or in terms of need.
Curiosity is often contrasted to a presumably more esteemed mode of interrogating the cosmos, that we may call existential need. Jacob Needleman often addressed the contrast between “mere” curiosity (which he sometimes called “low curiosity”) and present need. Here is an example:
“It has been said that any question can lead to truth if it is an aching question. For one person it may be the question of life after death, for another the problem of suffering, the causes of war and injustice. Or it may be something more personal and immediate — a profound ethical dilemma, a problem involving the whole direction of one’s life. An aching question, a question that is not just a curiosity or a fleeting burst of emotion, cannot be answered with old thought. Possessed by such a question, one is hungry for ideas of a very different order than the familiar categories that usually accompany us throughout our lives. One is both hungry and, at the same time, more discriminating, less susceptible to credulity and suggestibility. The intelligence of the heart begins to call to us in our sleep.”
Jacob Needleman, The American Soul: Rediscovering the Wisdom of the Founders, pp. 3-4
I disagree with this on so many levels that it is difficult to know where to start, so instead I will simply say that the kind of existential need Needleman wants to describe is highly credulous and suggestible, and what answers to this need almost always takes the form of an old and painfully familiar cognitive bias. However, to try to do justice to Needleman, I will allow that, for an individual immersed in the ordinary business of life who, through some traumatic experience, suddenly comes face to face with profound and difficult questions never before posed in that individual’s experience, then, yes, ideas of a very different order are needed to address such questions.
While I do not think that aching questions are likely to lead to truth — I think it much more likely that they will lead to self-deception — I do not deny that many are gnawed by aching questions, and some few spend their lives trying to answer them. The question, then, is the best method by which an aching question might be given a clear, coherent, and satisfying (in so far as that is possible) answer. Here I am reminded of a passage from Walter Kaufmann:
“Nowhere is the disproportion between effort and result more aggravating than in the pursuit of truth: you may plow through documents or make untold experiments or think and think and think, forgo food, comfort, and distractions, lie awake nights and eat out your heart — and in the end you know what can be memorized by any idiot.”
Walter Kaufmann, Critique of Religion and Philosophy, section 24
However aching our question, presumably we would want to spare ourselves the wasted effort of an inquiry that deprives us of the satisfactions of life while giving an answer that could be memorized by any idiot. Kaufmann did not go far enough here: sometimes individuals who make just such an heroic effort to get at the truth and only arrive at an idiot’s portion convince themselves that the idiot’s portion is in fact a great and profound truth.
Whether or not existential need can be satisfied, how are we to understand it? Viktor Frankl, a psychiatrist and one of the founders of existential analysis, identified a condition that he called the existential vacuum, which he defined as, “the frustration of the will to meaning.” Frankl knew that of which he spoke, having lost most of his family to Nazi death camps and himself having been interned at Auschwitz and liberated only at the end of the war. Here, in a longer passage, is his exposition of existential need:
“Ever more patients complain of what they call an ‘inner void,’ and that is the reason why I have termed this condition the ‘existential vacuum.’ In contradistinction to the peak-experience so aptly described by Maslow, one could conceive of the existential vacuum in terms of an ‘abyss-experience’.”
Viktor Frankl, The Will to Meaning: Foundations and Applications of Logotherapy, New York: Plume, 2014 (originally published in the US in 1969), Part Two, “The Existential Vacuum: A Challenge to Psychiatry”
One could readily suppose that existential need is occasioned by the existential vacuum; that the latter is the condition and cause of the former. Another and more recent approach to existential need is to be found in the work of James Giles:
“…existential needs are not the product of social construction. For in contrast to socially constructed phenomena, existential needs are an inherent and universal feature of the human condition.”
James Giles, The Nature of Sexual Desire, p. 181
This is not necessarily distinct from existential need occasioned by Frankl’s existential vacuum; one could formulate the existential vacuum so that it is either “an inherent and universal feature of the human condition” or not. And there may well be more than one form of existential need. In fact, I think it is clear that there is a plurality of existential needs, and some of these can be sublimated through scientific inquiry and can be satisfied, while some play out in the fruitless manner described in the passage above from Kaufmann.
How one approaches the mystery that is the world, by way of scientific curiosity or by way of existential need, which we might call the scientific approach and the existential approach, each reflect a valid human response to the individual’s relationship to the cosmos. Most of us, at some point in life, poignantly feel the mysteriousness of the world and the desire to give an account of our existence in relation to this mystery. Consider this from John Stuart Mill:
“Human existence is girt round with mystery: the narrow region of our experience is a small island in the midst of a boundless sea, which at once awes our feelings and stimulates our imagination by its vastness and its obscurity. To add to the mystery, the domain of our earthly existence is not only an island in infinite space, but also in infinite time. The past and the future are alike shrouded from us: we neither know the origin of anything which is, nor its final destination. If we feel deeply interested in knowing that there are myriads of worlds at an immeasurable, and to our faculties inconceivable, distance from us in space; if we are eager to discover what little we can about these worlds, and when we cannot know what they are, can never satiate ourselves with speculating on what they may be…”
Now, John Stuart Mill was an almost preternaturally rational man; he was not given to flights of fancy, though the high-flown rhetoric of this passage might suggest otherwise. The scientific approach to mystery is a rationalistic response to the riddle of the world; answers are to be had, but the world is boundless, so that any one answered question still leaves countless other unanswered questions. The growth of knowledge is attended by a parallel growth in the unknown, as our increasing knowledge makes it possible for us to formulate previously unsuspected questions. One might find this to be invigorating or disappointing: there are real answers, but we will never have a final understanding of the world. The existential approach to mystery acknowledges that the human mind may not be capable of comprehending the mystery that is the world, but this is coupled with a fervent belief that there is a final and transcendent answer out there somewhere, even if it always remains tantalizingly out of reach. These are subtle but important differences in the conception of “ultimate” truth as it relates human beings to their world.
A distinction might be made between scientific mystery and absolute mystery, with scientific mystery being a mystery that admits of an answer, but which also admits of a further mystery. An absolute mystery admits of no answer, nor of any further mystery. The world might take on the character of scientific mystery or of absolute mystery depending on whether we approach the world from the perspective of scientific curiosity or existential need. In other words, the kind of mystery that the world is — even if we all agree that the world is girt round in mystery, as Mill says — corresponds to our attitude to the world.
One could argue that scientific curiosity is a sublimation of existential need. If this is true, there is no reason to be ashamed of this, or to attempt a return to the original existential need. The passage from existential need to scientific curiosity may be a stage in the development of intellectual maturity, as irreversible as the passage from childhood to adulthood.
One might go a step further and call scientific curiosity the secularization of existential need (or, rather, the secularization of religious mystery, which then invites treatment in terms of the Max Scheler/Paul Tillich claim that all human beings are engaged in worship, the only question being whether the object of this worship is worthy or idolatrous), recalling Karl Löwith’s theory of secularization, which construed much of modernity as a bastardized form of Christian eschatology. This presupposes not only that existential need precedes scientific curiosity, but that it is the only authentic form of human questioning, and that any attempt to introduce new forms of questioning the human condition is illegitimate.
We are today faced with questions that our ancestors, who first felt the disconcerting stirrings of existential need, could not have imagined. I touched on one of these questions in my post on Centauri Dreams, Cosmic Loneliness and Interstellar Travel, which drew more responses than any of my other posts to that forum. Our cosmic loneliness can now be expressed in scientific terms, and we can offer a scientific response to our attempts so far to answer the question, “Are we alone?” This is one of the great scientific questions of our time, and at the same time it speaks to a modern existential need that has been expressed in Clarke’s tertium non datur.
The growth of human knowledge and the civilization created by human knowledge may have its origins in the questioning that naturally emerges from an experience of existential need. Perhaps this feeling never fully dissipates, but in so far as the dissatisfaction and discontent of existential need can be redirected into scientific curiosity, human beings can experience at least a limited satisfaction derived from definite scientific answers to questions formulated with increasing clarity and rigor. Beyond this, we may have to wait for the next stage in human evolution, when we may acquire mental faculties that take us beyond both existential need and scientific curiosity into a frame of mind incomprehensible to us in our present iteration.
. . . . .
. . . . .
. . . . .
. . . . .
3 December 2014
P. F. Strawson called his twentieth-century exposition of Kant The Bounds of Sense. I have commented elsewhere on what an appropriate title this is. The Kantian project (much like metamathematics in the twentieth century) was a limitative project. Kant himself wrote (in the Preface to the second edition of the Critique of Pure Reason): “…my intention then was, to limit knowledge, in order to make room for faith.” Here is the entire passage from which the quote is taken, though in a different translation:
“This discussion as to the positive advantage of critical principles of pure reason can be similarly developed in regard to the concept of God and of the simple nature of our soul; but for the sake of brevity such further discussion may be omitted. [From what has already been said, it is evident that] even the assumption — as made on behalf of the necessary practical employment of my reason — of God, freedom, and immortality is not permissible unless at the same time speculative reason be deprived of its pretensions to transcendent insight. For in order to arrive at such insight it must make use of principles which, in fact, extend only to objects of possible experience, and which, if also applied to what cannot be an object of experience, always really change this into an appearance, thus rendering all practical extension of pure reason impossible. I have therefore found it necessary to deny knowledge, in order to make room for faith.”
Immanuel Kant, Critique of Pure Reason, Preface to the Second Edition
What lies beyond the bounds of sense? For Kant, faith. And Kant’s theological agenda drove him to seek the bounds of sense so that speculative reason could be deprived of its pretensions to transcendent insight. Thus Kant gives us an epistemology openly freighted with theological and moral concerns. Talk about the theory-ladenness of perception! It is, however, non-perception — i.e., that which cannot be the object of possible experience — that is the Kantian domain of faith.
Of course, this is the whole Kantian project in a nutshell, is it not? It is Kant’s design to show us exactly how perception is laden with theory, the theory native to the mind, the a priori concepts by which we organize experience. Kant propounds the transcendental aesthetic and the transcendental deduction of the categories in order to demonstrate the reliance of even the most ordinary experience upon the mind’s a priori faculties.
Kant was, in part, reacting against the empiricism of Locke and Hume — especially Hume’s skeptical conclusions, although Kant’s own rejection of metaphysics equaled if not surpassed Hume’s anti-metaphysical stance, as famously described in the following passage from Hume:
“When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.”
David Hume, An Enquiry Concerning Human Understanding, “Of the academical or sceptical Philosophy,” Part III
For Hume, the bounds of sense and the limitation of reason entailed doubt; for Kant the bounds of sense and the limitation of reason entailed belief. There is a lesson in here somewhere, and the lesson is this: from a single state of affairs, multiple interpretations can be shown to follow.
Are the bounds of sense also the bounds of science? It would seem so. In so far as science must appeal to empirical evidence, and empirical evidence comes to us by way of the senses, the limits of the senses impose limits on science. Of course, this is a bit too simplistic to be quite true. There are so many qualifications that need to be made to such an assertion that it is difficult to say where to start.
It should be familiar to everyone that we have come to extensively use instruments to augment our senses. Big Science today sometimes spends years, if not decades, building its enormous machines, without which contemporary science could not be possible. So the limits of the senses are not absolute, and they are subject to manipulation. Also, we sometimes do science without our senses or instruments, when we pursue science by way of thought experiments.
While thought experiments alone, unsupplemented by actual experiments, are probably insufficient to constitute a science, thought experiments have become a requisite of science much as instrumentation has. Sometimes, when our technology catches up with our ideas, we can transform our thought experiments into actual experiments, so that there is an historical relationship between science properly understood and the penumbra of science represented by thought experiments. And thought experiments too have their controlled conditions, and these are the conditions that Kant attempted to lay down in the transcendental aesthetic.
There is also the question of whether or not mathematics is a science, or one among the sciences. And whether or not we set aside mathematics as something different from the other sciences, we know that unquestionably empirical sciences like physics are deeply mathematicized, so that the mathematical content of empirical theories may act like an abstract instrument, parallel to the material instruments of big science, that extends the possibilities of the senses. Another way to think about mathematics is as an enormous thought experiment that undergirds the rest of science — the one crucial thought experiment, an experimentum crucis, without which the rest of science cannot function. In this sense, thought experiments are as indispensable to mathematicized science as mathematics itself.
At a more radical level of critique, it would be difficult to give a fine-grained account of empirical evidence that did not shade over, at the far edges of the concept, into other kinds of knowledge not strictly empirical. Empirical evidence may shade over into the kind of intuitive evidence that is the basis of mathematics, or the kind of epistemological context that is the setting for our thought experiments. Empirical evidence can also shade over into interoception that cannot be publicly verified (therefore failing a basic test of science) or precisely reproduced by repetition, and which interoception itself in turn shades over into intuitions in which thought and feeling are not clearly distinct.
Where does Kant’s possible experience fit within the continuum of the senses? What is the scope of possible experience? Can we make a clear distinction between extending the senses (and thus human experience) by abstract or concrete instruments and imposing a theory upon experience through these extensions? Does possible experience include all possible past experience? Does past experience include phenomena that occurred but which were not observed (the famous tree falling in a forest that no one hears)? Does it include all possible future experience, or only those future experiences that will eventually be actualized, and not those that remain merely shadowy possibilities? Does possible experience include those counterfactuals that feature in the “many worlds” interpretation of quantum theory? Explicit answers to these questions are less important than the lines of inquiry that the questions prompt us to pursue.
. . . . .
. . . . .
. . . . .
. . . . .
27 November 2014
An article on NPR about a new atomic clock being developed by NIST scientists, New Clock May End Time As We Know It, was of great interest to me. Immediately intrigued, I wrote a post on my other blog in which I suggested that the new clock might be used to update the “Einstein’s box” thought experiment (also known as the clock-in-a-box thought experiment). While I would like to follow up on this idea at some time, today I want to write about advanced chronometry in the context of the STEM cycle.
Atomic clocks are among the most precise scientific instruments ever developed. As such, precision clocks offer a good illustration of the STEM cycle, which I identified as the definitive feature of industrial-technological civilization. While this illustration is contemporary, there is nothing new about the use of the most advanced science, technology, and engineering available being employed in chronometry.
The earliest sciences, already developed in classical antiquity, were mathematics and astronomy. These early scientific disciplines were applied to the construction of timekeeping mechanisms. Among the most interesting technological artifacts of the ancient world are the clock once installed in the Tower of the Winds in Athens (which was described in antiquity, but which no longer exists) and the Antikythera mechanism, the corroded remains of which were dredged up from a shipwreck off the Greek island of Antikythera (while discovered by sponge divers in 1900, the site is still yielding finds). A classic paper on the Tower of the Winds compares these two technologies: “This is a field in which ancient literature is curiously meager, as we well know from the complete lack of any literary reference to a technology that could produce the Antikythera Mechanism of the same date.” (“The Water Clock in the Tower of the Winds,” Joseph V. Noble and Derek J. de Solla Price, American Journal of Archaeology, Vol. 72, No. 4, Oct., 1968, pp. 345-355) Both of these artifacts are concerned with chronometry, which demonstrates that the most advanced technologies, then and now, have been employed in the measurement of time.
The advent of high technology as we know it today — unprecedented in human history — has been the result of the advent of a new kind of civilization — industrial-technological civilization — and the use of advanced technologies in chronometry provides a useful lens with which to view one of the unique features of our civilization today, which I call the STEM cycle. The acronym STEM is familiar from educational contexts, where it refers to education and training in science, technology, engineering, and mathematics; I have taken over this acronym as the name for one of the socioeconomic processes that lies at the heart of our civilization: Science seeks to understand nature on its own terms, for its own sake. Technology is that portion of scientific research that can be developed specifically for the realization of practical ends. Engineering is the industrial implementation of a technology. Mathematics is the common language in which the elements of the cycle are formulated. A feedback loop of science driving technology, driving engineering, driving more science, characterizes industrial-technological civilization. This is the STEM cycle.
The distinctions between science, technology, and engineering are not absolute — far from it. To employ a terminology I developed elsewhere, I would say that science is only weakly distinct from technology, technology is only weakly distinct from engineering, and engineering is only weakly distinct from science. In some contexts any two elements of the STEM cycle are identical, while in other contexts they are starkly contrasted. This is not due to inconsistency, but rather to the fact that science, technology, and engineering are open-textured concepts; we could adopt conventional distinctions that would make them strongly distinct, but this would be contrary to usage in ordinary language and would only result in confusion. Given the lack of clear distinctions among science, technology, and engineering, where we draw the dividing lines within the STEM cycle is to some degree arbitrary — we could describe this cycle in different terms, employing different distinctions — but the cycle itself is not arbitrary. By any other name, it drives industrial-technological civilization.
The clock that was the inspiration for this post — the new strontium atomic clock, described in JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability, and the subject of a scientific paper, An optical lattice clock with accuracy and stability at the 10^-18 level by B. J. Bloom, T. L. Nicholson, J. R. Williams, S. L. Campbell, M. Bishof, X. Zhang, W. Zhang, S. L. Bromley, and J. Ye (a preprint of the article is available at arXiv) — is instructive in several respects. In so far as we consider atomic clocks to be a generic “technology,” the strontium clock, an optical lattice clock, represents the latest and most advanced instance of this technology yet constructed: a more specific form of technology within the generic class of atomic clocks. The sciences involved in the conceptualization of atomic clocks are fundamental: atomic physics, quantum theory, relativity theory, thermodynamics, and optics. Atomic clocks are a technology built from other technologies, including advanced materials, lasers, masers, vacuum chambers, refrigeration, and computers. Building the technology into an optimal device involves engineering for dependability, economy, miniaturization, portability, and refinements of design.
The NIST web page notes that, “NIST invests in a number of atomic clock technologies because the results of scientific research are unpredictable, and because different clocks are suited for different applications.” (For further background on atomic clocks at NIST cf. A New Era for Atomic Clocks.) The new record-breaking clocks in terms of stability and accuracy are experimental devices; the current standard for timekeeping is the NIST-F2 “cesium fountain” atomic clock. The transition from the previous standard, NIST-F1, to the present standard, NIST-F2, is largely a result of engineering refinements of the earlier atomic clock. Even the experimental strontium clock is likely to be soon surpassed. JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability quotes Jun Ye as saying, “We already have plans to push the performance even more, so in this sense, even this new Nature paper represents only a ‘mid-term’ report. You can expect more new breakthroughs in our clocks in the next 5 to 10 years.”
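To get an intuitive sense of what fractional frequency accuracy at the 10^-18 level means, a back-of-the-envelope calculation helps. The round figures below (roughly 1e-16 for a cesium fountain standard, 13.8 billion years for the age of the universe) are illustrative assumptions on my part, not values taken from the paper:

```python
# Worst-case time error accumulated by a clock with a given fractional
# frequency accuracy, run for a given number of years (rough figures).
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 s
AGE_OF_UNIVERSE_YEARS = 13.8e9             # ~13.8 billion years (approximate)

def drift_seconds(fractional_accuracy, years):
    return fractional_accuracy * years * SECONDS_PER_YEAR

# A cesium fountain standard at roughly the 1e-16 level:
print(f"1e-16 clock over the age of the universe: ~{drift_seconds(1e-16, AGE_OF_UNIVERSE_YEARS):.0f} s")

# An optical lattice clock at the 1e-18 level would not gain or lose
# even half a second over the entire age of the universe:
print(f"1e-18 clock over the age of the universe: ~{drift_seconds(1e-18, AGE_OF_UNIVERSE_YEARS):.2f} s")
```

Each order of magnitude gained in fractional accuracy translates directly into an order of magnitude less accumulated error, which is exactly what successive generations of these instruments deliver.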
The engineering refinement of high technology has two important consequences:
1) inexpensive, widely available devices (which I will call the ubiquity function), and…
2) improved, cutting edge devices that improve the precision of measurement (which I will call the meliorative function), sometimes improved by an order of magnitude (or several orders of magnitude).
These latter devices, those that represent greater precision, are not likely to be inexpensive or widely available, but as the STEM cycle continues to advance science, technology, and engineering in a regular and predictable manner, the older generation of technology becomes widely available and inexpensive as new technologies take their place on the expensive cutting edge. However, these cutting edge technologies are in turn displaced by newer technologies, and the cycle continues. Thus there is a relationship — an historical relationship — between the two consequences of the engineering refinement of technology. Both of these phases in the life of a technology affect the practice of science. NIST Launches a New U.S. Time Standard: NIST-F2 Atomic Clock quotes NIST physicist Steven Jefferts, lead designer of NIST-F2, as saying, “If we’ve learned anything in the last 60 years of building atomic clocks, we’ve learned that every time we build a better clock, somebody comes up with a use for it that you couldn’t have foreseen.”
Widely available precision measurement devices (the ubiquity function) bring down the cost of scientific research and we begin to see science cropping up in all kinds of interesting and unexpected places. The development of computer technology and then the miniaturization of computers had the unintended result of making computers inexpensive and widely available. This, in turn, has meant that everyone doing science carries a portable computer with them, and this widely available computational power (which I have elsewhere called the computational infrastructure of civilization) has transformed how science is done. NIST Atomic Devices and Instrumentation (ADI) now builds “chip-scale” atomic clocks, commercializing, and thereby democratizing, atomic clock technology in a form factor so small that it could be included in a cell phone (or whatever mobile device form factor you prefer). This is a perfect illustration of the ubiquity function in an engineering application of atomic clock technology.
New cutting edge precision measurement devices (the meliorative function), employed only by the governments and industries that can afford to push the envelope with the latest technology, are scientific instruments of great sensitivity; increasing the precision of the measurement of time by an order of magnitude opens up new possibilities the consequences of which cannot be predicted. What can be predicted, however, is that the present generation of high-precision measurement devices makes it possible to construct the next generation of precision measurement devices, which exceed the precision of the previous generation. A clock built to a new design that is far more precise than its predecessors (like the strontium atomic clock) may not necessarily find its cutting edge scientific application exclusively in the measurement of time (though, again, it might do that also), but as a scientific instrument of great sensitivity it suggests uses throughout the sciences. A further distinction can be made, then, between instruments used for the purposes they were intended to serve, and instruments that are exapted for unintended uses.
A loosely-coupled STEM cycle is characterized primarily by the ubiquity function, while a tightly-coupled STEM cycle is characterized primarily by the meliorative function. Human civilization has always involved a loosely-coupled STEM cycle, sometimes operating over thousands of years, with no apparent relationship between science, technology, and engineering. Technological progress was slow and intermittent under these conditions. However, the productivity of industrial-technological civilization is such that its STEM cycle yields both the ubiquity function and the meliorative function, which means that there are in fact multiple STEM cycles running concurrently, both loosely-coupled and tightly-coupled.
The research and development branch of a large business enterprise is the conscious constitution of a limited, tightly-coupled STEM cycle in which only that science is pursued that is expected to generate specific technologies, and only those technologies are developed that can be engineered into marketable products. An open loop STEM cycle, loosely-coupled STEM cycle, or exaptations of the STEM cycle are seen as wasteful, but in some cases the unintended consequences from commercial enterprises can be profound. When Arno Penzias and Robert Wilson were hired by Bell Labs, it was with the promise that they could use the Holmdel Horn Antenna for pure science once they had done the work that Bell Labs would pay them for. As it turned out, the actual work of tracing down interference resulted in the discovery of cosmic microwave background radiation (CMBR), earning Penzias and Wilson the Nobel prize. An engineering problem became a science problem: how do you explain the background interference that cannot be eliminated from electronic devices?
. . . . .
. . . . .
. . . . .
. . . . .
15 November 2014
When I find myself among conspiracy theorists and pseudo-science aficionados, I probably sound like the most relentless, ruthless, unforgiving positivist that you have ever heard. But, of course, I’m not a positivist at all. When I find myself among those educated in the sciences, I probably sound like the most woolly-headed philosopher imaginable, who seemingly takes every opportunity to needlessly complicate matters that are perfectly clear just as they are. I am caught between defending science among those innocent of science, and defending philosophy among those innocent of philosophy. In other words, I can’t win. And now I’m going to make my hopeless position worse by taking the conflict (rather, the absence of communication) between science and philosophy into the forbidden no-man’s-land of politics.
My particular dilemma is the result of understanding that science is philosophy; that is to say, science as we know it today, is a particular branch of philosophy (something that I began to explain in A Fly in the Ointment). While it may be grudgingly acknowledged that science has philosophical presuppositions, it is a step further to see science as a particular philosophy that is rather less comprehensive than the whole of philosophy. Now, it is true that science has become differentiated from the rest of philosophy because of its practical successes, but its practical successes alone are no warrant for separating methodological naturalism, i.e., science, from the rest of philosophy.
Without philosophy we cannot understand science; philosophy provides both the synchronic and the diachronic context of science. The emergence of science within western civilization is the diachronic narrative of philosophy, and the relations of science to other aspects of the world and human experience is the synchronic context of science that can only adequately be addressed by philosophy. The need for a robust engagement between science and philosophy, as is to be found, for example, in the work of Einstein, is a need that grows out of the philosophical context of science.
Previous epochs of civilization — notably, agrarian-ecclesiastical civilization — might point to their own pragmatic implementations of philosophy, no less than the successes of the sciences are heralded today. Enormous monumental building projects that still impress us today, symbols of civilization such as the pyramids, Hagia Sophia, the Taj Mahal, the Daibutsu at Nara, and Borobudur, were possible only through the effort of a philosophically unified civilization, and the monuments themselves are monuments to those civilizations and their philosophical bases.
As an example of a philosophical civilization animated from the power elites at the top down to the lowest rungs of the socioeconomic ladder I have elsewhere quoted Gregory of Nazianzus on the Christological controversies in Byzantium:
“Constantinople is full of handicraftsmen and slaves, who are all profound theologians, and preach in their workshops and in the streets. If you want a man to change a piece of silver, he instructs you in which consists the distinction between the Father and the Son; if you ask the price of a loaf of bread, you receive for answer, that the Son is inferior to the Father; and if you ask, whether the bread is ready, the rejoinder is that the genesis of the Son was from nothing.”
Another example might be the reach of stoicism in the Roman empire from the emperor Marcus Aurelius to the slave Epictetus. This philosophical character of agrarian-ecclesiastical civilization is not limited to western civilization, its predecessors, and successors, but is a planetary phenomenon.
The civilization of India is perhaps uniquely philosophical in the world. India is a civilization-state, and Indian civilization is a philosophical civilization. In this respect it is markedly different from western civilization, which has no contemporary single state representative, and which in regard to philosophy is narrower and more focused.
This can give us a certain insight into western civilization, which is not a philosophical civilization in the sense that India is, but is a fragment of a philosophical civilization. In so far as science is a particular branch of philosophy, and in so far as western civilization in its present form (industrial-technological civilization) is founded upon science as the source of the STEM cycle, western civilization is a philosophical civilization for the particular philosophy of methodological naturalism. Indeed, the very insistence today that science can do without philosophy is an expression of the philosophical narrowness of western civilization.
Much is to be learned from the comparison of the philosophies and civilizational structures of those independent civilizations that can be traced all the way to their origins in the Neolithic Agricultural Revolution, during which all agrarian-ecclesiastical civilizations had their earliest origins. But there is a problem here. In reaction against the imperialism of western civilization since that period once called the Age of Discovery, when Columbus, Magellan, Vasco da Gama, Amerigo Vespucci, Vasco Núñez de Balboa, and many others, sailed from Europe and began to survey the world entire, it is now considered in supremely bad taste to compare civilizations. The celebratory model of tolerance is almost universally adopted and every civilization is counted as a special snowflake that has something to contribute to human history.
In my post on The Future Science of Civilizations I noted Carnap’s tripartite distinction among scientific concepts, which Carnap identified as the classificatory, the comparative, and the quantitative. (We note that this typology itself takes a classificatory form, and an entire class of scientific concepts are comparative concepts.) In so far as we understand Carnap’s conceptual schema of measurement as developmental, proceeding in phases so that initial classifications lead to comparisons, and comparisons lead to quantification, all the while gaining in objectivity, Carnap’s schematism of scientific measurement embodies what Edith Wyschogrod called “the quantification of the qualitied world.”
If we take the division of classificatory, comparative, and quantitative concepts not in a developmental sense but as different approaches to a scientific grasp of the world, then each conceptual method of measurement may yield unique information about the world. In either case, whether we take these scientific concepts of measurement in developmental terms or take each in isolation, comparative concepts have a crucial role to play: either they are a stage in the development of a fully quantitative science, or they yield unique information about the world.
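The three kinds of concepts can be sketched in miniature. Temperature is the stock example in discussions of Carnap's typology; the little functions below are my own illustration of the schema, not anything taken from Carnap:

```python
# Carnap's three kinds of scientific concepts, illustrated with temperature.

# Classificatory: sorts things into classes.
def is_warm(celsius):
    return celsius > 20

# Comparative: orders things without assigning magnitudes.
def warmer(a_celsius, b_celsius):
    return a_celsius > b_celsius

# Quantitative: assigns a numerical magnitude on a scale.
def to_kelvin(celsius):
    return celsius + 273.15

# Each stage recovers what the previous one could say, and says more:
assert is_warm(25) and not is_warm(5)    # classification: warm / not warm
assert warmer(25, 5)                     # comparison refines the classification
assert to_kelvin(25) > to_kelvin(5)      # quantity refines the comparison
```

Read developmentally, each stage gains in objectivity over the last; read as independent approaches, each yields information the others do not, since a classification may capture a distinction that no single quantitative scale measures.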
We cannot fully or adequately conceptualize civilization without developing comparative concepts of civilization to the greatest extent possible, but the development and exploration of this conceptual space is severely constrained by the contemporary political proscription upon the comparison of civilizations. In this way, the study of civilization today is unnecessarily yet unavoidably political. Where we would like to discuss comparative conceptions of civilization frankly and bluntly, we are instead forced into artful euphemism and evasive speech. This is unfortunate for the development of a science of civilization, but it is not insuperable, and the appropriate degree of abstraction and formalization in a fully developed theoretical context may be sufficient to violate this taboo in spirit while leaving the letter of the proscription intact.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
11 November 2014
Wittgenstein was not himself a positivist, but his early work, Tractatus Logico-Philosophicus, had such a profound influence on early twentieth-century philosophy that the philosophy we now identify as logical positivism was born from reading groups (primarily the Vienna Circle) that got together to study Wittgenstein’s Tractatus — what I have elsewhere called The Ludwig Wittgenstein Reading Club.
Wittgenstein began his education as an engineer, and only later became interested in philosophy by way of the philosophy of mathematics then emerging from the work of Frege and Russell. It has been said that the early Wittgenstein approached philosophy like an engineer, setting out to drain the swamps of philosophy. A more familiar metaphor for Wittgenstein’s philosophy, though for the later rather than the earlier Wittgenstein, is that of philosophy as a kind of therapy:
“A philosopher is a man who has to cure many intellectual diseases in himself before he can arrive at the notions of common sense.”
Wittgenstein, Culture and Value, 1944, p. 44e
Wittgenstein does not himself use the term “therapy” or “therapeutic,” but frequently recurs to the theme in other words:
“In philosophizing we may not terminate a disease of thought. It must run its natural course, and slow cure is all important. (That is why mathematicians are such bad philosophers.)”
Wittgenstein, Zettel, 382
The idea of philosophy as therapy is not entirely new. In my Variations on the Theme of Life I noted the medieval tradition of conceiving philosophers as “doctors of the soul”:
“During late antiquity philosophers were sometimes called ‘doctors of the soul.’ Later yet, Avicenna was a practicing physician in addition to being both a logician and a philosopher, and he stands at the head of a tradition of doctor-philosophers among the Arabs. All this has a superficial resemblance to the contemporary conception of philosophy as therapy, but in reality it is the antithesis of the modern conception of philosophy as a sickness in need of therapy, of scholarship as an illness, and of the philosopher as corrupt and corrupting.”
Variations on the Theme of Life, section 767
Every age must confront the ancient and perennial questions of philosophy anew, because each age has its own, peculiar therapeutic needs. It has become a commonplace of contemporary commentary, at least since the middle of the twentieth century, that the pace and busyness of our civilization today is driving us insane, and in so far as this is true, we are more in need of therapy than previous ages.
In my previous post, Philosophy for Industrial-Technological Civilization, I suggested, contrary to Quine, that philosophy of science is not philosophy enough; that we also need philosophy of technology and philosophy of engineering, and to unify these aspects of the STEM cycle within the big picture, we need a philosophy of big history. There is only one problem with my vision for the overarching philosophy demanded by the world of today: there is no demand for it. No one is interested in my vision or, for that matter, any other vision of philosophy for the twenty-first century.
Previously I wrote three posts on contemporary anti-philosophy.
The most prestigious scientists of our time seem to be at one in their insistence upon the irrelevance of philosophy. A post on the SelfAwarePatterns blog, E.O. Wilson: Science, not philosophy, will explain the meaning of existence, brought my attention to E. O. Wilson’s recent statements belittling philosophy. SelfAwarePatterns has also written about Neil deGrasse Tyson’s “blanket dismissal of philosophy” in Neil deGrasse Tyson is wrong to dismiss all of philosophy, but he may have a point on some of it.
It is almost painful to watch Wilson’s oversimplifications in the above-linked “Big Think” piece, though I suspect his oversimplifications will have a wide and sympathetic audience. After implying the pointlessness of studying the history of philosophy and claiming that philosophy mostly consists of “failed models of how the brain works,” Wilson then appeals to the “full story of humanity” (without mentioning big history, though the interdisciplinary concatenation he mentions is very much in the spirit of big history), and formulates a point of view almost precisely the same as one I heard several times at the 2014 IBHA conference: once we have this big-picture view of history, we no longer need to ask what the meaning of life is, because we will know it.
The inescapable reflexivity of philosophical thought means that any principled rejection of philosophy is itself a philosophical claim; unprincipled rejections, that is to say, dismissals without reason or argument, have no more standing than any other unprincipled claim. So the scientists who dismiss philosophy and give reasons for doing so are doing philosophy. The unfortunate consequence is that they are doing philosophy poorly, much like someone who dismisses science but pontificates on matters scientific, and does so poorly. We are well familiar with this, as pseudo-science has been given a megaphone by the internet and other forms of mass media. Scientists are aware of the problem posed by pseudo-science, but seem to be blissfully unaware of the problem of pseudo-philosophy.
There is a book by Louis Althusser, Philosophy and the Spontaneous Philosophy of Scientists, that I have cited previously (in Fashionable Anti-Philosophy) since the title is so evocative, in which Althusser says, “…in every scientist there sleeps a philosopher or, to put it another way, that every scientist is affected by an ideology or a scientific philosophy which we propose to call by the conventional name: the spontaneous philosophy of the scientists…” It is this spontaneous philosophy of scientists that we see in the anti-philosophical pronouncements of E. O. Wilson and Neil deGrasse Tyson.
Not only eminent scientists, but also science popularizers share this attitude. Michio Kaku’s recent book, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind, is essentially a speculative work in the philosophy of mind. There is a pervasive yet implicit Kantianism running through Kaku’s book of which I am sure he is unaware, because, like most scientists today who write on philosophical topics, he has not bothered to study the philosophical literature. If one knows that one is arguing a neo-Kantian position on the transcendental aesthetic, in trying to come to terms with how the barrage of sensory data is somehow translated into an apparently smooth and unitary stream of consciousness, then one can simply consult the literature to learn where the argument over the transcendental aesthetic stands today, and what the standard arguments are for and against contemporary Kantianism; but without this basic knowledge, one does little more than repeat what has already been said, better, by others, and long ago. Even Sam Harris, who has some background in philosophy, gives his exposition of determinism in a philosophical vacuum, as though the work of philosophers such as Robert Kane, Helen Steward, and Alfred R. Mele simply did not exist, or is beneath notice.
The anti-philosophy and pseudo-philosophy of prominent scientists is an instance of the spontaneous philosophy noted by Althusser. But this spontaneous expression of uninformed philosophical speculation does not come out of nowhere; it has a basis, albeit dimly understood, in the nature of science itself. What is the nature of science itself? I have an answer to this, but it is not an answer that will be welcome to most of those in science today: science is philosophy. That is to say, science is a particular branch of philosophy, that branch once called natural philosophy, and it is natural philosophy practiced in accordance with methodological naturalism. Science is a narrow slice of a far more comprehensive conception of the world.
Scientists are philosophers without realizing they are philosophers, and when they pronounce upon philosophical questions without reference to the philosophical tradition — which is much broader and more pluralistic than any single branch of philosophy, such as natural philosophy — they do little more than restate their presuppositions as principles. Given the preeminent role of science within industrial-technological civilization, this willful ignorance of philosophy, and of the position of science in relation to philosophy, is not only holding back both science and philosophy, it is holding back civilization.
The next stage of development of our civilization (not to mention the macro-evolution of our civilization into another kind of civilization) will not come about until science utterly abandons the positivistic assumptions that are today the unquestioned yet implicit presuppositions of scientific inquiry, and science extends the scientific method, and the sense of responsibility to empirical evidence, beyond the confines of any one branch of philosophy to the whole of philosophy. To paraphrase Plato, until philosophers theorize as scientists or those who are now called scientists and leading thinkers genuinely and adequately philosophize, that is, until science and philosophy entirely coincide, while the many natures who at present pursue either one exclusively are forcibly prevented from doing so, civilization will have no rest from evils… nor, I think, will the human race.
. . . . .