22 April 2017
Science has a political problem, but science as an institution is not prepared to face up to it. Worse, institutionalized science is prepared to dig itself deeper into its political problem with the March for Science today, which will present scientists to the public as activists.
Science is an institution of western civilization — I would argue the central institution of contemporary western civilization — which latter is, in turn, a macro-institution made up of many other institutions. Big science means institutionalized science; institutionalized science means, in turn, an institution integrated with other institutions, including political institutions. So, as many of the backers of the March for Science have insisted, science cannot avoid being political. But not being able to avoid political entanglements is quite a different matter from consciously and purposefully promoting, in the mind of the public, science as a form of activism and the scientist as an activist.
Lawrence M. Krauss touched on part of the problem in an article for Scientific American, “March for Science or March for Reality? Hostility toward the former is troublesome, but hostility toward the latter is the underlying issue,” in which he wrote, “The March for Science could then appear as a self-serving political lobbying effort by the scientific community to increase its funding base.” But the problem is not only the appearance of self-interest; it is also the appearance of serving an ideology.
Krauss cited Richard Feynman to the effect that, for a successful technology, reality must take precedence over public relations, for Nature cannot be fooled, and Philip K. Dick to the effect that reality is that which continues to exist even when you stop believing in it. Krauss does not cite the also applicable quote from Ayn Rand: “We can ignore reality, but we cannot ignore the consequences of ignoring reality.” This oversight is understandable; Ayn Rand is quite clearly not the kind of figure that the organizers or supporters of the March for Science would want to invoke. The whole populist movement and its isolationist orientation are far too redolent of Rand’s character John Galt. The fact that Ayn Rand doesn’t fit the March for Science narrative tells us something important about the implicit politics of the March for Science.
Though the organizers of the March for Science have made a point of emphasizing the non-partisan nature of the march, this claim is disingenuous, and, indeed, those marchers who insist that science cannot avoid being political are explicitly recognizing the political nature of the march.
Inevitably, the March for Science has become political, despite protestations to the contrary, and it has become political in ways that the organizers would prefer not to recognize. You can read about this in Why the ‘March for Science’ Is in Turmoil: A departure from leadership is highlighting diversity issues less than a week before the march by Tanya Basu, which discusses the departure from the organizers of Jacquelyn Gill, who posted a series of remarks on Twitter explaining the reasons for her departure.
Although institutionalized science has bent over backward to accommodate the hypersensitive contemporary university climate and the sometimes bizarre, sometimes petty demands it places upon scholars and researchers, the complaint is that the march has been insufficiently solicitous of those who would play the victim card (and of those who claim to be the representatives of the oppressed and the downtrodden), and whose demands for activism on the part of institutionalized science have not been met to their satisfaction. (Note: these demands cannot be met, and are not intended to be met; they are intended, rather, to be used as a cudgel against those in positions of power.)
There was an article in Nature (one of the world’s leading science journals), How the March for Science splits researchers: Nature asked members of the scientific community whether or not they plan to march on 22 April — and why by Erin Ross, which included a quote from Nathan Gardner, who put his finger on the problem:
“I am not going to the March for Science, because people in America view science as leftist. Maybe it’s because [former US vice-president] Al Gore launched ‘An Inconvenient Truth’. I’ve seen articles from right-wing outlets that are framing the march as focusing on gender equality and identity politics. I think it could easily politicize science because, even though the march’s mission statement isn’t anti-Trump, the marchers seem anti-Trump.”
This, in a nutshell, is science’s political problem, the problem it does not want to acknowledge, and the problem it is not prepared to address, because to address it head-on would be too painful. There has been a lot of talk about respecting the evidence and the need for a frank recognition of what science tells us, but this commitment is exercised lopsidedly. If you want to talk about hostility to reality, as Krauss would have it, consider the institutional response to scientists who have dared to research “no go” areas of knowledge that contradict the dominant social narrative of our time.
In recent decades, science has largely respected the “no go” areas of the left, and has sometimes enthusiastically embraced the ideological agenda of the left. (Jonathan Haidt and his Heterodox Academy have been particularly effective in pointing out the lack of diversity of opinion in academic science.) While the left has had its “no go” areas largely respected, the “no go” areas of the right and of traditionalists have not been respected, and it is not at all unusual to see their failures gleefully pointed out in the spirit of iconoclasm. Certainly, there was a time in the past when academic institutions slavishly respected the “no go” areas of the traditionalists, but these days are long behind us. And I am certainly not suggesting that anyone’s “no go” areas should be respected. Ideally, scientific research would take place without respect to anyone’s feelings or ideologies, but it is dishonest to carefully avoid offending one side while poking and prodding the other side.
While I think that the March for Science will do more harm than good, it is not likely to have much of an impact, so if it makes people feel good about themselves to go marching and waving signs and chanting call-and-response rituals, it probably doesn’t matter much. The loss to science will be only incremental. But if it is followed by more incremental politicization of science, then our entire civilization will be threatened by the death of a thousand cuts to the ideal of an objective, disinterested, and dispassionate science that tells us as much as we are capable of understanding at present, whether we want to hear it or not. There is no tonic for the soul quite like an unwelcome truth, and science has been masterful at administering these draughts in the past. I hope that science does not lose this talent.
. . . . .
. . . . .
. . . . .
18 March 2017
Many years ago, reading a source I cannot now recall (and for which I searched unsuccessfully when I started writing this post), I came upon a passage that has stayed with me. The author was making the argument that no sciences were consistent except those that had been reduced to mere catalogs of facts, like geography and anatomy. I can’t recall the larger context in which this argument appeared, but the observation that sciences might only become fully consistent when they have matured to the point of being exhaustive but static and uninteresting catalogs of facts, implying that the field of research itself had been utterly exhausted, was something I remembered. This idea presents in miniature a developmental conception of the sciences, but I think that it is a developmental conception that is incomplete.
Thinking of this idea of an exhausted field of research, I am reminded of a discussion in Conversations on Mind, Matter, and Mathematics by Jean-Pierre Changeux and Alain Connes, in which mathematician Alain Connes distinguished between fully explored and as yet unexplored parts of mathematics:
“…the list of finite fields is relatively easy to grasp, and it’s a simple matter to prove that the list is complete. It is part of an almost completely explored mathematical reality, where few problems remain. Cultural and social circumstances clearly serve to indicate which directions need to be pursued on the fringe of current research — the conquest of the North Pole, to return again to my comparison, surely obeyed the same type of cultural and social motivations, at least for a certain time. But once exploration is finished, these cultural and social phenomena fade away, and all that’s left is a perfectly stable corpus, perfectly fitted to mathematical reality…”
Jean-Pierre Changeux and Alain Connes, Conversations on Mind, Matter, and Mathematics, Princeton: Princeton University Press, 1995, pp. 33-34
To illustrate a developmental conception of mathematics and the formal sciences would introduce additional complexities that follow from the not-yet-fully-understood relationship between the formal sciences and the empirical sciences, so I am going to focus on developmental conceptions of the empirical sciences, but I hope to return to the formal sciences in this connection.
The idea of the development of science as a two-stage process, with discovery followed by a consistent and exhaustive catalog, implies both that most sciences (and, if we decompose the individual special sciences into subdivisions, parts of most or all sciences) remain in the discovery phase, and that once the discovery phase has passed and we are in possession of an exhaustive and complete catalog of the facts discovered by a science, there is nothing more to be done in a given science. However, I can think of several historical examples in which a science seemed to be converging on a complete catalog, but this development was disrupted (one might say) by conceptual change within the field that forced the reorganization of the materials in a new way. My examples will not be perfect, and some additional scientific discovery always seems to have been involved, but I think that these examples will be at least suggestive.
Prior to the great discoveries of cosmology in the early twentieth century, after which astronomy became indissolubly connected to astrophysics, astronomy seemed to be converging slowly upon an exhaustive catalog of all stars, with the limitation on the research being simply the resolving power of the telescopes employed to view the stars. One could imagine a counterfactual world in which technological innovations in instrumentation supplied nothing more than new telescopes able to resolve more stars, and that the task of astronomy was merely to supply an exhaustive catalog of stars, listing their position in the sky, intrinsic brightness, and a few other facts about the points of light in the sky. But the cataloging of stars itself contributed to the revolution that would follow, particularly when the period-luminosity relationship in Cepheid variable stars was discovered by Henrietta Swan Leavitt (discovered in 1908 and published in 1912). The period-luminosity relationship provided a “standard candle” for astronomy, and this standard candle began the process of constructing the cosmological distance ladder, which in turn made it possible to identify Cepheid variables in the Andromeda galaxy and thus to prove that the Andromeda galaxy was two million light years away and not contained within the Milky Way.
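The distance-ladder step that Leavitt’s discovery made possible can be sketched numerically. The calibration coefficients below are an approximate modern period-luminosity relation for classical Cepheids, not Leavitt’s original figures, and the sample values are illustrative rather than actual observations:

```python
import math

def cepheid_distance_ly(period_days: float, apparent_mag: float) -> float:
    """Estimate the distance to a classical Cepheid from its pulsation
    period and mean apparent magnitude.

    Uses an approximate modern period-luminosity calibration,
    M_V ~ -2.43 * (log10 P - 1) - 4.05; the exact coefficients vary
    from study to study, so treat this as a sketch, not a reference.
    """
    # Leavitt law: the period fixes the star's absolute magnitude.
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 * log10(d_parsecs) - 5
    d_parsecs = 10 ** ((apparent_mag - abs_mag + 5.0) / 5.0)
    return d_parsecs * 3.26156  # parsecs to light years

# A 10-day Cepheid observed at apparent magnitude 19 comes out on the
# order of a million light years away -- far outside the Milky Way,
# which is the shape of the argument that settled the status of M31.
```

The logical point is that the catalog entries (period, apparent brightness) were already in hand; the Leavitt relation turned them into distances, and the catalog became a cosmology.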
Once astronomy became scientifically coupled to astrophysics, and the resources of physics (both relativistic and quantum) could be brought to bear upon understanding stars, a whole new cosmos opened up. Stars, galaxies, and the universe entire were transformed from something static that might be exhaustively cataloged, to a dynamic and changing reality with a natural history as well as a future. Astronomy went from being something that we might call a Platonic science, or even a Linnaean science, to being an historical science, like geology (after Hutton and Lyell), biology (after Darwin and Wallace), and paleontology. This coupling of the study of the stars with the study of the matter that makes up the stars has since moved in both directions, with physics driving cosmology and cosmology driving physics. One result of this interaction between astronomy and physics is the illustration above (by Jennifer Johnson) of the periodic table of elements, which prominently exhibits the origins of the elements in cosmological processes. The periodic table once seemed, like a catalog of stars, to be something static to be memorized, and divorced from natural history. This conceptualization of matter in terms of its origins puts the periodic table in a dramatically different light.
As the cosmos was once conceived in Platonic terms as fixed and eternal, to be delineated in a Linnaean science of taxonomical classification, so too the Earth was conceived in Platonic terms as fixed and eternal, to be similarly delineated in a Linnaean science of classification. The first major disruption of this conception came with the geology of Hutton and Lyell, followed by plate tectonics and geomorphology in the twentieth century. Now this process has been pushed further by the idea of mineral evolution. I have been listening through for the second time to Robert Hazen’s lectures The Origin and Evolution of Earth: From the Big Bang to the Future of Human Existence, an exposition that closely follows the content of his book, The Story of Earth: The First 4.5 Billion Years, from Stardust to Living Planet, in which Hazen wrote:
“The ancient discipline of mineralogy, though absolutely central to everything we know about Earth and its storied past, has been curiously static and detached from the conceptual vagaries of time. For more than two hundred years, measurements of chemical composition, density, hardness, optical properties, and crystal structure have been the meat and potatoes of the mineralogist’s livelihood. Visit any natural history museum, and you’ll see what I mean: gorgeous crystal specimens arrayed in case after glass-fronted case, with labels showing name, chemical formula, crystal system, and locality. These most treasured fragments of Earth are rich in historical context, but you will likely search in vain for any clue as to their birth ages or subsequent geological transformations. The old way all but divorces minerals from their compelling life stories.”
Robert M. Hazen, The Story of Earth: The First 4.5 Billion Years, from Stardust to Living Planet, Viking Penguin, 2012, Introduction
This illustrates, from the perspective of mineralogy, much of what I said above in relation to star charts and catalogs: mineralogy was once about cataloging minerals, an undertaking that would have been finite, complete once all minerals had been isolated, identified, and cataloged. Now, however, we can understand mineralogy in the context of cosmological history, and this is as revolutionary for our understanding of Earth as is the periodic table understood in terms of cosmological history. It could be argued, in addition, that compiling the “particle zoo” of contemporary particle physics is also a task of cataloging the entities studied by physics, but the cataloging of particles has been attended throughout with a theory of how these particles are generated and how they fit into the larger cosmological story — what Aristotle would have called their coming to be and passing away.
The best contemporary example of a science still in its initial phases of discovery and cataloging is the relatively recent confirmation of exoplanets. On my Tumblr blog I recently posted On the Likely Existence of “Random” Planetary Systems, which tried to place our current Golden Age of Exoplanet Discovery in the context of a developing science. We find the planetary systems that we do in fact find partly as a consequence of observation selection effects, and it belongs to the later stages of the development of a science to attempt to correct for observation selection effects built into the original methods of discovery employed. The planetary science emerging from exoplanet discoveries, however, like contemporary particle physics, is attended by theories of planet formation that take cosmological history into account. Still, the discovery phase for exoplanets is underway and very new, and we have a lot to learn. Moreover, once we learn more about the possibilities of planets in our universe, we will, one hopes, also learn about the varied possibilities of planetary biospheres, and, given the continual interaction between biosphere, lithosphere, atmosphere, and hydrosphere that is a central motif of Hazen’s mineral evolution, we will be able to place planets and their biospheres into a larger cosmological context (perhaps even reconstructing biosphere evolution). But first we must discover them, and then we must catalog them.
These observations, I think, have consequences not only for our understanding of the universe in which we find ourselves, but also for our understanding of science. Perhaps, instead of a two-stage process of discovery and taxonomy, science involves a three-stage process of discovery, taxonomy, and natural history, in which latter the objects and facts cataloged by one of the special sciences (earlier in their development) can take their place within cosmological history. If this is the case, then big history is the master category not only of history, but also of science, as big history is the ultimate framework for all knowledge that bears the lowly stamp of its origins. This conception of the task of science, once beyond the initial stages of discovery and classification, to integrate that which was discovered and classified into the framework of big history, suggests a concrete method by which to “cash out” in a meaningful way Wilfrid Sellars’ contention that, “…the specialist must have a sense of how not only his subject matter, but also the methods and principles of his thinking about it, fit into the intellectual landscape.” (cf. Philosophy and the Scientific Image of Man) Big history is the intellectual landscape in which the sciences are located.
A developmental conception of science that recognizes stages in the development of science beyond classification, taxonomy, and an exhaustive catalog (which is, in effect, the tombstone of what was a living and growing science), has consequences for the practice of science. Discovery may well be the paradigmatic form of scientific activity, but it is not the only form of scientific activity. The painstakingly detailed and disciplined work of cataloging stars or minerals is the kind of challenge that attracts a certain kind of mind with a particular interest, and the kind of individual who is attracted to this task of systematically cataloging entities and facts is distinct from the kind of individual who might be most attracted by scientific discovery, and also distinct from the kind of individual who might be attracted to fitting the discoveries of a special science into the overall story of the universe and its natural history. There may need to be a division of labor within the sciences, and this may entail an educational difference. Dividing sciences by discipline (and, now, by university departments), which involves inter-generational conflicts among sciences and the paradigm shifts that sometimes emerge as a result of these conflicts, may ultimately make less sense than dividing sciences according to their stage of development. Perhaps universities, instead of having departments of chemistry, geology, and botany, should have departments of discovery, taxonomy, and epistemic integration.
Speaking from personal experience, I know that (long ago) when I was in school, I absolutely hated the cataloging approach to the sciences, and I was bored to tears by memorizing facts about minerals or stars. But the developmental science of evolution so intrigued me that I read extensively about evolution and anthropology outside and well beyond the school curriculum. If mineral evolution and the Earth sciences in their contemporary form had been known then, I might have had more of an interest in them.
What are the sciences developing into, or what are the sciences becoming? What is the end and aim of science? I previously touched on this question, a bit obliquely, in What is, or what ought to be, the relationship between science and society? though this line of inquiry is more like a thought experiment. It may be too early in the history of the sciences to say what they are becoming or what they will become. Perhaps an emergent complexity will arise out of knowledge itself, something that I first suggested in Scientific Historiography: Past, Present, and Future, in which I wrote in the final paragraph:
We cannot simply assume an unproblematic diachronic extrapolation of scientific knowledge — or, for that matter, historical knowledge — especially as big history places such great emphasis upon emergent complexity. The linear extrapolation of science eventually may trigger a qualitative change in knowledge. In other words, what will be the emergent form of scientific knowledge (the ninth threshold, perhaps?) and how will it shape our conception of scientific historiography as embodied in big history, not to mention the consequences for civilization itself? We may yet see a scientific historiography as different from big history as big history is different from Augustine’s City of God.
It is only a lack of imagination that would limit science to the three stages of development I have outlined above. There may be developments in science beyond those we can currently understand. Perhaps the qualitative emergent from the quantitative expansion of scientific knowledge will be a change in science itself — possibly a fourth stage in the development of science — that will open up to scientific knowledge aspects of experience and regions of nature currently inaccessible to science.
. . . . .
. . . . .
. . . . .
. . . . .
2 May 2016
Darwin’s Thesis on the Origin of Civilization
and its extrapolation to exocivilizations
In the scientific study of civilization we are beginning at the beginning, because there is no established body of scientific knowledge about civilization — much historical knowledge, to be sure, but no science of civilization sensu stricto, and therefore no scientific knowledge of civilization sensu stricto — and this demands that we begin with the simplest and most obvious propositions about civilization. These are propositions that most discussions of civilization would simply pass over in silence as necessary presuppositions, or dismiss with hand-waving and the assertion, “It is obvious that…” We will take a different point of view. Only a mathematician would think that the Jordan curve theorem was an idea in need of proof, and only someone engaged in attempting to formulate a science of civilization would think that the proposition that civilization originates in a pre-civilized condition requires discussion.
Our point of departure in this discussion will be what I call Darwin’s Thesis on the origins of civilization, or, more simply, Darwin’s Thesis. I call this Darwin’s Thesis (and called it such in my presentation “What kind of civilizations build starships?”) because of the following passage from Darwin about the origins of civilization:
“The arguments recently advanced… in favour of the belief that man came into the world as a civilised being and that all savages have since undergone degradation, seem to me weak in comparison with those advanced on the other side. Many nations, no doubt, have fallen away in civilisation, and some may have lapsed into utter barbarism, though on this latter head I have not met with any evidence… The evidence that all civilised nations are the descendants of barbarians, consists, on the one side, of clear traces of their former low condition in still-existing customs, beliefs, language, &c.; and on the other side, of proofs that savages are independently able to raise themselves a few steps in the scale of civilisation, and have actually thus risen.”
Charles Darwin, The Descent of Man, Chapter V (I have left Darwin’s spelling in its Anglicized form.)
Darwin was here taking the same naturalistic stance in regard to civilization that he had earlier taken in regard to biology. Darwin made biology scientific by making it a domain of research approached by way of methodological naturalism; prior to Darwin there was biology of a kind, but not any study of biology that could be reconciled with methodological naturalism. Darwin applied this same reasoning to civilization, and this is the reasoning we must apply to civilization if we are to formulate a science of civilization that can be reconciled with methodological naturalism.
As far as ideas about civilization go, this is extremely basic. However, I will again stress the need to begin a science of civilization with the most basic and rudimentary propositions possible. While this is a proposition so rudimentary as to be mundane, there can be no more interesting question for the science of civilization than that of the origin of civilization (the question of the end of civilization is equally interesting, but I wouldn’t say it is more interesting).
While the simplest theses on civilization seem so mundane as to be uninteresting, they can nevertheless be deductively powerful in their application. We can address the longevity of a civilization, for example, only once we have established a point in time at which that civilization begins, counting forward in whatever temporal units we care to employ up to its demise (which must also be defined, if the civilization in question has come to an end), or up to the present day (if the civilization in question is still in existence).
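Mundane as the proposition is, its deductive use can be made concrete. A minimal sketch in Python, where the function and its conventions (year numbering, the 2016 default) are illustrative and not part of any established science of civilization:

```python
from typing import Optional

def civilization_longevity(origin_year: int,
                           demise_year: Optional[int] = None,
                           present_year: int = 2016) -> int:
    """Longevity of a civilization in years, counted from a defined
    origin either to a defined demise or, if the civilization is
    still in existence, to the present day. Negative years are BCE."""
    end = present_year if demise_year is None else demise_year
    if end < origin_year:
        raise ValueError("demise cannot precede origin")
    return end - origin_year

# Dating our own civilization from the emergence of cities
# (c. 8000 BCE) rather than from written language (c. 3000 BCE)
# roughly doubles its longevity: about ten thousand years
# instead of about five thousand.
```

The interesting work, of course, is hidden in the arguments: every input presupposes a definition of origin (and, where applicable, of demise), which is exactly why the origin question must be settled first.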
According to Darwin’s Thesis, then, civilization is descended from a prior savage or barbaric condition (not terms we would likely employ today, but certainly terms we still understand). How are we to characterize this pre-civilized condition of humanity? What constitutes the non-civilization that preceded civilization?
A somewhat discerning distinction, albeit one with moral overtones, was made between savagery, barbarism, and civilization. Like the “three age” system of prehistory — stone age, bronze age, iron age — we still find traces of these distinctions in contemporary thought. Here is how I described it previously:
“Edward Burnett Tylor proposed that human cultures developed through three basic stages consisting of savagery, barbarism, and civilization. The leading proponent of this savagery-barbarism-civilization scale came to be Lewis Henry Morgan, who gave a detailed exposition of it in his 1877 book Ancient Society… A quick sketch of the typology can be found at Anthropological Theories: Cross-Cultural Analysis. One of the interesting features of Morgan’s elaboration of Tylor’s idea is his concern to define his stages in terms of technology. From the ‘lower status of savagery’ with its initial use of fire, through a middle stage at which the bow and arrow is introduced, to the ‘upper status of savagery’ which includes pottery, each stage of human development is marked by a definite technological achievement. Similarly with barbarism, which moves through the domestication of animals, irrigation, metal working, and a phonetic alphabet.”
Elsewhere I suggested that the non-civilization prior to civilization could be called proto-civilization. I just re-read my post on proto-civilization and now I find it inadequate, but I still endorse at least this much of what I said there:
“In the case of civilization, a state-of-affairs existed long before the idea of civilization was made explicit. But in projecting the idea of civilization backward in history, we already have the idea suggested by a particular cultural milieu, and the question becomes whether this idea can be applied further than the context in which it was initially proposed.”
This would be one methodology to employ: take the concept of civilization as it has been elaborated and seek to apply it to past social structures; determining at what point this concept no longer applies gives a point in time for the origin of civilization. This could be called the “retroactive method.”
We now possess far more archaeological data than we did at the time the concept of civilization was first formulated, so the retroactive method has new information to work with that was unavailable when the concept took shape. This is one of the points that I attempted to make, however poorly I did so, in my post on proto-civilization: we have an enormous amount of archaeological data on the Upper Paleolithic and Early Neolithic in the Old World, which is usually described in terms of “cultures” rather than “civilizations.” But when European explorers of the Early Modern period came to the New World, they encountered peoples that had social institutions that we today call civilizations, though these civilizations were closer to the “Stone Age” of the Old World than to the early civilizations of Egypt and Mesopotamia (to take two paradigm cases of civilization).
An alternative to the retroactive method would be to study the artifacts of the past on their own merits, to construct a definition of civilization on the basis of the earliest known human societies (on the basis of their material culture), and then apply this conception of civilization forward in time (for lack of a better term I will call this the proactive method, simply to contrast it to the retroactive method). It is arguable that some archaeologists do in fact follow this method, but I don’t know of anyone who has explicitly advanced this procedure as desirable (much less as necessary), although it does bear some resemblance to the implicit formalism of the cultural processual school in archaeological thought.
Both retroactive and proactive methods incorporate obvious problems that derive from parachronic distortions of evidence (the most obvious parachronism is the familiar idea of an anachronism, i.e., a survival from the past preserved into the present, where it is obviously out of place; the contrary parachronic distortion is that of projecting the present into the past).
To pull back from the provincial considerations of civilization studied by archaeology to date — that is to say, exclusively terrestrial civilizations — we can further develop the idea of Darwin’s Thesis in a cosmological context. Once we do this, we immediately understand that we have been asking questions focused on a particular set of conditions that are characteristic of civilizations during the Stelliferous Era, and our ideas worked out for terrestrial civilization (civilizations of planetary endemism during the Stelliferous Era) may not apply more generally to the largest scales of civilization achieved (or which may yet be achieved) in the cosmos.
Civilizations during the Degenerate Era may possess a different character due to their need to derive energy flows from sources other than stellar flux, which latter defines the conditions of the origins of civilization from intelligent biological agents during the Stelliferous Era, which might also be called the Age of Planetary Endemism. If the Degenerate Era begins with the universe having been exhaustively settled or inhabited by life and civilization, this densely inhabited universe not only would prevent the emergence of new civilizations, but also would mean an end to this living cosmos of starlight. In this case the Degenerate Era begins with what I have called the End-Stelliferous Mass Extinction Event (ESMEE), when widely distributed life and civilization of the Stelliferous Era, primarily supported by energy flows from stellar flux (and concentrated on planetary surfaces), comes to an end as the stars wink out one by one.
The cohort of emergent complexity that survives this transition is likely to be a post-civilization successor institution that is (by this time in the evolution of the universe) further removed from the origins of civilization than we are today removed from the origin of the universe. At this point, the origins of emergent complexity will be a distant question, largely inapplicable to contemporaneous concerns, and the central question will be what of the Stelliferous Era can survive into the Degenerate Era, and how it can perpetuate itself in a universe converging on heat death.
Would these civilizations of the Degenerate Era be newly originating civilizations, or would they be derivative from civilizations of the Stelliferous Era? The obvious answer would seem to be that these civilizations would be derivative, except that over such cosmological spans of time the concept of civilization (and the threshold of what constitutes a civilization) is likely to evolve as much as, if not more than, civilization itself. As civilization develops, and a greater degree of science, technology, and intellectual achievement is believed to be indispensable to what constitutes civilization, civilization may be redefined as something close to prevailing conditions, and everything prior to this is redefined as proto-civilization. For example, civilization today might be considered unimaginable without the conveniences of modern life, and everything prior is consigned to barbarism. This reasoning can be extended to hold that civilization is unimaginable without fusion energy, without strong AI, without interstellar travel, and so on. All of this is entirely consistent with Darwin’s Thesis, which holds regardless of whether we consider the Upper Paleolithic to be utter savagery, or 2016 to be utter savagery.
If we consciously make an effort to formulate and to retain a comprehensive conception of civilization that is not continually revised forward in time in light of the later developments of civilization, we can avoid the above problem, and it is this approach that gives us longer ages for our civilization today. I have often mentioned that it was once commonplace, and perhaps still is, to fix the origins of civilization with the origins of written language (i.e., the origins of the “historical period” sensu stricto), but scientific historiography has been slowly chipping away at the distinction between history and prehistory until it is no longer tenable. Hence I identify the origins of civilization with the emergence of cities during or shortly after the Neolithic Agricultural Revolution, which makes our civilization about ten thousand years old, rather than five thousand years old.
As our archaeological knowledge of the past improves, we may be able to set quantifiable conditions for the origins of civilization (say, a number of cities with a given population size, or a particular degree of sophistication in metallurgy, which latter seems to me to mark the ultimate origins of technological civilization). Again, Darwin’s Thesis is entirely in accord with this method also. Moreover, I think that this method gives a greater degree of independence to the determination of the origins of civilization, as it would also give us metrics by which we could determine the independent origin of a new civilization, say, even in the Degenerate Era, if this were to prove possible (which we really don’t know at present).
Beyond these concerns, and beyond the immediate scope of this post, we may need to posit a condition for the continuity of civilization — say, e.g., that metallurgical technology never lapses below a certain threshold — so that, given Darwin’s Thesis and some definition of civilization, we can determine when a civilization has originated de novo, and when a civilization is an evolutionary mutation of an earlier civilization, or a developmental achievement of an earlier civilization, rather than something new in history. This applies whether we take the threshold of achievement to be the smelting of copper or the building of starships. For example, if a civilization can smelt copper (or better), and never loses this technological capacity, it retains a minimal degree of continuity with the first civilization capable of this achievement, so long as an unbroken continuity of this capacity can be shown from the origins of this technology forward to some arbitrary date in the future.
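This continuity condition can be given a minimal computational sketch. The following Python fragment is purely illustrative (the function name, the capacity scale, and the threshold value are all hypothetical, not anything proposed in the text): it checks whether a time series of technological capacity ever lapses below a threshold once that threshold has been reached.

```python
# Hypothetical sketch of the continuity condition: a civilization counts as
# continuous with the first civilization to reach a capability threshold
# only if, once reached, the capability never lapses below that threshold.

def continuous_since_origin(capacity_series, threshold):
    """Return True if capacity, once at or above threshold, never lapses."""
    reached = False
    for level in capacity_series:
        if level >= threshold:
            reached = True
        elif reached:
            return False  # capacity lapsed: any later recovery is de novo
    return reached

# Example on an arbitrary capacity scale where 3 stands for copper smelting:
print(continuous_since_origin([1, 2, 3, 4, 5], threshold=3))  # True
print(continuous_since_origin([1, 3, 2, 4], threshold=3))     # False
```

On this sketch, a civilization whose capacity recovers after a lapse would count as a new civilization rather than a continuation, which is one way of operationalizing the distinction between de novo origination and derivation from an earlier civilization.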
. . . . .
. . . . .
. . . . .
. . . . .
A Wittgensteinian Approach to Civilization
One of my most frequently accessed posts is titled following Wittgenstein’s Tractatus Logico-Philosophicus section 5.6, “The limits of my language are the limits of my world” (“Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt”). I noted in Contextualizing Wittgenstein that this earlier post on Wittgenstein was posted on Reddit and as a result gained a large number of views — a larger number, at least, than my posts usually receive.
If there is a general principle that can be derived from Tractatus 5.6, one application of this general principle would be the idea that the limits of science are the limits of scientific civilization. In the same vein we could assert that the limits of agriculture are the limits of agrarian civilization (or even, “the limits of agriculture are the limits of biocentric civilization”), and the limits of technology are the limits of technological civilization, and so forth. Another way to express this idea would be to say, the limits of science are the limits of industrial-technological civilization, in so far as our industrial-technological civilization belongs to the genus of scientific civilizations.
Recently I have taken up the problem of scientific civilizations in Folk Concepts of Scientific Civilization, Types of Scientific Civilization, Suboptimal Civilizations, Addendum on Suboptimal Civilizations, David Hume and Scientific Civilization, The Relevance of Philosophy of Science to Scientific Civilization, and Addendum on the Stages of Civilization, inter alia. None of this, as yet, is a systematic treatment of the idea of scientific civilization, though there are many ideas here that can some day be integrated into a more comprehensive synthesis.
What does it mean to live in a scientific civilization constrained by the limits of science? One of the points that I sought to make in my earlier post on Tractatus 5.6 was a scientific interpretation of Wittgenstein’s aphorism, acknowledging that the different idioms we employ to think about the world involve different conceptions of the world. In that post I wrote, “…scientific theories often broaden our horizons and allow us to see and to understand things of which we were previously unaware. But a scientific theory, being a particular idiom as it is, may also limit us, and limit the way we see the world.” This is part of what it means to be constrained by the limits of science: our scientific idioms constrain the conceptual framework we use to understand ourselves and our civilization.
Significantly in this context, different scientific idioms are possible. Indeed, distinct sciences are possible. We have had an historical succession of scientific idioms, which could also be called a succession of distinct sciences — something that could be presented as a Wittgensteinian formulation of Thomas Kuhn — according to which one scientific paradigm has replaced another over time. There is also the unrealized possibility of different origins of science, and different developmental pathways of science, in different civilizations. This is an idea I explored in Types of Scientific Civilization.
A civilization might develop science in a different way than science emerged in terrestrial history. A civilization might begin with a different mathematical formalism or a different logic. Perhaps logic itself might begin with the kind of logical pluralism we know today, which would contrast sharply with the logical monism that has marked most of human history. Different sciences might develop in a different order. The ancient Greeks developed an axiomatic geometry, but no scientific biology. But the idea of natural selection is, in itself, no more difficult than the idea of axiomatic geometry, and could have developed first.
A civilization might fail to develop axiomatic geometry and instead develop a scientific biology in its earliest history — its equivalent of our classical antiquity — and this kind of early biological knowledge would probably take agricultural civilization in a profoundly different direction. There may be (somewhere in the universe) scientific agrarian civilizations that are qualitatively distinct from both agrarian-ecclesiastical civilization and industrial-technological civilization. Thus the developmental sequence of sciences in a civilization — which sciences are developed in what order, and to what extent — will shape the scientific civilization that eventually emerges from this sequence (if it does in fact emerge). Is this sequence an historical accident? That is a difficult question that I will not attempt to answer at present.
There are, then, many possibilities for scientific civilizations, and we have not, with the history of terrestrial civilizations, fully explored (much less exhausted) these possibilities. But scientific civilizations also come with limitations that are intrinsic to scientific knowledge. In my Centauri Dreams post, “The Scientific Imperative of Human Spaceflight,” I argued that the science of industrial-technological civilization, essentially narrowed by its participation in the STEM cycle that drives our civilization, is riddled with blind spots, and these blind spots mean that the civilization built on this science is riddled with blind spots.
This should not be a surprising conclusion, though I suspect few will agree with me. There is a comment on my Centauri Dreams post that implies I am arguing for the role of mystical experiences in civilization; this is not my purpose or my intention. This is simply a misunderstanding. But, in fact, the better I am understood probably the less likely it will be that others will agree with me. In another context, in A Fly in the Ointment, I argued that science is a particular branch of philosophy — that philosophy also known as methodological naturalism — which subverts the view (predictably prevalent in industrial-technological civilization) that if philosophy has any legitimacy at all, it is because it is a kind of marginal science in its own right. More often, philosophy is simply viewed as a kind of failed science.
Philosophy is not a kind of science. Science, on the contrary, is a kind of philosophy. This is not a common view today, but that is my framework for interpreting and understanding scientific civilization. It follows from this that a philosophical civilization would not necessarily be a kind of scientific civilization (the philosophy of such a civilization might or might not be the philosophy that we identify as science), but that our scientific civilization is a kind of philosophical civilization.
Philosophy is a much wider field of study, and it is from philosophy that we can expect to address the blind spots of science and the scientific civilization that has grown from science. So the limits both of science and scientific civilization can be addressed, but only from a more comprehensive perspective, and that more comprehensive perspective is not possible from within scientific civilization.
. . . . .
. . . . .
. . . . .
. . . . .
2 August 2015
For some philosophers, naturalism is simply an extension of physicalism, which was in turn an extension of materialism. Narrow conceptions of materialism had to be extended to account for physical phenomena not reducible to material objects (like theoretical terms in science), and we can similarly view naturalism as a broadening of physicalism in order to more adequately account for the world. (I have quoted definitions of materialism and physicalism in Materialism, Physicalism, and… What?.) But, coming from this perspective, naturalism is approached from a primarily reductivist or eliminativist point of view that places an emphasis upon economy rather than adequacy in the description of nature (on reductivism and eliminativism cf. my post Reduction, Emergence, Supervenience). Here the principle of parsimony is paramount.
One target of eliminativism and reductionism is a class of concepts sometimes called “folk” concepts. The identification of folk concepts in the exposition of philosophy of science can be traced to philosopher Daniel Dennett. Dennett introduced the term “folk psychology” in The Intentional Stance and thereafter employed the term throughout his books. Here is part of his original introduction of the idea:
“We learn to use folk psychology — as a vernacular social technology, a craft — but we don’t learn it self-consciously as a theory — we learn no meta-theory with the theory — and in this regard our knowledge of folk psychology is like our knowledge of the grammar of our native tongue. This fact does not make our knowledge of folk psychology entirely unlike human knowledge of explicit academic theories, however; one could probably be a good practising chemist and yet find it embarrassingly difficult to produce a satisfactory textbook definition of a metal or an ion.”
Daniel Dennett, The Intentional Stance, Chap. 3, “Three Kinds of Intentional Psychology”
Earlier (in the same chapter of the same book) Dennett had posited “folk physics”:
“In one sense people knew what magnets were — they were things that attracted iron — long before science told them what magnets were. A child learns what the word ‘magnet’ means not, typically, by learning an explicit definition, but by learning the ‘folk physics’ of magnets, in which the ordinary term ‘magnet’ is embedded or implicitly defined as a theoretical term.”
Daniel Dennett, The Intentional Stance, Chap. 3, “Three Kinds of Intentional Psychology”
Here is another characterization of folk psychology:
“Philosophers with a yen for conceptual reform are nowadays prone to describe our ordinary, common sense, Rylean description of the mind as ‘folk psychology,’ the implication being that when we ascribe intentions, beliefs, motives, and emotions to others we are offering explanations of those persons’ behaviour, explanations which belong to a sort of pre-scientific theory.”
Scott M. Christensen and Dale R. Turner, editors, Folk Psychology and the Philosophy of Mind, Chap. 10, “The Very Idea of a Folk Psychology” by Robert A. Sharpe, University of Wales, United Kingdom
There is now quite a considerable literature on folk psychology, and many positions in the philosophy of mind are defined by their relationship to folk psychology — eliminativism is largely the elimination of folk psychology; reductionism is largely the reduction of folk psychology to cognitive science or scientific psychology, and so on. Others have gone on to identify other folk concepts, as, for example, folk biology:
Folk biology is the cognitive study of how people classify and reason about the organic world. Humans everywhere classify animals and plants into species-like groups as obvious to a modern scientist as to a Maya Indian. Such groups are primary loci for thinking about biological causes and relations (Mayr 1969). Historically, they provided a transtheoretical base for scientific biology in that different theories — including evolutionary theory — have sought to account for the apparent constancy of “common species” and the organic processes centering on them. In addition, these preferred groups have “from the most remote period… been classed in groups under groups” (Darwin 1859: 431). This taxonomic array provides a natural framework for inference, and an inductive compendium of information, about organic categories and properties. It is not as conventional or arbitrary in structure and content, nor as variable across cultures, as the assembly of entities into cosmologies, materials, or social groups. From the vantage of EVOLUTIONARY PSYCHOLOGY, such natural systems are arguably routine “habits of mind,” in part a natural selection for grasping relevant and recurrent “habits of the world.”
Robert Andrew Wilson and Frank C. Keil, The MIT Encyclopedia of the Cognitive Sciences
We can easily see that the idea of folk concepts as pre-scientific concepts is applicable throughout all branches of knowledge. This has already been made explicit:
“…there is good evidence that we have or had folk physics, folk chemistry, folk biology, folk botany, and so on. What has happened to these folk endeavors? They seem to have given way to scientific accounts.”
William Andrew Rottschaefer, The Biology and Psychology of Moral Agency, 1998, p. 179.
The simplest reading of the above is that in a pre-scientific state we use pre-scientific concepts, and that, as the scientific revolution unfolds and begins to transform traditional bodies of knowledge, these pre-scientific folk concepts are replaced with scientific concepts and knowledge becomes scientific knowledge. Thereafter, folk concepts are abandoned (eliminated) or formalized so that they can be systematically located in a scientific body of knowledge. All of this is quite close to the theory of the three stages of knowledge advanced by the 19th century positivist Auguste Comte, according to which theological explanations gave way to metaphysical explanations, which in turn gave way to positive scientific explanations. This demonstrates the continuity of positivist thought — even of philosophical thought that does not recognize itself as positivist. In each case, an earlier non-scientific mode of thought is gradually replaced by a mature scientific mode of thought.
While this simple replacement model of scientific knowledge has certain advantages, it has a crucial weakness, and this is a weakness shared by all theories that, implicitly or explicitly, assume that the mind and its concepts are static and stagnant. Allow me to once again quote one of my favorite passages from Kurt Gödel, the importance of which I cannot stress enough:
“Turing… gives an argument which is supposed to show that mental procedures cannot go beyond mechanical procedures. However, this argument is inconclusive. What Turing disregards completely is the fact that mind, in its use, is not static, but is constantly developing, i.e., that we understand abstract terms more and more precisely as we go on using them, and that more and more abstract terms enter the sphere of our understanding. There may exist systematic methods of actualizing this development, which could form part of the procedure. Therefore, although at each stage the number and precision of the abstract terms at our disposal may be finite, both (and, therefore, also Turing’s number of distinguishable states of mind) may converge toward infinity in the course of the application of the procedure.”
“Some remarks on the undecidability results” (Italics in original) in Gödel, Kurt, Collected Works, Volume II, Publications 1938-1974, New York and Oxford: Oxford University Press, 1990, p. 306.
Not only does the mind refine its concepts and arrive at more abstract formulations; the mind also introduces wholly new concepts in order to attempt to understand new or hitherto unknown phenomena. In this context, what this means is that we are always introducing new “folk” concepts as our experience expands and diversifies, so that there is not a one-time transition from unscientific folk concepts to scientific concepts, but a continual and ongoing evolution of scientific thought in which folk concepts are introduced, their want of rigor is felt, and more refined and scientific concepts are eventually introduced to address the problem of the folk concepts. But this process can result in the formulation of entirely new sciences, and we must then in turn hazard new “folk” concepts in the attempt to get a handle on this new discipline, however inadequate our first attempts may be to understand some unfamiliar body of knowledge.
For example, before the work of Georg Cantor and Richard Dedekind there was no science of set theory. In formulating set theory, 19th century mathematicians had to introduce a great many novel concepts (set, element, mapping) and mathematical procedures (one-to-one correspondence, diagonalization). These early concepts of set theory are now called “naïve set theory,” and they have largely been replaced by (several distinct) axiomatizations of set theory, which have either formalized or eliminated the concepts of naïve set theory, which we might also call “folk” set theory. Nevertheless, many “folk” concepts of set theory persist, and Gödel spent much of his later career attempting to produce better formalizations of the concepts of set theory than those employed in the now accepted axiomatizations.
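Diagonalization, one of the novel procedures mentioned above, is simple enough to sketch. The fragment below is an illustrative Python rendering of Cantor’s diagonal construction applied to a finite sample of binary sequences; the names and the particular sequences are my own choices, not anything from the original literature.

```python
# Cantor's diagonal construction: given an enumeration of binary sequences
# (modeled as functions from index to 0 or 1), build a sequence that differs
# from the n-th sequence at position n, and so appears nowhere in the list.

def diagonal(sequences):
    """Return a finite prefix of the flipped diagonal sequence."""
    return [1 - sequences[n](n) for n in range(len(sequences))]

seqs = [
    lambda n: 0,      # 0, 0, 0, ...
    lambda n: 1,      # 1, 1, 1, ...
    lambda n: n % 2,  # 0, 1, 0, ...
]

d = diagonal(seqs)
print(d)  # [1, 0, 1] -- differs from the n-th sequence at index n
```

The same construction, run over an infinite enumeration, yields Cantor’s proof that the real numbers cannot be put into one-to-one correspondence with the natural numbers.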
As civilization has changed, and indeed as civilization emerged, we have had occasion to introduce new terms and concepts in order to describe and explain newly emergent forms of life. The domestication of plants and animals necessitated the introduction of concepts of plant and animal husbandry. The industrial revolution and the macroeconomic forces it loosed upon the world necessitated the introduction of terms and concepts of industry and economics. In each case, non-scientific folk concepts preceded the introduction of scientific concepts explained within a comprehensive theoretical framework. In many cases, our theoretical framework is not yet fully formulated and we are still in a stage of conceptual development that involves the overlapping of folk and scientific concepts.
Given the idea of folk concepts and their replacement by scientific concepts, a mature science could be defined as a science in which all folk concepts have been either formalized, transcended, or eliminated. The infinitistic nature of scientific mystery (which is discussed in Scientific Curiosity and Existential Need), however, suggests that there will always be sciences in an early and therefore immature stage of development. Our knowledge of the scientific method and the development of science means that we can anticipate scientific developments and understand when our intuitions are inadequate and therefore, in a sense, folk concepts. We have an advantage over the unscientific past, which knew nothing of the coming scientific revolution and how it would transform knowledge. But we cannot entirely eliminate folk concepts from the early stages of scientific development, and in so far as our scientific civilization results in continuous scientific development, we will always have sciences in the early stages of development.
Scientific progress, then, does not eliminate folk concepts, but generates new and ever more folk concepts even as it eliminates old and outdated folk concepts.
. . . . .
. . . . .
. . . . .
. . . . .
3 July 2015
Traditional units of measure
Quite some time ago in Linguistic Rationalization I discussed how the adoption of the metric system throughout much of the world meant the loss of traditional measuring systems that were intrinsic to the life of the people, part of the local technology of living, as it were. In that post I wrote:
“The gains that were derived from the standardization of weights and measures… did not come without a cost. Traditional weights and measures were central to the lives and the localities from which they emerged. These local systems of weights and measures were, until they were obliterated by the introduction of the metric system, a large part of local culture. With the metric system supplanting these traditional weights and measures, the traditional culture of which they were a part was dealt a decisive blow. This was not the kind of objection that men of the Enlightenment would have paused over, but with our experience of subsequent history it is the kind of thing that we think of today.”
Perhaps it is not the kind of thing many think of today; most people do not mourn the loss of traditional systems of measurement, but it should be recalled that these traditional systems of measurement were not arbitrary — they were based on the typical experience of individuals in a certain milieu, and they reflected the life and economy of a people, who measured the things that they needed to measure.
It is often noted that languages have an immediate relation to the life of a people — the most common example cited is that of the number of words for snow in the languages of the native peoples of the far north. Weights and measures — in a sense, the language of commerce — also reflect the life of a people in the same immediate way as their vocabulary. Language and measurement are linked: much of the earliest writing preserved from the Fertile Crescent consists of simple accounting of warehouse stores.
A particular example can illustrate what I have in mind. It is common to give the measurement of horses in hands. The hand as a unit of measurement has been standardized as four inches, but it is obvious that the unit derives from the human hand. Everyone has an admittedly vague idea of the average size of a human hand, and this gives an anthropocentric measurement of horses, which have been crucial to many if not most human economies. The unit of a hand is intuitive and practical, and it continues to be used by individuals who work with horses. It is, indeed, part of the “lore” of horsemanship. Many traditional units of measurement are like this: derived from the human body — as Protagoras said, man is the measure of all things — they are intuitive and part of the lore of a tradition. To replace these traditional units has a certain economic rationale, but there is a loss if that replacement is successful. More often (as in measuring horses today), both traditional and SI units are employed.
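Because the hand has been standardized at exactly four inches, the conversion to SI units is trivial, and can be sketched as follows (the example horse is illustrative; the traditional decimal notation, in which “15.2 hands” means 15 hands and 2 inches, is a convention of horsemanship rather than ordinary decimal arithmetic):

```python
# The hand is standardized at 4 inches; an inch is exactly 2.54 cm.
INCHES_PER_HAND = 4
CM_PER_INCH = 2.54

def hands_to_cm(hands):
    """Convert a whole number of hands to centimeters."""
    return hands * INCHES_PER_HAND * CM_PER_INCH

# Traditional notation: "15.2 hands" means 15 hands plus 2 inches.
whole_hands, extra_inches = 15, 2
height_cm = (whole_hands * INCHES_PER_HAND + extra_inches) * CM_PER_INCH
print(round(height_cm, 1))  # 157.5
```

The oddity of the notation (the digit after the point counts leftover inches, not tenths of a hand) is itself a reminder that this is a piece of traditional lore rather than a decimal system.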
Units of measure unique to a discipline
One response to the loss of traditional units is to define new units in terms of a system of weights and measures — today, usually the metric system — which reflect the particular concerns of a particular discipline. Having a unit of measurement peculiar to a discipline creates a jargon peculiar to that discipline, which is not necessarily a good thing. However, a unit of measurement unique to a discipline makes it possible to think in terms peculiar to the discipline. This “thinking one’s way into” some mode of thought is probably insufficiently appreciated, but it is quite common in the sciences. There are, for example, many different units that are used to measure energy. In principle, only one unit is necessary, and all units of measuring energy can be given a metric equivalent today, but it is not unusual for the energy of a furnace to be measured in BTUs while the energy of a particle accelerator is measured in electronvolts (eV).
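Since both units reduce to a common SI equivalent, the coexistence of BTUs and electronvolts is a matter of disciplinary idiom rather than necessity. A small sketch of the reduction (the furnace rating and collision energy are merely illustrative magnitudes):

```python
# Both disciplinary units reduce to the joule: 1 BTU (International Table)
# is 1055.05585262 J, and 1 eV is exactly 1.602176634e-19 J.
J_PER_BTU = 1055.05585262
J_PER_EV = 1.602176634e-19

def btu_to_joules(btu):
    return btu * J_PER_BTU

def ev_to_joules(ev):
    return ev * J_PER_EV

# Illustrative magnitudes only: a furnace rated at 80,000 BTU per hour,
# and a proton-proton collision at roughly 13.6 TeV.
furnace_j = btu_to_joules(80_000)
collision_j = ev_to_joules(13.6e12)
print(f"furnace: {furnace_j:.3e} J, collision: {collision_j:.3e} J")
```

The two quantities differ by some thirteen orders of magnitude, which is precisely why each discipline keeps a unit scaled to its own phenomena.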
For a science of civilization there must be quantifiable measurements, and quantifiable measurements imply a unit of measure. It is a relatively simple matter to employ (or, if you like, to exapt) existing units of measurement for an unanticipated field of research, but it is also possible to formulate new units of measurement specific to a scientific research program — units that are explicitly conceived and applied with the peculiar object of study of the science in view. It is arguable that the introduction of a unit of measurement specific to civilization would contribute to the formulation of a conceptual framework that allows one to think in terms of civilization in a way not possible, for example, in the borrowed terminology of historiography or some other discipline.
Thinking our way into civilization
With this in mind, I would like to suggest the possibility of a unit of time specific to civilization. We already have terms for ten years (a decade), a hundred years (a century), and a thousand years (a millennium), so that it would make sense to employ a metric of years for the quantification of civilization. The basic unit of time in the metric system is the second, and we can of course define the year in terms of the number of seconds in a year. The measurement of time in terms of a year derives from natural cosmological cycles, like the measurement of time in terms of days. With the increase in the precision of atomic clocks, it became necessary to abandon the calibration of the second in terms of celestial events, and this calibration is now done in terms of nuclear physics. Nevertheless, the year, like the day, remains an anthropocentric unit of time that we all understand and that we are likely to continue to use.
Suppose we posit a period of a thousand years as the basic temporal unit for the measurement of civilization, and we call this unit the chronom. In other words, suppose we think of civilization in increments of 1,000 years. In the spirit of a decimal system we can define a series of units derived from the chronom by powers of ten. The chronom is 1,000 years or 10³ years; 1 decichronom is 10⁻¹ chronom or 100 years (a century), 1 centichronom is 10⁻² chronom or 10 years (a decade), and 1 millichronom is 10⁻³ chronom or a single year. In the other direction, in increasing size, 1 decachronom is 10 chronom or 10,000 years (10⁴ years), 1 hectochronom is 100 chronom or 100,000 years (10⁵ years), 1 kilochronom is 1,000 chronom or 1,000,000 years (10⁶ years or 1.0 Ma, mega-annum), and thus we have arrived at the familiar motif of a million year old supercivilization. Continuing upward we eventually come to the megachronom, which is 1,000,000 chronom or 10⁹ years or 1.0 Ga, i.e., giga-annum, at which point we reach the billion year old supercivilizations discussed by Ray Norris (cf. How old is ET?).
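Taking the prefixes in their standard SI sense (so that a centichronom, 10⁻² chronom, comes to ten years), the scheme can be sketched as a small conversion table. The chronom is, of course, a hypothetical unit proposed here, not an established one:

```python
# The chronom is the hypothetical unit proposed in the text:
# 1 chronom = 1,000 years. With the prefixes in their standard SI sense,
# derived units follow by powers of ten.
YEARS_PER_CHRONOM = 1_000

PREFIXES = {
    "milli": 1e-3,  # 1 year
    "centi": 1e-2,  # 10 years (a decade)
    "deci":  1e-1,  # 100 years (a century)
    "":      1e0,   # 1,000 years
    "deca":  1e1,   # 10,000 years
    "hecto": 1e2,   # 100,000 years
    "kilo":  1e3,   # 1,000,000 years (1.0 Ma)
    "mega":  1e6,   # 1,000,000,000 years (1.0 Ga)
}

def to_years(value, prefix=""):
    """Convert a quantity in (prefix)chronom to years."""
    return value * PREFIXES[prefix] * YEARS_PER_CHRONOM

print(to_years(1, "kilo"))  # 1000000.0 -- the million year old supercivilization
print(to_years(1, "mega"))  # 1000000000.0 -- the billion year old supercivilization
```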
From such a starting point — and I am not suggesting that what I have written above should be the starting point; I have only given an illustration to suggest to the reader what might be possible — it would be possible to extrapolate further coherent units of measure. We would want to do so in terms of non-anthropocentric units, and, moreover, non-geocentric units. While the metric system is a great improvement (in terms of the standardization of scientific practice) over traditional units of measure, it is still a geocentric unit of measure (albeit appealing to geocentrism in an extended sense).
Traditional units of measurement were parochial; the metric system was based on the Earth itself, and so not unique to any nation-state, but still local in a cosmological sense. If we were to extrapolate a metric for civilization according to constants of nature (like the speed of light, or some property of matter such as is now exploited by atomic clocks), we would begin to formulate a non-anthropocentric set of units for civilization. A temporal metric for the quantitative study of civilization suggests the possibility of also having a spatial metric for the quantitative study of civilization. For example, a unit of space could be defined as the distance light travels in 1 chronom. A sphere with a radius of one light year would entirely contain a civilization confined to the region of its star. That could be a useful metric for spacefaring civilizations.
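The light-travel distance for 1 chronom is easily computed from the exact SI speed of light and the Julian year. The computation below is a sketch of this spatial metric, not a settled definition (the name “light-chronom” is my own coinage for the illustration):

```python
# Light-travel distance for 1 chronom (1,000 Julian years).
C = 299_792_458                      # speed of light in m/s (exact SI value)
SECONDS_PER_YEAR = 365.25 * 86_400   # Julian year: 31,557,600 s
YEARS_PER_CHRONOM = 1_000

light_year_m = C * SECONDS_PER_YEAR              # ~9.4607e15 m
light_chronom_m = light_year_m * YEARS_PER_CHRONOM

# A "light-chronom" is 1,000 light years: a candidate radius for the
# spatial footprint of a long-lived spacefaring civilization.
print(f"{light_chronom_m:.4e} m")
```

Note how such a unit couples the temporal and spatial metrics for civilization through a constant of nature, which is exactly the non-anthropocentric move suggested above.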
What would be the benefit of such a system to quantify civilization? As I noted above, a system of measurement unique to a discipline allows us to think in terms of the discipline. Units of measurement for the quantification of civilization would allow us to think our way into civilization, and so possibly to avoid some of the traditional prejudices of historiographical thinking, which have dominated thinking about civilization so far. Moreover, a non-anthropocentric system of civilization metrics would allow us to think about civilization in non-anthropocentric terms, which would better enable us to recognize other civilizations when we have the opportunity to seek them out.
What I am suggesting here is a process of defamiliarization by way of scientific metrics to take the measure of something so familiar — human civilization — that it is difficult for us to think of it in objective terms. Previously in Kierkegaard and Russell on Rigor I discussed how a defamiliarizing process can be a constituent of rigorous thought. In so far as we aspire to the study of civilization as a rigorous science, the defamiliarization of a scientific set of metrics for quantifying civilization can be a part of that effort.
. . . . .
. . . . .
. . . . .
. . . . .
8 June 2015
In several posts I have discussed the need for a science of civilization (cf., e.g., The Future Science of Civilizations), and this is a theme I intend to continue pursuing in future posts. It is no small matter to constitute a new science where none has existed, and to constitute a new science for an object of knowledge as complex as civilization is a daunting task.
The problem of constituting a science of civilization, de novo for all intents and purposes, may be seen in the light of Husserl’s attempt to constitute (or re-constitute) philosophy as a rigorous science, which was a touchstone of Husserl’s work. Here is a passage from Husserl’s programmatic essay, “Philosophy as Strict Science” (variously translated) in which Husserl distinguishes between profundity and intelligibility:
“Profundity is the symptom of a chaos which true science must strive to resolve into a cosmos, i.e., into a simple, unequivocal, pellucid order. True science, insofar as it has become definable doctrine, knows no profundity. Every science, or part of a science, which has attained finality, is a coherent system of reasoning operations each of which is immediately intelligible; thus, not profound at all. Profundity is the concern of wisdom; that of methodical theory is conceptual clarity and distinctness. To reshape and transform the dark gropings of profundity into unequivocal, rational propositions: that is the essential act in methodically constituting a new science.”
Edmund Husserl, “Philosophy as Rigorous Science” in Phenomenology and the Crisis of Philosophy, edited by Quentin Lauer, New York: Harper, 1965 (originally “Philosophie als strenge Wissenschaft,” Logos, vol. I, 1911)
Recently re-reading this passage from Husserl’s essay I realized that much of what I have attempted in the way of “methodically constituting a new science” of civilization has taken the form of attempting to follow Husserl’s pursuit of “unequivocal, rational propositions” that eschew “the dark gropings of profundity.” I think much of the study of civilization, immersed as it is in history and historiography, has been subject more often to profound meditations (in the sense that Husserl gives to “profound”) than conceptual clarity and distinctness.
The Cartesian demand for clarity and distinctness is especially interesting in the context of constituting a science of civilization given Descartes’ famous disavowal of history (on which cf. the quote from Descartes in Big History and Scientific Historiography); if an historical inquiry is the basis of the study of civilization, and history consists of little more than fables, then a science of civilization becomes rather dubious. The emergence of scientific historiography, however, is relevant in this context.
The structure of Husserl’s essay is strikingly similar to the first lecture in Russell’s Our Knowledge of the External World. Both Russell and Husserl take up major philosophical movements of their time (and although the two were contemporaries, each took different examples — Husserl, naturalism, historicism, and Weltanschauung philosophy; Russell, idealism, which he calls “the classical tradition,” and evolutionism), primarily, it seems, to show how philosophy had gotten off on the wrong track. The two works can profitably be read side-by-side, as Russell is close to being an exemplar of the naturalism Husserl criticized, while Husserl is close to being an exemplar of the idealism that Russell criticized.
Despite the fundamental difference between Husserl and Russell, each had an idea of rigor that he attempted to realize in his philosophical work, and each thought of that rigor as bringing the scientific spirit into philosophy. (In Kierkegaard and Russell on Rigor I discussed Russell’s conception of rigor and its surprising similarity to Kierkegaard’s thought.) Interestingly, however, the two did not criticize each other directly, though they were contemporaries and each knew of the other’s work.
The new science Russell was involved in constituting was mathematical logic, which Roman Ingarden explicitly tells us that Husserl found inadequate for the task of a scientific philosophy:
“It is maybe unexpected and surprising that Husserl who was trained as a mathematician did not seek salvation for philosophy in the mathematical method which had from time to time stood out like a beacon as an ideal worthy of imitation by philosophers. But mathematical logic could not satisfy him… above all he fought for responsibility in philosophical research and devoted many years to the elaboration of a method which, according to him, was to secure for philosophy the status of a science.”
Roman Ingarden, On the Motives which Led Husserl to Transcendental Idealism, Translated from the Polish by Arnor Hannibalsson, Den Haag: Martinus Nijhoff, 1975, p. 9.
Ingarden’s discussion of Husserl is instructive, in so far as he notes the influence of mathematical method upon Husserl’s thought, but also that Husserl did not try to employ a mathematical method directly in philosophy. Rather, Husserl invested his philosophical career in the formulation of a new methodology that would allow the values of rigorous scientific practice to be expressed in philosophy and through a philosophical method — a method that might be said to be parallel to or mirroring the mathematical method, or derived from the same thematic motives as those that inform mathematical methodology.
The same question is posed in considering the possibility of a rigorously scientific method in the study of civilization. If civilization is sui generis, is a sui generis methodology necessary to the formulation of a rigorous theory of civilization? Even if that methodology is not what we today know as the methodology of science, or even if that methodology does not precisely mirror the rigorous method of mathematics, there may be a way to reason rigorously about civilization, though it has yet to be given an explicit form.
The need to think rigorously about civilization I took up implicitly in Thinking about Civilization, Suboptimal Civilizations, and Addendum on Suboptimal Civilizations. (I considered the possibility of thinking rigorously about the human condition in The Human Condition Made Rigorous.) Ultimately I would like to make my implicit methodology explicit and so to provide a theoretical framework for the study of civilization.
Since theories of civilization have been, for the most part, either implicit or vague or both, there has been little theoretical framework to give shape or direction to the historical studies that have been central to the study of civilization to date. Thus the study of civilization has been a discipline adrift, without a proper research program, and without an explicit methodology.
There are at least two sides to the rigorous study of civilization: theoretical and empirical. The empirical study of civilization is familiar to us all in the form of history, but history studied as history and history studied for what it can contribute to the theory of civilization are two different things. One of the initial fundamental problems of the study of civilization is to disentangle civilization from history, which involves a formal rather than a material distinction, because both the study of civilization and the study of history draw from the same material resources.
How do we begin to formulate a science of civilization? It is often said that, while science begins with definitions, philosophy culminates in definitions. There is some truth to this, but when one is attempting to create a new discipline one must be both philosopher and scientist simultaneously, practicing a philosophical science or a scientific philosophy that approaches a definition even as it assumes a definition (admittedly vague) in order for the inquiry to begin. Husserl, clearly, and Russell also, could be counted among those striving for a scientific philosophy, while Einstein and Gödel could be counted as among those practicing a philosophical science. All were engaged in the task of formulating new and unprecedented disciplines.
This division of labor between philosophy and science points to what Kant would have called the architectonic of knowledge. Husserl conceived this architectonic categorically, while we would now formulate the architectonic in hypothetico-deductive terms, and it is Husserl’s categorical conception of knowledge that ties him to the past and at times gives his thought an antiquated cast, but this is merely an historical contingency. Many of Husserl’s formulations are dated and openly appeal to a conception of science that no longer accords with what we would likely today think of as science, but in some respects Husserl grasps the perennial nature of science and what distinguishes the scientific mode of thought from non-scientific modes of thought.
Husserl’s conception of science is rooted in the conception of science already emergent in the ancient world in the work of Aristotle, Euclid, and Ptolemy, and which I described in Addendum on the Agrarian-Ecclesiastical Thesis. Russell’s conception of science is that of industrial-technological civilization, jointly emergent from the scientific revolution, the political revolutions of the eighteenth century, and the industrial revolution. With the overthrow of scholasticism as the basis of university curricula (which took hundreds of years following the scientific revolution before the process was complete), a new paradigm of science was to emerge and take shape. It was in this context that Husserl and Russell, Einstein and Gödel, pursued their research, employing a mixture of established traditional ideas and radically new ideas.
In a thorough re-reading of Husserl we could treat his conception of science as an exercise to be updated as we went along, substituting an hypothetico-deductive formulation for each and every one of Husserl’s categorical formulations, ultimately converging upon a scientific conception of knowledge more in accord with contemporary conceptions of scientific knowledge. At the end of this exercise, Husserl’s observation about the difference between science and profundity would still be intact, and would still be a valuable guide to the transformation of a profound chaos into a pellucid cosmos.
This ideal, and even more so the realization of this ideal, ultimately may not prove to be possible. Husserl himself in his later writings famously said, “Philosophy as science, as serious, rigorous, indeed apodictically rigorous, science — the dream is over.” (It is interesting to compare this metaphor of a dream to Kant’s claim that he was awoken from his dogmatic slumbers by Hume.) The impulse to science returns, eventually, even if the idea of an apodictically rigorous science has come to seem a mere dream. And once the impulse to science returns, the impulse to make that science rigorous will reassert itself in time. Our rational nature asserts itself in and through this impulse, which is complementary to, rather than contradictory of, our animal nature. To pursue a rigorous science of civilization is ultimately as human as the satisfaction of any other impulse characteristic of our species.
. . . . .
. . . . .
. . . . .
. . . . .
4 April 2015
Curiosity does not have an especially good reputation, and one often finds the word coupled with “mere” so that “mere curiosity” can be elegantly dismissed as though beneath the dignity of the speaker, who can then go about his much more grand and august pursuits without the distraction of the petty, grubbing motivation of mere curiosity. There may be some connection between this disdainful attitude toward curiosity and the prevalent anti-intellectualism of western civilization, notwithstanding the fact that most of what is unique in this tradition is derived from the scientific spirit; it is no surprise that any driving force in human affairs eventually provokes an equal and opposite reaction.
Many civilizations that publicly value intellectuals do not value the contributions of intellectuals, so that this social prestige is indistinguishable from a kind of feudal regard for special classes of persons. This is not what happened in western civilization, in which scientific knowledge bestowed real wealth and power — in our own day no less than in the past — and so provoked a reaction. One of the most famous stories from classical antiquity was how Thales, predicting an especially good olive harvest, hired all the olive presses at a low rate out of season, and then let them out at inflated rates during the peak season, proving that philosophers could earn money if they wanted to do so.
There are a great many interesting quotes that invoke curiosity, for better or worse — Thomas Hobbes: “…this hope and expectation of future knowledge from anything that happeneth new and strange, is that passion which we commonly call ADMIRATION; and the same considered as appetite, is called CURIOSITY, which is appetite of knowledge.” Edmund Burke: “The first and simplest emotion which we discover in the human mind, is curiosity.” Albert Einstein: “I have no special talent. I am only passionately curious.” — which highlight both the admirable and the disreputable side of curiosity. That curiosity has both admirable and disreputable aspects suggests that one might be admirably curious or disreputably curious, and certainly all of us know individuals who are curious in the best sense of the term and others who are curious in the worst sense of the term.
Human beings are adventurers of the spirit. We must count among the attributes of human nature some basal drive toward questioning. This drive could be given an exposition in purely intellectual terms or in purely emotional terms; I think that the intellectual and emotional manifestations of human curiosity are two sides of the same coin, and that is why I suggest positing some basal drive that lies at the root of both. And it isn’t quite right to reduce this drive to curiosity, as we can formulate it in terms of curiosity or in terms of need.
Curiosity is often contrasted to a presumably more esteemed mode of interrogating the cosmos, that we may call existential need. Jacob Needleman often addressed the contrast between “mere” curiosity (which he sometimes called “low curiosity”) and present need. Here is an example:
“It has been said that any question can lead to truth if it is an aching question. For one person it may be the question of life after death, for another the problem of suffering, the causes of war and injustice. Or it may be something more personal and immediate — a profound ethical dilemma, a problem involving the whole direction of one’s life. An aching question, a question that is not just a curiosity or a fleeting burst of emotion, cannot be answered with old thought. Possessed by such a question, one is hungry for ideas of a very different order than the familiar categories that usually accompany us throughout our lives. One is both hungry and, at the same time, more discriminating, less susceptible to credulity and suggestibility. The intelligence of the heart begins to call to us in our sleep.”
Jacob Needleman, The American Soul: Rediscovering the Wisdom of the Founders, pp. 3-4
I disagree with this on so many levels that it is difficult to know where to start, so instead I will simply say that the kind of existential need Needleman wants to describe is highly credulous and suggestible, and what answers to this need almost always takes the form of an old and painfully familiar cognitive bias. However, to try to do justice to Needleman, I will allow that, for an individual immersed in the ordinary business of life who, through some traumatic experience, suddenly comes face to face with profound and difficult questions never before posed in that individual’s experience, then, yes, ideas of a very different order are needed to address such questions.
While I do not think that aching questions are likely to lead to truth — I think it much more likely that they will lead to self-deception — I do not deny that many are gnawed by aching questions, and some few spend their lives trying to answer them. The question, then, is the best method by which an aching question might be given a clear, coherent, and satisfying (in so far as that is possible) answer. Here I am reminded of a passage from Walter Kaufmann:
“Nowhere is the disproportion between effort and result more aggravating than in the pursuit of truth: you may plow through documents or make untold experiments or think and think and think, forgo food, comfort, and distractions, lie awake nights and eat out your heart — and in the end you know what can be memorized by any idiot.”
Walter Kaufmann, Critique of Religion and Philosophy, section 24
However aching our question, presumably we would want to spare ourselves the wasted effort of an inquiry that deprives us of the satisfactions of life while giving an answer that could be memorized by any idiot. Kaufmann did not go far enough here: sometimes individuals who make just such an heroic effort to get at the truth and only arrive at an idiot’s portion convince themselves that the idiot’s portion is in fact a great and profound truth.
Whether or not existential need can be satisfied, how are we to understand it? Viktor Frankl, a psychiatrist and one of the founders of existential analysis, identified a condition that he called the existential vacuum, which he defined as, “the frustration of the will to meaning.” Frankl knew that of which he spoke, having lost most of his family to Nazi death camps and himself having been interned at Auschwitz and liberated only at the end of the war. Here, in a longer passage, is his exposition of existential need:
“Ever more patients complain of what they call an ‘inner void,’ and that is the reason why I have termed this condition the ‘existential vacuum.’ In contradistinction to the peak-experience so aptly described by Maslow, one could conceive of the existential vacuum in terms of an ‘abyss-experience’.”
Viktor Frankl, The Will to Meaning: Foundations and Applications of Logotherapy, New York: Plume, 2014 (originally published in the US in 1969), Part Two, “The Existential Vacuum: A Challenge to Psychiatry”
One could readily suppose that existential need is occasioned by the existential vacuum; that the former is the condition and cause of the latter. Another and more recent approach to existential need is to be found in the work of James Giles:
“…existential needs are not the product of social construction. For in contrast to socially constructed phenomena, existential needs are an inherent and universal feature of the human condition.”
James Giles, The Nature of Sexual Desire, p. 181
This is not necessarily distinct from existential need occasioned by Frankl’s existential vacuum; one could formulate the existential vacuum so that it is either “an inherent and universal feature of the human condition” or not. And there may well be more than one form of existential need. In fact, I think it is clear that there is a plurality of existential needs, and some of these can be sublimated through scientific inquiry and can be satisfied, while some play out in the fruitless manner described in the passage above from Kaufmann.
How one approaches the mystery that is the world, by way of scientific curiosity or by way of existential need (we might call these the scientific approach and the existential approach), reflects a valid human response to the individual’s relationship to the cosmos. Most of us, at some point in life, poignantly feel the mysteriousness of the world and the desire to give an account of our existence in relation to this mystery. Consider this from John Stuart Mill:
“Human existence is girt round with mystery: the narrow region of our experience is a small island in the midst of a boundless sea, which at once awes our feelings and stimulates our imagination by its vastness and its obscurity. To add to the mystery, the domain of our earthly existence is not only an island in infinite space, but also in infinite time. The past and the future are alike shrouded from us: we neither know the origin of anything which is, nor its final destination. If we feel deeply interested in knowing that there are myriads of worlds at an immeasurable, and to our faculties inconceivable, distance from us in space; if we are eager to discover what little we can about these worlds, and when we cannot know what they are, can never satiate ourselves with speculating on what they may be…”
Now, John Stuart Mill was an almost preternaturally rational man; he was not given to flights of fancy, though the high-flown rhetoric of this passage might suggest otherwise. The scientific approach to mystery is a rationalistic response to the riddle of the world; answers are to be had, but the world is boundless, so that any one answered question still leaves countless other unanswered questions. The growth of knowledge is attended by a parallel growth in the unknown, as our increasing knowledge makes it possible for us to formulate previously unsuspected questions. One might find this to be invigorating or disappointing: there are real answers, but we will never have a final understanding of the world. The existential approach to mystery acknowledges that the human mind may not be capable of comprehending the mystery that is the world, but this is coupled with a fervent belief that there is a final and transcendent answer out there somewhere, even if it always remains tantalizingly out of reach. These are subtle but important differences in the conception of “ultimate” truth as it relates human beings to their world.
A distinction might be made between scientific mystery and absolute mystery, with scientific mystery being a mystery that admits of an answer, but which also admits of a further mystery. An absolute mystery admits of no answer, nor of any further mystery. The world might take on the character of scientific mystery or of absolute mystery depending on whether we approach the world from the perspective of scientific curiosity or existential need. In other words, the kind of mystery that the world is — even if we all agree that the world is girt round in mystery, as Mill says — corresponds to our attitude to the world.
One could argue that scientific curiosity is a sublimation of existential need. If this is true, there is no reason to be ashamed of this, or to attempt a return to the original existential need. The passage from existential need to scientific curiosity may be a stage in the development of intellectual maturity, as irreversible as the passage from childhood to adulthood.
One might go a step further and call scientific curiosity the secularization of existential need (or, rather, the secularization of religious mystery, which then invites a treatment in terms of the Max Scheler/Paul Tillich claim that all human beings are engaged in worship, it is only a question of whether the object of this worship is worthy or idolatrous), recalling Karl Löwith’s theory of secularization, which made much of modernity into a bastardized form of Christian eschatology. This presupposes not only that existential need precedes scientific curiosity, but that it is the only authentic form of human questioning, and that any attempt to introduce new forms of questioning the human condition is illegitimate.
We are today faced with questions that our ancestors, who first felt the disconcerting stirrings of existential need, could not have imagined. I touched on one of these questions in my post on Centauri Dreams, Cosmic Loneliness and Interstellar Travel, which drew more responses than any of my other posts to that forum. Our cosmic loneliness can now be expressed in scientific terms, and we can offer a scientific response to our attempts so far to answer the question, “Are we alone?” This is one of the great scientific questions of our time, and at the same time it speaks to a modern existential need that has been expressed in Clarke’s tertium non datur.
The growth of human knowledge and the civilization created by human knowledge may have its origins in the questioning that naturally emerges from an experience of existential need. Perhaps this feeling never fully dissipates, but in so far as the dissatisfaction and discontent of existential need can be redirected into scientific curiosity, human beings can experience at least a limited satisfaction derived from definite scientific answers to questions formulated with increasing clarity and rigor. Beyond this, we may have to wait for the next stage in human evolution, when we may acquire mental faculties that take us beyond both existential need and scientific curiosity into a frame of mind incomprehensible to us in our present iteration.
. . . . .
. . . . .
. . . . .
. . . . .
3 December 2014
P. F. Strawson called his twentieth century exposition of Kant The Bounds of Sense. I have commented elsewhere on what an appropriate title this is. The Kantian project (much like metamathematics in the twentieth century) was a limitative project. Kant himself wrote (in the Preface to the 2nd edition of the Critique of Pure Reason): “…my intention then was, to limit knowledge, in order to make room for faith.” Here is the entire passage from which the quote is taken, though in a different translation:
“This discussion as to the positive advantage of critical principles of pure reason can be similarly developed in regard to the concept of God and of the simple nature of our soul; but for the sake of brevity such further discussion may be omitted. [From what has already been said, it is evident that] even the assumption — as made on behalf of the necessary practical employment of my reason — of God, freedom, and immortality is not permissible unless at the same time speculative reason be deprived of its pretensions to transcendent insight. For in order to arrive at such insight it must make use of principles which, in fact, extend only to objects of possible experience, and which, if also applied to what cannot be an object of experience, always really change this into an appearance, thus rendering all practical extension of pure reason impossible. I have therefore found it necessary to deny knowledge, in order to make room for faith.”
Immanuel Kant, Critique of Pure Reason, Preface to the Second Edition
What lies beyond the bounds of sense? For Kant, faith. And Kant’s theological agenda drove him to seek the bounds of sense so that speculative reason could be deprived of its pretensions to transcendent insight. Thus Kant gives us an epistemology openly freighted with theological and moral concerns. Talk about the theory-ladenness of perception! It is, however, non-perception — i.e., that which cannot be the object of possible experience — that is the Kantian domain of faith.
Of course, this is the whole Kantian project in a nutshell, is it not? It is Kant’s design to show us exactly how perception is laden with theory, the theory native to the mind, the a priori concepts by which we organize experience. Kant propounds the transcendental aesthetic and the transcendental deduction of the categories in order to demonstrate the reliance of even the most ordinary experience upon the mind’s a priori faculties.
Kant was, in part, reacting against the empiricism of Locke and Hume — especially Hume’s skeptical conclusions, although Kant’s own rejection of metaphysics equaled if not surpassed Hume’s anti-metaphysical stance, as famously described in the following passage from Hume:
“When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.”
David Hume, An Enquiry Concerning Human Understanding, “Of the academical or sceptical Philosophy,” Part III
For Hume, the bounds of sense and the limitation of reason entailed doubt; for Kant the bounds of sense and the limitation of reason entailed belief. There is a lesson in here somewhere, and the lesson is this: from a single state of affairs, multiple interpretations can be shown to follow.
Are the bounds of sense also the bounds of science? It would seem so. In so far as science must appeal to empirical evidence, and empirical evidence comes to us by way of the senses, the limits of the senses impose limits on science. Of course, this is a bit too simplistic to be quite true. There are so many qualifications that need to be made to such an assertion that it is difficult to say where to start.
It should be familiar to everyone that we have come to rely extensively on instruments to augment our senses. Big Science today sometimes spends years, if not decades, building its enormous machines, without which contemporary science would not be possible. So the limits of the senses are not absolute, and they are subject to manipulation. Also, we sometimes do science without our senses or instruments, when we pursue science by way of thought experiments.
While thought experiments alone, unsupplemented by actual experiments, are probably insufficient to constitute a science, thought experiments have become a necessary requisite to science much as instrumentation has become a necessary requisite to science. Sometimes, when our technology catches up with our ideas, we can transform our thought experiments into actual experiments, so that there is an historical relationship between science properly understood and the penumbra of science represented by thought experiments. And thought experiments too have their controlled conditions, and these are the conditions that Kant attempted to lay down in the transcendental aesthetic.
There is also the question of whether or not mathematics is a science, or one among the sciences. And whether or not we set aside mathematics as something different from the other sciences, we know that unquestionably empirical sciences like physics have become deeply mathematicized, so that the mathematical content of empirical theories may act like an abstract instrument, parallel to the material instruments of big science, that extends the possibilities of the senses. Another way to think about mathematics is as an enormous thought experiment that under-girds the rest of science — the one crucial thought experiment, an experimentum crucis, without which the rest of science cannot function. In this sense, thought experiments are indispensable to mathematicized science — as indispensable as mathematics.
At a more radical level of critique, it would be difficult to give a fine-grained account of empirical evidence that did not shade over, at the far edges of the concept, into other kinds of knowledge not strictly empirical. Empirical evidence may shade over into the kind of intuitive evidence that is the basis of mathematics, or the kind of epistemological context that is the setting for our thought experiments. Empirical evidence can also shade over into interoception that cannot be publicly verified (therefore failing a basic test of science) or precisely reproduced by repetition, and which interoception itself in turn shades over into intuitions in which thought and feeling are not clearly distinct.
Where does Kant’s possible experience fit within the continuum of the senses? What is the scope of possible experience? Can we make a clear distinction between extending the senses (and thus human experience) by abstract or concrete instruments and imposing a theory upon experience through these extensions? Does possible experience include all possible past experience? Does past experience include phenomena that occurred but which were not observed (the famous tree falling in a forest that no one hears)? Does it include all possible future experience, or only those future experiences that will eventually be actualized, and not those that remain merely shadowy possibilities? Does possible experience include those counterfactuals that feature in the “many worlds” interpretation of quantum theory? Explicit answers to these questions are less important than the lines of inquiry that the questions prompt us to pursue.
. . . . .
. . . . .
. . . . .
. . . . .
27 November 2014
An article on NPR about a new atomic clock being developed by NIST scientists, New Clock May End Time As We Know It, was of great interest to me. Immediately intrigued, I wrote a post on my other blog in which I suggested that the new clock might be used to update the “Einstein’s box” thought experiment (also known as the clock-in-a-box thought experiment). While I would like to follow up on this idea at some point, today I want to write about advanced chronometry in the context of the STEM cycle.
Atomic clocks are among the most precise scientific instruments ever developed. As such, precision clocks offer a good illustration of the STEM cycle, which I identified as the definitive feature of industrial-technological civilization. While this illustration is contemporary, there is nothing new about the use of the most advanced science, technology, and engineering available being employed in chronometry.
The earliest sciences, already developed in classical antiquity, were mathematics and astronomy. These early scientific disciplines were applied to the construction of timekeeping mechanisms. Among the most interesting technological artifacts of the ancient world are the clock once installed in the Tower of the Winds in Athens (which was described in antiquity, but which no longer exists) and the Antikythera mechanism, the corroded remains of which were dredged up from a shipwreck off the Greek island of Antikythera (while discovered by sponge divers in 1900, the site is still yielding finds). A classic paper on the Tower of the Winds compares these two technologies: “This is a field in which ancient literature is curiously meager, as we well know from the complete lack of any literary reference to a technology that could produce the Antikythera Mechanism of the same date.” (“The Water Clock in the Tower of the Winds,” Joseph V. Noble and Derek J. de Solla Price, American Journal of Archaeology, Vol. 72, No. 4, Oct. 1968, pp. 345-355) Both of these artifacts are concerned with chronometry, which demonstrates that the most advanced technologies, then and now, have been employed in the measurement of time.
The advent of high technology as we know it today — unprecedented in human history — has been the result of the advent of a new kind of civilization — industrial-technological civilization — and the use of advanced technologies in chronometry provides a useful lens through which to view one of the unique features of our civilization today, which I call the STEM cycle. The acronym STEM is familiar from educational contexts, where it refers to education and training in science, technology, engineering, and mathematics; I have taken over this acronym as the name for one of the socioeconomic processes that lies at the heart of our civilization. Science seeks to understand nature on its own terms, for its own sake. Technology is that portion of scientific research that can be developed specifically for the realization of practical ends. Engineering is the industrial implementation of a technology. Mathematics is the common language in which the elements of the cycle are formulated. A feedback loop of science driving technology, driving engineering, driving more science, characterizes industrial-technological civilization. This is the STEM cycle.
The distinctions between science, technology, and engineering are not absolute — far from it. To employ a terminology I developed elsewhere, I would say that science is only weakly distinct from technology, technology is only weakly distinct from engineering, and engineering is only weakly distinct from science. In some contexts any two elements of the STEM cycle are identical, while in other contexts they are starkly contrasted. This is not due to inconsistency, but rather to the fact that science, technology, and engineering are open-textured concepts; we could adopt conventional distinctions that would make them strongly distinct, but this would be contrary to usage in ordinary language and would only result in confusion. Given the lack of clear distinctions among science, technology, and engineering, where we draw the dividing lines within the STEM cycle is to some degree arbitrary — we could describe this cycle in different terms, employing different distinctions — but the cycle itself is not arbitrary. By any other name, it drives industrial-technological civilization.
The clock that was the inspiration for this post — the new strontium atomic clock, described in JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability, and the subject of a scientific paper, An optical lattice clock with accuracy and stability at the 10^-18 level, by B. J. Bloom, T. L. Nicholson, J. R. Williams, S. L. Campbell, M. Bishof, X. Zhang, W. Zhang, S. L. Bromley, and J. Ye (a preprint of the article is available at Arxiv) — is instructive in several respects. In so far as we consider atomic clocks to be a generic “technology,” the strontium clock represents the latest and most advanced instance yet constructed of a more specific form of this technology, the optical lattice clock, within the generic division of atomic clocks. The sciences involved in the conceptualization of atomic clocks are fundamental: atomic physics, quantum theory, relativity theory, thermodynamics, and optics. Atomic clocks are a technology built from other technologies, including advanced materials, lasers, masers, a vacuum chamber, refrigeration, and computers. Building the technology into an optimal device involves engineering for dependability, economy, miniaturization, portability, and refinements of design.
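To make the 10^-18 figure concrete, a back-of-envelope calculation (my own illustration, using rounded constants and the simplifying assumption of steady linear drift) shows why a clock at this level is often described as neither gaining nor losing a second over the entire age of the universe:

```python
# Back-of-envelope check (illustrative, not from the cited paper):
# a fractional frequency uncertainty of 1e-18 implies an accumulated
# timing error of well under one second over the age of the universe.

FRACTIONAL_UNCERTAINTY = 1e-18
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds
AGE_OF_UNIVERSE_YEARS = 13.8e9               # ~13.8 billion years

elapsed_seconds = AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR
accumulated_error = elapsed_seconds * FRACTIONAL_UNCERTAINTY

print(f"elapsed time: {elapsed_seconds:.3e} s")
print(f"worst-case drift: {accumulated_error:.2f} s")  # well under one second
```

The arithmetic is trivial, but it makes vivid how extreme the meliorative function has become: the instrument's error budget is now measured against cosmological timescales.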
The NIST web page notes that, “NIST invests in a number of atomic clock technologies because the results of scientific research are unpredictable, and because different clocks are suited for different applications.” (For further background on atomic clocks at NIST cf. A New Era for Atomic Clocks.) The new record-breaking clocks in terms of stability and accuracy are experimental devices; the current standard for timekeeping is the NIST-F2 “cesium fountain” atomic clock. The transition from the previous timekeeping standard, NIST-F1, to the present standard, NIST-F2, is largely a result of engineering refinements of the earlier atomic clock. Even the experimental strontium clock is likely to be surpassed soon. JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability quotes Jun Ye as saying, “We already have plans to push the performance even more, so in this sense, even this new Nature paper represents only a ‘mid-term’ report. You can expect more new breakthroughs in our clocks in the next 5 to 10 years.”
The engineering refinement of high technology has two important consequences:
1) inexpensive, widely available devices (which I will call the ubiquity function), and…
2) improved, cutting edge devices that improve the precision of measurement (which I will call the meliorative function), sometimes improved by an order of magnitude (or several orders of magnitude).
These latter devices, those that represent greater precision, are not likely to be inexpensive or widely available, but as the STEM cycle continues to advance science, technology, and engineering in a regular and predictable manner, the older generation of technology becomes widely available and inexpensive as new technologies take their place on the expensive cutting edge. However, these cutting edge technologies are in turn displaced by newer technologies, and the cycle continues. Thus there is a relationship — an historical relationship — between the two consequences of the engineering refinement of technology. Both of these phases in the life of a technology affect the practice of science. NIST Launches a New U.S. Time Standard: NIST-F2 Atomic Clock quotes NIST physicist Steven Jefferts, lead designer of NIST-F2, as saying, “If we’ve learned anything in the last 60 years of building atomic clocks, we’ve learned that every time we build a better clock, somebody comes up with a use for it that you couldn’t have foreseen.”
Widely available precision measurement devices (the ubiquity function) bring down the cost of scientific research, and we begin to see science cropping up in all kinds of interesting and unexpected places. The development of computer technology and then the miniaturization of computers had the unintended result of making computers inexpensive and widely available. This, in turn, has meant that everyone doing science carries a portable computer with them, and this widely available computational power (which I have elsewhere called the computational infrastructure of civilization) has transformed how science is done. NIST Atomic Devices and Instrumentation (ADI) now builds “chip-scale” atomic clocks, which both commercializes and thereby democratizes atomic clock technology, in a form factor so small that it could be included in a cell phone (or whatever mobile device form factor you prefer). This is a perfect illustration of the ubiquity function in an engineering application of atomic clock technology.
New cutting edge precision measurement devices (the meliorative function), employed only by the governments and industries that can afford to push the envelope with the latest technology, are scientific instruments of great sensitivity; increasing the precision of the measurement of time by an order of magnitude opens up new possibilities the consequences of which cannot be predicted. What can be predicted, however, is that the present generation of high precision measurement devices makes it possible to construct the next generation of precision measurement devices, which exceed the precision of the previous generation. A clock built to a new design that is far more precise than its predecessors (like the strontium atomic clock) may not find its cutting edge scientific application exclusively in the measurement of time (though, again, it might do that also); as a scientific instrument of great sensitivity it suggests uses throughout the sciences. A further distinction can be made, then, between instruments used for the purposes they were intended to serve, and instruments that are exapted for unintended uses.
A loosely-coupled STEM cycle is characterized primarily by the ubiquity function, while a tightly-coupled STEM cycle is characterized primarily by the meliorative function. Human civilization has always involved a loosely-coupled STEM cycle, sometimes operating over thousands of years, with no apparent relationship between science, technology, and engineering. Technological progress was slow and intermittent under these conditions. However, the productivity of industrial-technological civilization is such that its STEM cycle yields both the ubiquity function and the meliorative function, which means that there are in fact multiple STEM cycles running concurrently, both loosely-coupled and tightly-coupled.
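The contrast between loosely-coupled and tightly-coupled STEM cycles can be caricatured in a toy simulation (entirely my own sketch, not a model proposed anywhere in the text): treat science, technology, and engineering as stocks that each feed the next, with a single hypothetical coupling coefficient standing in for how quickly each stage takes up the results of the previous one.

```python
# Toy model (illustrative only): S -> T -> E -> S as mutually
# reinforcing stocks, with coupling coefficient k controlling
# how strongly each stage drives the next per time step.

def stem_cycle(k, steps, s=1.0, t=1.0, e=1.0):
    """Iterate the feedback loop and return the final (S, T, E) stocks."""
    for _ in range(steps):
        # Simultaneous update: each stage grows in proportion to
        # the stage that feeds it.
        s, t, e = s + k * e, t + k * s, e + k * t
    return s, t, e

loose = stem_cycle(k=0.001, steps=100)  # loosely coupled: near-stagnant
tight = stem_cycle(k=0.5, steps=100)    # tightly coupled: compounding growth

print(loose)
print(tight)
```

With weak coupling the stocks barely grow over a hundred steps, while strong coupling produces explosive compounding, which is the qualitative point of the paragraph above: slow, intermittent progress under loose coupling, and the characteristic productivity of industrial-technological civilization under tight coupling.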
The research and development branch of a large business enterprise is the conscious constitution of a limited, tightly-coupled STEM cycle in which only that science is pursued that is expected to generate specific technologies, and only those technologies are developed that can be engineered into marketable products. An open-loop STEM cycle, a loosely-coupled STEM cycle, or exaptations of the STEM cycle are seen as wasteful, but in some cases the unintended consequences of commercial enterprises can be profound. When Arno Penzias and Robert Wilson were hired by Bell Labs, it was with the promise that they could use the Holmdel Horn Antenna for pure science once they had done the work that Bell Labs would pay them for. As it turned out, the actual work of tracking down interference resulted in the discovery of the cosmic microwave background radiation (CMBR), earning Penzias and Wilson the Nobel Prize. An engineering problem became a science problem: how do you explain the background interference that cannot be eliminated from electronic devices?
. . . . .
. . . . .
. . . . .
. . . . .