3 July 2015
Traditional units of measure
Quite some time ago in Linguistic Rationalization I discussed how the adoption of the metric system throughout much of the world meant the loss of traditional measuring systems that were intrinsic to the life of the people, part of the local technology of living, as it were. In that post I wrote:
“The gains that were derived from the standardization of weights and measures… did not come without a cost. Traditional weights and measures were central to the lives and the localities from which they emerged. These local systems of weights and measures were, until they were obliterated by the introduction of the metric system, a large part of local culture. With the metric system supplanting these traditional weights and measures, the traditional culture of which they were a part was dealt a decisive blow. This was not the kind of objection that men of the Enlightenment would have paused over, but with our experience of subsequent history it is the kind of thing that we think of today.”
Perhaps it is not the kind of thing many think of today; most people do not mourn the loss of traditional systems of measurement, but it should be recalled that these traditional systems of measurement were not arbitrary — they were based on the typical experience of individuals in a certain milieu, and they reflected the life and economy of a people, who measured the things that they needed to measure.
It is often noted that languages have an immediate relation to the life of a people — the most common example cited is that of the number of words for snow in the languages of the native peoples of the far north. Weights and measures — in a sense, the language of commerce — also reflect the life of a people in the same immediate way as their vocabulary. Language and measurement are linked: much of the earliest writing preserved from the Fertile Crescent consists of simple accounting of warehouse stores.
A particular example can illustrate what I have in mind. It is common to give the measurement of horses in hands. The hand as a unit of measurement has been standardized as four inches, but the unit obviously derives from the human hand. Everyone has an admittedly vague idea of the average size of a human hand, and this gives an anthropocentric measurement of horses, which have been crucial to many if not most human economies. The unit of a hand is intuitive and practical, and it continues to be used by individuals who work with horses. It is, indeed, part of the “lore” of horsemanship. Many traditional units of measurement are like this: derived from the human body — as Protagoras said, man is the measure of all things — they are intuitive and part of the lore of a tradition. Replacing these traditional units has a certain economic rationale, but there is a loss if that replacement is successful. More often (as in measuring horses today), both traditional and SI units are employed.
Units of measure unique to a discipline
One response to the loss of traditional units is to define new units in terms of a system of weights and measures — today, usually the metric system — which reflect the particular concerns of a particular discipline. Having a unit of measurement peculiar to a discipline creates a jargon peculiar to a discipline, which is not necessarily a good thing. However, a unit of measurement unique to a discipline makes it possible to think in terms peculiar to the discipline. This “thinking one’s way into” some mode of thought is probably insufficiently appreciated, but it is quite common in the sciences. There are, for example, many different units that are used to measure energy. In principle, only one unit is necessary, and all units of measuring energy can be given a metric equivalent today, but it is not unusual for the energy of a furnace to be measured in BTUs while the energy of a particle accelerator is measured in electronvolts (eV).
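The point that a single SI unit underwrites all the discipline-specific units can be made concrete with a short sketch. The conversion factors are standard (the electronvolt has an exact SI value; the BTU figure is the common International Table definition); the function names are my own illustration.

```python
# Different disciplines measure the same quantity, energy, in different
# units, but every such unit can be expressed in the SI joule.

JOULES_PER_BTU = 1055.05585262    # International Table BTU, in joules
JOULES_PER_EV = 1.602176634e-19   # electronvolt, exact SI value, in joules

def btu_to_joules(btu: float) -> float:
    """Furnace-scale energy: British thermal units to joules."""
    return btu * JOULES_PER_BTU

def ev_to_joules(ev: float) -> float:
    """Accelerator-scale energy: electronvolts to joules."""
    return ev * JOULES_PER_EV
```

The two functions return answers differing by some twenty orders of magnitude for an input of 1, which is precisely why each discipline keeps a unit scaled to its own phenomena.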
For a science of civilization there must be quantifiable measurements, and quantifiable measurements imply a unit of measure. It is a relatively simple matter to employ (or, if you like, to exapt) existing units of measurement for an unanticipated field of research, but it is also possible to formulate new units of measurement specific to a scientific research program — units that are explicitly conceived and applied with the peculiar object of study of the science in view. It is arguable that the introduction of a unit of measurement specific to civilization would contribute to the formulation of a conceptual framework that allows one to think in terms of civilization in a way not possible, for example, in the borrowed terminology of historiography or some other discipline.
Thinking our way into civilization
With this in mind, I would like to suggest the possibility of a unit of time specific to civilization. We already have terms for ten years (a decade), a hundred years (a century), and a thousand years (a millennium), so that it would make sense to employ a metric of years for the quantification of civilization. The basic unit of time in the metric system is the second, and we can of course define the year in terms of the number of seconds in a year. The measurement of time in terms of a year derives from natural cosmological cycles, like the measurement of time in terms of days. With the increase in the precision of atomic clocks, it became necessary to abandon the calibration of the second in terms of celestial events, and this calibration is now done in terms of nuclear physics. Nevertheless, the year, like the day, remains an anthropocentric unit of time that we all understand and that we are likely to continue to use.
Suppose we posit a period of a thousand years as the basic temporal unit for the measurement of civilization, and we call this unit the chronom. In other words, suppose we think of civilization in increments of 1,000 years. In the spirit of a decimal system we can define a series of units derived from the chronom by powers of ten. The chronom is 1,000 years or 10³ years; 1 decichronom is 100 years or 10² years (a century), 1 centichronom is 10 years or 10¹ years (a decade), and 1 millichronom is 1.0 year or 10⁰ years. In the other direction, in increasing size, 1 decachronom is 10 chronom or 10,000 years (10⁴ years), 1 hectochronom is 100 chronom or 100,000 years (10⁵ years), 1 kilochronom is 1,000 chronom or 1,000,000 years (10⁶ years or 1.0 Ma, i.e., mega-annum), and thus we have arrived at the familiar motif of a million-year-old supercivilization. Continuing upward we eventually come to the megachronom, which is 1,000,000 chronom or 10⁹ years or 1.0 Ga, i.e., giga-annum, at which point we reach the billion-year-old supercivilizations discussed by Ray Norris (cf. How old is ET?).
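The decimal scheme can be sketched in a few lines of code. This is a minimal illustration, assuming the hypothetical chronom of 1,000 years and standard SI decimal prefixes (under which the decichronom, one tenth of a chronom, is a century, and the centichronom, one hundredth, is a decade); the function name is my own.

```python
CHRONOM_YEARS = 1_000  # 1 chronom = 10^3 years, the proposed base unit

# SI-style decimal prefix -> multiplier applied to the chronom
PREFIXES = {
    "milli": 1e-3,   # 1 year
    "centi": 1e-2,   # 10 years (a decade)
    "deci":  1e-1,   # 100 years (a century)
    "":      1e0,    # 1,000 years (a millennium)
    "deca":  1e1,    # 10,000 years
    "hecto": 1e2,    # 100,000 years
    "kilo":  1e3,    # 1,000,000 years (1.0 Ma)
    "mega":  1e6,    # 1,000,000,000 years (1.0 Ga)
}

def chronom_to_years(value: float, prefix: str = "") -> float:
    """Convert a quantity in (prefixed) chronoms to years."""
    return value * PREFIXES[prefix] * CHRONOM_YEARS
```

So a million-year-old supercivilization is 1 kilochronom old, and Norris’s billion-year-old supercivilizations are 1 megachronom old.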
From such a starting point — and I am not suggesting that what I have written above should be the starting point; I have only given an illustration to suggest to the reader what might be possible — it would be possible to extrapolate further coherent units of measure. We would want to do so in terms of non-anthropocentric units, and, moreover, non-geocentric units. While the metric system is a great improvement (in terms of the standardization of scientific practice) over traditional units of measure, it is still a geocentric system of measure (albeit appealing to geocentrism in an extended sense).
Traditional units of measurement were parochial; the metric system was based on the Earth itself, and so not unique to any nation-state, but still local in a cosmological sense. If we were to extrapolate a metric for civilization according to constants of nature (like the speed of light, or some property of matter such as now exploited by atomic clocks), we would begin to formulate a non-anthropocentric set of units for civilization. A temporal metric for the quantitative study of civilization suggests the possibility of also having a spatial metric for the quantitative study of civilization. For example, a unit of space could be defined as the volume swept out by light traveling for 1 chronom; on a smaller scale, a sphere with a radius of one light year (the distance light travels in 1 millichronom) would entirely contain a civilization confined to the region of its star. That could be a useful metric for spacefaring civilizations.
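Such a hypothetical spatial unit can be made concrete with a short calculation, assuming the unit is the volume of a sphere whose radius is the distance light travels in some number of years; the names here are illustrative only.

```python
import math

def light_travel_radius_ly(years: float) -> float:
    """Distance light travels in the given time, in light years
    (one light year per year, by definition)."""
    return years * 1.0

def sphere_volume_ly3(radius_ly: float) -> float:
    """Volume of a sphere of the given radius, in cubic light years."""
    return (4.0 / 3.0) * math.pi * radius_ly ** 3

# In 1 chronom (1,000 years) light sweeps out a sphere of radius 1,000
# light years; in 1 millichronom (1 year), a sphere of radius 1 light year.
chronom_sphere = sphere_volume_ly3(light_travel_radius_ly(1_000))
millichronom_sphere = sphere_volume_ly3(light_travel_radius_ly(1))
```

The chronom-sphere comes to roughly 4.2 billion cubic light years, which gives a sense of how much space a civilization expanding at even a fraction of light speed could in principle occupy on civilizational timescales.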
What would be the benefit of such a system to quantify civilization? As I noted above, a system of measurement unique to a discipline allows us to think in terms of the discipline. Units of measurement for the quantification of civilization would allow us to think our way into civilization, and so possibly to avoid some of the traditional prejudices of historiographical thinking which have dominated thinking about civilization so far. Moreover, a non-anthropocentric system of civilization metrics would allow us to think our way into a non-anthropocentric metric for civilization, which would better enable us to recognize other civilizations when we have the opportunity to seek them out.
What I am suggesting here is a process of defamiliarization by way of scientific metrics to take the measure of something so familiar — human civilization — that it is difficult for us to think of it in objective terms. Previously in Kierkegaard and Russell on Rigor I discussed how a defamiliarizing process can be a constituent of rigorous thought. In so far as we aspire to the study of civilization as a rigorous science, the defamiliarization of a scientific set of metrics for quantifying civilization can be a part of that effort.
. . . . .
. . . . .
. . . . .
. . . . .
8 June 2015
In several posts I have discussed the need for a science of civilization (cf., e.g., The Future Science of Civilizations), and this is a theme I intend to continue to pursue in future posts. It is no small matter to constitute a new science where none has existed, and to constitute a new science for an object of knowledge as complex as civilization is a daunting task.
The problem of constituting a science of civilization, de novo for all intents and purposes, may be seen in the light of Husserl’s attempt to constitute (or re-constitute) philosophy as a rigorous science, which was a touchstone of Husserl’s work. Here is a passage from Husserl’s programmatic essay, “Philosophy as Strict Science” (variously translated) in which Husserl distinguishes between profundity and intelligibility:
“Profundity is the symptom of a chaos which true science must strive to resolve into a cosmos, i.e., into a simple, unequivocal, pellucid order. True science, insofar as it has become definable doctrine, knows no profundity. Every science, or part of a science, which has attained finality, is a coherent system of reasoning operations each of which is immediately intelligible; thus, not profound at all. Profundity is the concern of wisdom; that of methodical theory is conceptual clarity and distinctness. To reshape and transform the dark gropings of profundity into unequivocal, rational propositions: that is the essential act in methodically constituting a new science.”
Edmund Husserl, “Philosophy as Rigorous Science” in Phenomenology and the Crisis of Philosophy, edited by Quentin Lauer, New York: Harper, 1965 (originally “Philosophie als strenge Wissenschaft,” Logos, vol. I, 1911)
Recently re-reading this passage from Husserl’s essay I realized that much of what I have attempted in the way of “methodically constituting a new science” of civilization has taken the form of attempting to follow Husserl’s pursuit of “unequivocal, rational propositions” that eschew “the dark gropings of profundity.” I think much of the study of civilization, immersed as it is in history and historiography, has been subject more often to profound meditations (in the sense that Husserl gives to “profound”) than conceptual clarity and distinctness.
The Cartesian demand for clarity and distinctness is especially interesting in the context of constituting a science of civilization given Descartes’ famous disavowal of history (on which cf. the quote from Descartes in Big History and Scientific Historiography); if an historical inquiry is the basis of the study of civilization, and history consists of little more than fables, then a science of civilization becomes rather dubious. The emergence of scientific historiography, however, is relevant in this context.
The structure of Husserl’s essay is strikingly similar to the first lecture in Russell’s Our Knowledge of the External World. Both Russell and Husserl take up major philosophical movements of their time (and although the two were contemporaries, each took different examples — Husserl, naturalism, historicism, and Weltanschauung philosophy; Russell, idealism, which he calls “the classical tradition,” and evolutionism), primarily, it seems, to show how philosophy had gotten off on the wrong track. The two works can profitably be read side-by-side, as Russell is close to being an exemplar of the naturalism Husserl criticized, while Husserl is close to being an exemplar of the idealism that Russell criticized.
Despite the fundamental difference between Husserl and Russell, each had an idea of rigor that he attempted to realize in his philosophical work, and each thought of that rigor as bringing the scientific spirit into philosophy. (In Kierkegaard and Russell on Rigor I discussed Russell’s conception of rigor and its surprising similarity to Kierkegaard’s thought.) Interestingly, however, the two did not criticize each other directly, though they were contemporaries and each knew of the other’s work.
The new science Russell was involved in constituting was mathematical logic, which, Roman Ingarden explicitly tells us, Husserl found inadequate for the task of a scientific philosophy:
“It is maybe unexpected and surprising that Husserl who was trained as a mathematician did not seek salvation for philosophy in the mathematical method which had from time to time stood out like a beacon as an ideal worthy of imitation by philosophers. But mathematical logic could not satisfy him… above all he fought for responsibility in philosophical research and devoted many years to the elaboration of a method which, according to him, was to secure for philosophy the status of a science.”
Roman Ingarden, On the Motives which Led Husserl to Transcendental Idealism, Translated from the Polish by Arnor Hannibalsson, Den Haag: Martinus Nijhoff, 1975, p. 9.
Ingarden’s discussion of Husserl is instructive, in so far as he notes the influence of mathematical method upon Husserl’s thought, but also that Husserl did not try to employ a mathematical method directly in philosophy. Rather, Husserl invested his philosophical career in the formulation of a new methodology that would allow the values of rigorous scientific practice to be expressed in philosophy and through a philosophical method — a method that might be said to be parallel to or mirroring the mathematical method, or derived from the same thematic motives as those that inform mathematical methodology.
The same question is posed in considering the possibility of a rigorously scientific method in the study of civilization. If civilization is sui generis, is a sui generis methodology necessary to the formulation of a rigorous theory of civilization? Even if that methodology is not what we today know as the methodology of science, or even if that methodology does not precisely mirror the rigorous method of mathematics, there may be a way to reason rigorously about civilization, though it has yet to be given an explicit form.
The need to think rigorously about civilization I took up implicitly in Thinking about Civilization, Suboptimal Civilizations, and Addendum on Suboptimal Civilizations. (I considered the possibility of thinking rigorously about the human condition in The Human Condition Made Rigorous.) Ultimately I would like to make my implicit methodology explicit and so to provide a theoretical framework for the study of civilization.
Since theories of civilization have been, for the most part, either implicit or vague or both, there has been little theoretical framework to give shape or direction to the historical studies that have been central to the study of civilization to date. Thus the study of civilization has been a discipline adrift, without a proper research program, and without an explicit methodology.
There are at least two sides to the rigorous study of civilization: theoretical and empirical. The empirical study of civilization is familiar to us all in the form of history, but history studied as history and history studied for what it can contribute to the theory of civilization are two different things. One of the initial fundamental problems of the study of civilization is to disentangle civilization from history, which involves a formal rather than a material distinction, because both the study of civilization and the study of history draw from the same material resources.
How do we begin to formulate a science of civilization? It is often said that, while science begins with definitions, philosophy culminates in definitions. There is some truth to this, but when one is attempting to create a new discipline one must be both philosopher and scientist simultaneously, practicing a philosophical science or a scientific philosophy that approaches a definition even as it assumes a definition (admittedly vague) in order for the inquiry to begin. Husserl, clearly, and Russell also, could be counted among those striving for a scientific philosophy, while Einstein and Gödel could be counted as among those practicing a philosophical science. All were engaged in the task of formulating new and unprecedented disciplines.
This division of labor between philosophy and science points to what Kant would have called the architectonic of knowledge. Husserl conceived this architectonic categorically, while we would now formulate the architectonic in hypothetico-deductive terms, and it is Husserl’s categorical conception of knowledge that ties him to the past and at times gives his thought an antiquated cast, but this is merely an historical contingency. Many of Husserl’s formulations are dated and openly appeal to a conception of science that no longer accords with what we would likely today think of as science, but in some respects Husserl grasps the perennial nature of science and what distinguishes the scientific mode of thought from non-scientific modes of thought.
Husserl’s conception of science is rooted in the conception of science already emergent in the ancient world in the work of Aristotle, Euclid, and Ptolemy, and which I described in Addendum on the Agrarian-Ecclesiastical Thesis. Russell’s conception of science is that of industrial-technological civilization, jointly emergent from the scientific revolution, the political revolutions of the eighteenth century, and the industrial revolution. With the overthrow of scholasticism as the basis of university curricula (which took hundreds of years following the scientific revolution before the process was complete), a new paradigm of science was to emerge and take shape. It was in this context that Husserl and Russell, Einstein and Gödel, pursued their research, employing a mixture of established traditional ideas and radically new ideas.
In a thorough re-reading of Husserl we could treat his conception of science as an exercise to be updated as we went along, substituting an hypothetico-deductive formulation for each and every one of Husserl’s categorical formulations, ultimately converging upon a scientific conception of knowledge more in accord with contemporary conceptions of scientific knowledge. At the end of this exercise, Husserl’s observation about the difference between science and profundity would still be intact, and would still be a valuable guide to the transformation of a profound chaos into a pellucid cosmos.
This ideal, and even more so the realization of this ideal, ultimately may not prove to be possible. Husserl himself in his later writings famously said, “Philosophy as science, as serious, rigorous, indeed apodictically rigorous, science — the dream is over.” (It is interesting to compare this metaphor of a dream to Kant’s claim that he was awoken from his dogmatic slumbers by Hume.) The impulse to science returns, eventually, even if the idea of an apodictically rigorous science has come to seem a mere dream. And once the impulse to science returns, the impulse to make that science rigorous will reassert itself in time. Our rational nature asserts itself in and through this impulse, which is complementary to, rather than contradictory of, our animal nature. To pursue a rigorous science of civilization is ultimately as human as the satisfaction of any other impulse characteristic of our species.
. . . . .
. . . . .
. . . . .
. . . . .
4 April 2015
Curiosity does not have an especially good reputation, and one often finds the word coupled with “mere” so that “mere curiosity” can be elegantly dismissed as though beneath the dignity of the speaker, who can then go about his much more grand and august pursuits without the distraction of the petty, grubbing motivation of mere curiosity. There may be some connection between this disdainful attitude toward curiosity and the prevalent anti-intellectualism of western civilization, notwithstanding the fact that most of what is unique in this tradition is derived from the scientific spirit; it is no surprise that any driving force in human affairs eventually provokes an equal and opposite reaction.
Many civilizations that publicly value intellectuals do not value the contributions of intellectuals, so that this social prestige is indistinguishable from a kind of feudal regard for special classes of persons. This is not what happened in western civilization, in which scientific knowledge bestowed real wealth and power — in our own day no less than in the past — and so provoked a reaction. One of the most famous stories from classical antiquity was how Thales, predicting an especially good olive harvest, hired all the olive presses at a low rate out of season, and then let them out at inflated rates during the peak season, proving that philosophers could earn money if they wanted to do so.
There are a great many interesting quotes that invoke curiosity, for better or worse — Thomas Hobbes: “…this hope and expectation of future knowledge from anything that happeneth new and strange, is that passion which we commonly call ADMIRATION; and the same considered as appetite, is called CURIOSITY, which is appetite of knowledge.” Edmund Burke: “The first and simplest emotion which we discover in the human mind, is curiosity.” Albert Einstein: “I have no special talent. I am only passionately curious.” — which highlight both the admirable and the disreputable side of curiosity. That curiosity has both admirable and disreputable aspects suggests that one might be admirably curious or disreputably curious, and certainly all of us know individuals who are curious in the best sense of the term and others who are curious in the worst sense of the term.
Human beings are adventurers of the spirit. We must count among the attributes of human nature some basal drive toward questioning. This drive could be given an exposition in purely intellectual terms or in purely emotional terms; I think that the intellectual and emotional manifestations of human curiosity are two sides of the same coin, and that is why I suggest positing some basal drive that lies at the root of both. And it isn’t quite right to reduce this drive to curiosity, as we can formulate it in terms of curiosity or in terms of need.
Curiosity is often contrasted to a presumably more esteemed mode of interrogating the cosmos, that we may call existential need. Jacob Needleman often addressed the contrast between “mere” curiosity (which he sometimes called “low curiosity”) and present need. Here is an example:
“It has been said that any question can lead to truth if it is an aching question. For one person it may be the question of life after death, for another the problem of suffering, the causes of war and injustice. Or it may be something more personal and immediate — a profound ethical dilemma, a problem involving the whole direction of one’s life. An aching question, a question that is not just a curiosity or a fleeting burst of emotion, cannot be answered with old thought. Possessed by such a question, one is hungry for ideas of a very different order than the familiar categories that usually accompany us throughout our lives. One is both hungry and, at the same time, more discriminating, less susceptible to credulity and suggestibility. The intelligence of the heart begins to call to us in our sleep.”
Jacob Needleman, The American Soul: Rediscovering the Wisdom of the Founders, pp. 3-4
I disagree with this on so many levels that it is difficult to know where to start, so instead I will simply say that the kind of existential need Needleman wants to describe is highly credulous and suggestible, and what answers to this need almost always takes the form of an old and painfully familiar cognitive bias. However, to try to do justice to Needleman, I will allow that, for an individual immersed in the ordinary business of life who, through some traumatic experience, suddenly comes face to face with profound and difficult questions never before posed in that individual’s experience, then, yes, ideas of a very different order are needed to address such questions.
While I do not think that aching questions are likely to lead to truth — I think it much more likely that they will lead to self-deception — I do not deny that many are gnawed by aching questions, and some few spend their lives trying to answer them. The question, then, is the best method by which an aching question might be given a clear, coherent, and satisfying (in so far as that is possible) answer. Here I am reminded of a passage from Walter Kaufmann:
“Nowhere is the disproportion between effort and result more aggravating than in the pursuit of truth: you may plow through documents or make untold experiments or think and think and think, forgo food, comfort, and distractions, lie awake nights and eat out your heart — and in the end you know what can be memorized by any idiot.”
Walter Kaufmann, Critique of Religion and Philosophy, section 24
However aching our question, presumably we would want to spare ourselves the wasted effort of an inquiry that deprives us of the satisfactions of life while giving an answer that could be memorized by any idiot. Kaufmann did not go far enough here: sometimes individuals who make just such an heroic effort to get at the truth and only arrive at an idiot’s portion convince themselves that the idiot’s portion is in fact a great and profound truth.
Whether or not existential need can be satisfied, how are we to understand it? Viktor Frankl, a psychiatrist and one of the founders of existential analysis, identified a condition that he called the existential vacuum, which he defined as, “the frustration of the will to meaning.” Frankl knew that of which he spoke, having lost most of his family to Nazi death camps and himself having been interned at Auschwitz and liberated only at the end of the war. Here, in a longer passage, is his exposition of existential need:
“Ever more patients complain of what they call an ‘inner void,’ and that is the reason why I have termed this condition the ‘existential vacuum.’ In contradistinction to the peak-experience so aptly described by Maslow, one could conceive of the existential vacuum in terms of an ‘abyss-experience’.”
Viktor Frankl, The Will to Meaning: Foundations and Applications of Logotherapy, New York: Plume, 2014 (originally published in the US in 1969), Part Two, “The Existential Vacuum: A Challenge to Psychiatry”
One could readily suppose that existential need is occasioned by the existential vacuum; that the former is the condition and cause of the latter. Another and more recent approach to existential need is to be found in the work of James Giles:
“…existential needs are not the product of social construction. For in contrast to socially constructed phenomena, existential needs are an inherent and universal feature of the human condition.”
James Giles, The Nature of Sexual Desire, p. 181
This is not necessarily distinct from existential need occasioned by Frankl’s existential vacuum; one could formulate the existential vacuum so that it is either “an inherent and universal feature of the human condition” or not. And there may well be more than one form of existential need. In fact, I think it is clear that there is a plurality of existential needs, and some of these can be sublimated through scientific inquiry and can be satisfied, while some play out in the fruitless manner described in the passage above from Kaufmann.
How one approaches the mystery that is the world, by way of scientific curiosity or by way of existential need, which we might call the scientific approach and the existential approach, each reflect a valid human response to the individual’s relationship to the cosmos. Most of us, at some point in life, poignantly feel the mysteriousness of the world and the desire to give an account of our existence in relation to this mystery. Consider this from John Stuart Mill:
“Human existence is girt round with mystery: the narrow region of our experience is a small island in the midst of a boundless sea, which at once awes our feelings and stimulates our imagination by its vastness and its obscurity. To add to the mystery, the domain of our earthly existence is not only an island in infinite space, but also in infinite time. The past and the future are alike shrouded from us: we neither know the origin of anything which is, nor its final destination. If we feel deeply interested in knowing that there are myriads of worlds at an immeasurable, and to our faculties inconceivable, distance from us in space; if we are eager to discover what little we can about these worlds, and when we cannot know what they are, can never satiate ourselves with speculating on what they may be…”
Now, John Stuart Mill was an almost preternaturally rational man; he was not given to flights of fancy, though the high-flown rhetoric of this passage might suggest this. The scientific approach to mystery is a rationalistic response to the riddle of the world; answers are to be had, but the world is boundless, so that any one answered question still leaves countless other unanswered questions. The growth of knowledge is attended by a parallel growth in the unknown, as our increasing knowledge makes it possible for us to formulate previously unsuspected questions. One might find this to be invigorating or disappointing: there are real answers, but we will never have a final understanding of the world. The existential approach to mystery acknowledges that the human mind may not be capable of comprehending the mystery that is the world, but this is coupled with a fervent belief that there is a final and transcendent answer out there somewhere, even if it always remains tantalizingly out of reach. These are subtle but important differences in the conception of “ultimate” truth as it relates human beings to their world.
A distinction might be made between scientific mystery and absolute mystery, with scientific mystery being a mystery that admits of an answer, but which also admits of a further mystery. An absolute mystery admits of no answer, nor of any further mystery. The world might take on the character of scientific mystery or of absolute mystery depending on whether we approach the world from the perspective of scientific curiosity or existential need. In other words, the kind of mystery that the world is — even if we all agree that the world is girt round in mystery, as Mill says — corresponds to our attitude to the world.
One could argue that scientific curiosity is a sublimation of existential need. If this is true, there is no reason to be ashamed of this, or to attempt a return to the original existential need. The passage from existential need to scientific curiosity may be a stage in the development of intellectual maturity, as irreversible as the passage from childhood to adulthood.
One might go a step further and call scientific curiosity the secularization of existential need (or, rather, the secularization of religious mystery, which then invites a treatment in terms of the Max Scheler/Paul Tillich claim that all human beings are engaged in worship, it is only a question of whether the object of this worship is worthy or idolatrous), recalling Karl Löwith’s theory of secularization, which made much of modernity into a bastardized form of Christian eschatology. This presupposes not only that existential need precedes scientific curiosity, but that it is the only authentic form of human questioning, and that any attempt to introduce new forms of questioning the human condition is illegitimate.
We are today faced with questions that our ancestors, who first felt the disconcerting stirrings of existential need, could not have imagined. I touched on one of these questions in my post on Centauri Dreams, Cosmic Loneliness and Interstellar Travel, which drew more responses than any of my other posts to that forum. Our cosmic loneliness can now be expressed in scientific terms, and we can offer a scientific response to our attempts so far to answer the question, “Are we alone?” This is one of the great scientific questions of our time, and at the same time it speaks to a modern existential need that has been expressed in Clarke’s tertium non datur.
The growth of human knowledge and the civilization created by human knowledge may have its origins in the questioning that naturally emerges from an experience of existential need. Perhaps this feeling never fully dissipates, but in so far as the dissatisfaction and discontent of existential need can be redirected into scientific curiosity, human beings can experience at least a limited satisfaction derived from definite scientific answers to questions formulated with increasing clarity and rigor. Beyond this, we may have to wait for the next stage in human evolution, when we may acquire mental faculties that take us beyond both existential need and scientific curiosity into a frame of mind incomprehensible to us in our present iteration.
. . . . .
. . . . .
. . . . .
. . . . .
3 December 2014
P. F. Strawson called his twentieth century exposition of Kant The Bounds of Sense. I have commented elsewhere on what an appropriate title this is. The Kantian project (much like metamathematics in the twentieth century) was a limitative project. Kant himself wrote (in the Preface to the 2nd edition of the Critique of Pure Reason): “…my intention then was, to limit knowledge, in order to make room for faith.” Here is the entire passage from which the quote is taken, though in a different translation:
“This discussion as to the positive advantage of critical principles of pure reason can be similarly developed in regard to the concept of God and of the simple nature of our soul; but for the sake of brevity such further discussion may be omitted. [From what has already been said, it is evident that] even the assumption — as made on behalf of the necessary practical employment of my reason — of God, freedom, and immortality is not permissible unless at the same time speculative reason be deprived of its pretensions to transcendent insight. For in order to arrive at such insight it must make use of principles which, in fact, extend only to objects of possible experience, and which, if also applied to what cannot be an object of experience, always really change this into an appearance, thus rendering all practical extension of pure reason impossible. I have therefore found it necessary to deny knowledge, in order to make room for faith.”
Immanuel Kant, Critique of Pure Reason, Preface to the Second Edition
What lies beyond the bounds of sense? For Kant, faith. And Kant’s theological agenda drove him to seek the bounds of sense so that speculative reason could be deprived of its pretensions to transcendent insight. Thus Kant gives us an epistemology openly freighted with theological and moral concerns. Talk about the theory-ladenness of perception! It is, however, non-perception — i.e., that which cannot be the object of possible experience — that is the Kantian domain of faith.
Of course, this is the whole Kantian project in a nutshell, is it not? It is Kant’s design to show us exactly how perception is laden with theory, the theory native to the mind, the a priori concepts by which we organize experience. Kant propounds the transcendental aesthetic and the transcendental deduction of the categories in order to demonstrate the reliance of even the most ordinary experience upon the mind’s a priori faculties.
Kant was, in part, reacting against the empiricism of Locke and Hume — especially Hume’s skeptical conclusions, although Kant’s own rejection of metaphysics equaled if not surpassed Hume’s anti-metaphysical stance, as famously described in the following passage from Hume:
“When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.”
David Hume, An Enquiry Concerning Human Understanding, “Of the academical or sceptical Philosophy,” Part III
For Hume, the bounds of sense and the limitation of reason entailed doubt; for Kant the bounds of sense and the limitation of reason entailed belief. There is a lesson in here somewhere, and the lesson is this: from a single state of affairs, multiple interpretations can be shown to follow.
Are the bounds of sense also the bounds of science? It would seem so. In so far as science must appeal to empirical evidence, and empirical evidence comes to us by way of the senses, the limits of the senses impose limits on science. Of course, this is a bit too simplistic to be quite true. There are so many qualifications that need to be made to such an assertion that it is difficult to say where to start.
It should be familiar to everyone that we have come to extensively use instruments to augment our senses. Big Science today sometimes spends years, if not decades, building its enormous machines, without which contemporary science would not be possible. So the limits of the senses are not absolute, and they are subject to manipulation. Also, we sometimes do science without our senses or instruments, when we pursue science by way of thought experiments.
While thought experiments alone, unsupplemented by actual experiments, are probably insufficient to constitute a science, thought experiments have become a necessary requisite to science much as instrumentation has become a necessary requisite to science. Sometimes, when our technology catches up with our ideas, we can transform our thought experiments into actual experiments, so that there is an historical relationship between science properly understood and the penumbra of science represented by thought experiments. And thought experiments too have their controlled conditions, and these are the conditions that Kant attempted to lay down in the transcendental aesthetic.
There is also the question of whether or not mathematics is a science, or one among the sciences. And whether or not we set aside mathematics as something different from the other sciences, we know that unquestionably empirical sciences like physics are deeply mathematicized, so that the mathematical content of empirical theories may act like an abstract instrument, parallel to the material instruments of big science, that extends the possibilities of the senses. Another way to think about mathematics is as an enormous thought experiment that undergirds the rest of science — the one crucial thought experiment, an experimentum crucis, without which the rest of science cannot function. In this sense, thought experiments are indispensable to mathematicized science — as indispensable as mathematics.
At a more radical level of critique, it would be difficult to give a fine-grained account of empirical evidence that did not shade over, at the far edges of the concept, into other kinds of knowledge not strictly empirical. Empirical evidence may shade over into the kind of intuitive evidence that is the basis of mathematics, or the kind of epistemological context that is the setting for our thought experiments. Empirical evidence can also shade over into interoception that cannot be publicly verified (therefore failing a basic test of science) or precisely reproduced by repetition, and which interoception itself in turn shades over into intuitions in which thought and feeling are not clearly distinct.
Where does Kant’s possible experience fit within the continuum of the senses? What is the scope of possible experience? Can we make a clear distinction between extending the senses (and thus human experience) by abstract or concrete instruments and imposing a theory upon experience through these extensions? Does possible experience include all possible past experience? Does past experience include phenomena that occurred but which were not observed (the famous tree falling in a forest that no one hears)? Does it include all possible future experience, or only those future experiences that will eventually be actualized, and not those that remain merely shadowy possibilities? Does possible experience include those counterfactuals that feature in the “many worlds” interpretation of quantum theory? Explicit answers to these questions are less important than the lines of inquiry that the questions prompt us to pursue.
. . . . .
. . . . .
. . . . .
. . . . .
27 November 2014
An article on NPR about a new atomic clock being developed by NIST scientists, New Clock May End Time As We Know It, immediately intrigued me, and I wrote a post on my other blog in which I suggested that the new clock might be used to update the “Einstein’s box” thought experiment (also known as the clock-in-a-box thought experiment). While I would like to follow up on this idea at some time, today I want to write about advanced chronometry in the context of the STEM cycle.
Atomic clocks are among the most precise scientific instruments ever developed. As such, precision clocks offer a good illustration of the STEM cycle, which I identified as the definitive feature of industrial-technological civilization. While this illustration is contemporary, there is nothing new about the use of the most advanced science, technology, and engineering available being employed in chronometry.
The earliest sciences, already developed in classical antiquity, were mathematics and astronomy. These early scientific disciplines were applied to the construction of timekeeping mechanisms. Among the most interesting technological artifacts of the ancient world are the clock once installed in the Tower of the Winds in Athens (which was described in antiquity, but which no longer exists) and the Antikythera mechanism, the corroded remains of which were dredged up from a shipwreck off the Greek island of Antikythera (while discovered by sponge divers in 1900, the site is still yielding finds). A classic paper on the Tower of the Winds compares these two technologies: “This is a field in which ancient literature is curiously meager, as we well know from the complete lack of any literary reference to a technology that could produce the Antikythera Mechanism of the same date.” (“The Water Clock in the Tower of the Winds,” Joseph V. Noble and Derek J. de Solla Price, American Journal of Archaeology, Vol. 72, No. 4, Oct., 1968, pp. 345-355) Both of these artifacts are concerned with chronometry, which demonstrates that the most advanced technologies, then and now, have been employed in the measurement of time.
The advent of high technology as we know it today — unprecedented in human history — has been the result of the advent of a new kind of civilization — industrial-technological civilization — and the use of advanced technologies in chronometry provides a useful lens with which to view one of the unique features of our civilization today, which I call the STEM cycle. The acronym STEM is familiar from educational contexts, where it refers to education and training in science, technology, engineering, and mathematics; I have taken over this acronym as the name for one of the socioeconomic processes that lies at the heart of our civilization: Science seeks to understand nature on its own terms, for its own sake. Technology is that portion of scientific research that can be developed specifically for the realization of practical ends. Engineering is the industrial implementation of a technology. Mathematics is the common language in which the elements of the cycle are formulated. A feedback loop of science driving technology, driving engineering, driving more science, characterizes industrial-technological civilization. This is the STEM cycle.
The distinctions between science, technology, and engineering are not absolute — far from it. To employ a terminology I developed elsewhere, I would say that science is only weakly distinct from technology, technology is only weakly distinct from engineering, and engineering is only weakly distinct from science. In some contexts any two elements of the STEM cycle are identical, while in other contexts of the STEM cycle they are starkly contrasted. This is not due to inconsistency, but rather to the fact that science, technology, and engineering are open-textured concepts; we could adopt conventional distinctions that would make them strongly distinct, but this would be contrary to usage in ordinary language and would only result in confusion. Given the lack of clear distinctions among science, technology, and engineering, where we draw the dividing lines within the STEM cycle is to some degree arbitrary — we could describe this cycle in different terms, employing different distinctions — but the cycle itself is not arbitrary. By any other name, it drives industrial-technological civilization.
The clock that was the inspiration for this post — the new strontium atomic clock, described in JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability, and the subject of a scientific paper, An optical lattice clock with accuracy and stability at the 10⁻¹⁸ level by B. J. Bloom, T. L. Nicholson, J. R. Williams, S. L. Campbell, M. Bishof, X. Zhang, W. Zhang, S. L. Bromley, and J. Ye (a preprint of the article is available at Arxiv) — is instructive in several respects. In so far as we consider atomic clocks to be a generic “technology,” the strontium clock represents the latest and most advanced instance of this technology yet constructed: a more specific form of technology, the optical lattice clock, within the more generic division of atomic clocks. The sciences involved in the conceptualization of atomic clocks are fundamental: atomic physics, quantum theory, relativity theory, thermodynamics, and optics. Atomic clocks are a technology built from other technologies, including advanced materials, lasers, masers, vacuum chambers, refrigeration, and computers. Building the technology into an optimal device involves engineering for dependability, economy, miniaturization, portability, and refinements of design.
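To put the reported 10⁻¹⁸ stability level in perspective, a back-of-envelope calculation is helpful (the round figures below are my own illustration, not taken from the paper): at a fractional frequency uncertainty of one part in 10¹⁸, a clock would accumulate less than half a second of error over the entire age of the universe.

```python
# Illustrative back-of-envelope calculation: accumulated timing error for a
# clock with a given fractional frequency uncertainty. The constants are
# rough round numbers chosen for illustration, not values from the paper.

FRACTIONAL_UNCERTAINTY = 1e-18           # stability level reported for the strontium clock
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # about 3.16e7 seconds
AGE_OF_UNIVERSE_YEARS = 13.8e9           # about 13.8 billion years

def drift_seconds(elapsed_seconds, fractional_uncertainty=FRACTIONAL_UNCERTAINTY):
    """Worst-case accumulated error (in seconds) over an elapsed interval."""
    return elapsed_seconds * fractional_uncertainty

age_of_universe_s = AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR
print(drift_seconds(age_of_universe_s))  # roughly 0.44 seconds over the age of the universe
```

The same arithmetic shows why each order-of-magnitude gain in stability matters: a 10⁻¹⁷ clock would drift ten times as much over the same interval.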
The NIST web page notes that, “NIST invests in a number of atomic clock technologies because the results of scientific research are unpredictable, and because different clocks are suited for different applications.” (For further background on atomic clocks at NIST cf. A New Era for Atomic Clocks.) The new record-breaking clocks in terms of stability and accuracy are experimental devices; the current standard for timekeeping is the NIST-F2 “cesium fountain” atomic clock. The transition from the previous standard timekeeping, NIST-F1, to the present standard, NIST-F2, is largely a result of engineering refinements of the earlier atomic clock. Even the experimental strontium clock is likely to be soon surpassed. JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability quotes Jun Ye as saying, “We already have plans to push the performance even more, so in this sense, even this new Nature paper represents only a ‘mid-term’ report. You can expect more new breakthroughs in our clocks in the next 5 to 10 years.”
The engineering refinement of high technology has two important consequences:
1) inexpensive, widely available devices (which I will call the ubiquity function), and…
2) improved, cutting edge devices that improve the precision of measurement (which I will call the meliorative function), sometimes improved by an order of magnitude (or several orders of magnitude).
These latter devices, those that represent greater precision, are not likely to be inexpensive or widely available, but as the STEM cycle continues to advance science, technology, and engineering in a regular and predictable manner, the older generation of technology becomes widely available and inexpensive as new technologies take their place on the expensive cutting edge. However, these cutting edge technologies are in turn displaced by newer technologies, and the cycle continues. Thus there is a relationship — an historical relationship — between the two consequences of the engineering refinement of technology. Both of these phases in the life of a technology affect the practice of science. NIST Launches a New U.S. Time Standard: NIST-F2 Atomic Clock quotes NIST physicist Steven Jefferts, lead designer of NIST-F2, as saying, “If we’ve learned anything in the last 60 years of building atomic clocks, we’ve learned that every time we build a better clock, somebody comes up with a use for it that you couldn’t have foreseen.”
Widely available precision measurement devices (the ubiquity function) bring down the cost of scientific research, and we begin to see science cropping up in all kinds of interesting and unexpected places. The development of computer technology and then the miniaturization of computers had the unintended result of making computers inexpensive and widely available. This, in turn, has meant that everyone doing science carries a portable computer with them, and this widely available computational power (which I have elsewhere called the computational infrastructure of civilization) has transformed how science is done. NIST Atomic Devices and Instrumentation (ADI) now builds “chip-scale” atomic clocks, which both commercializes and thereby democratizes atomic clock technology in a form factor so small that it could be included in a cell phone (or whatever mobile device form factor you prefer). This is a perfect illustration of the ubiquity function in an engineering application of atomic clock technology.
New cutting edge precision measurement devices (the meliorative function), employed only by the governments and industries that can afford to push the envelope with the latest technology, are scientific instruments of great sensitivity; increasing the precision of the measurement of time by an order of magnitude opens up new possibilities the consequences of which cannot be predicted. What can be predicted, however, is that the present generation of high precision measurement devices makes it possible to construct the next generation of precision measurement devices, which exceed the precision of the previous generation. A clock built to a new design that is far more precise than its predecessors (like the strontium atomic clock) may not necessarily find its cutting edge scientific application exclusively in the measurement of time (though, again, it might do that also), but as a scientific instrument of great sensitivity it suggests uses throughout the sciences. A further distinction can be made, then, between instruments used for the purposes they were intended to serve, and instruments that are exapted for unintended uses.
A loosely-coupled STEM cycle is characterized primarily by the ubiquity function, while a tightly-coupled STEM cycle is characterized primarily by the meliorative function. Human civilization has always involved a loosely-coupled STEM cycle, sometimes operating over thousands of years, with no apparent relationship between science, technology, and engineering. Technological progress was slow and intermittent under these conditions. However, the productivity of industrial-technological civilization is such that its STEM cycle yields both the ubiquity function and the meliorative function, which means that there are in fact multiple STEM cycles running concurrently, both loosely-coupled and tightly-coupled.
The research and development branch of a large business enterprise is the conscious constitution of a limited, tightly-coupled STEM cycle in which only that science is pursued that is expected to generate specific technologies, and only those technologies are developed that can be engineered into marketable products. An open loop STEM cycle, loosely-coupled STEM cycle, or exaptations of the STEM cycle are seen as wasteful, but in some cases the unintended consequences from commercial enterprises can be profound. When Arno Penzias and Robert Wilson were hired by Bell Labs, it was with the promise that they could use the Holmdel Horn Antenna for pure science once they had done the work that Bell Labs would pay them for. As it turned out, the actual work of tracking down interference resulted in the discovery of cosmic microwave background radiation (CMBR), earning Penzias and Wilson the Nobel prize. An engineering problem became a science problem: how do you explain the background interference that cannot be eliminated from electronic devices?
. . . . .
. . . . .
. . . . .
. . . . .
15 November 2014
When I find myself among conspiracy theorists and pseudo-science aficionados, I probably sound like the most relentless, ruthless, unforgiving positivist that you have ever heard. But, of course, I’m not a positivist at all. When I find myself among those educated in the sciences, I probably sound like the most woolly-headed philosopher imaginable, who seemingly takes every opportunity to needlessly complicate matters that are perfectly clear just as they are. I am caught between defending science among those innocent of science, and defending philosophy among those innocent of philosophy. In other words, I can’t win. And now I’m going to make my hopeless position worse by taking the conflict (rather, the absence of communication) between science and philosophy into the forbidden no-man’s-land of politics.
My particular dilemma is the result of understanding that science is philosophy; that is to say, science as we know it today is a particular branch of philosophy (something that I began to explain in A Fly in the Ointment). While it may be grudgingly acknowledged that science has philosophical presuppositions, it is a step further to see science as a particular philosophy that is rather less comprehensive than the whole of philosophy. Now, it is true that science has become differentiated from the rest of philosophy because of its practical successes, but its practical successes alone are no warrant for separating methodological naturalism, i.e., science, from the rest of philosophy.
Without philosophy we cannot understand science; philosophy provides both the synchronic and the diachronic context of science. The emergence of science within western civilization is the diachronic narrative of philosophy, and the relations of science to other aspects of the world and human experience is the synchronic context of science that can only adequately be addressed by philosophy. The need for a robust engagement between science and philosophy, as is to be found, for example, in the work of Einstein, is a need that grows out of the philosophical context of science.
Previous epochs of civilization — notably, agrarian-ecclesiastical civilization — might point to their own pragmatic implementations of philosophy, no less than the successes of the sciences are heralded today. Enormous monumental building projects that still impress us today, symbols of civilization such as the pyramids, Hagia Sophia, the Taj Mahal, the Daibutsu at Nara, and Borobudur, were possible only through the effort of a philosophically unified civilization, and the monuments themselves are monuments to those civilizations and their philosophical bases.
As an example of a philosophical civilization animated from the power elites at the top down to the lowest rungs of the socioeconomic ladder I have elsewhere quoted Gregory of Nazianzus on the Christological controversies in Byzantium:
“Constantinople is full of handicraftsmen and slaves, who are all profound theologians, and preach in their workshops and in the streets. If you want a man to change a piece of silver, he instructs you in which consists the distinction between the Father and the Son; if you ask the price of a loaf of bread, you receive for answer, that the Son is inferior to the Father; and if you ask, whether the bread is ready, the rejoinder is that the genesis of the Son was from nothing.”
Another example might be the reach of stoicism in the Roman empire from the emperor Marcus Aurelius to the slave Epictetus. This philosophical character of agrarian-ecclesiastical civilization is not limited to western civilization, its predecessors, and successors, but is a planetary phenomenon.
The civilization of India is perhaps uniquely philosophical in the world. India is a civilization-state, and Indian civilization is a philosophical civilization. In this respect, it is markedly different from western civilization, which has no contemporary single state representative, and which in regard to philosophy is narrower and more focused.
This can give us a certain insight into western civilization, which is not a philosophical civilization in the sense that India is, but is a fragment of a philosophical civilization. In so far as science is a particular branch of philosophy, and in so far as western civilization in its present form (industrial-technological civilization) is founded upon science as the source of the STEM cycle, western civilization is a philosophical civilization for the particular philosophy of methodological naturalism. Indeed, the very insistence today that science can do without philosophy is an expression of the philosophical narrowness of western civilization.
Much is to be learned from the comparison of the philosophies and civilizational structures of those independent civilizations that can be traced all the way to their origins in the Neolithic Agricultural Revolution, during which all agrarian-ecclesiastical civilizations had their earliest origins. But there is a problem here. In reaction against the imperialism of western civilization since that period once called the Age of Discovery, when Columbus, Magellan, Vasco da Gama, Amerigo Vespucci, Vasco Núñez de Balboa, and many others, sailed from Europe and began to survey the world entire, it is now considered in supremely bad taste to compare civilizations. The celebratory model of tolerance is almost universally adopted and every civilization is counted as a special snowflake that has something to contribute to human history.
In my post on The Future Science of Civilizations I noted Carnap’s tripartite distinction among scientific concepts, which Carnap identified as the classificatory, the comparative, and the quantitative. (We note that this typology itself takes a classificatory form, and an entire class of scientific concepts are comparative concepts.) In so far as we understand Carnap’s conceptual schema of measurement as developmental, proceeding in phases so that initial classifications lead to comparisons, and comparisons lead to quantification, all the while gaining in objectivity, Carnap’s schematism of scientific measurement embodies what Edith Wyschogrod called “the quantification of the qualitied world.”
If we take the division of classificatory, comparative, and quantitative concepts not in a developmental sense but as different approaches to a scientific grasp of the world, then each conceptual method of measurement may yield unique information about the world. In either case, whether we take these scientific concepts of measurement in developmental terms or take each in isolation, comparative concepts have a crucial role to play: either they are a stage in the development of a fully quantitative science, or they yield unique information about the world.
We cannot fully or adequately conceptualize civilization without developing comparative concepts of civilization to the greatest extent possible, but the development and exploration of this conceptual space is severely constrained by the contemporary political proscription upon the comparison of civilizations. In this way, the study of civilization today is unnecessarily yet unavoidably political. In order to frankly and bluntly discuss comparative conceptions of civilization, we are forced to seek artful euphemisms to speak evasively. This is unfortunate for the development of a science of civilization, but it is not insuperable, and the appropriate degree of abstraction and formalization in a fully developed theoretical context may be sufficient to violate this taboo in spirit while leaving the letter of the proscription intact.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
11 November 2014
Wittgenstein was not himself a positivist, but his early work, Tractatus Logico-Philosophicus, had such a profound influence on early twentieth century philosophy that the philosophy that we now identify as logical positivism was born from reading groups that got together to study Wittgenstein’s Tractatus — what I have elsewhere called The Ludwig Wittgenstein Reading Club — primarily the Vienna Circle.
Wittgenstein began his education as an engineer, and only later became interested in philosophy by way of the philosophy of mathematics then emerging from the work of Frege and Russell. It has been said that the early Wittgenstein approached philosophy like an engineer, setting out to drain the swamps of philosophy. A more familiar metaphor for Wittgenstein’s philosophy, though for the later rather than the earlier Wittgenstein, is that of philosophy as a kind of therapy:
“A philosopher is a man who has to cure many intellectual diseases in himself before he can arrive at the notions of common sense.”
Wittgenstein, Culture and Value, 1944, p. 44e
Wittgenstein does not himself use the term “therapy” or “therapeutic,” but frequently recurs to the theme in other words:
“In philosophizing we may not terminate a disease of thought. It must run its natural course, and slow cure is all important. (That is why mathematicians are such bad philosophers.)”
Wittgenstein, Zettel, 382
The idea of philosophy as therapy is not entirely new. In my Variations on the Theme of Life I noted the medieval tradition of conceiving philosophers as “doctors of the soul”:
“During late antiquity philosophers were sometimes called ‘doctors of the soul.’ Later yet, Avicenna was a practicing physician in addition to being both a logician and a philosopher, and he stands at the head of a tradition of doctor-philosophers among the Arabs. All this has a superficial resemblance to the contemporary conception of philosophy as therapy, but in reality it is the antithesis of the modern conception of philosophy as a sickness in need of therapy, of scholarship as an illness, and of the philosopher as corrupt and corrupting.”
Variations on the Theme of Life, section 767
Every age must confront the ancient and perennial questions of philosophy anew, because each age has its own, peculiar therapeutic needs. It has become a commonplace of contemporary commentary, at least since the middle of the twentieth century, that the pace and busyness of our civilization today is driving us insane, and in so far as this is true, we are more in need of therapy than previous ages.
In my previous post, Philosophy for Industrial-Technological Civilization, I suggested, contrary to Quine, that philosophy of science is not philosophy enough; that we also need philosophy of technology and philosophy of engineering, and to unify these aspects of the STEM cycle within the big picture, we need a philosophy of big history. There is only one problem with my vision for the overarching philosophy demanded by the world of today: there is no demand for it. No one is interested in my vision or, for that matter, any other vision of philosophy for the twenty-first century.
Previously I wrote three posts on contemporary anti-philosophy.
The most prestigious scientists of our time seem at one in their insistence upon the irrelevance of philosophy. A post on the SelfAwarePatterns blog, E.O. Wilson: Science, not philosophy, will explain the meaning of existence, brought my attention to E. O. Wilson’s recent statements belittling philosophy. SelfAwarePatterns has also written about Neil deGrasse Tyson’s “blanket dismissal of philosophy” in Neil deGrasse Tyson is wrong to dismiss all of philosophy, but he may have a point on some of it.
It is almost painful to watch Wilson’s oversimplifications in the above-linked “Big Think” piece, though I suspect his oversimplifications will find a wide and sympathetic audience. After implying the pointlessness of studying the history of philosophy and claiming that philosophy mostly consists of “failed models of how the brain works,” Wilson appeals to the “full story of humanity” (without mentioning big history, though the interdisciplinary concatenation he describes is very much in the spirit of big history), and arrives at a point of view almost precisely the same as the one I heard several times at the 2014 IBHA conference: once we have this big picture view of history, we no longer need to ask what the meaning of life is, because we will know it.
The inescapable reflexivity of philosophical thought means that any principled rejection of philosophy is itself a philosophical claim; unprincipled rejections, that is to say, dismissal without reason or argument, have no more standing than any other unprincipled claim. So the scientists who dismiss philosophy and give reasons for doing so are doing philosophy. The unfortunate consequence is that they are doing philosophy poorly, much like someone who dismisses science but who pontificates on matters scientific, and does so poorly. We are well familiar with this, as pseudo-science has been given a megaphone by the internet and other forms of mass media. Scientists are aware of the problem posed by pseudo-science, but seem to be blissfully unaware of the problem of pseudo-philosophy.
There is a book by Louis Althusser, Philosophy and the Spontaneous Philosophy of Scientists, that I have cited previously (in Fashionable Anti-Philosophy) since the title is so evocative, in which Althusser says, “…in every scientist there sleeps a philosopher or, to put it another way, that every scientist is affected by an ideology or a scientific philosophy which we propose to call by the conventional name: the spontaneous philosophy of the scientists…” It is this spontaneous philosophy of scientists that we see in the anti-philosophical pronouncements of E. O. Wilson and Neil deGrasse Tyson.
Not only eminent scientists, but also science popularizers share this attitude. Michio Kaku’s recent book, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind, is essentially a speculative work in the philosophy of mind. There is a pervasive yet implicit Kantianism running through Kaku’s book of which I am sure he is unaware, because, like most scientists today who write on philosophical topics, he has not bothered to study the philosophical literature. If one knows that one is arguing a neo-Kantian position on the transcendental aesthetic, in trying to come to terms with how the barrage of sensory data is somehow translated into an apparently smooth and unitary stream of consciousness, then one can simply consult the literature to learn where the argument over the transcendental aesthetic stands today, and what the standard arguments are for and against contemporary Kantianism; but without this basic knowledge, one does little more than repeat what has already been said — better — by others, and long ago. Even Sam Harris, who has some background in philosophy, gives his exposition of determinism in a philosophical vacuum, as though the work of philosophers such as Robert Kane, Helen Steward, and Alfred R. Mele simply did not exist, or is beneath notice.
The anti-philosophy and pseudo-philosophy of prominent scientists is an instance of the spontaneous philosophy noted by Althusser. But this spontaneous expression of uninformed philosophical speculation does not come out of nowhere; it has a basis, albeit dimly understood, in the nature of science itself. What is the nature of science itself? I have an answer to this, but it is not an answer that will be welcome to most of those in science today: science is philosophy. That is to say, science is a particular branch of philosophy, that branch once called natural philosophy, and it is natural philosophy practiced in accordance with methodological naturalism. Science is a narrow slice of a far more comprehensive conception of the world.
Scientists are philosophers without realizing they are philosophers, and when they pronounce upon philosophical questions without reference to the philosophical tradition — which is much broader and more pluralistic than any one, single branch of philosophy, such as natural philosophy — they do little more than restate their presuppositions as principles. Given the preeminent role of science within industrial-technological civilization, this willful ignorance of philosophy, and of the position of science in relation to philosophy, is not only holding back both science and philosophy, it is holding back civilization.
The next stage of development of our civilization (not to mention the macro-evolution of our civilization into another kind of civilization) will not come about until science utterly abandons the positivistic assumptions that are today the unquestioned yet implicit presuppositions of scientific inquiry, and science extends the scientific method, and the sense of responsibility to empirical evidence, beyond the confines of any one branch of philosophy to the whole of philosophy. To paraphrase Plato, until philosophers theorize as scientists or those who are now called scientists and leading thinkers genuinely and adequately philosophize, that is, until science and philosophy entirely coincide, while the many natures who at present pursue either one exclusively are forcibly prevented from doing so, civilization will have no rest from evils… nor, I think, will the human race.
. . . . .
. . . . .
. . . . .
. . . . .
12 June 2014
Scientific civilization changes when scientific knowledge changes, and scientific knowledge changes continuously. Science is a process, and that means that scientific civilization is based on a process, a method. Science is not a set of truths to which one might assent, or from which one might withhold one’s assent. It is rather the scientific method that is central to science, and not any scientific doctrine. Theories will evolve and knowledge will change as the scientific method is pursued, and the method itself will be refined and improved, but method will remain at the heart of science.
Pre-scientific civilization was predicated on a profoundly different conception of knowledge: the idea that truth is to be found at the source of being, the fons et origo of the world (as I discussed in my last post, The Metaphysics of the Bureaucratic Nation-State). Knowledge here consists of delineating the truth of the world prior to its later historical accretions, which are to be stripped away to the extent possible. More experience of the world only further removes us from the original source of the world. The proper method of arriving at knowledge is either through the study of the original revelation of the original truth, or through direct communion with the source and origin of being, which remains unchanged to this day (according to the doctrine of divine impassibility).
The central conceit of agrarian-ecclesiastical civilization, that it was founded upon revealed eternal verities, has been so completely overturned that its successor civilization, industrial-technological civilization, recognizes no eternal verities at all. Even the scientific method, which drives the progress of science, is continually being revised and refined. As Marx put it in the Communist Manifesto: “All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air…”
Scientific civilization always looks forward to the next development in science that will resolve our present perplexities, but this comes at the cost of posing new questions that further put off the definitive formulation of scientific truth, which remains perpetually incomplete even as it expands and becomes more comprehensive.
This has been recently expressed by Kevin Kelly in an interview:
“Every time we use science to try to answer a question, to give us some insight, invariably that insight or answer provokes two or three other new questions. Anybody who works in science knows that they’re constantly finding out new things that they don’t know. It increases their ignorance, and so in a certain sense, while science is certainly increasing knowledge, it’s actually increasing our ignorance even faster. So you could say that the chief effect of science is the expansion of ignorance.”
The Technium: A Conversation with Kevin Kelly [02.03.2014]
Scientific civilization, then, is not based on a naïve belief in progress, as is often alleged, but rather embodies an idea of progress that is securely founded in the very nature of scientific knowledge. There is nothing naïve in the scientific conception of knowledge; on the contrary, the scientific conception of knowledge had a long and painfully slow gestation in western civilization, and it is rather the paradigm that science supplants, the theological conception of knowledge (according to which all relevant truths are known from the outset, and are never subject to change), that is the naïve conception of knowledge, sustainable only in the infancy of civilization.
We are coming to understand that our own civilization, while not yet mature, is a civilization that has developed beyond its infancy to the degree that the ideas and institutions of infantile civilization are no longer viable, and if we attempt to preserve these ideas and institutions beyond their natural span, the result may be catastrophic for us. And so we have come to the point of conceptualizing our civilization in terms of existential risk, which is a thoroughly naturalistic way of thinking about the fate and future of humanity, and is amenable to scientific treatment.
It would be misleading to attribute our passing beyond the infancy of civilization to the advent of the particular civilization we have today, industrial-technological civilization. Even without the industrial revolution, scientific civilization would likely have gradually come to maturity, in some form or another, as the scientific revolution dates to that period of history that could be called modern civilization in the narrow sense — what I have called Modernism without Industrialism. And here by “maturity” I do not mean that science is exhausted and can produce no new scientific knowledge, but that we become reflexively aware of what we are doing when we do science. That is to say, scientific maturity is when we know ourselves to be engaged in science. In so far as “we” in this context means scientists, this was probably largely true by the time of the industrial revolution; in so far as “we” means mass man of industrial-technological civilization, it is not yet true today.
The way in which science enters into industrial-technological civilization — i.e., by way of spurring forward the open loop of industrial-technological civilization — means that science has been incorporated as an integral part of the civilization that immediately and disruptively followed the scientific civilization of modernism without industrialism (according to the Preemption Hypothesis). While the industrial revolution disrupted and preempted almost every aspect of the civilization that preceded it, it did not disrupt or preempt science, but rather gave a new urgency to science.
In several posts I have speculated on possible counterfactual civilizations (according to the counterfactuals implicit in naturalism), that is to say, forms of civilization that were possible but which were not actualized in history. One counterfactual civilization might have been agrarian-ecclesiastical civilization undisrupted by the scientific or industrial revolutions. Another counterfactual civilization might have been modern civilization in the narrow sense (i.e., Modernism without Industrialism) coming to maturity without being disrupted and preempted by the industrial revolution. It now occurs to me that yet another counterfactual form of civilization could have been that of industrialization without the scientific conception of knowledge or the systematic application of science to industry.
How could this work? Is it even possible? Perhaps not, and certainly not in the long term, or with high technology, which cannot exist without substantial scientific understanding. But the simple expedient of powered machinery might have come about by the effort of tinkerers, as did much of the industrial revolution as it happened. If we look at the halting and inconsistent efforts in the ancient world to produce large scale industries we get something of this idea, and this we could call industrialism without modernity. Science was not yet at the point at which it could be very helpful in the design of machinery; none of the sciences were yet mathematicized. And yet some large industrial enterprises were built, though few in number. It seems likely that it was not the lack of science that limited industrialization in classical antiquity, but the slave labor economy, which made labor-saving devices pointless.
There are, today, many possibilities for the future of civilization. Technically, these are future contingents (like Aristotle’s sea battle tomorrow), and as history unfolds one of these contingencies will be realized while the others become counterfactuals or are put off yet further. And in so far as there is a finite window of opportunity for a particular future contingent to come into being, beyond that window all unactualized contingents become counterfactuals.
. . . . .
. . . . .
I have written more on the nature of scientific civilization in…
. . . . .
. . . . .
. . . . .
. . . . .
24 November 2013
The world, we are learning every day, is a very large place. Or perhaps I should say that the universe is a very large place. It is also a very complex and strange place. J. B. S. Haldane famously said that, “I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.” (Possible Worlds and Other Papers, 1927, p. 286) In other words, human beings, no matter how valiantly they attempt to understand the universe, may not be cognitively equipped to understand it; our minds may not be the kind of minds that can understand the kind of place that the world is.
This idea of our inability to understand the world in which we find ourselves (an admirably humble Copernican insight that we might call metaphysical modesty, and which stands in contrast to epistemic hubris) has received many glosses since Haldane’s time. Most notable (notable, at least, from my perspective) are the evolutionary gloss, the quantum physics gloss, and the philosophical gloss. I will consider each of these in turn.
In terms of evolution, there is no reason to suppose that descent with modification in a context of a struggle for vital resources on the plains of Africa (the environment of evolutionary adaptedness, or EEA) is going to produce minds capable of understanding higher dimensional spatial manifolds or quantum physics at microscopic scales that differ radically from the macroscopic scales of ordinary human perception. Alvin Plantinga (about whom I wrote some time ago in A Note on Plantinga, inter alia) has used this argument for theological purposes. However, there is no intrinsic reason that a mind born in the mud and the muck cannot raise itself above its origins and come to contemplate the world in Copernican terms. The evolutionary argument cuts both ways, and since we have ourselves as the evidence of an organism that can raise itself from strictly survival behavior to forms of thought that have nothing to do with survival, from the perspective of the weak anthropic principle this is proof enough that natural selection can result in such a mind.
In terms of quantum theory, we are all familiar with famous quotes from the leading lights of quantum theory as to the essential incomprehensibility of that theory. For example, Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” However, I have observed (in The limits of my language are the limits of my world and elsewhere) that recent research is making strides in working around the epistemic limitations of quantum theory, revealing its uncertainties to be not absolute and categorical, but rather subject to careful and painstaking narrowing that renders the uncertainty a little less uncertain. I anticipate two developments that will emerge from the further elaboration of quantum theory: 1) the finding of ways to gradually and incrementally chip away at an absolutist conception of uncertainty (as just mentioned), and 2) the formulation of more adequate intuitions to make quantum theory more palatable to the human mind.
In terms of philosophy, Colin McGinn’s book Problems in Philosophy: The Limits of Inquiry formulates a position which he calls Transcendental Naturalism:
“Philosophy is an attempt to get outside the constitutive structure of our minds. Reality itself is everywhere flatly natural, but because of our cognitive limits we are unable to make good on this general ontological principle. Our epistemic architecture obstructs knowledge of the real nature of the objective world. I shall call this thesis transcendental naturalism, TN for short.” (pp. 2-3)
I have previously written about McGinn’s work in Transcendental Non-Naturalism and Naturalism and Object Oriented Ontology, inter alia. Our ability to get outside the constitutive structure of our minds is severely limited at best, and so our ability to understand the world as it is in itself is correspondingly limited.
While our cognitive abilities are admittedly limited (for all the reasons discussed above, as well as other reasons not discussed), these limits are not absolute, but rather admit of revision. McGinn’s position as stated above implies a false dichotomy between staying within the constitutive structure of our minds and getting outside it. This is a classic case of facing the sheer cliff of Mount Improbable: while it is impossible to get outside our cognitive architecture in one fell swoop, we can little by little transgress the boundaries of our cognitive architecture, each time ever-so-slightly expanding our capacities. Incrementally over time we improve our ability to stand outside those limits that once marked the boundaries of our cognitive architecture. Thus in an ironic twist of intellectual history, the evolutionary argument, rather than demonstrating metaphysical modesty, is rather the key to limiting the limitations on the human mind.
All of this is related to one of the central problems in the philosophy of science of our time — the whole Kuhnian legacy that is the framework of so much contemporary philosophy of science. Copernican revelations and revolutions, which formerly disturbed our anthropocentric bias every few hundred years, now occur with alarming frequency. The difference today, of course, is that science is much more advanced than it was with past Copernican revelations and revolutions — it has much more advanced instrumentation available to it (as a result of the STEM cycle), and we have a much better idea of what to look for in the cosmos.
It was a shock to almost everyone to have it scientifically demonstrated that the universe is not static and eternal, but dynamic and changing. It was a shock when quantum theory demonstrated the world to be fundamentally indeterministic. This is by now a very familiar narrative. In fact, it is so familiar that it has been expropriated (dare I say exapted?) by obscurantists and irrationalists of our time, who point at continual changes in scientific knowledge as “proof” that science doesn’t give us any “truth” because it changes. The assumption here is that change in scientific knowledge demonstrates the weakness of science; in fact, change in scientific knowledge is the strength of science. Scientific knowledge is what I have elsewhere called an intelligent institution in so far as it is institutionalized knowledge, but that institution is formulated with internal mechanisms that facilitate the re-shaping of the institution itself over time. That mechanism is the scientific method.
It is important to see that the overturning of familiar conceptions of the world — some of which are ancient and some of which are not — is not arbitrary. Less comprehensive conceptions are being replaced by more comprehensive conceptions. The more comprehensive our perspective on the world, the greater the number of anomalies we must face, and the greater the number of anomalies we face the more likely it is that our theories will be overturned, or at least partially falsified. But to ask whether theory change is rational or irrational is to ask the wrong question. It is misleading because what ought to concern us is how well our theories account for the ever-larger world that is revealed to us through our ever-more comprehensive methods of science, not how well our theories conform to our presuppositions about rationality. If we get the science right, reason will follow, shaping new intuitions and formulating new theories.
Our ability to discover (and to understand) ever greater scales of the universe is contingent upon our growing intellectual capabilities, which are cumulative. Just as in the STEM cycle science begets technologies that beget industries that create better scientific instruments, so too on a purely intellectual level the intellectual capabilities of one generation are the formative context of the intellectual capabilities of the next generation, which allows the later generation to exceed the earlier generation. Concepts are the tools of the mind, and we use our familiar concepts to create the next generation of concepts, which latter are both more refined and more powerful than the former, in the same way as we use each generation of tools to build the next generation of tools, which makes each generation of tools better than the last (except for computer software — but I expect that this degradation in the practicability of computer software is simply the software equivalent of planned obsolescence).
Our current generation of tools — both conceptual and technological — is daily revealing to us the inadequacy of our past conceptions of the world. Several recent discoveries have in particular called into question our understanding of the size of the world, especially in so far as the world is defined in terms of its origins in the Big Bang. For example, the discovery of hyperclusters suggests physical structures of the universe that are larger than the upper limit set to physical structures by contemporary cosmological theories (cf. ‘Hyperclusters’ of the Universe — “Something is Behaving Very Strangely”).
In a similar vein, writing of the recent discovery of a “large quasar group” (LQG) as much as four billion light years across, the article The Largest Discovered Structure in the Universe Contradicts Big-Bang Theory Cosmology states:
“This LQG challenges the Cosmological Principle, the assumption that the universe, when viewed at a sufficiently large scale, looks the same no matter where you are observing it from. The modern theory of cosmology is based on the work of Albert Einstein, and depends on the assumption of the Cosmological Principle. The principle is assumed, but has never been demonstrated observationally ‘beyond reasonable doubt’.”
This formulation gets the order of theory and observation wrong. The cosmological principle is not a principle that can be proved or disproved by evidence; it is a theoretical idea that is used to give structure and meaning to observations, to organize observations into a theoretical whole. The cosmological principle belongs to theoretical cosmology; recent discoveries such as hyperclusters and large quasar groups belong to observational cosmology. While the two — i.e., theoretical and observational — cannot be separated in the practice of science, it is also true that they are not identical. Theoretical methods are distinct from observational methods, and vice versa.
Thus the cosmological principle may be helpful or unhelpful in organizing our knowledge of the cosmos, but it is not the kind of thing that can be falsified in the same way that, for example, a theory of planetary formation can be falsified. That is to say, the cosmological principle might be opposed to (falsified by) another principle that negates the cosmological principle, but this anti-cosmological principle will similarly belong to an order not falsifiable by empirical observations.
The discoveries of hyperclusters and LQGs are particularly problematic because they question some of the fundamental assumptions and conclusions of Big Bang cosmology, which is, essentially, the only large scale cosmological model in contemporary science. Big Bang cosmology is the explanation for the structure of the cosmos that was formulated in response to the discovery of the red shift, which implies that, on the largest observable scales, the universe is expanding. It is important to add the qualification, “on the largest observable scales” because stars within a given galaxy are circulating around the galaxy, and while a given star may be moving away from another given star, it is also likely to be moving toward yet some other star. And, even at larger scales, not all galaxies are receding from each other. It is fairly well known that galaxies collide and commingle; the Helmi stream of our own Milky Way is the result of a long past galactic collision, and at some far time in the future the Milky Way itself will merge with the larger Andromeda galaxy, and be absorbed by it.
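The inference from redshift to expansion rests on two standard textbook relations, which I state here only for reference (these are the conventional rounded forms, not figures specific to anything discussed above):

```latex
% Hubble's law: recession velocity is proportional to distance
v = H_0 \, d
% and, for small redshifts, the redshift approximates the recession
% velocity as a fraction of the speed of light:
z \approx \frac{v}{c} \qquad (z \ll 1)
```

It is only when this proportionality holds at the largest observable scales, over and above the local motions of stars and the collisions of galaxies, that the expansion of the universe as a whole is implied.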
Cosmology during the period of the big bang theory (a period in which we still find ourselves today) is in some respects like biology before Darwin. Almost all biology before Darwin was essentially theological, but no one had a better idea so biology had to wait to become a science capable of methodologically naturalistic formulations until after Darwin. The big bang theory was, on the contrary, proposed as a scientific theory (not merely bequeathed to us by pre-scientific tradition), and most scientists working within the big bang tradition have formulated the Big Bang in meticulously naturalistic terms. Nevertheless, once the steady state theory was overthrown, no one really had an alternative to the big bang theory, so all cosmology centered on the Big Bang for lack of imagination of alternatives — but also due to the limitations of the scientific instruments, which at the time of the triumph of the big bang theory were much more modest than they are today.
As disconcerting as it was to discover that the cosmos did not embody an eternal order, that it is expanding and had a history of violent episodes, and that it was much larger than an “island universe” comprising only the Milky Way, the observations that we need to explain today are no less disconcerting in their own way.
Here is how Leonard Susskind describes our contemporary observations of the expanding universe:
“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”
Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
This observation has not yet been sufficiently appreciated. What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole is unobservable, and that places such matters beyond the reach of science understood in the narrow sense of observation. But as I have noted above, in the practice of science we cannot disentangle the theoretical and the observational, though the two are not the same. While our observations come to an end at the cosmic horizon, our principles encounter no such boundary. Thus it is that we naturally extrapolate our science beyond the boundaries of observation; but if we get our scientific principles wrong, anything beyond the boundary of observation will be wrong as well, and will be incapable of correction by observation.
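Susskind’s figure of “about fifteen billion light years” can be checked with a back-of-envelope estimate, a minimal sketch assuming the cosmic horizon he describes is roughly the Hubble distance c/H₀, the distance at which the naive recession velocity reaches the speed of light (the value of H₀ below is my illustrative, rounded figure, not one taken from his text):

```python
# Back-of-envelope estimate of the cosmic (Hubble) horizon distance:
# the distance at which the naive recession velocity v = H0 * d
# reaches the speed of light, i.e. d_H = c / H0.

C_KM_S = 299_792.458        # speed of light in km/s (defined value)
H0 = 70.0                   # Hubble constant in km/s per Mpc (assumed, rounded)
LY_PER_MPC = 3.2616e6       # light years per megaparsec

d_h_mpc = C_KM_S / H0                  # Hubble distance in megaparsecs
d_h_gly = d_h_mpc * LY_PER_MPC / 1e9   # same distance in billions of light years

print(f"Hubble distance: about {d_h_gly:.0f} billion light years")
```

With these rounded inputs the estimate comes out near fourteen billion light years, consistent with Susskind’s “about fifteen billion”; the exact figure depends on the assumed value of H₀ and on whether one takes the Hubble distance or the somewhat different event horizon of an accelerating universe.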
Science in the narrow sense must, then, come to an end with observation. But this does not satisfy the mind. One response is to deny the mind its satisfaction and refuse to pass beyond observation. Another response is to fill the void with mythology and fiction. Yet another response is to take up the principles on their own merits and consider them in the light of reason. This response is the philosophical response, and we see that it is a rational response to the world that is continuous with science even when it passes beyond science.
. . . . .
. . . . .
. . . . .