4 April 2015
Curiosity does not have an especially good reputation, and one often finds the word coupled with “mere” so that “mere curiosity” can be elegantly dismissed as though beneath the dignity of the speaker, who can then go about his much more grand and august pursuits without the distraction of the petty, grubbing motivation of mere curiosity. There may be some connection between this disdainful attitude toward curiosity and the prevalent anti-intellectualism of western civilization, notwithstanding the fact that most of what is unique in this tradition is derived from the scientific spirit; it is no surprise that any driving force in human affairs eventually provokes an equal and opposite reaction.
Many civilizations that publicly value intellectuals do not value the contributions of intellectuals, so that this social prestige is indistinguishable from a kind of feudal regard for special classes of persons. This is not what happened in western civilization, in which scientific knowledge bestowed real wealth and power — in our own day no less than in the past — and so provoked a reaction. One of the most famous stories from classical antiquity was how Thales, predicting an especially good olive harvest, hired all the olive presses at a low rate out of season, and then let them out at inflated rates during the peak season, proving that philosophers could earn money if they wanted to do so.
There are a great many interesting quotes that invoke curiosity, for better or worse — Thomas Hobbes: “…this hope and expectation of future knowledge from anything that happeneth new and strange, is that passion which we commonly call ADMIRATION; and the same considered as appetite, is called CURIOSITY, which is appetite of knowledge.” Edmund Burke: “The first and simplest emotion which we discover in the human mind, is curiosity.” Albert Einstein: “I have no special talent. I am only passionately curious.” — which highlight both the admirable and the disreputable side of curiosity. That curiosity has both admirable and disreputable aspects suggests that one might be admirably curious or disreputably curious, and certainly all of us know individuals who are curious in the best sense of the term and others who are curious in the worst sense of the term.
Human beings are adventurers of the spirit. We must count among the attributes of human nature some basal drive toward questioning. This drive could be given an exposition in purely intellectual terms or in purely emotional terms; I think that the intellectual and emotional manifestations of human curiosity are two sides of the same coin, and that is why I suggest positing some basal drive that lies at the root of both. And it isn’t quite right to reduce this drive to curiosity, as we can formulate it in terms of curiosity or in terms of need.
Curiosity is often contrasted with a presumably more esteemed mode of interrogating the cosmos, which we may call existential need. Jacob Needleman often addressed the contrast between “mere” curiosity (which he sometimes called “low curiosity”) and present need. Here is an example:
“It has been said that any question can lead to truth if it is an aching question. For one person it may be the question of life after death, for another the problem of suffering, the causes of war and injustice. Or it may be something more personal and immediate — a profound ethical dilemma, a problem involving the whole direction of one’s life. An aching question, a question that is not just a curiosity or a fleeting burst of emotion, cannot be answered with old thought. Possessed by such a question, one is hungry for ideas of a very different order than the familiar categories that usually accompany us throughout our lives. One is both hungry and, at the same time, more discriminating, less susceptible to credulity and suggestibility. The intelligence of the heart begins to call to us in our sleep.”
Jacob Needleman, The American Soul: Rediscovering the Wisdom of the Founders, pp. 3-4
I disagree with this on so many levels that it is difficult to know where to start, so instead I will simply say that the kind of existential need Needleman wants to describe is highly credulous and suggestible, and that what answers to this need almost always takes the form of an old and painfully familiar cognitive bias. However, to try to do justice to Needleman, I will allow that for an individual immersed in the ordinary business of life who, through some traumatic experience, suddenly comes face to face with profound and difficult questions never before posed in that individual’s experience, then, yes, ideas of a very different order are needed to address such questions.
While I do not think that aching questions are likely to lead to truth — I think it much more likely that they will lead to self-deception — I do not deny that many are gnawed by aching questions, and some few spend their lives trying to answer them. The question, then, is by what method an aching question might best be given a clear, coherent, and satisfying (in so far as that is possible) answer. Here I am reminded of a passage from Walter Kaufmann:
“Nowhere is the disproportion between effort and result more aggravating than in the pursuit of truth: you may plow through documents or make untold experiments or think and think and think, forgo food, comfort, and distractions, lie awake nights and eat out your heart — and in the end you know what can be memorized by any idiot.”
Walter Kaufmann, Critique of Religion and Philosophy, section 24
However aching our question, presumably we would want to spare ourselves the wasted effort of an inquiry that deprives us of the satisfactions of life while giving an answer that could be memorized by any idiot. Kaufmann did not go far enough here: sometimes individuals who make just such an heroic effort to get at the truth and only arrive at an idiot’s portion convince themselves that the idiot’s portion is in fact a great and profound truth.
Whether or not existential need can be satisfied, how are we to understand it? Viktor Frankl, a psychiatrist and one of the founders of existential analysis, identified a condition that he called the existential vacuum, which he defined as, “the frustration of the will to meaning.” Frankl knew that of which he spoke, having lost most of his family to Nazi death camps and himself having been interned at Auschwitz and liberated only at the end of the war. Here, in a longer passage, is his exposition of existential need:
“Ever more patients complain of what they call an ‘inner void,’ and that is the reason why I have termed this condition the ‘existential vacuum.’ In contradistinction to the peak-experience so aptly described by Maslow, one could conceive of the existential vacuum in terms of an ‘abyss-experience’.”
Viktor Frankl, The Will to Meaning: Foundations and Applications of Logotherapy, New York: Plume, 2014 (originally published in the US in 1969), Part Two, “The Existential Vacuum: A Challenge to Psychiatry”
One could readily suppose that existential need is occasioned by the existential vacuum; that the latter is the condition and cause of the former. Another and more recent approach to existential need is to be found in the work of James Giles:
“…existential needs are not the product of social construction. For in contrast to socially constructed phenomena, existential needs are an inherent and universal feature of the human condition.”
James Giles, The Nature of Sexual Desire, p. 181
This is not necessarily distinct from existential need occasioned by Frankl’s existential vacuum; one could formulate the existential vacuum so that it is either “an inherent and universal feature of the human condition” or not. And there may well be more than one form of existential need. In fact, I think it is clear that there is a plurality of existential needs, and some of these can be sublimated through scientific inquiry and can be satisfied, while some play out in the fruitless manner described in the passage above from Kaufmann.
One may approach the mystery that is the world by way of scientific curiosity or by way of existential need, which we might call the scientific approach and the existential approach; each reflects a valid human response to the individual’s relationship to the cosmos. Most of us, at some point in life, poignantly feel the mysteriousness of the world and the desire to give an account of our existence in relation to this mystery. Consider this from John Stuart Mill:
“Human existence is girt round with mystery: the narrow region of our experience is a small island in the midst of a boundless sea, which at once awes our feelings and stimulates our imagination by its vastness and its obscurity. To add to the mystery, the domain of our earthly existence is not only an island in infinite space, but also in infinite time. The past and the future are alike shrouded from us: we neither know the origin of anything which is, nor its final destination. If we feel deeply interested in knowing that there are myriads of worlds at an immeasurable, and to our faculties inconceivable, distance from us in space; if we are eager to discover what little we can about these worlds, and when we cannot know what they are, can never satiate ourselves with speculating on what they may be…”
Now, John Stuart Mill was an almost preternaturally rational man; he was not given to flights of fancy, though the high-flown rhetoric of this passage might suggest otherwise. The scientific approach to mystery is a rationalistic response to the riddle of the world; answers are to be had, but the world is boundless, so that any one answered question still leaves countless other unanswered questions. The growth of knowledge is attended by a parallel growth in the unknown, as our increasing knowledge makes it possible for us to formulate previously unsuspected questions. One might find this to be invigorating or disappointing: there are real answers, but we will never have a final understanding of the world. The existential approach to mystery acknowledges that the human mind may not be capable of comprehending the mystery that is the world, but this is coupled with a fervent belief that there is a final and transcendent answer out there somewhere, even if it always remains tantalizingly out of reach. These are subtle but important differences in the conception of “ultimate” truth as it relates human beings to their world.
A distinction might be made between scientific mystery and absolute mystery, with scientific mystery being a mystery that admits of an answer, but which also admits of a further mystery. An absolute mystery admits of no answer, nor of any further mystery. The world might take on the character of scientific mystery or of absolute mystery depending on whether we approach the world from the perspective of scientific curiosity or existential need. In other words, the kind of mystery that the world is — even if we all agree that the world is girt round with mystery, as Mill says — corresponds to our attitude to the world.
One could argue that scientific curiosity is a sublimation of existential need. If this is true, there is no reason to be ashamed of this, or to attempt a return to the original existential need. The passage from existential need to scientific curiosity may be a stage in the development of intellectual maturity, as irreversible as the passage from childhood to adulthood.
One might go a step further and call scientific curiosity the secularization of existential need (or, rather, the secularization of religious mystery, which then invites a treatment in terms of the Max Scheler/Paul Tillich claim that all human beings are engaged in worship, it is only a question of whether the object of this worship is worthy or idolatrous), recalling Karl Löwith’s theory of secularization, which made much of modernity into a bastardized form of Christian eschatology. This presupposes not only that existential need precedes scientific curiosity, but that it is the only authentic form of human questioning, and that any attempt to introduce new forms of questioning the human condition is illegitimate.
We are today faced with questions that our ancestors, who first felt the disconcerting stirrings of existential need, could not have imagined. I touched on one of these questions in my post on Centauri Dreams, Cosmic Loneliness and Interstellar Travel, which drew more responses than any of my other posts to that forum. Our cosmic loneliness can now be expressed in scientific terms, and we can offer a scientific response to our attempts so far to answer the question, “Are we alone?” This is one of the great scientific questions of our time, and at the same time it speaks to a modern existential need that has been expressed in Clarke’s tertium non datur.
The growth of human knowledge and the civilization created by human knowledge may have its origins in the questioning that naturally emerges from an experience of existential need. Perhaps this feeling never fully dissipates, but in so far as the dissatisfaction and discontent of existential need can be redirected into scientific curiosity, human beings can experience at least a limited satisfaction derived from definite scientific answers to questions formulated with increasing clarity and rigor. Beyond this, we may have to wait for the next stage in human evolution, when we may acquire mental faculties that take us beyond both existential need and scientific curiosity into a frame of mind incomprehensible to us in our present iteration.
. . . . .
. . . . .
. . . . .
. . . . .
3 December 2014
P. F. Strawson called his twentieth century exposition of Kant The Bounds of Sense. I have commented elsewhere on what an appropriate title this is. The Kantian project (much like metamathematics in the twentieth century) was a limitative project. Kant himself wrote (in the Preface to the 2nd edition of the Critique of Pure Reason): “…my intention then was, to limit knowledge, in order to make room for faith.” Here is the entire passage from which the quote is taken, though in a different translation:
“This discussion as to the positive advantage of critical principles of pure reason can be similarly developed in regard to the concept of God and of the simple nature of our soul; but for the sake of brevity such further discussion may be omitted. [From what has already been said, it is evident that] even the assumption — as made on behalf of the necessary practical employment of my reason — of God, freedom, and immortality is not permissible unless at the same time speculative reason be deprived of its pretensions to transcendent insight. For in order to arrive at such insight it must make use of principles which, in fact, extend only to objects of possible experience, and which, if also applied to what cannot be an object of experience, always really change this into an appearance, thus rendering all practical extension of pure reason impossible. I have therefore found it necessary to deny knowledge, in order to make room for faith.”
Immanuel Kant, Critique of Pure Reason, Preface to the Second Edition
What lies beyond the bounds of sense? For Kant, faith. And Kant’s theological agenda drove him to seek the bounds of sense so that speculative reason could be deprived of its pretensions to transcendent insight. Thus Kant gives us an epistemology openly freighted with theological and moral concerns. Talk about the theory-ladenness of perception! It is, however, non-perception — i.e., that which cannot be the object of possible experience — that is the Kantian domain of faith.
Of course, this is the whole Kantian project in a nutshell, is it not? It is Kant’s design to show us exactly how perception is laden with theory, the theory native to the mind, the a priori concepts by which we organize experience. Kant propounds the transcendental aesthetic and the transcendental deduction of the categories in order to demonstrate the reliance of even the most ordinary experience upon the mind’s a priori faculties.
Kant was, in part, reacting against the empiricism of Locke and Hume — especially Hume’s skeptical conclusions, although Kant’s own rejection of metaphysics equaled if not surpassed Hume’s anti-metaphysical stance, as famously described in the following passage from Hume:
“When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.”
David Hume, An Enquiry Concerning Human Understanding, “Of the academical or sceptical Philosophy,” Part III
For Hume, the bounds of sense and the limitation of reason entailed doubt; for Kant the bounds of sense and the limitation of reason entailed belief. There is a lesson in here somewhere, and the lesson is this: from a single state of affairs, multiple interpretations can be shown to follow.
Are the bounds of sense also the bounds of science? It would seem so. In so far as science must appeal to empirical evidence, and empirical evidence comes to us by way of the senses, the limits of the senses impose limits on science. Of course, this is a bit too simplistic to be quite true. There are so many qualifications that need to be made to such an assertion that it is difficult to say where to start.
It is familiar to everyone that we have come to make extensive use of instruments to augment our senses. Big Science today sometimes spends years, if not decades, building its enormous machines, without which contemporary science would not be possible. So the limits of the senses are not absolute, and they are subject to manipulation. Also, we sometimes do science without our senses or instruments, when we pursue science by way of thought experiments.
While thought experiments alone, unsupplemented by actual experiments, are probably insufficient to constitute a science, thought experiments have become as much a requisite of science as instrumentation has. Sometimes, when our technology catches up with our ideas, we can transform our thought experiments into actual experiments, so that there is an historical relationship between science properly understood and the penumbra of science represented by thought experiments. And thought experiments too have their controlled conditions, and these are the conditions that Kant attempted to lay down in the transcendental aesthetic.
There is also the question of whether or not mathematics is a science, or one among the sciences. And whether or not we set aside mathematics as something different from the other sciences, we know that unquestionably empirical sciences like physics are deeply mathematicized, so that the mathematical content of empirical theories may act like an abstract instrument, parallel to the material instruments of big science, that extends the possibilities of the senses. Another way to think about mathematics is as an enormous thought experiment that undergirds the rest of science — the one crucial thought experiment, an experimentum crucis, without which the rest of science cannot function. In this sense, thought experiments are indispensable to mathematicized science — as indispensable as mathematics.
At a more radical level of critique, it would be difficult to give a fine-grained account of empirical evidence that did not shade over, at the far edges of the concept, into other kinds of knowledge not strictly empirical. Empirical evidence may shade over into the kind of intuitive evidence that is the basis of mathematics, or the kind of epistemological context that is the setting for our thought experiments. Empirical evidence can also shade over into interoception that cannot be publicly verified (therefore failing a basic test of science) or precisely reproduced by repetition, and this interoception in turn shades over into intuitions in which thought and feeling are not clearly distinct.
Where does Kant’s possible experience fit within the continuum of the senses? What is the scope of possible experience? Can we make a clear distinction between extending the senses (and thus human experience) by abstract or concrete instruments and imposing a theory upon experience through these extensions? Does possible experience include all possible past experience? Does past experience include phenomena that occurred but which were not observed (the famous tree falling in a forest that no one hears)? Does it include all possible future experience, or only those future experiences that will eventually be actualized, and not those that remain merely shadowy possibilities? Does possible experience include those counterfactuals that feature in the “many worlds” interpretation of quantum theory? Explicit answers to these questions are less important than the lines of inquiry that the questions prompt us to pursue.
. . . . .
. . . . .
. . . . .
. . . . .
27 November 2014
An NPR article about a new atomic clock being developed by NIST scientists, New Clock May End Time As We Know It, was of great interest to me. Immediately intrigued, I wrote a post on my other blog in which I suggested that the new clock might be used to update the “Einstein’s box” thought experiment (also known as the clock-in-a-box thought experiment). While I would like to follow up on this idea at some time, today I want to write about advanced chronometry in the context of the STEM cycle.
Atomic clocks are among the most precise scientific instruments ever developed. As such, precision clocks offer a good illustration of the STEM cycle, which I identified as the definitive feature of industrial-technological civilization. While this illustration is contemporary, there is nothing new about the use of the most advanced science, technology, and engineering available being employed in chronometry.
The earliest sciences, already developed in classical antiquity, were mathematics and astronomy. These early scientific disciplines were applied to the construction of timekeeping mechanisms. Among the most interesting technological artifacts of the ancient world are the clock once installed in the Tower of the Winds in Athens (which was described in antiquity, but which no longer exists) and the Antikythera mechanism, the corroded remains of which were dredged up from a shipwreck off the Greek island of Antikythera (while discovered by sponge divers in 1900, the site is still yielding finds). A classic paper on the Tower of the Winds compares these two technologies: “This is a field in which ancient literature is curiously meager, as we well know from the complete lack of any literary reference to a technology that could produce the Antikythera Mechanism of the same date.” (“The Water Clock in the Tower of the Winds,” Joseph V. Noble and Derek J. de Solla Price, American Journal of Archaeology, Vol. 72, No. 4, Oct., 1968, pp. 345-355) Both of these artifacts are concerned with chronometry, which demonstrates that the most advanced technologies, then and now, have been employed in the measurement of time.
The advent of high technology as we know it today — unprecedented in human history — has been the result of the advent of a new kind of civilization — industrial-technological civilization — and the use of advanced technologies in chronometry provides a useful lens with which to view one of the unique features of our civilization today, which I call the STEM cycle. The acronym STEM is familiar in educational contexts, where it refers to education and training in science, technology, engineering, and mathematics, so I have taken over this acronym as the name for one of the socioeconomic processes that lies at the heart of our civilization: Science seeks to understand nature on its own terms, for its own sake. Technology is that portion of scientific research that can be developed specifically for the realization of practical ends. Engineering is the industrial implementation of a technology. Mathematics is the common language in which the elements of the cycle are formulated. A feedback loop of science driving technology, driving engineering, driving more science, characterizes industrial-technological civilization. This is the STEM cycle.
The distinctions between science, technology, and engineering are not absolute — far from it. To employ a terminology I developed elsewhere, I would say that science is only weakly distinct from technology, technology is only weakly distinct from engineering, and engineering is only weakly distinct from science. In some contexts any two elements of the STEM cycle are identical, while in other contexts of the STEM cycle they are starkly contrasted. This is not due to inconsistency, but rather to the fact that science, technology, and engineering are open-textured concepts; we could adopt conventional distinctions that would make them strongly distinct, but this would be contrary to usage in ordinary language and would only result in confusion. Given the lack of clear distinctions among science, technology, and engineering, where we draw the dividing lines within the STEM cycle is to some degree arbitrary — we could describe this cycle in different terms, employing different distinctions — but the cycle itself is not arbitrary. By any other name, it drives industrial-technological civilization.
The clock that was the inspiration for this post — the new strontium atomic clock, described in JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability, and the subject of a scientific paper, An optical lattice clock with accuracy and stability at the 10^-18 level by B. J. Bloom, T. L. Nicholson, J. R. Williams, S. L. Campbell, M. Bishof, X. Zhang, W. Zhang, S. L. Bromley, and J. Ye (a preprint of the article is available at Arxiv) — is instructive in several respects. In so far as we consider atomic clocks to be a generic “technology,” the strontium clock represents the latest and most advanced instance of this technology yet constructed: a more specific form of technology, the optical lattice clock, within the more generic division of atomic clocks. The sciences involved in the conceptualization of atomic clocks are fundamental: atomic physics, quantum theory, relativity theory, thermodynamics, and optics. Atomic clocks are a technology built from other technologies, including advanced materials, lasers, masers, a vacuum chamber, refrigeration, and computers. Building the technology into an optimal device involves engineering for dependability, economy, miniaturization, portability, and refinements of design.
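To get an intuitive sense of what accuracy at the 10^-18 level means, here is a minimal back-of-the-envelope sketch (my own illustration, not drawn from the Bloom et al. paper, and the comparison values are assumptions): a clock whose rate is off by a fraction f accumulates roughly f seconds of error for every second that elapses, so the reciprocal of f, converted into years, tells us how long the clock could run before drifting by a single second.

```python
# A rough, illustrative calculation (assumed values, not taken from the paper):
# a clock with fractional frequency uncertainty f accumulates about
# f * (elapsed seconds) of timing error.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # roughly 3.16e7 seconds

def years_to_drift_one_second(fractional_uncertainty: float) -> float:
    """Approximate years of running before the clock is off by one second."""
    return 1.0 / (fractional_uncertainty * SECONDS_PER_YEAR)

if __name__ == "__main__":
    # 1e-16 is roughly cesium-fountain-class accuracy; 1e-18 is the optical
    # lattice regime discussed above (both figures are illustrative).
    for f in (1e-16, 1e-18):
        print(f"fractional uncertainty {f:.0e}: "
              f"~{years_to_drift_one_second(f):.1e} years per second of drift")
```

On this estimate, a clock at the 10^-18 level would take on the order of tens of billions of years to drift by a second, longer than the present age of the universe, which is why a single additional order of magnitude in precision is such a significant engineering achievement.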
The NIST web page notes that, “NIST invests in a number of atomic clock technologies because the results of scientific research are unpredictable, and because different clocks are suited for different applications.” (For further background on atomic clocks at NIST cf. A New Era for Atomic Clocks.) The new record-breaking clocks in terms of stability and accuracy are experimental devices; the current standard for timekeeping is the NIST-F2 “cesium fountain” atomic clock. The transition from the previous timekeeping standard, NIST-F1, to the present standard, NIST-F2, is largely a result of engineering refinements of the earlier atomic clock. Even the experimental strontium clock is likely to be soon surpassed. JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability quotes Jun Ye as saying, “We already have plans to push the performance even more, so in this sense, even this new Nature paper represents only a ‘mid-term’ report. You can expect more new breakthroughs in our clocks in the next 5 to 10 years.”
The engineering refinement of high technology has two important consequences:
1) inexpensive, widely available devices (which I will call the ubiquity function), and…
2) improved, cutting edge devices that improve the precision of measurement (which I will call the meliorative function), sometimes improved by an order of magnitude (or several orders of magnitude).
These latter devices, those that represent greater precision, are not likely to be inexpensive or widely available, but as the STEM cycle continues to advance science, technology, and engineering in a regular and predictable manner, the older generation of technology becomes widely available and inexpensive as new technologies take their place on the expensive cutting edge. However, these cutting edge technologies are in turn displaced by newer technologies, and the cycle continues. Thus there is a relationship — an historical relationship — between the two consequences of the engineering refinement of technology. Both of these phases in the life of a technology affect the practice of science. NIST Launches a New U.S. Time Standard: NIST-F2 Atomic Clock quotes NIST physicist Steven Jefferts, lead designer of NIST-F2, as saying, “If we’ve learned anything in the last 60 years of building atomic clocks, we’ve learned that every time we build a better clock, somebody comes up with a use for it that you couldn’t have foreseen.”
Widely available precision measurement devices (the ubiquity function) bring down the cost of scientific research, and we begin to see science cropping up in all kinds of interesting and unexpected places. The development of computer technology and then the miniaturization of computers had the unintended result of making computers inexpensive and widely available. This, in turn, has meant that everyone doing science carries a portable computer with them, and this widely available computational power (which I have elsewhere called the computational infrastructure of civilization) has transformed how science is done. NIST Atomic Devices and Instrumentation (ADI) now builds “chip-scale” atomic clocks, which both commercializes and thereby democratizes atomic clock technology in a form factor so small that it could be included in a cell phone (or whatever mobile device form factor you prefer). This is a perfect illustration of the ubiquity function in an engineering application of atomic clock technology.
New cutting edge precision measurement devices (the meliorative function), employed only by the governments and industries that can afford to push the envelope with the latest technology, are scientific instruments of great sensitivity; increasing the precision of the measurement of time by an order of magnitude opens up new possibilities the consequences of which cannot be predicted. What can be predicted, however, is that the present generation of high precision measurement devices makes it possible to construct the next generation of precision measurement devices, which will exceed the precision of the previous generation. A clock built to a new design that is far more precise than its predecessors (like the strontium atomic clock) may not necessarily find its cutting edge scientific application exclusively in the measurement of time (though, again, it might do that also), but as a scientific instrument of great sensitivity it suggests uses throughout the sciences. A further distinction can be made, then, between instruments used for the purposes they were intended to serve, and instruments that are exapted for unintended uses.
A loosely-coupled STEM cycle is characterized primarily by the ubiquity function, while a tightly-coupled STEM cycle is characterized primarily by the meliorative function. Human civilization has always involved a loosely-coupled STEM cycle, sometimes operating over thousands of years, with no apparent relationship between science, technology, and engineering. Technological progress was slow and intermittent under these conditions. However, the productivity of industrial-technological civilization is such that its STEM cycle yields both the ubiquity function and the meliorative function, which means that there are in fact multiple STEM cycles running concurrently, both loosely-coupled and tightly-coupled.
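To make the contrast between loosely-coupled and tightly-coupled STEM cycles a little more vivid, here is a toy numerical sketch (my own illustration, not a claim about actual historical dynamics, and the coupling values are arbitrary): treat science, technology, and engineering as stocks that feed one another, with a single coupling coefficient standing in for how strongly each element of the cycle drives the next.

```python
# Toy model of the STEM cycle (illustrative only): science feeds technology,
# technology feeds engineering, and engineering feeds back into science.
# A small coupling coefficient stands in for a loosely-coupled cycle; a
# larger one for the tightly-coupled cycle of industrial-technological
# civilization.

def run_stem_cycle(coupling: float, steps: int) -> float:
    science, technology, engineering = 1.0, 1.0, 1.0
    for _ in range(steps):
        technology += coupling * science      # science developed into technology
        engineering += coupling * technology  # technology implemented industrially
        science += coupling * engineering     # engineering enables new science
    return science

if __name__ == "__main__":
    for label, k in (("loosely coupled", 0.01), ("tightly coupled", 0.3)):
        print(f"{label} (coupling {k}): science stock after 50 steps = "
              f"{run_stem_cycle(k, 50):.2f}")
```

Under these assumptions the loosely-coupled cycle grows almost imperceptibly while the tightly-coupled cycle compounds rapidly, which is the intuition behind saying that technological progress was slow and intermittent before the elements of the cycle began to drive one another in a regular and predictable manner.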
The research and development branch of a large business enterprise is the conscious constitution of a limited, tightly-coupled STEM cycle in which only that science is pursued that is expected to generate specific technologies, and only those technologies are developed that can be engineered into marketable products. An open loop STEM cycle, loosely-coupled STEM cycle, or exaptations of the STEM cycle are seen as wasteful, but in some cases the unintended consequences from commercial enterprises can be profound. When Arno Penzias and Robert Wilson were hired by Bell Labs, it was with the promise that they could use the Holmdel Horn Antenna for pure science once they had done the work that Bell Labs would pay them for. As it turned out, the actual work of tracing down interference resulted in the discovery of cosmic microwave background radiation (CMBR), earning Penzias and Wilson the Nobel prize. An engineering problem became a science problem: how do you explain the background interference that cannot be eliminated from electronic devices?
. . . . .
. . . . .
. . . . .
. . . . .
15 November 2014
When I find myself among conspiracy theorists and pseudo-science aficionados, I probably sound like the most relentless, ruthless, unforgiving positivist that you have ever heard. But, of course, I’m not a positivist at all. When I find myself among those educated in the sciences, I probably sound like the most woolly-headed philosopher imaginable, who seemingly takes every opportunity to needlessly complicate matters that are perfectly clear just as they are. I am caught between defending science among those innocent of science, and defending philosophy among those innocent of philosophy. In other words, I can’t win. And now I’m going to make my hopeless position worse by taking the conflict (rather, the absence of communication) between science and philosophy into the forbidden no-man’s-land of politics.
My particular dilemma is the result of understanding that science is philosophy; that is to say, science, as we know it today, is a particular branch of philosophy (something that I began to explain in A Fly in the Ointment). While it may be grudgingly acknowledged that science has philosophical presuppositions, it is a step further to see science as a particular philosophy that is rather less comprehensive than the whole of philosophy. Now, it is true that science has become differentiated from the rest of philosophy because of its practical successes, but its practical successes alone are no warrant for separating methodological naturalism, i.e., science, from the rest of philosophy.
Without philosophy we cannot understand science; philosophy provides both the synchronic and the diachronic context of science. The emergence of science within western civilization is the diachronic narrative of philosophy, and the relations of science to other aspects of the world and human experience is the synchronic context of science that can only adequately be addressed by philosophy. The need for a robust engagement between science and philosophy, as is to be found, for example, in the work of Einstein, is a need that grows out of the philosophical context of science.
Previous epochs of civilization — notably, agrarian-ecclesiastical civilization — might point to their own pragmatic implementations of philosophy, no less than the successes of the sciences are heralded today. Enormous monumental building projects that still impress us today, symbols of civilization such as the pyramids, Hagia Sophia, the Taj Mahal, the Daibutsu at Nara, and Borobudur, were possible only through the effort of a philosophically unified civilization, and the monuments themselves are monuments to those civilizations and their philosophical bases.
As an example of a philosophical civilization animated from the power elites at the top down to the lowest rungs of the socioeconomic ladder I have elsewhere quoted Gregory Nazianzus on the Christological controversies in Byzantium:
“Constantinople is full of handicraftsmen and slaves, who are all profound theologians, and preach in their workshops and in the streets. If you want a man to change a piece of silver, he instructs you in which consists the distinction between the Father and the Son; if you ask the price of a loaf of bread, you receive for answer, that the Son is inferior to the Father; and if you ask, whether the bread is ready, the rejoinder is that the genesis of the Son was from nothing.”
Another example might be the reach of stoicism in the Roman empire from the emperor Marcus Aurelius to the slave Epictetus. This philosophical character of agrarian-ecclesiastical civilization is not limited to western civilization, its predecessors, and successors, but is a planetary phenomenon.
The civilization of India is perhaps uniquely philosophical in the world. India is a civilization-state, and Indian civilization is a philosophical civilization. In this respect, it is markedly different from western civilization, which has no single contemporary state representative, and which in regard to philosophy is narrower and more focused.
This can give us a certain insight into western civilization, which is not a philosophical civilization in the sense that India is, but is a fragment of a philosophical civilization. In so far as science is a particular branch of philosophy, and in so far as western civilization in its present form (industrial-technological civilization) is founded upon science as the source of the STEM cycle, western civilization is a philosophical civilization whose particular philosophy is methodological naturalism. Indeed, the very insistence today that science can do without philosophy is an expression of the philosophical narrowness of western civilization.
Much is to be learned from the comparison of the philosophies and civilizational structures of those independent civilizations that can be traced all the way to their origins in the Neolithic Agricultural Revolution, during which all agrarian-ecclesiastical civilizations had their earliest origins. But there is a problem here. In reaction against the imperialism of western civilization since that period once called the Age of Discovery, when Columbus, Magellan, Vasco da Gama, Amerigo Vespucci, Vasco Núñez de Balboa, and many others, sailed from Europe and began to survey the world entire, it is now considered in supremely bad taste to compare civilizations. The celebratory model of tolerance is almost universally adopted and every civilization is counted as a special snowflake that has something to contribute to human history.
In my post on The Future Science of Civilizations I noted Carnap’s tripartite distinction among scientific concepts, which Carnap identified as the classificatory, the comparative, and the quantitative. (We note that this typology itself takes a classificatory form, and that one entire class of scientific concepts consists of comparative concepts.) In so far as we understand Carnap’s conceptual schema of measurement as developmental, proceeding in phases so that initial classifications lead to comparisons, and comparisons lead to quantification, all the while gaining in objectivity, Carnap’s schematism of scientific measurement embodies what Edith Wyschogrod called “the quantification of the qualitied world.”
If we take the division of classificatory, comparative, and quantitative concepts not in a developmental sense but as different approaches to a scientific grasp of the world, then each conceptual method of measurement may yield unique information about the world. In either case, whether we take these scientific concepts of measurement in developmental terms or take each in isolation, comparative concepts have a crucial role to play: either they are a stage in the development of a fully quantitative science, or they yield unique information about the world.
We cannot fully or adequately conceptualize civilization without developing comparative concepts of civilization to the greatest extent possible, but the development and exploration of this conceptual space is severely constrained by the contemporary political proscription upon the comparison of civilizations. In this way, the study of civilization today is unnecessarily yet unavoidably political. Rather than discussing comparative conceptions of civilization frankly and bluntly, we are forced to seek artful euphemisms and to speak evasively. This is unfortunate for the development of a science of civilization, but it is not insuperable, and the appropriate degree of abstraction and formalization in a fully developed theoretical context may be sufficient to violate this taboo in spirit while leaving the letter of the proscription intact.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
11 November 2014
Wittgenstein was not himself a positivist, but his early work, Tractatus Logico-Philosophicus, had such a profound influence on early twentieth century philosophy that the philosophy that we now identify as logical positivism was born from reading groups that got together to study Wittgenstein’s Tractatus — what I have elsewhere called The Ludwig Wittgenstein Reading Club — primarily the Vienna Circle.
Wittgenstein began his education as an engineer, and only later became interested in philosophy by way of the philosophy of mathematics then emerging from the work of Frege and Russell. It has been said that the early Wittgenstein approached philosophy like an engineer, setting out to drain the swamps of philosophy. A more familiar metaphor for Wittgenstein’s philosophy, though for the later rather than the earlier Wittgenstein, is that of philosophy as a kind of therapy:
“A philosopher is a man who has to cure many intellectual diseases in himself before he can arrive at the notions of common sense.”
Wittgenstein, Culture and Value, 1944, p. 44e
Wittgenstein does not himself use the term “therapy” or “therapeutic,” but frequently recurs to the theme in other words:
“In philosophizing we may not terminate a disease of thought. It must run its natural course, and slow cure is all important. (That is why mathematicians are such bad philosophers.)”
Wittgenstein, Zettel, 382
The idea of philosophy as therapy is not entirely new. In my Variations on the Theme of Life I noted the medieval tradition of conceiving philosophers as “doctors of the soul”:
“During late antiquity philosophers were sometimes called ‘doctors of the soul.’ Later yet, Avicenna was a practicing physician in addition to being both a logician and a philosopher, and he stands at the head of a tradition of doctor-philosophers among the Arabs. All this has a superficial resemblance to the contemporary conception of philosophy as therapy, but in reality it is the antithesis of the modern conception of philosophy as a sickness in need of therapy, of scholarship as an illness, and of the philosopher as corrupt and corrupting.”
Variations on the Theme of Life, section 767
Every age must confront the ancient and perennial questions of philosophy anew, because each age has its own, peculiar therapeutic needs. It has become a commonplace of contemporary commentary, at least since the middle of the twentieth century, that the pace and busyness of our civilization today is driving us insane, and in so far as this is true, we are more in need of therapy than previous ages.
In my previous post, Philosophy for Industrial-Technological Civilization, I suggested, contrary to Quine, that philosophy of science is not philosophy enough; that we also need philosophy of technology and philosophy of engineering, and to unify these aspects of the STEM cycle within the big picture, we need a philosophy of big history. There is only one problem with my vision for the overarching philosophy demanded by the world of today: there is no demand for it. No one is interested in my vision or, for that matter, any other vision of philosophy for the twenty-first century.
Previously I wrote three posts on contemporary anti-philosophy.
The most prestigious scientists of our time seem at one in their insistence upon the irrelevance of philosophy. A post on the SelfAwarePatterns blog, E.O. Wilson: Science, not philosophy, will explain the meaning of existence, brought my attention to E. O. Wilson’s recent statements belittling philosophy. SelfAwarePatterns has also written about Neil deGrasse Tyson’s “blanket dismissal of philosophy” in Neil deGrasse Tyson is wrong to dismiss all of philosophy, but he may have a point on some of it.
It is almost painful to watch Wilson’s oversimplifications in the above linked “Big Think” piece, though I suspect his oversimplifications will have a wide and sympathetic audience. After implying the pointlessness of studying the history of philosophy and making the claim that philosophy mostly consists of “failed models of how the brain works,” Wilson then appeals to the “full story of humanity” (without mentioning big history, though the interdisciplinary concatenation he invokes is very much in the spirit of big history), and formulates a point of view almost precisely the same as one I heard several times at the 2014 IBHA conference: once we have this big picture view of history, we no longer need to ask what the meaning of life is, because we will know it.
The inescapable reflexivity of philosophical thought means that any principled rejection of philosophy is itself a philosophical claim; unprincipled rejections, that is to say, dismissal without reason or argument, have no more standing than any other unprincipled claim. So the scientists who dismiss philosophy and give reasons for doing so are doing philosophy. The unfortunate consequence is that they are doing philosophy poorly, much like someone who dismisses science but who pontificates on matters scientific, and does so poorly. We are well familiar with this, as pseudo-science has been given a megaphone by the internet and other forms of mass media. Scientists are aware of the problem posed by pseudo-science, but seem to be blissfully unaware of the problem of pseudo-philosophy.
There is a book by Louis Althusser, Philosophy and the Spontaneous Philosophy of Scientists, that I have cited previously (in Fashionable Anti-Philosophy) since the title is so evocative, in which Althusser says, “…in every scientist there sleeps a philosopher or, to put it another way, that every scientist is affected by an ideology or a scientific philosophy which we propose to call by the conventional name: the spontaneous philosophy of the scientists…” It is this spontaneous philosophy of scientists that we see in the anti-philosophical pronouncements of E. O. Wilson and Neil deGrasse Tyson.
Not only eminent scientists, but also science popularizers share this attitude. Michio Kaku’s recent book, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind, is essentially a speculative work in the philosophy of mind. There is a pervasive yet implicit Kantianism running through Kaku’s book of which I am sure he is unaware, because, like most scientists today who write on philosophical topics, he has not bothered to study the philosophical literature. If one knows that one is arguing a neo-Kantian position on the transcendental aesthetic, in trying to come to terms with how the barrage of sensory data is somehow translated into an apparently smooth and unitary stream of consciousness, then one can simply consult the literature to learn where the argument over the transcendental aesthetic stands today and what the standard arguments are for and against contemporary Kantianism; but without this basic knowledge, one does little more than repeat what has already been said — better — by others, and long ago. Even Sam Harris, who has some background in philosophy, gives his exposition of determinism in a philosophical vacuum, as though the work of philosophers such as Robert Kane, Helen Steward, and Alfred R. Mele simply did not exist, or were beneath notice.
The anti-philosophy and pseudo-philosophy of prominent scientists is an instance of the spontaneous philosophy noted by Althusser. But this spontaneous expression of uninformed philosophical speculation does not come out of nowhere; it has a basis, albeit dimly understood, in the nature of science itself. What is the nature of science itself? I have an answer to this, but it is not an answer that will be welcome to most of those in science today: science is philosophy. That is to say, science is a particular branch of philosophy, that branch once called natural philosophy, and it is natural philosophy practiced in accordance with methodological naturalism. Science is a narrow slice of a far more comprehensive conception of the world.
Scientists are philosophers without realizing they are philosophers, and when they pronounce upon philosophical questions without reference to the philosophical tradition — which is much broader and more pluralistic than any one, single branch of philosophy, such as natural philosophy — they do little more than restate their presuppositions as principles. Given the preeminent role of science within industrial-technological civilization, this willful ignorance of philosophy, and of the position of science in relation to philosophy, is not only holding back both science and philosophy, it is holding back civilization.
The next stage of development of our civilization (not to mention the macro-evolution of our civilization into another kind of civilization) will not come about until science utterly abandons the positivistic assumptions that are today the unquestioned yet implicit presuppositions of scientific inquiry, and science extends the scientific method, and the sense of responsibility to empirical evidence, beyond the confines of any one branch of philosophy to the whole of philosophy. To paraphrase Plato, until philosophers theorize as scientists or those who are now called scientists and leading thinkers genuinely and adequately philosophize, that is, until science and philosophy entirely coincide, while the many natures who at present pursue either one exclusively are forcibly prevented from doing so, civilization will have no rest from evils… nor, I think, will the human race.
. . . . .
. . . . .
. . . . .
. . . . .
12 June 2014
Scientific civilization changes when scientific knowledge changes, and scientific knowledge changes continuously. Science is a process, and that means that scientific civilization is based on a process, a method. Science is not a set of truths to which one might assent, or from which one might withhold one’s assent. It is rather the scientific method that is central to science, and not any scientific doctrine. Theories will evolve and knowledge will change as the scientific method is pursued, and the method itself will be refined and improved, but method will remain at the heart of science.
Pre-scientific civilization was predicated on a profoundly different conception of knowledge: the idea that truth is to be found at the source of being, the fons et origo of the world (as I discussed in my last post, The Metaphysics of the Bureaucratic Nation-State). Knowledge here consists of delineating the truth of the world prior to its later historical accretions, which are to be stripped away to the extent possible. More experience of the world only further removes us from the original source of the world. The proper method of arriving at knowledge is either through the study of the original revelation of the original truth, or through direct communion with the source and origin of being, which remains unchanged to this day (according to the doctrine of divine impassibility).
The central conceit of agrarian-ecclesiastical civilization, that it was based upon revealed eternal verities, has been so completely overturned that its successor civilization, industrial-technological civilization, recognizes no eternal verities at all. Even the scientific method, which drives the progress of science, is continually being revised and refined. As Marx put it in the Communist Manifesto: “All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air…”
Scientific civilization always looks forward to the next development in science that will resolve our present perplexities, but this comes at the cost of posing new questions that further put off the definitive formulation of scientific truth, which remains perpetually incomplete even as it expands and becomes more comprehensive.
This has been recently expressed by Kevin Kelly in an interview:
“Every time we use science to try to answer a question, to give us some insight, invariably that insight or answer provokes two or three other new questions. Anybody who works in science knows that they’re constantly finding out new things that they don’t know. It increases their ignorance, and so in a certain sense, while science is certainly increasing knowledge, it’s actually increasing our ignorance even faster. So you could say that the chief effect of science is the expansion of ignorance.”
The Technium: A Conversation with Kevin Kelly [02.03.2014]
Scientific civilization, then, is not based on a naïve belief in progress, as is often alleged, but rather embodies an idea of progress that is securely founded in the very nature of scientific knowledge. There is nothing naïve in the scientific conception of knowledge; on the contrary, the scientific conception of knowledge had a long and painfully slow gestation in western civilization, and it is rather the paradigm that science supplants, the theological conception of knowledge (according to which all relevant truths are known from the outset, and are never subject to change), that is the naïve conception of knowledge, sustainable only in the infancy of civilization.
We are coming to understand that our own civilization, while not yet mature, is a civilization that has developed beyond its infancy to the degree that the ideas and institutions of infantile civilization are no longer viable, and if we attempt to preserve these ideas and institutions beyond their natural span, the result may be catastrophic for us. And so we have come to the point of conceptualizing our civilization in terms of existential risk, which is a thoroughly naturalistic way of thinking about the fate and future of humanity, and is amenable to scientific treatment.
It would be misleading to attribute our passing beyond the infancy of civilization to the advent of the particular civilization we have today, industrial-technological civilization. Even without the industrial revolution, scientific civilization would likely have gradually come to maturity, in some form or another, as the scientific revolution dates to that period of history that could be called modern civilization in the narrow sense — what I have called Modernism without Industrialism. And here by “maturity” I do not mean that science is exhausted and can produce no new scientific knowledge, but that we become reflexively aware of what we are doing when we do science. That is to say, scientific maturity is when we know ourselves to be engaged in science. In so far as “we” in this context means scientists, this was probably largely true by the time of the industrial revolution; in so far as “we” means mass man of industrial-technological civilization, it is not yet true today.
The way in which science enters into industrial-technological civilization — i.e., by way of spurring forward the open loop of industrial-technological civilization — means that science has been incorporated as an integral part of the civilization that immediately and disruptively followed the scientific civilization of modernism without industrialism (according to the Preemption Hypothesis). While the industrial revolution disrupted and preempted almost every aspect of the civilization that preceded it, it did not disrupt or preempt science, but rather gave a new urgency to science.
In several posts I have speculated on possible counterfactual civilizations (according to the counterfactuals implicit in naturalism), that is to say, forms of civilization that were possible but which were not actualized in history. One counterfactual civilization might have been agrarian-ecclesiastical civilization undisrupted by the scientific or industrial revolutions. Another counterfactual civilization might have been modern civilization in the narrow sense (i.e., Modernism without Industrialism) coming to maturity without being disrupted and preempted by the industrial revolution. It now occurs to me that yet another counterfactual form of civilization could have been that of industrialization without the scientific conception of knowledge or the systematic application of science to industry.
How could this work? Is it even possible? Perhaps not, and certainly not in the long term, or with high technology, which cannot exist without substantial scientific understanding. But the simple expedient of powered machinery might have come about by the effort of tinkerers, as did much of the industrial revolution as it happened. If we look at the halting and inconsistent efforts in the ancient world to produce large scale industries we get something of this idea, and this we could call industrialism without modernity. Science was not yet at the point at which it could be very helpful in the design of machinery; none of the sciences were yet mathematicized. And yet some large industrial enterprises were built, though few in number. It seems likely that it was not the lack of science that limited industrialization in classical antiquity, but the slave labor economy, which made labor-saving devices pointless.
There are, today, many possibilities for the future of civilization. Technically, these are future contingents (like Aristotle’s sea battle tomorrow), and as history unfolds one of these contingencies will be realized while the others become counterfactuals or are put off yet further. And in so far as there is a finite window of opportunity for a particular future contingent to come into being, beyond that window all unactualized contingents become counterfactuals.
. . . . .
. . . . .
. . . . .
. . . . .
24 November 2013
The world, we are learning every day, is a very large place. Or perhaps I should say that the universe is a very large place. It is also a very complex and strange place. J. B. S. Haldane famously said that, “I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.” (Possible Worlds and Other Papers, 1927, p. 286) In other words, human beings, no matter how valiantly they attempt to understand the universe, may not be cognitively equipped to understand it; our minds may not be the kind of minds that can understand the kind of place that the world is.
This idea of our inability to understand the world in which we find ourselves (an admirably humble Copernican insight that we might call metaphysical modesty, and which stands in contrast to epistemic hubris) has received many glosses since Haldane’s time. Most notable (notable, at least, from my perspective) are the evolutionary gloss, the quantum physics gloss, and the philosophical gloss. I will consider each of these in turn.
In terms of evolution, there is no reason to suppose that descent with modification in a context of a struggle for vital resources on the plains of Africa (the environment of evolutionary adaptedness, or EEA) is going to produce minds capable of understanding higher dimensional spatial manifolds or quantum physics at microscopic scales that differ radically from the macroscopic scales of ordinary human perception. Alvin Plantinga (about whom I wrote some time ago in A Note on Plantinga, inter alia) has used this argument for theological purposes. However, there is no intrinsic reason that a mind born in the mud and the muck cannot raise itself above its origins and come to contemplate the world in Copernican terms. The evolutionary argument cuts both ways, and since we have ourselves as the evidence of an organism that can raise itself from strictly survival behavior to forms of thought that have nothing to do with survival, from the perspective of the weak anthropic principle this is proof enough that natural selection can result in such a mind.
In terms of quantum theory, we are all familiar with famous quotes from the leading lights of quantum theory as to the essential incomprehensibility of that theory. For example, Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” However, I have observed (in The limits of my language are the limits of my world and elsewhere) that recent research is making strides in working around the epistemic limitations of quantum theory, revealing its uncertainties to be not absolute and categorical, but rather subject to careful and painstaking narrowing that renders the uncertainty a little less uncertain. I anticipate two developments that will emerge from the further elaboration of quantum theory: 1) the finding of ways to gradually and incrementally chip away at an absolutist conception of uncertainty (as just mentioned), and 2) the formulation of more adequate intuitions to make quantum theory more palatable to the human mind.
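To be concrete about what an “absolutist conception of uncertainty” amounts to, the canonical statement of quantum epistemic limitation is the Heisenberg relation (standard physics, cited here only for orientation), which bounds the product of the uncertainties in position and momentum but says nothing about how ingeniously experiments may be designed within that bound:

\[ \Delta x \, \Delta p \ \ge \ \frac{\hbar}{2} \]

The strides mentioned above do not abolish this inequality; they work within it, which is what makes the uncertainty a little less uncertain rather than absent.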
In terms of philosophy, Colin McGinn’s book Problems in Philosophy: The Limits of Inquiry formulates a position which he calls Transcendental Naturalism:
“Philosophy is an attempt to get outside the constitutive structure of our minds. Reality itself is everywhere flatly natural, but because of our cognitive limits we are unable to make good on this general ontological principle. Our epistemic architecture obstructs knowledge of the real nature of the objective world. I shall call this thesis transcendental naturalism, TN for short.” (pp. 2-3)
I have previously written about McGinn’s work in Transcendental Non-Naturalism and Naturalism and Object Oriented Ontology, inter alia. Our ability to get outside the constitutive structure of our minds is severely limited at best, and so our ability to understand the world as it is is limited at best.
While our cognitive abilities are admittedly limited (for all the reasons discussed above, as well as other reasons not discussed), these limits are not absolute, but rather admit of revision. McGinn’s position as stated above implies a false dichotomy between staying within the constitutive structure of our minds and getting outside it. This is a classic case of facing the sheer cliff of Mount Improbable: while it is impossible to get outside our cognitive architecture in one fell swoop, we can little by little transgress the boundaries of our cognitive architecture, each time ever-so-slightly expanding our capacities. Incrementally over time we improve our ability to stand outside those limits that once marked the boundaries of our cognitive architecture. Thus, in an ironic twist of intellectual history, the evolutionary argument, rather than demonstrating metaphysical modesty, turns out to be the key to limiting the limitations on the human mind.
All of this is related to one of the central problems in the philosophy of science of our time — the whole Kuhnian legacy that is the framework of so much contemporary philosophy of science. Copernican revelations and revolutions, which formerly disturbed our anthropocentric bias every few hundred years, now occur with alarming frequency. The difference today, of course, is that science is much more advanced than it was with past Copernican revelations and revolutions — it has much more advanced instrumentation available to it (as a result of the STEM cycle), and we have a much better idea of what to look for in the cosmos.
It was a shock to almost everyone to have it scientifically demonstrated that the universe is not static and eternal, but dynamic and changing. It was a shock when quantum theory demonstrated the world to be fundamentally indeterministic. This is by now a very familiar narrative. In fact, it is so familiar that it has been expropriated (dare I say exapted?) by obscurantists and irrationalists of our time, who point to continual changes in scientific knowledge as “proof” that science doesn’t give us any “truth” because it changes. The assumption here is that change in scientific knowledge demonstrates the weakness of science; in fact, change in scientific knowledge is the strength of science. Scientific knowledge is what I have elsewhere called an intelligent institution in so far as it is institutionalized knowledge, but that institution is formulated with internal mechanisms that facilitate the re-shaping of the institution itself over time, and chief among these mechanisms is the scientific method.
It is important to see that the overturning of familiar conceptions of the world — some of which are ancient and some of which are not — is not arbitrary. Less comprehensive conceptions are being replaced by more comprehensive conceptions. The more comprehensive our perspective on the world, the greater the number of anomalies we must face, and the greater the number of anomalies we face the more likely it is that our theories will be overturned, or at least partially falsified. But asking whether theory change is rational or irrational is the wrong debate, and a misleading one, because what ought to concern us is how well our theories account for the ever-larger world that is revealed to us through our ever-more comprehensive methods of science, and not how well our theories conform to our presuppositions about rationality. As we get the science right, reason will follow, shaping new intuitions and formulating new theories.
Our ability to discover (and to understand) ever greater scales of the universe is contingent upon our growing intellectual capabilities, which are cumulative. Just as in the STEM cycle science begets technologies that beget industries that create better scientific instruments, so too on a purely intellectual level the intellectual capabilities of one generation are the formative context of the intellectual capabilities of the next generation, which allows the later generation to exceed the earlier generation. Concepts are the tools of the mind, and we use our familiar concepts to create the next generation of concepts, which latter are both more refined and more powerful than the former, in the same way as we use each generation of tools to build the next generation of tools, which makes each generation of tools better than the last (except for computer software — but I expect that this degradation in the practicability of computer software is simply the software equivalent of planned obsolescence).
Our current generation of tools — both conceptual and technological — is daily revealing to us the inadequacy of our past conceptions of the world. Several recent discoveries have in particular called into question our understanding of the size of the world, especially in so far as the world is defined in terms of its origins in the Big Bang. For example, the discovery of hyperclusters suggests physical structures of the universe that are larger than the upper limit set to physical structures by contemporary cosmological theories (cf. ‘Hyperclusters’ of the Universe — “Something is Behaving Very Strangely”).
In a similar vein, writing of the recent discovery of a “large quasar group” (LQG) as much as four billion light years across, the article The Largest Discovered Structure in the Universe Contradicts Big-Bang Theory Cosmology states:
“This LQG challenges the Cosmological Principle, the assumption that the universe, when viewed at a sufficiently large scale, looks the same no matter where you are observing it from. The modern theory of cosmology is based on the work of Albert Einstein, and depends on the assumption of the Cosmological Principle. The principle is assumed, but has never been demonstrated observationally ‘beyond reasonable doubt’.”
This formulation gets the order of theory and observation wrong. The cosmological principle is not a principle that can be proved or disproved by evidence; it is a theoretical idea that is used to give structure and meaning to observations, to organize observations into a theoretical whole. The cosmological principle belongs to theoretical cosmology; recent discoveries such as hyperclusters and large quasar groups belong to observational cosmology. While the two — i.e., theoretical and observational — cannot be separated in the practice of science, it is also true that they are not identical. Theoretical methods are distinct from observational methods, and vice versa.
Thus the cosmological principle may be helpful or unhelpful in organizing our knowledge of the cosmos, but it is not the kind of thing that can be falsified in the same way that, for example, a theory of planetary formation can be falsified. That is to say, the cosmological principle might be opposed to (falsified by) another principle that negates the cosmological principle, but this anti-cosmological principle will similarly belong to an order not falsifiable by empirical observations.
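For readers who want to see how the cosmological principle does its organizing work, the standard illustration (textbook cosmology, not anything specific to the article quoted above) is that homogeneity and isotropy are what license the Friedmann–Lemaître–Robertson–Walker form of the spacetime metric on which Big Bang models are built:

\[ ds^2 = -c^2\,dt^2 + a(t)^2\left[ \frac{dr^2}{1-kr^2} + r^2\,d\Omega^2 \right] \]

Observations constrain the scale factor a(t) and the curvature parameter k, but the symmetry assumption that gives the metric this form is imposed at the level of theory, which is just the distinction between theoretical and observational cosmology drawn above.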
The discoveries of hyperclusters and LQGs are particularly problematic because they question some of the fundamental assumptions and conclusions of Big Bang cosmology, which is, essentially, the only large scale cosmological model in contemporary science. Big Bang cosmology is the explanation for the structure of the cosmos that was formulated in response to the discovery of the red shift, which implies that, on the largest observable scales, the universe is expanding. It is important to add the qualification, “on the largest observable scales” because stars within a given galaxy are circulating around the galaxy, and while a given star may be moving away from another given star, it is also likely to be moving toward yet some other star. And, even at larger scales, not all galaxies are receding from each other. It is fairly well known that galaxies collide and commingle; the Helmi stream of our own Milky Way is the result of a long past galactic collision, and at some far time in the future the Milky Way itself will merge with the larger Andromeda galaxy, and be absorbed by it.
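To make “expanding on the largest observable scales” concrete, the red shift observations are summarized by Hubble’s law (a standard relation, quoted here only for orientation), in which recession velocity is proportional to distance:

\[ v = H_0\, d, \qquad H_0 \approx 70 \ \mathrm{km\ s^{-1}\ Mpc^{-1}} \]

The relation holds statistically for sufficiently distant galaxies; it does not apply to gravitationally bound systems such as stars within a galaxy or colliding neighbors, which is why the qualification in the preceding paragraph matters.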
Cosmology during the period of the big bang theory (a period in which we still find ourselves today) is in some respects like biology before Darwin. Almost all biology before Darwin was essentially theological, but no one had a better idea so biology had to wait to become a science capable of methodologically naturalistic formulations until after Darwin. The big bang theory was, on the contrary, proposed as a scientific theory (not merely bequeathed to us by pre-scientific tradition), and most scientists working within the big bang tradition have formulated the Big Bang in meticulously naturalistic terms. Nevertheless, once the steady state theory was overthrown, no one really had an alternative to the big bang theory, so all cosmology centered on the Big Bang for lack of imagination of alternatives — but also due to the limitations of the scientific instruments, which at the time of the triumph of the big bang theory were much more modest than they are today.
As disconcerting as it was to discover that the cosmos did not embody an eternal order, that it is expanding and had a history of violent episodes, and that it was much larger than an “island universe” comprising only the Milky Way, the observations that we need to explain today are no less disconcerting in their own way.
Here is how Leonard Susskind describes our contemporary observations of the expanding universe:
“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”
Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
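Susskind’s figure of “about fifteen billion light years” is, roughly, the Hubble distance, the distance at which the recession velocity of Hubble’s law reaches the speed of light (a back-of-the-envelope estimate; the exact horizon distance depends on the details of the accelerating expansion):

\[ d_H = \frac{c}{H_0} \approx \frac{3\times 10^{5}\ \mathrm{km\ s^{-1}}}{70\ \mathrm{km\ s^{-1}\ Mpc^{-1}}} \approx 4300\ \mathrm{Mpc} \approx 1.4\times 10^{10}\ \text{light years} \]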
This observation has not yet been sufficiently appreciated. What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole has become unobservable, and that places such matters beyond the reach of science understood in the narrow sense of observation. But as I have noted above, while in the practice of science we cannot disentangle the theoretical and the observational, the two are not the same. While our observations come to an end at the cosmic horizon, our principles encounter no such boundary. Thus it is that we naturally extrapolate our science beyond the boundaries of observation, but if we get our scientific principles wrong, anything beyond the boundary of observation will be wrong and will be incapable of correction by observation.
Science in the narrow sense must, then, come to an end with observation. But this does not satisfy the mind. One response is to deny the mind its satisfaction and refuse to pass beyond observation. Another response is to fill the void with mythology and fiction. Yet another response is to take up the principles on their own merits and consider them in the light of reason. This response is the philosophical response, and we see that it is a rational response to the world that is continuous with science even when it passes beyond science.
. . . . .
. . . . .
. . . . .
17 November 2013
Inefficiency in the STEM cycle
In my previous post, The Open Loop of Industrial-Technological Civilization, I ended on the apparently pessimistic note of the existential risks posed to industrial-technological civilization by friction and inefficiency in the STEM cycle that drives our civilization headlong into the future. Much that is produced by the feedback loop of science, technology, and engineering is dissipated in science that does not result in technologies, technologies that are not engineered into industries, and industries that do not produce new scientific instruments. However, just enough science feeds into technology, technology into engineering, and engineering into science to keep the STEM cycle going.
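A minimal sketch may make the dissipation concrete. The following toy model is my own illustration, not a calibrated simulation: each stage of the cycle passes only a fraction of its output on to the next stage, the remainder being “dissipated” into pure science, marginal technologies, and engineering curiosities, and yet the cycle sustains itself so long as some output feeds back into new instruments.

# Toy model of the STEM cycle: science -> technology -> engineering -> new instruments -> science.
# The transfer fractions are illustrative assumptions, not measured quantities.

def stem_cycle(steps=10, sci_to_tech=0.3, tech_to_eng=0.4, eng_to_sci=0.5, baseline=1.0):
    science = baseline
    history = []
    for _ in range(steps):
        technology = sci_to_tech * science      # science that becomes technology
        engineering = tech_to_eng * technology  # technology engineered into industries
        instruments = eng_to_sci * engineering  # industries that yield new instruments
        dissipated = science - technology       # "epiphenomenal" science that feeds nothing downstream
        science = baseline + instruments        # new instruments enable the next round of science
        history.append((science, dissipated))
    return history

for round_number, (science, dissipated) in enumerate(stem_cycle(), start=1):
    print(f"round {round_number}: science {science:.3f}, dissipated {dissipated:.3f}")

Raising the transfer fractions makes the cycle more “efficient” in the narrow sense, but, as discussed below, it also shrinks the dissipated remainder that does humanity credit.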
These “inefficiencies” should not be seen as a “bad” thing, since much pure science that is valuable as an intellectual contribution to civilization has few if any practical consequences. The “inefficient” science that does not contribute directly to the STEM cycle is some of the best science that does humanity credit. Indeed, G. H. Hardy was famously emphatic that all practical mathematics was “ugly” and only pure mathematics, untainted by practical application, was truly beautiful — and Hardy made it clear that beautiful mathematics was ultimately the only thing that mattered. Thus these “inefficiencies” that appear to weaken the STEM cycle and hence pose an existential risk to our industrial-technological civilization, are at the same time existential opportunities — as always, risk and opportunity are one and the same.
Opportunities of the STEM cycle
The apparently pessimistic formulation of my previous post took this form:
“It is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could increase the amount of epiphenomenal science, technology, and engineering, thus decreasing the efficiency of the STEM cycle.”
Such a formulation must be balanced by an appropriate and parallel formulation to the effect that it is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could decrease the amount of epiphenomenal science, technology, and engineering, thus increasing the efficiency of the STEM cycle.
However, making the STEM cycle more “efficient” might well be catastrophic, or nearly catastrophic, for civilization, as it would imply a narrowing of human life to the parameters defined by the STEM cycle. This might lead to a realization of the existential risks of permanent stagnation (i.e., the stagnation of all aspects of civilization other than those that advance industrial-technological civilization, which could prove frightening) or flawed realization, in which an acceleration or consolidation of the STEM cycle leads to the sort of civilization no one would find desirable or welcome.
There is no reason, however, that one could not strengthen the STEM cycle, making industrial-technological civilization more robust and more productive of advanced science, technology, and engineering, while at the same time producing more pure science, more marginal technologies, and more engineering curiosities that don’t feed directly into the STEM cycle. The bigger the pie, the bigger each piece of the pie and the more there is to go around for everyone. Also, pure science and practical science exist in a cycle of mutual escalation of their own, in which pure science inspires practical science and practical science inspires more pure science. Perhaps the same is true of marginal and practical technologies, and of the engineering of curiosities and the engineering of mass industries.
Scaling the STEM cycle
The dissipation of the excess production of the STEM cycle means that unexpected sectors of the economy (as well as unexpected sectors of society) are occasionally the recipients of disproportionate inputs. These disproportionate inputs, like the inefficiencies discussed above, might be understood as either risks or opportunities. Some socioeconomic sectors might be catastrophically stressed by a disproportionate input, while others might unexpectedly flourish when they receive one. To control the possibilities of catastrophic failure or flourishing success, we must consider the possibility of scaling the STEM cycle.
To what degree can the STEM cycle be scaled? By this question I mean: once we are explicitly and consciously aware that it is the STEM cycle that drives industrial-technological civilization (or, minimally, that it is among the drivers of industrial-technological civilization), and if we want to further drive that civilization forward (as I would like to see it driven until earth-originating life has established extraterrestrial redundancy in the interest of existential risk mitigation), can we consciously do so? To what extent can the STEM cycle be controlled, or can its scaling be controlled? Can we consciously direct the STEM cycle so that more science begets more technology, more technology begets more engineering, and more engineering begets more science? I think that we can. But, as with the matters discussed above, we must always be aware of the risk/opportunity trade-off. Focusing too narrowly on the STEM cycle may have disadvantages.
Once we understand an underlying mechanism of civilization, like the STEM cycle, we can consciously cultivate this mechanism if we wish to see more of this kind of civilization, or we can attempt to dampen this mechanism if we want to see less of this civilization. These attempts to cultivate or dampen a mechanism of civilization can take microscopic or macroscopic forms. Macroscopically, we are concerned with the total picture of civilization; microscopically we may discern the smallest manifestations of the mechanism, as when the STEM cycle is purposefully pursued by the R&D division of a business, which funds a certain kind of science with an eye toward creating certain technologies that can be engineered into specific industries — all in the interest of making a profit for the shareholders.
This last example is a very conscious exemplification of the STEM cycle, one that might conceivably be reduced to the work of a single individual, working in turn as scientist, technologist, and engineer. The very narrowness of this process, which is likely to produce specific and quantifiable results, also means it is likely to produce very little in terms of epiphenomenal manifestations of the STEM cycle, and thus may contribute little or nothing to the more edifying dimensions of civilization. But this is not necessarily the case. Arno Penzias and Robert Wilson were working as scientists trying to solve a practical problem for Bell Labs when they discovered the cosmic microwave background radiation.
Reason for Hope
We have at least as much reason to hope for the future as to despair of the future, if not more reason to hope. The longer civilization persists, the more robust it becomes, and the more robust civilization becomes, the more internal diversity and experimentation civilization can tolerate (i.e., greater social differentiation, as Siggi Becker has recently pointed out to me). The extreme social measures taken in the past to enforce conformity within society have been softened in Western civilization, and individuals have a great deal of latitude that was unthinkable even in the recent past.
Perhaps more significantly from the perspective of civilization, the more robust and tolerant our civilization, the more latitude there is for like-minded individuals to cooperate in the founding and advancement of innovative social movements which, if they prove to be effective and to meet a need, can result in real change to the overall structure of society, and this sort of bottom-up social change was precisely the kind of change that agrarian-ecclesiastical civilization was structured to frustrate, resist, and suppress. In this respect, if in no other, we have seen social progress in the development of civilization that is distinct from the technological and economic progress that characterizes the STEM cycle.
As I wrote in my recent Centauri Dreams post, SETI, METI, and Existential Risk, to exist is to be subject to existential risk. Given the relation of risk and opportunity, it is also the case that to exist is to choose among existential opportunities. This is why we fight so desperately to stay alive, and struggle so insistently to improve our condition once we have secured the essentials of existence. To be alive is to have countless existential opportunities within reach; once we die, all of this is lost to us. And to improve one’s condition is to increase the actionable existential opportunities within one’s grasp.
The development of civilization, for all its faults and deficiencies, is tending toward increasing the range of existential opportunities available as “live options” (as William James would say) for both individuals and communities. That this increased range of existential opportunities also comes with an increased variety of existential risks should not be employed as an excuse to attempt to reverse the real social gains bequeathed by industrial-technological civilization.
. . . . .
. . . . .
. . . . .
23 October 2013
Prediction in Science
One of the distinguishing features of science as a system of thought is that it makes testable predictions. The fact that scientific predictions are testable suggests a methodology of testing, and we call the scientific methodology of testing experiment. Hypothesis formation, prediction, experimentation, and resultant modification of the hypothesis (confirmation, disconfirmation, or revision) are all essential elements of the scientific method, which constitutes an escalating spiral of knowledge as the scientific method systematically exposes predictions to experiment and modifies its hypotheses in the light of experimental results, which leads in turn to new predictions.
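As a purely schematic illustration (a toy of my own devising, not a model of any actual research program), the cycle can be rendered as a loop in which a hypothesis issues a prediction, an experiment returns a result, and the hypothesis is revised when prediction and result diverge:

import random

# Toy rendering of the hypothesis -> prediction -> experiment -> revision cycle.
# The "true" value, noise level, and update rule are arbitrary assumptions for the sketch.

true_value = 9.8      # the quantity nature holds fixed, unknown to the investigator
hypothesis = 5.0      # the initial hypothesis about that quantity

for trial in range(10):
    prediction = hypothesis                           # the hypothesis makes a testable prediction
    observation = true_value + random.gauss(0, 0.1)   # the experiment returns a noisy measurement
    error = observation - prediction
    if abs(error) > 0.05:                             # disconfirmation: revise the hypothesis
        hypothesis += 0.5 * error                     # a crude revision rule, for illustration only
    print(f"trial {trial}: predicted {prediction:.2f}, observed {observation:.2f}")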
The escalating spiral of knowledge that science cultivates naturally pushes that knowledge into the future. Sometimes scientific prediction is even formulated in reference to “new facts” or “temporal asymmetries” in order to emphasize that predictions refer to future events that have not yet occurred. In constructing an experiment, we create a new set of facts in the world, and then interpret these facts in the light of our hypothesis. It is this testing of hypotheses by experiment that establishes the concrete relationship of science to the world, and this is also a source of limitation, for experiments are typically designed in order to focus on a single variable, and to that end an attempt is made to control for the other variables. (A system of thought that is not limited by the world is not science.)
Alfred North Whitehead captured this artificial feature of scientific experimentation in a clever line that points to the difference between scientific predictions and predictions of a more general character:
“…experiment is nothing else than a mode of cooking the facts for the sake of exemplifying the law. Unfortunately the facts of history, even those of private individual history, are on too large a scale. They surge forward beyond control.”
Alfred North Whitehead, Adventures of Ideas, New York: The Free Press, 1967, Chapter VI, “Foresight,” p. 88
There are limits to prediction, and not only those pointed out by Whitehead. The limits to prediction have been called the prediction wall. Beyond the prediction wall we cannot penetrate.
The Prediction Wall
John Smart has formulated the idea of a prediction wall in his essay, “Considering the Singularity,” as follows:
With increasing anxiety, many of our best thinkers have seen a looming “Prediction Wall” emerge in recent decades. There is a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex and yet amazingly stable technological systems, erected like vast beehives, systems tended to by large swarms of only partially aware human beings, each of which has only a very limited conceptualization of the new technological environment that we have constructed.
Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn’t technologically dependent these days?), where the ten-year business plans of the 1950’s have been replaced with ten-week (quarterly) plans of the 2000’s, and where planning beyond two years in some fields may often be unwise speculation. But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In “Science Fiction Without the Future,” 2001, Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It’s as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.) © 2003 by John Smart (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)
I would like to suggest that there are at least two prediction walls: synchronic and diachronic. The prediction wall formulated above by John Smart is a diachronic prediction wall: it is the onward-rushing pace of events, one following the other, that eventually defeats our ability to see any recognizable order or structure of the future. The kind of prediction wall to which Whitehead alludes is a synchronic prediction wall, in which it is the outward eddies of events in the complexity of the world’s interactions that make it impossible for us to give a complete account of the consequences of any one action. (Cf. Axes of Historiography)
Retrodiction and the Historical Sciences
Science does not live by prediction alone. While some philosophers of science have questioned the scientificity of the historical sciences because they could not make testable (and therefore falsifiable) predictions about the future, it is now widely recognized that the historical sciences don’t make predictions, but they do make retrodictions. A retrodiction is a prediction about the past.
The Oxford Dictionary of Philosophy by Simon Blackburn (p. 330) defines retrodiction thusly:
retrodiction The hypothesis that some event happened in the past, as opposed to the prediction that an event will happen in the future. A successful retrodiction could confirm a theory as much as a successful prediction.
As with predictions, there is also a limit to retrodiction, and this is the retrodiction wall. Beyond the retrodiction wall we cannot penetrate.
I haven’t been thinking about this idea for long enough to fully understand the ramifications of a retrodiction wall, so I’m not yet clear about whether we can distinguish diachronic retrodiction from synchronic retrodiction. Or, rather, it would be better to say that the distinction can certainly be made, but that I cannot think of good contrasting examples of the two at the present time.
We can define a span of accessible history that extends from the retrodiction wall in the past to the prediction wall in the future as what I will call effective history (by analogy with effective computability). Effective history can be defined in a way that is closely parallel to effectively computable functions, because all of effective history can be “reached” from the present by means of finite, recursive historical methods of inquiry.
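A minimal formalization, in my own notation and offered only to fix ideas: if t_R is the retrodiction wall, t_P the prediction wall, and t_0 the present, then effective history is the interval of times reachable from the present by such finite, recursive methods:

\[ H_{\mathrm{eff}} = \{\, t : t_R \le t \le t_P \,\}, \qquad t_R \le t_0 \le t_P \]

with the understanding, made explicit below, that t_R and t_P are themselves functions of the state of our knowledge rather than constants.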
Effective history is not fixed for all time, but expands and contracts as a function of our knowledge. At present, the retrodiction wall is the Big Bang singularity. If anything preceded the Big Bang singularity we are unable to observe it, because the Big Bang itself effectively obliterates any observable signs of any events prior to itself. (Testable theories have been proposed that suggest the possibility of some observable remnant of events prior to the Big Bang, as in conformal cyclic cosmology, but this must at present be regarded as only an early attempt at such a theory.)
Prior to the advent of scientific historiography as we know it today, the retrodiction wall was fixed at the beginning of the historical period narrowly construed as written history, and at times the retrodiction wall has been quite close to the present. When history experiences one of its periodic dark ages that cuts it off from its historical past, little or nothing may be known of a past that was once familiar to everyone in a given society.
The emergence of agrarian-ecclesiastical civilization effectively obliterated human history before itself, in a manner parallel to the Big Bang. We know that there were caves that prehistorical peoples visited generation after generation for time out of mind, over tens of thousands of years — much longer than the entire history of agrarian-ecclesiastical civilization, and yet all of this was forgotten as though it had never happened. This long period of prehistory was entirely lost to human memory, and was not recovered again until scientific historiography discovered it through scientific method and empirical evidence, and not through the preservation of human memory, from which prehistory had been eradicated. And this did not occur until after agrarian-ecclesiastical civilization had lapsed and entirely given way to industrial-technological civilization.
We cannot define the limits of the prediction wall as readily as we can define the limits of the retrodiction wall. Predicting the future in terms of overall history has been more problematic than retrodicting the past, and equally subject to ideological and eschatological distortion. The advent of modern science compartmentalized scientific predictions and made them accurate and dependable — but at the cost of largely severing them from overall history, i.e., human history and the events that shape our lives in meaningful ways. We can make predictions about the carbon cycle and plate tectonics, and we are working hard to be able to make accurate predictions about weather and climate, but, for the most part, our accurate predictions about the future dispositions of the continents do not shape our lives in the near- to mid-term future.
I have previously quoted a famous line from Einstein: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We might paraphrase this Einstein line in regard to the relation of mathematics to the world, and say that as far as scientific laws of nature predict events, these events are irrelevant to human history, and in so far as predicted events are relevant to human beings, scientific laws of nature cannot predict them.
Singularities Past and Future
As the term “singularity” is presently employed — as in the technological singularity — the recognition of a retrodiction wall in the past complementary to the prediction wall in the future provides a literal connection between the historiographical use of “singularity” and the use of the term “singularity” in cosmology and astrophysics.
Theorists of the singularity hypothesis place a “singularity” in the future which constitutes an absolute prediction wall beyond which history is so transformed that nothing beyond it is recognizable to us. This future singularity is not the singularity of astrophysics.
If we recognize the actual Big Bang singularity in the past as the retrodiction wall for cosmology — and hence, by extension, for Big History — then an actual singularity of astrophysics is also at the same time an historical singularity.
. . . . .
I have continued my thoughts on the retrodiction wall in Addendum on the Retrodiction Wall.
. . . . .
. . . . .
. . . . .