12 June 2014
Scientific civilization changes when scientific knowledge changes, and scientific knowledge changes continuously. Science is a process, and that means that scientific civilization is based on a process, a method. Science is not a set of truths to which one might assent, or from which one might withhold one’s assent. It is rather the scientific method that is central to science, and not any scientific doctrine. Theories will evolve and knowledge will change as the scientific method is pursued, and the method itself will be refined and improved, but method will remain at the heart of science.
Pre-scientific civilization was predicated on a profoundly different conception of knowledge: the idea that truth is to be found at the source of being, the fons et origo of the world (as I discussed in my last post, The Metaphysics of the Bureaucratic Nation-State). Knowledge here consists of delineating the truth of the world prior to its later historical accretions, which are to be stripped away to the extent possible. More experience of the world only further removes us from the original source of the world. The proper method of arriving at knowledge is either through the study of the original revelation of the original truth, or through direct communion with the source and origin of being, which remains unchanged to this day (according to the doctrine of divine impassibility).
The central conceit of agrarian-ecclesiastical civilization, that it was based upon revealed eternal verities, has been so completely overturned that its successor civilization, industrial-technological civilization, recognizes no eternal verities at all. Even the scientific method, which drives the progress of science, is continually being revised and refined. As Marx put it in the Communist Manifesto: “All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air…”
Scientific civilization always looks forward to the next development in science that will resolve our present perplexities, but this comes at the cost of posing new questions that further put off the definitive formulation of scientific truth, which remains perpetually incomplete even as it expands and becomes more comprehensive.
This has been recently expressed by Kevin Kelly in an interview:
“Every time we use science to try to answer a question, to give us some insight, invariably that insight or answer provokes two or three other new questions. Anybody who works in science knows that they’re constantly finding out new things that they don’t know. It increases their ignorance, and so in a certain sense, while science is certainly increasing knowledge, it’s actually increasing our ignorance even faster. So you could say that the chief effect of science is the expansion of ignorance.”
The Technium: A Conversation with Kevin Kelly [02.03.2014]
Scientific civilization, then, is not based on a naïve belief in progress, as is often alleged, but rather embodies an idea of progress that is securely founded in the very nature of scientific knowledge. There is nothing naïve in the scientific conception of knowledge; on the contrary, the scientific conception of knowledge had a long and painfully slow gestation in western civilization, and it is rather the paradigm that science supplants, the theological conception of knowledge (according to which all relevant truths are known from the outset, and are never subject to change), that is the naïve conception of knowledge, sustainable only in the infancy of civilization.
We are coming to understand that our own civilization, while not yet mature, is a civilization that has developed beyond its infancy to the degree that the ideas and institutions of infantile civilization are no longer viable, and if we attempt to preserve these ideas and institutions beyond their natural span, the result may be catastrophic for us. And so we have come to the point of conceptualizing our civilization in terms of existential risk, which is a thoroughly naturalistic way of thinking about the fate and future of humanity, and is amenable to scientific treatment.
It would be misleading to attribute our passing beyond the infancy of civilization to the advent of the particular civilization we have today, industrial-technological civilization. Even without the industrial revolution, scientific civilization would likely have gradually come to maturity, in some form or another, as the scientific revolution dates to that period of history that could be called modern civilization in the narrow sense — what I have called Modernism without Industrialism. And here by “maturity” I do not mean that science is exhausted and can produce no new scientific knowledge, but that we become reflexively aware of what we are doing when we do science. That is to say, scientific maturity is when we know ourselves to be engaged in science. In so far as “we” in this context means scientists, this was probably largely true by the time of the industrial revolution; in so far as “we” means mass man of industrial-technological civilization, it is not yet true today.
The way in which science enters into industrial-technological civilization — i.e., by way of spurring forward the open loop of industrial-technological civilization — means that science has been incorporated as an integral part of the civilization that immediately and disruptively followed the scientific civilization of modernism without industrialism (according to the Preemption Hypothesis). While the industrial revolution disrupted and preempted almost every aspect of the civilization that preceded it, it did not disrupt or preempt science, but rather gave a new urgency to science.
In several posts I have speculated on possible counterfactual civilizations (according to the counterfactuals implicit in naturalism), that is to say, forms of civilization that were possible but which were not actualized in history. One counterfactual civilization might have been agrarian-ecclesiastical civilization undisrupted by the scientific or industrial revolutions. Another counterfactual civilization might have been modern civilization in the narrow sense (i.e., Modernism without Industrialism) coming to maturity without being disrupted and preempted by the industrial revolution. It now occurs to me that yet another counterfactual form of civilization could have been that of industrialization without the scientific conception of knowledge or the systematic application of science to industry.
How could this work? Is it even possible? Perhaps not, and certainly not in the long term, or with high technology, which cannot exist without substantial scientific understanding. But the simple expedient of powered machinery might have come about by the effort of tinkerers, as did much of the industrial revolution as it happened. If we look at the halting and inconsistent efforts in the ancient world to produce large-scale industries, we get something of this idea, and this we could call industrialism without modernity. Science was not yet at the point at which it could be very helpful in the design of machinery; none of the sciences were yet mathematicized. And yet some large industrial enterprises were built, though few in number. It seems likely that it was not the lack of science that limited industrialization in classical antiquity, but the slave labor economy, which made labor-saving devices pointless.
There are, today, many possibilities for the future of civilization. Technically, these are future contingents (like Aristotle’s sea battle tomorrow), and as history unfolds one of these contingencies will be realized while the others become counterfactuals or are put off yet further. And in so far as there is a finite window of opportunity for a particular future contingent to come into being, beyond that window all unactualized contingents become counterfactuals.
. . . . .
I have written more on the nature of scientific civilization in…
. . . . .
24 November 2013
The world, we are learning every day, is a very large place. Or perhaps I should say that the universe is a very large place. It is also a very complex and strange place. J. B. S. Haldane famously said that, “I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.” (Possible Worlds and Other Papers, 1927, p. 286) In other words, human beings, no matter how valiantly they attempt to understand the universe, may not be cognitively equipped to understand it; our minds may not be the kind of minds that can understand the kind of place that the world is.
This idea of our inability to understand the world in which we find ourselves (an admirably humble Copernican insight that we might call metaphysical modesty, and which stands in contrast to epistemic hubris) has received many glosses since Haldane’s time. Most notable (notable, at least, from my perspective) are the evolutionary gloss, the quantum physics gloss, and the philosophical gloss. I will consider each of these in turn.
In terms of evolution, there is no reason to suppose that descent with modification in a context of a struggle for vital resources on the plains of Africa (the environment of evolutionary adaptedness, or EEA) is going to produce minds capable of understanding higher dimensional spatial manifolds or quantum physics at microscopic scales that differ radically from the macroscopic scales of ordinary human perception. Alvin Plantinga (about whom I wrote some time ago in A Note on Plantinga, inter alia) has used this argument for theological purposes. However, there is no intrinsic reason that a mind born in the mud and the muck cannot raise itself above its origins and come to contemplate the world in Copernican terms. The evolutionary argument cuts both ways, and since we have ourselves as the evidence of an organism that can raise itself from strictly survival behavior to forms of thought that have nothing to do with survival, from the perspective of the weak anthropic principle this is proof enough that natural selection can result in such a mind.
In terms of quantum theory, we are all familiar with famous quotes from the leading lights of quantum theory as to the essential incomprehensibility of that theory. For example, Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” However, I have observed (in The limits of my language are the limits of my world and elsewhere) that recent research is making strides in working around the epistemic limitations of quantum theory, revealing its uncertainties to be not absolute and categorical, but rather subject to careful and painstaking narrowing that renders the uncertainty a little less uncertain. I anticipate two developments that will emerge from the further elaboration of quantum theory: 1) the finding of ways to gradually and incrementally chip away at an absolutist conception of uncertainty (as just mentioned), and 2) the formulation of more adequate intuitions to make quantum theory more palatable to the human mind.
In terms of philosophy, Colin McGinn’s book Problems in Philosophy: The Limits of Inquiry formulates a position which he calls Transcendental Naturalism:
“Philosophy is an attempt to get outside the constitutive structure of our minds. Reality itself is everywhere flatly natural, but because of our cognitive limits we are unable to make good on this general ontological principle. Our epistemic architecture obstructs knowledge of the real nature of the objective world. I shall call this thesis transcendental naturalism, TN for short.” (pp. 2-3)
I have previously written about McGinn’s work in Transcendental Non-Naturalism and Naturalism and Object Oriented Ontology, inter alia. Our ability to get outside the constitutive structure of our minds is severely limited at best, and so, correspondingly, is our ability to understand the world as it is.
While our cognitive abilities are admittedly limited (for all the reasons discussed above, as well as other reasons not discussed), these limits are not absolute, but rather admit of revision. McGinn’s position as stated above implies a false dichotomy between staying within the constitutive structure of our minds and getting outside it. This is a classic case of facing the sheer cliff of Mount Improbable: while it is impossible to get outside our cognitive architecture in one fell swoop, we can little by little transgress the boundaries of our cognitive architecture, each time ever-so-slightly expanding our capacities. Incrementally over time we improve our ability to stand outside those limits that once marked the boundaries of our cognitive architecture. Thus in an ironic twist of intellectual history, the evolutionary argument, rather than demonstrating metaphysical modesty, is the key to limiting the limitations on the human mind.
All of this is related to one of the central problems in the philosophy of science of our time — the whole Kuhnian legacy that is the framework of so much contemporary philosophy of science. Copernican revelations and revolutions, which formerly disturbed our anthropocentric bias every few hundred years, now occur with alarming frequency. The difference today, of course, is that science is much more advanced than it was with past Copernican revelations and revolutions — it has much more advanced instrumentation available to it (as a result of the STEM cycle), and we have a much better idea of what to look for in the cosmos.
It was a shock to almost everyone to have it scientifically demonstrated that the universe is not static and eternal, but dynamic and changing. It was a shock when quantum theory demonstrated the world to be fundamentally indeterministic. This is by now a very familiar narrative. In fact, it is so familiar that it has been expropriated (dare I say exapted?) by obscurantists and irrationalists of our time, who point to continual changes in scientific knowledge as “proof” that science doesn’t give us any “truth” because it changes. The assumption here is that change in scientific knowledge demonstrates the weakness of science; in fact, change in scientific knowledge is the strength of science. Scientific knowledge is what I have elsewhere called an intelligent institution in so far as it is institutionalized knowledge, but that institution is formulated with internal mechanisms that facilitate the re-shaping of the institution itself over time. That mechanism is the scientific method.
It is important to see that the overturning of familiar conceptions of the world — some of which are ancient and some of which are not — is not arbitrary. Less comprehensive conceptions are being replaced by more comprehensive conceptions. The more comprehensive our perspective on the world, the greater the number of anomalies we must face, and the greater the number of anomalies we face the more likely it is that our theories will be overturned, or at least partially falsified. But it is the wrong debate to ask whether theory change is rational or irrational. It is misleading, because what ought to concern us is how well our theories account for the ever-larger world that is revealed to us through our ever-more comprehensive methods of science, and not how well our theories conform to our presuppositions about rationality. As we get the science right, reason will follow, shaping new intuitions and formulating new theories.
Our ability to discover (and to understand) ever greater scales of the universe is contingent upon our growing intellectual capabilities, which are cumulative. Just as in the STEM cycle science begets technologies that beget industries that create better scientific instruments, so too on a purely intellectual level the intellectual capabilities of one generation are the formative context of the intellectual capabilities of the next generation, which allows the later generation to exceed the earlier generation. Concepts are the tools of the mind, and we use our familiar concepts to create the next generation of concepts, which latter are both more refined and more powerful than the former, in the same way as we use each generation of tools to build the next generation of tools, which makes each generation of tools better than the last (except for computer software — but I expect that this degradation in the practicability of computer software is simply the software equivalent of planned obsolescence).
Our current generation of tools — both conceptual and technological — is daily revealing to us the inadequacy of our past conceptions of the world. Several recent discoveries have in particular called into question our understanding of the size of the world, especially in so far as the world is defined in terms of its origins in the Big Bang. For example, the discovery of hyperclusters suggests physical structures of the universe that are larger than the upper limit set to physical structures by contemporary cosmological theories (cf. ‘Hyperclusters’ of the Universe — “Something is Behaving Very Strangely”).
In a similar vein, writing of the recent discovery of a “large quasar group” (LQG) as much as four billion light years across, the article The Largest Discovered Structure in the Universe Contradicts Big-Bang Theory Cosmology states:
“This LQG challenges the Cosmological Principle, the assumption that the universe, when viewed at a sufficiently large scale, looks the same no matter where you are observing it from. The modern theory of cosmology is based on the work of Albert Einstein, and depends on the assumption of the Cosmological Principle. The principle is assumed, but has never been demonstrated observationally ‘beyond reasonable doubt’.”
This formulation gets the order of theory and observation wrong. The cosmological principle is not a principle that can be proved or disproved by evidence; it is a theoretical idea that is used to give structure and meaning to observations, to organize observations into a theoretical whole. The cosmological principle belongs to theoretical cosmology; recent discoveries such as hyperclusters and large quasar groups belong to observational cosmology. While the two — i.e., theoretical and observational — cannot be separated in the practice of science, it is also true that they are not identical. Theoretical methods are distinct from observational methods, and vice versa.
Thus the cosmological principle may be helpful or unhelpful in organizing our knowledge of the cosmos, but it is not the kind of thing that can be falsified in the same way that, for example, a theory of planetary formation can be falsified. That is to say, the cosmological principle might be opposed to (falsified by) another principle that negates the cosmological principle, but this anti-cosmological principle will similarly belong to an order not falsifiable by empirical observations.
The discoveries of hyperclusters and LQGs are particularly problematic because they question some of the fundamental assumptions and conclusions of Big Bang cosmology, which is, essentially, the only large scale cosmological model in contemporary science. Big Bang cosmology is the explanation for the structure of the cosmos that was formulated in response to the discovery of the red shift, which implies that, on the largest observable scales, the universe is expanding. It is important to add the qualification, “on the largest observable scales” because stars within a given galaxy are circulating around the galaxy, and while a given star may be moving away from another given star, it is also likely to be moving toward yet some other star. And, even at larger scales, not all galaxies are receding from each other. It is fairly well known that galaxies collide and commingle; the Helmi stream of our own Milky Way is the result of a long past galactic collision, and at some far time in the future the Milky Way itself will merge with the larger Andromeda galaxy, and be absorbed by it.
Cosmology during the period of the big bang theory (a period in which we still find ourselves today) is in some respects like biology before Darwin. Almost all biology before Darwin was essentially theological, but no one had a better idea, so biology had to wait until after Darwin to become a science capable of methodologically naturalistic formulations. The big bang theory, by contrast, was proposed as a scientific theory (not merely bequeathed to us by pre-scientific tradition), and most scientists working within the big bang tradition have formulated the Big Bang in meticulously naturalistic terms. Nevertheless, once the steady state theory was overthrown, no one really had an alternative to the big bang theory, so all cosmology centered on the Big Bang for lack of imagination of alternatives — but also due to the limitations of the scientific instruments, which at the time of the triumph of the big bang theory were much more modest than they are today.
As disconcerting as it was to discover that the cosmos does not embody an eternal order, that it is expanding and has had a history of violent episodes, and that it is much larger than an “island universe” comprising only the Milky Way, the observations that we need to explain today are no less disconcerting in their own way.
Here is how Leonard Susskind describes our contemporary observations of the expanding universe:
“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”
Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
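Susskind’s figure of roughly fifteen billion light years is close to a simple back-of-the-envelope estimate: the Hubble radius, the distance at which Hubble’s law v = H0 · d gives a recession speed equal to the speed of light. The sketch below is only an illustration, and the value assumed for H0 is a conventional round number, not a measurement:

```python
# Back-of-the-envelope Hubble radius: the distance at which
# recession speed under Hubble's law v = H0 * d reaches the speed of light.
C_KM_S = 299_792.458    # speed of light, km/s
H0 = 70.0               # Hubble constant, km/s/Mpc (assumed round value)
LY_PER_MPC = 3.2616e6   # light-years per megaparsec

hubble_radius_mpc = C_KM_S / H0
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9

print(f"{hubble_radius_gly:.1f} billion light-years")  # → 14.0 billion light-years
```

The result, on the order of fourteen billion light years, agrees with Susskind’s “about fifteen billion” to within the uncertainty in H0 (and the more careful general-relativistic treatment, which this one-liner does not attempt).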
This observation has not yet been sufficiently appreciated. What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole becomes unobservable, and that places such matters beyond the reach of science understood in a narrow sense of observation. But as I have noted above, while in the practice of science we cannot disentangle the theoretical and the observational, the two are not the same. While our observations come to an end at the cosmic horizon, our principles encounter no such boundary. Thus it is that we naturally extrapolate our science beyond the boundaries of observation, but if we get our scientific principles wrong, anything beyond the boundary of observation will be wrong and will be incapable of correction by observation.
Science in the narrow sense must, then, come to an end with observation. But this does not satisfy the mind. One response is to deny the mind its satisfaction and refuse to pass beyond observation. Another response is to fill the void with mythology and fiction. Yet another response is to take up the principles on their own merits and consider them in the light of reason. This response is the philosophical response, and we see that it is a rational response to the world that is continuous with science even when it passes beyond science.
. . . . .
17 November 2013
Inefficiency in the STEM cycle
In my previous post, The Open Loop of Industrial-Technological Civilization, I ended on the apparently pessimistic note of the existential risks posed to industrial-technological civilization by friction and inefficiency in the STEM cycle that drives our civilization headlong into the future. Much that is produced by the feedback loop of science, technology, and engineering is dissipated in science that does not result in technologies, technologies that are not engineered into industries, and industries that do not produce new scientific instruments. However, just enough science feeds into technology, technology into engineering, and engineering into science to keep the STEM cycle going.
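The threshold character of “just enough” can be made vivid with a deliberately crude toy model (every number here is an illustrative assumption, not a measurement): treat each stage as passing a fixed fraction of its output to the next stage, with new instruments amplifying the reach of the next round of science, and see whether the cycle sustains itself or winds down.

```python
def stem_cycle(efficiency, amplification=3.0, generations=30):
    """Toy model of the STEM feedback loop.

    One turn converts science -> technology -> engineering, and the
    instruments built by engineering amplify the next round of science.
    'efficiency' is the fraction of each stage feeding the next stage;
    the remainder is dissipated as pure science, marginal technology,
    and engineering curiosities.
    """
    science = 1.0
    for _ in range(generations):
        technology = efficiency * science
        engineering = efficiency * technology
        science = amplification * efficiency * engineering
    return science

# Each turn multiplies science by amplification * efficiency**3, so the
# cycle is self-sustaining only above a threshold efficiency.
print(stem_cycle(0.8) > 1.0)  # self-sustaining
print(stem_cycle(0.5) < 1.0)  # dissipates
```

The point of the sketch is only the qualitative behavior: below the threshold the cycle dissipates, above it the cycle compounds, which is the sense in which “just enough” feedback keeps industrial-technological civilization going.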
These “inefficiencies” should not be seen as a “bad” thing, since much pure science that is valuable as an intellectual contribution to civilization has few if any practical consequences. The “inefficient” science that does not contribute directly to the STEM cycle is some of the best science that does humanity credit. Indeed, G. H. Hardy was famously emphatic that all practical mathematics was “ugly” and only pure mathematics, untainted by practical application, was truly beautiful — and Hardy made it clear that beautiful mathematics was ultimately the only thing that mattered. Thus these “inefficiencies” that appear to weaken the STEM cycle and hence pose an existential risk to our industrial-technological civilization, are at the same time existential opportunities — as always, risk and opportunity are one and the same.
Opportunities of the STEM cycle
The apparently pessimistic formulation of my previous post took this form:
“It is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could increase the amount of epiphenomenal science, technology, and engineering, thus decreasing the efficiency of the STEM cycle.”
Such a formulation must be balanced by an appropriate and parallel formulation to the effect that it is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could decrease the amount of epiphenomenal science, technology, and engineering, thus increasing the efficiency of the STEM cycle.
However, making the STEM cycle more “efficient” might well be catastrophic, or nearly catastrophic, for civilization, as it would imply a narrowing of human life to the parameters defined by the STEM cycle. This might lead to a realization of the existential risks of permanent stagnation (i.e., the stagnation of all aspects of civilization other than those that advance industrial-technological civilization, which could prove frightening) or flawed realization, in which an acceleration or consolidation of the STEM cycle leads to the sort of civilization no one would find desirable or welcome.
There is no reason, however, that one could not both strengthen the STEM cycle, making industrial-technological civilization more robust and more productive of advanced science, technology, and engineering, and at the same time produce more pure science, more marginal technologies, and more engineering curiosities that don’t feed directly into the STEM cycle. The bigger the pie, the bigger each piece of the pie and the more to go around for everyone. Also, pure science and practical science exist in a cycle of mutual escalation of their own, in which pure science inspires practical science and practical science inspires more pure science. Perhaps the same is true also of marginal and practical technologies, and of the engineering of curiosities and the engineering of mass industries.
Scaling the STEM cycle
The dissipation of excess productions of the STEM cycle means that unexpected sectors of the economy (as well as unexpected sectors of society) are occasionally the recipients of disproportionate inputs. These disproportionate inputs, like the inefficiencies discussed above, might be understood as either risks or opportunities. Some socioeconomic sectors might be catastrophically stressed by a disproportionate input, while others might unexpectedly flourish because of one. To control the possibilities of catastrophic failure or flourishing success, we must consider the possibility of scaling the STEM cycle.
To what degree can the STEM cycle be scaled? By this question I mean that, once we are explicitly and consciously aware that the STEM cycle drives industrial-technological civilization (or, minimally, that it is among the drivers of industrial-technological civilization), and if we want to further drive that civilization forward (as I would like to see it driven, until earth-originating life has established extraterrestrial redundancy in the interest of existential risk mitigation), can we consciously do so? To what extent can the STEM cycle be controlled, or its scaling be controlled? Can we consciously direct the STEM cycle so that more science begets more technology, more technology begets more engineering, and more engineering begets more science? I think that we can. But, as with the matters discussed above, we must always be aware of the risk/opportunity trade-off. Focusing too much on the STEM cycle may have disadvantages.
Once we understand an underlying mechanism of civilization, like the STEM cycle, we can consciously cultivate this mechanism if we wish to see more of this kind of civilization, or we can attempt to dampen this mechanism if we want to see less of this civilization. These attempts to cultivate or dampen a mechanism of civilization can take microscopic or macroscopic forms. Macroscopically, we are concerned with the total picture of civilization; microscopically we may discern the smallest manifestations of the mechanism, as when the STEM cycle is purposefully pursued by the R&D division of a business, which funds a certain kind of science with an eye toward creating certain technologies that can be engineered into specific industries — all in the interest of making a profit for the shareholders.
This last example is a very conscious exemplification of the STEM cycle, one that might conceivably be reduced to the work of a single individual, working in turn as scientist, technologist, and engineer. The very narrowness of this process, which is likely to produce specific and quantifiable results, means it is also likely to produce very little in the way of epiphenomenal manifestations of the STEM cycle, and thus may contribute little or nothing to the more edifying dimensions of civilization. But this is not necessarily the case. Arno Penzias and Robert Wilson were working as scientists trying to solve a practical problem for Bell Labs when they discovered the cosmic microwave background radiation.
Reason for Hope
We have at least as much reason to hope for the future as to despair of the future, if not more reason to hope. The longer civilization persists, the more robust it becomes, and the more robust civilization becomes, the more internal diversity and experimentation civilization can tolerate (i.e., greater social differentiation, as Siggi Becker has recently pointed out to me). The extreme social measures taken in the past to enforce conformity within society have been softened in Western civilization, and individuals have a great deal of latitude that was unthinkable even in the recent past.
Perhaps more significantly from the perspective of civilization, the more robust and tolerant our civilization, the more latitude there is for like-minded individuals to cooperate in the founding and advancement of innovative social movements which, if they prove to be effective and to meet a need, can result in real change to the overall structure of society. This sort of bottom-up social change was precisely the kind of change that agrarian-ecclesiastical civilization was structured to frustrate, resist, and suppress. In this respect, if in no other, we have seen social progress in the development of civilization that is distinct from the technological and economic progress that characterizes the STEM cycle.
As I wrote in my recent Centauri Dreams post, SETI, METI, and Existential Risk, to exist is to be subject to existential risk. Given the relation of risk and opportunity, it is also the case that to exist is to choose among existential opportunities. This is why we fight so desperately to stay alive, and struggle so insistently to improve our condition once we have secured the essentials of existence. To be alive is to have countless existential opportunities within reach; once we die, all of this is lost to us. And to improve one’s condition is to increase the actionable existential opportunities within one’s grasp.
The development of civilization, for all its faults and deficiencies, is tending toward increasing the range of existential opportunities available as “live options” (as William James would say) for both individuals and communities. That this increased range of existential opportunities also comes with an increased variety of existential risks should not be employed as an excuse to attempt to reverse the real social gains bequeathed by industrial-technological civilization.
. . . . .
. . . . .
. . . . .
23 October 2013
Prediction in Science
One of the distinguishing features of science as a system of thought is that it makes testable predictions. The fact that scientific predictions are testable suggests a methodology of testing, and we call the scientific methodology of testing experiment. Hypothesis formation, prediction, experimentation, and resultant modification of the hypothesis (confirmation, disconfirmation, or revision) are all essential elements of the scientific method, which constitutes an escalating spiral of knowledge as the scientific method systematically exposes predictions to experiment and modifies its hypotheses in the light of experimental results, which leads in turn to new predictions.
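The cycle described here (hypothesis, prediction, experiment, revision) can be caricatured as a loop. The following toy sketch is purely illustrative, with a "hidden law" standing in for nature; all names and values are my own invention, not anything from the scientific literature:

```python
# Toy sketch of the scientific method's escalating spiral:
# hypothesize, predict, test by experiment, revise.
# The "hidden law" stands in for nature; it is unknown to the scientist.
HIDDEN_LAW = lambda x: 2 * x

def experiment(x, predicted):
    """Expose a prediction to observation; return confirmation or not."""
    return HIDDEN_LAW(x) == predicted

hypothesis = lambda x: x + 3      # initial (wrong) hypothesis
history = []
for x in [1, 2, 3]:               # each pass of the cycle yields a new prediction
    prediction = hypothesis(x)
    confirmed = experiment(x, prediction)
    history.append(confirmed)
    if not confirmed:             # disconfirmation prompts revision
        hypothesis = lambda x: 2 * x
```

After the first disconfirmation the revised hypothesis survives the subsequent tests; in real science, of course, revision is not a single lucky guess but a further turn of the same spiral.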
The escalating spiral of knowledge that science cultivates naturally pushes that knowledge into the future. Sometimes scientific prediction is even formulated in reference to “new facts” or “temporal asymmetries” in order to emphasize that predictions refer to future events that have not yet occurred. In constructing an experiment, we create a new set of facts in the world, and then interpret these facts in the light of our hypothesis. It is this testing of hypotheses by experiment that establishes the concrete relationship of science to the world, and this is also a source of limitation, for experiments are typically designed in order to focus on a single variable, and to that end an attempt is made to control for the other variables. (A system of thought that is not limited by the world is not science.)
Alfred North Whitehead captured this artificial feature of scientific experimentation in a clever line that points to the difference between scientific predictions and predictions of a more general character:
“…experiment is nothing else than a mode of cooking the facts for the sake of exemplifying the law. Unfortunately the facts of history, even those of private individual history, are on too large a scale. They surge forward beyond control.”
Alfred North Whitehead, Adventures of Ideas, New York: The Free Press, 1967, Chapter VI, “Foresight,” p. 88
There are limits to prediction, and not only those pointed out by Whitehead. The limits to prediction have been called the prediction wall. Beyond the prediction wall we cannot penetrate.
The Prediction Wall
John Smart has formulated the idea of a prediction wall in his essay, “Considering the Singularity,” as follows:
With increasing anxiety, many of our best thinkers have seen a looming “Prediction Wall” emerge in recent decades. There is a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex and yet amazingly stable technological systems, erected like vast beehives, systems tended to by large swarms of only partially aware human beings, each of which has only a very limited conceptualization of the new technological environment that we have constructed.
Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn’t technologically dependent these days?), where the ten-year business plans of the 1950’s have been replaced with ten-week (quarterly) plans of the 2000’s, and where planning beyond two years in some fields may often be unwise speculation. But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In “Science Fiction Without the Future,” 2001, Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It’s as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.) © 2003 by John Smart (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)
I would like to suggest that there are at least two prediction walls: synchronic and diachronic. The prediction wall formulated above by John Smart is a diachronic prediction wall: it is the onward-rushing pace of events, one following the other, that eventually defeats our ability to see any recognizable order or structure of the future. The kind of prediction wall to which Whitehead alludes is a synchronic prediction wall, in which it is the outward eddies of events in the complexity of the world’s interactions that make it impossible for us to give a complete account of the consequences of any one action. (Cf. Axes of Historiography)
Retrodiction and the Historical Sciences
Science does not live by prediction alone. While some philosophers of science have questioned the scientificity of the historical sciences because they could not make testable (and therefore falsifiable) predictions about the future, it is now widely recognized that the historical sciences don’t make predictions, but they do make retrodictions. A retrodiction is a prediction about the past.
The Oxford Dictionary of Philosophy by Simon Blackburn (p. 330) defines retrodiction thusly:
retrodiction The hypothesis that some event happened in the past, as opposed to the prediction that an event will happen in the future. A successful retrodiction could confirm a theory as much as a successful prediction.
As with predictions, there is also a limit to retrodiction, and this is the retrodiction wall. Beyond the retrodiction wall we cannot penetrate.
I haven’t been thinking about this idea for long enough to fully understand the ramifications of a retrodiction wall, so I’m not yet clear about whether we can distinguish diachronic retrodiction from synchronic retrodiction. Or, rather, it would be better to say that the distinction can certainly be made, but that I cannot think of good contrasting examples of the two at the present time.
We can define a span of accessible history that extends from the retrodiction wall in the past to the prediction wall in the future as what I will call effective history (by analogy with effective computability). Effective history can be defined in a way that is closely parallel to effectively computable functions, because all of effective history can be “reached” from the present by means of finite, recursive historical methods of inquiry.
Effective history is not fixed for all time, but expands and contracts as a function of our knowledge. At present, the retrodiction wall is the Big Bang singularity. If anything preceded the Big Bang singularity we are unable to observe it, because the Big Bang itself effectively obliterates any observable signs of any events prior to itself. (Testable theories have been proposed that suggest the possibility of some observable remnant of events prior to the Big Bang, as in conformal cyclic cosmology, but this must at present be regarded as only an early attempt at such a theory.)
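The notion can be given a rough schematic formulation (the notation is mine, offered only as a sketch, not as anything standard): if $r(k)$ and $p(k)$ are the retrodiction and prediction walls given a state of knowledge $k$, then effective history is the interval between them:

```latex
% Effective history as the span between the two walls, indexed by
% the state of knowledge k (hypothetical notation, not standard):
\[
  \mathrm{EH}(k) = [\, r(k),\, p(k) \,],
  \qquad r(k) \leq \text{present} \leq p(k)
\]
% Effective history "expands and contracts as a function of our
% knowledge": a gain in knowledge may move r(k) earlier or p(k) later;
% a dark age does the reverse.
```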
Prior to the advent of scientific historiography as we know it today, the retrodiction wall was fixed at the beginning of the historical period narrowly construed as written history, and at times the retrodiction wall has been quite close to the present. When history experiences one of its periodic dark ages that cuts it off from its historical past, little or nothing may be known of a past that was once familiar to everyone in a given society.
The emergence of agrarian-ecclesiastical civilization effectively obliterated human history before itself, in a manner parallel to the Big Bang. We know that there were caves that prehistorical peoples visited generation after generation for time out of mind, over tens of thousands of years — much longer than the entire history of agrarian-ecclesiastical civilization, and yet all of this was forgotten as though it had never happened. This long period of prehistory was entirely lost to human memory, and was not recovered again until scientific historiography discovered it through scientific method and empirical evidence, and not through the preservation of human memory, from which prehistory had been eradicated. And this did not occur until after agrarian-ecclesiastical civilization had lapsed and entirely given way to industrial-technological civilization.
We cannot define the limits of the prediction wall as readily as we can define the limits of the retrodiction wall. Predicting the future in terms of overall history has been more problematic than retrodicting the past, and equally subject to ideological and eschatological distortion. The advent of modern science compartmentalized scientific predictions and made them accurate and dependable — but at the cost of largely severing them from overall history, i.e., human history and the events that shape our lives in meaningful ways. We can make predictions about the carbon cycle and plate tectonics, and we are working hard to be able to make accurate predictions about weather and climate, but, for the most part, our accurate predictions about the future dispositions of the continents do not shape our lives in the near- to mid-term future.
I have previously quoted a famous line from Einstein: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We might paraphrase this Einstein line in regard to the relation of mathematics to the world, and say that as far as scientific laws of nature predict events, these events are irrelevant to human history, and in so far as predicted events are relevant to human beings, scientific laws of nature cannot predict them.
Singularities Past and Future
As the term “singularity” is presently employed — as in the technological singularity — the recognition of a retrodiction wall in the past complementary to the prediction wall in the future provides a literal connection between the historiographical use of “singularity” and the use of the term “singularity” in cosmology and astrophysics.
Theorists of the singularity hypothesis place a “singularity” in the future which constitutes an absolute prediction wall beyond which history is so transformed that nothing beyond it is recognizable to us. This future singularity is not the singularity of astrophysics.
If we recognize the actual Big Bang singularity in the past as the retrodiction wall for cosmology — and hence, by extension, for Big History — then an actual singularity of astrophysics is also at the same time an historical singularity.
. . . . .
I have continued my thoughts on the retrodiction wall in Addendum on the Retrodiction Wall.
. . . . .
. . . . .
. . . . .
2 February 2013
In my last post, The Science of Time, I discussed the possibility of taking an absolutely general perspective on time and how this can be done in a way that denies time or in a way that affirms time, after the manner of big history.
David Christian, whose books on big history and whose Teaching Company lectures on Big History have been seminal in the field, relates, by way of introduction to his final lectures (in which he switches from history to speculation on the future), that in his early big history courses his students felt as though they were cut off rather abruptly when he had brought them through 13.7 billion years of cosmic history only to drop them unceremoniously in the present without making any effort to discuss the future. It was this reaction that prompted him to continue beyond the present and to try to say something about what comes next.
Another way to understand this reaction of Christian’s students is that they wanted to see the whole of the history they had just been through placed in an even larger, more comprehensive context, and to do this requires going beyond history in the sense of an account of the past. To put the whole of history into a larger context means placing it within a cosmology that extends beyond our strict scientific knowledge of past and future — that which can be observed and demonstrated — and comprises a framework in the same scientific spirit but which looks beyond the immediate barriers to observation and demonstration.
Elsewhere in David Christian’s lectures (if my memory serves) he mentioned how some traditionalist historians, when they encounter the idea of big history, reject the very idea because history has always been about documents and eponymously confined to the historical period when documents were kept after the advent of literacy. According to this reasoning, anything that happened prior to the invention of written language is, by definition, not history. I have myself encountered similar reasoning as, for example, when it is claimed that prehistory is not history at all because it happened prior to the existence of written records, which latter define history.
This is a sadly limited view of history, but apparently it is a view with some currency, because I have encountered it in many forms and in different contexts. One way to discredit any intellectual exercise is to define it so narrowly that it cannot benefit from the most recent scientific knowledge, and then to impugn it precisely for its narrowness while not allowing it to change and expand as human knowledge expands. The explosion in scientific knowledge in the last century has made possible a scientific historiography that simply did not exist previously; to deny that this is history on the basis of traditional humanistic history being based on written records means that we must then define some new discipline, with all the characteristics of traditional history, but expanded to include our new knowledge. This seems like a perverse attitude to me, but for some people the label of their discipline is important.
Call it what you will then — call it big history, or scientific historiography, or the study of human origins, or deny that it is history altogether, but don’t try to deny that our knowledge of the past has expanded exponentially since the scientific method has been applied to the past.
In this same spirit, we need to recognize that a greatly expanded conception of history needs to reach into the future, that a scientific futurism needs to be part of our expanded conception of the totality of time and history — or whatever it is that results when we apply Russell’s generalization imperative to time. Once again, it would be unwise to be overly concerned with what we call this emerging discipline, whether it be the totality of time or the whole of time or temporal infinitude or ecological temporality or what Husserl called omnitemporality or even absolute time.
Part of this grand (historical) effort will be a future science of civilizations, as the long term and big picture conception of civilization is of central human interest in this big picture of time and history. We not only want to know the naturalistic answers to traditional eschatological questions — Where did we come from? Where are we going? — but we also want to know the origins and destiny of what we have ourselves contributed to the universe — our institutions, our ideas, civilization, the technium, and all the artifacts of human endeavor.
. . . . .
. . . . .
. . . . .
30 January 2013
F. H. Bradley in his classic treatise Appearance and Reality: A Metaphysical Essay, made this oft-quoted comment:
“If you identify the Absolute with God, that is not the God of religion. If again you separate them, God becomes a finite factor in the Whole. And the effort of religion is to put an end to, and break down, this relation — a relation which, none the less, it essentially presupposes. Hence, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him. It is this difficulty which appears in the problem of the religious self-consciousness.”
I think many commentators have taken this passage as emblematic of what they believe to be Bradley’s religious sentimentalism, and in fact the yearning for religious belief (no longer possible for rational men) that characterized much of the school of thought that we now call “British Idealism.”
This is not my interpretation. I’ve read enough Bradley to know that he was no sentimentalist, and while his philosophy diverges radically from contemporary philosophy, he was committed to a philosophical, and not a religious, point of view.
Bradley was an elder contemporary of Bertrand Russell, and Russell characterized Bradley as the grand old man of British idealism. This is from Russell’s Our Knowledge of the External World:
“The nature of the philosophy embodied in the classical tradition may be made clearer by taking a particular exponent as an illustration. For this purpose, let us consider for a moment the doctrines of Mr Bradley, who is probably the most distinguished living representative of this school. Mr Bradley’s Appearance and Reality is a book consisting of two parts, the first called Appearance, the second Reality. The first part examines and condemns almost all that makes up our everyday world: things and qualities, relations, space and time, change, causation, activity, the self. All these, though in some sense facts which qualify reality, are not real as they appear. What is real is one single, indivisible, timeless whole, called the Absolute, which is in some sense spiritual, but does not consist of souls, or of thought and will as we know them. And all this is established by abstract logical reasoning professing to find self-contradictions in the categories condemned as mere appearance, and to leave no tenable alternative to the kind of Absolute which is finally affirmed to be real.”
Bertrand Russell, Our Knowledge of the External World, Chapter I, “Current Tendencies”
Although Russell rejected what he called the classical tradition, and distinguished himself in contributing to the origins of a new philosophical school that would come (in time) to be called analytical philosophy, the influence of figures like F. H. Bradley and J. M. E. McTaggart (whom Russell knew personally) can still be found in Russell’s philosophy.
In fact, the above quote from F. H. Bradley — especially the portion most quoted, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him — is a perfect illustration of a principle found in Russell, and something on which I have quoted Russell many times, as it has been a significant influence on my own thinking.
I have come to refer to this principle as Russell’s generalization imperative. Russell didn’t call it this (the terminology is mine), and he didn’t in fact give any name at all to the principle, but he implicitly employs this principle throughout his philosophical method. Here is how Russell himself formulated the imperative (which I last quoted in The Genealogy of the Technium):
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
One of the distinctive features that Russell identifies as constitutive of the classical tradition, and in fact one of the few explicit commonalities between the classical tradition and Russell’s own thought, was the denial of time. The British idealists denied the reality of time outright, in the best Platonic tradition; Russell did not deny the reality of time, but he was explicit about not taking time too seriously.
Despite Russell’s hostility to mysticism as expressed in his famous essay “Mysticism and Logic,” when it comes to the mystic’s denial of time, Russell softens a bit and shows his sympathy for this particular aspect of mysticism:
“Past and future must be acknowledged to be as real as the present, and a certain emancipation from slavery to time is essential to philosophic thought. The importance of time is rather practical than theoretical, rather in relation to our desires than in relation to truth. A truer image of the world, I think, is obtained by picturing things as entering into the stream of time from an eternal world outside, than from a view which regards time as the devouring tyrant of all that is. Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
“…impartiality of contemplation is, in the intellectual sphere, that very same virtue of disinterestedness which, in the sphere of action, appears as justice and unselfishness. Whoever wishes to see the world truly, to rise in thought above the tyranny of practical desires, must learn to overcome the difference of attitude towards past and future, and to survey the whole stream of time in one comprehensive vision.”
Bertrand Russell, Mysticism and Logic, and Other Essays, Chapter I, “Mysticism and Logic”
While Russell and the classical tradition in philosophy both perpetuated the devalorization of time, this attitude is slowly disappearing from philosophy, and contemporary philosophers are more and more treating time as another reality to be given philosophical exposition rather than denying its reality. I regard this as a salutary development and a riposte to all who claim that philosophy makes no advances. Contemporary philosophy of time is quite sophisticated, and embodies a much more honest attitude to the world than the denial of time. (For those looking at philosophy from the outside, the denial of the reality of time simply sounds like a perverse waste of time, but I won’t go into that here.)
In any case, we can bring Russell’s generalization imperative to time and history even if Russell himself did not do so. That is to say, we ought to generalize to the utmost in our conception of time, and if we do so, we come to a principle parallel to Bradley’s that I think both Russell and Bradley would have endorsed: short of the absolute time cannot rest, and, having reached that goal, time is lost and history with it.
Since I don’t agree with this, but it would be one logical extrapolation of Russell’s generalization imperative as applied to time, this suggests to me that there is more than one way to generalize about time. One way would be the kind of generalization that I formulated above, presumably consistent with Russell’s and Bradley’s devalorization of time. Time generalized in this way becomes a whole, a totality, that ceases to possess the distinctive properties of time as we experience it.
The other way to generalize time is, I think, in accord with the spirit of Big History: here Russell’s generalization imperative takes the form of embedding all times within larger, more comprehensive times, until we reach the time of the entire universe (or beyond). The science of time, as it is emerging today, demands that we always seek the most comprehensive temporal perspective, placing human action in evolutionary context, placing evolution in biological context, placing biology in geomorphological context, placing terrestrial geomorphology into a planetary context, and placing this planetary perspective into a cosmological context. This, too, is a kind of generalization, and a generalization that fully feels the imperative that to stop at any particular “level” of time (which I have elsewhere called ecological temporality) is arbitrary.
On my other blog I’ve written several posts related directly or obliquely to Big History as I try to define my own approach to this emerging school of historiography: The Place of Bilateral Symmetry in the History of Life, The Archaeology of Cosmology, and The Stars Down to Earth.
The more we pursue the rapidly growing body of knowledge revealed by scientific historiography, the more we find that we are part of the larger universe; our connections to the world expand as we pursue them outward in pursuit of Russell’s generalization imperative. I think it was Hans Blumenberg in his enormous book The Genesis of the Copernican World, who remarked on the significance of the fact that we can stand with our feet on the earth and look up at the stars. As I remarked in The Archaeology of Cosmology, we now find that by digging into the earth we can reveal past events of cosmological history. As a celestial counterpart to this digging in the earth (almost as though concretely embodying the contrast to which Blumenberg referred), we know that by looking up at the stars, we are also looking back in time, because the light that comes to us ages after it has been produced. Thus is astronomy a kind of luminous archaeology.
In Geometrical Intuition and Epistemic Space I wrote, “…we have no science of time. We have science-like measurements of time, and time as a concept in scientific theories, but no scientific theory of time as such.” Scientists have tried to think scientifically about time, but, as with the case of consciousness, a science of time eludes us as a science of consciousness eludes us. Here a philosophical perspective remains necessary because there are so many open questions and no clear indication of how these questions are to be answered in a clearly scientific spirit.
Therefore I think it is too early to say exactly what Big History is, because we aren’t logically or intellectually prepared to say exactly what the Russellian generalization imperative yields when applied to time and history. I think that we are approaching a point at which we can clarify our concepts of time and history, but we aren’t quite there yet, and a lot of conceptual work is necessary before we can produce a definitive formulation of time and history that will make of Big History the science that it aspires to be.
. . . . .
. . . . .
. . . . .
. . . . .
25 December 2012
Prior to the advent of civilization, the human condition was defined by nature. Evolutionary biologists call this initial human condition the environment of evolutionary adaptedness (or EEA). The biosphere of the Earth, with all its diverse flora and fauna, was the predominant fact of human experience. Very little that human beings did could have an effect on the human condition beyond the most immediate effects an individual might cause in the environment, such as gathering or hunting for food. Nothing was changed by the passage of human beings through an environment that was, for them, their home. Human beings had to conform themselves to this world or die.
Since the advent of civilization, it has been civilization and not nature that determines the human condition. As one civilization has succeeded another, and, more importantly, as one kind of civilization has succeeded another kind of civilization — which latter happens far less frequently, since like kinds of civilization tend to succeed each other except when this process of civilizational succession is preempted by the emergence of an historical anomaly on the order of the initial emergence of civilization itself — the overwhelming fact of human experience has been shaped by civilization and the products of civilization, rather than by nature. This transformation from being shaped by nature to being shaped by civilization is what makes the passage from hunter-gatherer nomadism to settled agrarian civilization such a radical discontinuity in human experience.
This transformation has been gradual. In the earliest period of human civilizations, an entire civilization might grow up from nothing, spread regionally, assimilating local peoples not previously included in the project of civilization, and then die out, all without coming into contact with another civilization. The growth of human civilization has meant a gradual and steady increase in the density of human populations. It has already been thousands of years since a civilization could flourish and fail without encountering another civilization. It has been, moreover, hundreds of years since all human communities were bound together through networks of trade and communication.
Civilization is now continuous across the surface of the planet. The world-city — Doxiadis’ Ecumenopolis, which I discussed in Civilization and the Technium — is already an accomplished fact (though it is called by another name, or no name at all). We retain our green spaces and our nature reserves, but all human communities ultimately are contiguous with each other, and there is no direction that you can go on the surface of the Earth without encountering another human community.
The civilization of the present, which I call industrial-technological civilization, is as distinct from the agricultural civilization (which I call agrarian-ecclesiastical civilization) that preceded it as agricultural civilization was distinct from the nomadic hunter-gatherer paradigm that preceded it in turn. In other words, the emergence of industrialization interpolated a discontinuity in the human condition on the order of the emergence of civilization itself. One of the aspects of industrial-technological civilization that distinguishes it from earlier agricultural civilization is the effective regimentation and indeed rigorization of the human condition.
The emergence of organized human activity, which corresponds to the emergence of the species itself, and which is therefore to be found in hunter-gatherer nomadism as much as in agrarian or industrial civilization, meant the emergence of institutions. At first, these institutions were as unsystematic and implicit as everything else in human experience. When civilizations began to abut each other in the agrarian era, it became necessary to make these institutions explicit and to formulate them in codes of law and regulation. At first, this codification itself was unsystematic. It was the emergence of industrialization that forced human civilizations to make their institutions not only explicit, but also systematic.
This process of systematization and rigorization is most clearly seen in the most abstract realms of thought. In the nineteenth century, when industrialization was beginning to transform the world, we see at the same time a revolution in mathematics that went beyond all the earlier history of mathematics. While Euclid famously systematized geometry in classical antiquity, it was not until the nineteenth century that mathematical thought grew to a point of sophistication that outstripped and exceeded Euclid.
From classical antiquity up to industrialization, it was frequently thought, and frequently asserted, that Euclid was the perfection of human reason in mathematics and that Aristotle was the perfection of human reason in logic, and there was simply nothing more to be done in these fields beyond learning to repeat the lessons of the masters of antiquity. In the nineteenth century, during the period of rapid industrialization, people began to think about mathematics and logic in a way that was more sophisticated and subtle than even the great achievements of Euclid and Aristotle. Separately, yet almost simultaneously, three different mathematicians (Bolyai, Lobachevski, and Riemann) formulated systems of non-Euclidean geometry. Similarly revolutionary work transformed logic from its Aristotelian syllogistic origins into what is now called mathematical logic, the result of the work of George Boole, Frege, Peano, Russell, Whitehead, and many others.
At the same time that geometry and logic were being transformed, the rest of mathematics was also being profoundly transformed. Many of these transformational forces have roots that go back hundreds of years in history, and this is also true of the industrial revolution itself. The growth of European society as a result of state competition within the European peninsula, the explicit formulation of legal codes, the gradual departure from a strictly peasant subsistence economy, and the similarly gradual yet steady spread of technology in the form of windmills and watermills, ready to be powered by steam when the steam engine was invented, are all developments that anticipate and point to the industrial revolution. But the point here is that these anticipations did not come to fruition until the nineteenth century.
And so with mathematics. Descartes had made the calculus possible through his earlier innovation of analytical geometry, and Newton and Leibniz then independently invented the calculus, but it remained on unsure foundations for more than a century. These developments anticipated and pointed to the rigorization of mathematics, but that rigorization did not come to fruition until the nineteenth century. This fruition is sometimes called the arithmetization of analysis, and involved the substitution of the limit method for Newton's fluxions and Leibniz's infinitesimals. The rigorous formulation of the calculus made possible engineering in its contemporary form, and rigorous engineering made it possible to bring the most advanced science of the day to the practical problems of industry. Intrinsically arithmetical realities could now be given a rigorous mathematical exposition.
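The substitution at the heart of the arithmetization of analysis can be stated compactly. Where Leibniz treated the derivative as a ratio of infinitesimal quantities, the nineteenth-century formulation (due to Cauchy and, in its final form, Weierstrass) defines it as a limit over ordinary real numbers, with the limit itself reduced to a purely arithmetical condition:

```latex
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},
\qquad \text{where} \qquad
\lim_{h \to 0} g(h) = L
\;\Longleftrightarrow\;
\forall \varepsilon > 0 \;\exists \delta > 0 :\;
0 < |h| < \delta \implies |g(h) - L| < \varepsilon
```

Nothing infinitely small appears on the right-hand side: every quantity quantified over is an ordinary real number, which is what made the definition acceptable to the rigorists of the nineteenth century.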
Historians of mathematics and industrialization would probably cringe at my potted sketch of history, but here it is in sententious outline:
● Rigorization of mathematics (also called the arithmetization of analysis)
● Mathematization of science
● Scientific systematization of technology
● Technological rationalization of industry
I have discussed part of this cycle in my writings on industrial-technological civilization and the disruption of the industrial-technological cycle. The origins of this cycle involve the additional steps that made the cycle possible, and many of these additional steps are those that made logic, mathematics, and science rigorous in the nineteenth century.
The reader should also keep in mind the parallel rigorization of social institutions that occurred, including the transformation of the social sciences after the model of the hard sciences. Economics, which is particularly central to the considerations of industrial-technological civilization, has been completely transformed into a technical, mathematized science.
With the rigorization of social institutions, and especially the economic institutions that shape human life from cradle to grave, it has been inevitable that the human condition itself should be made rigorous. Foucault was instrumental in pointing out salient aspects of this, which he called biopower, and which, I suggest, will eventually issue in technopower.
I am not suggesting that this has been a desirable, pleasant, or welcome development. On the contrary, industrial-technological civilization is beset in its most advanced quarters by a persistent apocalypticism and declensionism as industrialized populations fantasize about the end of the social regime that has come to control almost every aspect of life.
I wrote about the social dissatisfaction that issues in apocalypticism in Fear of the Future. I’ve been thinking more about this recently, and I hope to return to this theme when I can formulate my thoughts with the appropriate degree of rigor. I am seeking a definitive formulation of apocalypticism and how it is related to industrialization.
. . . . .
. . . . .
. . . . .
17 October 2012
It is ironic, though not particularly paradoxical, that the earth sciences as we know them today only came into being as the result of the emergence of space science, and space science was a consequence of the advent of the Space Age. We had to leave the Earth and travel into space in order to see the Earth for what it is. Why was this the case, and what do I mean by this?
It has often been commented that we had to go into space in order to discover the Earth, which is to say, to understand that the Earth is a blue oasis in the blackness of space. The early images of the space program had a profound effect on human self-understanding. Photographs (as much as or more than any theory) provided the context that allowed us to have a unified perspective on the Earth as part of a system of worlds in space. Seeing the Earth for what it is, what Carl Sagan called a “pale blue dot” in the blackness of space, drove home a new perspective on the human condition that could not be forgotten once it had been glimpsed.
To learn that our sun was a star among stars, and that the stars were suns in their own right, that the Earth is a planet among planets, and perhaps other planets are other Earths, has been a long epistemic struggle for humanity. That the Milky Way is a galaxy among galaxies, a point that has been particularly driven home by recent observational cosmology as with the Hubble Ultra-Deep Field (UDF) image (and now the Hubble eXtreme-Deep Field (XDF) image), is an idea that we still today struggle to comprehend. The planethood of the Earth, the stellarhood of the sun, the galaxyhood of the Milky Way are all exercises in contextualizing our place in the universe, and therefore an exercise in Copernicanism.
But I am getting ahead of myself. I wanted to discuss the earth sciences, and to try to understand what they are and how they have become what they are. What are the Earth sciences? The Biology Online website offers these brief and concise definitions of the earth sciences:
The Earth Sciences, investigating the way our planet works and the mechanisms of nature that drive it.
Earth Science is the study of the Earth and its neighbors in space… Many different sciences are used to learn about the earth, however, the four basic areas of Earth science study are: geology, meteorology, oceanography and astronomy.
For a more detailed overview of the earth sciences, the Earth Science Literacy Initiative (ESLI), funded by the National Science Foundation, has formulated nine “big ideas” of earth science that it has published in its pamphlet Earth Science Literacy Principles. Here are the nine big ideas taken from their pamphlet:
1. Earth scientists use repeatable observations and testable ideas to understand and explain our planet.
2. Earth is 4.6 billion years old.
3. Earth is a complex system of interacting rock, water, air, and life.
4. Earth is continuously changing.
5. Earth is the water planet.
6. Life evolves on a dynamic Earth and continuously modifies Earth.
7. Humans depend on Earth for resources.
8. Natural hazards pose risks to humans.
9. Humans significantly alter the Earth.
Each of these “big ideas” is further elaborated in subheadings that frequently bring out the planethood of the Earth. For example, section 2.2 reads:
Our Solar System formed from a vast cloud of gas and dust 4.6 billion years ago. Some of this gas and dust was the remains of the supernova explosion of a previous star; our bodies are therefore made of “stardust.” This age of 4.6 billion years is well established from the decay rates of radioactive elements found in meteorites and rocks from the Moon.
Intuitively, we would say that the earth sciences are those sciences that study the Earth and its natural processes, but the rapid expansion of scientific knowledge has made us realize that the Earth is not a closed system that can be studied in isolation. The Earth is part of a system — the solar system, and beyond that a galactic system, etc. — and must be studied as part of this system. But we didn’t always know this, and this comprehensive conception of earth science is still in the process of formulation.
The realization that the processes of the Earth, and the sciences that study these processes, must ultimately be placed in a cosmological context means that contemporary earth science is now a fully Copernican science, like astrobiology, which seeks to place biology in a cosmological context, though perhaps not quite as explicitly so. The very idea of Earth science as it is understood today, like planetary science and space science, is essentially Copernican; Copernicanism is now the telos of all the sciences. Copernican civilization needs Copernican sciences. As I said in my presentation to this year’s 100YSS symposium, the scope of an industrial-technological civilization corresponds to the scope of the science that enables this civilization.
What this means is that the sciences that generations of Earth-bound scientists have labored to create in order to describe the planet upon which they have lived, which was the only planet that they could know prior to the advent of space science, are all planetary sciences in embryo — all potentially Copernican sciences that can be extended beyond the Earth that was their inspiration and origin. Before space science, all science was geocentric and therefore essentially Ptolemaic. Space science changed that, and now all the sciences are gradually becoming Copernican.
In the case of earth science, this is a powerful scientific model because the earth sciences have been, by definition, geocentric. That even geocentric sciences can become Copernican is a powerful lesson and provides a model for other sciences to follow. I have often quoted Foucault as saying that “A real science recognizes and accepts its own history without feeling attacked.” I think it can be honestly said that the geosciences recognize and accept their history as geocentric sciences and this in no way inhibits their ability to transcend their geocentric origins and become Copernican sciences no longer exclusively tied to the Earth. I find this rather hopeful for the future of science.
Another way to conceptualize earth science is to think of the earth sciences as those sciences that have come to recognize the planethood of the Earth. This places the Earth in its planetary context among other planets of our solar system, and it also places these planets (as well as the growing roster of exoplanets) in the context of planetary history that we have learned first-hand from the Earth.
To a certain extent, earth science and planetary science (or planetology) are convertible: each is increasingly formulated and refined in reference to the other. What is planetary science? Here is the Wikipedia definition of planetary science:
Planetary science (rarely planetology) is the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them. It studies objects ranging in size from micrometeoroids to gas giants, aiming to determine their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, originally growing from astronomy and earth science, but which now incorporates many disciplines, including planetary astronomy, planetary geology (together with geochemistry and geophysics), atmospheric science, oceanography, hydrology, theoretical planetary science, glaciology, and the study of extrasolar planets. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology.
The Division for Planetary Sciences of the American Astronomical Society doesn’t give us the convenience of a definition of planetary science, but its offerings on A Planet Orbiting Two Suns, A Thousand New Planets, Buried Mars Carbonates, The Lunar Core, Propeller Moons of Saturn, A Six-Planet System, Carbon Dioxide Gullies on Mars, and many others give us concrete examples of planetary science, examples which may, in certain ways, be more helpful than an explicit definition.
The “aims and scope” of the journal Earth and Planetary Science Letters also give something of a sense of what planetary science is:
Earth and Planetary Science Letters (EPSL) is the journal for researchers, policymakers and practitioners from the broad Earth and planetary sciences community. It publishes concise, highly cited articles (“Letters”) focusing on physical, chemical and mechanical processes as well as general properties of the Earth and planets — from their deep interiors to their atmospheres. Extensive data sets are included as electronic supplements and contribute to the short publication times. EPSL also includes a Frontiers section, featuring high-profile synthesis articles by leading experts to bring cutting-edge topics to the broader community.
A recent (2006) controversy over the status of Pluto as a planet led to an attempt by the International Astronomical Union (IAU) to formulate a more precise definition of what a planet is. The definition upon which they settled demoted Pluto from being a planet to being a dwarf planet. While this decision does not have complete unanimity, it is gaining ground in the literature. Here is the IAU definition of planets, dwarf planets, and small solar system bodies:
(1) A planet is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit.
(2) A “dwarf planet” is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, (c) has not cleared the neighbourhood around its orbit, and (d) is not a satellite.
(3) All other objects, except satellites, orbiting the Sun shall be referred to collectively as “Small Solar System Bodies.”
With this greater precision of definition than had previously been the case in regard to planets, we could easily define planetary science as the study of celestial bodies that (a) are in orbit around the Sun, (b) have sufficient mass for their self-gravity to overcome rigid body forces so that they assume a hydrostatic equilibrium (nearly round) shape, and (c) have cleared the neighbourhood around their orbits. Of course, this ultimately won’t do, because a comprehensive planetary science will want to study all three classes of celestial bodies detailed above, and will especially want to study the mechanisms of planet formation, dwarf planet formation, and small object formation for the light that each shines on the others. Like the Earth, which is part of a larger system, all the planets are part of larger systems, and how they relate to those systems will have much to teach us about solar system formation.
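The IAU's three clauses amount to a small decision procedure, and it can be instructive to see it laid out as one. A minimal sketch in Python, assuming boolean inputs for each criterion (the function and parameter names are my own inventions for illustration, not IAU terminology):

```python
def classify_body(orbits_sun: bool,
                  hydrostatic_equilibrium: bool,
                  cleared_neighbourhood: bool,
                  is_satellite: bool) -> str:
    """Classify a celestial body per the three categories of the 2006 IAU resolution."""
    # The resolution covers only non-satellite bodies orbiting the Sun.
    if not orbits_sun or is_satellite:
        return "outside the scope of the resolution"
    # Clause (1): round AND has cleared its orbital neighbourhood.
    if hydrostatic_equilibrium and cleared_neighbourhood:
        return "planet"
    # Clause (2): round but has NOT cleared its neighbourhood.
    if hydrostatic_equilibrium:
        return "dwarf planet"
    # Clause (3): everything else orbiting the Sun.
    return "small solar system body"

# Pluto: orbits the Sun, is nearly round, but has not cleared its orbit.
print(classify_body(True, True, False, False))  # dwarf planet
```

The sketch makes the logical structure of the demotion visible: Pluto fails only clause (c), and that single failure moves it from the first category to the second.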
This more comprehensive perspective brings us to the space sciences. What is space science? The Wikipedia entry on space sciences characterizes them in this way:
The term space science may mean:
●The study of issues specifically related to space travel and space exploration, including space medicine.
●Science performed in outer space (see space research).
●The study of everything in outer space; this is sometimes called astronomy, but more recently astronomy can also be regarded as a division of broader space science, which has grown to include other related fields.
It is interesting that this definition of space science does not mention cosmology, which is more and more coming to assume the role of the master category of the sciences, since it is ultimately cosmology that provides the context for everything else; but we could easily modify the last of the above three stipulations to read “cosmology” in place of “astronomy.” As the definition notes, the space sciences have grown to include other related fields, and in the future it may well be that the space sciences become the most comprehensive scientific category, providing the conceptual infrastructure within which all other scientific enterprises must be contextualized.
Since the Earth is a planet, and planets are to be found in space, one might readily assume that the Earth sciences, planetary sciences, and space sciences might be arranged in a nested hierarchy, with the earth sciences contained within the planetary sciences, and the planetary sciences in turn contained within the space sciences.
Conceptually this is correct, but genetically, i.e., in terms of historical descent, it is the sciences that we have created to study our home planet that, when generalized and applied beyond the surface of the Earth, become planetary science and space science.
Before space science and planetary science there were, of course, the familiar sciences of geology (later geomorphology), atmospheric science or meteorology (later climatology), oceanography, paleontology, and so forth, but it was only when the emergence of space science and planetary science placed these terrestrial sciences into a cosmological context that we came to see that the sciences that study the planet we call our home together constitute the Earth sciences, in contrast to, and really in the context of, space science and planetary science. Great strides have been made in this direction, but further work remains to be done.
We know that the Earth and its solar system are about 4.6 billion years old, and the most recent estimates for the age of the known universe put it at about 13.7 billion years. This means that the Earth has been around for almost exactly a third of the age of the entire universe, which is not an inconsiderable length of time. Our sun and its solar system stand in relation to other stars of a similar age, and these stars and solar systems with significant traces of heavier elements stand in certain relationships to earlier populations of stars. The whole history of the universe is present in the rocks of the Earth, and we have to keep this in mind as the knowledge base of the earth sciences expands.
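The “almost exactly a third” is easily checked from the two figures cited above:

```python
age_earth = 4.6      # billion years: Earth and its solar system
age_universe = 13.7  # billion years: estimated age of the known universe
fraction = age_earth / age_universe
print(round(fraction, 3))  # 0.336, just over one third
```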
While geological time scales are essentially geocentric, it would be possible to formulate an astrogeography and an astrogeographical time scale, extrapolating earth science to planetary science and thence to space science, that not only placed Earth’s geological history into cosmological context but also placed all planetary bodies and planetary systems and their geology in a cosmological context. For such an undertaking the generations of stars and planetary formation would be of central concern, and we could expect to see patterns across stars and solar systems of the same generations, and across planets within a given solar system.
This work has already begun, as can be seen in the above table laying out the geological histories of the Earth, the Moon, and Mars in parallel. Since one of the major theories for the formation of the Moon is that most of its substance was ripped out of the Earth by an enormous collision, the geological histories of the Earth and the Moon may ultimately be shown to coincide.
Stars and planets formed from the same clouds of dust and debris, filled with the remnants of the nucleosynthesis of earlier populations of stars. This is now familiar to everyone. Galaxies, in turn, formed from stars, and thus also reflect a generational index marking a galaxy’s position in the natural history of the universe.
Since we now also believe that all or almost all spiral galaxies (and perhaps also other non-spiral or irregular galaxies) have a supermassive black hole at their centers, I have lately come to think of entire galaxies as the vast “solar systems” of supermassive black holes. In other words, a supermassive black hole is to a galaxy as a star is to a solar system. As planetary systems formed around newly born stars, galaxies formed around newly born black holes (if their gravity was sufficiently strong to form such a system). This way of thinking about galaxies introduces another parallelism between the microcosm of the solar system and the macrocosm of the universe at large, the structure of which is defined by galaxies, clusters of galaxies, and superclusters.
All of this falls within a single natural history of which we are a part.
Our history and the history of the universe are one and the same.
. . . . .
. . . . .
. . . . .
. . . . .
26 September 2012
An Hypothesis in the Theory of Civilization
Not long ago in Eo-, Eso-, Exo-, Astro- I discussed how Joshua Lederberg’s distinctions between eobiology, esobiology, and exobiology can be used as a model for the concepts of eocivilization, esocivilization, and exocivilization, all of which are anterior to the more comprehensive conception of astrocivilization (like the more comprehensive conception of astrobiology).
My post on Eo-, Eso-, Exo-, Astro- was in part a correction to my earlier post Eo-, Eso-, Astro-, in which I had contrasted eobiology to exobiology, when I should have been contrasting esobiology to exobiology.
I had derived the contrast of eobiology and exobiology from Steven J. Dick and James E. Strick’s excellent book The Living Universe: NASA and the Development of Astrobiology, in which they cite Lederberg’s contrast of these terms. I had initially drawn the wrong contrast between the two concepts. When I started to read Lederberg’s writings, I realized that Lederberg was making a dramatic contrast between the scientific study of origins and the scientific study of destiny, rather than the contrast I expected. However, the contrast I originally drew remains a valid schema for understanding the comprehensive conception of astrobiology — and, by extension, the comprehensive conception of astrocivilization.
Astrobiology may be understood as the integration of esobiology — our biology, terrestrial biology — and exobiology — biology not of the Earth — into a comprehensive whole that places life in a cosmological context. Parallel to this, I define astrocivilization as the integration of esocivilization — our civilization, terrestrial civilization — and exocivilization — civilization not of the Earth — into a comprehensive whole that places civilization in a cosmological context. These concepts are not merely parallel, but the parallel between concepts of biology and concepts of civilization follows from a naturalistic conception of civilization as an extension of biology.
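Schematically, and in my own notation (treating each term as the domain of phenomena the corresponding science studies), the parallel definitions can be written:

```latex
\text{astrobiology} \;=\; \text{esobiology} \,\cup\, \text{exobiology}
\qquad \parallel \qquad
\text{astrocivilization} \;=\; \text{esocivilization} \,\cup\, \text{exocivilization}
```

The union here is only a first approximation: the comprehensive conception is not a mere juxtaposition of the terrestrial and the extraterrestrial, but their integration within a single cosmological context.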
Civilization can be understood as a greatly elaborated result of behavioral adaptation. Just as evolutionary gradualism takes us imperceptibly over countless generations from the simple origins of life to the complexity of life we know today, so too evolutionary gradualism in the development of civilization takes us imperceptibly over countless generations from the simplest behavioral adaptations to the complexity of behavioral adaptation that culminates in civilization — and which may well culminate in some further post-civilizational social institution. (We must add this last proviso so as not to be mistaken for advocating some kind of teleological conception of civilization, as one might expect, for example, from strong formulations of the anthropic cosmological principle — something I had tried to address in Formulating an Anthropic Cosmological Principle Worthy of the Name.)
In reformulating my contrast of eocivilization and exocivilization as the contrast between esocivilization and exocivilization, the term “eocivilization” is freed up to assume its more etymologically accurate meaning, which properly should be “early civilization” (the prefix “eo-”, from the Greek, means “early”). This turns out to be a very useful concept, but it also points to an additional thesis in the theory of civilization.
As in astrobiology, in which we study life on Earth as a clue to life in the cosmos, so too in astrocivilization we study civilization on Earth as a clue to civilization in the universe. Life on Earth is the only life that we know of, and civilization on the Earth is the only civilization that we know of, but in so far as we approach life and civilization from the scientific perspective of methodological naturalism, we do not assume that these are necessarily the only instances of life or of civilization in the cosmos. There may be other instances of life and civilization of which we simply know nothing.
In light of the possibility of life and civilization elsewhere in the universe, but our only knowledge of civilization being terrestrial civilization, I will call the terrestrial eocivilization hypothesis the position that identifies early civilization, i.e., eocivilization, with terrestrial civilization. In other words, our terrestrial civilization is the earliest civilization to emerge in the cosmos. Thus the terrestrial eocivilization hypothesis is the civilizational parallel to the rare earth hypothesis, which maintains, contrary to the Copernican principle, that complex life such as that found on Earth is rare in the universe. I could call it the “rare civilization hypothesis” but I prefer “terrestrial eocivilization hypothesis.”
It is possible to further distinguish between the position that terrestrial civilization is the first and earliest civilization in the cosmos, and the position that terrestrial civilization is unique and the sole source of civilization in the cosmos. There may be exocivilizations that have emerged or will emerge after terrestrial civilization, meaning that there are several sources of civilization in the cosmos, but that terrestrial civilization is the earliest to emerge. Thus the terrestrial eocivilization thesis can be distinguished from the uniqueness of terrestrial civilization. We might call the non-uniqueness of industrial-technological civilization on the Earth the “multi-regional hypothesis” in astrocivilization (to borrow a term from hominid evolutionary biology), but I would prefer to simply call it the “Non-Uniqueness Thesis.”
In the event that human civilization expands cosmologically and is ultimately the source of civilization on exoplanets that are part of other solar systems and perhaps even other galaxies, the terrestrial eocivilization thesis will have more substantive content than it does at present, when (if the thesis is true) eocivilization is simply identical to all civilization in the cosmos. All we can say at present, however, is that terrestrial civilization is identical to all known civilization in the cosmos. To assert more than this is to assert the terrestrial eocivilization hypothesis, which is underdetermined and goes well beyond available evidence.
. . . . .
. . . . .
. . . . .
. . . . .