11 November 2014
Wittgenstein was not himself a positivist, but his early work, Tractatus Logico-Philosophicus, had such a profound influence on early twentieth century philosophy that the philosophy that we now identify as logical positivism was born from reading groups that got together to study Wittgenstein’s Tractatus — what I have elsewhere called The Ludwig Wittgenstein Reading Club — primarily the Vienna Circle.
Wittgenstein began his education as an engineer, and only later became interested in philosophy by way of the philosophy of mathematics then emerging from the work of Frege and Russell. It has been said that the early Wittgenstein approached philosophy like an engineer, setting out to drain the swamps of philosophy. A more familiar metaphor for Wittgenstein’s philosophy, though for the later rather than the earlier Wittgenstein, is that of philosophy as a kind of therapy:
“A philosopher is a man who has to cure many intellectual diseases in himself before he can arrive at the notions of common sense.”
Wittgenstein, Culture and Value, 1944, p. 44e
Wittgenstein does not himself use the term “therapy” or “therapeutic,” but frequently recurs to the theme in other words:
“In philosophizing we may not terminate a disease of thought. It must run its natural course, and slow cure is all important. (That is why mathematicians are such bad philosophers.)”
Wittgenstein, Zettel, 382
The idea of philosophy as therapy is not entirely new. In my Variations on the Theme of Life I noted the medieval tradition of conceiving philosophers as “doctors of the soul”:
“During late antiquity philosophers were sometimes called ‘doctors of the soul.’ Later yet, Avicenna was a practicing physician in addition to being both a logician and a philosopher, and he stands at the head of a tradition of doctor-philosophers among the Arabs. All this has a superficial resemblance to the contemporary conception of philosophy as therapy, but in reality it is the antithesis of the modern conception of philosophy as a sickness in need of therapy, of scholarship as an illness, and of the philosopher as corrupt and corrupting.”
Variations on the Theme of Life, section 767
Every age must confront the ancient and perennial questions of philosophy anew, because each age has its own, peculiar therapeutic needs. It has become a commonplace of contemporary commentary, at least since the middle of the twentieth century, that the pace and busyness of our civilization today are driving us insane, and in so far as this is true, we are more in need of therapy than previous ages.
In my previous post, Philosophy for Industrial-Technological Civilization, I suggested, contrary to Quine, that philosophy of science is not philosophy enough; that we also need philosophy of technology and philosophy of engineering, and to unify these aspects of the STEM cycle within the big picture, we need a philosophy of big history. There is only one problem with my vision for the overarching philosophy demanded by the world of today: there is no demand for it. No one is interested in my vision or, for that matter, any other vision of philosophy for the twenty-first century.
Previously I wrote three posts on contemporary anti-philosophy.
The most prestigious scientists of our time seem at one in their insistence upon the irrelevance of philosophy. A post on the SelfAwarePatterns blog, E.O. Wilson: Science, not philosophy, will explain the meaning of existence, brought my attention to E. O. Wilson’s recent statements belittling philosophy. SelfAwarePatterns has also written about Neil deGrasse Tyson’s “blanket dismissal of philosophy” in Neil deGrasse Tyson is wrong to dismiss all of philosophy, but he may have a point on some of it.
It is almost painful to watch Wilson’s oversimplifications in the above-linked “Big Think” piece, though I suspect his oversimplifications will find a wide and sympathetic audience. After implying the pointlessness of studying the history of philosophy and claiming that philosophy mostly consists of “failed models of how the brain works,” Wilson then appeals to the “full story of humanity” (without mentioning big history, though the interdisciplinary concatenation he mentions is very much in the spirit of big history), and formulates a point of view almost precisely the same as one I heard several times at the 2014 IBHA conference: once we have this big picture view of history, we no longer need to ask what the meaning of life is, because we will know it.
The inescapable reflexivity of philosophical thought means that any principled rejection of philosophy is itself a philosophical claim; unprincipled rejections, that is to say, dismissal without reason or argument, have no more standing than any other unprincipled claim. So the scientists who dismiss philosophy and give reasons for doing so are doing philosophy. The unfortunate consequence is that they are doing philosophy poorly, much like someone who dismisses science but who pontificates on matters scientific, and does so poorly. We are well familiar with this, as pseudo-science has been given a megaphone by the internet and other forms of mass media. Scientists are aware of the problem posed by pseudo-science, but seem to be blissfully unaware of the problem of pseudo-philosophy.
There is a book by Louis Althusser, Philosophy and the Spontaneous Philosophy of Scientists, that I have cited previously (in Fashionable Anti-Philosophy) since the title is so evocative, in which Althusser says, “…in every scientist there sleeps a philosopher or, to put it another way, that every scientist is affected by an ideology or a scientific philosophy which we propose to call by the conventional name: the spontaneous philosophy of the scientists…” It is this spontaneous philosophy of scientists that we see in the anti-philosophical pronouncements of E. O. Wilson and Neil deGrasse Tyson.
Not only eminent scientists, but also science popularizers share this attitude. Michio Kaku’s recent book, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind, is essentially a speculative work in the philosophy of mind. There is a pervasive yet implicit Kantianism running through Kaku’s book of which I am sure he is unaware, because, like most scientists today who write on philosophical topics, he has not bothered to study the philosophical literature. If one knows that one is arguing a neo-Kantian position on the transcendental aesthetic, in trying to come to terms with how the barrage of sensory data is somehow translated into an apparently smooth and unitary stream of consciousness, then one can simply consult the literature to learn where the argument over the transcendental aesthetic stands today and what the standard arguments are for and against contemporary Kantianism; but without this basic knowledge, one does little more than repeat what has already been said — better — by others, and long ago. Even Sam Harris, who has some background in philosophy, gives his exposition of determinism in a philosophical vacuum, as though the work of philosophers such as Robert Kane, Helen Steward, and Alfred R. Mele simply did not exist, or is beneath notice.
The anti-philosophy and pseudo-philosophy of prominent scientists is an instance of the spontaneous philosophy noted by Althusser. But this spontaneous expression of uninformed philosophical speculation does not come out of nowhere; it has a basis, albeit dimly understood, in the nature of science itself. What is the nature of science itself? I have an answer to this, but it is not an answer that will be welcome to most of those in science today: science is philosophy. That is to say, science is a particular branch of philosophy, that branch once called natural philosophy, and it is natural philosophy practiced in accordance with methodological naturalism. Science is a narrow slice of a far more comprehensive conception of the world.
Scientists are philosophers without realizing they are philosophers, and when they pronounce upon philosophical questions without reference to the philosophical tradition — which is much broader and more pluralistic than any one, single branch of philosophy, such as natural philosophy — they do little more than restate their presuppositions as principles. Given the preeminent role of science within industrial-technological civilization, this willful ignorance of philosophy, and of the position of science in relation to philosophy, is not only holding back both science and philosophy, it is holding back civilization.
The next stage of development of our civilization (not to mention the macro-evolution of our civilization into another kind of civilization) will not come about until science utterly abandons the positivistic assumptions that are today the unquestioned yet implicit presuppositions of scientific inquiry, and science extends the scientific method, and the sense of responsibility to empirical evidence, beyond the confines of any one branch of philosophy to the whole of philosophy. To paraphrase Plato, until philosophers theorize as scientists or those who are now called scientists and leading thinkers genuinely and adequately philosophize, that is, until science and philosophy entirely coincide, while the many natures who at present pursue either one exclusively are forcibly prevented from doing so, civilization will have no rest from evils… nor, I think, will the human race.
. . . . .
. . . . .
. . . . .
. . . . .
12 June 2014
Scientific civilization changes when scientific knowledge changes, and scientific knowledge changes continuously. Science is a process, and that means that scientific civilization is based on a process, a method. Science is not a set of truths to which one might assent, or from which one might withhold one’s assent. It is rather the scientific method that is central to science, and not any scientific doctrine. Theories will evolve and knowledge will change as the scientific method is pursued, and the method itself will be refined and improved, but method will remain at the heart of science.
Pre-scientific civilization was predicated on a profoundly different conception of knowledge: the idea that truth is to be found at the source of being, the fons et origo of the world (as I discussed in my last post, The Metaphysics of the Bureaucratic Nation-State). Knowledge here consists of delineating the truth of the world prior to its later historical accretions, which are to be stripped away to the extent possible. More experience of the world only further removes us from the original source of the world. The proper method of arriving at knowledge is either through the study of the original revelation of the original truth, or through direct communion with the source and origin of being, which remains unchanged to this day (according to the doctrine of divine impassibility).
The central conceit of agrarian-ecclesiastical civilization to be based upon revealed eternal verities has been so completely overturned that its successor civilization, industrial-technological civilization, recognizes no eternal verities at all. Even the scientific method, that drives the progress of science, is continually being revised and refined. As Marx put it in the Communist Manifesto: “All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air…”
Scientific civilization always looks forward to the next development in science that will resolve our present perplexities, but this comes at the cost of posing new questions that further put off the definitive formulation of scientific truth, which remains perpetually incomplete even as it expands and becomes more comprehensive.
This has been recently expressed by Kevin Kelly in an interview:
“Every time we use science to try to answer a question, to give us some insight, invariably that insight or answer provokes two or three other new questions. Anybody who works in science knows that they’re constantly finding out new things that they don’t know. It increases their ignorance, and so in a certain sense, while science is certainly increasing knowledge, it’s actually increasing our ignorance even faster. So you could say that the chief effect of science is the expansion of ignorance.”
The Technium: A Conversation with Kevin Kelly [02.03.2014]
Scientific civilization, then, is not based on a naïve belief in progress, as is often alleged, but rather embodies an idea of progress that is securely founded in the very nature of scientific knowledge. There is nothing naïve in the scientific conception of knowledge; on the contrary, the scientific conception of knowledge had a long and painfully slow gestation in western civilization, and it is rather the paradigm that science supplants, the theological conception of knowledge (according to which all relevant truths are known from the outset, and are never subject to change), that is the naïve conception of knowledge, sustainable only in the infancy of civilization.
We are coming to understand that our own civilization, while not yet mature, is a civilization that has developed beyond its infancy to the degree that the ideas and institutions of infantile civilization are no longer viable, and if we attempt to preserve these ideas and institutions beyond their natural span, the result may be catastrophic for us. And so we have come to the point of conceptualizing our civilization in terms of existential risk, which is a thoroughly naturalistic way of thinking about the fate and future of humanity, and is amenable to scientific treatment.
It would be misleading to attribute our passing beyond the infancy of civilization to the advent of the particular civilization we have today, industrial-technological civilization. Even without the industrial revolution, scientific civilization would likely have gradually come to maturity, in some form or another, as the scientific revolution dates to that period of history that could be called modern civilization in the narrow sense — what I have called Modernism without Industrialism. And here by “maturity” I do not mean that science is exhausted and can produce no new scientific knowledge, but that we become reflexively aware of what we are doing when we do science. That is to say, scientific maturity is when we know ourselves to be engaged in science. In so far as “we” in this context means scientists, this was probably largely true by the time of the industrial revolution; in so far as “we” means mass man of industrial-technological civilization, it is not yet true today.
The way in which science enters into industrial-technological civilization — i.e., by way of spurring forward the open loop of industrial-technological civilization — means that science has been incorporated as an integral part of the civilization that immediately and disruptively followed the scientific civilization of modernism without industrialism (according to the Preemption Hypothesis). While the industrial revolution disrupted and preempted almost every aspect of the civilization that preceded it, it did not disrupt or preempt science, but rather gave a new urgency to science.
In several posts I have speculated on possible counterfactual civilizations (according to the counterfactuals implicit in naturalism), that is to say, forms of civilization that were possible but which were not actualized in history. One counterfactual civilization might have been agrarian-ecclesiastical civilization undisrupted by the scientific or industrial revolutions. Another counterfactual civilization might have been modern civilization in the narrow sense (i.e., Modernism without Industrialism) coming to maturity without being disrupted and preempted by the industrial revolution. It now occurs to me that yet another counterfactual form of civilization could have been that of industrialization without the scientific conception of knowledge or the systematic application of science to industry.
How could this work? Is it even possible? Perhaps not, and certainly not in the long term, or with high technology, which cannot exist without substantial scientific understanding. But the simple expedient of powered machinery might have come about by the effort of tinkerers, as did much of the industrial revolution as it happened. If we look at the halting and inconsistent efforts in the ancient world to produce large scale industries we get something of this idea, and this we could call industrialism without modernity. Science was not yet at the point at which it could be very helpful in the design of machinery; none of the sciences were yet mathematicized. And yet some large industrial enterprises were built, though few in number. It seems likely that it was not the lack of science that limited industrialization in classical antiquity, but the slave labor economy, which made labor-saving devices pointless.
There are, today, many possibilities for the future of civilization. Technically, these are future contingents (like Aristotle’s sea battle tomorrow), and as history unfolds one of these contingencies will be realized while the others become counterfactuals or are put off yet further. And in so far as there is a finite window of opportunity for a particular future contingent to come into being, beyond that window all unactualized contingents become counterfactuals.
. . . . .
. . . . .
I have written more on the nature of scientific civilization in…
. . . . .
. . . . .
. . . . .
. . . . .
24 November 2013
The world, we are learning every day, is a very large place. Or perhaps I should say that the universe is a very large place. It is also a very complex and strange place. J. B. S. Haldane famously said that, “I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.” (Possible Worlds and Other Papers, 1927, p. 286) In other words, human beings, no matter how valiantly they attempt to understand the universe, may not be cognitively equipped to understand it; our minds may not be the kind of minds that can understand the kind of place that the world is.
This idea of our inability to understand the world in which we find ourselves (an admirably humble Copernican insight that we might call metaphysical modesty, and which stands in contrast to epistemic hubris) has received many glosses since Haldane’s time. Most notable (notable, at least, from my perspective) are the evolutionary gloss, the quantum physics gloss, and the philosophical gloss. I will consider each of these in turn.
In terms of evolution, there is no reason to suppose that descent with modification in a context of a struggle for vital resources on the plains of Africa (the environment of evolutionary adaptedness, or EEA) is going to produce minds capable of understanding higher dimensional spatial manifolds or quantum physics at microscopic scales that differ radically from the macroscopic scales of ordinary human perception. Alvin Plantinga (about whom I wrote some time ago in A Note on Plantinga, inter alia) has used this argument for theological purposes. However, there is no intrinsic reason that a mind born in the mud and the muck cannot raise itself above its origins and come to contemplate the world in Copernican terms. The evolutionary argument cuts both ways, and since we have ourselves as the evidence of an organism that can raise itself from strictly survival behavior to forms of thought that have nothing to do with survival, from the perspective of the weak anthropic principle this is proof enough that natural selection can result in such a mind.
In terms of quantum theory, we are all familiar with famous quotes from the leading lights of quantum theory as to the essential incomprehensibility of that theory. For example, Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” However, I have observed (in The limits of my language are the limits of my world and elsewhere) that recent research is making strides in working around the epistemic limitations of quantum theory, revealing its uncertainties to be not absolute and categorical, but rather subject to careful and painstaking narrowing that renders the uncertainty a little less uncertain. I anticipate two developments that will emerge from the further elaboration of quantum theory: 1) the finding of ways to gradually and incrementally chip away at an absolutist conception of uncertainty (as just mentioned), and 2) the formulation of more adequate intuitions to make quantum theory more palatable to the human mind.
In terms of philosophy, Colin McGinn’s book Problems in Philosophy: The Limits of Inquiry formulates a position which he calls Transcendental Naturalism:
“Philosophy is an attempt to get outside the constitutive structure of our minds. Reality itself is everywhere flatly natural, but because of our cognitive limits we are unable to make good on this general ontological principle. Our epistemic architecture obstructs knowledge of the real nature of the objective world. I shall call this thesis transcendental naturalism, TN for short.” (pp. 2-3)
I have previously written about McGinn’s work in Transcendental Non-Naturalism and Naturalism and Object Oriented Ontology, inter alia. Our ability to get outside the constitutive structure of our minds is severely limited at best, and so our ability to understand the world as it is is limited at best.
While our cognitive abilities are admittedly limited (for all the reasons discussed above, as well as other reasons not discussed), these limits are not absolute, but rather admit of revision. McGinn’s position as stated above implies a false dichotomy between staying within the constitutive structure of our minds and getting outside it. This is a classic case of facing the sheer cliff of Mount Improbable: while it is impossible to get outside our cognitive architecture in one fell swoop, we can little by little transgress the boundaries of our cognitive architecture, each time ever-so-slightly expanding our capacities. Incrementally over time we improve our ability to stand outside those limits that once marked the boundaries of our cognitive architecture. Thus in an ironic twist of intellectual history, the evolutionary argument, rather than demonstrating metaphysical modesty, is the key to limiting the limitations on the human mind.
All of this is related to one of the central problems in the philosophy of science of our time — the whole Kuhnian legacy that is the framework of so much contemporary philosophy of science. Copernican revelations and revolutions, which formerly disturbed our anthropocentric bias every few hundred years, now occur with alarming frequency. The difference today, of course, is that science is much more advanced than it was with past Copernican revelations and revolutions — it has much more advanced instrumentation available to it (as a result of the STEM cycle), and we have a much better idea of what to look for in the cosmos.
It was a shock to almost everyone to have it scientifically demonstrated that the universe is not static and eternal, but dynamic and changing. It was a shock when quantum theory demonstrated the world to be fundamentally indeterministic. This is by now a very familiar narrative. In fact, it is so familiar that it has been expropriated (dare I say exapted?) by obscurantists and irrationalists of our time, who point at continual changes in scientific knowledge as “proof” that science doesn’t give us any “truth” because it changes. The assumption here is that change in scientific knowledge demonstrates the weakness of science; in fact, change in scientific knowledge is the strength of science. Scientific knowledge is what I have elsewhere called an intelligent institution in so far as it is institutionalized knowledge, but that institution is formulated with internal mechanisms that facilitate the re-shaping of the institution itself over time. That mechanism is the scientific method.
It is important to see that the overturning of familiar conceptions of the world — some of which are ancient and some of which are not — is not arbitrary. Less comprehensive conceptions are being replaced by more comprehensive conceptions. The more comprehensive our perspective on the world, the greater the number of anomalies we must face, and the greater the number of anomalies we face the more likely it is that our theories will be overturned, or at least partially falsified. But it is the wrong debate to ask whether theory change is rational or irrational. It is misleading, because what ought to concern us is how well our theories account for the ever-larger world that is revealed to us through our ever-more comprehensive methods of science, and not how well our theories conform to our presuppositions about rationality. If we get the science right, reason will follow, shaping new intuitions and formulating new theories.
Our ability to discover (and to understand) ever greater scales of the universe is contingent upon our growing intellectual capabilities, which are cumulative. Just as in the STEM cycle science begets technologies that beget industries that create better scientific instruments, so too on a purely intellectual level the intellectual capabilities of one generation are the formative context of the intellectual capabilities of the next generation, which allows the later generation to exceed the earlier generation. Concepts are the tools of the mind, and we use our familiar concepts to create the next generation of concepts, which latter are both more refined and more powerful than the former, in the same way as we use each generation of tools to build the next generation of tools, which makes each generation of tools better than the last (except for computer software — but I expect that this degradation in the practicability of computer software is simply the software equivalent of planned obsolescence).
Our current generation of tools — both conceptual and technological — are daily revealing to us the inadequacy of our past conceptions of the world. Several recent discoveries have in particular called into question our understanding of the size of the world, especially in so far as the world is defined in terms of its origins in the Big Bang. For example, the discovery of hyperclusters suggests physical structures of the universe that are larger than the upper limit set on physical structures by contemporary cosmological theories (cf. ‘Hyperclusters’ of the Universe — “Something is Behaving Very Strangely”).
In a similar vein, writing of the recent discovery of a “large quasar group” (LQG) as much as four billion light years across, the article The Largest Discovered Structure in the Universe Contradicts Big-Bang Theory Cosmology states:
“This LQG challenges the Cosmological Principle, the assumption that the universe, when viewed at a sufficiently large scale, looks the same no matter where you are observing it from. The modern theory of cosmology is based on the work of Albert Einstein, and depends on the assumption of the Cosmological Principle. The principle is assumed, but has never been demonstrated observationally ‘beyond reasonable doubt’.”
This formulation gets the order of theory and observation wrong. The cosmological principle is not a principle that can be proved or disproved by evidence; it is a theoretical idea that is used to give structure and meaning to observations, to organize observations into a theoretical whole. The cosmological principle belongs to theoretical cosmology; recent discoveries such as hyperclusters and large quasar groups belong to observational cosmology. While the two — i.e., theoretical and observational — cannot be separated in the practice of science, it is also true that they are not identical. Theoretical methods are distinct from observational methods, and vice versa.
Thus the cosmological principle may be helpful or unhelpful in organizing our knowledge of the cosmos, but it is not the kind of thing that can be falsified in the same way that, for example, a theory of planetary formation can be falsified. That is to say, the cosmological principle might be opposed to (falsified by) another principle that negates the cosmological principle, but this anti-cosmological principle will similarly belong to an order not falsifiable by empirical observations.
The discoveries of hyperclusters and LQGs are particularly problematic because they question some of the fundamental assumptions and conclusions of Big Bang cosmology, which is, essentially, the only large scale cosmological model in contemporary science. Big Bang cosmology is the explanation for the structure of the cosmos that was formulated in response to the discovery of the red shift, which implies that, on the largest observable scales, the universe is expanding. It is important to add the qualification, “on the largest observable scales” because stars within a given galaxy are circulating around the galaxy, and while a given star may be moving away from another given star, it is also likely to be moving toward yet some other star. And, even at larger scales, not all galaxies are receding from each other. It is fairly well known that galaxies collide and commingle; the Helmi stream of our own Milky Way is the result of a long past galactic collision, and at some far time in the future the Milky Way itself will merge with the larger Andromeda galaxy, and be absorbed by it.
Cosmology during the period of the Big Bang theory (a period in which we still find ourselves today) is in some respects like biology before Darwin. Almost all biology before Darwin was essentially theological, but no one had a better idea, so biology had to wait until after Darwin to become a science capable of methodologically naturalistic formulations. The Big Bang theory was, on the contrary, proposed as a scientific theory (not merely bequeathed to us by pre-scientific tradition), and most scientists working within the Big Bang tradition have formulated the Big Bang in meticulously naturalistic terms. Nevertheless, once the steady state theory was overthrown, no one really had an alternative to the Big Bang theory, so all cosmology centered on the Big Bang for lack of imagination of alternatives — but also due to the limitations of the scientific instruments, which at the time of the triumph of the Big Bang theory were much more modest than they are today.
As disconcerting as it was to discover that the cosmos did not embody an eternal order, that it is expanding and had a history of violent episodes, and that it was much larger than an “island universe” comprising only the Milky Way, the observations that we need to explain today are no less disconcerting in their own way.
Here is how Leonard Susskind describes our contemporary observations of the expanding universe:
“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”
Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
This observation has not yet been sufficiently appreciated. What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole becomes unobservable, and that places such matters beyond the reach of science understood in a narrow sense of observation. As I noted above, in the practice of science we cannot disentangle the theoretical and the observational, yet the two are not the same. While our observations come to an end at the cosmic horizon, our principles encounter no such boundary. Thus it is that we naturally extrapolate our science beyond the boundaries of observation, but if we get our scientific principles wrong, anything beyond the boundary of observation will be wrong and will be incapable of correction by observation.
Science in the narrow sense must, then, come to an end with observation. But this does not satisfy the mind. One response is to deny the mind its satisfaction and refuse to pass beyond observation. Another response is to fill the void with mythology and fiction. Yet another response is to take up the principles on their own merits and consider them in the light of reason. This response is the philosophical response, and we see that it is a rational response to the world that is continuous with science even when it passes beyond science.
. . . . .
. . . . .
. . . . .
17 November 2013
Inefficiency in the STEM cycle
In my previous post, The Open Loop of Industrial-Technological Civilization, I ended on the apparently pessimistic note of the existential risks posed to industrial-technological civilization by friction and inefficiency in the STEM cycle that drives our civilization headlong into the future. Much that is produced by the feedback loop of science, technology, and engineering is dissipated in science that does not result in technologies, technologies that are not engineered into industries, and industries that do not produce new scientific instruments. However, just enough science feeds into technology, technology into engineering, and engineering into science to keep the STEM cycle going.
These “inefficiencies” should not be seen as a “bad” thing, since much pure science that is valuable as an intellectual contribution to civilization has few if any practical consequences. The “inefficient” science that does not contribute directly to the STEM cycle is some of the best science that does humanity credit. Indeed, G. H. Hardy was famously emphatic that all practical mathematics was “ugly” and only pure mathematics, untainted by practical application, was truly beautiful — and Hardy made it clear that beautiful mathematics was ultimately the only thing that mattered. Thus these “inefficiencies” that appear to weaken the STEM cycle and hence pose an existential risk to our industrial-technological civilization, are at the same time existential opportunities — as always, risk and opportunity are one and the same.
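The claim that “just enough” feeds forward to keep the cycle going can be made vivid with a toy model in which each stage passes only a fraction of its output to the next, and the cycle persists only if the round-trip gain exceeds one. All of the fractions and the gain factor here are invented for illustration, not measured quantities:

```python
# A toy model of the STEM feedback loop: each stage passes only a fraction
# of its output onward, and the cycle compounds only if the round-trip
# gain exceeds 1. All numbers are invented for illustration.

def stem_round_trip(science, f_tech=0.3, f_eng=0.5, f_sci=0.4, gain=20.0):
    """One pass: science -> technology -> engineering -> new science."""
    technology = f_tech * science       # science that yields technologies
    engineering = f_eng * technology    # technologies engineered into industries
    return gain * f_sci * engineering   # industries yielding new instruments/science

level = 1.0
for generation in range(5):
    level = stem_round_trip(level)

# round-trip gain = 20 * 0.3 * 0.5 * 0.4 = 1.2 > 1, so the cycle grows
# despite most of its output being dissipated at every stage
```

Lower any one transfer fraction far enough and the same loop decays instead of compounding, which is the pessimistic scenario of the previous post.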
Opportunities of the STEM cycle
The apparently pessimistic formulation of my previous post took this form:
“It is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could increase the amount of epiphenomenal science, technology, and engineering, thus decreasing the efficiency of the STEM cycle.”
Such a formulation must be balanced by an appropriate and parallel formulation to the effect that it is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could decrease the amount of epiphenomenal science, technology, and engineering, thus increasing the efficiency of the STEM cycle.
However, making the STEM cycle more “efficient” might well be catastrophic, or nearly catastrophic, for civilization, as it would imply a narrowing of human life to the parameters defined by the STEM cycle. This might lead to a realization of the existential risks of permanent stagnation (i.e., the stagnation of all aspects of civilization other than those that advance industrial-technological civilization, which could prove frightening) or flawed realization, in which an acceleration or consolidation of the STEM cycle leads to the sort of civilization no one would find desirable or welcome.
There is no reason, however, that one could not both strengthen the STEM cycle, making industrial-technological civilization more robust and more productive of advanced science, technology, and engineering, and at the same time produce more pure science, more marginal technologies, and more engineering curiosities that don’t feed directly into the STEM cycle. The bigger the pie, the bigger each piece of the pie and the more there is to go around for everyone. Also, pure science and practical science exist in a cycle of mutual escalation of their own, in which pure science inspires practical science and practical science inspires more pure science. Perhaps the same is true of marginal and practical technologies, and of the engineering of curiosities and the engineering of mass industries.
Scaling the STEM cycle
The dissipation of the excess productions of the STEM cycle means that unexpected sectors of the economy (as well as unexpected sectors of society) are occasionally the recipients of disproportionate inputs. These disproportionate inputs, like the inefficiencies discussed above, might be understood as either risks or opportunities. Some socioeconomic sectors might be catastrophically stressed by a disproportionate input, while others might unexpectedly flourish with such an input. To control the possibilities of catastrophic failure or flourishing success, we must consider the possibility of scaling the STEM cycle.
To what degree can the STEM cycle be scaled? By this question I mean that, once we are explicitly and consciously aware that it is the STEM cycle that drives industrial-technological civilization (or, minimally, that it is among the drivers of industrial-technological civilization), if we want to further drive that civilization forward (as I would like to see it driven until earth-originating life has established extraterrestrial redundancy in the interest of existential risk mitigation) can we consciously do so? To what extent can the STEM cycle be controlled, or can its scaling be controlled? Can we consciously direct the STEM cycle so that more science begets more technology, more technology begets more engineering, and more engineering begets more science? I think that we can. But, as with the matters discussed above, we must always be aware of the risk/opportunity trade-off. Focusing too much on the STEM cycle may have disadvantages.
Once we understand an underlying mechanism of civilization, like the STEM cycle, we can consciously cultivate this mechanism if we wish to see more of this kind of civilization, or we can attempt to dampen this mechanism if we want to see less of this civilization. These attempts to cultivate or dampen a mechanism of civilization can take microscopic or macroscopic forms. Macroscopically, we are concerned with the total picture of civilization; microscopically we may discern the smallest manifestations of the mechanism, as when the STEM cycle is purposefully pursued by the R&D division of a business, which funds a certain kind of science with an eye toward creating certain technologies that can be engineered into specific industries — all in the interest of making a profit for the shareholders.
This last example is a very conscious exemplification of the STEM cycle, one that might conceivably be reduced to the work of a single individual, working in turn as scientist, technologist, and engineer. The very narrowness of this process, which is likely to produce specific and quantifiable results, is also likely to produce very little in terms of epiphenomenal manifestations of the STEM cycle, and thus may contribute little or nothing to the more edifying dimensions of civilization. But this is not necessarily the case. Arno Penzias and Robert Wilson were working as scientists trying to solve a practical problem for Bell Labs when they discovered the cosmic microwave background radiation.
Reason for Hope
We have at least as much reason to hope for the future as to despair of the future, if not more reason to hope. The longer civilization persists, the more robust it becomes, and the more robust civilization becomes, the more internal diversity and experimentation civilization can tolerate (i.e., greater social differentiation, as Siggi Becker has recently pointed out to me). The extreme social measures taken in the past to enforce conformity within society have been softened in Western civilization, and individuals have a great deal of latitude that was unthinkable even in the recent past.
Perhaps more significantly from the perspective of civilization, the more robust and tolerant our civilization, the more latitude there is for like-minded individuals to cooperate in founding and advancing innovative social movements which, if they prove effective and meet a need, can result in real change to the overall structure of society. This sort of bottom-up social change was precisely the kind of change that agrarian-ecclesiastical civilization was structured to frustrate, resist, and suppress. In this respect, if in no other, we have seen social progress in the development of civilization that is distinct from the technological and economic progress that characterizes the STEM cycle.
As I wrote in my recent Centauri Dreams post, SETI, METI, and Existential Risk, to exist is to be subject to existential risk. Given the relation of risk and opportunity, it is also the case that to exist is to choose among existential opportunities. This is why we fight so desperately to stay alive, and struggle so insistently to improve our condition once we have secured the essentials of existence. To be alive is to have countless existential opportunities within reach; once we die, all of this is lost to us. And to improve one’s condition is to increase the actionable existential opportunities within one’s grasp.
The development of civilization, for all its faults and deficiencies, is tending toward increasing the range of existential opportunities available as “live options” (as William James would say) for both individuals and communities. That this increased range of existential opportunities also comes with an increased variety of existential risks should not be employed as an excuse to attempt to reverse the real social gains bequeathed by industrial-technological civilization.
. . . . .
. . . . .
. . . . .
23 October 2013
Prediction in Science
One of the distinguishing features of science as a system of thought is that it makes testable predictions. The fact that scientific predictions are testable suggests a methodology of testing, and we call the scientific methodology of testing experiment. Hypothesis formation, prediction, experimentation, and resultant modification of the hypothesis (confirmation, disconfirmation, or revision) are all essential elements of the scientific method, which constitutes an escalating spiral of knowledge as the scientific method systematically exposes predictions to experiment and modifies its hypotheses in the light of experimental results, which leads in turn to new predictions.
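As a minimal sketch of this escalating loop, consider a hypothesis (here, a guessed linear law) repeatedly tested against noisy “experiments” and revised in light of the results. Everything in the sketch (the true law, the noise, the update rule) is an invented stand-in, chosen only to exhibit the cycle of prediction, test, and revision:

```python
# A minimal sketch of the hypothesis-test loop: predict, experiment,
# compare, and revise the hypothesis in light of the result.
import random

random.seed(0)
true_slope = 3.0  # the "law of nature" the hypothesis is trying to capture

def experiment(x):
    """Observe the world at x, with measurement noise."""
    return true_slope * x + random.gauss(0, 0.1)

slope = 1.0  # initial hypothesis: y = slope * x
for trial in range(100):
    x = random.uniform(1, 2)
    predicted = slope * x                        # prediction from the hypothesis
    observed = experiment(x)                     # test it against the world
    slope += 0.1 * (observed - predicted) * x    # revise in light of the result

# after many rounds the hypothesis converges toward the true law
```

Each pass through the loop generates a new prediction, and each experimental result modifies the hypothesis, which is the “escalating spiral” in miniature.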
The escalating spiral of knowledge that science cultivates naturally pushes that knowledge into the future. Sometimes scientific prediction is even formulated in reference to “new facts” or “temporal asymmetries” in order to emphasize that predictions refer to future events that have not yet occurred. In constructing an experiment, we create a new set of facts in the world, and then interpret these facts in the light of our hypothesis. It is this testing of hypotheses by experiment that establishes the concrete relationship of science to the world, and this is also a source of limitation, for experiments are typically designed in order to focus on a single variable and to that end an attempt is made to control for the other variables. (A system of thought that is not limited by the world is not science.)
Alfred North Whitehead captured this artificial feature of scientific experimentation in a clever line that points to the difference between scientific predictions and predictions of a more general character:
“…experiment is nothing else than a mode of cooking the facts for the sake of exemplifying the law. Unfortunately the facts of history, even those of private individual history, are on too large a scale. They surge forward beyond control.”
Alfred North Whitehead, Adventures of Ideas, New York: The Free Press, 1967, Chapter VI, “Foresight,” p. 88
There are limits to prediction, and not only those pointed out by Whitehead. The limits to prediction have been called the prediction wall. Beyond the prediction wall we cannot penetrate.
The Prediction Wall
John Smart has formulated the idea of a prediction wall in his essay, “Considering the Singularity,” as follows:
With increasing anxiety, many of our best thinkers have seen a looming “Prediction Wall” emerge in recent decades. There is a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex and yet amazingly stable technological systems, erected like vast beehives, systems tended to by large swarms of only partially aware human beings, each of which has only a very limited conceptualization of the new technological environment that we have constructed.
Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn’t technologically dependent these days?), where the ten-year business plans of the 1950’s have been replaced with ten-week (quarterly) plans of the 2000’s, and where planning beyond two years in some fields may often be unwise speculation. But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In “Science Fiction Without the Future,” 2001, Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It’s as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.) © 2003 by John Smart (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)
I would like to suggest that there are at least two prediction walls: synchronic and diachronic. The prediction wall formulated above by John Smart is a diachronic prediction wall: it is the onward-rushing pace of events, one following the other, that eventually defeats our ability to see any recognizable order or structure in the future. The kind of prediction wall to which Whitehead alludes is a synchronic prediction wall, in which it is the outward eddies of events in the complexity of the world’s interactions that make it impossible for us to give a complete account of the consequences of any one action. (Cf. Axes of Historiography)
Retrodiction and the Historical Sciences
Science does not live by prediction alone. While some philosophers of science have questioned the scientificity of the historical sciences because they could not make testable (and therefore falsifiable) predictions about the future, it is now widely recognized that the historical sciences don’t make predictions, but they do make retrodictions. A retrodiction is a prediction about the past.
The Oxford Dictionary of Philosophy by Simon Blackburn (p. 330) defines retrodiction thusly:
retrodiction The hypothesis that some event happened in the past, as opposed to the prediction that an event will happen in the future. A successful retrodiction could confirm a theory as much as a successful prediction.
As with predictions, there is also a limit to retrodiction, and this is the retrodiction wall. Beyond the retrodiction wall we cannot penetrate.
I haven’t been thinking about this idea for long enough to fully understand the ramifications of a retrodiction wall, so I’m not yet clear about whether we can distinguish diachronic retrodiction from synchronic retrodiction. Or, rather, it would be better to say that the distinction can certainly be made, but that I cannot think of good contrasting examples of the two at the present time.
We can define a span of accessible history that extends from the retrodiction wall in the past to the prediction wall in the future as what I will call effective history (by analogy with effective computability). Effective history can be defined in a way that is closely parallel to effectively computable functions, because all of effective history can be “reached” from the present by means of finite, recursive historical methods of inquiry.
Effective history is not fixed for all time, but expands and contracts as a function of our knowledge. At present, the retrodiction wall is the Big Bang singularity. If anything preceded the Big Bang singularity we are unable to observe it, because the Big Bang itself effectively obliterates any observable signs of any events prior to itself. (Testable theories have been proposed that suggest the possibility of some observable remnant of events prior to the Big Bang, as in conformal cyclic cosmology, but this must at present be regarded as only an early attempt at such a theory.)
Prior to the advent of scientific historiography as we know it today, the retrodiction wall was fixed at the beginning of the historical period narrowly construed as written history, and at times the retrodiction wall has been quite close to the present. When history experiences one of its periodic dark ages that cuts it off from its historical past, little or nothing may be known of a past that was once familiar to everyone in a given society.
The emergence of agrarian-ecclesiastical civilization effectively obliterated human history before itself, in a manner parallel to the Big Bang. We know that there were caves that prehistorical peoples visited generation after generation for time out of mind, over tens of thousands of years — much longer than the entire history of agrarian-ecclesiastical civilization, and yet all of this was forgotten as though it had never happened. This long period of prehistory was entirely lost to human memory, and was not recovered again until scientific historiography discovered it through scientific method and empirical evidence, and not through the preservation of human memory, from which prehistory had been eradicated. And this did not occur until after agrarian-ecclesiastical civilization had lapsed and entirely given way to industrial-technological civilization.
We cannot define the limits of the prediction wall as readily as we can define the limits of the retrodiction wall. Predicting the future in terms of overall history has been more problematic than retrodicting the past, and equally subject to ideological and eschatological distortion. The advent of modern science compartmentalized scientific predictions and made them accurate and dependable — but at the cost of largely severing them from overall history, i.e., human history and the events that shape our lives in meaningful ways. We can make predictions about the carbon cycle and plate tectonics, and we are working hard to be able to make accurate predictions about weather and climate, but, for the most part, our accurate predictions about the future dispositions of the continents do not shape our lives in the near- to mid-term future.
I have previously quoted a famous line from Einstein: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We might adapt this line to the relation of scientific law to human history, and say that insofar as scientific laws of nature predict events, these events are irrelevant to human history, and insofar as predicted events are relevant to human beings, scientific laws of nature cannot predict them.
Singularities Past and Future
As the term “singularity” is presently employed — as in the technological singularity — the recognition of a retrodiction wall in the past complementary to the prediction wall in the future provides a literal connection between the historiographical use of “singularity” and the use of the term “singularity” in cosmology and astrophysics.
Theorists of the singularity hypothesis place a “singularity” in the future which constitutes an absolute prediction wall beyond which history is so transformed that nothing beyond it is recognizable to us. This future singularity is not the singularity of astrophysics.
If we recognize the actual Big Bang singularity in the past as the retrodiction wall for cosmology — and hence, by extension, for Big History — then an actual singularity of astrophysics is also at the same time an historical singularity.
. . . . .
I have continued my thoughts on the retrodiction wall in Addendum on the Retrodiction Wall.
. . . . .
. . . . .
. . . . .
2 February 2013
In my last post, The Science of Time, I discussed the possibility of taking an absolutely general perspective on time and how this can be done in a way that denies time or in a way that affirms time, after the manner of big history.
David Christian, whose books on big history and whose Teaching Company lectures on Big History have been seminal in the field, relates, by way of introduction to his final lectures, in which he switches from history to speculation on the future, that in his early big history courses his students felt as though they were cut off rather abruptly when he had brought them through 13.7 billion years of cosmic history only to drop them unceremoniously in the present without making any effort to discuss the future. It was this reaction that prompted him to continue beyond the present and to try to say something about what comes next.
Another way to understand this reaction of Christian’s students is that they wanted to see the whole of the history they have just been through placed in an even larger, more comprehensive context, and to do this requires going beyond history in the sense of an account of the past. To put the whole of history into a larger context means placing it within a cosmology that extends beyond our strict scientific knowledge of past and future — that which can be observed and demonstrated — and comprises a framework in the same scientific spirit but which looks beyond the immediate barriers to observation and demonstration.
Elsewhere in David Christian’s lectures (if my memory serves) he mentioned how some traditionalist historians, when they encounter the idea of big history, reject the very idea because history has always been about documents and is confined, by its very name, to the historical period in which documents were kept after the advent of literacy. According to this reasoning, anything that happened prior to the invention of written language is, by definition, not history. I have myself encountered similar reasoning as, for example, when it is claimed that prehistory is not history at all because it happened prior to the existence of written records, which latter define history.
This is a sadly limited view of history, but apparently it is a view with some currency because I have encountered it in many forms and in different contexts. One way to discredit any intellectual exercise is to define it so narrowly that it cannot benefit from the most recent scientific knowledge, and then to impugn it precisely for its narrowness while not allowing it to change and expand as human knowledge expands. The explosion in scientific knowledge in the last century has made possible a scientific historiography that simply did not exist previously; to deny that this is history, on the grounds that traditional humanistic history is based on written records, means that we must then define some new discipline, with all the characteristics of traditional history, but expanded to include our new knowledge. This seems like a perverse attitude to me, but for some people the label of their discipline is important.
Call it what you will then — call it big history, or scientific historiography, or the study of human origins, or deny that it is history altogether, but don’t try to deny that our knowledge of the past has expanded exponentially since the scientific method has been applied to the past.
In this same spirit, we need to recognize that a greatly expanded conception of history needs to reach into the future, that a scientific futurism needs to be part of our expanded conception of the totality of time and history — or whatever it is that results when we apply Russell’s generalization imperative to time. Once again, it would be unwise to be overly concerned with what we call this emerging discipline, whether it be the totality of time or the whole of time or temporal infinitude or ecological temporality or what Husserl called omnitemporality or even absolute time.
Part of this grand (historical) effort will be a future science of civilizations, as the long term and big picture conception of civilization is of central human interest in this big picture of time and history. We not only want to know the naturalistic answers to traditional eschatological questions — Where did we come from? Where are we going? — but we also want to know the origins and destiny of what we have ourselves contributed to the universe — our institutions, our ideas, civilization, the technium, and all the artifacts of human endeavor.
. . . . .
. . . . .
. . . . .
30 January 2013
F. H. Bradley, in his classic treatise Appearance and Reality: A Metaphysical Essay, made this oft-quoted comment:
“If you identify the Absolute with God, that is not the God of religion. If again you separate them, God becomes a finite factor in the Whole. And the effort of religion is to put an end to, and break down, this relation — a relation which, none the less, it essentially presupposes. Hence, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him. It is this difficulty which appears in the problem of the religious self-consciousness.”
I think many commentators have taken this passage as emblematic of what they believe to be Bradley’s religious sentimentalism, and of the yearning for religious belief (no longer possible for rational men) that characterized much of the school of thought that we now call “British Idealism.”
This is not my interpretation. I’ve read enough Bradley to know that he was no sentimentalist, and while his philosophy diverges radically from contemporary philosophy, he was committed to a philosophical, and not a religious, point of view.
Bradley was an elder contemporary of Bertrand Russell, who characterized Bradley as the grand old man of British idealism. This is from Russell’s Our Knowledge of the External World:
“The nature of the philosophy embodied in the classical tradition may be made clearer by taking a particular exponent as an illustration. For this purpose, let us consider for a moment the doctrines of Mr Bradley, who is probably the most distinguished living representative of this school. Mr Bradley’s Appearance and Reality is a book consisting of two parts, the first called Appearance, the second Reality. The first part examines and condemns almost all that makes up our everyday world: things and qualities, relations, space and time, change, causation, activity, the self. All these, though in some sense facts which qualify reality, are not real as they appear. What is real is one single, indivisible, timeless whole, called the Absolute, which is in some sense spiritual, but does not consist of souls, or of thought and will as we know them. And all this is established by abstract logical reasoning professing to find self-contradictions in the categories condemned as mere appearance, and to leave no tenable alternative to the kind of Absolute which is finally affirmed to be real.”
Bertrand Russell, Our Knowledge of the External World, Chapter I, “Current Tendencies”
Although Russell rejected what he called the classical tradition, and distinguished himself in contributing to the origins of a new philosophical school that would come (in time) to be called analytical philosophy, the influence of figures like F. H. Bradley and J. M. E. McTaggart (whom Russell knew personally) can still be found in Russell’s philosophy.
In fact, the above quote from F. H. Bradley — especially the portion most quoted, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him — is a perfect illustration of a principle found in Russell, and something on which I have quoted Russell many times, as it has been a significant influence on my own thinking.
I have come to refer to this principle as Russell’s generalization imperative. Russell didn’t call it this (the terminology is mine), and he didn’t in fact give any name at all to the principle, but he implicitly employs this principle throughout his philosophical method. Here is how Russell himself formulated the imperative (which I last quoted in The Genealogy of the Technium):
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
One of the distinctive features that Russell identifies as constitutive of the classical tradition, and in fact one of the few explicit commonalities between the classical tradition and Russell’s own thought, was the denial of time. The British idealists denied the reality of time outright, in the best Platonic tradition; Russell did not deny the reality of time, but he was explicit about not taking time too seriously.
Despite Russell’s hostility to mysticism as expressed in his famous essay “Mysticism and Logic,” when it comes to the mystic’s denial of time, Russell softens a bit and shows his sympathy for this particular aspect of mysticism:
“Past and future must be acknowledged to be as real as the present, and a certain emancipation from slavery to time is essential to philosophic thought. The importance of time is rather practical than theoretical, rather in relation to our desires than in relation to truth. A truer image of the world, I think, is obtained by picturing things as entering into the stream of time from an eternal world outside, than from a view which regards time as the devouring tyrant of all that is. Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
“…impartiality of contemplation is, in the intellectual sphere, that very same virtue of disinterestedness which, in the sphere of action, appears as justice and unselfishness. Whoever wishes to see the world truly, to rise in thought above the tyranny of practical desires, must learn to overcome the difference of attitude towards past and future, and to survey the whole stream of time in one comprehensive vision.”
Bertrand Russell, Mysticism and Logic, and Other Essays, Chapter I, “Mysticism and Logic”
While Russell and the classical tradition in philosophy both perpetuated the devalorization of time, this attitude is slowly disappearing from philosophy, and contemporary philosophers are more and more treating time as another reality to be given philosophical exposition rather than denying its reality. I regard this as a salutary development and a riposte to all who claim that philosophy makes no advances. Contemporary philosophy of time is quite sophisticated, and embodies a much more honest attitude to the world than the denial of time. (For those looking at philosophy from the outside, the denial of the reality of time simply sounds like a perverse waste of time, but I won’t go into that here.)
In any case, we can bring Russell’s generalization imperative to time and history even if Russell himself did not do so. That is to say, we ought to generalize to the utmost in our conception of time, and if we do so, we come to a principle parallel to Bradley’s that I think both Russell and Bradley would have endorsed: short of the Absolute, time cannot rest, and, having reached that goal, time is lost and history with it.
Since I do not agree with this, though it is one logical extrapolation of Russell’s generalization imperative as applied to time, this suggests to me that there is more than one way to generalize about time. One way would be the kind of generalization that I formulated above, presumably consistent with Russell’s and Bradley’s devalorization of time. Time generalized in this way becomes a whole, a totality, that ceases to possess the distinctive properties of time as we experience it.
The other way to generalize time is, I think, in accord with the spirit of Big History: here Russell’s generalization imperative takes the form of embedding all times within larger, more comprehensive times, until we reach the time of the entire universe (or beyond). The science of time, as it is emerging today, demands that we seek the most comprehensive temporal perspective, placing human action in evolutionary context, placing evolution in biological context, placing biology in geomorphological context, placing terrestrial geomorphology in planetary context, and placing this planetary perspective in cosmological context. This, too, is a kind of generalization, and one that fully registers the imperative that to stop at any particular “level” of time (which I have elsewhere called ecological temporality) is arbitrary.
On my other blog I’ve written several posts related directly or obliquely to Big History as I try to define my own approach to this emerging school of historiography: The Place of Bilateral Symmetry in the History of Life, The Archaeology of Cosmology, and The Stars Down to Earth.
The more we pursue the rapidly growing body of knowledge revealed by scientific historiography, the more we find that we are part of the larger universe; our connections to the world expand as we pursue them outward in pursuit of Russell’s generalization imperative. I think it was Hans Blumenberg, in his enormous book The Genesis of the Copernican World, who remarked on the significance of the fact that we can stand with our feet on the earth and look up at the stars. As I remarked in The Archaeology of Cosmology, we now find that by digging into the earth we can reveal past events of cosmological history. As a celestial counterpart to this digging in the earth (almost as though concretely embodying the contrast to which Blumenberg referred), we know that by looking up at the stars we are also looking back in time, because the light that comes to us reaches us ages after it was produced. Thus astronomy is a kind of luminous archaeology.
In Geometrical Intuition and Epistemic Space I wrote, “…we have no science of time. We have science-like measurements of time, and time as a concept in scientific theories, but no scientific theory of time as such.” Scientists have tried to think scientifically about time, but a science of time eludes us, just as a science of consciousness eludes us. Here a philosophical perspective remains necessary, because there are so many open questions and no clear indication of how these questions are to be answered in a clearly scientific spirit.
Therefore I think it is too early to say exactly what Big History is, because we aren’t logically or intellectually prepared to say exactly what the Russellian generalization imperative yields when applied to time and history. I think that we are approaching a point at which we can clarify our concepts of time and history, but we aren’t quite there yet, and a lot of conceptual work is necessary before we can produce a definitive formulation of time and history that will make of Big History the science it aspires to be.
. . . . .
. . . . .
. . . . .
. . . . .
25 December 2012
Prior to the advent of civilization, the human condition was defined by nature. Evolutionary biologists call this initial human condition the environment of evolutionary adaptedness (or EEA). The biosphere of the Earth, with all its diverse flora and fauna, was the predominant fact of human experience. Very little that human beings did could have an effect on the human condition beyond the most immediate effects an individual might cause in the environment, such as gathering or hunting for food. Nothing was changed by the passage of human beings through an environment that was, for them, their home. Human beings had to conform themselves to this world or die.
Since the advent of civilization, it has been civilization and not nature that determines the human condition. As one civilization has succeeded another, and, more importantly, as one kind of civilization has succeeded another kind of civilization — which latter happens far less frequently, since like kinds of civilization tend to succeed each other except when this process of civilizational succession is preempted by the emergence of an historical anomaly on the order of the initial emergence of civilization itself — the overwhelming fact of human experience has been shaped by civilization and the products of civilization, rather than by nature. This transformation from being shaped by nature to being shaped by civilization is what makes the passage from hunter-gatherer nomadism to settled agrarian civilization such a radical discontinuity in human experience.
This transformation has been gradual. In the earliest period of human civilizations, an entire civilization might grow up from nothing, spread regionally, assimilating local peoples not previously included in the project of civilization, and then die out, all without coming into contact with another civilization. The growth of human civilization has meant a gradual and steady increase in the density of human populations. It has already been thousands of years since a civilization could flourish and fail without encountering another civilization. It has been, moreover, hundreds of years since all human communities were bound together through networks of trade and communication.
Civilization is now continuous across the surface of the planet. The world-city — Doxiadis’ Ecumenopolis, which I discussed in Civilization and the Technium — is already an accomplished fact (though it is called by another name, or no name at all). We retain our green spaces and our nature reserves, but all human communities ultimately are contiguous with each other, and there is no direction that you can go on the surface of the Earth without encountering another human community.
The civilization of the present, which I call industrial-technological civilization, is as distinct from the agricultural civilization (which I call agrarian-ecclesiastical civilization) that preceded it as agricultural civilization was distinct from the nomadic hunter-gatherer paradigm that preceded it in turn. In other words, the emergence of industrialization interpolated a discontinuity in the human condition on the order of the emergence of civilization itself. One of the aspects of industrial-technological civilization that distinguishes it from earlier agricultural civilization is the effective regimentation and indeed rigorization of the human condition.
The emergence of organized human activity, which corresponds to the emergence of the species itself, and which is therefore to be found in hunter-gatherer nomadism as much as in agrarian or industrial civilization, meant the emergence of institutions. At first, these institutions were as unsystematic and implicit as everything else in human experience. When civilizations began to abut each other in the agrarian era, it became necessary to make these institutions explicit and to formulate them in codes of law and regulation. At first, this codification itself was unsystematic. It was the emergence of industrialization that forced human civilizations to make their institutions not only explicit, but also systematic.
This process of systematization and rigorization is most clearly seen in the most abstract realms of thought. In the nineteenth century, when industrialization was beginning to transform the world, we see at the same time a revolution in mathematics that went beyond all the earlier history of mathematics. While Euclid famously systematized geometry in classical antiquity, it was not until the nineteenth century that mathematical thought grew to a point of sophistication that outstripped Euclid.
From classical antiquity up to industrialization, it was frequently thought, and frequently asserted, that Euclid was the perfection of human reason in mathematics and that Aristotle was the perfection of human reason in logic, and that there was simply nothing more to be done in these fields beyond learning to repeat the lessons of the masters of antiquity. In the nineteenth century, during the period of rapid industrialization, people began to think about mathematics and logic in a way that was more sophisticated and subtle than even the great achievements of Euclid and Aristotle. Separately, yet almost simultaneously, three different mathematicians (Bolyai, Lobachevski, and Riemann) formulated systems of non-Euclidean geometry. Similarly revolutionary work transformed logic from its Aristotelian syllogistic origins into what is now called mathematical logic, the result of the work of George Boole, Frege, Peano, Russell, Whitehead, and many others.
At the same time that geometry and logic were being transformed, the rest of mathematics was also being profoundly transformed. Many of these transformational forces have roots that go back hundreds of years in history. This is also true of the industrial revolution itself. The growth of European society as a result of state competition within the European peninsula, the explicit formulation of legal codes and the gradual departure from a strictly peasant subsistence economy, the similarly gradual yet steady spread of technology in the form of windmills and watermills, ready to be powered by steam when the steam engine was invented, are all developments that anticipate and point to the industrial revolution. But the point here is that the anticipations did not come to fruition until the nineteenth century.
And so with mathematics. Newton and Leibniz independently invented the calculus, but it rested on unsure foundations for centuries; Descartes had earlier made the calculus possible by his innovation of analytical geometry. These developments anticipated and pointed to the rigorization of mathematics, but that development did not come to fruition until the nineteenth century. The fruition is sometimes called the arithmetization of analysis, and involved the substitution of the limit method for Newton’s fluxions and Leibniz’s infinitesimals. This rigorous formulation of the calculus made possible engineering in its contemporary form, and rigorous engineering made it possible to bring the most advanced science of the day to the practical problems of industry. Intrinsically arithmetical realities could now be given a rigorous mathematical exposition.
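The substitution described above can be stated concretely. Where Newton’s fluxion and Leibniz’s infinitesimal both invoked a mysteriously vanishing quantity, the arithmetized calculus of Cauchy and Weierstrass defines the derivative by a limit, and defines the limit itself purely in terms of inequalities between ordinary numbers (this is the standard textbook formulation, given here only to illustrate what “arithmetization” means):

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},
\qquad \text{where} \qquad
\lim_{h \to 0} g(h) = L
\;\Longleftrightarrow\;
\forall \varepsilon > 0 \;\, \exists \delta > 0 :
\; 0 < |h| < \delta \implies |g(h) - L| < \varepsilon.
```

Nothing in this definition appeals to infinitely small quantities; every symbol ranges over ordinary real numbers, which is precisely what the phrase “arithmetization of analysis” names.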
Historians of mathematics and industrialization would probably cringe at my potted sketch of history, but here it is in sententious outline:
● Rigorization of mathematics (also called the arithmetization of analysis)
● Mathematization of science
● Scientific systematization of technology
● Technological rationalization of industry
I have discussed part of this cycle in my writings on industrial-technological civilization and the disruption of the industrial-technological cycle. The origins of this cycle involve the additional steps that made the cycle possible, and much of the additional steps are those that made logic, mathematics, and science rigorous in the nineteenth century.
The reader should also keep in mind the parallel rigorization of social institutions that occurred, including the transformation of the social sciences after the model of the hard sciences. Economics, which is particularly central to the considerations of industrial-technological civilization, has been completely transformed into a technical, mathematicized science.
With the rigorization of social institutions, and especially the economic institutions that shape human life from cradle to grave, it has been inevitable that the human condition itself should be made rigorous. Foucault was instrumental in pointing out salient aspects of this, which he called biopower, and which, I suggest, will eventually issue in technopower.
I am not suggesting that this has been a desirable, pleasant, or welcome development. On the contrary, industrial-technological civilization is beset in its most advanced quarters by a persistent apocalypticism and declensionism, as industrialized populations fantasize about the end of the social regime that has come to control almost every aspect of life.
I wrote about the social dissatisfaction that issues in apocalypticism in Fear of the Future. I’ve been thinking more about this recently, and I hope to return to this theme when I can formulate my thoughts with the appropriate degree of rigor. I am seeking a definitive formulation of apocalypticism and how it is related to industrialization.
. . . . .
. . . . .
. . . . .