27 November 2013
Immanuel Kant, in an often-quoted passage, spoke of, “…the starry heavens above me and the moral law within me.” Kant might have with equal justification spoken of the formal law within and the starry heavens above. There is a sense in which the formal laws of thought are the moral laws of the mind — in logic, a good thought is a rigorous thought — so that given sufficient latitude of translation, we can interpret Kant in this way — except that we know (as Nietzsche put it) that Kant was a moral fanatic à la Rousseau.
However we choose to interpret Kant, I would like to quote more fully from the passage in the Critique of Practical Reason where Kant invokes the starry heavens above and the moral law within:
“Two things fill the mind with ever new and increasing admiration and awe, the oftener and the more steadily we reflect on them: the starry heavens above and the moral law within. I have not to search for them and conjecture them as though they were veiled in darkness or were in the transcendent region beyond my horizon; I see them before me and connect them directly with the consciousness of my existence. The former begins from the place I occupy in the external world of sense, and enlarges my connection therein to an unbounded extent with worlds upon worlds and systems of systems, and moreover into limitless times of their periodic motion, its beginning and continuance. The second begins from my invisible self, my personality, and exhibits me in a world which has true infinity, but which is traceable only by the understanding, and with which I discern that I am not in a merely contingent but in a universal and necessary connection, as I am also thereby with all those visible worlds. The former view of a countless multitude of worlds annihilates as it were my importance as an animal creature, which after it has been for a short time provided with vital power, one knows not how, must again give back the matter of which it was formed to the planet it inhabits (a mere speck in the universe). The second, on the contrary, infinitely elevates my worth as an intelligence by my personality, in which the moral law reveals to me a life independent of animality and even of the whole sensible world, at least so far as may be inferred from the destination assigned to my existence by this law, a destination not restricted to conditions and limits of this life, but reaching into the infinite.”
Immanuel Kant, Critique of Practical Reason, 1788, translated by Thomas Kingsmill Abbott, Part 2, Conclusion
This passage is striking for many reasons, not least among them the degree to which Kant has assimilated the Copernican revolution, acknowledging Earth as a mere speck in the universe. Also particularly interesting is Kant’s implicit appeal to objectivity and realism, notwithstanding the fact that Kant himself established the tradition of transcendental idealism. Kant in this passage invokes the starry heavens above and the moral law within because they are independent of the individual …
Moreover, Kant identifies both the starry heavens above and the moral law within not only as objective and independent realities, but also as infinitistic. Just as Kant the idealist looks to the stars and the moral law in a realistic spirit, so Kant the proto-constructivist invokes the “…unbounded extent with worlds upon worlds” of the starry heavens and the moral law as, “…reaching into the infinite.” I have earlier and elsewhere observed how Kant’s proto-constructivism nevertheless involves spectacularly non-constructive arguments. In the passage quoted above both Kant’s proto-constructivism and his non-constructive moments are retained in lines such as, “exhibits me in a world which has true infinity,” which by invoking exhibition in intuition toes the constructivist line, while invoking true infinity allows a legitimate role for the non-constructive.
When it comes to constructivism, we can see that Kant is conflicted. He’s not the only one. One might call Aristotle the first constructivist (or, at least, the first proto-constructivist) as the originator of the idea of the potential infinite, and here (i.e., in the context of the above discussion of Kant’s use of the infinite) Aristotelian permissive finitism is particularly relevant. (Aristotle would likely not have had much sympathy for intuitionistic constructivism, with its rejection of tertium non datur.)
The Greek intellectual attitude to the infinite was complex and conflicted. I have written about this previously in Reason in Moderation and Salto Mortale. The Greek quest for harmony, order, and proportion rejected the infinite as something that transgresses the boundaries of good taste and propriety (dismissing the infinite as apeiron, in contradistinction to peras). Nevertheless, Greek philosophers routinely argued from the infinity and eternity of the world.
Here is a famous passage from Democritus, who, among the Greek philosophers, was perhaps best known for arguing for the infinity of the world, making the doctrine a virtual tenet among ancient atomists:
“Worlds are unlimited and of different sizes. In some worlds there is no Sun and Moon, in others, they are larger than in our world, and in others more numerous. … Intervals between worlds are unequal. In some parts there are more worlds, in others fewer; some are increasing, some at their height, some decreasing; in some parts they are arising, in others failing… There are some worlds devoid of living creatures or plants or any moisture.”
…and Epicurus on the same theme of the infinity of the world…
“…there is an infinite number of worlds, some like this world, others unlike it. For the atoms being infinite in number, as has just been proved, are borne ever further in their course. For the atoms out of which a world might arise, or by which a world might be formed, have not all been expended on one world or a finite number of worlds, whether like or unlike this one. Hence there will be nothing to hinder an infinity of worlds.”
Epicurus, Letter to Herodotus
There were also poetic invocations of the idea of the infinity of the world, demonstrating the extent to which the idea had penetrated popular consciousness in classical antiquity:
“When Alexander heard from Anaxarchus of the infinite number of worlds, he wept, and when his friends asked him what was the matter, he replied, ‘Is it not a matter for tears that, when the number of worlds is infinite, I have not conquered one?’”
Plutarch, Plutarch’s Morals: Ethical Essays, translated with notes and index by Arthur Richard Shilleto, London: George Bell and Sons, 1898, “On Contentedness of Mind,” section IV
Like poetry, history had particular prestige in the ancient world, and here the theme of the infinity of the world also occurs:
“…Constantius, elated by this extravagant passion for flattery, and confidently believing that from now on he would be free from every mortal ill, swerved swiftly aside from just conduct so immoderately that sometimes in dictation he signed himself ‘My Eternity,’ and in writing with his own hand called himself lord of the whole world — an expression which, if used by others, ought to have been received with just indignation by one who, as he often asserted, laboured with extreme care to model his life and character in rivalry with those of the constitutional emperors. For even if he ruled the infinity of worlds postulated by Democritus, of which Alexander the Great dreamed under the stimulus of Anaxarchus, yet from reading or hearsay he should have considered that (as the astronomers unanimously teach) the circuit of the whole earth, which to us seems endless, compared with the greatness of the universe has the likeness of a mere tiny point.”
Ammianus Marcellinus, Roman History (Res Gestae), Book XV, section 1
Like the passage from Kant quoted above, this passage is remarkable for its Copernican outlook, which shows that the ancients were capable of thinking not only in infinitistic terms, but also in more-or-less Copernican terms.
Lucretius was a follower of Epicurus, and gave one of the more detailed arguments for the infinity of the world to be found in ancient philosophy:
It matters nothing where thou post thyself,
In whatsoever regions of the same;
Even any place a man has set him down
Still leaves about him the unbounded all
Outward in all directions; or, supposing
A moment the all of space finite to be,
If some one farthest traveller runs forth
Unto the extreme coasts and throws ahead
A flying spear, is’t then thy wish to think
It goes, hurled off amain, to where ’twas sent
And shoots afar, or that some object there
Can thwart and stop it? For the one or other
Thou must admit; and take. Either of which
Shuts off escape for thee, and does compel
That thou concede the all spreads everywhere,
Owning no confines. Since whether there be
Aught that may block and check it so it comes
Not where ’twas sent, nor lodges in its goal,
Or whether borne along, in either view
‘Thas started not from any end. And so
I’ll follow on, and whereso’er thou set
The extreme coasts, I’ll query, “what becomes
Thereafter of thy spear?” ‘Twill come to pass
That nowhere can a world’s-end be, and that
The chance for further flight prolongs forever
The flight itself. Besides, were all the space
Of the totality and sum shut in
With fixed coasts, and bounded everywhere,
Then would the abundance of world’s matter flow
Together by solid weight from everywhere
Still downward to the bottom of the world,
Nor aught could happen under cope of sky,
Nor could there be a sky at all or sun:
Indeed, where matter all one heap would lie,
By having settled during infinite time.
Lucretius, De rerum natura
The above argument is one that is still likely to be heard today, in various forms. If you go to the edge of the universe and throw a spear, either it is stopped by the boundary of the universe, or it continues on, and, as Lucretius says, “For the one or other, Thou must admit.” If the spear is stopped, what stopped it? And if it continues on, into what does it continue?
Contemporary relativistic cosmology has a novel answer to this ancient idea: the universe is finite and unbounded, so that space wraps back around on itself. What this means for the spear-thrower at the edge of the universe is that if he throws the spear with enough force, it may travel around the cosmos and return to pierce him in the back. There is nothing to stop the spear, because the universe is unbounded, but since the universe is also finite the spear will eventually cross its own path if it continues to travel. I do not myself think that the universe is finite and unbounded in precisely the way many modern cosmologists argue, but I am not going to go into this interesting problem at the present time.
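The finite-and-unbounded reply can be illustrated with a toy model: reduce space to one dimension and let positions wrap around a circle, so that motion never meets a boundary even though the total extent of space is finite. A minimal sketch, in which the circumference is an arbitrary illustrative value:

```python
# Toy model of a one-dimensional "finite but unbounded" space: a circle.
# The spear never encounters an edge, yet total extent is finite; travel
# far enough and you return to your starting point.

CIRCUMFERENCE = 100.0  # arbitrary finite extent of this toy universe

def advance(position, distance):
    """Move along the circle; wrap-around replaces any 'edge' of space."""
    return (position + distance) % CIRCUMFERENCE

spear = advance(0.0, 250.0)  # thrown farther than the whole universe
print(spear)  # 50.0: the spear has circled the universe two and a half times
```

In three dimensions the same idea is realized by spatial curvature rather than simple modular wrap-around; the sketch captures only the logical point that “no boundary” does not entail “infinite extent,” which is precisely the horn of Lucretius’ dilemma that this cosmology escapes.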
Other than the response to Lucretius in terms of relativistic cosmology, with its curved spacetime — a material response to the Lucretian argument for the infinity of the world — there is another response, that of intuitionistic constructivism, which denies the law of the excluded middle (tertium non datur) — i.e., a formal response to Lucretius. Lucretius asserted that, “For the one or other, Thou must admit,” and this is exactly what the intuitionist does not admit. As with the relativistic response to Lucretius, I do not myself agree with the intuitionist response. I believe, consequently, that Lucretius’ argument is still valid in spirit, though it must be reformulated in order to be applicable to the world as revealed to us by contemporary science. I therefore take it as demonstrable that the universe is infinite, following the view of the ancient natural philosophers.
Within the overall context of Greek thought, within its contending finitist and infinitistic strains, Greek cosmology was non-constructive, and the Greeks asserted (and argued for) the infinity of the world on the basis of non-constructive argument. Perhaps it would even be fair to say that the Greeks assumed the universe to be infinite in extent, and they at times sought to justify this assumption by philosophical argument, while at other times they confined themselves to the sphere of the peras.
Much of contemporary science is constructivist in spirit, though this constructivism is rarely made explicit, except among logicians and mathematicians. By this I mean that the general drift of science ever since the scientific revolution has been toward bottom-up constructions on the basis of quantifiable evidence and away from top-down argument. I made this point previously in Advanced Thinking and A Non-Constructive World, as well as other posts, though I haven’t yet given a detailed formulation of this idea. Yet the emergence of a “quantum logic” in quantum theory that does away with the principle of the excluded middle is a clear expression of the increasing constructivism of science.
In A Non-Constructive World I also made the point that the world appears to have both constructive and non-constructive features. In several posts about constructivism (e.g., P or not-P) I have argued that constructivism and non-constructivism are complementary perspectives on formal thought, and that each needs the other for an adequate account of the world.
In so far as contemporary science is essentially constructive, it lacks a non-constructive perspective on the phenomena it investigates. This is, I believe, intrinsic to science, and to the kind of civilization that emerges from the application of science to the economy (viz. industrial-technological civilization). By the constructive methods of science we can attain ever larger and ever more comprehensive conceptions of the universe — such as I described in my previous post, The Size of the World — but these constructive methods will never reach the infinite universe contemplated by the ancient Greeks.
How could the logical framework employed by a scientist have any effect on what they see in the heavens? Well, constructive science is logically incapable of formulating the idea of an infinite universe in any sense other than an Aristotelian potential infinite. No one can observe the infinite (in the philosophy of mathematics we say that the infinite is “unsurveyable”). And if you cannot produce observational evidence of the infinite, then you cannot formulate a falsifiable theory of an infinite universe. Thus the infinity of the world is, in effect, ruled out by our methods.
No one should be surprised at the direct impact the ethos of formal thought has upon the natural sciences; one of the fundamental trends of the scientific revolution has been the mathematization of natural science, and one of the fundamental trends of mathematical rigor since the late nineteenth century has been the arithmetization of analysis, which has been taken as far as the logicization of mathematics. Logic and mathematics have been “finitized” and these finite formal methods have been employed in the rational reconstruction of the sciences.
I look forward to the day when the precision and rigor of formal methods employed in the natural sciences again includes infinitistic methods, and it once again becomes possible to formulate the thesis of the infinity of the world in science — and possible once again to see the world as infinite.
. . . . .
. . . . .
. . . . .
. . . . .
24 November 2013
The world, we are learning every day, is a very large place. Or perhaps I should say that the universe is a very large place. It is also a very complex and strange place. J. B. S. Haldane famously said that, “I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.” (Possible Worlds and Other Papers, 1927, p. 286) In other words, human beings, no matter how valiantly they attempt to understand the universe, may not be cognitively equipped to understand it; our minds may not be the kind of minds that can understand the kind of place that the world is.
This idea of our inability to understand the world in which we find ourselves (an admirably humble Copernican insight that we might call metaphysical modesty, and which stands in contrast to epistemic hubris) has received many glosses since Haldane’s time. Most notable (notable, at least, from my perspective) are the evolutionary gloss, the quantum physics gloss, and the philosophical gloss. I will consider each of these in turn.
In terms of evolution, there is no reason to suppose that descent with modification in a context of a struggle for vital resources on the plains of Africa (the environment of evolutionary adaptedness, or EEA) is going to produce minds capable of understanding higher dimensional spatial manifolds or quantum physics at microscopic scales that differ radically from the macroscopic scales of ordinary human perception. Alvin Plantinga (about whom I wrote some time ago in A Note on Plantinga, inter alia) has used this argument for theological purposes. However, there is no intrinsic reason that a mind born in the mud and the muck cannot raise itself above its origins and come to contemplate the world in Copernican terms. The evolutionary argument cuts both ways, and since we have ourselves as the evidence of an organism that can raise itself from strictly survival behavior to forms of thought that have nothing to do with survival, from the perspective of the weak anthropic principle this is proof enough that natural selection can result in such a mind.
In terms of quantum theory, we are all familiar with famous quotes from the leading lights of quantum theory as to the essential incomprehensibility of that theory. For example, Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” However, I have observed (in The limits of my language are the limits of my world and elsewhere) that recent research is making strides in working around the epistemic limitations of quantum theory, revealing its uncertainties to be not absolute and categorical, but rather subject to careful and painstaking narrowing that renders the uncertainty a little less uncertain. I anticipate two developments that will emerge from the further elaboration of quantum theory: 1) the finding of ways to gradually and incrementally chip away at an absolutist conception of uncertainty (as just mentioned), and 2) the formulation of more adequate intuitions to make quantum theory more palatable to the human mind.
In terms of philosophy, Colin McGinn’s book Problems in Philosophy: The Limits of Inquiry formulates a position which he calls transcendental naturalism:
“Philosophy is an attempt to get outside the constitutive structure of our minds. Reality itself is everywhere flatly natural, but because of our cognitive limits we are unable to make good on this general ontological principle. Our epistemic architecture obstructs knowledge of the real nature of the objective world. I shall call this thesis transcendental naturalism, TN for short.” (pp. 2-3)
I have previously written about McGinn’s work in Transcendental Non-Naturalism and Naturalism and Object Oriented Ontology, inter alia. Our ability to get outside the constitutive structure of our minds is severely limited at best, and so our ability to understand the world as it is is limited at best.
While our cognitive abilities are admittedly limited (for all the reasons discussed above, as well as other reasons not discussed), these limits are not absolute, but rather admit of revision. McGinn’s position as stated above implies a false dichotomy between staying within the constitutive structure of our minds and getting outside it. This is a classic case of facing the sheer cliff of Mount Improbable: while it is impossible to get outside our cognitive architecture in one fell swoop, we can little by little transgress the boundaries of our cognitive architecture, each time ever-so-slightly expanding our capacities. Incrementally over time we improve our ability to stand outside those limits that once marked the boundaries of our cognitive architecture. Thus, in an ironic twist of intellectual history, the evolutionary argument, rather than demonstrating metaphysical modesty, proves to be the key to limiting the limitations on the human mind.
All of this is related to one of the central problems in the philosophy of science of our time — the whole Kuhnian legacy that is the framework of so much contemporary philosophy of science. Copernican revelations and revolutions, which formerly disturbed our anthropocentric bias every few hundred years, now occur with alarming frequency. The difference today, of course, is that science is much more advanced than it was with past Copernican revelations and revolutions — it has much more advanced instrumentation available to it (as a result of the STEM cycle), and we have a much better idea of what to look for in the cosmos.
It was a shock to almost everyone to have it scientifically demonstrated that the universe is not static and eternal, but dynamic and changing. It was a shock when quantum theory demonstrated the world to be fundamentally indeterministic. This is by now a very familiar narrative. In fact, it is so familiar that it has been expropriated (dare I say exapted?) by obscurantists and irrationalists of our time, who point at continual changes in scientific knowledge as “proof” that science doesn’t give us any “truth” because it changes. The assumption here is that change in scientific knowledge demonstrates the weakness of science; in fact, change in scientific knowledge is the strength of science. Scientific knowledge is what I have elsewhere called an intelligent institution in so far as it is institutionalized knowledge, but that institution is formulated with internal mechanisms that facilitate the re-shaping of the institution itself over time. That mechanism is the scientific method.
It is important to see that the overturning of familiar conceptions of the world — some of which are ancient and some of which are not — is not arbitrary. Less comprehensive conceptions are being replaced by more comprehensive conceptions. The more comprehensive our perspective on the world, the greater the number of anomalies we must face, and the greater the number of anomalies we face the more likely it is that our theories will be overturned, or at least partially falsified. But it is the wrong debate to ask whether theory change is rational or irrational. It is misleading because what ought to concern us is how well our theories account for the ever-larger world revealed to us through the ever-more comprehensive methods of science, and not how well our theories conform to our presuppositions about rationality. If we get the science right, reason will follow, shaping new intuitions and formulating new theories.
Our ability to discover (and to understand) ever greater scales of the universe is contingent upon our growing intellectual capabilities, which are cumulative. Just as in the STEM cycle science begets technologies that beget industries that create better scientific instruments, so too on a purely intellectual level the intellectual capabilities of one generation are the formative context of the intellectual capabilities of the next generation, which allows the later generation to exceed the earlier generation. Concepts are the tools of the mind, and we use our familiar concepts to create the next generation of concepts, which latter are both more refined and more powerful than the former, in the same way as we use each generation of tools to build the next generation of tools, which makes each generation of tools better than the last (except for computer software — but I expect that this degradation in the practicability of computer software is simply the software equivalent of planned obsolescence).
Our current generation of tools — both conceptual and technological — is daily revealing to us the inadequacy of our past conceptions of the world. Several recent discoveries have in particular called into question our understanding of the size of the world, especially in so far as the world is defined in terms of its origins in the Big Bang. For example, the discovery of hyperclusters suggests physical structures of the universe that are larger than the upper limit set to physical structures by contemporary cosmological theories (cf. ‘Hyperclusters’ of the Universe — “Something is Behaving Very Strangely”).
In a similar vein, writing of the recent discovery of a “large quasar group” (LQG) as much as four billion light years across, the article The Largest Discovered Structure in the Universe Contradicts Big-Bang Theory Cosmology states:
“This LQG challenges the Cosmological Principle, the assumption that the universe, when viewed at a sufficiently large scale, looks the same no matter where you are observing it from. The modern theory of cosmology is based on the work of Albert Einstein, and depends on the assumption of the Cosmological Principle. The principle is assumed, but has never been demonstrated observationally ‘beyond reasonable doubt’.”
This formulation gets the order of theory and observation wrong. The cosmological principle is not a principle that can be proved or disproved by evidence; it is a theoretical idea that is used to give structure and meaning to observations, to organize observations into a theoretical whole. The cosmological principle belongs to theoretical cosmology; recent discoveries such as hyperclusters and large quasar groups belong to observational cosmology. While the two — i.e., theoretical and observational — cannot be separated in the practice of science, it is also true that they are not identical. Theoretical methods are distinct from observational methods, and vice versa.
Thus the cosmological principle may be helpful or unhelpful in organizing our knowledge of the cosmos, but it is not the kind of thing that can be falsified in the same way that, for example, a theory of planetary formation can be falsified. That is to say, the cosmological principle might be opposed to (falsified by) another principle that negates the cosmological principle, but this anti-cosmological principle will similarly belong to an order not falsifiable by empirical observations.
The discoveries of hyperclusters and LQGs are particularly problematic because they question some of the fundamental assumptions and conclusions of Big Bang cosmology, which is, essentially, the only large scale cosmological model in contemporary science. Big Bang cosmology is the explanation for the structure of the cosmos that was formulated in response to the discovery of the red shift, which implies that, on the largest observable scales, the universe is expanding. It is important to add the qualification, “on the largest observable scales” because stars within a given galaxy are circulating around the galaxy, and while a given star may be moving away from another given star, it is also likely to be moving toward yet some other star. And, even at larger scales, not all galaxies are receding from each other. It is fairly well known that galaxies collide and commingle; the Helmi stream of our own Milky Way is the result of a long past galactic collision, and at some far time in the future the Milky Way itself will merge with the larger Andromeda galaxy, and be absorbed by it.
Cosmology during the period of the big bang theory (a period in which we still find ourselves today) is in some respects like biology before Darwin. Almost all biology before Darwin was essentially theological, but no one had a better idea so biology had to wait to become a science capable of methodologically naturalistic formulations until after Darwin. The big bang theory was, on the contrary, proposed as a scientific theory (not merely bequeathed to us by pre-scientific tradition), and most scientists working within the big bang tradition have formulated the Big Bang in meticulously naturalistic terms. Nevertheless, once the steady state theory was overthrown, no one really had an alternative to the big bang theory, so all cosmology centered on the Big Bang for lack of imagination of alternatives — but also due to the limitations of the scientific instruments, which at the time of the triumph of the big bang theory were much more modest than they are today.
As disconcerting as it was to discover that the cosmos did not embody an eternal order, that it is expanding and had a history of violent episodes, and that it was much larger than an “island universe” comprising only the Milky Way, the observations that we need to explain today are no less disconcerting in their own way.
Here is how Leonard Susskind describes our contemporary observations of the expanding universe:
“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”
Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
This observation has not yet been sufficiently appreciated. What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole has become unobservable, and that places such matters beyond the reach of science understood in a narrow sense of observation. But as I have noted above, in the practice of science we cannot disentangle the theoretical and the observational, though the two are not the same. While our observations come to an end at the cosmic horizon, our principles encounter no such boundary. Thus it is that we naturally extrapolate our science beyond the boundaries of observation, but if we get our scientific principles wrong, anything beyond the boundary of observation will be wrong and will be incapable of correction by observation.
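Susskind’s figure of “about fifteen billion light years” is roughly the Hubble distance, the distance at which Hubble’s law v = H0·d gives a recession speed equal to the speed of light. A back-of-the-envelope sketch, assuming the commonly cited value H0 ≈ 70 km/s/Mpc and ignoring the refinements of relativistic cosmology:

```python
# Rough estimate of the Hubble distance, where recession speed equals c.
# From v = H0 * d, the horizon lies at d = c / H0. The value of H0 below
# is an assumed round figure, not a measured result.

C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # assumed Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.2616e6     # light years per megaparsec

d_mpc = C_KM_S / H0                  # Hubble distance in megaparsecs
d_gly = d_mpc * LY_PER_MPC / 1e9     # ... in billions of light years
print(round(d_gly, 1))               # prints 14.0
```

The result of roughly fourteen billion light years is of the same order as Susskind’s figure; the exact radius of the horizon depends on the cosmological model assumed, which is precisely the sense in which theory outruns observation here.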
Science in the narrow sense must, then, come to an end with observation. But this does not satisfy the mind. One response is to deny the mind its satisfaction and refuse to pass beyond observation. Another response is to fill the void with mythology and fiction. Yet another response is to take up the principles on their own merits and consider them in the light of reason. This response is the philosophical response, and we see that it is a rational response to the world that is continuous with science even when it passes beyond science.
. . . . .
. . . . .
. . . . .
21 November 2013
When Frank Drake first formulated his eponymous equation, the number of planetary systems in the universe (the second term in the Drake equation, fp) was an unknown among other unknowns. Now we are rapidly approaching a scientifically based quantification of this once unknown number. We now know that planetary systems are common, and moreover that planetary systems with smallish, rocky planets in the habitable zones of stars are relatively common. (Cf., e.g., Earth-Like Worlds “Very Common”)
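The Drake equation itself is nothing more than a product of seven factors, N = R* · fp · ne · fl · fi · fc · L. A minimal sketch, with all numeric inputs as purely illustrative placeholders rather than estimates:

```python
# The Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# Exoplanet surveys now constrain fp (and bear on ne); the remaining
# factors are still unknown. Every value below is a placeholder for
# illustration only, not an estimate.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Number of detectable civilizations as a product of seven factors."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(r_star=1.0,       # star formation rate, stars per year
          f_p=0.5,          # fraction of stars with planetary systems
          n_e=1.0,          # habitable planets per planetary system
          f_l=0.1,          # fraction of those on which life arises
          f_i=0.01,         # fraction developing intelligence
          f_c=0.1,          # fraction producing detectable signals
          lifetime=1000.0)  # years a civilization remains detectable
print(round(n, 4))          # prints 0.05 with these placeholder inputs
```

The point of the sketch is structural: pinning down any one factor observationally, as Kepler has begun to do for fp, narrows the range of the whole product without by itself settling N.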
As soon as we reached a level of technological and scientific expertise that made the search for exoplanets practical, we began to find them. The most recent exoplanet discoveries, and the recent announcement that planets and planetary systems are common, are primarily due to the NASA Kepler mission. According to the NASA website, the Kepler mission was…
“…specifically designed to survey a portion of our region of the Milky Way galaxy to discover dozens of Earth-size planets in or near the habitable zone and determine how many of the billions of stars in our galaxy have such planets.”
In this, the Kepler mission has been wildly successful. But in order to get to the point at which our civilization could conceive, design, build, and operate the Kepler mission we had to pass through thousands of years of development, and before our civilization developed to its current state of technological prowess, it took terrestrial biology billions of years of development to arrive at organisms capable of creating a civilization that could develop to this level.
Contrast the experience of Kepler’s exoplanet search with the experience of SETI, the search for extraterrestrial intelligence. What did not happen as soon as we began searching for SETI signals? We did not immediately begin hearing a whole range of intelligent extraterrestrial signals, a result that would have paralleled the immediate successes of the exoplanet search (immediate, that is, within the technological zone of proximal development). Both Kepler and SETI are searches of the sky. The Kepler mission gave nearly immediate results; SETI, by contrast, has been searching since Frank Drake conducted the first study in 1960. Drake found only an eerie silence, and ever since we have heard only an eerie silence. Once the technological threshold of the exoplanet search was reached, the search immediately discovered its object; once the technological threshold of SETI was reached, the search revealed nothing.
Please understand that, in making this observation, I am in no sense criticizing SETI efforts; I am not saying that SETI is a waste of effort, or a waste of money; I am not saying that SETI is wrongheaded or that it is not a science. On the contrary, I think SETI is interesting and important, and that includes the fact that SETI has found only an eerie silence — this is in itself important and interesting. We have discovered radio silence, except for natural sources. This tells us something about the universe. If there were a truly predatory peer civilization in our region of the Milky Way, we would expect it to go to the trouble of broadcasting its presence to the universe, in hope of luring unsuspecting peer civilizations. Like Odysseus having himself strapped to the mast of his ship so that he could hear the song of the Sirens while his crew rowed on oblivious, their ears stopped with wax, we would have to listen to such signals while restraining ourselves from rushing toward that fatal lure.
What we now know, as a result of SETI’s discovery of the eerie silence, is that METI (messaging extraterrestrial intelligence) beacons are not common. If METI beacons were common in the Milky Way, we would have heard them by now. There may yet be METI beacons, but they are not the first thing that you hear when you begin a SETI program (unlike looking for exoplanets and finding them as soon as you have the capability of looking). If METI beacons exist, they are rare and difficult to find. I think we can go further than this, and assert with some degree of confidence that there is no alien “super-civilization” in our galactic neighborhood constructing vast mega-engineering projects and pumping out high-power EM spectrum emissions that would be easily detectable by any technological civilization that suddenly had the idea to begin listening for such signals.
I wrote above that SETI and exoplanet searches are sensitive to a technological threshold. We passed the SETI threshold in the 1960s, and we passed the exoplanet search threshold in the first decade of the twenty-first century. There is a further technological threshold, which is also an economic threshold — the ability to detect the unintentional EM spectrum radiation “leakage” from technological civilizations that have not had the interest or the resources to establish a METI beacon, but which, like us, radiate EM spectrum signals as an epiphenomenal expression of their industrial-technological civilization. I say that this is also an economic threshold because, as James Benford and colleagues have taken pains to point out, there is considerable expense associated with establishing a METI beacon. (This is something I discussed in my Centauri Dreams post SETI, METI, and Existential Risk; James Benford responded on Centauri Dreams with James Benford: Comments on METI; my post on Centauri Dreams, along with responses from Benford and from David Brin, received quite a few comments, so if the reader is interested, it is worthwhile to follow the links and read the ensuing discussion.)
If METI is “shouting to the galaxy” (as James Benford put it), then the unintentional leakage of EM spectrum radiation of industrial-technological civilization is not shouting to the galaxy but rather whispering to the cosmos, and in order to be able to hear a whisper we must listen intently — holding our breath and putting a hand to our ear. Whether or not we choose to listen intently for whispers from the cosmos, we have not yet reached the developmental stage of civilization in which this is practical, though we seem to be moving in that direction. If we should continue the trajectory of our technological development — which, as I see it, entails both increasing automation and routine travel between Earth and space — such an effort will be within our grasp within the coming century.
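The difference between shouting and whispering can be put in rough quantitative terms: received flux falls off with the inverse square of distance, so at a given range the only difference between a beacon and leakage is effective transmitted power. A minimal sketch, with all power figures hypothetical:

```python
# A rough inverse-square illustration of why unintentional leakage is a
# "whisper": received flux falls as 1/(4*pi*d^2), so at equal distance the
# beacon/leakage difference is purely one of effective transmitted power.
# All figures below are hypothetical, chosen only for illustration.
import math

LY_M = 9.461e15  # meters per light-year

def flux(eirp_watts, distance_ly):
    """Received flux (W/m^2) from an isotropic-equivalent transmitter."""
    d = distance_ly * LY_M
    return eirp_watts / (4 * math.pi * d ** 2)

# Hypothetical comparison: a dedicated beacon at 1e10 W effective power
# versus broadcast-scale leakage at 1e5 W, both heard at 50 light-years.
beacon = flux(1e10, 50)
leakage = flux(1e5, 50)
print(beacon / leakage)  # the beacon is some 100,000 times "louder"
```

The receiving end must make up that factor with collecting area, integration time, and sensitivity, which is why listening for whispers is an economic threshold as much as a technological one.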
Advanced industrial-technological civilizations will, by definition, know much more than we know. Their science will be commensurate with their technology and their engineering, since their civilization, if it is an industrial-technological peer civilization (and in so far as industrial-technological civilization is defined by the STEM cycle, which I believe to be the case), will experience the advance of science joined inseparably to the advance of technology and engineering. What would they do with this epistemic advantage? Such an epistemic advantage presents the possibility of SETI and METI asymmetry. We have an asymmetrical advantage over civilizations at an earlier stage of development, just as older industrial-technological civilizations would have an asymmetrical advantage over us, with the ability to find us while concealing themselves.
The developmental direction of industrial-technological civilization as defined by the STEM cycle means that any advanced industrial-technological civilization will be “backward compatible” with earlier forms of technological communication. We might not (yet) be able to build a quantum entanglement transmitter in order to communicate instantaneously over cosmic distances (even though we can conceive the possibility), but an advanced peer civilization will be able to listen for our EM spectrum radiation leakage, in the same way that we today could continue to look for signs of ETI compatible with earlier stages of industrial-technological civilization. Carl Friedrich Gauss suggested geometrical shapes laid out in wheat in the wastes of Siberia to get the attention of extraterrestrials, while Joseph von Littrow suggested trenches filled with burning oil in the Sahara. It is interesting in this context that, although our civilization had the technology to pursue these methods, no one undertook them on a large scale.
When, in the future, we have the ability to image the surfaces of exoplanets with large space-based telescopes, we could look for attempted signals within the capability of less developed civilizations to produce, such as those suggested by Gauss and Littrow. But when it comes to advanced peer civilizations, we lack the knowledge of what to look for. The more advanced the civilization, the farther it lies beyond our civilizational zone of proximal development (ZPD). Yet the more advanced a civilization, the earlier its origins must lie in the history of the universe, and at some point in the development of the universe (going backward in time toward its origins) it would not have been possible for an industrial-technological civilization to emerge, because far enough back in time the elements necessary to an industrial-technological civilization did not yet exist. So there seems to be a window of development in the history of the universe for the emergence of industrial-technological civilizations. This strikes me as a non-anthropocentric way of expressing one formulation of the anthropic cosmological principle (and an idea worth developing further, since I have been searching for a formulation of the anthropic cosmological principle worthy of the name).
In an optimistic assessment of our place in the universe, we could hope that any substantially more advanced civilization could serve as the “more knowledgeable other” (MKO) that would facilitate our progress through the civilizational zone of proximal development.
. . . . .
. . . . .
. . . . .
17 November 2013
Inefficiency in the STEM cycle
In my previous post, The Open Loop of Industrial-Technological Civilization, I ended on the apparently pessimistic note of the existential risks posed to industrial-technological civilization by friction and inefficiency in the STEM cycle that drives our civilization headlong into the future. Much that is produced by the feedback loop of science, technology, and engineering is dissipated in science that does not result in technologies, technologies that are not engineered into industries, and industries that do not produce new scientific instruments. However, just enough science feeds into technology, technology into engineering, and engineering into science to keep the STEM cycle going.
These “inefficiencies” should not be seen as a “bad” thing, since much pure science that is valuable as an intellectual contribution to civilization has few if any practical consequences. The “inefficient” science that does not contribute directly to the STEM cycle is some of the best science that does humanity credit. Indeed, G. H. Hardy was famously emphatic that all practical mathematics was “ugly” and only pure mathematics, untainted by practical application, was truly beautiful — and Hardy made it clear that beautiful mathematics was ultimately the only thing that mattered. Thus these “inefficiencies” that appear to weaken the STEM cycle and hence pose an existential risk to our industrial-technological civilization, are at the same time existential opportunities — as always, risk and opportunity are one and the same.
Opportunities of the STEM cycle
The apparently pessimistic formulation of my previous post took this form:
“It is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could increase the amount of epiphenomenal science, technology, and engineering, thus decreasing the efficiency of the STEM cycle.”
Such a formulation must be balanced by an appropriate and parallel formulation to the effect that it is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could decrease the amount of epiphenomenal science, technology, and engineering, thus increasing the efficiency of the STEM cycle.
However, making the STEM cycle more “efficient” might well be catastrophic, or nearly catastrophic, for civilization, as it would imply a narrowing of human life to the parameters defined by the STEM cycle. This might lead to a realization of the existential risks of permanent stagnation (i.e., the stagnation of all aspects of civilization other than those that advance industrial-technological civilization, which could prove frightening) or flawed realization, in which an acceleration or consolidation of the STEM cycle leads to the sort of civilization no one would find desirable or welcome.
There is no reason, however, that one could not strengthen the STEM cycle, making industrial-technological civilization more robust and more productive of advanced science, technology, and engineering, while at the same time also producing more pure science, more marginal technologies, and more engineering curiosities that don’t feed directly into the STEM cycle. The bigger the pie, the bigger each piece of the pie and the more there is to go around for everyone. Also, pure science and practical science exist in a cycle of mutual escalation of their own, in which pure science inspires practical science and practical science inspires more pure science. Perhaps the same is true of marginal and practical technologies, and of the engineering of curiosities and the engineering of mass industries.
Scaling the STEM cycle
The dissipation of the excess products of the STEM cycle means that unexpected sectors of the economy (as well as unexpected sectors of society) are occasionally the recipients of disproportionate inputs. These disproportionate inputs, like the inefficiencies discussed above, might be understood as either risks or opportunities. Some socioeconomic sectors might be catastrophically stressed by a disproportionate input, while others might unexpectedly flourish because of one. To control the possibilities of catastrophic failure or flourishing success, we must consider the possibility of scaling the STEM cycle.
To what degree can the STEM cycle be scaled? By this question I mean that, once we are explicitly and consciously aware that it is the STEM cycle that drives industrial-technological civilization (or, minimally, that it is among the drivers of industrial-technological civilization), if we want to further drive that civilization forward (as I would like to see it driven until earth-originating life has established extraterrestrial redundancy in the interest of existential risk mitigation), can we consciously do so? To what extent can the STEM cycle be controlled, or can its scaling be controlled? Can we consciously direct the STEM cycle so that more science begets more technology, more technology begets more engineering, and more engineering begets more science? I think that we can. But, as with the matters discussed above, we must always be aware of the risk/opportunity trade-off. Focusing too much on the STEM cycle may have disadvantages.
Once we understand an underlying mechanism of civilization, like the STEM cycle, we can consciously cultivate this mechanism if we wish to see more of this kind of civilization, or we can attempt to dampen this mechanism if we want to see less of this civilization. These attempts to cultivate or dampen a mechanism of civilization can take microscopic or macroscopic forms. Macroscopically, we are concerned with the total picture of civilization; microscopically we may discern the smallest manifestations of the mechanism, as when the STEM cycle is purposefully pursued by the R&D division of a business, which funds a certain kind of science with an eye toward creating certain technologies that can be engineered into specific industries — all in the interest of making a profit for the shareholders.
This last example is a very conscious exemplification of the STEM cycle, one that might conceivably be reduced to the work of a single individual, working in turn as scientist, technologist, and engineer. The very narrowness of this process, which is likely to produce specific and quantifiable results, is also likely to produce very little in terms of epiphenomenal manifestations of the STEM cycle, and thus may contribute little or nothing to the more edifying dimensions of civilization. But this is not necessarily the case. Arno Penzias and Robert Wilson were working as scientists trying to solve a practical problem for Bell Labs when they discovered the cosmic microwave background radiation.
Reason for Hope
We have at least as much reason to hope for the future as to despair of the future, if not more reason to hope. The longer civilization persists, the more robust it becomes, and the more robust civilization becomes, the more internal diversity and experimentation civilization can tolerate (i.e., greater social differentiation, as Siggi Becker has recently pointed out to me). The extreme social measures taken in the past to enforce conformity within society have been softened in Western civilization, and individuals have a great deal of latitude that was unthinkable even in the recent past.
Perhaps more significantly from the perspective of civilization, the more robust and tolerant our civilization, the more latitude there is for like-minded individuals to cooperate in the founding and advancement of innovative social movements which, if they prove to be effective and to meet a need, can result in real change to the overall structure of society, and this sort of bottom-up social change was precisely the kind of change that agrarian-ecclesiastical civilization was structured to frustrate, resist, and suppress. In this respect, if in no other, we have seen social progress in the development of civilization that is distinct from the technological and economic progress that characterizes the STEM cycle.
As I wrote in my recent Centauri Dreams post, SETI, METI, and Existential Risk, to exist is to be subject to existential risk. Given the relation of risk and opportunity, it is also the case that to exist is to choose among existential opportunities. This is why we fight so desperately to stay alive, and struggle so insistently to improve our condition once we have secured the essentials of existence. To be alive is to have countless existential opportunities within reach; once we die, all of this is lost to us. And to improve one’s condition is to increase the actionable existential opportunities within one’s grasp.
The development of civilization, for all its faults and deficiencies, is tending toward increasing the range of existential opportunities available as “live options” (as William James would say) for both individuals and communities. That this increased range of existential opportunities also comes with an increased variety of existential risks should not be employed as an excuse to attempt to reverse the real social gains bequeathed by industrial-technological civilization.
. . . . .
. . . . .
. . . . .
14 November 2013
In my post The Industrial-Technological Thesis I proposed that our industrial-technological civilization is uniquely characterized by an escalating feedback loop in which scientific discoveries lead to new technologies, technologies are engineered into industries, and industries produce new instruments for science, which results in further scientific discoveries. I have elaborated this view in several posts, most recently in The Growth of Historical Consciousness, in which latter I noted that I would call this cyclical feedback loop the “STEM cycle,” given that “STEM” has become a common acronym for “science, technology, engineering, and mathematics,” and these are the elements involved in the escalating spiral of industrial-technological civilization.
Elsewhere, in Industrial-Technological Disruption, I considered some of the distinctive ways in which the STEM cycle stalls or fails. In that post I wrote, in part:
Science falters when model drift gives way to model crisis and normal science begins to give way to revolutionary science… Technology falters when its exponential growth tapers off and it attains a mature plateau, after which time it changes little and becomes a stalled technology. Engineering falters when industries experience the inevitable industrial accidents, intrinsic to the very fabric of industrialized society, or even experience the catastrophic failures to which complex systems are vulnerable.
The last of the above items — failures of engineering and industrial accidents — I have further elaborated more recently in How industrial accidents shape industrial-technological civilization.
This is not at all to say that these are the only ways in which the STEM cycle falters or fails. As I noted in Complex Systems and Complex Failure, complex systems fail in complex ways, and industrial-technological civilization is by far the most complex system on the planet. (Biological systems are extremely complex, but industrial-technological civilization supervenes upon biological complexity, and therefore, in the most comprehensive sense, includes biological complexity in its own complexity.)
In several of my posts on what I now call the STEM cycle I have called this cycle driving industrial-technological civilization a “closed loop.” I now realize that the STEM cycle is only a closed loop under certain “ideal” conditions (I will try to explain below why I put “ideal” in scare quotes). The messiness and imprecision of the real world means that most structures that we impose upon the world in order to understand it are simplified and schematic, and my description of the STEM cycle has been simplistic and schematic in this way. The actual function of science, technology, and engineering under contemporary socioeconomic conditions is far more complex, and that means that the STEM cycle is not a closed loop, but rather an unclosed loop, or an open feedback loop in which extrinsic forces at times enter into the STEM cycle while much of the productive energy of the STEM cycle is dissipated into extrinsic channels that contribute little or nothing to the furtherance of the STEM cycle.
Not every scientific discovery leads to technologies; not every technology can be engineered into an industry; not every industry produces new scientific instrumentation that can be employed in further scientific discoveries. Industrial-technological civilization produces epiphenomenal scientific knowledge, epiphenomenal technologies, and epiphenomenal engineering and industry — but enough science, technology and engineering participate in the STEM cycle to keep the processes of industrial-technological civilization moving forward for the time being.
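This open loop can be caricatured in a toy model (entirely hypothetical coefficients, not a serious quantification) in which each stage passes only a fraction of its output to the next stage and dissipates the rest as epiphenomenal production; the cycle then escalates or stagnates according to whether the product of the stage gains and efficiencies exceeds one.

```python
# A toy model (not a serious quantification) of the STEM cycle as an open loop.
# Each stage converts only a fraction of its input into the next stage's input;
# the remainder is "epiphenomenal" output that leaves the loop. All coefficients
# below are hypothetical.

def stem_cycle(science, gain, eff_tech, eff_eng, eff_sci, steps):
    """Iterate the loop: science -> technology -> engineering -> science."""
    for _ in range(steps):
        technology = gain * eff_tech * science      # science yielding technologies
        engineering = gain * eff_eng * technology   # technologies engineered into industries
        science = gain * eff_sci * engineering      # industries producing new instruments
    return science

# With a gain of 2.0 per stage, efficiencies of 0.9 each give a per-cycle
# factor of (2.0**3) * (0.9**3), about 5.8 > 1: the cycle escalates.
growing = stem_cycle(1.0, 2.0, 0.9, 0.9, 0.9, steps=5)
# Efficiencies of 0.4 each give (2.0**3) * (0.4**3), about 0.5 < 1: stagnation.
stalling = stem_cycle(1.0, 2.0, 0.4, 0.4, 0.4, steps=5)
print(growing > 1.0, stalling < 1.0)  # True True
```

The model captures only the bare arithmetic of the point: considerable dissipation is compatible with an escalating cycle, so long as what does feed forward is enough to keep the loop's compound factor above unity.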
I noted above that the STEM cycle is a closed loop only under “ideal” conditions, and these “ideal” conditions for the STEM cycle are not necessarily the “ideal” conditions for anything else — including the development of the features we value most highly in civilization. Pure science often results in little or no technology, and only rarely does it produce technologies in the near term. Many if not most technological innovations emerge from a long process of technological development that has scientific research only as a distant ancestor. The purest of the pure sciences — mathematics — has recently shown itself to have important applications in computer science, which has a direct impact on the economy, but it would be easy to cite numerous branches of mathematics which seem to have little or no relation to any technology, now or in the future.
Many perfectly viable technologies remain as mere curiosities. The history of technology is filled with such “hopeful monsters” that never caught on with the public or never found an application that would have justified their mass production. An interesting example of this would be the Einstein-Szilárd refrigerator, designed by Albert Einstein and Leo Szilárd. Both were to have much more “commercial” success with the atomic bomb, though I suspect both would have rather been successful with their refrigerator.
A great many industries, perhaps most industries, fulfill and respond to consumer demands that have little or no relationship to producing new scientific instruments that will lead to new scientific discoveries. And when industries do change science, it is often unintentional. The mass production of personal computers has profoundly affected the way that science is pursued, and has greatly stimulated scientific discovery (as has the internet), but little of this was the direct result of attempting to produce new and better scientific instruments.
It is entirely possible that a shift in social, economic, cultural, or other factors that influence or are influenced by the STEM cycle could increase the amount of epiphenomenal science, technology, and engineering, thus decreasing the efficiency of the STEM cycle. A permanent or semi-permanent change in social conditions (i.e., the social context in which the STEM cycle is played out) could introduce sufficient friction and inefficiency into the STEM cycle to retard or cease development and thereby to induce permanent stagnation (one of the categories of existential risk) into industrial-technological civilization.
There are, today, no end of prophecies of civilizational doom and stagnation, and it is not my intention merely to add one to their number, but it is an occupational hazard of the study of existential risk to consider such scenarios. The particular scenario I contemplate here is based on a particular mechanism that I believe uniquely characterizes industrial-technological civilization, and therefore demands our attention as it directly bears upon our viability as a civilization.
. . . . .
. . . . .
. . . . .
5 November 2013
Today I celebrate the fifth anniversary of this blog. I hope you will join me in toasting the end of another year and the beginning of a new year of blogging and ideas.
When I started this blog it was something of a spontaneous amusement, an impulse. My posts were short and simple and required little or no research. I purposefully wrote about matters that interest me while avoiding the “important” ideas I kept in my notebooks for book projects, which I saw at that time as the primary beneficiaries of my intellectual effort.
Over time, the blog posts expanded, became longer and more detailed, and required more research. I still save aside material I plan to put into manuscripts, but the topics with which I began — mostly strategy and civilization — now have a much higher profile in my thought and are at least equal beneficiaries of my intellectual effort. In retrospect, I’m glad that I started to write about civilization here, as these thoughts have expanded over time and have pushed me unexpectedly in interesting directions.
With my posts getting longer, I have been posting far less often — once or twice a week. I’ve also been blogging at Tumblr, which has a very different demographic (meaning that I reach a different crowd there than I do by blogging here on WordPress). Also, in the past year I’ve had posts appear on the Transhumanity blog and on Paul Gilster’s Centauri Dreams blog (where, by the way, another post by me is scheduled to appear this coming Friday).
Over the past year my blog traffic took a major hit: I have gone from an average of nearly two thousand hits per day to an average of around four hundred or fewer per day. Interestingly, most of the lost traffic seems to have been image searches, so the few of you who come here to read and to reflect are perhaps about the same in number as in earlier years.
I guess you could say that I write for my handful of subscribers — those few who return to read, spending precious and irretrievable moments of life to find something in what I have spent precious and irretrievable moments of life to write. That is a fair bargain — a part of my life for a part of your life — and as there are few fair bargains in the world today, I should count myself fortunate (which I do).
Nietzsche wrote, “…everywhere else I have my readers — nothing but first-rate intellects and proven characters, trained in high positions and duties; I even have real geniuses among my readers. In Vienna, in St. Petersburg, in Stockholm, in Copenhagen, in Paris, in New York — everywhere I have been discovered; but not in the shallows of Europe, Germany.” (Ecce Homo)
Nietzsche could perhaps speak in the plural; I must speak in the singular. I may not have readers (in the plural) in these celebrated cultural capitals of the world, but I do know from my statistics (which show repeat visits) that I have a reader in Invercargill, Southland, New Zealand, and in Mercer Island, Washington; in Washington D.C. at the Catholic University of America, and a reader in Groningen in the Netherlands; I have a reader in that ancient center of Western civilization, Greece, and in the ancient centers of learning in Paris, France, and Oxford, England; I have a reader in the Balkans, in Belgrade, Serbia, and elsewhere in the Balkans in Skopje, Macedonia; I even have a reader in Hillsboro, Oregon, just minutes away from my office, as well as a reader elsewhere in the Pacific Northwest, in Vancouver, British Columbia.
To all of you — those who return, and those who stop by only a single time — my thanks.
. . . . .
. . . . .
. . . . .
. . . . .
3 November 2013
Often when I write about emerging strategic trends I consider the long term future and make a particular effort to stress that little of the trend will be glimpsed in our lifetime, but at present I will consider the development of a strategic trend that is likely to be realized in the near- to mid-term future, i.e., a strategically significant technology that may develop into maturity or near-maturity within the lifetime of those now living. The technology is precision munitions and weaponry, and the strategic capability that mature precision weaponry will make possible is what I will call qualitative strikes. Before I come to qualitative strikes proper, I want to review the military and strategic context out of which the possibility of qualitative strikes has emerged.
In the early stages of the Cold War, when nuclear weapons were primarily delivered by ballistic missiles and ballistic missiles were the most accurate of nuclear delivery vehicles, the nightmare scenario (featured in many films of the era) was a NORAD alert that hundreds of Soviet missiles had already been launched and were on their way over the pole to targets in North America. The US would then have less than thirty minutes to decide whether or not to launch a massive retaliatory strike of its own, and it could not wait until the missiles actually landed and nuclear strikes were confirmed, because that would be too late. This was the Atomic Age parallel of the First World War dilemma of putting troops on trains that could not be recalled because the scheduling of transportation was so precise. Once the missiles flew, there was no calling them back. If you launched, MAD was initiated, so you needed to be sure you were responding to the real thing.
The essence of Cold War MAD doctrine was this massive nuclear exchange. Cold War targeting lists were almost indiscriminate in their presumption of mass annihilation; many major cities had a dozen or more warheads targeted for them, as though the intention were simply to “make the rubble bounce,” as Churchill said of the Nazi bombardment of London. A massive nuclear exchange involved mutually assured destruction for the powers involved in the exchange, and since MAD was understood to be a guarantor of Cold War peace — since it would literally be madness to allow a massive nuclear exchange to take place — the very idea of either anti-ICBM “counter-force” targeting or of developing a “second strike” capability was interpreted as a hostile act of one power against the other.
We think of the end of these developments in nuclear warfighting strategy as a consequence of the collapse of the Soviet Union and the end of the Cold War, but this phase of nuclear strategy would have ended anyway, regardless of the fate of the Cold War. If the Soviet Union were still in existence today, we would no longer be talking about MAD — or, if we were, it would only be because traditionalists were clinging to a doctrine that no longer had strategic relevance. While many nation-states have land-based ICBMs, these weapons systems are already relics. They belong to an age of indiscriminate and massive attacks that emerged from the strategic bombing of the Second World War. If the bombers of the Second World War had had the capability to execute precision strikes, they would have done so. But this technology was not yet available. As the next best strategy — indeed, the only possible strategy — “area bombing” for the purpose of “de-housing” enemy populations became the norm. Once planners, strategists, air crews, and populations became inured to the routine of leveling entire cities, the atomic bomb was simply a cheaper, quicker, more efficient way to do the same thing.
The only subtlety at the stage of nuclear strategy brought to maturity during the Cold War — if it could even be called a subtlety — was whether any nuclear capacity would remain on either side to deliver a second strike after the initial massive exchange (a “second strike” capability). Cold War strike capacity did not lie exclusively in ICBMs. In addition to ICBMs, there was the Strategic Air Command (SAC) under Curtis LeMay, who learned his trade during the Second World War. While LeMay was perhaps the most renowned American advocate of strategic air power, it was Arthur “Bomber” Harris of the RAF who presided over the strategic bombing of Germany, animated by Stanley Baldwin’s dictum that “the bomber will always get through.” Again, the Second World War was the template for what followed.
The ultimate guarantor of second strike capability was the ballistic missile submarine. With dozens of submarines submerged deep in the world’s oceans, each submarine carrying a dozen missiles or more, and each missile MIRVed with a dozen or so warheads, a single surviving submarine had the capacity to deliver a devastating second strike. Moreover, a submarine could sneak up close to the coast of an enemy’s territory and let loose its ballistic missiles from short range, leaving the enemy with only minutes to respond — and no real assets that could respond to a strike less than 15 minutes away. The traditional “triad” of Cold War deterrence consisted of land-based ICBMs, strategic bombers, and missile boats, but all of this took time to develop; it was not until the early 1960s that both the US and the USSR had fleets of operational missile boats. When both sides in the Cold War possessed the nuclear triad, and therefore a second strike capability, the MAD equation continued to hold good.
In the strategic context of MAD, nuclear strikes were quantitative strikes, and each side in the Cold War was motivated by the competition to assemble the quantitatively largest arsenal in order to deter the other side. The Cold War was a numbers game — cf. Kennedy’s “Missile Gap” — and this numbers game escalated with predictable results: tens of thousands of nuclear warheads perpetually maintained in readiness. The agreements to limit nuclear weapons only institutionalized the overkill of MAD doctrines.
From this point, it would have been difficult to escalate any further, except for technologies that were viewed as inherently destabilizing because they might shift the balance and make one side or the other believe that they were no longer subject to the MAD calculation. It is of the essence to understand that global Cold War stability depended centrally on the inescapability of MAD. The Reagan-era “Star Wars” missile defense initiative was just such a destabilizing factor, but by this time the Soviet Union was already in terminal decline. Anti-missile defense systems had been designed and built prior to this, but clearly the initiative still lay with the offense; the technology simply did not yet exist to bring down an ICBM.
Soviet decline coupled with the emergence of technologies that would make missile defense a viable possibility led to the end of the Soviet Union and MAD and the Cold War. Not only are these Cold War ideas dated by subsequent political developments, they are also dated by subsequent technological developments. Even if the Soviet Union had survived intact to the present day, the nightmare MAD scenario of Cold War planners would no longer be relevant because weapons systems have moved on.
One of the greatest of the revolutions in military affairs (RMA) has been the introduction of precision-guided munitions, and the eventual conversion to a “smart” arsenal means a transition from quantitative strikes to qualitative strikes. The shift in emphasis from nuclear to conventional armaments at the end of the Cold War accelerated this transition. Nuclear strategy suddenly went from being a top priority to barely making the list of priorities, and defense dollars began to flow into conventional weapons, where there were opportunities for improvement that were not understood to be politically destabilizing.
The idea of qualitative strikes is not at all new. One could say that qualitative strikes have always been the telos of military operations. The air forces of the Second World War aspired to precision bombing, but this was not yet possible. During the Cold War, some missiles were targeted according to a “counter-force” strategy, i.e., they were targeted at enemy ballistic missile silos, but this only played into the MAD calculation, because it meant that to wait was to lose one’s primary strike capability. If you could completely wipe out your enemy’s ballistic missile silos in an age when ICBMs were the primary nuclear deterrent, you would leave your enemy with the uncomfortable choice of retaliating massively on civilian population centers or accepting defeat. A successful counter-force attack would constitute a qualitative strike, and qualitative strikes pose political dilemmas such as that outlined. This is why such ideas were considered inherently destabilizing. But this level of technology was not practicable during the time when ICBMs were the primary nuclear deterrent.
Although the press today reports civilian casualties as if they were disproportionately high, in historic terms both civilian and military casualties are at the lowest levels ever. With the industrialization of war the technologies of warfighting experience an initial exponential growth in lethality, but as precision begins to outpace sheer quantitative destructive power, the warfare of industrial-technological civilization passes The Lethality Peak and casualties fall as strikes converge upon qualitative precision. In other words, the rapid emergence of precision-guided munitions in the battlespace has been effective. They work. And they’re getting better all the time. The efficacy of precision-guided munitions suggests the possibility of a complete shift away from quantitative destruction to qualitative strikes, i.e., strikes that selectively pick out a certain kind of target, or a certain class of targets. This is already a reality to a limited extent, but it will take time before it is fully translated into policy and doctrine.
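The shape of this claim, an initial exponential growth in lethality followed by a decline as precision overtakes brute destructiveness, can be caricatured in a toy model. Everything below is hypothetical: the functional forms and constants are chosen only to exhibit a Lethality Peak, not to fit any historical data.

```python
# Toy model of "The Lethality Peak" (illustrative only; all numbers hypothetical).
# Casualties are modeled as deliverable destructive power multiplied by the
# fraction of that power falling indiscriminately on non-targets.

def destructive_power(t):
    """Deliverable destructive power in period t (arbitrary units, doubling each period)."""
    return 2 ** t

def indiscriminate_fraction(t):
    """Fraction of power wasted on unintended targets; after period 4 (the assumed
    arrival of precision-guided munitions), it shrinks faster than power grows."""
    return 4.0 ** -max(0, t - 4)

def casualties(t):
    """Toy casualty level: power delivered times the indiscriminate fraction."""
    return destructive_power(t) * indiscriminate_fraction(t)

if __name__ == "__main__":
    values = [casualties(t) for t in range(12)]
    peak = values.index(max(values))
    print(f"lethality peaks in period {peak}, then declines")
```

The design point is simply that when precision improves faster than destructive power grows, their product (casualties) must peak and then fall, whatever the absolute scale of the arsenal.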
In A Glimpse at the Near Future of Combat I mentioned a Norwegian satellite that will track all ships (over 300 gross tons) in Norwegian coastal waters. Most ships have transponders, indicating basic identification information for the vessel. In the near future of autonomous vehicles, it is likely that most vehicles will have transponders on them. Most individuals carry cell phones, which are essentially transponders, and we know from the Snowden leaks about the NSA surveillance program how thoroughly “big data” applications can track the world’s cellular phone calls. Fixed assets like cities and industrial facilities are even easier to map and track than mobile assets like ships, planes, vehicles, and people.
What we are looking at here is the possibility of computer systems sufficiently sophisticated that almost everything on the surface of the earth can be identified and tracked. To have a total system of identification and tracking is to have a targeting computer. Couple a targeting computer with precision-guided munitions that can pick out small targets in a crowd and be assured of destroying these targets with a near-total absence of collateral damage, and you have the possibility of a military strike that does not depend in the least upon quantitative destruction, but rather upon picking out just the right selection of targets to have just the right effect (political or military, keeping in mind Clausewitz’s dictum that war is the pursuit of politics by other means). This is a qualitative strike.
None of these developments will go unchallenged. The dependency of qualitative warfare upon computer systems points to the centrality of cyberwarfare in the integrated battlespace. If you can confuse the targeting computer of the weapons’ guidance systems, you can defeat the system, but systems can in turn be hardened and made redundant. Other measures and counter-measures will be developed, and escalation will be an escalation in precision and in the possibility of qualitative warfare (since those who attack precision warfighting infrastructure will need to be equally precise in their attempts to defeat a precision weapons system), in contradistinction to the escalation of quantitative warfare that defined the twentieth century.
. . . . .
. . . . .
. . . . .
26 October 2013
In my last post, The Retrodiction Wall, I introduced several ideas that I think were novel, among them:
● A retrodiction wall, complementary to the prediction wall, but in the past rather than the present
● A period of effective history lying between the retrodiction wall in the past and the prediction wall in the future; beyond the retrodiction and prediction walls lies inaccessible history that is not a part of effective history
● A distinction between diachronic and synchronic prediction walls, that is to say, a distinction between the prediction of succession and the prediction of interaction
● A distinction between diachronic and synchronic retrodiction walls, that is to say, a distinction between the retrodiction of succession and the retrodiction of interaction
I also implicitly formulated a principle, though I didn’t give it any name, parallel to Einstein’s principle (also without a name) that mathematical certainty and applicability stand in inverse proportion to each other: historical predictability and historical relevance stand in inverse proportion to each other. When I can think of a good name for this I’ll return to this idea. For the moment, I want to focus on the prediction wall and the retrodiction wall as the boundaries of effective history.
In The Retrodiction Wall I made the assertion that, “Effective history is not fixed for all time, but expands and contracts as a function of our knowledge.” An increase in knowledge allows us to push the prediction and retrodiction walls outward, while a diminution of knowledge means the contraction of the prediction and retrodiction boundaries of effective history.
We can go farther than this if we incorporate a more subtle and sophisticated conception of knowledge and prediction, and we can find this more subtle and sophisticated understanding in the work of Frank Knight, which I previously cited in Existential Risk and Existential Uncertainty. Knight made a tripartite distinction between prediction (or certainty), risk, and uncertainty. Here is the passage from Knight that I quoted in Addendum on Existential Risk and Existential Uncertainty:
1. A priori probability. Absolutely homogeneous classification of instances completely identical except for really indeterminate factors. This judgment of probability is on the same logical plane as the propositions of mathematics (which also may be viewed, and are viewed by the writer, as “ultimately” inductions from experience).
2. Statistical probability. Empirical evaluation of the frequency of association between predicates, not analyzable into varying combinations of equally probable alternatives. It must be emphasized that any high degree of confidence that the proportions found in the past will hold in the future is still based on an a priori judgment of indeterminateness. Two complications are to be kept separate: first, the impossibility of eliminating all factors not really indeterminate; and, second, the impossibility of enumerating the equally probable alternatives involved and determining their mode of combination so as to evaluate the probability by a priori calculation. The main distinguishing characteristic of this type is that it rests on an empirical classification of instances.
3. Estimates. The distinction here is that there is no valid basis of any kind for classifying instances. This form of probability is involved in the greatest logical difficulties of all, and no very satisfactory discussion of it can be given, but its distinction from the other types must be emphasized and some of its complicated relations indicated.
Frank Knight, Risk, Uncertainty, and Profit, Chap. VII
This passage from Knight’s book (like the entire book) is concerned with applications to economics, but the kernel of Knight’s idea can be generalized beyond economics to represent different stages in the acquisition of knowledge: Knight’s a priori probability corresponds to certainty, or that which is so exhaustively known that it can be predicted with precision; Knight’s statistical probability corresponds to risk, or partial and incomplete knowledge, that region of human knowledge where the known and the unknown overlap; and Knight’s estimates correspond to unknowns, or uncertainty.
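Knight’s three categories can be made concrete in a small sketch. This is a didactic illustration of the distinction, not anything drawn from Knight’s text: a fair die is an a priori probability, an empirical frequency is a statistical probability, and a one-off event with no reference class admits only an estimate.

```python
import random

# 1. A priori probability: deduced from the symmetry of the die; no data needed.
p_six_apriori = 1 / 6

# 2. Statistical probability: rests on an empirical classification of instances,
# here a large sample of simulated rolls standing in for observed frequencies.
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(60_000)]
p_six_statistical = rolls.count(6) / len(rolls)

# 3. Estimate (Knightian uncertainty): "no valid basis of any kind for
# classifying instances," so no number can be computed at all; the judgment
# is irreducibly qualitative. Represented here only by its absence.
p_unprecedented_event = None  # e.g., whether the next century resembles this one

print(p_six_apriori, round(p_six_statistical, 3), p_unprecedented_event)
```

The point of the sketch is that the first two categories converge (the empirical frequency approaches the a priori value as the sample grows), while the third resists quantification in principle, which is exactly why it marks the outer region of effective history.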
Knight formulates his tripartite distinction between certainty, risk, and uncertainty exclusively in the context of prediction, and just as Knight’s results can be generalized beyond economics, so too Knight’s distinction can be generalized beyond prediction to also embrace retrodiction. In The Retrodiction Wall I generalized John Smart’s exposition of a prediction wall in the future to include a retrodiction wall in the past, both of which together define the boundaries of effective history. These two generalizations can be brought together.
A prediction wall in the future and a retrodiction wall in the past are, as I noted, functions of knowledge. That means we can understand these “boundaries” not merely as thresholds that are crossed, but also as an epistemic continuum that stretches from the completely unknown (the inaccessible past or future that lies utterly beyond the retrodiction or prediction wall), through an epistemic region of prediction risk or retrodiction risk (where predictions or retrodictions can be made, but are subject to at least as many uncertainties as certainties), to the completely known (in so far as anything can be completely known to human beings), and therefore well understood by us and readily predictable.
Introducing and integrating distinctions between prediction and retrodiction walls, and among prediction, risk, and uncertainty, gives a much more sophisticated and therefore epistemically satisfying structure to our knowledge and to how that knowledge is contextualized in the human condition. That we find ourselves, in medias res, living in a world that we must struggle to understand, and that this understanding is an acquisition of knowledge that takes place in time (which is asymmetrical as regards the past and the future), are important features of how we engage with the world.
This process of making our model of knowledge more realistic by incorporating distinctions and refinements is not yet finished (nor is it ever likely to be). For example, the unnamed principle alluded to above — that of the inverse relation between historical predictability and relevance — suggests that the prediction and retrodiction walls can be penetrated unevenly, and that our knowledge of the past and future is not consistent across space and time, but varies considerably. An inquiry that could demonstrate this in any systematic and schematic way would be more complicated than the above, so I will leave this for another day.
. . . . .
. . . . .
. . . . .
23 October 2013
Prediction in Science
One of the distinguishing features of science as a system of thought is that it makes testable predictions. The fact that scientific predictions are testable suggests a methodology of testing, and we call the scientific methodology of testing experiment. Hypothesis formation, prediction, experimentation, and resultant modification of the hypothesis (confirmation, disconfirmation, or revision) are all essential elements of the scientific method, which constitutes an escalating spiral of knowledge as the scientific method systematically exposes predictions to experiment and modifies its hypotheses in the light of experimental results, which leads in turn to new predictions.
The escalating spiral of knowledge that science cultivates naturally pushes that knowledge into the future. Sometimes scientific prediction is even formulated in reference to “new facts” or “temporal asymmetries” in order to emphasize that predictions refer to future events that have not yet occurred. In constructing an experiment, we create a new set of facts in the world, and then interpret these facts in the light of our hypothesis. It is this testing of hypotheses by experiment that establishes the concrete relationship of science to the world, and this is also a source of limitation, for experiments are typically designed in order to focus on a single variable, and to that end an attempt is made to control for the other variables. (A system of thought that is not limited by the world is not science.)
Alfred North Whitehead captured this artificial feature of scientific experimentation in a clever line that points to the difference between scientific predictions and predictions of a more general character:
“…experiment is nothing else than a mode of cooking the facts for the sake of exemplifying the law. Unfortunately the facts of history, even those of private individual history, are on too large a scale. They surge forward beyond control.”
Alfred North Whitehead, Adventures of Ideas, New York: The Free Press, 1967, Chapter VI, “Foresight,” p. 88
There are limits to prediction, and not only those pointed out by Whitehead. The limits to prediction have been called the prediction wall. Beyond the prediction wall we cannot penetrate.
The Prediction Wall
John Smart has formulated the idea of a prediction wall in his essay, “Considering the Singularity,” as follows:
With increasing anxiety, many of our best thinkers have seen a looming “Prediction Wall” emerge in recent decades. There is a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex and yet amazingly stable technological systems, erected like vast beehives, systems tended to by large swarms of only partially aware human beings, each of which has only a very limited conceptualization of the new technological environment that we have constructed.
Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn’t technologically dependent these days?), where the ten-year business plans of the 1950′s have been replaced with ten-week (quarterly) plans of the 2000′s, and where planning beyond two years in some fields may often be unwise speculation. But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In “Science Fiction Without the Future,” 2001, Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It’s as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.) © 2003 by John Smart (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)
I would like to suggest that there are at least two prediction walls: synchronic and diachronic. The prediction wall formulated above by John Smart is a diachronic prediction wall: it is the onward-rushing pace of events, one following the other, that eventually defeats our ability to see any recognizable order or structure of the future. The kind of prediction wall to which Whitehead alludes is a synchronic prediction wall, in which it is the outward eddies of events in the complexity of the world’s interactions that make it impossible for us to give a complete account of the consequences of any one action. (Cf. Axes of Historiography)
Retrodiction and the Historical Sciences
Science does not live by prediction alone. While some philosophers of science have questioned the scientificity of the historical sciences because they could not make testable (and therefore falsifiable) predictions about the future, it is now widely recognized that the historical sciences don’t make predictions, but they do make retrodictions. A retrodiction is a prediction about the past.
The Oxford Dictionary of Philosophy by Simon Blackburn (p. 330) defines retrodiction thusly:
retrodiction The hypothesis that some event happened in the past, as opposed to the prediction that an event will happen in the future. A successful retrodiction could confirm a theory as much as a successful prediction.
As with predictions, there is also a limit to retrodiction, and this is the retrodiction wall. Beyond the retrodiction wall we cannot penetrate.
I haven’t been thinking about this idea for long enough to fully understand the ramifications of a retrodiction wall, so I’m not yet clear about whether we can distinguish diachronic retrodiction from synchronic retrodiction. Or, rather, it would be better to say that the distinction can certainly be made, but that I cannot think of good contrasting examples of the two at the present time.
We can define a span of accessible history that extends from the retrodiction wall in the past to the prediction wall in the future as what I will call effective history (by analogy with effective computability). Effective history can be defined in a way that is closely parallel to effectively computable functions, because all of effective history can be “reached” from the present by means of finite, recursive historical methods of inquiry.
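The analogy with effective computability can itself be given a schematic form. The sketch below is purely illustrative: the reach functions are hypothetical placeholders standing in for actual methods of historical inquiry, and the asymmetry between them reflects only the observation that retrodiction currently reaches much farther than prediction.

```python
def effective_history(present, knowledge):
    """Return the (past_bound, future_bound) interval of effective history.

    'knowledge' is an abstract non-negative index of a civilization's
    historical knowledge. The reach functions are hypothetical placeholders;
    retrodiction reaches farther than prediction, reflecting the asymmetry
    of our access to the past and to the future.
    """
    retrodiction_reach = 100 * knowledge  # e.g., scientific historiography
    prediction_reach = 10 * knowledge     # predictions decay much faster
    return (present - retrodiction_reach, present + prediction_reach)

# More knowledge expands effective history; a dark age contracts it.
expanded = effective_history(2013, 5)
contracted = effective_history(2013, 2)
print(expanded, contracted)
```

The design choice worth noting is that the interval is a function of knowledge rather than a constant, which is exactly the claim of the following paragraph: effective history expands and contracts as knowledge is gained or lost.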
Effective history is not fixed for all time, but expands and contracts as a function of our knowledge. At present, the retrodiction wall is the Big Bang singularity. If anything preceded the Big Bang singularity we are unable to observe it, because the Big Bang itself effectively obliterates any observable signs of any events prior to itself. (Testable theories have been proposed that suggest the possibility of some observable remnant of events prior to the Big Bang, as in conformal cyclic cosmology, but this must at present be regarded as only an early attempt at such a theory.)
Prior to the advent of scientific historiography as we know it today, the retrodiction wall was fixed at the beginning of the historical period narrowly construed as written history, and at times the retrodiction wall has been quite close to the present. When history experiences one of its periodic dark ages that cuts a society off from its historical past, little or nothing may be known of a past that was once familiar to everyone in that society.
The emergence of agrarian-ecclesiastical civilization effectively obliterated human history before itself, in a manner parallel to the Big Bang. We know that there were caves that prehistorical peoples visited generation after generation for time out of mind, over tens of thousands of years — much longer than the entire history of agrarian-ecclesiastical civilization, and yet all of this was forgotten as though it had never happened. This long period of prehistory was entirely lost to human memory, and was not recovered again until scientific historiography discovered it through scientific method and empirical evidence, and not through the preservation of human memory, from which prehistory had been eradicated. And this did not occur until after agrarian-ecclesiastical civilization had lapsed and entirely given way to industrial-technological civilization.
We cannot define the limits of the prediction wall as readily as we can define the limits of the retrodiction wall. Predicting the future in terms of overall history has been more problematic than retrodicting the past, and equally subject to ideological and eschatological distortion. The advent of modern science compartmentalized scientific predictions and made them accurate and dependable — but at the cost of largely severing them from overall history, i.e., human history and the events that shape our lives in meaningful ways. We can make predictions about the carbon cycle and plate tectonics, and we are working hard to be able to make accurate predictions about weather and climate, but, for the most part, our accurate predictions about the future dispositions of the continents do not shape our lives in the near- to mid-term future.
I have previously quoted a famous line from Einstein: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We might paraphrase this Einstein line in regard to the relation of mathematics to the world, and say that as far as scientific laws of nature predict events, these events are irrelevant to human history, and in so far as predicted events are relevant to human beings, scientific laws of nature cannot predict them.
Singularities Past and Future
As the term “singularity” is presently employed — as in the technological singularity — the recognition of a retrodiction wall in the past complementary to the prediction wall in the future provides a literal connection between the historiographical use of “singularity” and the use of the term “singularity” in cosmology and astrophysics.
Theorists of the singularity hypothesis place a “singularity” in the future which constitutes an absolute prediction wall beyond which history is so transformed that nothing beyond it is recognizable to us. This future singularity is not the singularity of astrophysics.
If we recognize the actual Big Bang singularity in the past as the retrodiction wall for cosmology — and hence, by extension, for Big History — then an actual singularity of astrophysics is also at the same time an historical singularity.
. . . . .
I have continued my thoughts on the retrodiction wall in Addendum on the Retrodiction Wall.
. . . . .
. . . . .
. . . . .