23 April 2013
In my post Existential Risk and Existential Uncertainty I cited Frank Knight’s distinction between risk and uncertainty and attempted to apply it to the idea of existential risk. I suggested that discussions of existential risk ought to distinguish between existential risk (sensu stricto) and existential uncertainty.
In his now-classic work Risk, Uncertainty, and Profit, Knight actually made a threefold distinction among the kinds of probabilities that face the entrepreneur:
1. A priori probability. Absolutely homogeneous classification of instances completely identical except for really indeterminate factors. This judgment of probability is on the same logical plane as the propositions of mathematics (which also may be viewed, and are viewed by the writer, as “ultimately” inductions from experience).
2. Statistical probability. Empirical evaluation of the frequency of association between predicates, not analyzable into varying combinations of equally probable alternatives. It must be emphasized that any high degree of confidence that the proportions found in the past will hold in the future is still based on an a priori judgment of indeterminateness. Two complications are to be kept separate: first, the impossibility of eliminating all factors not really indeterminate; and, second, the impossibility of enumerating the equally probable alternatives involved and determining their mode of combination so as to evaluate the probability by a priori calculation. The main distinguishing characteristic of this type is that it rests on an empirical classification of instances.
3. Estimates. The distinction here is that there is no valid basis of any kind for classifying instances. This form of probability is involved in the greatest logical difficulties of all, and no very satisfactory discussion of it can be given, but its distinction from the other types must be emphasized and some of its complicated relations indicated.
Frank Knight, Risk, Uncertainty, and Profit, Chap. VII
At the end of the chapter Knight finally made his point fully explicit:
It is this third type of probability or uncertainty which has been neglected in economic theory, and which we propose to put in its rightful place. As we have repeatedly pointed out, an uncertainty which can by any method be reduced to an objective, quantitatively determinate probability, can be reduced to complete certainty by grouping cases. The business world has evolved several organization devices for effectuating this consolidation, with the result that when the technique of business organization is fairly developed, measurable uncertainties do not introduce into business any uncertainty whatever. Later in our study we shall glance hurriedly at some of these organization expedients, which are the only economic effect of uncertainty in the probability sense; but the present and more important task is to follow out the consequences of that higher form of uncertainty not susceptible to measurement and hence to elimination. It is this true uncertainty which by preventing the theoretically perfect outworking of the tendencies of competition gives the characteristic form of “enterprise” to economic organization as a whole and accounts for the peculiar income of the entrepreneur.
Frank Knight, Risk, Uncertainty, and Profit, Chap. VII
Knight’s distinction between risk and uncertainty — between probabilities that can be calculated, managed, and insured against and probabilities that cannot be calculated and therefore cannot be managed or insured against — continues to be taught in business and economics today. (It is a distinction closely parallel to Rumsfeld’s distinction between known unknowns and unknown unknowns, though worked out in considerably greater detail and sophistication.) Knight’s slightly more subtle threefold distinction among probabilities might be characterized as a tripartite distinction between certainty, risk, and uncertainty.
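Knight’s first two categories can be put side by side in a toy sketch (my own illustration, not Knight’s): the a priori probability of rolling a six is deduced from the symmetry of the die before any throw is made, while the statistical probability is built up from an empirical classification of instances, here with simulated rolls standing in for observed cases.

```python
import random

random.seed(42)

# A priori probability: deduced from the symmetry of the die,
# "on the same logical plane as the propositions of mathematics".
a_priori = 1 / 6  # chance of rolling a six on a fair die

# Statistical probability: an empirical evaluation of frequency,
# resting on a classification of observed instances.
rolls = [random.randint(1, 6) for _ in range(100_000)]
statistical = sum(1 for r in rolls if r == 6) / len(rolls)

print(f"a priori:    {a_priori:.4f}")
print(f"statistical: {statistical:.4f}")

# Knight's third category, estimates, admits no valid basis for
# classifying instances at all, so no such calculation is possible.
```

The third category resists any such sketch, which is precisely Knight’s point: estimates are the probabilities for which no calculation, a priori or empirical, can be given.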
Knight acknowledges, in his account of statistical probability, i.e., risk, that there are at least two complications:
1. that of eliminating all truly indeterminate features, and…
2. the impossibility of enumerating the equally probable alternatives involved
Knight’s hedged account of risk obliquely acknowledges the gray area that lies between risk and uncertainty — a gray area that can be enlarged in favor of risk as our knowledge improves, or which can be enlarged in favor of uncertainty as additional complicating factors enter into our calculation of risk and render our knowledge less effective and our uncertainty all the greater. That is to say, the line between risk and uncertainty is unclear, and it can move, which makes it doubly ambiguous.
These hedges are important qualifications to make, because we all know from real-life experience that additional complicating factors always enter into actual risks. We may try to insure ourselves, and therefore to secure our interests against risk, but fine print in the insurance contract may deny us a settlement, or we may have forgotten to pay our premiums, or a thousand other things might go wrong between our careful planning and the actual events of life that so often preempt our planning and force us to deal with the unexpected with insufficient preparation. As Bobby Burns put it, “The best laid schemes o’ Mice an’ Men, Gang aft agley, An’ lea’e us nought but grief an’ pain, For promis’d joy!”
Such considerations span the entire universe, from field mice to galaxies. A coldly rational assessment of risk can be made, and resources can be expended to mitigate risk to the extent calculated, but not only are there limits to our knowledge, we don’t know where those limits lie. Indeterminate features can creep into our calculation, and equally probable alternatives could be in play without our even being aware of the fact.
Some events that pose existential risks or global catastrophic risks can be predicted with a high degree of accuracy, and some cannot. Even those risks that seem predictable to a high degree of accuracy — say, the life of the sun, which can be predicted on the basis of our knowledge of cosmology, and which thereby would seem to give us some knowledge of how long a time we have on earth to lay our schemes — admit of indeterminate elements and equally probable scenarios. The end of the earth seems a long way off, if the earth lasts almost as long as the sun, and putting our existential risk far in the future seems to diminish the threat.
There is a famous quote from Frank Ramsey (who died tragically young, at the age of 26, as might happen to anyone, anytime) that is relevant here, both economically and philosophically:
My picture of the world is drawn in perspective and not like a model to scale. The foreground is occupied by human beings and the stars are all as small as three-penny bits. I don’t really believe in astronomy, except as a complicated description of part of the course of human and possibly animal sensation. I apply my perspective not merely to space but also to time. In time the world will cool and everything will die; but that is a long time off still and its present value at compound discount is almost nothing.
From a paper read to the Apostles, a Cambridge discussion society (1925); collected in Frank Plumpton Ramsey, Philosophical Papers, D. H. Mellor (ed.) (1990), Epilogue, p. 249
Despite Ramsey having referred (in another context) to the “Bolshevik menace” of Brouwer and Weyl, it has been said that Ramsey became a constructivist not long before he died. This conversion should not surprise us, given Ramsey’s Protagorean profession in this passage.
Protagoras famously said that Man is the measure of all things, of those things that are, that they are, and of those things that are not, that they are not. (πάντων χρημάτων μέτρον ἐστὶν ἄνθρωπος, τῶν μὲν ὄντων ὡς ἔστιν, τῶν δὲ οὐκ ὄντων ὡς οὐκ ἔστιν.) Protagoras may be counted as the earliest of proto-constructivists, of which company Kant and Poincaré may be considered the most famous.
In the passage quoted above, Ramsey is essentially saying in a modern idiom that man is the measure of all things, even of time and space — that man is the measure of the farthest reaches of time and space, and in so far as these distant prospects of human experience are inaccessible, they are irrelevant. Ramsey is important in this respect because of his consciously chosen anthropocentrism. In a post-Copernican age, this is significant. We are all, of course, familiar with the advocates of the anthropic cosmological principle, and their implicit anthropocentrism, but Ramsey gives us a slightly different perspective on anthropocentrism. Ramsey essentially brings constructivism to our moral life, and in so doing suggests that the moral imperatives posed by existential risk can be safely ignored for the time being.
What Ramsey is saying here is that we can make a definite calculation of the lives of the stars — and also the expected life of our sun — and that we can insure against this risk, but that the risk lies so far in the future that its present value is practically nil. In other words, the eventual burning out of the sun is a risk and not an uncertainty. Indeed, it is not merely a risk, but a certainty. Just as the finite amount of oil on Earth must eventually come to an end, the finite sun must also come to an end.
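Ramsey’s phrase “present value at compound discount” can be made concrete with a minimal sketch; the 1% discount rate and the five-billion-year horizon for the sun are my own illustrative assumptions, not Ramsey’s figures.

```python
def present_value(t_years: float, rate: float = 0.01) -> float:
    """Value today of one unit of value t_years from now,
    discounted at a constant compound annual rate."""
    return (1.0 + rate) ** -t_years

# A benefit (or harm) a century out is already heavily discounted...
pv_century = present_value(100)  # roughly 0.37 of its face value

# ...and the death of the sun, some five billion years out, discounts
# to a value that floating point cannot distinguish from zero, which
# is Ramsey's "almost nothing".
pv_sun = present_value(5e9)

print(pv_century, pv_sun)
```

Whatever one thinks of applying market discount rates to the fate of the species, the arithmetic shows why Ramsey could dismiss the cooling of the world so breezily.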
What our growing knowledge of cosmology is teaching us is that we are not isolated from the wider universe. Events on a cosmic scale have influenced the development of life on earth, and may well be responsible for our development as a species. If the earth had not been hit by an asteroid or comet about 65 million years ago, mammals may never have developed as they did, and we would not exist. And if our solar system did not bob up and down through the galactic plane of the Milky Way, resulting in geophysical rhythms from the gravitational interaction with the rest of the galaxy, a distant asteroid or comet might not have been dislodged from its stable orbit and sent careening toward earth.
Given our connection with the wider universe, and our vulnerability to its convulsions, what we know about our local risks (which is not nearly enough, as recent unpredicted episodes have shown us) is not enough to make a calculation of our vulnerability. What appears superficially to be a calculable risk has uncertainty injected back into it by the cosmological context in which all astronomical events take place.
If the death of the sun were the only existential risk against which we needed to insure ourselves, we would not need to be in any hurry about existential risk mitigation, because we would have literally millions of years to build a spacefaring civilization and thus to insure ourselves against that predictable catastrophe. But in our violent universe (as Nigel Calder called it) scarcely a million years goes by without some cosmic catastrophe occurring, and we don’t know when the next one will arrive.
Carl Sagan rightly pointed out that an event that is unlikely to happen in a hundred years may be inevitable in a hundred million years:
The Earth is a lovely and more or less placid place. Things change, but slowly. We can lead a full life and never personally encounter a natural disaster more violent than a storm. And so we become complacent, relaxed, unconcerned. But in the history of Nature, the record is clear. Worlds have been devastated. Even we humans have achieved the dubious technical distinction of being able to make our own disasters, both intentional and inadvertent. On the landscapes of other planets where the records of the past have been preserved, there is abundant evidence of major catastrophes. It is all a matter of time scale. An event that would be unthinkable in a hundred years may be inevitable in a hundred million.
Carl Sagan, Cosmos, Chapter IV, “Heaven and Hell”
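Sagan’s point about time scale is simple arithmetic: a small annual probability, compounded over geological time, approaches certainty. The annual probability of one in a hundred thousand used below is my own illustrative assumption, not Sagan’s figure.

```python
def chance_within(p_annual: float, years: int) -> float:
    """Probability of at least one occurrence within the given span,
    assuming a constant, independent annual probability."""
    return 1.0 - (1.0 - p_annual) ** years

p = 1e-5  # assumed annual chance of a civilization-scale catastrophe

# Negligible within a human lifetime or two...
print(chance_within(p, 100))          # about 0.001

# ...effectively inevitable on Sagan's hundred-million-year scale.
print(chance_within(p, 100_000_000))  # indistinguishable from 1.0
```

The complacency Sagan describes is, on this view, a failure to compound: we judge the annual probability and forget the exponent.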
Perhaps in one of his most quoted lines, Sagan said that we are “star stuff,” and certainly this is true. However, the corollary of this inspiring thought is that our star stuff is subject to the natural forces that shape the destinies of stars, and in shaping the destiny of stars, shape the destiny of men who live on planets orbiting stars.
Understanding ourselves as “star stuff” entails understanding ourselves as living in a dangerous universe replete with devouring black holes, gamma ray bursts, supernovae, and other cataclysms almost beyond the capacity of human beings to conceive.
. . . . .
. . . . .
. . . . .
25 March 2013
In my last post, The Problem with Diachronic Extrapolation, I attempted to show how diachronic extrapolation, while the most familiar form of futurism, is often misleading because it fails to adequately account for synchronic interactions as a diachronic strategic trend develops. In other posts concerned with unintended consequences I have emphasized that, in the long term, unintended consequences often outweigh intended consequences. Unintended consequences are the result of synchronic interactions that were not foreseen, that were no part of diachronic agency, and in those cases in which unintended consequences swamp intended consequences the synchronic interactions have proved more decisive in shaping the future than diachronic causality.
In my post on The Problem with Diachronic Extrapolation I made several assertions that clearly imply the limitation of inferences from the present to the future, which also implies the limitation of inferences from the present to the past. This brings up issues that go far beyond futurism.
In that post I wrote:
“…diachrony over significant periods of time cannot be pursued in isolation, since any diachronic extrapolation will interact with changed conditions over time, and this interaction will eventually come to constitute the consequences as much as the original trend diachronically extrapolated.”
“…the most frequent form of failed futurism is to take a trend in the present and to project it into the future, but any futurism worthy of the name must understand events in both their synchronic and diachronic context; isolation from succession in time is just as invidious as isolation from interaction across time…”
The reader may have noticed the resemblance of this species of failed futurism to uniformitarianism: instead of taking a strategic trend acting at present and extrapolating it into the future, uniformitarianism takes a physical force acting in the present and extrapolates it into the future (or, as is more likely the case in geology, into the past). This idea of uniformitarianism is usually expressed as, “the present is key to the past,” and we might similarly express the parallel form of futurism as being, “the present is key to the future.” These two claims — the present is the key to the past and the present is the key to the future — are logically equivalent since, as I pointed out previously, every present is the future of some past, and the past of some future.
Since these interpretations of uniformitarianism involve uniformity across past and future, these formulations closely resemble formulations of induction also stated in terms of past and future, as when the logical problem of induction is formulated, “Will the future be like the past?” It is at this point that the philosophy of time, the philosophy of history, the philosophy of science, and futurism all coincide, because it concerns a problem that all have in common.
Stephen Jay Gould noticed this similarity of uniformitarianism and induction in his first published paper, “Is uniformitarianism necessary?” Gould, of course, became famous for his critique of uniformitarianism, and for his alternative to it, punctuated equilibrium (for which he shares the credit with Niles Eldredge). In this early paper, Gould distinguished between substantive uniformitarianism and methodological uniformitarianism. He tried to show that the former is simply false, and that the latter, methodological uniformitarianism, is now subsumed under the scientificity of geology and paleontology. Here is how Gould put it:
“…we see that methodological uniformitarianism amounts to an affirmation of induction and simplicity. But since these principles belong to the modern definition of empirical science in general, uniformitarianism is subsumed in the simple statement: ‘geology is a science’. By specifically invoking methodological uniformitarianism, we do little more than affirm that induction is procedurally valid in geology.”
Stephen Jay Gould, “Is uniformitarianism necessary?” American Journal of Science, Vol. 263, March 1965, p. 227
That is to say, the earth sciences use the scientific method, which Gould characterizes in terms of inductive logic and the principle of parsimony (I would argue that Gould is also assuming methodological naturalism) — therefore everything that is worth saving in uniformitarianism is already secured by the scientific status of geology, and therefore uniformitarianism is dispensable. Having once served an important function in science, uniformitarianism has now, Gould contends, become an obstacle to progress.
As I noted above, Gould didn’t merely assert that uniformitarianism was no longer necessary, but devoted his career to arguing for an alternative, punctuated equilibrium, which asserts that long periods of stasis are interrupted by catastrophic discontinuities. While much has been written about uniformitarianism vs. punctuated equilibrium, I see this as the thin end of the wedge for considering all kinds of alternatives to strict uniformitarianism, and to this end I think we would do well to explore all possible patterns of development, whether uniform (slow, gradual, incremental), punctuated (sudden, catastrophic, discontinuous), or otherwise.
Of course, we could easily produce more sophisticated formulations of uniformitarianism that would avoid the subsequent problems that have been raised, but this is the path that leads to Ptolemaic epicycles and attempts to “save the appearances,” whereas what we want is a rich mixture of theoretical innovation from which we can try many different models and select for further development those that are most true to the world.
Since the philosophy of time, the philosophy of history, the philosophy of science, and futurism all coincide at the point represented by the problem of the relationship of parts of time to other parts of time (and the idea of temporal parts is itself philosophically contested), all of these disciplines stand to learn something of value from exploring alternatives to uniformitarianism. In so far as futurism is dominated by nomothetic diachrony, and constitutes a kind of historical uniformitarianism, very different forms of futurism might emerge from a careful study of the alternatives to uniformitarianism, or merely from a recognition that, as Gould put it, uniformitarianism is no longer necessary and something of an anachronism. If there is anything of which futurists ought to beware, being an anachronism must be close to the top of the list.
. . . . .
. . . . .
. . . . .
23 March 2013
In Synchronic and Diachronic Approaches to Civilization and Axes of Historiography I discussed the differences between synchronic and diachronic approaches to historiographical analysis (and in much greater detail in Ecological Temporality and the Axes of Historiography). The synchronic/diachronic distinction can also be useful in futurism, and in fact we can readily distinguish between what I will call synchronic extrapolation and diachronic extrapolation.
If we understand synchrony as, “the present construed broadly enough to admit of short term historical interaction” (as I formulated it in Axes of Historiography), then synchronic extrapolation is the extrapolation of a broadly construed present across its interactions. This may not sound very enlightening, but you’ll understand immediately what I mean when I relate it to chaos and complexity. Recent interest in chaos theory and what is known as the “butterfly effect” has led some to think in terms of synchronic extrapolation, since the idea is that of a small event the interactions of which cascade to produce significant consequences.
As a form of futurism, synchronic extrapolation is not familiar (probably because it doesn’t take us very far forward into the future), but we need to keep it in mind in order to contrast it with diachronic extrapolation. Diachronic extrapolation is one of the most familiar forms of futurism today, especially as embodied in Ray Kurzweil’s love of exponential growth curves, which are usually diachronic extrapolations. One of the reasons that I remain so skeptical about the claims of Kurzweil and other singulatarians (even though I have learned a lot about them recently and have a less negative picture overall than initially) is the heavy reliance on diachronic extrapolation in their futurism. I frequently cite specific examples of failed exponential growth curves or technologies (like chemical rockets) that seem to be stuck in a technological rut (what I have called a stalled technology), experiencing little or no development (and certainly not exponential development), and I do this because readers usually find specific, particular examples persuasive.
I have discovered over the course of many conversations that most people tune out extended theoretical expositions, and only sort of wake up and pay attention when you give a concrete example. So I do this, to the best of my ability. But really, the dispute with diachronic extrapolation (and particular schools of futurist thought that employ diachronic extrapolation to the exclusion of other methods, such as the singulatarians) is theoretical, and all the examples in the world aren’t going to get to the nub of the problem, which must be given the theoretical exposition that it deserves. And the nub of the problem is simply that diachrony over significant periods of time cannot be pursued in isolation, since any diachronic extrapolation will interact with changed conditions over time, and this interaction will eventually come to constitute the consequences as much as the original trend diachronically extrapolated.
Diachronic extrapolation may be derailed by historical singularities, but it is far more frequent that nothing so discontinuous as a singularity need happen in order for a straightforward extrapolation of present trends to fail to be realized. I specifically single out diachronic extrapolation in isolation, because the most frequent form of failed futurism is to take a trend in the present and to project it into the future, but any futurism worthy of the name must understand events in both their synchronic and diachronic context; isolation from succession in time is just as invidious as isolation from interaction across time. This simultaneous synchrony and diachrony resembles a chain reaction of ever-growing consequences from the initial point of departure.
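The difference between an isolated diachronic extrapolation and one that interacts with changed conditions can be sketched numerically. The exponential and logistic curves below are my own illustration, with an arbitrary growth rate and carrying capacity; they model no particular technology, but show how a trend and its interacting counterpart agree early and diverge wildly later.

```python
import math

def exponential(t: float, x0: float = 1.0, r: float = 0.1) -> float:
    """Naive diachronic extrapolation: project the early growth
    rate forward in isolation, as if no interactions intervene."""
    return x0 * math.exp(r * t)

def logistic(t: float, x0: float = 1.0, r: float = 0.1,
             K: float = 100.0) -> float:
    """The same early trend embedded in interactions, with a
    carrying capacity K standing in for changed conditions."""
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

# Near the point of departure the two are almost indistinguishable...
print(exponential(10), logistic(10))    # both roughly 2.7

# ...but far out, the isolated extrapolation overshoots by orders
# of magnitude while the interacting trend has long since leveled off.
print(exponential(100), logistic(100))  # ~22026 vs ~100
```

This is why a growth curve fitted to its early, quasi-exponential phase tells us almost nothing about which regime the future belongs to.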
In my two immediately previous posts — Addendum on Automation and the Human Future and Bertrand Russell as Futurist — I dealt obliquely with the problems of diachronic extrapolation. Predicting technogenic unemployment on the basis of contemporary automation, or predicting a bifurcation between annihilation or world government, is a paradigm case of diachronic extrapolation that fails to sufficiently take into account future interactions that will become as important or more important than the diachronically extrapolated trend.
This was the point that I was trying to make in Addendum on Automation and the Human Future when I wrote:
I am willing to admit without hesitation that, 250 years from now, we may well have realized a near-automated economy, and that this automation of the economy will have truly profound and far-reaching socioeconomic consequences. However, the original problem then becomes a different problem, because so many other things, unanticipated and unprecedented things, have changed in the intervening years that the problem of labor and employment is likely to look completely different at this future date.
In other words, a diachronic extrapolation of current employment trends — technogenic unemployment, new jobs created by new industries, and perennial problems of unemployment and underemployment — is helpful in so far as it goes, but it doesn’t go nearly far enough in capturing the different world that the future will be.
Similar concerns hold for Russell’s failed futurism that I reviewed in Bertrand Russell as Futurist: Russell took several trends operating at present — war, nuclear weapons, anarchic competition among nation-states — and extrapolated them into the future as though nothing else would happen in history except this closely related group of strategically significant trends.
In my post on Russell’s futurism I cited his essay “The Future of Man”, but Russell made the same point innumerable times. In his first essay on the atomic bomb, “The Bomb and Civilization,” he wrote:
Either war or civilization must end, and if it is to be war that ends, there must be an international authority with the sole power to make the new bombs. All supplies of uranium must be placed under the control of the international authority, which shall have the right to safeguard the ore by armed forces. As soon as such an authority has been created, all existing atomic bombs, and all plants for their manufacture, must be handed over. And of course the international authority must have sufficient armed forces to protect whatever has been handed over to it. If this system were once established, the international authority would be irresistible, and wars would cease. At worst, there might be occasional brief revolts that would be easily quelled.
And in his book-length study of the same question, Has Man a Future? Russell made the same point again:
“So long as armed forces are under the command of single nations, or groups of nations, not strong enough to have unquestioned control over the whole world — so long it is almost certain that sooner or later there will be war, and, so long as scientific technique persists, war will grow more and more deadly.”
Bertrand Russell, Has Man a Future? New York: Simon & Schuster, 1962, p. 69
We have seen that armed forces continue to be under the command of individual nation-states, and in fact they continue to go to war with each other. Moreover, scientific technique has markedly improved, and while the construction of weapons of mass destruction remains today a topic of considerable political comment, the availability of improved weapons of mass destruction did not automatically or inevitably lead to global nuclear war and human extinction.
In the same book Russell went on to say:
“…it seems indubitable that scientific man cannot long survive unless all the major weapons of war, and all the means of mass destruction, are in the hands of a single authority, which, in consequence of its monopoly, would have irresistible power and, if challenged to war, could wipe out any rebellion within a few days without much damage except to the rebels.”
Bertrand Russell, Has Man a Future? New York: Simon & Schuster, 1962, p. 70
Reading these comments now, we can see in hindsight that one of the major strategic trends of the second half of the twentieth century that Russell missed was the rise in the efficacy of asymmetrical resistance to irresistible power. Russell does not seem to have recognized that authorities in possession of de facto irresistible power might choose not to annihilate a weaker power because of global opinion and the hit that such an actor would take to its soft power if it simply wiped out a rebellion. Moreover, the wide distribution of automatic weapons — not weapons of mass destruction — proved to be a disruptive force in global political affairs by providing just enough friction to the military operations of great powers that rebellions could not be wiped out within a few days.
The rise of twentieth century guerrilla resistance and rebellion was an important development in global affairs, and a development not acknowledged until it was already a fait accompli, but I don’t think that it constituted an historical singularity — as it is part of a devolution of warfare rather than a breakthrough to a new order of magnitude of war (which seems to have been what Russell feared would come about).
It has been said (by L. P. Hartley, a contemporary of Russell) that the past is a foreign country. This is true. It is also true that the future is a foreign country. (Logically, these two claims are identical; every present is the future to some past.) We ought to make no pretense to false familiarity with the future, since they do things differently there.
. . . . .
. . . . .
. . . . .
21 March 2013
In 1948, shortly after the end of the Second World War and the first use of atomic weapons, Bertrand Russell wrote an essay titled, “The Future of Man”, apparently published in The Atlantic in 1951 (and subsequently collected in Russell’s Unpopular Essays). Russell opened his essay with a sweeping prediction:
Before the end of the present century, unless something quite unforeseeable occurs, one of three possibilities will have been realized. These three are: —
1. The end of human life, perhaps of all life on our planet.
2. A reversion to barbarism after a catastrophic diminution of the population of the globe.
3. A unification of the world under a single government, possessing a monopoly of all the major weapons of war.
I do not pretend to know which of these will happen, or even which is the most likely. What I do contend is that the kind of system to which we have been accustomed cannot possibly continue.
Russell numbered three possibilities for the future, but there is a fourth, which we can call the zeroeth possibility: something quite unforeseeable. Russell left himself an out, but even with the out, I will argue, he got it wrong.
In any case, here are Russell’s four possibilities, which closely correspond to several categories of futurism hotly debated at the present time:
● 0th scenario: unforeseeable developments — this is Russell’s singularity, i.e., the occurrence of an event so discontinuous with previous history that it results in a “prediction wall” that prevents us from seeing or understanding subsequent historical developments.
● 1st scenario: human extinction — following the use of nuclear weapons to end the Second World War, Russell (like Jaspers and other contemporaneous philosophers) was fully aware of anthropogenic existential risks, of which human extinction from nuclear war is a paradigm case, so this is one of Russell’s qualitative risk categories.
● 2nd scenario: global catastrophic failure — Russell identified a two-fold global catastrophic event — drastic diminution of the human population followed by a return to barbarism — which obviously followed from his concern that the next war would be so catastrophic as to end civilization (this is a scenario that also worried Einstein, who famously said, “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”). Whether we consider this a global catastrophic risk, or a form of subsequent ruination, this is another of Russell’s qualitative risk categories.
● 3rd scenario: world government — again like Einstein, Russell was an advocate for world government, and thought it likely the only means by which we could escape our own destruction. In the immediate post-war period, when the US had a nuclear monopoly, Russell actually advocated that the US should use its nuclear monopoly to assert global hegemony and enforce a world government. Later, Russell was to become much more well known for protesting against nuclear weapons, being sharply critical of the Cold War, and writing telegrams to both Khrushchev and Kennedy during the Cuban Missile Crisis.
It seems to me beyond dispute that human life has not come to an end (Russell's 1st scenario), that human society has not reverted to barbarism after a catastrophic diminution of population (Russell's 2nd scenario), that the world has not been unified under a single government (Russell's 3rd scenario), and that nothing quite unforeseen has happened (Russell's 0th scenario). It is important to spell this out, being entirely explicit about it, because it is easy to imagine that any or all of these four possibilities might be disputed.
Of the strictly quantifiable predictions, any disputant would really have to tie themselves in knots in order to maintain that human beings have gone extinct or that there has been a catastrophic diminution of population. Only the philosophically desperate would attempt to argue that human life, as we knew it in 1951, has ended forever, or that the seven billion souls alive today somehow do not represent a much larger human population than in 1948. However, I must pause to say this, because there clearly are philosophically desperate disputants who are willing to make claims precisely of this character. But having explicitly acknowledged these strategies of desperation, I will henceforth dismiss them and consider them no further, except in so far as they bear upon the other scenarios.
It could be argued, and it has been argued, that the result of the resolution of the Cold War (which did occur before the end of the century in which Russell was writing) was the installation of US global hegemony as a de facto world government. It has also been argued by conspiracy theorists that there is in fact a world government operating behind the scenes, but not in any public and explicit fashion. It might also be argued that the UN and its associated international agencies (like the International Criminal Court) constitute a nascent world government that will someday coalesce into something more robust and capable of exercising authority. Sometimes these latter theses — government by conspiracy and the UN as world government — are merged together into a single claim.
Even if any or all of these claims are true, none of them have accomplished what was central to Russell’s concern for the future: the abolition of war. Near the end of the same essay Russell wrote:
Owing to the increased productivity of labor, it has become possible to devote a larger percentage of the population to war. If atomic energy were to make production easier, the only effect, as things are, would be to make wars worse, since fewer people would be needed for producing necessaries. Unless we can cope with the problem of abolishing war, there is no reason whatever to rejoice in laborsaving technique, but quite the reverse. On the other hand, if the danger of war were removed, scientific technique could at last be used to promote human happiness. There is no longer any technical reason for the persistence of poverty, even in such densely populated countries as India and China. If war no longer occupied men’s thoughts and energies, we could, within a generation, put an end to all serious poverty throughout the world.
The conspiracy theorists argue that war is part of the plan of subduing the global population, but this isn’t at all the kind of world government that Russell had in mind. When Russell and Einstein wrote about world government in the middle part of the twentieth century, they implicitly had in mind the Weberian conception of sovereignty, i.e., a legal monopoly on violence. Both Russell and Einstein wanted to see a single military power that would beneficently impose its unilateral will upon the world so that we would not see the perpetuation of armed conflict between nation-states.
This did not happen, nor did anything like it happen. On the contrary, the second half of the twentieth century demonstrated the possibility of a state of near-permanent armed conflict as definitive of the world order. In order for this to happen, something did come about, which I have called the devolution of warfare — that is to say, parties to conflicts throughout the world realized that nuclear war could lead to global catastrophic risks, so everyone decided to continue to make war, but to do so without atomic weapons. This way human beings could indulge to the full their love of war and violence without making themselves extinct (and thereby ending the fun for everyone).
This brings us to Russell’s 0th scenario: has the devolution of warfare constituted something quite unforeseeable? Not in my judgment. The devolution of warfare is a negative historical development, involving the suppression or limitation of human agency and capabilities previously demonstrated. The limitation of a demonstrated human capability represents a retrograde development, and I don’t think retrograde developments of this kind rise to the level of constituting a singularity in history.
If anything, the development and use of nuclear weapons constituted an historical singularity, therefore creating a “prediction wall,” so that the deliberate tradition of non-use represents a step back from an historical singularity and a return to predictability. Indeed, what some scholars have called “the return of history” might also be called “the return of predictability” in the sense of being a return to the predictable behavior of nation-states in anarchic competition employing conventional weapons.
It could be argued that what Russell did not see was that at precisely the time he was writing his essay a world order of sorts was being forged, in the post-war agreements on economics at Bretton Woods and on political matters at Yalta — and, as importantly, if not more importantly, in how these explicitly formulated agreements were worked out in practice, sometimes through open warfare, and usually through superpower competition, as in the Berlin Airlift. This de facto world order essentially held throughout the period that Russell considered in his essay — the second half of the twentieth century. Since the actual working out of these agreements in practice was as essential as the agreements themselves, we cannot blame Russell for a lack of prescience in not recognizing in Bretton Woods and Yalta the foundations of the post-war world. And I don't think that anything in that war-torn yet stable post-war world could be said to have fulfilled any of Russell's predictions.
Now that the post-war world that Russell failed to recognize as it was taking shape has finally unraveled, we find ourselves once again contemplating the future with great uncertainty, and asking ourselves about the possibilities of radical historical discontinuity (i.e., a singularity), global catastrophic risks, existential risks, and world governance. Dante similarly found himself asking questions of this sort at the very earliest moment when the scholastic synthesis of the medieval world was beginning to unravel — not only did Dante consider eschatological scenarios that would have constituted a singularity, global catastrophic risks, and existential risks, but he also considered world government in his De Monarchia. But Dante was a great poet, and great poets are sensitive souls, likely to hear the rumbling on the horizon even when the rest of us are blissfully unaware.
Perhaps whenever the world finds itself at a point of historical transition, grand narratives of transition are contemplated — but in the final analysis (the Hegelian analysis, in which the owl of Minerva takes flight only with the setting of the sun) we usually end up muddling through in the best human tradition, rarely realizing any grand narrative.
. . . . .
. . . . .
. . . . .
17 March 2013
Since posting Automation and the Human Future a few days ago, a reader has directed my attention to Technological Unemployment Amidst Stagnation at All Systems Need A Little Disorder by Ashwin Parameswaran. I have previously mentioned Ashwin Parameswaran’s blog, Macroeconomic Resilience, in my post Self-Dissimilarity.
While my last post credited the fear of technogenic unemployment primarily to recession-induced pessimism, Parameswaran takes technogenic unemployment very seriously, and anticipates “Transitioning To The Near-Automated Economy,” even considering the changes that must come about in education as this transition is made. What Parameswaran writes is so wonderfully sane and reasonable, and I agree with so much of it (indeed, it warmed my heart to see him refer to our economy today as “neo-feudal” as this is a point that I have made many times), that I hesitate to differ with him, and I don’t need to differ with Parameswaran too much if we adjust our expectations to la longue durée and make it clear that we are not talking about what is going to happen within 25 years or so.
I am certainly not beyond speculating on the possibility of very different employment structures. In my post Counterfactual Conditionals of the Industrial Revolution, I suggested the possibility of an industrial revolution of a different sort — an industrial revolution resulting in a society in which the supply and the demand for labor were not nearly so close to being in equilibrium as they are today. For despite the problems of unemployment that plague advanced industrialized societies, the astonishing thing is not that there is unemployment, but rather that the supply and demand of labor are so nearly identical. In a different kind of society, a different kind of industrial civilization, this approximation of employment demand to employment supply might not obtain.
As long as we take a sufficiently long time-horizon I am willing to agree that we will eventually be transitioning to a near-automated economy. In a comment on the Los Angeles Times article L.A. 2013 — concerning an article from 03 April 1988 (from the Los Angeles Times Magazine) that sought to predict a quarter century into the future, to 2013 — Yves Rubin wrote…
“In general, such futuristic articles should multiply time spans by at least 10. Downtown Los Angeles “may” look like in this article’s cover photo in 250 years!”
I largely agree with this. In 25 years we see little change, but in 250 years we are likely to see significant change. Think back to the world 250 years before the present — the world of 1763, when the Treaty of Paris was signed, ending the Seven Years’ War — and if we compare that world, without electricity, without the internal combustion engine, before the industrial revolution, and before the United States existed, with our world today, we can see how radical the changes to the familiar world can be in a future an order of magnitude beyond the modest 25 years of the 1988 article about LA.
I am willing to admit without hesitation that, 250 years from now, we may well have realized a near-automated economy, and that this automation of the economy will have truly profound and far-reaching socioeconomic consequences. However, the original problem then becomes a different problem, because so many other things, unanticipated and unprecedented things, have changed in the intervening years that the problem of labor and employment is likely to look completely different at this future date. If the near-automated economy becomes a reality in 250 years — a scenario that I will not dispute — I don’t think that this will be much of a problem, because we will need machines producing the goods we need to expand the human presence in the Milky Way. Seven billion people is a lot on the surface of the Earth — and there will be even more people by that time — but when spread out in the galaxy, seven billion human beings isn’t even enough to scratch the surface, as it were.
The transition to a near-automated economy (contemplated in isolation from parallel synchronous changes) would require adjustments so radical that it would be an open question, once these changes were in place and the near-automated economy is up and running, whether we would still be living in the same old industrial-technological civilization we have come to know and love, or whether this historical discontinuity was sufficient to cause a rupture that results in the constitution of an entirely new civilization — perhaps even constituting a preemption event that ends industrial-technological civilization by replacing it with whatever comes next. Over time, these adjustments will happen more or less naturally, but contemplated in one fell swoop the necessary adjustments seem incomprehensibly radical.
In the article Real Robot Talk in The Economist that I quoted in my last post, Automation and the Human Future, the author wrote that, “modern economies continue to use wages as the primary means by which purchasing power is distributed.” What mechanism other than wages can be employed as a means for the distribution of purchasing power? How could goods and services be allocated within an economy without the quantification that wages effect? (The problem is similar to that of allocating capital and resources within a socialist economy: how is capital to be allocated to enterprises without a pricing mechanism?)
This is another example of thinking in conventional terms about a time in the future when conventional assumptions will no longer hold. By the time the automated economy will seriously alter social relationships, so many other things will have happened, and will be happening, that terms like “labor” and “capital” and “goods” and “services” will have come to take on such different meanings that to formulate things in the old way would be nothing but an anachronism.
It is to be expected that measures will be taken in the attempt to preserve the present structure of civilization as long as possible (and in so doing to preserve the familiar meanings of familiar terms), and some of these measures may seem quite drastic in their attempts to preserve certain institutions. For example, we may see mass mobility of labor across nation-state boundaries allowing technogenically superfluous labor to seek opportunities for work in regions of the world not yet transformed by the technologies of automated production. As entrenched as the nation-state is in our contemporary thought, it is not as entrenched as our idea of civilization, and we would sooner compromise the nation-state and the international order based upon the nation-state than we would allow our civilization to lapse.
Yet, in the fullness of time, not only will our nation-states lapse, but our distinctive form of civilization will lapse also, and it will be replaced by another form of civilization, as yet unknown to us.
It is one of the distinctive features of civilization that the problems intrinsic to a given form of civilization emerge simultaneously with the civilization and disappear with the disappearance of that civilization; that is to say, for the most part, the problems of a particular form of civilization are not passed along to new forms of civilization, which have their own problems. I take this to be one of the most fascinating features of civilization, and I don’t think that it receives sufficient attention in the study of civilization. What it implies is that, like an artist’s work, a civilization’s problems are never resolved, only abandoned.
The problem of royal legitimacy, for example, scarcely exists today, and in so far as it exists at all it only exists as a holdover from an earlier form of civilization that no longer exists, as is the case with the constitutional monarchies of Europe. But the intense debates over the divine right of kings simply don’t exist any more. The problem was never “solved” but was intrinsic to the form of civilization in which royal authority was central, and once royal authority was no longer the central organizing principle of civilization, the “problem” of royal authority, its source and its legitimacy, simply disappears.
Of course, one of the ways in which one kind of civilization succeeds another is through a radical innovation that “solves” (in a sense) the problems of the earlier civilization, but in so “solving” the problem another kind of civilization is created, and so the solution does not obtain within the previous civilizational paradigm; it defines a new civilizational paradigm, with its own problems (to become manifest in the fullness of time) awaiting a solution that will initiate another civilizational paradigm.
Automated production issuing in maximized abundance and the demise of employment as we know it today would constitute a transition to a form of civilization distinct from the industrial-technological civilization that we know today. The emergence of a future civilization in which maximized abundance is an established fact and human labor superfluous to it would also constitute a changed socioeconomic context, one that would interact with all other synchronous historical events transpiring in parallel and therefore standing in mutual relations of influence.
. . . . .
. . . . .
. . . . .
14 March 2013
During the early years of the industrial revolution, people (including young children) worked the kind of hours in factories that they had been accustomed to working on farms during agrarian civilization. That meant a lot of 14 and 16 hour days. After the initial misery of the “factory system,” things got sorted out and the hours of the work day fell precipitously. Eventually, the work week fell to a standard 40 hours, though in the most productive economies in the world today many if not most people routinely put in overtime hours.
Futurists, however, instead of seeing this declining workweek in historical context as a one-time transition from one kind of social organization to another, forecast that the work week would go on shrinking, from 40 hours to 30 hours, from 30 hours to 20 hours, and eventually automation would make human labor unnecessary. Given this forecast, one of the great social problems that industrial civilization would have to face would be that of what everyone would do in a society of maximized abundance and scarce employment.
It was widely thought by “progressive” thinkers that Europe was on the cutting edge of this revolution in labor and employment, as many European countries statutorily limited the work week to a certain number of hours. In far more recent predictions it was suggested that the vast common market created by the European Union would come to dominate the world economy. (Up until the recent financial crisis, Parag Khanna was predicting the ascendancy of Europe as a global force.) Yet European economies proved stagnant, and not a vibrant source of innovation and growth, whether economic or technological.
The optimistic futurism of the 1970s is especially easy to ridicule (though it is often no more wide of the mark than more recent futurist predictions), and I think that this is because the early Space Age of the 1960s significantly raised hopes and expectations; when these hopes and expectations were not swiftly gratified with jetpacks, flying cars, and vacations to the moon, the whole enterprise of technological futurism fell into disrepute.
Many supposedly “failed” predictions of futurism — supposedly falsified by history like the political triumph of a given economic system and secularization — may yet come true but on a time scale that lies beyond the brief attention spans of the mass media. Given the fact that big ideas move very slowly through history, like the passage of large prey through the gut of a snake, and given the tendency of the mass media to build up the idea of the moment into a kind of hysteria, only to see interest in that idea collapse soon after, it is nearly inevitable that the same ideas will come up time and again as they continue their passage through contemporary history, going through periods of being considered prescient alternating with periods of being believed to have been “disproved” by history.
Recently the once-discredited futurist idea of widespread automation leading to maximized material abundance issuing in sharply increased and persistent unemployment has been making a significant comeback in the popular press. Let’s make a quick review of how the idea appeared in mid-twentieth century futurism.
In a book intended to be a non-hyped, non-flashy exercise in futurism, Stuart Chase made the case for automation and posed the problem of persistent unemployment for a mass society:
“Computers and automatic mechanism have already taken over a great deal of routine work, such as bank bookkeeping, and they are expected to take over a great deal more. Not only large plants and offices will be computerized, but also small organizations, as the hardware becomes less costly. What then will happen to people? …If people have no jobs, how can they buy the products made by the workers who remain? If, on the other hand, it is possible to subsidize the jobless as consumers, what happens to their nervous systems, self-confidence, and character? Most of us would rather be occupied than not… but in what form?”
Stuart Chase, The Most Probable World, 1968, Chapter 10, “Is man a working animal?,” p. 136
The idea of automation even plays a central role in Valerie Solanas’ S.C.U.M. Manifesto, where the benefits of automation are accepted uncritically:
“There is no human reason for money or for anyone to work more than two or three hours a week at the very most. All non-creative jobs (practically all jobs now being done) could have been automated long ago, and in a moneyless society everyone can have as much of the best of everything as she wants. But there are non-human, male reasons for wanting to maintain the money system…”
Others saw further and thought more critically. Only a year after Stuart Chase’s book, Victor C. Ferkiss had a much more grounded understanding of what technology would mean in the workplace, and his account gives a sense of technological dystopia à la Metropolis, in contradistinction to the wide-eyed technological utopianism that mostly prevailed when he wrote the following:
“Automation has seemingly done little to reduce the drudgery of work. Where the assembly line exists, it is still irksome… Where the old centralized rigid processes have been automated with machines taking over routine tasks, working conditions, especially psychological one, have not improved. Such evidence as exists indicates that the watchers of dials — the checkers and maintainers — are likely to be lonely, bored, and alienated, often feeling less the machine’s master than its servant. Dealing with computers can be as frustrating for the worker as for the client-consumer, with data on a print-out even more difficult to check and rectify than that in human accounts or reports.”
Victor C. Ferkiss, Technological Man: The Myth and the Reality, 1969 (Signet Mentor 1970), Chapter 6, “Technological Change and Economic Inertia,” pp. 122-123
Such quotes and observations could be multiplied at will; I took my quotes from books that I happened to have on hand, but, as I wrote above, it was a prominent feature in mid-twentieth century futurism to ask what would become of the working masses once automation deprived them of labor and therefore — presumably, for the privileged few writing about the problem — of the content and meaning of working-class lives.
Now the problem of job loss due to automation is being posed again, and almost in precisely the same terms, notwithstanding the computing and telecommunications revolution that has occurred in the meantime. I cannot help but speculate that these elite worries over restive, unemployed masses are almost entirely due to the stagnant if not depressed condition of the global economy since the financial crisis that began unfolding with the sub-prime mortgage debacle in the US, and subsequently moved on to other unemployment-inducing crises around the world.
An article in The Economist, Real robot talk (from 01 March 2013), revisits the theme of technologically-induced (we might even say technogenic) unemployment from automation and robotics. The article finishes with these wise observations:
“Technological progress sufficient to cause these kinds of dislocations should also generate overall economic gains large enough to make everyone better off. But just because everyone could be made better off by progress doesn’t mean that everyone will be made better off. There must be an institutional framework in place to ensure that the gains from growth are shared.”
However, the rest of the article is not nearly so enlightening. The Economist article offers three possible responses to technogenic unemployment:
1. more education for less skilled workers
2. protecting less skilled jobs through regulation
3. direct wealth transfers
I am struck by the utter lack of imagination in these three proposals. If this is all that an elite publication like The Economist has to offer, clearly we are in serious trouble. The whole idea of trying to educate everyone to the level that the elites believe themselves to have attained begs so many questions that it is difficult to know where to start. Therefore I will limit myself only to the comment that many if not most entrepreneurs are dropouts, and the highly educated work for the entrepreneurs who create companies, and so create jobs and opportunities and increase productivity. As for protecting low skilled jobs, this is perhaps the worst possible suggestion, since it would directly impact the increase in productivity that could potentially free those in wage-slave drudgery from their mechanizable tasks. And direct wealth transfers have been tried, almost always with disastrous results.
A similar recent article that is a sign of the times is The Rise of the Robots by Robert Skidelsky. (I won’t quote Skidelsky, since his website says, “Reprinting material from this Web site without written consent from Project Syndicate is a violation of international copyright law.”) A Manichean contrast between optimists and pessimists runs through Skidelsky’s piece, as though the parties to the argument had nothing on their side except temperamental inclinations.
This isn’t about optimists or pessimists, except in so far as present-day commentators are pessimistic because their banker and journalist friends are feeling the pinch, too. That’s what happens when a persistent recession takes a chunk out of contemporary economic history. When the present downturn has passed — it hasn’t passed yet, and by the time it’s over I suspect many will come to speak of a global “lost decade” — I predict that talk of technogenic unemployment will also pass until the next crisis.
In the longer term of industrial-technological civilization, abundance may yet become a problem, and meaningful work a privilege, but we are a very long way from this being the case. The industrial revolution is only now transforming Asia, and it has yet to transform Africa. The problem of global technogenic unemployment cannot be a persistent economic blight until the global economy entire has been technologically transformed by industrialization — incidentally, the same conditions that must obtain for the experimentum crucis of Marxism (another supposedly disconfirmed idea from history).
. . . . .
. . . . .
. . . . .
6 March 2013
Frank Knight on risk and uncertainty
Early Chicago school economist Frank Knight was known for his work on risk, and especially for the distinction between risk and uncertainty, which is still taught in economics and business courses. Like Schumpeter, Knight was interested in the function of the entrepreneur in the modern commercial economy, and he employed his distinction between risk and uncertainty in order to illuminate the function of the entrepreneur.
Although it is easy to conflate risk and uncertainty, and to speak as though facing a risk were the same thing as facing uncertain or unknown circumstances, Knight doesn’t see it like this at all. A risk can be quantified and calculated, and because risks can be quantified and calculated, they can be controlled. This is the function of insurance: to quantify and price risk. If you have correctly factored risk into your calculation, it is no longer an uncertainty. You might not know the exact date or magnitude of losses, but you know statistically that there will be a certain number of losses of a certain magnitude. It is the job of actuaries to calculate this, and one buys insurance to control the risk to which one is exposed.
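The actuarial logic described above can be sketched in a few lines of code. This is only a toy illustration of Knight's point, not anything from his text: all the numbers below (loss probability, loss size, loading factor) are invented for the example.

```python
import random

# Knightian "risk": the loss distribution is known, so it can be priced.
# All figures here are hypothetical, chosen only to illustrate the idea.
loss_probability = 0.01   # assumed chance a policyholder suffers a loss in a year
loss_magnitude = 50_000   # assumed size of each loss
num_policies = 10_000
loading = 1.2             # markup over expected loss for costs and profit

expected_loss_per_policy = loss_probability * loss_magnitude
premium = expected_loss_per_policy * loading
print(f"Expected loss per policy: {expected_loss_per_policy:.2f}")
print(f"Premium charged: {premium:.2f}")

# Simulate one year across the whole pool: although no one knows which
# policyholder will suffer a loss, the aggregate outcome is statistically
# predictable -- which is exactly what makes it risk rather than uncertainty.
random.seed(42)
total_losses = sum(
    loss_magnitude for _ in range(num_policies) if random.random() < loss_probability
)
total_premiums = premium * num_policies
print(f"Premiums collected: {total_premiums:.0f}, losses paid: {total_losses}")
```

Knightian uncertainty, by contrast, is precisely the situation in which no `loss_probability` can be written down at all, because there is no class of comparable instances from which to estimate one.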
The ordinary business of life, and of business, according to Knight, involves risk management, but the unique function of the entrepreneur is to accept uncertainty that cannot be quantified, priced, or insured. The entrepreneur makes his profit not in spite of uncertainty, but because of uncertainty. No insurance can be bought for uncertainty, so that in taking on an uncertain situation the entrepreneur enters into a realm in which it is recognized that there are factors beyond control. If he is not destroyed financially by these uncontrollable factors, he may profit from them, and this profit is likely to exceed the profit made in ordinary business operations exposed to risk but not to uncertainty.
Here is how Knight formulated his distinction between risk and uncertainty:
To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term “uncertainty” for the latter. The word “risk” is ordinarily used in a loose way to refer to any sort of uncertainty viewed from the standpoint of the unfavorable contingency and the term “uncertainty” similarly with reference to the favorable outcome; we speak of the “risk” of a loss, the “uncertainty” of a gain. But if our reasoning so far is at all correct, there is a fatal ambiguity in these terms which must be gotten rid of and the use of the term “risk” in connection with the measurable uncertainties or probabilities of insurance gives some justification for specializing the terms as just indicated. We can also employ the terms “objective” and “subjective” probability to designate the risk and uncertainty respectively, as these expressions are already in general use with a signification akin to that proposed.
Frank Knight, Risk, Uncertainty, and Profit, CHAPTER VIII, STRUCTURES AND METHODS FOR MEETING UNCERTAINTY
Knight went on to add…
The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique.
Frank Knight, Risk, Uncertainty, and Profit, CHAPTER VIII, STRUCTURES AND METHODS FOR MEETING UNCERTAINTY
The growth of knowledge and experience can transform uncertainty into risk if it contextualizes a formerly unique situation in such a way as to demonstrate that it is not unique but belongs to a group of instances. Of the tremendous gains made in the space sciences during the last forty years — even during our selective space age stagnation — it could be said that the function of this considerable gain in knowledge has been to transform uncertainty into risk. But this goes only so far.
Even if the boundary between risk and uncertainty can be pushed outward by the growth of knowledge, the same growth of civilization that attends the growth of knowledge and technology means that the boundaries of civilization itself will also be pushed further out, with the result being that we are likely to always encounter further uncertainties even as old uncertainties are transformed by knowledge into risk.
The evolution of the existential risk concept
In many recent posts I have been discussing the idea of existential risk. These posts include, but are not limited to, Moral Imperatives Posed by Existential Risk, Research Questions on Existential Risk, and Six Theses on Existential Risk. The idea of existential risk is due to Nick Bostrom. (I first heard about this at the first 100YSS symposium in Orlando in 2011, when I was talking to Christian Weidemann.)
Nick Bostrom defined existential risk as follows:
Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
An existential risk is one where humankind as a whole is imperiled. Existential disasters have major adverse consequences for the course of human civilization for all time to come.
“Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Nick Bostrom, Professor, Faculty of Philosophy, Oxford University, Published in the Journal of Evolution and Technology, Vol. 9, No. 1 (2002)
In his papers on existential risk and in the book Global Catastrophic Risks, Bostrom steadily expanded and refined the parameters of disasters that have (or would have) major adverse consequences for human beings and their civilization.
The table from an early existential risk paper above divides qualitative risks into six categories. The table below, from the book Global Catastrophic Risks, includes twelve qualitative risk categories and implies another eight; the table further below, from a more recent paper, includes fifteen qualitative risk categories and implies another nine. From a philosophical point of view, these further distinctions represent an advance in clarity, contextualizing both existential risks and global catastrophic risks in a matrix of related horrors.
The specific possible events that Bostrom describes range from the imperceptible loss of one hair to human extinction. Recently in Moral Imperatives Posed by Existential Risk I tried to point out how further distinctions can be made within the variety of human extinction scenarios, and that some distinct outcomes might be morally preferable over other outcomes. For example, even if human beings were to become extinct, we might want some of our legacy to remain to potentially be discovered by alien species visiting our solar system. Given the presence of space probes throughout our solar system, it seems highly likely that these would survive any human extinction scenario, so that we have left some kind of mark on the cosmos — a cosmic equivalent of “Kilroy was here.”
A further distinction can be made, however, and the distinction that I would like to urge today is that between existential risks and existential uncertainties.
The need to explicitly formulate existential uncertainty
Once the distinction is made between existential risks and existential uncertainties, we recognize that existential risks can be quantified and calculated. Ultimately, existential risks can also be insured. The industrial and financial infrastructure to do this is not now in place, although we can clearly see how it might be built. And this much is obvious, because much of the discussion of existential risk focuses on potential mitigation efforts. Existential risk mitigation is insurance against extinction.
We can clearly understand that we can guard against the existential risks posed by massive asteroid impacts by a system of observation of objects in space likely to cross the path of the Earth, and by building spacecraft that could deflect or otherwise render harmless such threatening asteroids. It was once thought that the appearance of comets or “new stars” (novae) in the sky heralded the death of kings or the end of empires. No longer. This is the perfect example of a former uncertainty that has been transformed into a risk by the growth of knowledge (or, at the very least, is in the process of being transformed from an uncertainty into a risk).
We can also clearly see that we could back up the Earth’s biosphere against a truly catastrophic global disaster by transplanting Earth-originating life elsewhere. Far in the future we can even understand the risk of the sun swelling into a red giant and consuming the Earth in its fires — unless by that time we have moved the Earth to an orbit where it remains safe, or perhaps even transported it to another star. All of these are existential risks where “risk” is used sensu stricto.
There are a great many existential risks and global catastrophic risks that have been proposed. When it comes to geological events — like massive volcanism — or cosmological events — the death of our sun — the sciences of geology and cosmology are likely to mature to the point where these risks are quantifiable, and if industrial-technological civilization continues its path of exponential development, we should also someday have the technology to adequately “insure” against these existential risks.
The vagaries of history and civilization
When it comes to scenarios that involve events and processes not of the variety that contemporary natural science can formulate, we are clearly pushing the envelope of existential risks and verging on existential uncertainties. Such scenarios would include those predicated upon the development of human history and civilization. For example, scenarios of wars of an order of magnitude that far exceed the magnitude of the global wars of the twentieth century are on the outer edges of risk and, as they become more speculative in their formulation, verge onto uncertainty. Similarly, scenarios that involve the intervention of alien species in human history and human civilization — alien invasion, alien enslavement, alien visitation, etc. — verge onto being existential uncertainties.
The anthropogenic existential risks that are of primary concern to Nick Bostrom, Martin Rees, and others — risks from artificial intelligence, machine consciousness, unintended consequences of advanced technologies, and the “gray goo” problem potentially posed by nanotechnology — are similarly problematic as risks, and many must be accounted as uncertainties. In regard to the anthropogenic dimension of many existential uncertainties I am reminded of a passage from Carl Sagan’s Cosmos:
“Biology is more like history than it is like physics. You have to know the past to understand the present. And you have to know it in exquisite detail. There is as yet no predictive theory of biology, just as there is not yet a predictive theory of history. The reasons are the same: both subjects are still too complicated for us. But we can know ourselves better by understanding other cases. The study of a single instance of extraterrestrial life, no matter how humble, will deprovincialize biology. For the first time, the biologists will know what other kinds of life are possible. When we say the search for life elsewhere is important, we are not guaranteeing that it will be easy to find – only that it is very much worth seeking.”
Carl Sagan, Cosmos, CHAPTER II, One Voice in the Cosmic Fugue
This strikes me as one of the most powerful and important passages in Cosmos. When Sagan writes that, “[t]here is as yet no predictive theory of biology, just as there is not yet a predictive theory of history,” while leaving open the possibility of a future predictive science of biology and history — he wrote “as yet” — he squarely recognized that neither biology nor human history (much of which derives more or less directly from biology) can yet be predicted or quantified or measured in a scientific way. If we had a science of history, such as Marx thought we had discovered, then the potential disasters of human history could be quantified, and we could insure against them.
Well, we can insure against some eventualities of history, though certainly not against all. This is a point that Machiavelli makes:
It is not unknown to me how many men have had, and still have, the opinion that the affairs of the world are in such wise governed by fortune and by God that men with their wisdom cannot direct them and that no one can even help them; and because of this they would have us believe that it is not necessary to labour much in affairs, but to let chance govern them. This opinion has been more credited in our times because of the great changes in affairs which have been seen, and may still be seen, every day, beyond all human conjecture. Sometimes pondering over this, I am in some degree inclined to their opinion. Nevertheless, not to extinguish our free will, I hold it to be true that Fortune is the arbiter of one-half of our actions, but that she still leaves us to direct the other half, or perhaps a little less.
I compare her to one of those raging rivers, which when in flood overflows the plains, sweeping away trees and buildings, bearing away the soil from place to place; everything flies before it, all yield to its violence, without being able in any way to withstand it; and yet, though its nature be such, it does not follow therefore that men, when the weather becomes fair, shall not make provision, both with defences and barriers, in such a manner that, rising again, the waters may pass away by canal, and their force be neither so unrestrained nor so dangerous. So it happens with fortune, who shows her power where valour has not prepared to resist her, and thither she turns her forces where she knows that barriers and defences have not been raised to constrain her.
Nicolo Machiavelli, The Prince, CHAPTER XXV, “What Fortune Can Effect In Human Affairs, And How To Withstand Her”
What remains beyond the predictable storms and floods of history are the true uncertainties, the unknown unknowns, and these pose a danger we cannot predict, quantify, or insure. They are not, then, risks in the strict sense. They are existential uncertainties.
It could be argued that our inability to take specific, concrete, effective measures to mitigate the obvious uncertainties of life has resulted in religious responses to uncertainty that systematically avoid falsifiability and thereby secure the immunity of hopes to exterior circumstances. Whether or not this has been true in the past, the mere recognition of existential uncertainties is the first step toward rationally assessing them.
Existential risk suggests a clear course of mitigating action; existential uncertainty cannot, on the contrary, be the object of planning and preparation. The most that one can do to address existential uncertainty is to keep oneself open and flexible, ready to roll with the punches, and responsive to any challenge that might arise, meeting it at the height of one’s powers; any attempt to prepare specific measures will be fruitless, and quite possibly counter-productive because of the wasted effort.
. . . . .
. . . . .
. . . . .
. . . . .
25 February 2013
Morally Distinguishable Outcomes in Global Catastrophic Scenarios
Below is Nick Bostrom’s table of qualitative categories of risk. Bostrom and Milan M. Ćirković have together edited a book on Global Catastrophic Risks, which includes this table. Existential risks, that is to say, risks that could result in human extinction, are identified as “an especially severe subset” of global catastrophic risks.
Of existential risks and their potential consequences I recently wrote this:
“When we think about what this means for us, our other ‘priorities’ pale by comparison. Nothing else matters, no matter how apparently pressing, if we are made extinct by an accident of local cosmology.”
Thinking of this further, I realized that there are many ethical presuppositions implicit in my formulation, and that (at least some of) these presuppositions can be spelled out and made explicit.
Bostrom’s table of qualitative risk categories suggests possibilities of scope and intensity beyond those comprised by global catastrophic risk and existential risk, and on the margin of the table we see “Cosmic?” as a possible scope beyond “pan-generational” and “Hellish?” as a potential intensity beyond “Terminal.” Thus what is cosmic and hellish is a qualitative risk category beyond even that of existential risk. I think that there are moral intuitions about catastrophic outcomes that correspond to these almost unthinkable scenarios.
While it would seem that there is little worse that could happen (from a human perspective, i.e., fully informed by anthropic bias) than human extinction, even given our anthropic bias, and therefore our desire to avoid human extinction, there are morally distinguishable outcomes in many different scenarios of global catastrophe and human extinction. Where there is the possibility of morally distinguishable outcomes, there will also be the possibility of ranking these outcomes from the least awful possibility to the worst of all possibilities. There is also the likelihood of moral disagreement over these rankings, and such disagreements over prioritizing existential risk mitigation could prove crucial in future debates over the allocation of civilizational resources to existential risk mitigation. Thus even if existential risk comes to be seen as an overriding priority for human beings and civilization, this is not yet the convergence of human moral effort; room for profound disagreement yet remains.
Considering a range of devastating and catastrophic events that could compromise human life and human civilization, possibly to the point of their extinction, I can think of six scenarios in order of severity:
1. Massive but survivable catastrophe. A global catastrophic risk realized that results in the loss of millions or billions of lives and deals a major setback to civilization, without extinguishing either human beings or human civilization (in Bostrom’s table of qualitative risks these would include global, trans-generational, and pan-generational endurable risks).
2. Catastrophic failure of civilization. A global catastrophic risk realized that results in the catastrophic failure of civilization but does not result in the extinction of human beings. The human population might be drastically reduced to paleolithic population levels, but could potentially rebound. There remains the possibility that civilization might be reconstituted, but this is likely to take hundreds if not thousands of years. (“Global dark age” in the table above.)
3. Human extinction. The first level of human extinction I will call simple extinction: an existential risk realized, which however leaves the Earth intact, and the legacy of human civilization intact. I add this latter qualification because it is possible, even if human beings become extinct, that human civilization might leave monuments that could be appreciated by other sentient species that could visit the Earth. It is even possible (however unlikely) that other species might appreciate the human record of civilization more than we appreciate it ourselves. Thus human extinction need not mean the loss of human cultural legacy. A pandemic that killed only human beings could have this result. (X marks the spot in the table above.)
4. Human extinction with the extirpation of all human legacy. The second level of human extinction I will call compound extinction: an existential risk realized that results in human extinction and the elimination of all (or almost all) signs of human presence, but which leaves the biosphere largely intact, so that the ordinary business of terrestrial life continues largely unchanged. (This is human extinction coupled with “destruction of cultural heritage.”)
5. Catastrophic compromise of the biosphere. The third level of human extinction involves not only the extinction of human beings and all human legacy, but also the extinction of all complex life on Earth. Terrestrial life continues, but is reduced to single-celled organisms. Thus there remains the possibility that life on Earth may recover, but this would probably require billions of years and result in very different life forms.
6. Terrestrial sterilization. The most radical form of realized existential risk is terrestrial sterilization, which results in human extinction, the extirpation of all human legacy, and the elimination of all terrestrial life, i.e., the complete catastrophic failure of the biosphere. From this point there is nothing that can be recovered and no human legacy remains.
I tried to arrange these various morally distinct outcomes on an expanded version of Bostrom’s table of qualitative risk categories, but could not yet find a conceptually neat and straightforward way to do so. Further thought is needed here. I don’t think there is a need to distinguish further qualitative categories of risk beyond existential risk — in other words, we can refer to all of these morally distinct outcomes as outcomes of existential risk, as realized in distinct scenarios. However, one could make such distinctions if it were helpful to do so.
The most radical moral imperative of existential risk is to take existential risk as absolute and as trumping all other concerns, which is what I clearly implied when I wrote that, “…our other ‘priorities’ pale by comparison. Nothing else matters, no matter how apparently pressing…” if we are made (or make ourselves) extinct. This radical position has profound and discomfiting implications.
If we survey the evils of the world, we would be forced to acknowledge that it is better that any or all of these evils continue than that human life should be permanently extinguished, because the continuation of these evils is consistent with the continuation of human life and human civilization. The end of all human life would also mean the end of all the cruelties and inhumanity that we inflict upon our fellow man, and this would be a good and indeed a desirable state of affairs, but from a radical perspective on existential risk we would have to affirm that, as good a state of affairs as this represents, it would not be as morally good as the state of affairs that involves the perpetuation of these evils together with the perpetuation of human life and civilization.
Of course, under most conceivable scenarios there is no reason whatsoever to suppose that we would have to choose between the perpetuation of all the evils of the world and human extinction. That is to say, there is no reason that we cannot work toward the elimination of human evils and the mitigation of existential risks. As a moral thought experiment, however, we can employ the method of isolation and ask whether the survival of human beings and human civilization, together with all the evils this entails, is better than the annihilation of human beings and human civilization, so that neither human good nor human evil remains.
While I would be willing to assert that existential risk mitigation trumps all other concerns, even in a thought experiment in which human evils remain unmitigated, I can easily imagine that there are many who would disagree with this judgment. Moral diversity is a fact of human life, and we must recognize that if some among us (myself included) would be willing to explicitly affirm the radical moral consequences of prioritizing existential risk mitigation, there will be others who will equally explicitly reject a radical prioritization of existential risk mitigation, and who will affirm that it is better that the world should come to an end than that the manifold evils of our time should persist. From this point of view, in view of the limited resources available to human beings, we would do better to direct these resources to the mitigation of human evils than to direct these resources to the mitigation of existential risk.
It is entirely possible that someone might affirm that it is a good thing civilization should be ended, and the idea has incredible romantic appeal that cannot be denied and should not be ignored. Many are the science fiction books and films (for example, think of Logan’s Run or 12 Monkeys) that depict a world empty of human beings and populated only by collapsing buildings and animals hunting in the ruins. This scenario is depicted, for example, in Alan Weisman’s book The World Without Us.
The idea that civilization is evil can easily be extended to the idea that humanity is evil in and of itself. The predictions of the original Club of Rome report of 1972, The Limits to Growth, have been widely discussed on its recent 40th anniversary, but what has not been remarked is the language and tone of the Club of Rome’s early documents (which you will not find on the internet, despite the millions of used copies kicking around). The Club of Rome’s second report, Mankind at the Turning Point (1974), boldly asserted, “The earth has cancer and the cancer is Man.” This kind of rhetoric, which is less common today, can easily play into a principled denial of the moral value of humanity.
And it is easy to understand why. The world is filled with evils, and the most horrific evils are those that human beings perpetrate upon other human beings — homo homini lupus. If we prioritize existential risk mitigation over the mitigation of human evils, we find ourselves forced into the uncomfortable position of tolerating Kantian radical evil, Marilyn McCord Adams’ conception of horrendous evils, and Claudia Card’s atrocities. Imagine the horrors of genocide, torture, and industrialized warfare, and then imagine being forced to admit that it is better that genocides occur, better that torture continues, and better that industrialized warfare persists than that an existential risk be realized. This is a hard saying; nevertheless, this is the argument that must be made, and it is always better to face a hard argument directly than to attempt to avoid it.
In her exposition of what she calls “horrendous evils” in her book Horrendous Evils and the Goodness of God, Marilyn McCord Adams wrote:
“Among the evils that infect this world, some are worse than others. I want to try to capture the most pernicious of them within the category of horrendous evils, which I define (for present purposes) as ‘evils the participation in which (that is, the doing or suffering of which) constitutes prima facie reason to doubt whether the participant’s life could (given their inclusion in it) be a great good to him/her on the whole.’ The class of paradigm horrors includes both individual and massive collective suffering…”
Marilyn McCord Adams, Horrendous Evils and the Goodness of God, Ithaca: Cornell University Press, 1999, p. 26.
She went on to add in the next section:
“I believe most people would agree that such evils as listed above constitute reason to doubt whether the participants’ life can be worth living, because it is so difficult humanly to conceive how such evils could be overcome.”
In the last paragraph of her paper of the same title, Adams again suggests that horrendous evils call into question the possibility of having a life worth living:
“I would go one step further: assuming the pragmatic and/or moral (I would prefer to say, broadly speaking, religious) importance of believing that (one’s own) human life is worth living, the ability of Christianity to exhibit how this could be so despite human vulnerability to horrendous evil, constitutes a pragmatic/moral/religious consideration in its favour, relative to value schemes that do not.”
Marilyn McCord Adams, “Horrendous Evils and the Goodness of God.” Anthologized in The Problem of Evil, edited by Marilyn McCord Adams and Robert Merrihew Adams, Oxford: Oxford University Press, 1990, p. 221.
A generalization of Adams’ argument could easily bring us from the point where horrendous evils make the individual doubt or question that one’s life is worth living to the point where humanity on the whole legitimately, and on principle, questions whether any human life at all is worth living. If humanity comes to decide that horrendous evils overwhelm all value in the world and make human existence utterly meaningless and pointless, then the mitigation of existential risk can come to seem like an evil or an impiety.
Adams finds her answer to this in Christianity; we naturalists cannot appeal to supernaturalistic validation or justification: we must take human evil on its face along with human good, and if we prioritize the mitigation of existential risk (and therefore the continuity of humanity and human civilization), we do so knowing that human evils will continue and are probably ineradicable if not inseparable from human history.
We can actively seek to mitigate human evils, and the effort has intrinsic value, but the intrinsic value of the mitigation of suffering and mundane meliorism can only continue in the case that humanity and organized human activity continue. Therefore the prioritization of the mitigation of existential risk is what makes possible the realization of the intrinsic value of the mitigation of suffering and efforts toward meliorism. With the end of humanity would also come not only an end to all intrinsic goods of human life, but also an end to the intrinsic good of the mitigation of suffering and the effort to make the world a better place.
We can only create a better civilization if civilization continues. If we are perfectibilists, we may believe in the perfectibility of man and indeed even the perfectibility of civilization. This project cannot even be undertaken if humanity and human civilization are cut short in their imperfect state.
. . . . .
. . . . .
. . . . .
. . . . .
21 February 2013
When I returned from my recent trip to Tokyo my sister picked me up at the airport, and on the drive she asked me about the weather. I said that it was cold and windy, but also very clear and sunny. How cold? I had to pause. I didn’t really know how cold it had been. I didn’t even know whether or not it had been below freezing. In a rural environment one would know immediately whether or not the temperature had dropped below freezing, but in the urban intensity of Tokyo there were no obvious (natural) signs of the temperature. One would only know that it was freezing if puddles in the street were frozen over; if there are no puddles, as when it is cold and clear, there are no obvious signs of the temperature. This made me think about the differences between urban and rural life, and ultimately rural and urban civilization.
In Kenneth Clark’s Civilisation: A Personal View the author introduces the idea of a civilized countryside, immediately after describing what he considered to be one of the high points of (urban) civilization in Urbino under Federigo and Guidobaldo Montefeltro:
“…there is such a thing as civilized countryside. Looking at the Tuscan landscape with its terraces of vines and olives and the dark vertical accents of the cypresses, one has the impression of timeless order. There must have been a time when it was all forest and swamp — shapeless and formless; and to bring order out of chaos is a process of civilization. But of this ancient, rustic civilization we have no record beyond the farmhouses themselves, whose noble proportions seem to be the basis of Italian architecture; and when the men of the Renaissance looked at the countryside it was not as a place of ploughing and digging, but as a kind of earthly paradise.”
Kenneth Clark, Civilisation: A Personal View, pp. 112-113, I have selectively Americanized Clark’s irritatingly British orthography
There are several themes in this passage that touch on concerns to which Clark returned repeatedly in his survey of civilization: his mention of “timeless order” invokes his earlier emphasis on permanence and the ambition to engage in monumental, multi-generational projects. Yet it is a bit odd that Clark should mention the romanticization of the countryside during the Renaissance as an earthly paradise, as this points to older models of the countryside as an Arcadian paradise, as in Virgil’s Pastorals, in which shepherds play the lyre and sing poetry to each other. This is an idyllic picture of the Golden Age in which the countryside is most definitely not civilized, but rather a retreat from the corruption of civilization.
It would be easy to dismiss the whole idea of a civilized countryside, both for its internal contradictions and for a romantic idealization of country life that has little to do with the reality of life in the country. However, the civilization of the European Middle Ages, which was a pervasively agrarian civilization, especially in so far as it approximated pure agriculturalism, was essentially a rural civilization. The great manors of feudal lords were located in the countryside because this is where the food production activity that was the basis of the medieval economy was centered. In other words, the economy was centered on the rural countryside, and not on cities.
Certainly during the Middle Ages there were thriving and cosmopolitan cities engaged in sea-borne commerce with the known world, but these were at this time essentially centers of luxury commerce that touched the lives of only a very few persons. The vast majority of the population were peasants working the land; a few percent were landed nobility and a few percent were churchmen. This left only a very small fragment of bourgeoisie — people of the town, i.e., of the burg (bourg) — who were engaged in urban life year-round. This was important, but not central, to the medieval economy. What was central was agrarian production on great landed estates, which were the true measure of medieval wealth. Having money scarcely counted as “wealth.”
It is a bias of industrial-technological civilization to assume that cities are the center of civilization, because cities are the centers of industrial-technological civilization, and the industrial city is the center of industrial production. This early paradigm of industrial cities is already changing as industrial production facilities move to industrial parks on the outskirts of cities, and we tend to identify the great cities as centers of administration, education and research, the arts and cultural opportunities, and so on. But whatever the function of the city, whether producing articles of manufacture or producing prestige requirements, the city is central to the kind of civilization we have created since the end of the Middle Ages and the end of medieval agrarian civilization.
The life of the countryside has its own complexity, but this complexity is of a different order and of a different kind than the complexity of life in the city; in the city, one finds that the primary features of the intellectual landscape are the actions of other human beings whereas in the country the primary intellectual landscape is that of the natural order of things. These differing sources of complexity structure lives differently.
A certain kind of mind is cultivated by urban life in the same way that a certain kind of mind is cultivated by life in the country, which latter of course Marx dismissed as rural idiocy. The mind and life of the country, as opposed to the city, results in its own distinctive institutions. The kind of civilization that emerges in the countryside is the kind of civilization that is going to emerge from the kind of mind that is cultivated by life in the country, and, contrariwise, the kind of civilization that emerges in the city is the kind of civilization that is going to emerge from the kind of mind that is cultivated by urban life.
At least for the moment, the tradition of rural civilization has been lost to us. The great demographic development of our time is the movement of mass populations into urban areas — and the corollary of rural depopulation — as though by a spontaneous agreement the world’s peoples had decided to attempt to prove Doxiadis right about ecumenopolis as the telos of the city and of human life. This demographic trend shows every sign of smoothly extrapolating into the future, so that we can expect even more urban growth and rural depopulation over time.
Nevertheless, it remains possible to consider alternative futures in which this trend is reversed or replaced by a different trend — or even a different civilization. Global networking means that anyone can live anywhere and be in touch with the world’s rapidly changing knowledge. If you have a connection to the internet, you can live in a rural village and not necessarily be subject to the idiocy of rural life that Marx bemoaned. However, this doesn’t seem to be enough right now to keep people in the countryside, especially when all the economic opportunities are to be found in the world’s growing cities.
But there is nothing inevitable about the relentless expansion or indefinite continuation of industrial-technological civilization. Agrarian civilization, like the European Middle Ages with which it is identified, is a completed part of our past, which stands as a whole, with a beginning, a middle, and an end. In this way we can fashion a narrative of agrarian civilization, but we cannot yet fashion a narrative of industrial-technological civilization, since this is today a going concern and not a completed whole. There is a sense in which we can treat scientific civilization — what I have called modernism without industrialism — as a completed whole, a finished era of history. Although I do not regard it as likely, it is possible that our civilization may join the ranks of finished civilizations that have run their course and added themselves to the archive of human history.
I have touched on these possibilities in several posts, as when I have considered Invariant Civilizational Properties in Futurist Scenarios and in my argument for Viking Civilization, which constituted a very different kind of civilization — neither rural nor urban, but mobile, i.e., a nomadic civilization. This latter possibility seems remote, but it is the one that most fascinates me. Other kinds of civilizations have existed in the past; distinct forms remain possible today, however unlikely.
. . . . .
. . . . .
. . . . .
2 February 2013
In my last post, The Science of Time, I discussed the possibility of taking an absolutely general perspective on time and how this can be done in a way that denies time or in a way that affirms time, after the manner of big history.
David Christian, whose books on big history and whose Teaching Company lectures on Big History have been seminal in the field, relates, by way of introduction to his final lectures in which he switches from history to speculation on the future, that the students in his early big history courses felt cut off rather abruptly when he brought them through 13.7 billion years of cosmic history only to drop them unceremoniously in the present without making any effort to discuss the future. It was this reaction that prompted him to continue beyond the present and to try to say something about what comes next.
Another way to understand this reaction of Christian’s students is that they wanted to see the whole of the history they had just been through placed in an even larger, more comprehensive context, and to do this requires going beyond history in the sense of an account of the past. To put the whole of history into a larger context means placing it within a cosmology that extends beyond our strict scientific knowledge of past and future — that which can be observed and demonstrated — and comprises a framework in the same scientific spirit but which looks beyond the immediate barriers to observation and demonstration.
Elsewhere in David Christian’s lectures (if my memory serves) he mentioned how some traditionalist historians, when they encounter the idea of big history, reject the very idea because history has always been about documents and eponymously confined to the historical period when documents were kept after the advent of literacy. According to this reasoning, anything that happened prior to the invention of written language is, by definition, not history. I have myself encountered similar reasoning as, for example, when it is claimed that prehistory is not history at all because it happened prior to the existence of written records, which latter define history.
This is a sadly limited view of history, but apparently it is a view with some currency, because I have encountered it in many forms and in different contexts. One way to discredit any intellectual exercise is to define it so narrowly that it cannot benefit from the most recent scientific knowledge, and then to impugn it precisely for its narrowness while not allowing it to change and expand as human knowledge expands. The explosion in scientific knowledge in the last century has made possible a scientific historiography that simply did not exist previously; to deny that this is history, on the grounds that traditional humanistic history is based on written records, means that we must then define some new discipline, with all the characteristics of traditional history, but expanded to include our new knowledge. This seems like a perverse attitude to me, but for some people the label of their discipline is important.
Call it what you will then — call it big history, or scientific historiography, or the study of human origins, or deny that it is history altogether, but don’t try to deny that our knowledge of the past has expanded exponentially since the scientific method has been applied to the past.
In this same spirit, we need to recognize that a greatly expanded conception of history needs to reach into the future, that a scientific futurism needs to be part of our expanded conception of the totality of time and history — or whatever it is that results when we apply Russell’s generalization imperative to time. Once again, it would be unwise to be overly concerned with what we call this emerging discipline, whether it be the totality of time or the whole of time or temporal infinitude or ecological temporality or what Husserl called omnitemporality or even absolute time.
Part of this grand (historical) effort will be a future science of civilizations, as the long-term and big-picture conception of civilization is of central human interest in this big picture of time and history. We not only want to know the naturalistic answers to traditional eschatological questions — Where did we come from? Where are we going? — but we also want to know the origins and destiny of what we have ourselves contributed to the universe — our institutions, our ideas, civilization, the technium, and all the artifacts of human endeavor.
. . . . .
. . . . .
. . . . .