15 April 2014
Why be concerned about the future? Will not the future take care of itself? After all, have we not gotten along just fine without being explicitly concerned with the future? The record of history is not an encouraging one, and suggests that we might do much better if only provisions were made for the future, and problems were addressed before they became unmanageable. But are provisions being made for the future? Mostly, no. And there is a surprisingly simple reason that provisions are rarely made for the future: the future does not get funded.
The present gets funded, because the present is here with us to plead its case and to tug at our heart strings directly. Unfortunately, the past is also often too much with us, and we find ourselves funding the past because it is familiar and comfortable, not realizing that this works against our interests more often than it serves our interests. But the future remains abstract and elusive, and it is all too easy to neglect what we must face tomorrow in light of present crises. But the future is coming, and it can be funded, if only we will choose to do so.
Money, money, everywhere…
The world today is awash in money. Despite the aftereffects of the subprime mortgage crisis, the Great Recession, and the near breakup of the European Union, there has never been so much capital in the world seeking advantageous investment, nor has capital ever been so concentrated as it is now. The statistics are readily available to anyone who cares to do the research: a relatively small number of individuals and institutions own and control the bulk of the world’s wealth. What are they doing with this money? Mostly, they are looking for a safe place to invest it, and it is not easy to find a place to securely stash so much money.
The global availability of money is parallel to the global availability of food: there is plenty of food in the world today, notwithstanding the population now at seven billion and rising, and the only reason that anyone goes without food is due to political (and economic) impediments to food distribution. Still, even in the twenty-first century, when there is food sufficient to feed everyone on the planet, many go hungry, and famines still occur. Similarly, despite the world being awash in capital seeking investment and returns, many worthy projects are underfunded, and many projects are never funded at all.
What gets funded?
What does get funded? Predictable, institutional projects usually get funded (investments that we formerly called, “as safe as houses”). Despite the fact of sovereign debt defaults, nation-states are still a relatively good credit risk, but above all they are large enough to be able to soak up the massive amounts of capital now looking for a place to go. Major industries are also sufficiently large and stable to attract significant investment. And a certain amount of capital finds itself invested as venture capital in smaller projects.
Venture capital is known to be the riskiest of investments, and the venture capitalist expects that most of his ventures will fail and yield no returns whatever. The reward comes from the exceptional venture that, against all odds and out of proportion to the capital invested in it, becomes an enormous success. This rare venture capital success is so profitable that it not only makes up for all the other losses, but more than makes up for them, and it is what has made venture capital one of the most intensively capitalized industries in the world.
Risk for risk’s sake?
With the risk already so high in any venture capital project, the venture capitalist does not court additional, unnecessary risks. So, from among the small projects that receive venture funding, it is not the riskiest ventures that get funded, but the least risky. That is to say, among the marginal investments available to capital, the investor tries to pick the ones that look as close to being a sure thing as anything can be, notwithstanding the fact that most of these ventures will fail and lose money. No one is seeking risk for risk’s sake; if risk is courted, it is only courted as a means to the end of a greater return on capital.
The venture capitalists have a formula. They invest a certain amount of money at what is seen to be a critical stage in the early development of a project, which is then set on a timetable of delivering its product to market and taking the company public at the earliest possible opportunity so that the venture capital investors can get their money out again in two to five years.
Given the already tenuous nature of the investments that attract venture capital, many ideas for investment are rejected on the flimsiest pretexts, dismissed out of hand with scarcely any serious consideration, because they are thought to be impractical or too idealistic, or because they are not likely to yield a return quickly enough to justify a venture capital infusion.
Entrepreneurs, investors, and the spectrum of temperament
Why do the funded projects get funded, while other projects do not get funded? The answer to this lies in the individual psychology of the successful investor. The few individuals who accumulate enough capital to become investors in new enterprises largely become wealthy because they had one good idea and they followed through with relentless focus. The focus is necessary to success, but it usually comes at the cost of wearing blinders.
Every human being has both impulses toward adventure and experimentation, and desires for stability and familiarity. From the impulse to adventure comes entrepreneurship, the questioning of received wisdom, a willingness to experiment and take risks (often including thrill-seeking activities), and a readiness to roll with the punches. From the desire for stability comes discipline, focus, diligence, and all of the familiar, stolid virtues of the industrious. With some individuals, the impulse to adventure predominates, while in others the desire for stability is the decisive influence on a life.
With entrepreneurs, the impulse to adventure outweighs the desire for stability, while for financiers the desire for stability outweighs the impulse to adventure. Thus entrepreneurs and the investors who fund them constitute complementary personality types. But neither occupies an extreme end of the spectrum. Adventurers and poets are the polar representatives of the imaginative end of the spectrum, while the hidebound traditionalist exemplifies the polar extreme of the stable end.
It is the rare individual who possesses both adventurous imagination and discipline in equal measures; this is genius. For most, either imagination or discipline predominates. Those with an active imagination but little discipline may entertain flights of fancy but are likely to accomplish little in the real world. Those in whom discipline predominates are likely to be unimaginative in their approach to life, but they are also likely to be steady, focused, and predictable in their behavior.
Most people who start out with a modest stake in life yearn for greater adventures than an annual return of six percent. Because of the impulse to adventure, they are likely to take risks that are not strictly financially justified. Such individuals may be rewarded with unique experiences, but they would likely have been more financially successful if they could have overcome their desire for adventure and focused on a disciplined plan of investment coupled with delayed gratification. Those who can overcome this desire for adventure can make themselves reasonably wealthy (at the very least, comfortable) without too much effort. Despite the paeans we hear endlessly celebrating novelty and innovation, in fact discipline is far more important than creativity or innovation.
The bottom line is that the people who have a stranglehold on the world’s capital are not intellectually adventuresome or imaginative; on the contrary, their financial success is a selective result of their lack of imagination.
A lesson from institutional largesse
The lesson of the MacArthur fellowships is worth citing in this connection. When the MacArthur Foundation fellowships were established, the radical premise was to give money away to individuals who would then be freed to do whatever work they desired. When the initial fellowships were awarded, some in the press, and some suffering from sour grapes, ridiculed the fellowships as “genius grants,” implying that the foundation was being a little too loose and free in its largesse. Apparently the criticism hit home, as in successive rounds of naming MacArthur fellows the grants became more and more conservative, and critics mostly ceased to call them “genius grants” while sniggering behind their hands.
Charitable foundations, like businesses, function in an essentially conservative, if not reactionary, social milieu, in which anything new is immediately suspect and the tried and true is favored. No one wants to court controversy; no one wants to be mentioned in the media for the wrong reason or in an unflattering context, so that anyone who can stir up a controversy, even where none exists, can hold this risk averse milieu hostage to their ridicule or even to their snide laughter.
Who serves on charitable boards? The same kind of unimaginative individuals who serve on corporate boards, and who make their fortunes through the kind of highly disciplined yet largely unimaginative and highly tedious investment strategies favored by those who tend toward the stable end of the spectrum of temperament.
Handing out “genius grants” proved to be too adventuresome and socially risky, and left those in charge of the grants open to criticism. A reaction followed, and conventionality came to dominate over imagination; institutional ossification set in. It is this pervasive institutional ossification that made the MacArthur awards so radical in the early days of the fellowships, when the MacArthur Foundation itself was young and adventuresome, but the institutional climate caught up with the institution and brought it to heel. It now comfortably reclines in respectable conventionality.
Preparing for the next economy
One of the consequences of a risk averse investment class (that nevertheless always talks about its “risk tolerance”) is that it tends to fund familiar technologies, and to fund businesses based on familiar technologies. Yet, in a technological economy the one certainty is that old technologies are regularly replaced by new technologies (a process that I have called technological succession). In some cases there is a straightforward process of technological succession in which old technologies are abandoned (as when cars displaced horse-drawn carriages), but in many cases what we see instead is that new technologies build on old technologies. In this way, the building of an electricity grid was once a cutting-edge technological accomplishment; now it is simply part of the infrastructure upon which the economy is dependent (technologies I recently called facilitators of change), and which serves as the basis of new technologies that go on to become the next cutting-edge technologies in their turn (technologies I recently called drivers of change).
What ought to concern us, then, is not the established infrastructure of technologies, which will continue to be gradually refined and improved (a process likely to yield profits proportional to the incremental nature of the progress), but the new technologies that will be built using the infrastructure of existing technologies. Technologies, when introduced, have the capability of providing a competitive advantage when one business enterprise has mastered them while other business enterprises have not yet mastered them. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all, and it also ceases to be a driver of change. Thus a distinction can be made between technologies that are drivers of change and established technologies that are facilitators of change, driven by other technologies; that is to say, facilitators are tools for the technologies that are in the vanguard of economic, social, and political change.
From the point of view both of profitability and social change, the art of funding visionary business enterprises is to fund those that will focus on those technologies that will be drivers of change in the future, rather than those that have been drivers of change in the past. This can be a difficult art to master. We have heard that generals always prepare for the last war that was just fought rather than preparing for the next war. This is not always true — we can name a list of visionary military thinkers who saw the possibilities for future combat and bent every effort to prepare for it, such as Giulio Douhet, Billy Mitchell, B. H. Liddell Hart, and Heinz Guderian — but the point is well taken, and is equally true in business and industry: financiers and businessmen prepare for the economy that was rather than the economy that will be.
The prevailing investment climate now favors investment in new technology start-ups, but the technology in question is almost always implicitly understood to be some kind of electronic device to add to the growing catalog of electronic devices routinely carried about today, or some kind of software application for such an electronic device.
The very fact of risk averse capital, coupled with entrepreneurs shaping their projects in such a way as to appeal to investors and thereby gain access to capital for their enterprises, suggests the possibility of the path not taken. This path would be an enterprise constituted with the particular aim of building the future by funding its science, technology, engineering, and even its ideas; that is to say, by funding those developments that are yet to become drivers of change in the economy, rather than those that already are drivers of change in the economy and will therefore slip into second place as established facilitators of the economy.
What is possible?
If there were more imagination on the part of those in control of capital, what might be funded? What are the possibilities? What might be realized by large scale investments into science, technology, and engineering, not to mention the arts and the best of human culture generally speaking? One possibility is that of explicitly funding a particular vision of the future by funding enterprises that are explicitly oriented toward the realization of aims that transcend the present.
Business enterprises explicitly oriented toward the future might be seen as the riskiest of risky investments, but there is another sense in which they are the most conservative of conservative investments: we know that the future will come, whether bidden or unbidden, although we don’t know what this inevitable future holds. Despite our ignorance as to what the future holds, we at least have the power — however limited and uncertain that power — to shape events in the future. We have no real power to shape events in the past, though many spin doctors try to conceal this impotency.
Those who think in explicit terms about the future are likely to seem like dreamers to an investor, and no one wants to be labeled a “dreamer,” as this is tantamount to being ignored as a crank or a fool. Nevertheless, we need dreamers to give us a sense of what might be possible in the future that we can shape, but of which we are as yet ignorant. The dreamer is one who has at least a partial vision of the future, and however imperfect this vision, it is at least a glimpse, and represents the first attempt to shape the future by imagining it.
Everyone who has ever dreamed big dreams knows what it is like to attempt to share these dreams and have them dismissed out of hand. Those who dismiss big dreams for the future usually are not content merely to ignore or to dismiss the dreamer; they seem to feel compelled to go beyond dismissal to ridicule, if not to attempts to shame, those who dream their dreams in spite of social disapproval.
The tactics of discouragement are painfully familiar, and are as unimaginative as they are unhelpful: that the idea is unworkable, that it is a mere fantasy, or it is “science fiction.” One also hears that one is wasting one’s time, that one’s time could be better spent, and there is also the patronizing question, “Don’t you want to have a real influence?”
There is no question that the attempt to surpass the present economic paradigm involves much greater risk than seeking to find a safe place for one’s money with the stable and apparent certainty of the present economic paradigm, but greater risks promise commensurate rewards. And the potential rewards are not limited to the particular vision of a particular business enterprise, however visionary or oriented toward the future. The large scale funding of an unconventional enterprise is likely to have unconventional economic outcomes. These outcomes will be unprecedented and therefore unpredictable, but they are far more likely to be beneficial than harmful.
There is a famous passage from Keynes’ General Theory of Employment, Interest and Money that is applicable here:
“If the Treasury were to fill old bottles with banknotes, bury them at suitable depths in disused coalmines which are then filled up to the surface with town rubbish, and leave it to private enterprise on well-tried principles of laissez-faire to dig the notes up again (the right to do so being obtained, of course, by tendering for leases of the note-bearing territory), there need be no more unemployment and, with the help of the repercussions, the real income of the community, and its capital wealth also, would probably become a good deal greater than it actually is. It would, indeed, be more sensible to build houses and the like; but if there are political and practical difficulties in the way of this, the above would be better than nothing.”
John Maynard Keynes, General Theory of Employment, Interest and Money, Book III, Chapter 10, VI
For Keynes, doing something is better than doing nothing, although it would be better still to build houses than to dig up banknotes buried for the purpose of stimulating economic activity. But if it is better to do something than to do nothing, and if it is better to do something constructive like building houses rather than to do something pointless like digging holes in the ground, how much better must it not be to build a future for humanity?
If some of the capital now in search of an investment were to be systematically directed into projects that promised a larger, more interesting, more exciting, and more comprehensive future for all human beings, the eventual result would almost certainly not be that which was originally intended, but whatever came out of an attempt to build the future would be an unprecedented future.
The collateral effect of funding a variety of innovative technologies is likely to be that, as Keynes wrote, “…the real income of the community, and its capital wealth also, would probably become a good deal greater than it actually is.” Even for the risk averse investor, this ought to be too good of a prospect to pass up.
Where there is no vision, the people perish
What is the alternative to funding the future? Funding the past. It sounds vacuous to say so, but there is not much of a future in funding the past. Nevertheless, it is the past that gets funded in the present socioeconomic investment climate.
Why should the future be funded? Despite our fashionable cynicism, even the cynical need a future in which they can believe. Funding a hopeful vision of the future is the best antidote to hopeless hand-wringing and despair.
Who could fund the future if they wanted to? Any of the risk averse investors who have been looking for returns on their capital and imagining that the world can continue as though nothing were going to change as the future unfolds.
What would it take to fund the future? A large scale investment in an enterprise conceived from its inception as concerned both to be a part of the future as it unfolds, and focused on a long term future in which humanity and the civilization it has created will be an ongoing part of the future.
. . . . .
. . . . .
. . . . .
9 April 2014
Technologies may be drivers of change or facilitators of change, the latter employed by the former as the technologies that enable the development of technologies that are drivers of change; that is to say, technologies that are facilitators of change are tools for the technologies that are in the vanguard of economic, social, and political change. Technologies, when introduced, have the capability of providing a competitive advantage when one business enterprise has mastered them while other business enterprises have not yet mastered them. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all. At that point of its mature development, a technology also ceases to be a driver of change and becomes a facilitator of change.
Any technology that has become a part of the infrastructure may be considered a facilitator of change rather than a driver of change. Civilization requires an infrastructure; industrial-technological civilization requires an industrial-technological infrastructure. We are all familiar with infrastructure such as roads, bridges, ports, railroads, schools, and hospitals. There is also the infrastructure that we think of as “utilities” — water, sewer, electricity, telecommunications, and now computing — which we build into our built environment, retrofitting old buildings and sometimes entire older cities in order to bring them up to the standards of technology assumed by the industrialized world today.
All of the technologies that now constitute the infrastructure of industrial-technological civilization were once drivers of change. Before the industrial revolution, the building of ports and shipping united coastal communities in many regions of the world; the Romans built a network of roads and bridges; in medieval Europe, schools and hospitals became a routine part of the structure of cities; early in the industrial revolution, railroads became the first mechanized form of rapid overland transportation. Consider how the transcontinental railroad in North America and the trans-Siberian railway in Russia knitted together entire continents, and their role as transformative technologies should be clear.
Similarly, the technologies we think of as utilities were once drivers of change. Hot and cold running water and indoor plumbing, still absent in much of the world, did not become common in the industrialized world until the past century; yet early agricultural and urban centers only came into being with the management of water resources, which reached a height in the most sophisticated cities of classical antiquity, where water was supplied by aqueducts and sewage taken away by underground drainage systems superior to many in existence today. With the advent of natural gas and electricity as fuels for home and industry, industrial cities were retrofitted for these services, and they have since been retrofitted again for telecommunications, and now for computing.
The most recent technology to have a transformative effect on socioeconomic life was computing. In the past several decades — since the end of the Second World War, when the first digital, programmable electronic computers were built for code breaking (the Colossus in the UK) — computer technology grew exponentially and eventually affected almost every aspect of life in industrialized nation-states. During this period of time, computing has been a driver of change across socioeconomic institutions. Building a faster and more sophisticated computer has been an end in itself for technologists and computer science researchers. While this will continue to be the case for some time, computing has begun to make the transition from being a driver of change in and of itself to being a facilitator of change in other areas of technological innovation. In other words, computers are becoming a part of the infrastructure of industrial-technological civilization.
The transformation of the transformative technology of computing from a driver of change into a facilitator of change for other technologies has been recognized for more than ten years. In 2003, an article by Nicholas G. Carr, Why IT Doesn’t Matter Anymore, stirred up a significant controversy when it was published. More recently, Mark R. DeLong, in Research computing as substrate, calls computing a substrate instead of an infrastructure, though the idea is much the same. DeLong writes of computing: “It is a common base that supports and nurtures research work and scholarly endeavor all over the university.” Although computing is also a focus of research work and scholarly endeavor in and of itself, it also serves a larger supporting role, not only in the university, but also throughout society.
Although today we still fall far short of computational omniscience, the computer revolution has happened, as evidenced by the pervasive presence of computers in contemporary socioeconomic institutions. Computers have been rapidly integrated into the fabric of industrial-technological civilization, to the point that those of us born before the computer revolution, and who can remember a world in which computers were a negligible influence, can nevertheless only with difficulty remember what life was like without computers.
Despite, then, what technology enthusiasts tell us, computers are not going to revolutionize our world a second time. We can imagine faster computers, smaller computers, better computers, computers with more storage capacity, and computers running innovative applications that make them useful in unexpected ways, but the pervasive use of computers that has already been achieved gives us a baseline for predicting future computer capacities, and these capacities will be different in degree from those of earlier computers, but not different in kind. We already know what it is like to see exponential growth in computing technology, and so we can account for this; computers have ceased to be a disruptive technology, and will not become a disruptive technology a second time.
Recently quantum computing made the cover of TIME magazine, together with a number of hyperbolic predictions about how quantum computing will change everything (the quantum computer is called “the infinity machine”). There have been countless articles about how “big data” is going to change everything also. Similar claims are made for artificial intelligence, and especially for “superintelligence.” An entire worldview has been constructed — the technological singularity — in which computing remains an indefinitely disruptive technology, the development of which eventually brings about the advent of the Millennium — the latter suitably re-conceived for a technological age.
Predictions of this nature are made precisely because a technology has become widely familiar, which is almost a guarantee that the technology in question is now part of the infrastructure of the ordinary business of life. One can count on being understood when one makes predictions about the future of the computer, in the same way that one might have been understood in the late nineteenth or early twentieth century if making predictions about the future of railroads. But in so far as this familiarity marks the transition in the life of a technology from being a driver of change to being a facilitator of change, such predictions are misleading at best, and flat out wrong at worst. The technologies that are going to be drivers of change in the coming century are not those that have devolved to the level of infrastructure; they are (or will be) unfamiliar technologies that can only be understood with difficulty.
The distinction between technologies that are drivers of change and technologies that are facilitators of change (like almost all distinctions) admits of a certain ambiguity. In the present context, one of these ambiguities is that of what constitutes a computing technology. Are computing applications distinct from computing? What of technologies for which computing is indispensable, and which could not have come into being without computers? This line of thought can be pursued backward: computers could not exist without electricity, so should computers be considered anything new, or merely an extension of electrical power? And electrical power could not have come about without the steam- and fossil-fueled industry that preceded it. This can be pursued back to the first stone tools, and the argument can be made that nothing new has happened, in essence, since the first chipped flint blade.
Perhaps the most obvious point of dispute in this analysis is the possibility of machine consciousness. I will acknowledge without hesitation that the emergence of machine consciousness is a potentially revolutionary development, and it would constitute a disruptive technology. Machine consciousness, however, is frequently conflated with artificial intelligence and with superintelligence, and these must be distinguished. Artificial intelligence of a rudimentary form is already crucial to the automation of industry; machine consciousness would be the artificial production, in a machine substrate, of the kind of consciousness that we personally experience as our own identity, and which we infer to be at the basis of the action of others (what philosophers call the problem of other minds).
What makes the possibility of machine consciousness interesting to me, and potentially revolutionary, is that it would constitute a qualitatively novel emergent from computing technology, and not merely another application of computing. Computers stand in the same relationship to electricity that machine consciousness would stand in relation to computing: a novel and transformational technology emergent from an infrastructural technology, that is to say, a driver of change that emerges from a facilitator of change.
The computational infrastructure of industrial-technological civilization is more or less in place at present, a familiar part of our world, like the early electrical grids that appeared in the industrialized world once electricity became sufficiently commonplace to become a utility. Just as the electrical grid has been repeatedly upgraded, and will continue to be upgraded for the foreseeable future, so too the computational infrastructure of industrial-technological civilization will be continually upgraded. But the upgrades to our computational infrastructure will be incremental improvements that will no longer be major drivers of change either in the economy or in sociopolitical institutions. Other technologies will emerge that will take that role, and they will emerge from an infrastructure that is no longer driving socioeconomic change, but is rather the condition of the possibility of this change.
. . . . .
. . . . .
. . . . .
. . . . .
2 March 2014
Kierkegaard’s Concluding Unscientific Postscript is an impassioned paean to subjectivity, which follows logically (if Kierkegaard will forgive me for saying so) from Kierkegaard’s focus on the individual. The individual experiences subjectivity, and, as far as we know, nothing else in the world experiences subjectivity, so that if the individual is the central ontological category of one’s thought, then the subjectivity that is unique to the individual will be uniquely central to one’s thought, as it is to Kierkegaard’s thought.
Another way to express Kierkegaard’s interest in the individual is to identify his thought as consistently ideographic, to the point of ignoring the nomothetic (on the ideographic and the nomothetic cf. Axes of Historiography). Kierkegaard’s account of the individual and his subjectivity as an individual falls within an overall ontology of individuals, and therefore within a continuum of contingency. Thus, in a sense, Kierkegaard represents a kind of object-oriented historiography (as a particular expression of an object-oriented ontology). From this point of view, one can easily see Kierkegaard’s resistance to Hegel’s lawlike, i.e., nomothetic, account of history, in which individuals are mere pawns at the mercy of the cunning of Reason.
At the present time, however, I will not discuss the implications of Kierkegaard’s implicit historiography, but rather his implicit futurism, though the two — historiography and futurism — are mirror images of each other, and I have elsewhere quoted Friedrich von Schlegel that, “The historian is a prophet facing backwards.” The same concern for the individual and his subjectivity is present in Kierkegaard’s implicit futurism as in his implicit historiography.
In Kierkegaard’s Concluding Unscientific Postscript, written under the pseudonym Johannes Climacus, we find the following way to distinguish the objective approach from the subjective approach:
The objective accent falls on WHAT is said, the subjective accent on HOW it is said.
Søren Kierkegaard, Concluding Unscientific Postscript, Translated from the Danish by David F. Swenson, completed after his death and provided with Introduction and Notes by Walter Lowrie, Princeton: Princeton University Press, 1968, p. 181
A few pages prior to this in the text, Kierkegaard tells us a story about the importance of the subjective accent upon how something is said:
The objective truth as such, is by no means adequate to determine that whoever utters it is sane; on the contrary, it may even betray the fact that he is mad, although what he says may be entirely true, and especially objectively true. I shall here permit myself to tell a story, which without any sort of adaptation on my part comes direct from an asylum. A patient in such an institution seeks to escape, and actually succeeds in effecting his purpose by leaping out of a window, and prepares to start on the road to freedom, when the thought strikes him (shall I say sanely enough or madly enough?): “When you come to town you will be recognized, and you will at once be brought back here again; hence you need to prepare yourself fully to convince everyone by the objective truth of what you say, that all is in order as far as your sanity is concerned.” As he walks along and thinks about this, he sees a ball lying on the ground, picks it up, and puts it into the tail pocket of his coat. Every step he takes the ball strikes him, politely speaking, on his hinder parts, and every time it thus strikes him he says: “Bang, the earth is round.” He comes to the city, and at once calls on one of his friends; he wants to convince him that he is not crazy, and therefore walks back and forth, saying continually: “Bang, the earth is round!” But is not the earth round? Does the asylum still crave yet another sacrifice for this opinion, as in the time when all men believed it to be flat as a pancake? Or is a man who hopes to prove that he is sane, by uttering a generally accepted and generally respected objective truth, insane? And yet it was clear to the physician that the patient was not yet cured; though it is not to be thought that the cure would consist in getting him to accept the opinion that the earth is flat. But all men are not physicians, and what the age demands seems to have a considerable influence upon the question of what madness is.
Søren Kierkegaard, Concluding Unscientific Postscript, Translated from the Danish by David F. Swenson, completed after his death and provided with Introduction and Notes by Walter Lowrie, Princeton: Princeton University Press, 1968, p. 174
These themes of individuality and subjectivity occur throughout Kierkegaard’s work, always expressed with humor and imagination — Kierkegaard’s writing itself is a testament to the individuality he so valued — as especially illustrated in the passage above. Kierkegaard engages in philosophy by telling a joke; would that more philosophy were written with similar panache.
From Kierkegaard we can learn that how the future is presented can mean the difference between a vision that inspires the individual and a vision that sounds like madness — and this is important. Implicit Kierkegaardian futurism forces us to see the importance of the individual in a schematic conception of the future that is often impersonal and without a role for the individual that the individual would care to assume. Worse yet, there are often aspects of futurism that seem to militate against the individual.
One of the great failings of the communist vision of the future — which inspired many in the twentieth century, and was a paradigm of European manifest destiny such as I described in The Idea and Destiny of Europe — was its open contempt for the individual, which is a feature of most collectivist thought. Not only is it true that, “Where there is no vision, the people perish,” but one might also say that without a personal vision, the people perish.
One of the ways in which futurism has been presented in such a manner that almost seems contrived to deny and belittle the role of the individual is the example of the “twin paradox” in relativity theory. I have discussed this elsewhere (cf. Stepping Stones Across the Cosmos) because I find it so interesting. The twin paradox is used to explain one of the oddities of special relativity: a clock accelerated to high velocity records less elapsed time than a clock that remains stationary.
In the twin paradox, it is postulated that two twins say their goodbyes on Earth; one remains behind while the other travels a great distance (perhaps to another star) at relativistic velocities. When the traveling twin returns to Earth, he finds that his twin has aged beyond recognition and the two scarcely know each other. This already poignant story can be made all the more poignant by postulating an even longer journey in which an individual leaves Earth and returns to find everyone he knew long dead, and perhaps even the places, the cities, and the monuments once familiar to him now long vanished.
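The differential aging of the twins can be made quantitative with the time dilation formula of special relativity; the round-trip figures below are a standard textbook illustration, not taken from any particular telling of the story:

```latex
% Proper time elapsed for the traveling twin over an Earth-frame interval \Delta t
\Delta\tau = \Delta t \, \sqrt{1 - \frac{v^2}{c^2}}
% Example: a round trip to a star 4 light-years away at v = 0.8c.
% Earth-frame duration: \Delta t = 2 \cdot \frac{4\ \mathrm{ly}}{0.8c} = 10\ \mathrm{years}
% Traveler's elapsed time: \Delta\tau = 10 \cdot \sqrt{1 - 0.64} = 6\ \mathrm{years}
```

Thus the stay-at-home twin ages ten years while the traveler ages only six; for longer journeys at velocities closer to the speed of light, the disparity grows without bound.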
The twin paradox, as it is commonly told, is a story, and, moreover, is a parable of cosmic loneliness. We would probably question the sanity of any individual who undertook a journey of space exploration under these conditions, and rightly so. If we imagine this story set within a larger story, the only kind of character who would undertake such a journey would be the villain of the piece, or an outcast, like a crazed scientist maddened by his lack of human contact and obsessed exclusively with his work (a familiar character from fiction).
The twin paradox was formulated to relate the objective truth of our universe, but it sounds more like Kierkegaard’s story of a madman reciting an obvious truth: no one is fooled by the madman. As long as a human future in space is presented in such terms, it will sound like madness to most. What we need in order to present existential risk mitigation to the public are stories of space exploration that touch the heart in a way that anyone can understand. We need new stories of the far future and of the individual’s role in the future in order to bring home such matters in a way that makes the individual respond on a personal level.
A subjective experience is always presented in a personal context. This personal context is important to the individual. Indeed, we know this from many perspectives on human life, whether it be the call to heroic personal self-sacrifice for the good of the community that is found in collectivist thought, or the celebration of enlightened self-interest found in individualistic thought. Just as it is possible to paint either approach as a form of selfishness rooted in a personal context, it is possible to paint either as heroic for the same reason. In so far as a conception of history can be made real to the individual, and incorporates a personal context suggestive of subjective experiences, that conception of history will animate effective social action far more readily than even the most seductive vision of a sleek and streamlined future which nevertheless has no obvious place for the individual and his subjective experience.
The ultimate lesson here — and it is a profoundly paradoxical lesson, worthy of the perversity of human nature — is this: the individual life serves as the “big picture” context from which the individual’s isolated experiences derive their value.
When we think of “big picture” conceptions of history, humanity, and civilization, we typically think in impersonal terms. This is a mistake. The big picture can be equally formulated in personal or impersonal terms, and it is the vision that is formulated in personal terms that speaks to the individual. In so far as the individual accepts this personal vision of the big picture, the vision informs the individual’s subjective experiences.
The narratives of existential risk would do well to learn this lesson.
. . . . .
. . . . .
. . . . .
. . . . .
24 February 2014
In my previous post, Akhand Bharat and Ghazwa-e-hind: Conflicting Destinies in South Asia, I discussed the differing manifest destinies of Pakistan and India in South Asia. I also placed this discussion in the context of Europe’s wars of the twentieth century and the Cold War. The conflicting destinies imagined by ideological extremists in Pakistan and India are more closely parallel to European wars in the twentieth century than to the Cold War, because while Europe’s wars escalated into global conflagrations, it was, at heart, conflicting manifest destinies in Europe that brought about these wars.
A manifest destiny is a vision for a people, that is to say, an imagined future, perhaps inevitable, for a particular ethnic or national community. Thus manifest destinies are produced by visionaries, or communities of visionaries. The latter, communities of visionaries, typically include religious organizations, political parties, professional protesters and political agitators, inter alia. We have become too accustomed to assuming that “visionary” is a good thing, but vision, like attempted utopias, goes wrong much more frequently than it goes well.
Perhaps the last visionary historical project to turn out well was that of the United States, which is essentially an Enlightenment-era thought experiment translated into the real world — supposing we could rule ourselves without kings, starting de novo, how would we do it? — and of course there would be many to argue that the US did not turn out well at all, and that whatever sociopolitical gains have been realized as a result of the implementation of popular sovereignty, the price has been too high. Whatever narrative one employs to understand the US, and however one values this political experiment, the US is like an alternative history of Europe that Europe itself did not explore, i.e., the US is the result of one of many European ideas that had a brief period of influence in Europe but which was supplanted by later ideas.
Utopians are not nice people who wish only to ameliorate the human condition; utopians are the individuals and movements who place their vision above the needs, and even the lives, of ordinary human beings engaged in the ordinary business of life. Utopians are idealists, who wish to see an ideal put into practice — at any cost. The great utopian movements of the twentieth century were identical to the greatest horrors of the twentieth century: Soviet communism, Nazi Germany, Mao’s Great Leap Forward and the Cultural Revolution, and the attempt by the Khmer Rouge to create an agrarian communist society in Cambodia. It was one of the Khmer Rouge slogans that, “To keep you is no benefit, to destroy you is no loss.”
The Second World War — that is to say, the most destructive conflict in human history — was a direct consequence of the Nazi vision for a utopian Europe. The ideals of a Nazi utopia are not widely shared today, but this is how the Nazis themselves understood their attempt to bring about a Judenrein slave empire in the East, with Nazi overlords ruling illiterate Slav peasants. Nazism is one of the purest exemplars in human history of the attempt to place the value of a principle above the value of individual lives. It could also be said that the Morgenthau plan for post-war Germany (which I discussed in The Stalin Doctrine) was almost as visionary as the Nazi vision itself, though certainly less brutal and not requiring any genocide to be put into practice. Visionary adversaries sometimes inspire visionary responses, although the Morgenthau plan was not ultimately adopted.
In the wake of the unprecedented destruction of the Second World War, the destiny of Europe has been widely understood to lie in European integration and unity. The attempt to unify Europe in our time — the European Union — is predicated upon an implicit idea of Europe, which is again predicated upon an implicit shared vision of the future. What is this shared vision of the future? I could maliciously characterize the contemporary European vision of the future as Fukuyama’s “end of history,” in which, “economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands,” constitute the only remaining social vision, and, “The struggle for recognition, the willingness to risk one’s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism,” have long since disappeared. …
After the horrors of the twentieth century, such a future might not sound too bad, and while it may constitute a kind of progress, this can no longer be understood as a manifest destiny; no one imagines that a unified Europe is one people with one vision; unified Europe is, rather, a conglomerate, and its vision is no more coherent or moving than the typical mission statement of a conglomerate. Indeed, we must view it as an open question whether a truly democratic society can generate or sustain a manifest destiny — and Europe today is, if anything, a truly democratic society. There are, of course, the examples of Athens at the head of the Delian League and the United States in the nineteenth century. I invite the reader to consider whether these societies were as thoroughly democratic as Europe today, and I leave the question open for the moment.
But Europe did not come to its democratic present easily or quickly. Europe has represented both manifest destinies and conflicting manifest destinies throughout its long history. Europe’s unusual productivity of ideas has given the world countless ideologies that other peoples have adopted as their own, even as the Europeans took them up for a time, only to later cast them aside. Europe for much of its history represented Christendom, that is to say, Christian civilization. In its role as Christian civilization, Europe resisted the Vikings, the Mongols, Russian Orthodox civilization after the Great Schism, Islam during the Crusades, and later the Turk, another manifestation of Islam; eventually Europeans fell on each other during the religious wars that followed the Protestant Reformation, with Catholics and Protestants representing conflicting manifest destinies that tore Europe apart with an unprecedented savagery and bloodthirstiness.
After Europe exhausted itself with fratricidal war inspired by conflicting manifest destinies, Europe came to represent science, and progress, and modernity, and this came to be a powerful force in the world. But modernity has more than one face, and by the time Europe entered the twentieth century, Europe hosted two mortal enemies that held out radically different visions of the future, the truly modern manifest destinies of fascism and communism. Europe again exhausted itself in fratricidal conflict, and it was left to the New World to sort out the peace and to provide the competing vision to the surviving communist vision that emerged from the mortal conflict in Europe. Now communism, too, has ceded its place as a vision for the future and a manifest destiny, leaving Russia again as the representative of Orthodox civilization, and Europe as the representative of democracy.
On the European periphery, Russia continues to exercise an influence in a direction distinct from that of the idea of Europe embodied in the European Union. Even as I write this, protesters and police are battling in Ukraine, primarily as a result of Russian pressure on the leaders of Ukraine not to more closely associate itself with Europe (cf. Europe’s Crisis in Ukraine by Swedish Foreign Minister Carl Bildt). Ukraine is significant in this connection, because it is a nation-state split between a western portion that shares the European idea and wants to be a part of Europe, and an eastern part that looks to Russia.
What does a nation-state on the European periphery look toward when it looks toward Russia? Does Russia represent an ideology or a destiny, if only on the European periphery and not properly European? As the leading representative of Orthodox civilization, Russia should represent some kind of vision, but what vision exactly? As I have attempted to explain previously in The Principle of Autocracy and Spheres of Influence, I remain puzzled by autocracy and forms of authoritarianism, and I don’t see that Russia has anything to offer other than a kinder, gentler form of autocracy than what the Tsars offered in earlier centuries.
Previously in The Evolution of Europe I wrote that, “The idea of Europe will not go away,” and, “The saga of Europe is far from over.” I would still say the same now, but I would qualify these claims. The idea of Europe remains strong for the Europeans, but it represents little in the way of a global vision, and while many seek to join Europe, as barbarians sought to join the Roman Empire, Europe represents a manifest destiny as little as the later Roman Empire represented anything. But Europe, displaced into the New World, where its Enlightenment prodigy, the United States, continues its political experiment, still represents something, however tainted the vision.
The idea of Europe remains in Europe, but the destiny of Europe lies in the Western Hemisphere.
. . . . .
. . . . .
. . . . .
1 February 2014
In my previous post, Autonomous Vehicles and Technological Unemployment in the Transportation Sector, I discussed some of the changes that are likely to come to the transportation industry as a result of autonomous vehicles, which may come to be a textbook case of technological unemployment, though I argued in that post that the transition will take many decades, which will allow for some degree of reallocation of the workforce over time. Economic incentives to freight haulers will drive the use of autonomous vehicles, because of their relatively low costs and ability to operate non-stop, but many people today are employed as transportation workers, and these workers, though today in high demand, may find themselves with greatly changed employment opportunities by the end of the twenty-first century. A whole class of workers who today earn a living wage without the necessity of extensive training and education stands to be eliminated.
Today I want to go a little deeper into the structural problem of technological unemployment. In my previous post, Autonomous Vehicles and Technological Unemployment in the Transportation Sector, I mentioned the recent cover story of The Economist, Coming to an office near you… The argument of an article in that issue, “The Onrushing Wave,” is that automation allows capital to substitute for labor. I don’t disagree with this entirely, but The Economist makes no mention of regressive taxation or of the decades of policies that have redistributed income upward.
The same article in The Economist mentions the upcoming book The Second Machine Age by Andrew McAfee and Erik Brynjolfsson; the authors of this book recently had an article on the Financial Times’ Comment page, “Robots stay in the back seat in the new machine age” (Wednesday 22 January 2014). The authors try to remain upbeat while grappling with the realities of technological unemployment. One answer to “resigning ourselves to an era of mass unemployment” proposed by the authors is educational reform, but we know that education, too (like employment), is undergoing a crisis. The same socioeconomic system that makes it possible for capital to substitute for labor through automation has also driven young people to spend ever-larger amounts of borrowed money on education — a process that has lined the pockets of the universities, transformed them into credentialing mills, and driven employers to escalate their educational requirements for routine jobs that could just as well be filled by someone without a credential.
Both The Economist article and the Financial Times article cite Keynes, who in a particularly prescient passage in an essay of 1930 both foresaw and largely dismissed the problem of technological unemployment:
“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come — namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment. All this means in the long run that mankind is solving its economic problem. I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is to-day. There would be nothing surprising in this even in the light of our present knowledge. It would not be foolish to contemplate the possibility of a far greater progress still.”
John Maynard Keynes, Essays in Persuasion, “ECONOMIC POSSIBILITIES FOR OUR GRANDCHILDREN” (1930)
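For perspective, Keynes’s fourfold-to-eightfold projection over a century corresponds to quite modest compound annual growth rates. A back-of-the-envelope sketch (my own arithmetic, not Keynes’s) makes the implied rates explicit:

```python
def implied_annual_growth(multiple, years=100):
    """Compound annual growth rate g satisfying (1 + g)**years == multiple."""
    return multiple ** (1 / years) - 1

low = implied_annual_growth(4)   # fourfold rise in living standards over a century
high = implied_annual_growth(8)  # eightfold rise over a century
print(f"4x in 100 years -> {low:.2%} per year")   # about 1.40% annually
print(f"8x in 100 years -> {high:.2%} per year")  # about 2.10% annually
```

Sustained per-capita growth of one to two percent per year, in other words, is all that Keynes’s prediction requires, which is part of why his optimism on the production side has held up far better than his dismissal of the distributional problem.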
It is remarkable that Keynes would so plainly acknowledge technological unemployment as a “new disease” and then go on to dismiss it as “…a temporary phase of maladjustment.” It was Keynes, after all, who penned one of the most famous lines in all economic writing about how misleading it is to appeal to the long run while dismissing the temporary problem:
“But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.”
John Maynard Keynes, Monetary Reform, New York: Harcourt, Brace, and Company, 1924, p. 88
Economists would indeed set themselves too easy, too useless a task if they dismiss technological unemployment as a temporary phase of maladjustment. But, to be fair, economists are not social engineers. It is not for economists, in their role as economists, to make social policy, or even to make economic or monetary policy. This is a political task. It is the role of the economist to understand economic policy and monetary policy, and it is to be hoped that this understanding can be the basis of sound practical recommendations that can be presented to policy makers and the public.
It is well worth reading the whole of Keynes’ essay on the economic possibilities for our grandchildren, in which he suggests that human beings have evolved to struggle for subsistence, but that the growth of technology and capital is going to bring an end to this struggle, thus marking a permanent change in the human condition (which Keynes calls “solving the economic problem”). In short, Keynes was a classic techno-optimist, and he thought it would take about a hundred years (from 1930, so 2030) to get to the point at which humanity has definitively solved the economic problem. He does add the caveat that population control, the avoidance of war, and the employment of science will be necessary in addition to economic effort to solve humanity’s economic problem, and presumably, if we fail to heed Keynes’ caveats — as we certainly have since he wrote his essay — we will likely hamper our progress and delay the solution of the economic problem.
What I find remarkable in Keynes, and in the techno-optimists of our own time, is their ability to speak of the coming age of maximized abundance as though it were all but achieved, and to neglect the whole struggle and negotiation that will get us to that point. Keynes effectively consigned a century to being a temporary phase of maladjustment, and recognized that this temporary phase may stretch out over more than a century if matters don’t proceed smoothly. But for Keynes that isn’t the real problem. Keynes feels that, “the economic problem is not — if we look into the future — the permanent problem of the human race.” He then goes on to blandly state:
“…there is no country and no people, I think, who can look forward to the age of leisure and of abundance without a dread. For we have been trained too long to strive and not to enjoy. It is a fearful problem for the ordinary person, with no special talents, to occupy himself, especially if he no longer has roots in the soil or in custom or in the beloved conventions of a traditional society.”
In other words, what bothers Keynes is the troubling prospect of leisure for the working classes. To Keynes and the techno-optimists, I say there is nothing to worry about: the millennium has not yet arrived, nor are we prepared for it to arrive, since the masses of the people will continue to struggle for subsistence for the foreseeable future. In the contemporary economy, we see no measures put into place that would indicate a shift toward institutions that would ease us into the paradise of maximized abundance promised by automation. There are, of course, the traditional workplace protections put into place throughout the industrialized world in the early part of the twentieth century, which include benefits for the unemployed, protections for those injured on the job, and a minimal stipend for the elderly, i.e., the worker after retirement. None of these traditional protections, however, begins to go far enough to support the unemployed worker for extended periods of time, or to ease him out of his unemployed condition into something sustainable for the indefinite future.
If you lose your job at the age of 50 and have another 15 years to go until retirement (assuming a retirement age, and therefore eligibility for retirement benefits, at age 65), the benefits available to unemployed workers are not going to pay your mortgage for 15 years. And if you sell your house and move into an apartment, those benefits are not going to pay your rent. There are food banks and clothing banks for the destitute, so that in an industrialized nation-state you are not likely to go without some minimal amount of food and clothing. Perhaps, by hook or by crook, you find a way to maintain yourself for 15 years without becoming homeless and ending up as an invisible statistic, begging for change on a street corner. At that time you might get the minimal stipend provided for the elderly, and this might sustain you until you die. But what kind of life is the survival that I have described? It is simply another form of the struggle for subsistence, which Keynes thought would be eliminated by the solution of humanity’s economic problem.
While the unfortunate scenario I have outlined above consigns an individual to a relentlessly marginal life, others who have managed to find a more fortunate niche for themselves in the changing economy will have a house or two, a car or two, dinners at nice restaurants, a good education for their children, vacations, and all the things that money can buy in a market economy. The kind of problems that Keynes imagines in his essay, and which techno-optimists ever since have been (implicitly) imagining — that is to say, the problem of what individuals will do with all the time hanging heavy on their hands when they no longer have work to do — would be a kind of situation in which material goods become so cheap that they are simply given away to people. But are we going to give away the kind of good life that the fortunate enjoy?
All you have to do is to drive (or walk) through any large city in the world, and in a recession you will see block after block of empty store fronts, and if you read the classified advertisements you will find countless empty apartments waiting to be rented even as there are homeless people living on the street. We know that the owners of the empty store fronts could rent them out if they were willing to drop their asking price, but there is a limit below which landlords will not drop their price, and they would rather hold on to their properties, paying property taxes and maintenance expenses while their property remains idle, in hopes that a tenant will appear who is willing to meet their price. This situation could be met by government income redistribution, if money collected as taxes were spent to subsidize rentals, to give storefronts to small businesses or to rent empty apartments outright in which the homeless might live. But we already know what government programs like this are like. Individuals have to jump through hoops — in other words, they must be ready to humiliate themselves and to grovel before a functionary — in order to receive the “benefit.” Many people will not do this (I wouldn’t do this), and would thus opt out of well-intentioned programs that would make housing available to the homeless — with strings attached.
Suppose, however, you’re willing to grovel and you get your government apartment. What then? You will still be trapped in an extremely marginal position. You won’t be getting a penthouse suite with a view, you won’t be given a Ferrari to drive, you won’t be given an Armani suit, and you won’t be given an all-expense-paid trip to the south of France to sample the food and wine of the region. Who gets the penthouses and the Ferraris and the Armani suits and the vacations in the Dordogne? In other words, how do we allocate luxury goods in an economy of maximized abundance? Ideally, there would be no limits to consumer goods; that’s what “maximized abundance” means, but we all know that we are not going to be living in a world in which everyone has a Ferrari and an Armani suit.
How far can abundance be stretched? Are we to understand maximized abundance (or what Adam Smith called universal opulence) in terms of equal access to luxuries for everyone, or in terms of freezing social arrangements in a particular configuration so that each level of society receives its traditional share of goods? In other words, are we going to understand society as an egalitarian paradise or a feudal hierarchy? History has many examples of feudal hierarchies, and no examples of egalitarian paradises. Those societies explicitly constituted with the goal of becoming egalitarian paradises — i.e., large scale communist societies of the twentieth century — turned out to be even more stultifyingly hierarchical than feudalism.
There are some rather obvious answers to the rhetorical questions I have posed above, and none of them are particularly admirable. Luxury goods may go to those who are born into great wealth, or they may go to those who are particularly expert in some skill valued by society, or they may be reserved to reward government functionaries for loyal service. All of these arrangements have been realized in actual human societies of the past, and none of them constituted what Keynes called a solution to the economic problem for humanity.
Perhaps you think I am being trivial in my discussion of luxury goods, mentioning Ferraris and Armani suits, but I employ these as mere counters for the real luxuries that make life worth living. By these, I mean the experiences that we treasure and which are uniquely our own. The richness of a life is a function of the experiences that comprise the life in question. In market economies as they are administered today, if you have money, you can afford a wide variety of experiences. And if you are poor, your experiences are pretty much limited to staring at the four walls of your room, if you are lucky enough to avoid being homeless.
Believe me, I could easily elaborate a scenario that would stand with the best of the techno-optimists. I have observed elsewhere that, while seven billion human beings is a lot for the Earth, in the Milky Way it is virtually nothing. With the declining birth rates that characterize industrial-technological civilization, we will need every human being simply for the task of expanding our civilization into the Milky Way, leaving the machines to do the dead-end industrial jobs that once trapped human beings in unenviable circumstances.
There are endless interesting things yet to be done, and we will need every living human being freed from drudgery simply to begin the process of establishing a spacefaring civilization. This is a wonderful vision of considerable attraction to me personally. This is the world that I would like to see come about. The problem is, virtually nothing is being done to realize such a vision, or, for that matter, to realize any other techno-optimist vision. On the contrary, policies being implemented today seem formulated for the purpose of discouraging the kind of society that we need to begin building right now, today, if we are to defy the existential risks with which we are confronted as a species.
We could accurately speak of contemporary economic circumstances as “…a temporary phase of maladjustment…” if we were actively seeking to mitigate the maladjustment and to build an economy that would prepare us for the future. This is not being done. On the contrary, people who lose their jobs are viewed as failures or worse, and are condemned by economic reality to live a life of straitened circumstances. The struggle for subsistence continues, and is likely to continue indefinitely, because despite Keynes’ claim to the contrary, humanity has not yet solved its economic problem, although the economic problem is no longer a problem of production, but rather a problem of distribution and allocation.
. . . . .
. . . . .
. . . . .
. . . . .
26 October 2013
In my last post, The Retrodiction Wall, I introduced several ideas that I think were novel, among them:
● A retrodiction wall, complementary to the prediction wall, but in the past rather than the future
● A period of effective history lying between the retrodiction wall in the past and the prediction wall in the future; beyond the retrodiction and prediction walls lies inaccessible history that is not a part of effective history
● A distinction between diachronic and synchronic prediction walls, that is to say, a distinction between the prediction of succession and the prediction of interaction
● A distinction between diachronic and synchronic retrodiction walls, that is to say, a distinction between the retrodiction of succession and the retrodiction of interaction
I also implicitly formulated a principle, though I didn’t give it any name, parallel to Einstein’s principle (also without a name) that mathematical certainty and applicability stand in inverse proportion to each other: historical predictability and historical relevance stand in inverse proportion to each other. When I can think of a good name for this I’ll return to this idea. For the moment, I want to focus on the prediction wall and the retrodiction wall as the boundaries of effective history.
In The Retrodiction Wall I made the assertion that, “Effective history is not fixed for all time, but expands and contracts as a function of our knowledge.” An increase in knowledge allows us to push the boundaries of the prediction and retrodiction walls outward, just as a diminution of knowledge means the contraction of the prediction and retrodiction boundaries of effective history.
We can go farther than this if we incorporate a more subtle and sophisticated conception of knowledge and prediction, and we can find this more subtle and sophisticated understanding in the work of Frank Knight, which I previously cited in Existential Risk and Existential Uncertainty. Knight made a tripartite distinction between prediction (or certainty), risk, and uncertainty. Here is the passage from Knight that I quoted in Addendum on Existential Risk and Existential Uncertainty:
1. A priori probability. Absolutely homogeneous classification of instances completely identical except for really indeterminate factors. This judgment of probability is on the same logical plane as the propositions of mathematics (which also may be viewed, and are viewed by the writer, as “ultimately” inductions from experience).
2. Statistical probability. Empirical evaluation of the frequency of association between predicates, not analyzable into varying combinations of equally probable alternatives. It must be emphasized that any high degree of confidence that the proportions found in the past will hold in the future is still based on an a priori judgment of indeterminateness. Two complications are to be kept separate: first, the impossibility of eliminating all factors not really indeterminate; and, second, the impossibility of enumerating the equally probable alternatives involved and determining their mode of combination so as to evaluate the probability by a priori calculation. The main distinguishing characteristic of this type is that it rests on an empirical classification of instances.
3. Estimates. The distinction here is that there is no valid basis of any kind for classifying instances. This form of probability is involved in the greatest logical difficulties of all, and no very satisfactory discussion of it can be given, but its distinction from the other types must be emphasized and some of its complicated relations indicated.
Frank Knight, Risk, Uncertainty, and Profit, Chap. VII
This passage from Knight’s book (like the entire book) is concerned with applications to economics, but the kernel of Knight’s idea can be generalized beyond economics to represent different stages in the acquisition of knowledge: Knight’s a priori probability corresponds to certainty, or that which is so exhaustively known that it can be predicted with precision; Knight’s statistical probability corresponds to risk, or partial and incomplete knowledge, or that region of human knowledge where the known and unknown overlap; Knight’s estimates correspond to unknowns or uncertainty.
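Knight’s first two categories can be contrasted with a small simulation. This is an illustration of my own, not Knight’s: a priori probability is computed from the symmetry of the cases themselves, while statistical probability is an empirical frequency estimated from observed instances; his third category, estimates, by definition admits no such calculation.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# A priori probability: derived from an exhaustive, homogeneous
# classification of cases (here, a fair six-sided die).
p_apriori = 1 / 6

# Statistical probability: an empirical frequency of association,
# estimated by observing many simulated rolls; no symmetry argument
# is invoked, only the record of past instances.
rolls = [random.randint(1, 6) for _ in range(100_000)]
p_statistical = rolls.count(3) / len(rolls)

print(f"a priori:    {p_apriori:.4f}")
print(f"statistical: {p_statistical:.4f}")

# Knight's third category, estimates, has "no valid basis of any kind
# for classifying instances" -- there is nothing here to compute.
```

The empirical frequency converges toward the a priori value only because the die is in fact homogeneous; for Knight the interesting cases are those where no such underlying classification is available.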
Knight formulates his tripartite distinction between certainty, risk, and uncertainty exclusively in the context of prediction, and just as Knight’s results can be generalized beyond economics, so too Knight’s distinction can be generalized beyond prediction to also embrace retrodiction. In The Retrodiction Wall I generalized John Smart’s exposition of a prediction wall in the future to include a retrodiction wall in the past, both of which together define the boundaries of effective history. These two generalizations can be brought together.
A prediction wall in the future or a retrodiction wall in the past are, as I noted, functions of knowledge. That means we can understand this “boundary” not merely as a threshold that is crossed, but also as an epistemic continuum that stretches from the completely unknown (the inaccessible past or future that lies utterly beyond the retrodiction or prediction wall) through an epistemic region of prediction risk or retrodiction risk (where predictions or retrodictions can be made, but are subject to at least as many uncertainties as certainties), to the completely known, in so far as anything can be completely known to human beings, and therefore well understood by us and readily predictable.
Introducing and integrating distinctions between prediction and retrodiction walls, and among prediction, risk, and uncertainty, gives a much more sophisticated and therefore epistemically satisfying structure to our knowledge and how it is contextualized in the human condition. The facts that we find ourselves, in medias res, living in a world that we must struggle to understand, and that this understanding is an acquisition of knowledge that takes place in time, which is asymmetrical as regards the past and future, are important features of how we engage with the world.
This process of making our model of knowledge more realistic by incorporating distinctions and refinements is not yet finished (nor is it ever likely to be). For example, the unnamed principle alluded to above (that of the inverse relation between historical predictability and relevance) suggests that the prediction and retrodiction walls can be penetrated unevenly, and that our knowledge of the past and future is not consistent across space and time, but varies considerably. An inquiry that could demonstrate this in any systematic and schematic way would be more complicated than the above, so I will leave this for another day.
. . . . .
. . . . .
. . . . .
23 October 2013
Prediction in Science
One of the distinguishing features of science as a system of thought is that it makes testable predictions. The fact that scientific predictions are testable suggests a methodology of testing, and we call the scientific methodology of testing experiment. Hypothesis formation, prediction, experimentation, and resultant modification of the hypothesis (confirmation, disconfirmation, or revision) are all essential elements of the scientific method, which constitutes an escalating spiral of knowledge as the scientific method systematically exposes predictions to experiment and modifies its hypotheses in the light of experimental results, which leads in turn to new predictions.
The escalating spiral of knowledge that science cultivates naturally pushes that knowledge into the future. Sometimes scientific prediction is even formulated in reference to “new facts” or “temporal asymmetries” in order to emphasize that predictions refer to future events that have not yet occurred. In constructing an experiment, we create a new set of facts in the world, and then interpret these facts in the light of our hypothesis. It is this testing of hypotheses by experiment that establishes the concrete relationship of science to the world, and this is also a source of limitation, for experiments are typically designed in order to focus on a single variable and to that end an attempt is made to control for the other variables. (A system of thought that is not limited by the world is not science.)
Alfred North Whitehead captured this artificial feature of scientific experimentation in a clever line that points to the difference between scientific predictions and predictions of a more general character:
“…experiment is nothing else than a mode of cooking the facts for the sake of exemplifying the law. Unfortunately the facts of history, even those of private individual history, are on too large a scale. They surge forward beyond control.”
Alfred North Whitehead, Adventures of Ideas, New York: The Free Press, 1967, Chapter VI, “Foresight,” p. 88
There are limits to prediction, and not only those pointed out by Whitehead. The limits to prediction have been called the prediction wall. Beyond the prediction wall we cannot penetrate.
The Prediction Wall
John Smart has formulated the idea of a prediction wall in his essay, “Considering the Singularity,” as follows:
With increasing anxiety, many of our best thinkers have seen a looming “Prediction Wall” emerge in recent decades. There is a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex and yet amazingly stable technological systems, erected like vast beehives, systems tended to by large swarms of only partially aware human beings, each of which has only a very limited conceptualization of the new technological environment that we have constructed.
Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn’t technologically dependent these days?), where the ten-year business plans of the 1950′s have been replaced with ten-week (quarterly) plans of the 2000′s, and where planning beyond two years in some fields may often be unwise speculation. But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In “Science Fiction Without the Future,” 2001, Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It’s as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.) © 2003 by John Smart (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)
I would like to suggest that there are at least two prediction walls: synchronic and diachronic. The prediction wall formulated above by John Smart is a diachronic prediction wall: it is the onward-rushing pace of events, one following the other, that eventually defeats our ability to see any recognizable order or structure in the future. The kind of prediction wall to which Whitehead alludes is a synchronic prediction wall, in which it is the outward eddies of events in the complexity of the world’s interactions that make it impossible for us to give a complete account of the consequences of any one action. (Cf. Axes of Historiography)
Retrodiction and the Historical Sciences
Science does not live by prediction alone. While some philosophers of science have questioned the scientificity of the historical sciences because they could not make testable (and therefore falsifiable) predictions about the future, it is now widely recognized that the historical sciences don’t make predictions, but they do make retrodictions. A retrodiction is a prediction about the past.
The Oxford Dictionary of Philosophy by Simon Blackburn (p. 330) defines retrodiction thusly:
retrodiction The hypothesis that some event happened in the past, as opposed to the prediction that an event will happen in the future. A successful retrodiction could confirm a theory as much as a successful prediction.
As with predictions, there is also a limit to retrodiction, and this is the retrodiction wall. Beyond the retrodiction wall we cannot penetrate.
I haven’t been thinking about this idea for long enough to fully understand the ramifications of a retrodiction wall, so I’m not yet clear about whether we can distinguish diachronic retrodiction from synchronic retrodiction. Or, rather, it would be better to say that the distinction can certainly be made, but that I cannot think of good contrasting examples of the two at the present time.
We can define a span of accessible history that extends from the retrodiction wall in the past to the prediction wall in the future as what I will call effective history (by analogy with effective computability). Effective history can be defined in a way that is closely parallel to effectively computable functions, because all of effective history can be “reached” from the present by means of finite, recursive historical methods of inquiry.
Effective history is not fixed for all time, but expands and contracts as a function of our knowledge. At present, the retrodiction wall is the Big Bang singularity. If anything preceded the Big Bang singularity we are unable to observe it, because the Big Bang itself effectively obliterates any observable signs of any events prior to itself. (Testable theories have been proposed that suggest the possibility of some observable remnant of events prior to the Big Bang, as in conformal cyclic cosmology, but this must at present be regarded as only an early attempt at such a theory.)
Prior to the advent of scientific historiography as we know it today, the retrodiction wall was fixed at the beginning of the historical period narrowly construed as written history, and at times the retrodiction wall has been quite close to the present. When history experiences one of its periodic dark ages that cuts it off from its historical past, little or nothing may be known of a past that was once familiar to everyone in a given society.
The emergence of agrarian-ecclesiastical civilization effectively obliterated human history before itself, in a manner parallel to the Big Bang. We know that there were caves that prehistorical peoples visited generation after generation for time out of mind, over tens of thousands of years — much longer than the entire history of agrarian-ecclesiastical civilization, and yet all of this was forgotten as though it had never happened. This long period of prehistory was entirely lost to human memory, and was not recovered again until scientific historiography discovered it through scientific method and empirical evidence, and not through the preservation of human memory, from which prehistory had been eradicated. And this did not occur until after agrarian-ecclesiastical civilization had lapsed and entirely given way to industrial-technological civilization.
We cannot define the limits of the prediction wall as readily as we can define the limits of the retrodiction wall. Predicting the future in terms of overall history has been more problematic than retrodicting the past, and equally subject to ideological and eschatological distortion. The advent of modern science compartmentalized scientific predictions and made them accurate and dependable — but at the cost of largely severing them from overall history, i.e., human history and the events that shape our lives in meaningful ways. We can make predictions about the carbon cycle and plate tectonics, and we are working hard to be able to make accurate predictions about weather and climate, but, for the most part, our accurate predictions about the future dispositions of the continents do not shape our lives in the near- to mid-term future.
I have previously quoted a famous line from Einstein: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We might paraphrase this Einstein line in regard to the relation of mathematics to the world, and say that as far as scientific laws of nature predict events, these events are irrelevant to human history, and in so far as predicted events are relevant to human beings, scientific laws of nature cannot predict them.
Singularities Past and Future
As the term “singularity” is presently employed — as in the technological singularity — the recognition of a retrodiction wall in the past complementary to the prediction wall in the future provides a literal connection between the historiographical use of “singularity” and the use of the term “singularity” in cosmology and astrophysics.
Theorists of the singularity hypothesis place a “singularity” in the future which constitutes an absolute prediction wall beyond which history is so transformed that nothing beyond it is recognizable to us. This future singularity is not the singularity of astrophysics.
If we recognize the actual Big Bang singularity in the past as the retrodiction wall for cosmology — and hence, by extension, for Big History — then an actual singularity of astrophysics is also at the same time an historical singularity.
. . . . .
I have continued my thoughts on the retrodiction wall in Addendum on the Retrodiction Wall.
. . . . .
. . . . .
. . . . .
8 September 2013
The Life of Civilization
Tenth in a Series on Existential Risk
What makes a civilization viable? What makes a species viable? What makes an individual viable? To put the question in its most general form, what makes a given existent viable?
These are the questions that we must ask in the pursuit of the mitigation of existential risk. The most general question — what makes an existent viable? — is the most abstract and theoretical question, and as soon as I posed this question to myself in these terms, I realized that I had attempted to answer this earlier, prior to the present series on existential risk.
In January 2009 I wrote, generalizing from a particular existential crisis in our political system:
“If we fail to do what is necessary to perpetuate the human species and thus precipitate the end of the world indirectly by failing to do what was necessary to prevent the event, and if some alien species should examine the remains of our ill-fated species and their archaeologists reconstruct our history, they will no doubt focus on the problem of when we turned the corner from viability to non-viability. That is to say, they would want to try to understand the moment, and hence possibly also the nature, of the suicide of our species. Perhaps we have already turned that corner and do not recognize the fact; indeed, it is likely impossible that we could recognize the fact from within our history that might be obvious to an observer outside our history.”
This poses the viability of civilization in stark terms, and I can now see in retrospect that I was feeling my way toward a conception of existential risk and its moral imperatives before I was fully conscious of doing so.
From the beginning of this blog I started writing about civilizations — why they rise, why they fall, and why some remain viable for longer than others. My first attempt to formulate the above stark dilemma facing civilization in the form of a principle, in Today’s Thought on Civilization, was as follows:
a civilization fails when it fails to change when the world changes
This formulation in terms of the failure of civilization immediately suggests a formulation in terms of the success (or viability) of a civilization, which I did not formulate at that time:
A civilization is viable when it successfully changes when the world changes.
I also stated in the same post cited above that the evolution of civilization has scarcely begun, which continues to be my point of view and informs my ongoing efforts to formulate a theory of civilization on the basis of humanity’s relatively short experience of civilized life.
In any case, in the initial formulation given above I have, like Toynbee, taken the civilization as the basic unit of historical study. I continued in this vein, writing a series of posts about civilization, The Phenomenon of Civilization, The Phenomenon of Civilization Revisited, Revisiting Civilization Revisited, Historical Continuity and Discontinuity, Two Conceptions of Civilization, A Note on Quantitative Civilization, inter alia.
I moved beyond civilization-specific formulations of what I would come to call the principle of historical viability in a later post:
…the general principle enunciated above has clear implications for historical entities less comprehensive than civilizations. We can both achieve a greater generality for the principle, as well as to make it applicable to particular circumstances, by turning it into the following schema: “an x fails when it fails to change when the world changes” where the schematic letter “x” is a variable for which we can substitute different historical entities ceteris paribus (as the philosophers say). So we can say, “A city fails when it fails to change…” or “A union fails when it fails to change…” or (more to the point at present), “A political party fails when it fails to change when the world changes.”
And in Challenge and Response I elaborated on this further development of what it means to be historically viable:
…my above enunciated principle ought to be amended to read, “An x fails when it fails to change as the world changes” (instead of “…when the world changes”). In other words, the kind of change an historical entity must undergo in order to remain historically viable must be in consonance with the change occurring in the world. This is, obviously, or rather would be, a very difficult matter to nail down in quantitative terms. My schema remains highly abstract and general, and thus glides over any number of difficulties vis-à-vis the real world. But the point here is that it is not so much a matter of merely changing in parallel with the changing world, but changing how the world changes, changing in the way that the world changes.
It was also in this post that I first called this the principle of historical viability.
I now realize that what I then called historical viability might better be called existential viability — at least, by reformulating my principle of historical viability again and calling it the principle of existential viability, I can assimilate these ideas to my recent formulations of existential risk. Seeing historical viability through the lens of existential risk and existential viability allows us to formulate the following relationship between the latter two:
Existential viability is the condition that follows from the successful mitigation of existential risk.
Thus the achievement of existential risk mitigation is existential viability. So when we ask, “What makes an existent viable?” we can answer, “The successful mitigation of risks to that existent.” This gives us a formal framework for understanding existential viability as the successful mitigation of existential risk, but it tells us nothing about the material conditions that contribute to existential viability. Determining the material conditions of existential viability will be a matter both of empirical study and of the formulation of a theoretical infrastructure adequate to the conditions that bear upon civilization. Neither of these exists yet, but we can make some rough observations about the material conditions of existential viability.
Different qualities in different places at different times have contributed to the viability of existents. This is one of the great lessons of natural selection: evolution is not about a ladder of progress, but about what organism is best adapted to the particular conditions of a particular area at a particular time. When the “organism” in question is civilization, the lesson of natural selection remains valid: civilizations do not describe a ladder of progress, but those civilizations that have survived have been those best adapted to the particular conditions of a particular region at a particular time. Existential risk mitigation is about making civilization part of evolution, i.e., part of the long term history of the universe.
To acknowledge the position of civilization in the long term history of the universe is to recognize that a change has come about in civilization as we know it, and this change is primarily the consequence of the advent of industrial-technological civilization: civilization is now global, populations across the planet, once isolated by geographical barriers, now communicate instantaneously and trade and travel nearly instantaneously. A global civilization means that civilization is no longer selected on the basis of local conditions at a particular place at a particular time — which was true of past civilizations. Civilization is now selected globally, and this means placing the earth that is the bearer of global civilization in a cosmological context of selection.
What selects a planet for the long term viability of the civilization that it bears? This is essentially a question of astrobiology, a point that I attempted to make in my recent presentation at the Icarus Interstellar Starship Congress and in my post on Paul Gilster’s Centauri Dreams, Existential Risk and Far Future Civilization.
An astrobiological context suggests what we might call an astroecological context, and I have many times pointed out the relevance of ecology to questions of civilization. Pursuing the idea of existential viability may offer a new perspective for the application of methods developed for the study of the complex systems of ecology to the complex systems of civilization. And civilizations are complex systems if they are anything.
There is a growing branch of mathematical ecology called viability theory, with obvious application to the viability of the complex systems of civilization. We can immediately see this applicability and relevance in the following passage:
“Agent-based complex systems such as economics, ecosystems, or societies, consist of autonomous agents such as organisms, humans, companies, or institutions that pursue their own objectives and interact with each other and their environment (Grimm et al. 2005). Fundamental questions about such systems address their stability properties: How long will these systems exist? How much do their characteristic features vary over time? Are they sensitive to disturbances? If so, will they recover to their original state, and if so, why, from what set of states, and how fast?”
Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society (Understanding Complex Systems), edited by Guillaume Deffuant and Nigel Gilbert, p. 3
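The stability questions posed in this passage can be made computable. What follows is a deliberately toy sketch of my own (not drawn from Deffuant and Gilbert) of the central object of viability theory: the viability kernel, the set of states from which some admissible policy can keep a system inside its constraint set indefinitely. The model and every parameter are hypothetical choices for the example: a harvested logistic population x′ = x(1 − x) − h, with mandatory harvesting h in [H_MIN, H_MAX] and the constraint that the population stay in [X_MIN, X_MAX].

```python
import numpy as np

# Illustrative grid approximation of a viability kernel.
# All parameters below are hypothetical choices for this sketch.
X_MIN, X_MAX = 0.1, 1.0     # constraint set K: population must stay in this band
H_MIN, H_MAX = 0.15, 0.30   # admissible harvesting rates (harvest is mandatory)
DT, N = 0.05, 400           # Euler time step and grid resolution

xs = np.linspace(X_MIN, X_MAX, N)
hs = np.linspace(H_MIN, H_MAX, 16)

# One Euler step of x' = x(1 - x) - h for every (state, control) pair: shape (N, 16).
successors = xs[:, None] + DT * (xs[:, None] * (1 - xs[:, None]) - hs[None, :])

# Start from all of K and repeatedly discard states from which no control
# keeps the successor inside the current surviving band. (Tracking a band
# [lo, hi] is valid here because the kernel of this 1-D model is an
# interval; general problems need a set-valued representation.)
viable = np.ones(N, dtype=bool)
while viable.any():
    lo, hi = xs[viable].min(), xs[viable].max()
    keep = viable & ((successors >= lo) & (successors <= hi)).any(axis=1)
    if np.array_equal(keep, viable):
        break  # fixed point reached: this is the approximate kernel
    viable = keep

kernel = xs[viable]
print(f"approximate viability kernel: [{kernel.min():.3f}, {kernel.max():.3f}]")
```

Populations below roughly 0.184 are doomed even under minimal harvesting and so fall outside the kernel, although they initially satisfy the constraint: being inside the constraint set is not the same as being viable, which is precisely the distinction the book’s questions turn on.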
Civilization itself is an agent-based complex system like “economics, ecosystems, or societies.” Another innovative approach to complex systems and their viability is to be found in the work of Hartmut Bossel. Here is an extract from the abstract of his paper “Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets”:
Performance assessment in holistic approaches such as integrated natural resource management has to deal with a complex set of interacting and self-organizing natural and human systems and agents, all pursuing their own “interests” while also contributing to the development of the total system. Performance indicators must therefore reflect the viability of essential component systems as well as their contributions to the viability and performance of other component systems and the total system under study. A systems-based derivation of a comprehensive set of performance indicators first requires the identification of essential component systems, their mutual (often hierarchical or reciprocal) relationships, and their contributions to the performance of other component systems and the total system. The second step consists of identifying the indicators that represent the viability states of the component systems and the contributions of these component systems to the performance of the total system. The search for performance indicators is guided by the realization that essential interests (orientations or orientors) of systems and actors are shaped by both their characteristic functions and the fundamental and general properties of their system environments (e.g., normal environmental state, scarcity of resources, variety, variability, change, other coexisting systems). To be viable, a system must devote an essential minimum amount of attention to satisfying the “basic orientors” that respond to the properties of its environment. This fact can be used to define comprehensive and system-specific sets of performance indicators that reflect all important concerns.
…and in more detail from the text of his paper…
● Obtaining a conceptual understanding of the total system. We cannot hope to find indicators that represent the viability of systems and their component systems unless we have at least a crude, but essentially realistic, understanding of the total system and its essential component systems. This requires a conceptual understanding in the form of at least a good mental model.
● Identifying representative indicators. We have to select a small number of representative indicators from a vast number of potential candidates in the system and its component systems. This means concentrating on the variables of those component systems that are essential to the viability and performance of the total system.
● Assessing performance based on indicator states. We must find measures that express the viability and performance of component systems and the total system. This requires translating indicator information into appropriate viability and performance measures.
● Developing a participative process. The previous three steps require a large number of choices that necessarily reflect the knowledge and values of those who make them. In holistic management, it is therefore essential to bring in a wide spectrum of knowledge, experience, mental models, and social and environmental concerns to ensure that a comprehensive indicator set and proper performance measures are found.
“Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets,” Hartmut Bossel, Ecology and Society, Vol. 5, No. 2, Art. 12, 2001
Another dimension is added to this applicability and relevance by the work of Xabier E. Barandiaran and Matthew D. Egbert on the role of norms in complex systems involving agents. Here is an extract from the abstract of their paper:
“One of the fundamental aspects that distinguishes acts from mere events is that actions are subject to a normative dimension that is absent from other types of interaction: natural agents behave according to intrinsic norms that determine their adaptive or maladaptive nature. We briefly review current and historical attempts to naturalize normativity from an organism-centred perspective that conceives of living systems as defining their own norms in a continuous process of self-maintenance of their individuality. We identify and propose solutions for two problems of contemporary modelling approaches to viability and normative behaviour in this tradition: 1) How to define the topology of the viability space beyond establishing normatively-rigid boundaries, so as to include a sense of gradation that permits reversible failure; and 2) How to relate, in models of natural agency, both the processes that establish norms and those that result in norm-following behaviour.”
The authors’ definition of a viability space in the same paper is of particular interest:
Viability space: the space defined by the relationship between: a) the set of essential variables representing the components, processes or relationships that determine the system’s organization and, b) the set of external parameters representing the environmental conditions that are necessary for the system’s self-maintenance
“Norm-establishing and norm-following in autonomous agency,” Xabier E. Barandiaran (IAS-Research Centre for Life, Mind, and Society, Dept. of Logic and Philosophy of Science, UPV/EHU University of the Basque Country, Spain) and Matthew D. Egbert (Center for Computational Neuroscience and Robotics, University of Sussex, Brighton, U.K.)
Clearly, an adequate account of the existential viability of civilization would want to address the “essential variables representing the components, processes or relationships that determine” the civilization’s structure, as well as the “external parameters representing the environmental conditions that are necessary” for the civilization’s self-maintenance.
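Barandiaran and Egbert’s definition lends itself to a compact formalization. The following sketch is purely illustrative, and the symbols X, P, and V are my own notation, not the authors’:

```latex
% Illustrative formalization of a viability space:
% X = space of essential variables (the components, processes, or
%     relationships that determine the system's organization)
% P = space of external parameters (the environmental conditions
%     necessary for the system's self-maintenance)
% V = the subset of joint states under which the system maintains itself
\[
  V \;=\; \bigl\{\, (x, p) \in X \times P \;:\;
  \text{the system's organization is maintained under } (x, p) \,\bigr\}
  \;\subseteq\; X \times P
\]
```

On this rendering, the “sense of gradation that permits reversible failure” that the authors call for could be captured by replacing the rigid set V with a graded function on X × P, so that viability admits of degrees rather than a sharp boundary.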
In working through the conception of existential risk in this series of posts, I have come to realize how comprehensive the idea of existential risk is, which gives it a particular utility in discussing the big picture and the human future. Insofar as existential viability is the condition that results from the successful mitigation of existential risk, the idea of existential viability is at least as comprehensive as that of existential risk.
In formulating this initial exposition of existential viability I have been struck by the conceptual synchronicities that have emerged: recent work in viability theory suggests the possibility of the mathematical modeling of civilization; the work of Barandiaran and Egbert on viability space has shown me the relevance of artificial life and artificial intelligence research; the key role of the concept of viability in ecology makes recent ecological studies (such as “Assessing Viability and Sustainability,” cited above) relevant to existential viability and therefore also to existential risk; formulations of ecological viability and sustainability, together with the recognition that ecological systems are complex systems, demonstrate the relevance of complexity theory; and ecological relevance to existential concerns points to the possibility of applying what I have written earlier about metaphysical ecology and ecological temporality to existential risk and existential viability, which in turn demonstrates the relevance of Bronfenbrenner’s work to this intellectual milieu. I dare say that the idea of existential viability has itself a kind of viability and resilience due to its many connections to many distinct disciplines.
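For readers curious about the viability theory mentioned above, the standard setup (due to Jean-Pierre Aubin) can be sketched as follows; applying it to civilization would of course require specifying the state space and constraint set, which remains an open problem:

```latex
% Standard viability-theory setup (after Aubin):
% the system evolves by a differential inclusion, K is the constraint
% set of "viable" states, and the viability kernel is the set of
% initial states from which at least one evolution remains in K forever.
\[
  \dot{x}(t) \in F(x(t)), \qquad
  \mathrm{Viab}_F(K) \;=\; \bigl\{\, x_0 \in K \;:\;
  \exists\, x(\cdot),\ x(0) = x_0,\ x(t) \in K \ \forall\, t \ge 0 \,\bigr\}
\]
```

A civilization whose state lies in the viability kernel can, at least in principle, choose an evolution that avoids leaving the constraint set; a state outside the kernel is doomed to violate the constraints no matter what is done, which is one way of making the idea of existential viability mathematically precise.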
. . . . .
Existential Risk: The Philosophy of Human Survival
10. Existential Risk and Existential Viability
. . . . .
27 July 2013
Ninth in a Series on Existential Risk:
How we understand what exactly is at risk.
How we understand existential risk, then, affects what we understand to be a risk and what we understand to be a reward.
It is possible to clarify this claim, or at least to lay out in greater detail the conceptualization of existential risk, and it is worthwhile to pursue such a clarification.
We cannot identify risk-taking or risk-averse behavior unless we can identify instances of risk. Any given individual is likely to identify risks differently than any other individual, and the greater the difference between two individuals, the greater the difference is likely to be in their identification of risks. Similarly, a given community or society will be likely to identify risks differently than any other, and the greater the differences between two communities, the greater the difference is likely to be between the existential risks they identify.
This difference in the assessment of risk can, at least in part, be attributed to the role of knowledge in determining the distinction between prediction, risk, and uncertainty, as discussed in Existential Risk and Existential Uncertainty and Addendum on Existential Risk and Existential Uncertainty: distinct individuals, communities, societies, and indeed civilizations are in possession not only of distinct knowledge, but also of distinct kinds of knowledge. The distinct epistemic profiles of different societies result in distinct understandings of existential risk.
Consider, for example, the kind of knowledge that is widespread in agrarian-ecclesiastical civilization in contradistinction to industrial-technological civilization: in the former, many people know the intimate details of farming, but few are literate; in the latter, many are literate, but few know how to farm. The macro-historical division of civilization in which a given population is to be found profoundly shapes the epistemic profile of the individuals and communities that fall within a given macro-historical division.
Moreover, knowledge is integral with ideological, religious and philosophical ideas and assumptions that provide the foundation of knowledge within a given macro-historical division of civilization. The intellectual foundations of agrarian-ecclesiastical civilization (something I explicitly discussed in Addendum on the Agrarian-Ecclesiastical Thesis) differ profoundly from the intellectual foundations of industrial-technological civilization.
Differences in knowledge and differences in the conditions of the possibility of knowledge among distinct individuals and civilizations mean that the boundaries between prediction, risk, and uncertainty are differently constructed. In agrarian-ecclesiastical civilization, the religious ideology that lies at the foundation of all knowledge gives certainty (and therefore predictability) to things not seen, while consigning all of this world to an unpredictable (therefore uncertain) vale of tears in which any community might find itself facing starvation as the result of a bad harvest. The naturalistic philosophical foundations of knowledge in industrial-technological civilization have stripped away all certainty in regard to things not seen, but by systematically expanding knowledge they have greatly reduced uncertainty in this world, converting many uncertainties into risks and some risks into certain predictions.
Differences in knowledge can also partly explain differences in risk perception among individuals: the greater one’s knowledge, the more one faces calculable risks rather than uncertainties, and predictable consequences rather than risks. Moreover, the kind of knowledge one possesses will govern the kind of risk one perceives and the kind of predictions that one can make with a degree of confidence in the outcome.
While there is much that can be explained by differences in knowledge, and by differences between kinds of knowledge (a literary scholar will be certain of different epistemic claims than a biologist), there is also much that cannot be explained by knowledge at all, and these differences in risk perception are the most fraught and problematic, because they are due to moral and ethical differences between individuals, between communities, and between civilizations.
One might well ask — Who would possibly object to preventing human extinction? There are many interesting moral questions hidden within this apparently obvious question. Can we agree on what constitutes human viability in the long term? Can we agree on what is human? Would some successor species to humanity count as human, and therefore as an extension of human viability, or must human viability be attached to a particular idea of the Homo sapiens genome frozen in time in its present form? And we must also keep in mind that many today view human actions as being so egregious that the world would be better off without us, and such persons, even if in the minority, might well affirm that human extinction would be a good thing.
Let us consider, for a moment, a couple of Nick Bostrom’s formulations of existential risk:
An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.
…an existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or the permanent and drastic failure of that life to realise its potential for desirable development. In other words, an existential risk jeopardises the entire future of humankind.
“Existential Risk Prevention as Global Priority,” Nick Bostrom, University of Oxford, Global Policy, Vol. 4, No. 1, 2013, University of Durham and John Wiley & Sons, Ltd.
What exactly would constitute the “drastic failure of that life to realise its potential for desirable development”? What exactly is permanent stagnation? Flawed realization? Subsequent ruination? What is human potential? Does it include transhumanism?
For some, the very idea of transhumanism is a moral horror, and a paradigm case of flawed realization. For others, transhumanism is a necessary condition of the full realization of human potential. Thus one might imagine an exciting human future of interstellar exploration and expanding knowledge of the world, and understand this to be an instance of permanent stagnation because human beings do not augment themselves and become something more or something different than we are today. And, honestly, such a scenario does involve an essentially stagnant conception of humanity. Another might imagine a future of continual human augmentation and experimentation, but more or less populated by beings — however advanced — who engage in essentially the same pursuits as those we pursue today, so that while the concept of humanity has not remained stagnant, the pursuits of humanity are essentially mired in permanent stagnation.
Similar considerations hold for civilization as hold for individuals: there are vastly different conceptions of what constitutes a viable civilization and of what constitutes the good for civilization. Future forms of civilization that depart too far from the Good may be characterized as instances of flawed realization, while future forms of civilization that don’t depart at all from contemporary civilization may be characterized as instances of permanent stagnation. The extinction of earth-originating intelligent life, or the subsequent ruination of our civilization, may seem more straightforward, but what constitutes earth-originating intelligent life is vulnerable to the questions above about human successor species, and subsequent ruination may be judged by some to be preferable to the continuation of the present trajectory of civilization.
Sometimes these moral differences among peoples are exemplified in distinct civilizations. The kinds of existential risks recognized within agrarian-ecclesiastical civilization are profoundly different from the kinds of existential risks now being recognized by industrial-technological civilization. We can see earlier conceptions of existential risk as deviant, limited, or flawed as compared to those conceptions made possible by the role of science within our civilization, but we should also realize that, if we could revive representatives of agrarian-ecclesiastical civilization and give them a tour of our world today, they would certainly recognize features of our world of which we are most proud as instances of flawed realization (once we had explained to them what “flawed realization” means). For a further investigation of this idea I strongly recommend that the reader peruse Reinhart Koselleck’s Futures Past: On the Semantics of Historical Time.
It would be well worth the effort to pursue possible quantitative measures of human extinction, permanent stagnation, flawed realization, and subsequent ruination, but if we do so we must do so in the full knowledge that this is as much a moral and philosophical inquiry as it is a scientific and theoretical inquiry; we cannot separate the desirability of future outcomes from the evaluative nature of our desires.
Like the sailors on the Pequod who each look into the gold doubloon nailed to the mast and see themselves and their personal concerns within, just so when we look into the mirror that is the future, we see our own hopes and fears, notwithstanding the fact that, when the future arrives, our concerns will be long washed away by the passage of time, replaced by the hopes and fears of future men and women (or the successors of men and women).
. . . . .
Existential Risk: The Philosophy of Human Survival
9. Conceptualization of Existential Risk
. . . . .