The Structure of Hope

20 February 2015

Friday


Kant on Hope

Kant famously summed up the concerns of his vast body of philosophical work in three questions:

1) What can I know?

2) What ought I to do? and…

3) What may I hope?

These three questions roughly correspond to his three great philosophical treatises, the Critique of Pure Reason, the Critique of Practical Reason, and the Critique of Judgment, which represent, respectively, rigorous inquiries into knowledge, ethics, and teleology. However much the world has changed since Kant, we can still feel the imperative behind his three questions, and we can still ask them today with complete sincerity. This matters because many people who deceive themselves as to their true motives ask themselves questions and accept answers that they do not truly believe on a visceral level. Kant’s questions are not like this.

In other contexts I have considered what we can know, and what we ought to do. (For example, I have just reviewed some aspects of what we can know in Personal Experience and Empirical Knowledge, and in posts like The Moral Imperative of Human Spaceflight I have looked at what we ought to do.) Here I will consider the third of Kant’s questions — what we are entitled to hope. There is no more important study toward understanding the morale of a people than to grasp the structure of hope that prevails in a given society. Kant’s third question — What may I hope? — is perhaps that imperative of human longing that was felt first, has been felt most strongly through the history of our species, and will be the last that continues to be felt even while others have faded. We have all heard that hope springs eternal in the human breast.

It is hope that gives historical viability both to individuals and their communities. In so far as the ideal of historical viability is permanence, and in so far as we agree with Kenneth Clark that a sense of permanence is central to civilization, then hope that aspires to permanence is the motive force that built the great monuments of civilization that Clark identified as such, and which are the concrete expressions of aspirations to permanence. Here hope is a primary source of civilization. More recent thought might call this concrete expression of aspirations to permanence the tendency of civilizations to raise works of monumental architecture (this is, for example, the terminology employed in Big History).


Hope and Conceptions of History

The structure of hope mirrors the conception of history prevalent within a given society. A particular species of historical consciousness gives rise to a particular conception of history, and a particular conception of history in turn defines the parameters of hope. That is to say, the hope that is possible within a given social context is a function of the conception of history; what hope is possible, what hope makes sense, is limited to those forms of hope that are both actualized by and delimited by a conception of history. The function of delimitation puts certain forms of hope out of consideration, while the function of actualization nurtures those possible forms of hope into life-sustaining structures that, under other conceptions of history, would remain stunted and deformed growths, if they were possible forms of hope at all.

In analyzing the structure of hope I will have recourse to the conceptions of history that I have been developing in this forum. Consequently, I will identify political hope, catastrophic hope, eschatological hope, and naturalistic hope. This proves to be a conceptually fertile way to approach hope, since hope is a reflection of human agency, and I have remarked in Cosmic War: An Eschatological Conception that the four conceptions of history I have been developing are based upon a schematic understanding of the possibilities of human agency in the world.

All of these structures of hope — political, catastrophic, eschatological, and naturalistic — have played important roles in human history. Often we find more than one form of hope within a given society, which tells us that no conception of history is total, that each admits of exceptions, and that societies can admit of pluralistic manifestations of historical consciousness.

Hope begins where human agency ends but human desire still presses forward. A man with political hope looks to a better and more just society in the future, as a function of his own agency and the agency of fellow citizens; a man with catastrophic hope believes that he may win the big one, that his ship will come in, that he will be the recipient of great good fortune; a man with eschatological hope believes that he will be rewarded in the hereafter for his sacrifices and sufferings in this world; a man with naturalistic hope looks to the good life for himself and a better life for his fellow man. Each of these personal forms of hope corresponds to a society that both grows out of such personal hopes and reinforces them in turn, transforming them into social norms.

Woman's Eye and World Globes

Structure and Scope

While a conception of history governs the structure of hope, the contingent circumstances that are the events of history — the specific details that fill in the general structure of history — govern the scope of hope. The lineaments of hope are drawn jointly by its structure and scope, so that we see the particular visage of hope when we understand the historical structure and scope of a civilization.

Like structure, scope is an expression of human agency. An individual — or a society — blessed with great resources possesses great power, and thus great freedom of action. An individual or a society possessed of impoverished resources has much more limited power and therefore is constrained in freedom of action. In so far as one can act — that is to say, in so far as one is an agent — one acts in accord with the possibilities and constraints defined by the scope of one’s world. The scope of human agency has changed over historical time, largely driven by technology; much of the human condition can be defined in terms of humanity as tool makers.

Technology is incremental and cumulative, and it generally describes an exponential growth curve. We labor at a very low level for very long periods of time, so that our posterity can enjoy the fruits of our efforts in a later age of abundance. Thus our hopes for the future are tied up in our posterity and their agency in turn. And it is technology that systematically extends human agency. To a surprising degree, then, the scope of civilization corresponds to the technology of a civilization. This technology can come in different forms. Early civilizations mastered the technology of bureaucratic organization, and managed to administer great empires even with a very low level of technical expertise in material culture. This has changed over time, and political entities have grown in size and increased in stability as increasing technical mastery makes the administration of the planet entire a realistic possibility.

The scope of civilization has expanded as our technologically-assisted agency has expanded, and today as we contemplate our emerging planetary civilization such organization is within our reach because our technologies have achieved a planetary scale. Our hopes have grown along with the expanding scope of our civilization, so that justice, luck, salvation, and the good life all reflect the planetary scope of human agency familiar to us today.


Hope in Planetary Civilization

What may we hope in our planetary civilization of today, given its peculiar possibilities and constraints? How may we answer Kant’s third question today? Do we have any answers at all, or is ours an Age of Uncertainty that denies the possibility of any and all answers?

Those of a political frame of mind hope for “a thriving global civilization and, therefore… the greater well-being of humanity.” (Sam Harris, The Moral Landscape) Those with a catastrophic outlook hope for some great and miraculous event that will deliver us from the difficulties in which we find ourselves immersed. Those whose hope is primarily eschatological imagine the conversion of the world entire to their particular creed, and the consequent rule of the righteous on a planetary scale. And those of a naturalistic disposition look to what human beings can do for each other, without the intervention of fortune or otherworldly salvation.

How each of these attitudes is interpreted in the scope of our current planetary civilization is largely contingent upon how an individual or group of individuals with shared interests views the growth of technology over the past century, and this splits fairly neatly into the skeptics of technology and the enthusiasts of technology, with a few sitting on the fence and waiting to see what will happen next. Among those with the catastrophic outlook on history will be the fence sitters, because they will be waiting for some contingent event to occur which will tip us in one direction or the other, into technological catastrophe or technological bonanza. Those of an eschatological outlook tend to view technology in purely instrumental terms, and the efficacy of their grand vision of a spiritually unified and righteous planet will largely depend on the pragmatism of their instrumental conception of technology. The political cast of mind also views technology instrumentally, but primarily for what it can do to advance the cause of large-scale social organization (which in the eschatological conception is given over to otherworldly powers).

Perhaps the greatest dichotomy is to be found in the radically different visions of technology held by those of a naturalistic outlook. The naturalistic outlook today is much more common than it appears to be, despite much heated rhetoric to the contrary, since, as I wrote above, many of us deceive ourselves as to our true motives and our true beliefs. The rise of science since the scientific revolution has transformed the world, and many accept a scientific world view without even being aware that they hold such views. Rhetorically they may give pride of place to political ideology or religious faith, but when they act they act in accordance with reason and evidence, remaining open to change if their first interpretations of reason and evidence seem to be contradicted by circumstances and consequences.

The dichotomy of the naturalistic mind today is that between human agency that retreats from technology, as though it were a failed project, and human agency that embraces technology. Each tends to think of their relation to technology in terms of liberation. For the critics of technology, we have become enslaved to The Machine, and either by overthrowing the technological system, or simply by turning our backs on it, people can help each other by living modest lives, transitioning to a sustainable economy, cultivating community gardens, watching over their neighbors, and, generally speaking, living up to (or, if you prefer, down to) the “small is beautiful” and “limits to growth” creed that had already emerged in the early 1970s.

The contrast could not be more stark between this naturalistic form of hope and the technology-embracing naturalistic form of hope. The technological humanist also sees people helping each other, but doing so on an ever grander scale, allowing human beings to realistically strive toward levels of self-actualization and fulfillment not even possible in earlier ages, perhaps not even conceivable. The human condition, for such naturalists, has enslaved us to a biological regime, and it is the efficacy of technology that is going to liberate us from the stunted and limited lives that have been our lot since the species emerged. Ultimately, technology-embracing naturalists look toward transhumanism and all that it potentially promises to human hopes, which in this context can be literally unbounded.


Hope in the Age of Naturalism

Given the state of the world today, with all its pessimism, and the violence of contesting power centers apparently motivated by unchanged and unchanging conceptions of the human condition, the reader may be surprised that I focus on naturalism and the naturalistic conception of history. If we do not destroy ourselves in the short term, the long term belongs to naturalism. Contemporary political hope, in so far as it is pragmatic, is naturalistic, and insofar as it is not pragmatic, it will fail. The hysterical and bloody depredations of religious mania in our time are only as bad as they are because religious ideology is under threat from the success of naturalistically-enabled science and technology. Once the break with the past is made, eschatological hope will no longer be the basis of large-scale social organization, and therefore its ability to cause harm will be greatly limited (though it will not disappear). The catastrophic viewpoint is always limited by its shoulder-shrugging attitude toward human agency.

Most people cannot bear to leave their fate to fate, but will take their fate into their own hands if they can. How people take their fate into their hands in the future, and therefore the form of hope they entertain for what they do with the fate held in their hands, will largely be defined by naturalism. Perhaps this is ironic, as it has long been assumed that, of perennial conceptions of the human condition, naturalism had the least to say about hope (and eschatology the most). That is only because the age of naturalism had not yet arrived. But naturalistic despair is just as much a reality as naturalistic hope, so that the coming of the age of naturalism will not bring a millennium of peace, justice, and happiness for all. Human leave-taking of the ideologies of the past is largely a matter of abandoning neurotic misery in favor of ordinary human unhappiness.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

project astrolabe logo smaller

. . . . .

Monday


When I was a child I heard that practicable fusion power was thirty years in the future. That was more than thirty years ago, and it is not uncommon to hear that practicable fusion power is still thirty years in the future. Jokes have been made about both fusion and artificial intelligence, to the effect that each will remain perpetually in the future, just out of reach of human technology — though the universe has been running on gravitational confinement fusion since the first stars lighted up at the beginning of the stelliferous era.

It is difficult to imagine anything more redolent of failed futurism than domed cities. Everyone, I think, will recall the domed city in the film Logan’s Run, which embodied so many paradigms of early 1970s futurism.


It would be easy to be nonchalantly cynical about nuclear fusion given past promises. After all, the first successful experiments with a tokamak reactor at the Kurchatov Institute in 1968 date to the time of many other failed futurisms that have since become stock figures of fun — the flying car, the jetpack, the domed city, and so on. One could dismiss nuclear fusion in the same spirit, but this would be a mistake. The long, hard road to nuclear fusion as an energy resource will have long-term consequences for our industrial-technological civilization.

Russian T1 Tokamak at the Kurchatov Institute in Moscow.


Like hypersonic flight, practicable fusion power has turned out to be a surprisingly difficult engineering challenge. Fusion research began in the 1920s with British physicist Francis William Aston, who discovered that four hydrogen atoms weigh more than one helium (He-4) atom, which means that fusing four hydrogen atoms together would result in the release of energy. The first practical fusion devices (including fusion explosives) were constructed in the 1950s, including several Z-pinch devices, stellarators, and tokamaks at the Kurchatov Institute.
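
Aston's observation can be checked with a few lines of arithmetic. The sketch below (using standard atomic masses and the usual energy equivalent of the atomic mass unit) computes the mass defect of fusing four hydrogen atoms into one helium-4 atom and its energy release via E = mc²; the script is only an illustration of the physics, not part of any fusion research code.

```python
# Back-of-envelope check of Aston's observation: four hydrogen atoms
# outweigh one helium-4 atom, and the difference is released as energy.
H_MASS_U = 1.007825    # atomic mass of hydrogen-1, in unified atomic mass units (u)
HE4_MASS_U = 4.002602  # atomic mass of helium-4, in u
U_TO_MEV = 931.494     # energy equivalent of 1 u, in MeV (from E = mc^2)

mass_defect = 4 * H_MASS_U - HE4_MASS_U   # ~0.0287 u
energy_mev = mass_defect * U_TO_MEV       # ~26.7 MeV per helium nucleus formed
fraction = mass_defect / (4 * H_MASS_U)   # ~0.7% of the rest mass converted

print(f"mass defect: {mass_defect:.4f} u")
print(f"energy released: {energy_mev:.1f} MeV")
print(f"fraction of rest mass converted: {fraction:.2%}")
```

That fraction of a percent, multiplied over stellar masses of hydrogen, is what has kept the stars burning through the stelliferous era.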


Ever since these initial successes in achieving fusion, fusion scientists have been trying to achieve breakeven or better, i.e., producing more power from the reaction than was consumed in making the reaction. It’s been a long, hard slog. If we start seeing fusion breakeven in the next decade, this will be a hundred years after the first research suggested the possibility of fusion as an energy resource. In other words, fusion power generation has been a technology in development for about a hundred years. For anyone who supposes that our civilization is too short-sighted to take on large multi-generational projects, the effort to master nuclear fusion stands as a reminder of what is possible when the stakes are sufficiently high.
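
Breakeven is conventionally expressed as the fusion gain factor Q, the ratio of fusion power produced to heating power consumed; Q = 1 is scientific breakeven. A minimal sketch (the function name is mine, and the sample figures are illustrative, though 500 MW out from 50 MW in is the scale often quoted for ITER's design goal):

```python
def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
    """Fusion gain factor Q = fusion power out / heating power in.

    Q = 1 is scientific breakeven; a commercial plant needs Q well above 1
    to cover conversion losses and the plant's own power consumption.
    """
    return p_fusion_mw / p_heating_mw

# Illustrative figures: 500 MW of fusion power from 50 MW of heating power.
print(fusion_gain(500.0, 50.0))  # 10.0
```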

The Z machine at Sandia National Laboratory.


I characterized fusion as a “technology of nature” in Fusion and Consciousness, though the mechanism by which nature achieves fusion — gravitational confinement — is not practical for human technology. Mostly following news stories I previously wrote about fusion in Fusion Milestone Passed at US Lab, High Energy Electron Confinement in a Magnetic Cusp, One Giant Leap for Mankind, and Why we don’t need a fusion powered rocket.

There was a good article in Nature earlier this year, Plasma physics: The fusion upstarts, which focused on some of the smaller research teams vying to make fusion reactors into practical power sources. Here are some of the approaches now being pursued that have been reported in the popular press:

High Beta Fusion Reactor The legendary Skunk Works, which built the U-2 and SR-71 spy planes, is working on a fusion reactor that it hopes will be sufficiently compact that it can be hauled on the back of a truck, and will produce 100 MW. (cf. Nuclear Fusion in Five Years?)

magnetized liner inertial fusion (MagLIF) This is a “Z pinch” design that was among the first fusion device concepts, now being developed as the “Z Machine” at Sandia National Laboratory. (cf. America’s Underdog Fusion Experiment Is Closing In on the Nuclear Future)

spheromak A University of Washington project formerly called a dynomak, a magnetic confinement device in the form of a sphere instead of the tokamak’s torus. (cf. Why nuclear fusion will soon become reality)

Polywell The Polywell concept was developed by Robert Bussard of Bussard ramjet fame, based on fusor devices, which have been in use for some time. (cf. Low-Cost Fusion Project Steps Out of the Shadows and Looks for Money)

Stellarator The stellarator is another early fusion idea based on magnetic confinement that fell out of favor after tokamaks showed early promise, but which is now the focus of active research again. (cf. From tokamaks to stellarators)

This is in no sense a complete list. There is a good summary of the major approaches on Wikipedia at Fusion Power. I give this short list simply to give a sense of the diversity of technological responses to the engineering challenge of controlled nuclear fusion for electrical power generation.

Polywell Fusion Reactor


Even as ITER remains the behemoth of fusion projects, projected to cost fifty billion USD in spending by thirty-five national governments, the project is so large and is coming together so slowly that other technologies may well leap-frog the large-scale ITER approach and achieve breakeven before ITER and by different methods. The promise of practical energy generation from nuclear fusion is now so tantalizingly close that, despite the amount of money going into ITER and NIF, a range of other approaches are being pursued with far less funding but perhaps equal promise. Ultimately there may turn out to be an unexpected benefit to the difficulty of attaining sustainable fusion reactions. The sheer difficulty of the problem has produced an astonishing range of approaches, all of which have something to teach us about plasma physics.

Stellarator devices look like works of abstract art.


Nuclear fusion as an energy source for industrial-technological civilization is a perfect example of what I call the STEM cycle: science drives technology, technology drives industrial engineering, and industrial engineering creates new resources that allow science to be pursued at a larger scope and scale. In some cases the STEM cycle functions as a loosely-coupled structure of our world. The resources of advanced mathematics are necessary to the expression of physics in mathematicized form, but there may be no direct coupling of physics and mathematics, and the mathematics used in physics may have been available for generations. Pure science may suggest a number of technologies, many of which lie fallow, with no particular interest in them. One technology may eventually come into mass manufacture, but it may not be seen to have any initial impact on scientific research. All of these episodes seem de-coupled, and can only be understood as a loosely-coupled cycle when seen in the big picture over the long term.

In the case of nuclear fusion, the STEM cycle is more tightly coupled: fusion science must be consciously developed with an eye to its application in various fusion technologies. The many specific technologies developed on the basis of fusion science are tested with an eye to which can be practically scaled up by industrial engineering to build a workable fusion power generation facility. This process is so tightly coupled in ITER and NIF that the primary research facilities hold out the promise of someday producing marketable power generation. The experience of operating a large scale fusion reactor will doubtless have many lessons for fusion scientists, who will in turn apply the knowledge gained from this experience to their scientific work. The first large scale fusion generation facilities will eventually become research reactors as they are replaced by more efficient fusion reactors specifically adapted to the needs of electrical power generation. With each generation of reactors the science, technology, and engineering will be improved.

The vitality of fusion science today, as revealed in the remarkable diversity of approaches to fusion, constitutes a STEM cycle with many possible inputs and many possible outputs. Even as the fusion STEM cycle is tightly coupled as science immediately feeds into particular technologies, which are developed with the intention of scaling up to commercial engineering, the variety of technologies involved have connections throughout the industrial-technological economy. Most obviously, if high-temperature superconductors become available, this will be a great boost for magnetic confinement fusion. A breakthrough in laser technology would be a boost for inertial confinement fusion. The profusion of approaches to fusion today means that any number of scientific discoveries or technological advances could have unanticipated benefits for fusion. And fusion itself, once it passes breakeven, will have applications throughout the economy, not limited to the generation of electrical power. Controlled nuclear fusion is a technology that has not experienced an exponential growth curve — at least, not yet — but this at once tightly-coupled and highly diverse STEM cycle certainly looks like a technology on the cusp of an exponential growth curve. And here even a modest exponent would make an enormous difference.
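
The force of that last remark, that even a modest exponent makes an enormous difference, is easy to verify with a line of arithmetic; the 7% annual growth rate below is a hypothetical placeholder, not a forecast for fusion.

```python
def compound(initial: float, annual_rate: float, years: int) -> float:
    """Value of a quantity growing exponentially at a fixed annual rate."""
    return initial * (1.0 + annual_rate) ** years

# A capability growing a modest 7% per year roughly doubles every decade:
for years in (10, 30, 50):
    print(years, round(compound(1.0, 0.07, years), 1))
```

At 7% per year, a unit capability becomes roughly 2x in a decade, about 7.6x in thirty years, and nearly 30x in fifty.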

This is big science with a big payoff. Everyone knows that, in a world run by electricity, the first to market with a practical fusion reactor that is cost-competitive with conventional sources (read: fossil fuels) stands to make a fortune not only with the initial introduction of their technology, but also for the foreseeable future. The wealthy governments of the world, by sinking the majority of their fusion investment into ITER, are virtually guaranteeing that the private sector will have a piece of the action when one of these alternative approaches to fusion proves to be at least as efficient, if not more efficient, than the tokamak design.

But fusion isn’t only about energy, profits, and power plants. Fusion is also about a vision of the future that avoids what futurist Joseph Voros has called an “energy disciplined society.” As expressed in panegyric form in a recent paper on fusion:

“The human spirit, its will to explore, to always seek new frontiers, the next Everest, deeper ocean floors, the inner secrets of the atom: these are iconised [sic] into human consciousness by the deeds of Christopher Columbus, Edmund Hillary, Jacques Cousteau, and Albert Einstein. In the background of the ever-expanding universe, this boundless spirit will be curbed by a requirement to limit growth. That was never meant to be. That should never be so. Man should have an unlimited destiny. To reach for the moon, as he already has; then to colonize it for its resources. Likewise to reach for the planets. Ultimately — the stars. Man’s spirit must and will remain indomitable.”

NUCLEAR FUSION ENERGY — MANKIND’S GIANT STEP FORWARD, Sing Lee and Sor Heoh Saw

The race for market-ready fusion energy is a race to see who will power the future, i.e., who will control the resource that makes our industrial-technological civilization viable in the long term. Profits will also be measured over the long term. Moreover, the energy market is such that multiple technologies for fusion may vie with each other for decades as each seeks to produce higher efficiencies at lower cost. This competition will drive further innovation in the tightly-coupled STEM cycle of fusion research.

. . . . .

Note added Wednesday 15 October 2014: Within a couple of days of writing the above, I happened upon two more articles on fusion in the popular press — another announcement from Lockheed, Lockheed says makes breakthrough on fusion energy project, and Cheaper Than Coal? Fusion Concept Aims to Bridge Energy Gap.

. . . . .

Sunday


The Life of Civilization

Regions in viability space. Living, dead, viable, precarious and terminal regions of the viability space. The dead region or state lies at [A] = 0, above which the living region appears. Inside the living region three different sub-regions are distinguished: the viable region (light grey) where the system will remain alive if environmental conditions don’t change, the precarious region (medium grey) where the system is still alive but tends towards death unless environmental conditions change and the terminal region (dark grey) where the system will irreversibly fall into the dead region. See text body for detailed explanation. (Xabier E. Barandiaran and Matthew D. Egbert)

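
The regions described in the caption can be given a toy rendering in code. This is a minimal sketch of the classification only, not Barandiaran and Egbert's model; the variable [A] stands for whatever quantity sustains the system, and the two thresholds are hypothetical placeholders.

```python
def classify(a: float, viable_min: float = 2.0, terminal_max: float = 0.5) -> str:
    """Classify a system state by its sustaining quantity [A].

    The dead region lies at [A] = 0; the living region above it splits
    into terminal, precarious, and viable sub-regions (thresholds here
    are illustrative placeholders, not values from the paper).
    """
    if a <= 0:
        return "dead"
    if a < terminal_max:
        return "terminal"    # will irreversibly fall into the dead region
    if a < viable_min:
        return "precarious"  # alive, but tends toward death unless conditions change
    return "viable"          # remains alive if conditions don't change

print([classify(a) for a in (0.0, 0.3, 1.0, 3.0)])
# ['dead', 'terminal', 'precarious', 'viable']
```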

Tenth in a Series on Existential Risk


What makes a civilization viable? What makes a species viable? What makes an individual viable? To put the question in its most general form, what makes a given existent viable?

These are the questions that we must ask in the pursuit of the mitigation of existential risk. The most general question — what makes an existent viable? — is the most abstract and theoretical question, and as soon as I posed this question to myself in these terms, I realized that I had attempted to answer this earlier, prior to the present series on existential risk.

In January 2009 I wrote, generalizing from a particular existential crisis in our political system:

“If we fail to do what is necessary to perpetuate the human species and thus precipitate the end of the world indirectly by failing to do what was necessary to prevent the event, and if some alien species should examine the remains of our ill-fated species and their archaeologists reconstruct our history, they will no doubt focus on the problem of when we turned the corner from viability to non-viability. That is to say, they would want to try to understand the moment, and hence possibly also the nature, of the suicide of our species. Perhaps we have already turned that corner and do not recognize the fact; indeed, it is likely impossible that we could recognize the fact from within our history that might be obvious to an observer outside our history.”

This poses the viability of civilization in stark terms, and I can now see in retrospect that I was feeling my way toward a conception of existential risk and its moral imperatives before I was fully conscious of doing so.

From the beginning of this blog I started writing about civilizations — why they rise, why they fall, and why some remain viable for longer than others. My first attempt to formulate the above stark dilemma facing civilization in the form of a principle, in Today’s Thought on Civilization, was as follows:

a civilization fails when it fails to change when the world changes

This formulation in terms of the failure of civilization immediately suggests a formulation in terms of the success (or viability) of a civilization, which I did not formulate at that time:

A civilization is viable when it successfully changes when the world changes.

I also stated in the same post cited above that the evolution of civilization has scarcely begun, which continues to be my point of view and informs my ongoing efforts to formulate a theory of civilization on the basis of humanity’s relatively short experience of civilized life.

In any case, in the initial formulation given above I have, like Toynbee, taken the civilization as the basic unit of historical study. I continued in this vein, writing a series of posts about civilization, The Phenomenon of Civilization, The Phenomenon of Civilization Revisited, Revisiting Civilization Revisited, Historical Continuity and Discontinuity, Two Conceptions of Civilization, A Note on Quantitative Civilization, inter alia.

I moved beyond civilization-specific formulations of what I would come to call the principle of historical viability in a later post:

…the general principle enunciated above has clear implications for historical entities less comprehensive than civilizations. We can both achieve a greater generality for the principle, as well as to make it applicable to particular circumstances, by turning it into the following schema: “an x fails when it fails to change when the world changes” where the schematic letter “x” is a variable for which we can substitute different historical entities ceteris paribus (as the philosophers say). So we can say, “A city fails when it fails to change…” or “A union fails when it fails to change…” or (more to the point at present), “A political party fails when it fails to change when the world changes.”

And in Challenge and Response I elaborated on this further development of what it means to be historically viable:

…my above enunciated principle ought to be amended to read, “An x fails when it fails to change as the world changes” (instead of “…when the world changes”). In other words, the kind of change an historical entity must undergo in order to remain historically viable must be in consonance with the change occurring in the world. This is, obviously, or rather would be, a very difficult matter to nail down in quantitative terms. My schema remains highly abstract and general, and thus glides over any number of difficulties vis-à-vis the real world. But the point here is that it is not so much a matter of merely changing in parallel with the changing world, but changing how the world changes, changing in the way that the world changes.

It was also in this post that I first called this the principle of historical viability.

I now realize that what I then called historical viability might better be called existential viability — at least, by reformulating my principle of historical viability again and calling it the principle of existential viability, I can assimilate these ideas to my recent formulations of existential risk. Seeing historical viability through the lens of existential risk and existential viability allows us to formulate the following relationship between the latter two:

Existential viability is the condition that follows from the successful mitigation of existential risk.

Thus the achievement of existential risk mitigation is existential viability. So when we ask, “What makes an existent viable?” we can answer, “The successful mitigation of risks to that existent.” This gives us a formal framework for understanding existential viability as the successful mitigation of existential risk, but it tells us nothing about the material conditions that contribute to existential viability. Determining the material conditions of existential viability will be a matter both of empirical study and of the formulation of a theoretical infrastructure adequate to the conditions that bear upon civilization. Neither of these exists yet, but we can make some rough observations about the material conditions of existential viability.

Different qualities in different places at different times have contributed to the viability of existents. This is one of the great lessons of natural selection: evolution is not about a ladder of progress, but about what organism is best adapted to the particular conditions of a particular area at a particular time. When the “organism” in question is civilization, the lesson of natural selection remains valid: civilizations do not describe a ladder of progress, but those civilizations that have survived have been those best adapted to the particular conditions of a particular region at a particular time. Existential risk mitigation is about making civilization part of evolution, i.e., part of the long term history of the universe.

To acknowledge the position of civilization in the long term history of the universe is to recognize that a change has come about in civilization as we know it, and this change is primarily the consequence of the advent of industrial-technological civilization: civilization is now global, and populations across the planet, once isolated by geographical barriers, now communicate instantaneously and trade and travel nearly as quickly. A global civilization means that civilization is no longer selected on the basis of local conditions at a particular place at a particular time — as was true of past civilizations. Civilization is now selected globally, and this means placing the earth that is the bearer of global civilization in a cosmological context of selection.

What selects a planet for the long term viability of the civilization that it bears? This is essentially a question of astrobiology, a point that I attempted to make in my recent presentation at the Icarus Interstellar Starship Congress and in my post on Paul Gilster’s Centauri Dreams, Existential Risk and Far Future Civilization.

An astrobiological context suggests what we might call an astroecological context, and I have many times pointed out the relevance of ecology to questions of civilization. Pursuing the idea of existential viability may offer a new perspective for the application of methods developed for the study of the complex systems of ecology to the complex systems of civilization. And civilizations are complex systems if they are anything.

There is a growing branch of mathematical ecology called viability theory, with obvious application to the viability of the complex systems of civilization. We can immediately see this applicability and relevance in the following passage:

“Agent-based complex systems such as economics, ecosystems, or societies, consist of autonomous agents such as organisms, humans, companies, or institutions that pursue their own objectives and interact with each other and their environment (Grimm et al. 2005). Fundamental questions about such systems address their stability properties: How long will these systems exist? How much do their characteristic features vary over time? Are they sensitive to disturbances? If so, will they recover to their original state, and if so, why, from what set of states, and how fast?”

Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society (Understanding Complex Systems), edited by Guillaume Deffuant and Nigel Gilbert, p. 3
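The core intuition of viability theory can be put in computational terms. Here is a toy sketch of my own (not drawn from Deffuant and Gilbert, with all parameter values invented for illustration): a system is viable from a given initial state if its trajectory never leaves a prescribed constraint set, and the set of initial states from which this holds approximates what viability theory calls the viability kernel.

```python
# Toy illustration of viability theory: a renewable resource with logistic
# growth and a constant harvest. A state is "viable" if the trajectory it
# generates stays within prescribed bounds over the whole horizon.
# All parameter values are hypothetical, chosen only for illustration.

def simulate_stock(x0, growth=0.5, capacity=100.0, harvest=10.0, steps=50):
    """Yield the resource stock at each time step."""
    x = x0
    for _ in range(steps):
        x = x + growth * x * (1 - x / capacity) - harvest
        yield x

def is_viable(x0, lower=5.0, upper=100.0):
    """True if the trajectory never leaves the constraint set [lower, upper]."""
    return all(lower <= x <= upper for x in simulate_stock(x0))

# Initial states from which the system remains viable approximate the
# "viability kernel" of the constraint set.
kernel = [x0 for x0 in range(0, 101, 5) if is_viable(float(x0))]
```

In viability theory proper the kernel is characterized by set-valued analysis rather than brute-force simulation; the brute-force version here is only meant to convey the concept of a boundary between historically viable and non-viable states.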

Civilization itself is an agent-based complex system like “economics, ecosystems, or societies.” Another innovative approach to complex systems and their viability is to be found in the work of Hartmut Bossel. Here is an extract from the abstract of his paper “Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets”:

Performance assessment in holistic approaches such as integrated natural resource management has to deal with a complex set of interacting and self-organizing natural and human systems and agents, all pursuing their own “interests” while also contributing to the development of the total system. Performance indicators must therefore reflect the viability of essential component systems as well as their contributions to the viability and performance of other component systems and the total system under study. A systems-based derivation of a comprehensive set of performance indicators first requires the identification of essential component systems, their mutual (often hierarchical or reciprocal) relationships, and their contributions to the performance of other component systems and the total system. The second step consists of identifying the indicators that represent the viability states of the component systems and the contributions of these component systems to the performance of the total system. The search for performance indicators is guided by the realization that essential interests (orientations or orientors) of systems and actors are shaped by both their characteristic functions and the fundamental and general properties of their system environments (e.g., normal environmental state, scarcity of resources, variety, variability, change, other coexisting systems). To be viable, a system must devote an essential minimum amount of attention to satisfying the “basic orientors” that respond to the properties of its environment. This fact can be used to define comprehensive and system-specific sets of performance indicators that reflect all important concerns.

…and in more detail from the text of his paper…

Obtaining a conceptual understanding of the total system. We cannot hope to find indicators that represent the viability of systems and their component systems unless we have at least a crude, but essentially realistic, understanding of the total system and its essential component systems. This requires a conceptual understanding in the form of at least a good mental model.

Identifying representative indicators. We have to select a small number of representative indicators from a vast number of potential candidates in the system and its component systems. This means concentrating on the variables of those component systems that are essential to the viability and performance of the total system.

Assessing performance based on indicator states. We must find measures that express the viability and performance of component systems and the total system. This requires translating indicator information into appropriate viability and performance measures.

Developing a participative process. The previous three steps require a large number of choices that necessarily reflect the knowledge and values of those who make them. In holistic management, it is therefore essential to bring in a wide spectrum of knowledge, experience, mental models, and social and environmental concerns to ensure that a comprehensive indicator set and proper performance measures are found.

“Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets,” Hartmut Bossel, Ecology and Society, Vol. 5, No. 2, Art. 12, 2001

Another dimension can be added to this applicability and relevance by the work of Xabier E. Barandiaran and Matthew D. Egbert on the role of norms in complex systems involving agents. Here is an extract from the abstract of their paper:

“One of the fundamental aspects that distinguishes acts from mere events is that actions are subject to a normative dimension that is absent from other types of interaction: natural agents behave according to intrinsic norms that determine their adaptive or maladaptive nature. We briefly review current and historical attempts to naturalize normativity from an organism-centred perspective that conceives of living systems as defining their own norms in a continuous process of self-maintenance of their individuality. We identify and propose solutions for two problems of contemporary modelling approaches to viability and normative behaviour in this tradition: 1) How to define the topology of the viability space beyond establishing normatively-rigid boundaries, so as to include a sense of gradation that permits reversible failure; and 2) How to relate, in models of natural agency, both the processes that establish norms and those that result in norm-following behaviour.”

The authors’ definition of a viability space in the same paper is of particular interest:

Viability space: the space defined by the relationship between: a) the set of essential variables representing the components, processes or relationships that determine the system’s organization and, b) the set of external parameters representing the environmental conditions that are necessary for the system’s self-maintenance

“Norm-establishing and norm-following in autonomous agency,” Xabier E. Barandiaran (IAS-Research Centre for Life, Mind, and Society, Dept. of Logic and Philosophy of Science, UPV/EHU University of the Basque Country, Spain) and Matthew D. Egbert (Center for Computational Neuroscience and Robotics, University of Sussex, Brighton, U.K.)

Clearly, an adequate account of the existential viability of civilization would want to address the “essential variables representing the components, processes or relationships that determine” the civilization’s structure, as well as the “external parameters representing the environmental conditions that are necessary” for the civilization’s self-maintenance.
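The graded boundary Barandiaran and Egbert argue for — a viability space that permits “reversible failure” rather than a rigid in/out border — can be sketched as follows. This is a hypothetical encoding of my own; the variable names and lethal limits are invented for illustration.

```python
# Hypothetical sketch of a graded "viability space": essential variables with
# lethal limits, where viability falls off continuously toward the limits
# rather than switching off at a rigid boundary. The gradation is what
# permits "reversible failure": a system near the edge is degraded, not dead.

from dataclasses import dataclass

@dataclass
class EssentialVariable:
    name: str
    lethal_min: float
    lethal_max: float

    def gradation(self, value: float) -> float:
        """0.0 at either lethal limit, rising to 1.0 at the midpoint."""
        if not (self.lethal_min < value < self.lethal_max):
            return 0.0
        half_range = (self.lethal_max - self.lethal_min) / 2
        midpoint = self.lethal_min + half_range
        return 1.0 - abs(value - midpoint) / half_range

def viability(state: dict, variables: list) -> float:
    """Overall viability is set by the weakest essential variable."""
    return min(v.gradation(state[v.name]) for v in variables)

variables = [
    EssentialVariable("energy", 0.0, 100.0),
    EssentialVariable("temperature", 10.0, 40.0),
]
# A state deep in the interior of the viability space:
healthy = viability({"energy": 50.0, "temperature": 25.0}, variables)
# A state near a lethal limit: failing, but reversibly so:
stressed = viability({"energy": 50.0, "temperature": 12.0}, variables)
```

The external parameters of the authors’ definition would enter as conditions that shift these lethal limits; the sketch captures only the essential-variables half of their schema.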

In working through the conception of existential risk in the series of posts I have written here I have come to realize how comprehensive the idea of existential risk is, which gives it a particular utility in discussing the big picture and the human future. In so far as existential viability is the condition that results from the successful mitigation of existential risk, then the idea of existential viability is at least as comprehensive as that of existential risk.

In formulating this initial exposition of existential viability I have been struck by the conceptual synchronicities that have emerged: recent work in viability theory suggests the possibility of the mathematical modeling of civilization; the work of Barandiaran and Egbert on viability space has shown me the relevance of artificial life and artificial intelligence research; the key role of the concept of viability in ecology makes recent ecological studies (such as “Assessing Viability and Sustainability,” cited above) relevant to existential viability and therefore also to existential risk; formulations of ecological viability and sustainability, and the recognition that ecological systems are complex systems, demonstrate the relevance of complexity theory; and ecological relevance to existential concerns points to the possibility of applying what I have written earlier about metaphysical ecology and ecological temporality to existential risk and existential viability, which in turn demonstrates the relevance of Bronfenbrenner’s work to this intellectual milieu. I dare say that the idea of existential viability has itself a kind of viability and resilience due to its many connections to many distinct disciplines.

. . . . .

Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

9. Conceptualization of Existential Risk

10. Existential Risk and Existential Viability

. . . . .

Thursday


Stockholm 11

Today I had an interesting visit to the Swedish Royal Armory Museum, or Livrustkammaren, which preserves relics from Sweden’s apogee as a military power in Europe during the Thirty Years’ War. More than merely a military museum, the Livrustkammaren is an exercise in the advent of historical consciousness. It is the oldest museum in Sweden, and has its origins in the command of King Gustav II Adolph in 1628 to preserve his clothes from his campaign in Poland. The website of the museum says:

Here you will also find historic items such as the blood-stained shirts and buff jerkin which Gustavus Adolphus was wearing when he was killed in the battle at Lützen (Germany) in 1632. The costume worn by Gustavus III when he was assassinated at a masqued ball at the Royal Opera in 1792 is also on display, as is the uniform worn by Charles XII when he was killed in the trenches at Fredrikshald (Norway) in 1718.

The year the museum was established, 1628, was the same year that the Vasa warship sank on its maiden voyage. It is interesting to note that this ship, replete with its many symbols of imperial dynastic rule — including medallions of Roman emperors — was built (and, unfortunately for the crown, sunk) at the same time that Gustav II Adolf ordered the preservation of his blood-stained clothing from his military campaign in Poland. This was a monarch who was not only thinking of military triumphs and personal glory, but also obviously concerned with his place in history — a concern that extended to historical preservation and invoking the symbols of Roman imperial rule.

Stockholm 12

Textiles are, apparently, more easily preserved than ships, and so the first bequest that created the Swedish Royal Armory Museum is still on display. It took rather longer to refine the technique of preserving ships, but the attempted preservation of ships has an interesting history. This preservation history is an exercise in historical consciousness — and also, as it turns out, the source of a perennial paradox of Western philosophy. The Athenians attempted to preserve the ship of Theseus, and this attempted preservation in the interest of ancient Greek historical consciousness — did not the Greeks invent the genre of history? — resulted in the paradox that is now synonymous with the Ship of Theseus. Here is what Plutarch said of the attempted preservation of the Ship of Theseus:

“The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their place, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.”

After Sir Francis Drake’s circumnavigation, his ship, The Golden Hind, was put on display in Deptford and remained so for a hundred years until it rotted away — apparently the English were not as keen as the Greeks in their attempted curation. Now we have the example of the Vasa, which was not nearly so seaworthy as The Golden Hind, but which was first preserved in the icy waters of Stockholm harbor for more than 300 years, and is now preserved by the techniques of contemporary science and technology, and may be so preserved indefinitely, as long as the infrastructure of industrial-technological civilization shall endure to maintain the Vasa in existence in its present form. The Vasa’s technologically-enabled preservation (and even sempiternity) is another way in which scientific historiography contributes to growing historical consciousness, and makes the Vasa, which was not seaworthy, “history-worthy,” i.e., seaworthy on the ocean of history.

Stockholm 13

. . . . .

Monday


Seventh in a Series on Existential Risk:


Infosec as a Guide to Existential Risk


Many of the simplest and seemingly most obvious ideas that we invoke almost every day of our lives are the most inscrutably difficult to formulate in any kind of rigorous way. This is true of time, for example. Saint Augustine famously asked in his Confessions:

What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not: yet I say boldly that I know, that if nothing passed away, time past were not; and if nothing were coming, a time to come were not; and if nothing were, time present were not. (11.14.17)

quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio. fidenter tamen dico scire me quod, si nihil praeteriret, non esset praeteritum tempus, et si nihil adveniret, non esset futurum tempus, et si nihil esset, non esset praesens tempus.

Marx made a similar point in a slightly different way when he tried to define commodities at the beginning of Das Kapital:

“A commodity appears, at first sight, a very trivial thing, and easily understood. Its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.”

“Eine Ware scheint auf den ersten Blick ein selbstverständliches, triviales Ding. Ihre Analyse ergibt, daß sie ein sehr vertracktes Ding ist, voll metaphysischer Spitzfindigkeit und theologischer Mücken.”

Karl Marx, Capital: A Critique of Political Economy, Vol. I. “The Process of Capitalist Production,” Book I, Part I, Chapter I, Section 4., “The Fetishism of Commodities and the Secret Thereof”

Augustine on time and Marx on commodities are virtually interchangeable. Marx might have said, What then is a commodity? If no one asks me, I know: if I wish to explain it to one that asketh, I know not, while Augustine might have said, Time appears, at first sight, a very trivial thing, and easily understood. Its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.

As with time and commodities, so too with risk: What is risk? If no one asks me, I know, but if someone asks me to explain, I can’t. Risk appears, at first sight, a very trivial thing, and easily understood; its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.

In my writings to date on existential risk I have been developing existential risk in a theoretical context of what is called Knightian risk, because this conception of risk was given its initial exposition by Frank Knight. I quoted Knight’s book Risk, Uncertainty, and Profit at some length in several posts here in an effort to try to place existential risk within a context of Knightian risk. There are, however, alternative formulations of risk, and alternative formulations of risk point to alternative formulations of existential risk.

I happened to notice that a recent issue of Network World had a cover story on “Why don’t risk management programs work?” The article is an exchange between Jack Jones and Alexander Hutton, information security (infosec) specialists who are struggling with just the foundational issues about risk that I have noted above. Alexander Hutton sounds like he is quoting Augustine:

“…what is risk? What creates it and how is it measured? These things in and of themselves are evolving hypotheses.”

Both Hutton and Jones point to the weaknesses in the concept of risk that are due to insufficient care in formulations and theoretical models. Jones talks about the inconsistent use of terminology, and Hutton says the following about formal theoretical methods:

“Without strong data and formal methods that are widely identified as useful and successful, the Overconfidence Effect (a serious cognitive bias) is deep and strong. Combined with the stress of our thinning money and time resources, this Overconfidence Effect leads to a generally dismissive attitude toward formalism.”

Probably without knowing it, Jones and Hutton have echoed Kant, who in his little pamphlet On the Old Saw: ‘That May Be Right in Theory, But it Won’t Work in Practice’ argued that the proper response to an inadequate theory is not less theory but more theory. Here is a short quote from that work of Kant’s to give a flavor of his exposition:

“…theory may be incomplete, and can perhaps be perfected only by future experiments and experiences from which the newly qualified doctor, agriculturalist or economist can and ought to abstract new rules for himself to complete his theory. It is therefore not the fault of the theory if it is of little practical use in such cases. The fault is that there is not enough theory; the person concerned ought to have learnt from experience.”

In the above-quoted article Jack Jones develops the (Kantian) theme of insufficient theoretical foundations, as well as that of multiple approaches to risk that risk clouding our understanding of risk by assigning distinct meanings to one and the same term:

“Risk management programs don’t work because our profession doesn’t, in large part, understand risk. And without understanding the problem we’re trying to manage, we’re pretty much guaranteed to fail… Some practitioners seem to think risk equates to outcome uncertainty (positive or negative), while others believe it’s about the frequency and magnitude of loss. Two fundamentally different views.”

Jones goes on to add:

“…although I’ve heard the arguments for risk = uncertainty, I have yet to see a practical application of the theory to information security. Besides, whenever I’ve spoken with the stakeholders who sign my paychecks, what they care about is the second definition. They don’t see the point in the first definition because in their world the ‘upside’ part of the equation is called ‘opportunity’ and not ‘positive risk’.”

Are these two concepts of risk — uncertainty vs. frequency and magnitude of loss — really fundamentally distinct paradigms for risk? Reading a little further into the literature of risk management in information technology, I found that “frequency and magnitude of loss” is almost always prefaced by “probability of” or “likelihood of,” as in this definition of risk in Risk Management: The Open Group Guide, edited by Ian Dobson and Jim Hietala:

“Risk is the probable frequency and probable magnitude of future loss. With this as a starting point, the first two obvious components of risk are loss frequency and loss magnitude.” (section 5.2.1)

What does it mean to speak in terms of probable frequency or likely frequency? It means that the frequency and magnitude of a loss is uncertain, or known only within certain limits. In other words, uncertainty is a component of risk in the definition of risk in terms of frequency and magnitude of loss.
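The point can be made concrete with a small Monte Carlo sketch: since both loss frequency and loss magnitude are uncertain, annualized loss comes out as a distribution rather than a single number, which is “probable frequency and probable magnitude” in executable form. All numbers here are hypothetical, invented for illustration and drawn from no actual risk analysis.

```python
# Risk as probable frequency and probable magnitude of loss: both inputs are
# uncertain, so the annualized loss is a distribution, not a point estimate.
# All parameters below are hypothetical.

import random

def simulate_annual_loss(rng: random.Random) -> float:
    # Probable frequency: each of 100 independent exposure opportunities has
    # a 2% chance of becoming a loss event (expected ~2 events per year).
    events = sum(1 for _ in range(100) if rng.random() < 0.02)
    # Probable magnitude: per-event loss is skewed, modeled as log-normal.
    return sum(rng.lognormvariate(10.0, 1.0) for _ in range(events))

rng = random.Random(42)
losses = sorted(simulate_annual_loss(rng) for _ in range(10_000))
expected_loss = sum(losses) / len(losses)        # mean annualized loss
tail_loss = losses[int(0.95 * len(losses))]      # 95th-percentile "bad year"
```

On this model, accumulating knowledge narrows the input distributions, which is exactly the transformation of uncertainty into insurable risk described above.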

If you have some doubts about the formulation of probable frequency and magnitude of loss in terms of uncertainty, here is a definition of “risk” from Dictionary of Economics by Harold S. Sloan and Arnold J. Zurcher (New York: Barnes and Noble, 1961), dating from well before information security was a major concern:

Risk. The possibility of loss. The term is commonly used to describe the possibility of loss from some particular hazard, as fire risk, war risk, credit risk, etc. It also describes the possibility of loss by an investor who, in popular speech, is often referred to as a risk bearer.

Possibility is just another way of thinking about uncertainty, so one could just as well define risk as the uncertainty of loss. Indeed, in the book cited above, Risk Management: The Open Group Guide, there are several formulations in terms of uncertainty, as, for example:

“A study and analysis of risk is a difficult task. Such an analysis involves a discussion of potential states, and it commonly involves using information that contains some level of uncertainty. And so, therefore, an analyst cannot exactly know the risk in past, current, or future state with absolute certainty.” (2.2.1)

We see, then, that uncertainty is a constitutive element of formulations of risk in terms of frequency and magnitude of loss. But it is also easy to see, in the use of terms such as “frequency” and “magnitude,” which clearly imply quantitative measures, that we are dealing with uncertainties that can be measured and quantified (or, at least, ideally can be quantified), and this is nothing other than Knightian risk, though Knightian risk is usually formulated in terms of uncertainties against which we can be insured. Insuring a risk is made possible through its quantification; those uncertainties that lie beyond the reach of reasonably accurate quantitative prediction remain uncertainties and cannot be transformed into risks. I have suggested in my previous posts that it is the accumulation of knowledge that transforms uncertainties into risk, and I think you will find that this also holds good in infosec: as knowledge of information technologies improves, risk management will improve. Indeed, as much is implied in a couple of quotes from the infosec article cited above. Here is Jack Jones:

“We have the opportunity to break new ground — establish a new science, if you will. What could be more fun than that? There’s still so much to figure out!”

And here is Alexander Hutton making a similar point:

“…the key to success in security and risk for the foreseeable future is going to be data science.”

The development of data science would mean a systematic way of accumulating knowledge that would transform uncertainty into risk and thereby make uncertainties manageable. In other words, when we know more, we will know more about the frequency and magnitude of loss, and the more we know about it, the more we can insure against this loss.

The two conceptions of risk discussed above — risk as uncertainty and risk as probable frequency and magnitude of loss — are not mutually exclusive but rather complementary; uncertainty is employed (if implicitly) in formulations in terms of frequency and magnitude of loss, so that uncertainty is the more fundamental concept. In other words, Knightian risk and uncertainty are the theoretical foundations lacking in infosec formulations. At the same time, the elaboration of risk management in infosec formulations built upon implicit foundations of Knightian risk can be used to arrive at parallel formulations of existential risk.

Existential risk can be understood in terms of the probable frequency and probable magnitude of existential loss, with probable frequency decomposed into existential threat event frequency and existential vulnerability, and so on. Indeed, one of the great difficulties of existential risk consciousness raising stems from the fact that existential threat event frequency must be measured on a time scale that is almost inaccessible to human time consciousness. It is only with the advent of scientific historiography that we have become aware of how often we have dodged the bullet in the past — an observation that suggests that the great filter lies in the past (or perhaps in the present) and not in the future (or so we can hope). In other words, the systematic cultivation of knowledge transforms uncertainty into manageable risk. Thus we can immediately see the relevance of threat event frequency to existential risk mitigation.

Existential risk formulations can illuminate infosec formulations and vice versa. For example, in the book mentioned above, Risk Management: The Open Group Guide, we find this: “Unfortunately, Probable Loss Magnitude (PLM) is one of the toughest nuts to crack in analyzing risk.” Yet in existential risk formulations magnitude of loss has been a central concern, and is quantified by the scope parameter in Bostrom’s qualitative categories of risk.
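Bostrom’s qualitative categories classify risks by a scope parameter (how much is affected) crossed with an intensity parameter (how severely). As a rough sketch, paraphrasing the typology rather than reproducing the exact labels of the book’s table:

```python
# Sketch of Bostrom's qualitative risk categories: scope (personal, local,
# global) crossed with intensity (endurable, terminal). In this typology a
# risk that is global in scope and terminal in intensity is an existential
# risk. The encoding paraphrases the typology for illustration; it does not
# reproduce the exact table from Global Catastrophic Risks.

SCOPES = ("personal", "local", "global")
INTENSITIES = ("endurable", "terminal")

def classify(scope: str, intensity: str) -> str:
    if scope not in SCOPES or intensity not in INTENSITIES:
        raise ValueError(f"unknown category: {scope!r}, {intensity!r}")
    if scope == "global" and intensity == "terminal":
        return "existential risk"
    return f"{scope} {intensity} risk"

print(classify("global", "terminal"))  # existential risk
print(classify("local", "endurable"))  # local endurable risk
```

The scope parameter is thus the qualitative counterpart of the Probable Loss Magnitude that the infosec literature finds so difficult to pin down.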

Table of qualitative risk categories from the book Global Catastrophic Risks.


There is an additional sense in which infosec is relevant to existential risk, and this is the fact that, as industrial-technological civilization incrementally migrates onto virtual platforms, industrial-technological civilization will come progressively closer to being identical to its virtual representation. More and more, the map will be indistinguishable from the territory. This process has already begun in our time, though this beginning is only the thinnest part of the thin edge of the wedge.

We are, at present, far short of totality in the virtual representation of industrial-technological civilization, and perhaps further still from the indistinguishability of virtual and actual worlds. However, we are not at all far short of the indispensability of the virtual to the maintenance of actual industrial-technological civilization, so that the maintenance of the virtual infrastructure of industrial-technological civilization is close to being a conditio sine qua non of the viability of actual industrial-technological civilization. In this way, infosec plays a crucial role in existential risk mitigation.

As I described in The Most Prevalent Form of Degradation in Civilized Life, civilization is the vehicle and the instrument of earth-originating life and its correlates, so that civilizational risks such as flawed realization, permanent stagnation, and subsequent ruination must be accounted co-equal existential threats alongside extinction risks.

If the future of earth-originating life and its correlates is dependent upon industrial-technological civilization, and if industrial-technological civilization is dependent upon an indispensable virtual infrastructure, then the future of earth-originating life and its correlates is dependent upon the indispensable virtual infrastructure of industrial-technological civilization.

Q.E.D.

. . . . .

Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

. . . . .

Monday


In The Industrial-Technological Thesis I characterized industrial-technological civilization as involving an escalating cycle of science, technology, and engineering, each generation of which feeds into the next, so that science makes new technologies possible, new technologies are engineered into new industries, and new industries create the instruments for further scientific research. I further argued in Civilization, War, and Industrial Technology that the only property more pervasively inherent in industrial-technological civilization than escalating feedback is war — since escalating feedback is characteristic only of industrial-technological civilization, whereas war typifies all civilization. Thus technological growth and war are both structurally inherent in industrial-technological civilization, so much so that to entertain the idea of such a civilization without either is probably folly.

Now I realize that in recounting the escalating spiral of science, technology, and engineering, I was recounting only the “creative” side of the “creative destruction” of industrialized capitalism, and that the destructive side of creative destruction, as it is played out in industrial-technological civilization, is expressed in a way entirely consonant with the distinctive character of that civilization. Each phase in the cycle of science, technology, and engineering fails in a distinctive (and distinctively interesting) way.

The counter-cyclical trend to the exponentially escalating spiral of science, technology, and engineering is the downward spiral of science in model crisis, stalled technology, and catastrophic failures of engineering. Science falters when model drift gives way to model crisis and normal science begins to give way to revolutionary science. Human beings, being what they are, have invested science with the “truth” once reserved for matters theological; but science has no “truths”: there is only the scientific method, which remains the same even while the knowledge that this method yields is always subject to change. Technology falters when its exponential growth tapers off and it attains a mature plateau, after which time it changes little and becomes a stalled technology. Engineering falters when industries experience the inevitable industrial accidents intrinsic to the very fabric of industrialized society, or even experience the catastrophic failures to which complex systems are vulnerable.

Industrial accidents are intrinsic to industrialized society, and cannot be wished away.

I hadn’t previously thought of these disruptions to industrial-technological civilization together, but now that I see them whole I see that I have already written separately about all the phases of failure that so closely parallel the successes of industrialization. Mostly, I think, these disruptions have taken place separately, and have therefore proved to be only temporary disruptions in the rapidly-resuming cycle of technological growth. However, once we see the possible failures as a systemic, counter-cyclical trend that destroys old knowledge, old technology, and old industries in order to make room for the new, we can easily see the possibility of an escalating disruption in which scientific model crisis would limit knowledge, limited knowledge would lead to long-term stalled technologies, and stalled technologies would lead to escalating industrial accidents and complex catastrophic failures.

None of this, of course, is in the least bit surprising. Ever since the industrialized warfare of the twentieth century we have been discussing the possibility that industrial-technological civilization will more or less inevitably destroy itself. Civilization, suddenly and unexpectedly overtaken by industrialization, has opened Pandora’s box, and the evils that have flown free cannot be shut back inside.

. . . . .

History Degree Zero

22 October 2012

Monday


Waiting at the End of History

for the Coming of the Zero Hour


What does French literary criticism have to do with geopolitics, geostrategy, and far future scenarios of human civilization? Everything, as it turns out.

Roland Barthes wrote a book titled Writing Degree Zero; one could say that it is a work of literary criticism, but as with much sophisticated scholarship it is more than this. French literary criticism is not a scholarly undertaking for the faint of heart.

Barthes compares what he calls “writing degree zero” to the writing of a journalist; we can similarly compare history degree zero with the history found in journalism. In journalism, nothing ever happens, and at the same time something is always happening. It is the contemporary incarnation of the cyclical conception of history, in which nothing in essentials changes even while accidental change is the pervasive order of the day. (In Italy this is called “gattopardismo.”) This is history reduced to white noise.

Here is Barthes’ own formulation of writing degree zero:

“Proportionately speaking, writing at the degree zero is basically in the indicative mood, or if you like, amodal; it would be accurate to say that it is a journalist’s writing, if it were not precisely the case that journalism develops, in general, optative or imperative (that is, emotive) forms. The new neutral writing takes place in the midst of all those ejaculations and judgments, without becoming involved in any of them; it consists precisely in their absence. But this absence is complete, it implies no refuge, no secret; one cannot therefore say that it is an impassive mode of writing; rather, that it is innocent.”

Roland Barthes, Writing Degree Zero, translated by Annette Lavers and Colin Smith, New York: Hill and Wang, 1977 (originally published 1953), pp. 76-77

It has been said that Barthes’ book is parochial, and certainly his central concern is French literature, and the situation (or, if you prefer, the dilemma) of the French writer. Barthes was a man of his place and time, and the book sets itself questions that scarcely resonate in early twenty-first century America: How can writing be revolutionary? We’ve come a long way since 1968.

Barthes was clearly vexed that a lot of writing by professed communists was anything but revolutionary. It was, in fact — horror of horrors — bourgeois, and little better than shilling shockers, penny dreadfuls, and yellow journalism. Barthes, then, was asking how it was possible for someone with truly revolutionary ideas to write in a revolutionary manner.

One must recall that at this time there were two kinds of writers in France: communists who supported Stalin and made excuses for him, and communists who did not support Stalin and made no excuses for him. (If you have the chance, I urge you to see the wonderful film Red Kiss, which is a bit difficult to find, but worth the effort for its illustration of the period.) The most famous literary-intellectual-philosophical dispute of the time — that between Sartre and Camus — perfectly exemplified this. Camus, not one to make excuses for anyone, said he would be neither a victim nor an executioner. Sartre, after resisting the blandishments of communism for many years, eventually became the most unimaginative of communists, defended Stalin and Mao, and had his lackeys take Camus to task in print.

Barthes explicitly cites the style of Camus as embodying the qualities of writing of the zero degree, though I think that Barthes was so personally involved in the idea of literature that his identification of Camus as writing degree zero was not in any sense intended as a political slander — or, for that matter, as a literary slander. (I hope that more informed readers will correct me if I am wrong.)

Journalism, then, is historiography degree zero, and in so far as journalists produce (as they like to say) the first draft of history, and in so far as this first draft is subsequently iterated in later drafts of history, historiography more closely approximates the zero degree. (If you prefer reading sitreps to journalism — they’re pretty much the same thing — you can reformulate the preceding sentence.) And then again, in so far as mass journalism is consumed by a mass audience, and that mass audience goes on to create contemporary history, in a mass spectacle of life imitating art, history itself, and not merely the recounting of history in historiography, approaches the zero degree. The new neutral history — uninvolved, disengaged, absent — is the perfect characterization of the mass politics of mass man.

There are elections, there are debates, there is television news 24/7 and radio talk shows 24/7, there are still a few newspapers and magazines sacrificing dead trees, and there is of course the blogosphere resonating with the voices of the millions (like myself) who have no access to the media megaphone and who prefer the web to a soapbox. All of this feeds into the appearance that there is always something going on. But we know that almost nothing changes for all the sound and fury. It doesn’t really matter who wins the election, since the rich will still be rich and the poor will still be poor.

Have we already, then, reached history degree zero? Are we living at the end of history? Is this what the end of days looks like? Not quite. Not quite yet.

One of the most famous and familiar motifs of Marx’s thought is that history is driven by ideological conflict. It is a very Victorian, very Darwinian, very nineteenth century idea. History understood as an ideological conflict has characterized the modern period of Western history, even if it was not always obvious what people were fighting for. Sometimes it was obvious what men were fighting for, and this was especially true in the wake of revolutions: those who died to defend the American Revolution or the French Revolution or the Russian Revolution knew, to some extent at least, what they were fighting for.

For Marx, the locomotive of history was the class struggle, and it was the nature of class struggle to erupt into revolutionary action. Revolutions, as I noted above, had the property of clarifying what it’s all about. You’re on one side of the barricades or the other. Marx was right to focus on revolutions, but wrong to focus on the class struggle.

We can arrive at a more satisfactory understanding of modern history if we take social class out of Marx’s class struggle and make the class a variable for which we can substitute any political entity whatsoever. Thus we arrive at a formal conception of political struggle: a social class can struggle against a nation-state; a nation-state can struggle against a royal family; a royal family can struggle against a city-state, and so on, and so forth.

The convergence of the international system on the model of the nation-state system has given us the appearance that nation-states struggle with nation-states, and as life has imitated art — in this case, the art of political thought — we have steadily been reduced to the monoculture of a single kind of political entity — nation-states — engaged in a single kind of struggle. Francis Fukuyama called this political system “liberal democracy” and this condition “the end of history.” I guess one name is as good as any other; I would call it political homogenization.

In many posts I have discussed Francis Fukuyama’s “end of history” thesis (a thesis, I might add, heavily indebted to French scholarship, and especially to Alexandre Kojève’s reading of Hegel — note that Kojève was an acquaintance of Leo Strauss and his work was translated by Allan Bloom, noted literary critic and cranky academic who wrote The Closing of the American Mind). I have pointed out that, despite the many dismissive critiques of Fukuyama’s “end of history” thesis and claims of a “return of history,” Fukuyama himself still holds a modified version of the thesis: that contemporary liberal democratic society is the sole remaining viable form of political society (cf. Gödel’s Lesson for Geopolitics, in which I noted that Fukuyama is still thinking through his thesis twenty years on, as befits a philosopher).

As it turns out, there is a political level below that of the “end of history” and this is the absence of history — history degree zero.

A single remaining political ideology signifies History Degree One, and in the theater of political ideologies, liberal democracy is, for Fukuyama, the last man standing — but if this last man standing is a straw man, and we knock over this straw man, what then? If it can be shown that liberal democracy is a failure also, along with communism and fascism, nationalism and socialism, internationalism and fundamentalism, what comes next?

What then? Zero hour. History degree zero.

Even the end of history waits for further developments, and the future of the end of history is Zero Hour.

. . . . .

Monday


The Urnes Stave church — the sun came out briefly as we crossed the fjord from Solvorn to Urnes, though the rest of the day was overcast or raining.

Even if you know what to look for, it is quite difficult to pick out the Urnes stave church from across the fjord at Solvorn, where a small ferry departs each hour on the hour to take tourists and a few cars and bicycles across Sognefjord over to the Urnes side (also spelled “Ornes”). Once across, you walk up the hill to the top of the village, and there sits the Urnes stave church among trees and the cultivated hillsides, just as it has been sitting for more than 800 years. This is the second time I have been to Urnes, and I was unable to see the stave church from across the fjord; perhaps if I had had binoculars I would have seen it, but it melds into the landscape from which it came.

Looking back to Solvorn from the top of the hill at Urnes, standing next to this ancient wooden structure, little changed from when it was built — Urnes is thought to be the oldest of the surviving stave churches, with timbers dating from 1129-1130 (thanks to dendrochronology) — it is very easy to imagine the villagers of Solvorn getting into their wooden boats, rowing across the fjord, and walking up the hill to attend services in their ancient church. We often hear the phrase “time stands still” — at Urnes, you can stand still along with time for a few moments. Here, history has been paused.

In saying that history is paused at Urnes I am reminded of a passage from Rembrandt and Spinoza by Leo Balet, which I quoted previously in Capturing the Moment:

“In those of his portraits where the portrayed is not acting, but just resting, pausing, we get the feeling that the resting continues, that it is a resting with duration, a resting, thus, in time; in those pictures we are closer to life than in the portraits where just the breaking off of the action makes us so vividly aware that his whole action was make-believe.”

Leo Balet, Rembrandt and Spinoza, p. 184

Balet here frames his thesis in terms of portraiture, but the same might be said of a photograph or a sculpture — or even of a place that changes but little over the years. Urnes is such a place, and, in fact, there are many such places in Norway. Yesterday in A Wittgensteinian Pilgrimage I noted how Wittgenstein’s correspondents in Skjolden often closed their letters with “All is as before here” (“Her er det som før”). In Skjolden, too, time is paused.

Similarly, the busyness of the world appears to us as mere make-believe when seen from the perennial perspective of unchanging continuity in time. Our hurried and harassed lives seem mindless and perhaps a bit comical when compared to forms of life that endure — or, to put it otherwise, compared to modes of life that enjoy historical viability.

I have elsewhere defined historical viability as the ability of an existent to endure in existence by changing as the world changes; now I realize that the world changes in different ways at different times and places, so that historical viability is a local phenomenon that is subject to conditions closely similar to natural selection — existents are selected for historical viability not by being “better” or “higher” or “superior” or “perfect,” but by being the most suited to their environment. In the present context, “environment” should be understood as the temporal or historical environment of a historical existent — with this in mind, a more subtle form of the principle of historical viability begins to emerge.

. . . . .

Solvorn, across the fjord from Urnes.

. . . . .

Tuesday


Mario Monti said of the Euro that, “the will to make it indissoluble and irrevocable is there.” Today, perhaps yes, but what will the will be tomorrow?

Each time the Eurozone puts together another bailout package the markets follow with a brief (sometimes very brief) rally, which collapses pretty much as soon as reality reasserts itself and it becomes obvious that most of the measures constitute creative ways of kicking the can down the road, while those more ambitious measures that are more than kicking the can down the road are probably overly ambitious and not likely to be practical policies in the midst of a financial crisis.

Simply from a practical point of view, it is difficult to imagine how anyone can believe that a more comprehensive fiscal and political union, however well intentioned as a means of saving the Eurozone, can be brought about in the midst of the crisis, since the original (and much more limited) Eurozone was negotiated, planned, and implemented over a period of many years, not over a period of a few days as inter-bank loan rates are climbing by the hour. Apart from this practical problem, there are several issues of principle at stake in the Eurozone crisis and the attempts to rescue the European Monetary Union.

Mario Monti was quoted in a Reuters article, Monti says EU hinges on summit talks outcome: report, defending the strengthening of financial and political ties within the Eurozone as a way to save the Euro:

“Europeans know where they’re going… the markets are convinced that having given birth to the euro, the will to make it indissoluble and irrevocable is there and will be strengthened by other steps towards integration.”

Can the Euro be made “indissoluble and irrevocable”? Can anything be made indissoluble and irrevocable? I think not, and this is a matter of principle to which I attach great importance.

I have several times quoted Edward Gibbon on the impossibility of present legislators binding the acts of future legislators:

“In earthly affairs, it is not easy to conceive how an assembly equal of legislators can bind their successors invested with powers equal to their own.”

Edward Gibbon, History of the Decline and Fall of the Roman Empire, Vol. VI, Chapter LXVI, “Union of the Greek and Latin Churches, Part III”

Since I have quoted this several times (in The Imperative of Regime Survival, The Institution of Language, and The Chilean Model, e.g.), implicitly maintaining that it states an important principle, I am now going to give this principle a name: Gibbon’s Principle of Inalienable Autonomy for Political Entities, or, more briefly, Gibbon’s Principle.

As I have tried to make explicit, Gibbon’s Principle holds for political entities, but I have also quoted a passage from Sartre that presents essentially the same idea for individuals rather than for political entities:

“I cannot count upon men whom I do not know, I cannot base my confidence upon human goodness or upon man’s interest in the good of society, seeing that man is free and that there is no human nature which I can take as foundational. I do not know where the Russian revolution will lead. I can admire it and take it as an example in so far as it is evident, today, that the proletariat plays a part in Russia which it has attained in no other nation. But I cannot affirm that this will necessarily lead to the triumph of the proletariat: I must confine myself to what I can see. Nor can I be sure that comrades-in-arms will take up my work after my death and carry it to the maximum perfection, seeing that those men are free agents and will freely decide, tomorrow, what man is then to be. Tomorrow, after my death, some men may decide to establish Fascism, and the others may be so cowardly or so slack as to let them do so. If so, Fascism will then be the truth of man, and so much the worse for us. In reality, things will be such as men have decided they shall be. Does that mean that I should abandon myself to quietism? No. First I ought to commit myself and then act my commitment, according to the time-honoured formula that “one need not hope in order to undertake one’s work.” Nor does this mean that I should not belong to a party, but only that I should be without illusion and that I should do what I can. For instance, if I ask myself ‘Will the social ideal as such, ever become a reality?’ I cannot tell, I only know that whatever may be in my power to make it so, I shall do; beyond that, I can count upon nothing.”

Jean-Paul Sartre, “Existentialism is a Humanism” (lecture from 1946, translated by Philip Mairet)

This I will now also name with a principle: Sartre’s Principle of Inalienable Autonomy for Individuals, or, more briefly, Sartre’s Principle.

If that weren’t already enough principles for today, I am going to formulate another principle, and although this one is my own I’m not going to name it after myself, after the fashion of the names I’ve given to Gibbon’s Principle and Sartre’s Principle. This additional principle is The Principle of the Political Primacy of the Individual (admittedly awkward; I will try to think of a better name for it): political autonomy is predicated upon individual autonomy. In other words, Gibbon’s Principle carries the force that it does because of Sartre’s Principle, and this makes Sartre’s Principle the more fundamental.

At present I am not going to argue for The Principle of the Political Primacy of the Individual; I will simply assume that Gibbon’s Principle supervenes upon Sartre’s Principle. I want to make clear, however, that I understand that there are those who would reject this principle, and that there are arguments on both sides of the question. There is no established literature on this principle so far as I know, as I am not aware that anyone has previously formulated it in explicit form, but I can easily imagine arguments taken from classic sources that bear on both sides of the principle (i.e., its affirmation or its denial).

Because, as Sartre said, “men are free agents and will freely decide,” the Euro cannot be made “indissoluble and irrevocable,” and the attempt to make it seem so is pure folly. For in order to maintain this appearance, we must be dishonest with ourselves; we must make claims and assertions that we know to be false. This cannot be a robust foundation for any political effort. If, tomorrow, a deeper economic and political union of the Eurozone becomes the truth of Europe, this does not mean that the day after tomorrow it will remain the truth of Europe.

And this brings us to yet another principle, and this principle is a negative formulation of a principle that I have formulated in the past, the principle of historical viability. According to the principle of historical viability, an existent must change as the world changes or it will be eliminated from history. This means that entities that remain in existence must be so malleable that they can change in their essence, for if they fail to change, they experience adverse selection.

A negative formulation of the principle of historical viability might be called the principle of historical calamity: any existent so constituted that it cannot change is doomed to extinction, and sooner rather than later. In other words, any effort that is made to make the Euro “indissoluble and irrevocable” not only will fail to make the Euro indissoluble and irrevocable, but will in fact make the Euro all the more vulnerable to historical forces that would destroy it.

When I previously discussed Gibbon’s Principle and Sartre’s Principle (before I had named these principles as such) in The Imperative of Regime Survival, I cited an effort in Cuba to incorporate Castro’s vision of Cuba’s socio-economic system into the constitution as a permanent feature of the government of Cuba that would presumably hold until the end of time. This would be laughable were it not the source of so much human suffering and misery.

Well, the Europeans aren’t imposing any misery on themselves on the level of that which has been imposed upon the Cuban people by their elites, but the folly in each class of elites is essentially the same: the belief that those in power today, at the present moment, are in a privileged position to dictate the only correct institutional model for all time and eternity. In other words, the End of History has arrived.

Why not make the Euro an open, flexible, and malleable institution that can respond to political, social, economic, and demographic changes? Sir Karl Popper famously wrote about The Open Society and its Enemies — ought not an open society to have open institutions? And would not open institutions be those that are formulated with an eye toward the continuous evolution in the light of further and future experience?

To deny Gibbon’s Principle and Sartre’s Principle is to count oneself among the enemies of open societies and open institutions.

. . . . .

Decadent Technologies

4 June 2012

Monday


In several previous posts I have discussed how novel technologies will often display a sigmoid growth curve, starting with a gradual development, suddenly experiencing an exponential increase in complexity, sophistication, and efficacy, followed by a long plateau of little or no development after that technology has achieved maturity. The posts in which I described this development include:

The Law of Stalled Technologies

More on Stalled Technologies

Blindsided by History

Technological Succession
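The sigmoid growth curve described above can be sketched with a standard logistic function. This is an illustrative model of my own, not anything drawn from the posts listed; the ceiling, rate, and midpoint parameters are hypothetical placeholders for a technology's maturity profile:

```python
import math

def sigmoid_growth(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    """Logistic curve: gradual development at first, then an
    exponential increase around `midpoint`, then a long plateau
    as the technology matures toward `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

early = sigmoid_growth(2)     # gradual development, still near zero
takeoff = sigmoid_growth(10)  # exponential phase, half the ceiling
mature = sigmoid_growth(20)   # stalled plateau, nearly at the ceiling
```

The three sampled points trace the three stages named in the paragraph above: a slow start, a sudden exponential rise, and a plateau of little or no further development.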

In Blindsided by History I wrote:

“Present technologies will stall, and they will eventually be superseded by unpredicted and unpredictable technologies that will emerge to surpass them. Those who remain fixated on existing technologies will be blindsided by the new technologies, and indeed may simply fail to recognize new technologies for what they are when they do in fact appear.”

The phenomenon of one technology superseding another results in Technological Succession. In my post on technological succession I wrote the following:

The overtaking of a stalled technology that remains at a given plateau by another technology that fulfills a similar need (although by way of a distinct method) is an extension of a society with stable institutions that was able to bring to fruition a mature technology. With a mature technology in place, and stable economic and social institutions built upon this technology, there emerges an incentive to continue or to expand these institutions to a greater extent, at a cheaper cost, more efficiently, more effectively, and with less effort. This attempt to do previous technology one better is, in turn, a spur to social changes that will call forth further innovations. It could be argued that the Industrial Revolution emerged from just such an escalation of social and technology coevolution.

Technological succession, then, develops in parallel with the social succession of institutions capable of fostering further technological development by different means once a given technology stalls. In this post I made a distinction between mature technologies (another name for stalled technologies), which are technologies that have passed through their exponential growth phase and have plateaued at a stable level, and perennial technologies, which are technologies that do not experience exponential growth curves in their development — things like knives that have always been a part of the human “toolkit” and always will be. This distinction between mature and perennial technologies I then developed according to a biological analogy:

By analogy with microevolution (evolution within a species) and macroevolution (evolution from one species into another) in biology, we can see the microevolution and macroevolution of technologies. Perennial technologies exhibit microevolution. No new technological “species” emerge from the incremental changes in perennial technologies. Technological macroevolution is the succession of a stalled technology by a new, immature technology, which latter still possesses the possibility of development. Mature technologies experience adaptive radiation under coevolutionary pressures, and this macroevolution can result in new technological species.

The coevolutionary pressures are those social institutions that make demands upon a technology to continue its development in the face of advancing social developments, which latter might include expanding populations, higher standards of living, raised expectations and soaring ambitions.

Even if another technology does not come along to further extend the social functions served by the mature and now stalled technology, the incentive to continue to go one better with technology remains, and this incentive drives the attempt to squeeze more performance out of mature technologies that would, if surpassed in the process of technological succession, remain stalled at a stable plateau of development. The result of pushing for more performance from a stalled technology is what I will call decadent technology (though I could just as well call this baroque technology).

The obvious examples that come to mind of decadent technologies are either of a humorous or theatrical character (or both). Steampunk and tubepunk are obvious examples of the intentional elaboration of a decadent technology for aesthetic and theatrical effect. As genres of art and literature, steampunk and tubepunk aren’t seeking to supply the wants of mass society (except for aesthetic wants, which respond to a different class of coevolutionary pressures).

Another example of decadent technology is that of race car engines. If you want to go really fast, it would make more sense to strap a jet engine onto a set of wheels (which would look like a steampunk contraption), but racing mostly means specialized internal combustion engines — engines pushed about as far as the technology of the internal combustion engine can be pushed. It is obvious, from the thousands of photographs in car magazines, that the builders of racing engines take an aesthetic pleasure in their creations. However, these engines are not merely aesthetic exercises like steampunk, because by pushing the technology of the internal combustion engine to its limits, much more horsepower can be obtained. Thus a decadent technology can be effective, though it quickly reaches a level of diminishing returns, at which further investment yields progressively less of a return. That is why these engines are not models of efficiency that the mass producers of automobiles look to for technological developments (though this is often used as an excuse for car manufacturers to sponsor drag racing); rather, they are expressions of mechanical ambition. As I wrote above, if you want to go really fast, you can build a jet; the challenge is to build an internal combustion engine with the power of a jet, and this is a challenge that both the builders of racing engines and race spectators enjoy.

Most examples of decadent technology are not as theatrical and not as much fun as steampunk and race cars, but the principles are essentially the same. Microchip technology, following the social coevolutionary pressure of fulfilling the prophecy of Moore’s Law, is close to becoming a decadent technology. If some technology for computing fundamentally different from silicon wafer fabrication does not emerge soon (like quantum computing, which still seems to be some way off), the producers of microchips will come under considerable economic pressure to drive silicon technology beyond its natural (i.e., physical) limits and so transform it into a decadent technology.
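The diminishing returns characteristic of a decadent technology can be given a toy numerical form. This is my own hedged sketch, with a hypothetical saturation constant, not a claim about any actual engine or chip process:

```python
import math

def performance(investment, limit=100.0, k=0.05):
    """Exponential saturation toward a hard physical `limit`:
    each additional unit of investment buys less performance
    than the unit before it."""
    return limit * (1.0 - math.exp(-k * investment))

first_20 = performance(20)                   # gain from the first 20 units
next_20 = performance(40) - performance(20)  # smaller gain from the next 20
```

Under these assumptions the second tranche of investment buys well under half the performance of the first, which is what it means for a technology to be pushed past its plateau: effort keeps rising while returns keep shrinking.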

. . . . .
