The Finality Fallacy

11 January 2014

Saturday


One of my pet peeves is when a matter is treated as though settled and final when there is in fact no finality at all in a given formulation or in the circumstances that the formulation seeks to capture. I am going to call this attribution of finality to matters remaining unsettled the “finality fallacy” (it would be more accurate to call this the “false finality fallacy,” but this is too long, and alliterative to boot). This is an informal rather than a formal fallacy, so an individual might be impeccable in their logic while still committing the finality fallacy. Another way to understand informal fallacies is that they concern the premisses of reasoning rather than the reasoning itself (another term for this is material fallacy), and it is one of the premisses of any finality fallacy that a given matter is closed to discussion and nothing more need be said.

Like many fallacies, the finality fallacy is related to recognized fallacies, although it is difficult to classify exactly. One could compare the finality fallacy to ignoratio elenchi or begging the question or the moralistic fallacy — all of these are present in some degree or other in the finality fallacy in its various manifestations.

Preparing for a cosmic journey — but one trip settles nothing.

Allow me to begin with an example from popular culture. Although it has been several years since I have seen the film Contact (written by Carl Sagan and loosely based on the life of Jill Tarter, famous for her work in SETI), I can remember how irritated I was by the ending, which treated the celestial journey made by the main character as a unique, one-off effort, despite the fact that an enormous apparatus was built to make the journey possible. If I had made the film I would have finished with a line of people queued up to use the machine next, to make it clear that nothing is finished by the fact of a disputed first journey.

It is routine for films to end with a false sense of finality, as filmmakers assume that the audience requires resolution, or “closure.” We hear a lot about closure, but it is rare to see a clear definition of what exactly constitutes closure. Perhaps it is the desire for closure, generally speaking, that is the primary motivation for the finality fallacy. When the psychological need for closure leaks over into an intellectual need for closure, then we find rationalizations of a false finality; perhaps it would be better to call the finality fallacy a cognitive bias rather than a fallacy, or this might be the point at which material fallacy overlaps with cognitive bias.

The cultivation of a false finality is also prevalent among contemporary Marxists, especially those who focus on Marx’s economic doctrines rather than his wider social and philosophical critique. Marx’s economics was already antiquated by the time he published Das Kapital, but because of Marx’s influence, and because of the ongoing revolutionary tradition that rightly claims Marx as a founding father, Marx’s dictates on the labor theory of value are taken as final by Marxists, who must now, more than 150 years later, pretend as though no advances had been made in economics in the meantime. Strangely, this attitude is also taken for granted among ideological foes of Darwin, who, again more than 150 years later, continue to raise the same objections as though nothing had happened in biology since before 1859. This carefully studied ignorance takes a particular development in intellectual history and treats it as final, as the last word, as definitive, as Gospel.

Wherever there is a dogma, there is a finality fallacy waiting to be committed when the adherents of the dogma in question treat that dogma as final and must thereafter perpetuate the ruse that nothing essential changes after the dogma establishes the last word. In Islam, we have the notorious historical development of ‘closing the gate of ijtihad’ — ijtihad being individual rational inquiry. This is now a contested idea — i.e., whether there ever was a closing of the gate of ijtihad — as there seems to have been no official proclamation or definitive text, but there can be no question that the idea of closing the gate of ijtihad played an important role in Islamic civilization in discouraging inquiry and independent thought (what we would today call a “chilling” effect, much like the condemnations of 1277 in Western history).

Bertrand Russell evinced a certain irritation when his eponymous paradox was dismissed or treated as final before any satisfactory treatment had been formulated.

Bertrand Russell evinced an obvious irritation and impatience with the response to his paradox, which reveals an attitude not unlike my impatience with false finality:

“Poincaré, who disliked mathematical logic and had accused it of being sterile, exclaimed with glee, ‘it is no longer sterile, it begets contradiction’. This was all very well, but it did nothing towards the solution of the problem. Some other mathematicians, who disapproved of Georg Cantor, adopted the March Hare’s solution: ‘I’m tired of this. Let’s change the subject.’ This, also, appeared to me inadequate.”

Bertrand Russell, My Philosophical Development, Chapter VII, “Principia Mathematica: Philosophical Aspects”

Russell knew that nothing was settled by dismissing mathematical logic, as Poincaré did, or by simply changing the subject, as others were content to do. But some were satisfied with these evasions; Russell would have none of it, and persisted until he satisfied himself with a solution (which was his theory of types). Most mathematicians rejected Russell’s solution, and it was Zermelo’s axiomatization of set theory that ultimately became the consensus choice among mathematicians for employing set theory without running into the contradiction discovered by Russell.

Now I will turn to the contemporary example that prompted this post — as I said above, the finality fallacy is a pet peeve, but this particular instance was the trigger for this particular post — in the work of contemporary philosopher John Gray.

John Gray is not a philosopher who writes intentionally inflammatory pieces in order to grab headlines. He regularly has short essays on the BBC (I have commented on his A Point Of View: Leaving Gormenghast; his A Point of View: Two cheers for human rights also appeared there). He has written a sober book on Mill’s On Liberty, Mill on Liberty: A Defence, and a study of the thought of Isaiah Berlin (Isaiah Berlin) — in no sense radical topics for a contemporary philosopher.

Among Gray’s many books is The Immortalization Commission: Science and the Strange Quest to Cheat Death, in which we find the following:

“Echoing the Russian rocket scientist Konstantin Tsiolkovsky, there are some who think humans should escape the planet they have gutted by migrating into outer space. Happily, there is no prospect of the human animal extending its destructive career in this way. The cost of sending a single human being to another planet is prohibitive, and planets in the solar system are more inhospitable than the desolated Earth from which humans would be escaping.”

John Gray, The Immortalization Commission: Science and the Strange Quest to Cheat Death, New York: Farrar, Straus and Giroux, 2011, p. 212

There is a lot going on in this brief passage, and I would like to try to gloss some of this implicit content. Gray is here counting on his reader nodding along with him, since human beings have indeed had a destructive career on Earth, and I can easily imagine someone agreeing to this also agreeing to the undesirability of this destructive career being extended beyond the Earth. Gray also throws in a sense of gross irresponsibility by speaking of human beings having “gutted” the planet, and presumably moving on to “gut” the next one, with the clear implication that this would be worse than arresting the destructive career of human beings on their homeworld. Then Gray moves on to the expense of space travel at the present moment and the inhospitableness of other planets in our solar system. He treats as though final the present expense of space travel and the need to live on the surface of a planet, but more importantly he does so in a moral context that is intended to give the impression that any attempt to go beyond the Earth is unspeakable folly and morally disastrous.

This may sound like a stretch, but I am reminded of a passage from Eunapius (b. 347 A.D.), in which he described the kind of atmosphere that made the condemnation of Socrates possible in Athens:

“…no one of all the Athenians, even though they were a democracy, would have ventured on that accusation and indictment of one whom all the Athenians regarded as a walking image of wisdom, had it not been that in the drunkenness, insanity, and license of the Dionysia and the night festival, when light laughter and careless and dangerous emotions are discovered among men, Aristophanes first introduced ridicule into their corrupted minds, and by setting dances upon the stage won over the audience to his views…”

Philostratus and Eunapius, Lives of the Sophists, Cambridge and London: Harvard, 1921, p. 381

Though a sober philosopher in his own right, Gray here trades upon the light laughter and careless and dangerous emotions when he engages in the ridicule of a human future beyond the Earth, which he implies is not only unlikely (if not impossible) but also morally wrong. But to demonstrate his intellectual sobriety he next turns serious and has this to say on the next page:

“The pursuit of immortality through science is only incidentally a project aiming to defeat death. At bottom it is an attempt to escape contingency and mystery. Contingency means humans will always be subject to fate and chance, mystery that they will always be surrounded by the unknowable. For many this state of affairs is intolerable, even unthinkable. Using advancing knowledge, they insist, the human animal can transcend the human condition.”

John Gray, The Immortalization Commission: Science and the Strange Quest to Cheat Death, New York: Farrar, Straus and Giroux, 2011, p. 213

Gray’s certainty and confidence of expression here mask the sheer absurdity of his claims; the expansion of a scientific civilization will by no means abolish our relationship to contingency and mystery. On the contrary, it is a scientific understanding of the world that reminds us of contingency on scales that far exceed the human. The universe itself is a contingency, and all that it holds is contingency; science reminds us of this at every turn, and for the same reason, no matter how far human civilization travels beyond Earth, scientific mystery will be there to remind us of all that we still do not know.

But this is not my topic today (though it makes me angry to read it, and that is why I have quoted it). It is the previously quoted passage from Gray that truly bothers me because of the pose of finality in his pithy remarks about the human future beyond Earth. Gray is utterly dismissive of such prospects, and it is ultimately the dismissiveness that is the problem, not the view he holds.

I don’t mean to single out John Gray as especially guilty in this respect, though as a philosopher he is more guilty than others because he ought to know better, just as Russell knew better when it came to his paradox. In fact, there is some similarity here, because both mathematicians and philosophers were dismissive either of Russell’s paradox or of the formal methods that led to the paradox. We should not be dismissive. We need to confront these problems on their merits, and not turn them into a joke or an excuse to condemn human folly. We recall that a great many dismissed Cantor’s work as folly — after all, how can human beings know the infinite? — and Russell’s extension of Cantor’s work drew similar judgments. This, again, is closely connected to what we are talking about here, because the idea that human beings, finite as they are, can never know anything of the infinite (i.e., human beings cannot escape their legacy of intellectual finitude) is closely related to the idea that human beings can never escape their biological legacy of finitude, which is the topic of Gray’s book.

Hermann Weyl said that we live in an open world.

Anyone who views the world as an ongoing process of natural history, as I do, must see it as an open world. That the world is open, that it is neither closed nor final, neither finished nor complete, means that unprecedented events occur and that there always remains the possibility of evolution, by which we transcend a previous form of being and attain to a new form of being. The world’s openness is an idea that Hermann Weyl took as the title of three lectures from 1932, which end on this note:

“We reject the thesis of the categorical finiteness of man, both in the atheistic form of obdurate finiteness which is so alluringly represented today in Germany by the Freiburg philosopher Heidegger, and in the theistic, specifically Lutheran-Protestant form, where it serves as a background for the violent drama of contrition, revelation, and grace. On the contrary, mind is freedom within the limitations of existence; it is open toward the infinite. Indeed, God as the completed infinite cannot and will not be comprehended by it; neither can God penetrate into man by revelation, nor man penetrate to him by mystical perception. The completed infinite we can only represent in symbols. From this relationship every creative act of man receives its deep consecration and dignity. But only in mathematics and physics, as far as I can see, has symbolical-theoretical construction acquired sufficient solidity to be convincing for everyone whose mind is open to these sciences.”

Hermann Weyl, Mind and Nature: Selected Writings on Philosophy, Mathematics, and Physics, edited by Peter Pesic, Princeton University Press, 2009, Chapter 4, “The Open World: Three Lectures on the Metaphysical Implications of Science,” 1932

Weyl gave his own peculiar theological and constructivist spin to the conception of an open world — Weyl, in fact, represents one of those mathematicians “who disapproved of Georg Cantor,” of whom Bertrand Russell wrote in the passage quoted above — but in the main I am in agreement with Weyl, and Weyl and Russell could have agreed on the openness of the world. To commit the finality fallacy is to presume some aspect of the world closed, and if the world is indeed open, it is a fallacy to represent it as being closed.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Sunday


The Life of Civilization

Regions in viability space. Living, dead, viable, precarious and terminal regions of the viability space. The dead region or state lies at [A] = 0, above which the living region appears. Inside the living region three different sub-regions are distinguished: the viable region (light grey) where the system will remain alive if environmental conditions don’t change, the precarious region (medium grey) where the system is still alive but tends towards death unless environmental conditions change and the terminal region (dark grey) where the system will irreversibly fall into the dead region. See text body for detailed explanation. (Xabier E. Barandiaran and Matthew D. Egbert)

Tenth in a Series on Existential Risk


What makes a civilization viable? What makes a species viable? What makes an individual viable? To put the question in its most general form, what makes a given existent viable?

These are the questions that we must ask in the pursuit of the mitigation of existential risk. The most general question — what makes an existent viable? — is the most abstract and theoretical question, and as soon as I posed this question to myself in these terms, I realized that I had attempted to answer this earlier, prior to the present series on existential risk.

In January 2009 I wrote, generalizing from a particular existential crisis in our political system:

“If we fail to do what is necessary to perpetuate the human species and thus precipitate the end of the world indirectly by failing to do what was necessary to prevent the event, and if some alien species should examine the remains of our ill-fated species and their archaeologists reconstruct our history, they will no doubt focus on the problem of when we turned the corner from viability to non-viability. That is to say, they would want to try to understand the moment, and hence possibly also the nature, of the suicide of our species. Perhaps we have already turned that corner and do not recognize the fact; indeed, it is likely impossible that we could recognize the fact from within our history that might be obvious to an observer outside our history.”

This poses the viability of civilization in stark terms, and I can now see in retrospect that I was feeling my way toward a conception of existential risk and its moral imperatives before I was fully conscious of doing so.

From the beginning of this blog I started writing about civilizations — why they rise, why they fall, and why some remain viable for longer than others. My first attempt to formulate the above stark dilemma facing civilization in the form of a principle, in Today’s Thought on Civilization, was as follows:

a civilization fails when it fails to change when the world changes

This formulation in terms of the failure of civilization immediately suggests a formulation in terms of the success (or viability) of a civilization, which I did not formulate at that time:

A civilization is viable when it successfully changes when the world changes.

I also stated in the same post cited above that the evolution of civilization has scarcely begun, which continues to be my point of view and informs my ongoing efforts to formulate a theory of civilization on the basis of humanity’s relatively short experience of civilized life.

In any case, in the initial formulation given above I have, like Toynbee, taken the civilization as the basic unit of historical study. I continued in this vein, writing a series of posts about civilization, The Phenomenon of Civilization, The Phenomenon of Civilization Revisited, Revisiting Civilization Revisited, Historical Continuity and Discontinuity, Two Conceptions of Civilization, A Note on Quantitative Civilization, inter alia.

I moved beyond civilization-specific formulations of what I would come to call the principle of historical viability in a later post:

…the general principle enunciated above has clear implications for historical entities less comprehensive than civilizations. We can both achieve a greater generality for the principle, as well as to make it applicable to particular circumstances, by turning it into the following schema: “an x fails when it fails to change when the world changes” where the schematic letter “x” is a variable for which we can substitute different historical entities ceteris paribus (as the philosophers say). So we can say, “A city fails when it fails to change…” or “A union fails when it fails to change…” or (more to the point at present), “A political party fails when it fails to change when the world changes.”

And in Challenge and Response I elaborated on this further development of what it means to be historically viable:

…my above enunciated principle ought to be amended to read, “An x fails when it fails to change as the world changes” (instead of “…when the world changes”). In other words, the kind of change an historical entity must undergo in order to remain historically viable must be in consonance with the change occurring in the world. This is, obviously, or rather would be, a very difficult matter to nail down in quantitative terms. My schema remains highly abstract and general, and thus glides over any number of difficulties vis-à-vis the real world. But the point here is that it is not so much a matter of merely changing in parallel with the changing world, but changing how the world changes, changing in the way that the world changes.

It was also in this post that I first called this the principle of historical viability.

I now realize that what I then called historical viability might better be called existential viability — at least, by reformulating my principle of historical viability again and calling it the principle of existential viability, I can assimilate these ideas to my recent formulations of existential risk. Seeing historical viability through the lens of existential risk and existential viability allows us to formulate the following relationship between the latter two:

Existential viability is the condition that follows from the successful mitigation of existential risk.

Thus the achievement of existential risk mitigation is existential viability. So when we ask, “What makes an existent viable?” we can answer, “The successful mitigation of risks to that existent.” This gives us a formal framework for understanding existential viability as a successful mitigation of existential risk, but it tells us nothing about the material conditions that contribute to existential viability. Determining the material conditions of existential viability will be a matter both of empirical study and of the formulation of a theoretical infrastructure adequate to the conditions that bear upon civilization. Neither of these exists yet, but we can make some rough observations about the material conditions of existential viability.

Different qualities in different places at different times have contributed to the viability of existents. This is one of the great lessons of natural selection: evolution is not about a ladder of progress, but about what organism is best adapted to the particular conditions of a particular area at a particular time. When the “organism” in question is civilization, the lesson of natural selection remains valid: civilizations do not describe a ladder of progress, but those civilizations that have survived have been those best adapted to the particular conditions of a particular region at a particular time. Existential risk mitigation is about making civilization part of evolution, i.e., part of the long term history of the universe.

To acknowledge the position of civilization in the long term history of the universe is to recognize that a change has come about in civilization as we know it, and this change is primarily the consequence of the advent of industrial-technological civilization: civilization is now global, populations across the planet, once isolated by geographical barriers, now communicate instantaneously and trade and travel nearly instantaneously. A global civilization means that civilization is no longer selected on the basis of local conditions at a particular place at a particular time — which was true of past civilizations. Civilization is now selected globally, and this means placing the earth that is the bearer of global civilization in a cosmological context of selection.

What selects a planet for the long term viability of the civilization that it bears? This is essentially a question of astrobiology, a point I attempted to make in my recent presentation at the Icarus Interstellar Starship Congress and in my post on Paul Gilster’s Centauri Dreams, Existential Risk and Far Future Civilization.

An astrobiological context suggests what we might call an astroecological context, and I have many times pointed out the relevance of ecology to questions of civilization. Pursuing the idea of existential viability may offer a new perspective for the application of methods developed for the study of the complex systems of ecology to the complex systems of civilization. And civilizations are complex systems if they are anything.

There is a growing branch of mathematical ecology called viability theory, with obvious application to the viability of the complex systems of civilization. We can immediately see this applicability and relevance in the following passage:

“Agent-based complex systems such as economics, ecosystems, or societies, consist of autonomous agents such as organisms, humans, companies, or institutions that pursue their own objectives and interact with each other and their environment (Grimm et al. 2005). Fundamental questions about such systems address their stability properties: How long will these systems exist? How much do their characteristic features vary over time? Are they sensitive to disturbances? If so, will they recover to their original state, and if so, why, from what set of states, and how fast?”

Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society (Understanding Complex Systems), edited by Guillaume Deffuant and Nigel Gilbert, p. 3

Civilization itself is an agent-based complex system like “economics, ecosystems, or societies.” Another innovative approach to complex systems and their viability is to be found in the work of Hartmut Bossel. Here is an extract from the Abstract of his paper “Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets”:

Performance assessment in holistic approaches such as integrated natural resource management has to deal with a complex set of interacting and self-organizing natural and human systems and agents, all pursuing their own “interests” while also contributing to the development of the total system. Performance indicators must therefore reflect the viability of essential component systems as well as their contributions to the viability and performance of other component systems and the total system under study. A systems-based derivation of a comprehensive set of performance indicators first requires the identification of essential component systems, their mutual (often hierarchical or reciprocal) relationships, and their contributions to the performance of other component systems and the total system. The second step consists of identifying the indicators that represent the viability states of the component systems and the contributions of these component systems to the performance of the total system. The search for performance indicators is guided by the realization that essential interests (orientations or orientors) of systems and actors are shaped by both their characteristic functions and the fundamental and general properties of their system environments (e.g., normal environmental state, scarcity of resources, variety, variability, change, other coexisting systems). To be viable, a system must devote an essential minimum amount of attention to satisfying the “basic orientors” that respond to the properties of its environment. This fact can be used to define comprehensive and system-specific sets of performance indicators that reflect all important concerns.

…and in more detail from the text of his paper…

Obtaining a conceptual understanding of the total system. We cannot hope to find indicators that represent the viability of systems and their component systems unless we have at least a crude, but essentially realistic, understanding of the total system and its essential component systems. This requires a conceptual understanding in the form of at least a good mental model.

Identifying representative indicators. We have to select a small number of representative indicators from a vast number of potential candidates in the system and its component systems. This means concentrating on the variables of those component systems that are essential to the viability and performance of the total system.

Assessing performance based on indicator states. We must find measures that express the viability and performance of component systems and the total system. This requires translating indicator information into appropriate viability and performance measures.

Developing a participative process. The previous three steps require a large number of choices that necessarily reflect the knowledge and values of those who make them. In holistic management, it is therefore essential to bring in a wide spectrum of knowledge, experience, mental models, and social and environmental concerns to ensure that a comprehensive indicator set and proper performance measures are found.

“Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets,” Hartmut Bossel, Ecology and Society, Vol. 5, No. 2, Art. 12, 2001

Another dimension can be added to this applicability and relevance by the work of Xabier E. Barandiaran and Matthew D. Egbert on the role of norms in complex systems involving agents. Here is an extract from the abstract of their paper:

“One of the fundamental aspects that distinguishes acts from mere events is that actions are subject to a normative dimension that is absent from other types of interaction: natural agents behave according to intrinsic norms that determine their adaptive or maladaptive nature. We briefly review current and historical attempts to naturalize normativity from an organism-centred perspective that conceives of living systems as defining their own norms in a continuous process of self-maintenance of their individuality. We identify and propose solutions for two problems of contemporary modelling approaches to viability and normative behaviour in this tradition: 1) How to define the topology of the viability space beyond establishing normatively-rigid boundaries, so as to include a sense of gradation that permits reversible failure; and 2) How to relate, in models of natural agency, both the processes that establish norms and those that result in norm-following behaviour.”

The authors’ definition of a viability space in the same paper is of particular interest:

Viability space: the space defined by the relationship between: a) the set of essential variables representing the components, processes or relationships that determine the system’s organization and, b) the set of external parameters representing the environmental conditions that are necessary for the system’s self-maintenance

“Norm-establishing and norm-following in autonomous agency,” Xabier E. Barandiaran (IAS-Research Centre for Life, Mind, and Society, Dept. of Logic and Philosophy of Science, UPV/EHU University of the Basque Country, Spain) and Matthew D. Egbert (Center for Computational Neuroscience and Robotics, University of Sussex, Brighton, U.K.)

Clearly, an adequate account of the existential viability of civilization would want to address the “essential variables representing the components, processes or relationships that determine” the civilization’s structure, as well as the “external parameters representing the environmental conditions that are necessary” for the civilization’s self-maintenance.
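
To fix ideas, here is a toy sketch, in Python, of such a viability space, assuming a single essential variable A (with the dead region lying at A = 0, as in the figure above) and a single external parameter E for environmental conditions; the thresholds and names are mine and merely illustrative, not drawn from Barandiaran and Egbert’s model:

def classify(A, E, terminal_threshold=0.1, viable_threshold=0.5):
    """Classify a system state into regions of a toy viability space.
    A is the essential variable; E is the external (environmental)
    parameter. All thresholds are illustrative assumptions."""
    if A <= 0:
        return "dead"        # the dead region lies at A = 0
    if A < terminal_threshold:
        return "terminal"    # will irreversibly fall into the dead region
    if A < viable_threshold or E < 0:
        return "precarious"  # alive, but tending toward death unless conditions change
    return "viable"          # remains alive if environmental conditions don't change

# Reading civilization as the system: A might aggregate essential internal
# variables (food security, institutional integrity), E the external
# conditions (climate, resource base).
print(classify(A=0.7, E=1.0))   # viable
print(classify(A=0.3, E=1.0))   # precarious
print(classify(A=0.05, E=1.0))  # terminal

Even so crude a schema makes the conceptual point: viability is a relation between essential variables and environmental parameters, not a property of either alone.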

In working through the conception of existential risk in the series of posts I have written here I have come to realize how comprehensive the idea of existential risk is, which gives it a particular utility in discussing the big picture and the human future. In so far as existential viability is the condition that results from the successful mitigation of existential risk, the idea of existential viability is at least as comprehensive as that of existential risk.

In formulating this initial exposition of existential viability I have been struck by the conceptual synchronicities that have emerged: recent work in viability theory suggests the possibility of the mathematical modeling of civilization; the work of Barandiaran and Egbert on viability space has shown me the relevance of artificial life and artificial intelligence research; the key role of the concept of viability in ecology makes recent ecological studies (such as Assessing Viability and Sustainability, cited above) relevant to existential viability and therefore also to existential risk; formulations of ecological viability and sustainability, and the recognition that ecological systems are complex systems, demonstrate the relevance of complexity theory; ecological relevance to existential concerns points to the possibility of employing what I have written earlier about metaphysical ecology and ecological temporality to existential risk and existential viability, which in turn demonstrates the relevance of Bronfenbrenner’s work to this intellectual milieu. I dare say that the idea of existential viability has itself a kind of viability and resilience due to its many connections to many distinct disciplines.

. . . . .


Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

9. Conceptualization of Existential Risk

10. Existential Risk and Existential Viability

. . . . .


signature

. . . . .

Grand Strategy Annex

. . . . .

Saturday


It is difficult to find an authentic expression of horror, due to its close resemblance to both fear and disgust, but one readily recognizes horror when one sees it.

In several posts I have referred to moral horror and the power of moral horror to shape our lives and even to shape our history and our civilization (cf., e.g., Cosmic Hubris or Cosmic Humility?, Addendum on the Avoidance of Moral Horror, and Against Natural History, Right and Left). Being horrified on a uniquely moral level is a sui generis experience that cannot be reduced to any other experience, or any other kind of experience. Thus the experience of moral horror must not be denied (which would constitute an instance of failing to do justice to our intuitions), but at the same time it cannot be uncritically accepted as definitive of the moral life of humanity.

Our moral intuitions tell us what is right and wrong, but they do not tell us what is or is not (i.e., what exists or what does not exist). This is the upshot of the is-ought distinction, which, like moral horror, must not be taken as an absolute principle, even if it is a rough and ready guide in our thinking. It is perfectly consistent, if discomfiting, to explicitly acknowledge the moral horrors of the world, and not to deny that they exist even while acknowledging that they are horrific. Sometimes the claim is made that the world itself is a moral horror. Joseph Campbell attributes this view to Schopenhauer, saying that according to Schopenhauer the world is something that never should have been.

Apart from being a central theme of mythology, the horror of the world is also to be found in science. There is a famous quote from Darwin that illustrates the acknowledgement of moral horror:

“There seems to me too much misery in the world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice.”

Letter from Charles Darwin to Asa Gray, 22 May 1860

This quote from Darwin underlines another point repeatedly made by Joseph Campbell: that different individuals and different societies draw different lessons from the same world. For some, the sufferings of the world constitute an affirmation of divinity, while for Darwin and others, the sufferings of the world constitute a denial of divinity. That, however, is not the point I would like to make today.

Far more common than the acceptance of the world’s moral horrors as they are is the denial of moral horrors, and especially the denial that moral horrors will occur in the future. On one level, a pragmatic level, we like to believe that we have learned our lessons from the horrors of our past, and that we will not repeat them precisely because we have perpetrated horrors in the past and came to realize that they were horrors.

To insist that moral horrors can’t happen because it would offend our sensibilities to acknowledge such a moral horror is a fallacy. Specifically, the moral horror fallacy is a special case of the argumentum ad baculum (argument to the cudgel or appeal to the stick), which is in turn a special case of the argumentum ad consequentiam (appeal to consequences).

Here is one way to formulate the fallacy:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, such-and-such will not take place.

For “such-and-such” you can substitute “transhumanism” or “nuclear war” or “human extinction” and so on. The inference is fallacious only when the shift is made from is to ought or from ought to is. If we confine our inference exclusively either to what is or to what ought to be, we do not have a fallacy. For example:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, we must not allow such-and-such to take place.

…is not fallacious. It is, rather, a moral imperative. If you do not want a moral horror to occur, then you must not allow it to occur. This is what Kant called a hypothetical imperative. This is a formulation entirely in terms of what ought to be. We can also formulate this in terms of what is:

Such-and-such constitutes a moral horror,
Moral horrors do not occur,
Therefore, such-and-such does not occur.

This is a valid inference, although it is unsound. That is to say, this is not a formal fallacy but a material fallacy. Moral horrors do, in fact, occur, so the premise stating that moral horrors do not occur is a false premise, and the conclusion drawn from this false premise is a false conclusion. (Only if one denies that moral horrors do, in fact, take place can one maintain that this argument is sound.)
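
The three forms can be schematized in a rough deontic notation of my own (nothing standard is intended): let Occ(s) mean “such-and-such takes place,” H(s) mean “such-and-such is a moral horror,” and O mean “it ought to be the case that.”

\[
\begin{aligned}
\text{Fallacious (ought to is):} \quad & H(s),\; O(\neg \mathrm{Occ}(s)) \;\therefore\; \neg \mathrm{Occ}(s) \\
\text{Imperative (ought to ought):} \quad & H(s),\; O(\neg \mathrm{Occ}(s)) \;\therefore\; O(\neg \mathrm{Occ}(s)) \\
\text{Valid but unsound (is to is):} \quad & H(s),\; \forall x\,(H(x) \rightarrow \neg \mathrm{Occ}(x)) \;\therefore\; \neg \mathrm{Occ}(s)
\end{aligned}
\]

Only the first form crosses from O(¬Occ(s)) to ¬Occ(s), and it is precisely this crossing from ought to is that makes it a fallacy; the second stays within the ought, and the third stays within the is.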

Moral horrors can and do happen. They have been visited upon us numerous times. After the Holocaust everyone said “never again,” yet subsequent history has not spared us further genocides. Nor will it spare us further genocides and atrocities in the future. We cannot infer from our desire to be spared further genocides and atrocities that they will not come to pass.

More interesting than the fact that moral horrors continue to be perpetrated by the enlightened and technologically advanced human societies of the twenty-first century is the fact that the moral life of humanity evolves, and it often is the case that the moral horrors of the future, to which we look forward with fear and trembling, sometimes cease to be moral horrors by the time they are upon us.

Malthus famously argued that, because human population growth outstrips the production of food (Malthus was particularly concerned with human beings, but he held this to be a universal law affecting all life), humanity must end in misery or vice. By “misery” Malthus understood mass starvation — which I am sure most of us today would agree is misery — and by “vice” Malthus meant birth control. In other words, Malthus viewed birth control as a moral horror comparable to mass starvation. This is not a view that is widely held today.

A great many unprecedented events have occurred since Malthus wrote his Essay on the Principle of Population. The industrialization of agriculture not only provided the world with plenty of food for an unprecedented increase in human population, it did so while farming was reduced to a marginal sector of the economy. And in the meantime birth control has become commonplace — we speak of it today as an aspect of “reproductive rights” — and few regard it as a moral horror. However, in the midst of this moral change and abundance, starvation continues to be a problem, and perhaps even more of a moral horror because there is plenty of food in the world today. Where people are starving, it is only a matter of distribution, and this is primarily a matter of politics.

I think that in the coming decades and centuries there will be many developments that we today regard as moral horrors, but when we experience them they will not be quite as horrific as we thought. Take, for instance, transhumanism. Francis Fukuyama wrote a short essay in Foreign Policy magazine, Transhumanism, in which he identified transhumanism as the world’s most dangerous idea. While Fukuyama does not commit the moral horror fallacy in any explicit way, it is clear that he sees transhumanism as a moral horror. In fact, many do. But in the fullness of time, when our minds will have changed as much as our bodies, if not more, transhumanism is not likely to appear so horrific.

On the other hand, as I noted above, we will continue to experience moral horrors of unprecedented kinds, and probably also on an unprecedented scope and scale. With the human population at seven billion and climbing, our civilization may well experience wars and diseases and famines that kill billions even while civilization itself continues despite such depredations.

We should, then, be prepared for moral horrors — for some that are truly horrific, and others that turn out to be less than horrific once they are upon us. What we should not try to do is to infer from our desires and preferences in the present what must be or what will be. And the good news in all of this is that we have the power to change future events, to make the moral horrors that occur less horrific than they might have been, and to prepare ourselves intellectually to accept change that might have, once upon a time, been considered a moral horror.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

The Science of Time

30 January 2013

Wednesday


F. H. Bradley, in his classic treatise Appearance and Reality: A Metaphysical Essay, made this oft-quoted comment:

“If you identify the Absolute with God, that is not the God of religion. If again you separate them, God becomes a finite factor in the Whole. And the effort of religion is to put an end to, and break down, this relation — a relation which, none the less, it essentially presupposes. Hence, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him. It is this difficulty which appears in the problem of the religious self-consciousness.”

I think many commentators have taken this passage as emblematic of what they believe to be Bradley’s religious sentimentalism, and in fact the yearning for religious belief (no longer possible for rational men) that characterized much of the school of thought that we now call “British Idealism.”

This is not my interpretation. I’ve read enough Bradley to know that he was no sentimentalist, and while his philosophy diverges radically from contemporary philosophy, he was committed to a philosophical, and not a religious, point of view.

Bradley was an elder contemporary of Bertrand Russell, who characterized Bradley as the grand old man of British idealism. This is from Russell’s Our Knowledge of the External World:

“The nature of the philosophy embodied in the classical tradition may be made clearer by taking a particular exponent as an illustration. For this purpose, let us consider for a moment the doctrines of Mr Bradley, who is probably the most distinguished living representative of this school. Mr Bradley’s Appearance and Reality is a book consisting of two parts, the first called Appearance, the second Reality. The first part examines and condemns almost all that makes up our everyday world: things and qualities, relations, space and time, change, causation, activity, the self. All these, though in some sense facts which qualify reality, are not real as they appear. What is real is one single, indivisible, timeless whole, called the Absolute, which is in some sense spiritual, but does not consist of souls, or of thought and will as we know them. And all this is established by abstract logical reasoning professing to find self-contradictions in the categories condemned as mere appearance, and to leave no tenable alternative to the kind of Absolute which is finally affirmed to be real.”

Bertrand Russell, Our Knowledge of the External World, Chapter I, “Current Tendencies”

Although Russell rejected what he called the classical tradition, and distinguished himself in contributing to the origins of a new philosophical school that would come (in time) to be called analytical philosophy, the influence of figures like F. H. Bradley and J. M. E. McTaggart (whom Russell knew personally) can still be found in Russell’s philosophy.

In fact, the above quote from F. H. Bradley — especially the portion most quoted, “short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him” — is a perfect illustration of a principle found in Russell, and something on which I have quoted Russell many times, as it has been a significant influence on my own thinking.

I have come to refer to this principle as Russell’s generalization imperative. Russell didn’t call it this (the terminology is mine), and he didn’t in fact give any name at all to the principle, but he implicitly employs this principle throughout his philosophical method. Here is how Russell himself formulated the imperative (which I last quoted in The Genealogy of the Technium):

“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”

Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”

One of the distinctive features that Russell identifies as constitutive of the classical tradition, and in fact one of the few explicit commonalities between the classical tradition and Russell’s own thought, was the denial of time. The British idealists denied the reality of time outright, in the best Platonic tradition; Russell did not deny the reality of time, but he was explicit about not taking time too seriously.

Despite Russell’s hostility to mysticism as expressed in his famous essay “Mysticism and Logic,” when it comes to the mystic’s denial of time, Russell softens a bit and shows his sympathy for this particular aspect of mysticism:

“Past and future must be acknowledged to be as real as the present, and a certain emancipation from slavery to time is essential to philosophic thought. The importance of time is rather practical than theoretical, rather in relation to our desires than in relation to truth. A truer image of the world, I think, is obtained by picturing things as entering into the stream of time from an eternal world outside, than from a view which regards time as the devouring tyrant of all that is. Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”

And…

“…impartiality of contemplation is, in the intellectual sphere, that very same virtue of disinterestedness which, in the sphere of action, appears as justice and unselfishness. Whoever wishes to see the world truly, to rise in thought above the tyranny of practical desires, must learn to overcome the difference of attitude towards past and future, and to survey the whole stream of time in one comprehensive vision.”

Bertrand Russell, Mysticism and Logic, and Other Essays, Chapter I, “Mysticism and Logic”

While Russell and the classical tradition in philosophy both perpetuated the devalorization of time, this attitude is slowly disappearing from philosophy, and contemporary philosophers are more and more treating time as another reality to be given philosophical exposition rather than denying its reality. I regard this as a salutary development and a riposte to all who claim that philosophy makes no advances. Contemporary philosophy of time is quite sophisticated, and embodies a much more honest attitude to the world than the denial of time. (For those looking at philosophy from the outside, the denial of the reality of time simply sounds like a perverse waste of time, but I won’t go into that here.)

In any case, we can bring Russell’s generalization imperative to time and history even if Russell himself did not do so. That is to say, we ought to generalize to the utmost in our conception of time, and if we do so, we come to a principle parallel to Bradley’s that I think both Russell and Bradley would have endorsed: short of the absolute time cannot rest, and, having reached that goal, time is lost and history with it.

Since I don’t agree with this, but it would be one logical extrapolation of Russell’s generalization imperative as applied to time, this suggests to me that there is more than one way to generalize about time. One way would be the kind of generalization that I formulated above, presumably consistent with Russell’s and Bradley’s devalorization of time. Time generalized in this way becomes a whole, a totality, that ceases to possess the distinctive properties of time as we experience it.

The other way to generalize time is, I think, in accord with the spirit of Big History: here Russell’s generalization imperative takes the form of embedding all times within larger, more comprehensive times, until we reach the time of the entire universe (or beyond). The science of time, as it is emerging today, demands that we seek the most comprehensive temporal perspective, placing human action in evolutionary context, placing evolution in biological context, placing biology in geomorphological context, placing terrestrial geomorphology into a planetary context, and placing this planetary perspective into a cosmological context. This, too, is a kind of generalization, and a generalization that fully feels the imperative that to stop at any particular “level” of time (which I have elsewhere called ecological temporality) is arbitrary.

On my other blog I’ve written several posts related directly or obliquely to Big History as I try to define my own approach to this emerging school of historiography: The Place of Bilateral Symmetry in the History of Life, The Archaeology of Cosmology, and The Stars Down to Earth.

The more we pursue the rapidly growing body of knowledge revealed by scientific historiography, the more we find that we are part of the larger universe; our connections to the world expand as we pursue them outward in pursuit of Russell’s generalization imperative. I think it was Hans Blumenberg, in his enormous book The Genesis of the Copernican World, who remarked on the significance of the fact that we can stand with our feet on the earth and look up at the stars. As I remarked in The Archaeology of Cosmology, we now find that by digging into the earth we can reveal past events of cosmological history. As a celestial counterpart to this digging in the earth (almost as though concretely embodying the contrast to which Blumenberg referred), we know that by looking up at the stars we are also looking back in time, because the light that comes to us arrives ages after it has been produced. Thus is astronomy a kind of luminous archaeology.

In Geometrical Intuition and Epistemic Space I wrote, “…we have no science of time. We have science-like measurements of time, and time as a concept in scientific theories, but no scientific theory of time as such.” Scientists have tried to think scientifically about time, but a science of time eludes us just as a science of consciousness eludes us. Here a philosophical perspective remains necessary, because there are so many open questions and no clear indication of how these questions are to be answered in a clearly scientific spirit.

Therefore I think it is too early to say exactly what Big History is, because we aren’t logically or intellectually prepared to say exactly what the Russellian generalization imperative yields when applied to time and history. I think that we are approaching a point at which we can clarify our concepts of time and history, but we aren’t quite there yet, and a lot of conceptual work is necessary before we can produce a definitive formulation of time and history that will make of Big History the science it aspires to be.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .


Friday


Alonzo Church and Alan Turing

What is the Church-Turing Thesis? The Church-Turing Thesis (also called Church’s Thesis, Church’s Conjecture, and the Church-Turing Conjecture, among other names) is an idea from theoretical computer science that emerged from research in the foundations of logic and mathematics, and that ultimately bears upon what can be computed, and thus, by extension, upon what a computer can do (and what a computer cannot do).

Note: For clarity’s sake, I ought to point out that Church’s Thesis and Church’s Theorem are distinct. Church’s Theorem is an established theorem of mathematical logic, proved by Alonzo Church in 1936, that there is no decision procedure for logic (i.e., there is no method for determining whether an arbitrary formula in first order logic is a theorem). But the two – Church’s theorem and Church’s thesis – are related: both follow from the exploration of the possibilities and limitations of formal systems and the attempt to define these in a rigorous way.

Even to state Church’s Thesis is controversial. There are many formulations, and many of these alternative formulations come straight from Church and Turing themselves, who framed the idea differently in different contexts. Also, when you hear computer science types discuss the Church-Turing thesis you might think that it is something like an engineering problem, but it is essentially a philosophical idea. What the Church-Turing thesis is not is as important as what it is: it is not a theorem of mathematical logic, it is not a law of nature, and it is not a limit of engineering. We could say that it is a principle, because the word “principle” is ambiguous and thus covers the various formulations of the thesis.

There is an article on the Church-Turing Thesis at the Stanford Encyclopedia of Philosophy, one at Wikipedia (of course), and even a website dedicated to a critique of the Stanford article, Alan Turing in the Stanford Encyclopedia of Philosophy. All of these are valuable resources on the Church-Turing Thesis, and well worth reading to gain some orientation.

One way to formulate Church’s Thesis is that all effectively computable functions are general recursive. Both “effectively computable functions” and “general recursive” are technical terms, but there is an important difference between these technical terms: “effectively computable” is an intuitive conception, whereas “general recursive” is a formal conception. Thus one way to understand Church’s Thesis is that it asserts the identity of a formal idea and an informal idea.
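
The formal half of this identity can at least be gestured at in code. Here is a minimal sketch, in Python, of unbounded minimization (the μ-operator), the ingredient that carries us beyond primitive recursion to general recursiveness; the function names are mine, purely illustrative:

def mu(predicate):
    """Return the least natural number n for which predicate(n) holds.
    If no such n exists, the search never terminates -- which is why
    general recursive functions are partial rather than total."""
    n = 0
    while not predicate(n):
        n += 1
    return n

# Example: the integer square root, computed as an unbounded search.
def isqrt(x):
    return mu(lambda n: (n + 1) * (n + 1) > x)

print(isqrt(10))  # prints 3

The while loop is the whole point: nothing guarantees that it halts, and it is exactly this possibility of divergence that separates the general recursive functions, which may be partial, from the primitive recursive functions, all of which are total.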

One of the reasons that there are many alternative formulations of the Church-Turing thesis is that there are several formally equivalent formulations of recursiveness: recursive functions, Turing computable functions, Post computable functions, representable functions, lambda-definable functions, and Markov normal algorithms among them. All of these are formal conceptions that can be rigorously defined. For the other term that constitutes the identity that is Church’s thesis, there are also several alternative formulations of effectively computable functions, and these include other intuitive notions like that of an algorithm or a procedure that can be implemented mechanically.
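
To give the flavor of one of these equivalent formalisms, here is a sketch of lambda-definability, with Church numerals transcribed directly into Python lambdas (an illustration only, not a rigorous development):

# Church numerals: a natural number n is represented by the function
# that applies its argument f exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5

That addition so defined coincides with addition defined by recursion equations, or by a Turing machine, is just the sort of equivalence result that lends Church’s Thesis its plausibility.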

These may seem like recondite matters with little or no relationship to ordinary human experience, but I am surprised how often I find the same theoretical conflict played out in the most ordinary and familiar contexts. The dialectic of the formal and the informal (i.e., the intuitive) is much more central to human experience than is generally recognized. For example, the conflict between intuitively apprehended free will and apparently scientifically unimpeachable determinism is a conflict between an intuitive and a formal conception that both seem to characterize human life. Compatibilist accounts of determinism and free will may be considered the “Church’s thesis” of human action, asserting the identity of the two.

It should be understood here that when I discuss intuition in this context I am talking about the kind of mathematical intuition I discussed in Adventures in Geometrical Intuition, although the idea of mathematical intuition can be understood as perhaps the narrowest formulation of that intuition that is the polar concept standing in opposition to formalism. Kant made a useful distinction between sensory intuition and intellectual intuition that helps to clarify what is intended here, since the very idea of intuition in the Kantian sense has become lost in recent thought. Once we think of intuition as something given to us in the same way that sensory intuition is given to us, only without the mediation of the senses, we come closer to the operative idea of intuition as it is employed in mathematics.

Mathematical thought, and formal accounts of experience generally speaking, of course, seek to capture our intuitions, but this formal capture of the intuitive is itself an intuitive and essentially creative process even when it culminates in the formulation of a formal system that is essentially inaccessible to intuition (at least in parts of that formal system). What this means is that intuition can know itself, and know itself to be an intuitive grasp of some truth, but formality can only know itself as formality and cannot cross over the intuitive-formal divide in order to grasp the intuitive even when it captures intuition in an intuitively satisfying way. We cannot even understand the idea of an intuitively satisfying formalization without an intuitive grasp of all the relevant elements. Just as Spinoza said that the true is the criterion both of itself and of the false, so we can say that the intuitive is the criterion both of itself and of the formal. (And given that, today, truth is primarily understood formally, this is a significant claim to make.)

The above observation can be formulated as a general principle such that the intuitive can grasp all of the intuitive and a portion of the formal, whereas the formal can grasp only itself. I will refer to this as the principle of the asymmetry of intuition. We can see this principle operative both in the Church-Turing Thesis and in popular accounts of Gödel’s theorem. We are all familiar with popular and intuitive accounts of Gödel’s theorem (since the formal accounts are so difficult), and it is not unusual to encounter claims for the limitative theorems that go far beyond what they formally demonstrate.

All of this holds also for the attempt to translate traditional philosophical concepts into scientific terms — the most obvious example being free will, supposedly accounted for by physics, biochemistry, and neurobiology. But if one makes the claim that consciousness is nothing but such-and-such physical phenomenon, it is impossible to cash out this claim in any robust way. The science is quantifiable and formalizable, but our concepts of mind, consciousness, and free will remain stubbornly intuitive and have not been satisfyingly captured in any formalism — and whether any proposed formalization is satisfying could only be determined by intuition, so that the question eludes formal capture. To “prove” determinism, then, would be as incoherent as “proving” Church’s Thesis in any robust sense.

There certainly are interesting philosophical arguments on both sides of Church’s Thesis — that is to say, both its denial and its affirmation — but these are arguments that appeal to our intuitions and, most crucially, our idea of ourselves is intuitive and informal. I should like to go further and to assert that the idea of the self must be intuitive and cannot be otherwise, but I am not fully confident that this is the case. Human nature can change, albeit slowly, along with the human condition, and we could, over time — and especially under the selective pressures of industrial-technological civilization — shape ourselves after the model of a formal conception of the self. (In fact, I think it very likely that this is happening.)

I cannot even say — I would not know where to begin — what would constitute a formal self-understanding of the self, much less any kind of understanding of a formal self. Well, maybe not. I have written elsewhere that the doctrine of the punctiform present (not very popular among philosophers these days, I might add) is a formal doctrine of time, and in so far as we identify internal time consciousness with the punctiform present we have a formal doctrine of the self.

While the above account is one to which I am sympathetic, this kind of formal concept — I mean the punctiform present as a formal conception of time — is very different from the kind of formality we find in physics, biochemistry, and neuroscience. We might assimilate it to some mathematical formalism, but this is an abstraction made concrete in subjective human experience, not in physical science. Perhaps this partly explains the fashionable anti-philosophy that I have written about.

. . . . .

Monday


The Urnes stave church — the sun came out briefly as we crossed the fjord from Solvorn to Urnes, though the rest of the day was overcast or raining.

Even if you know what to look for, it is quite difficult to pick out the Urnes stave church from across the fjord at Solvorn, where a small ferry departs each hour on the hour to take tourists and a few cars and bicycles across Sognefjord over to the Urnes side (also spelled “Ornes”). Once across, you walk up the hill to the top of the village, and there sits the Urnes stave church among trees and the cultivated hillsides, just as it has been sitting for more than 800 years. This is the second time I have been to Urnes, and I was unable to see the stave church from across the fjord; perhaps if I had had binoculars I would have seen it, but it melds into the landscape from which it came.

Looking back to Solvorn from the top of the hill at Urnes, standing next to this ancient wooden structure, little changed from when it was built — Urnes is thought to be the oldest of the surviving stave churches, with timbers dating from 1129-1130 (thanks to dendrochronology) — it is very easy to imagine the villagers of Solvorn getting into the wooden boats, rowing across the fjord, and walking up the hill to attend services in their ancient church. We often hear the phrase “time stands still” — at Urnes, you can stand still along with time for a few moments. Here, history has been paused.

In saying that history is paused at Urnes I am reminded of a passage from Rembrandt and Spinoza by Leo Balet, which I quoted previously in Capturing the Moment:

“In those of his portraits where the portrayed is not acting, but just resting, pausing, we get the feeling that the resting continues, that it is a resting with duration, a resting, thus, in time; in those pictures we are closer to life than in the portraits where just the breaking off of the action makes us so vividly aware that his whole action was make-believe.”

Leo Balet, Rembrandt and Spinoza, p. 184

Balet here frames his thesis in terms of portraiture, but the same might be said of a photograph or a sculpture — or even of a place that changes but little over the years. Urnes is such a place, and, in fact, there are many such places in Norway. Yesterday in A Wittgensteinian Pilgrimage I noted how Wittgenstein’s correspondents in Skjolden often closed their letters with, “All is as before here” (“Her er det som før”). In Skjolden, too, time is paused.

Similarly, the busyness of the world appears to us as mere make-believe when seen from the perennial perspective of unchanging continuity in time. Our hurried and harassed lives seem mindless and perhaps a bit comical when compared to forms of life that endure — or, to put it otherwise, compared to modes of life that enjoy historical viability.

I have elsewhere defined historical viability as the ability of an existent to endure in existence by changing as the world changes; now I realize that the world changes in different ways at different times and places, so that historical viability is a local phenomenon that is subject to conditions closely similar to natural selection — existents are selected for historical viability not by being “better” or “higher” or “superior” or “perfect,” but by being the most suited to their environment. In the present context, “environment” should be understood as the temporal or historical environment of a historical existent — with this in mind, a more subtle form of the principle of historical viability begins to emerge.

. . . . .

Solvorn, across the fjord from Urnes.

. . . . .


Tuesday


Mario Monti said of the Euro that, “the will to make it indissoluble and irrevocable is there.” Today, perhaps yes, but what will the will be tomorrow?

Each time the Eurozone puts together another bailout package, the markets follow with a brief (sometimes very brief) rally, which collapses pretty much as soon as reality reasserts itself and it becomes obvious that most of the measures are creative ways of kicking the can down the road, while those measures that amount to more than kicking the can down the road are probably too ambitious to be practical policies in the midst of a financial crisis.

Simply from a practical point of view, it is difficult to imagine how anyone can believe that a more comprehensive fiscal and political union can be brought about in the midst of the crisis, however well intentioned the effort to save the Eurozone, since the original (and much more limited) Eurozone was negotiated, planned, and implemented over a period of many years, not over a period of a few days as inter-bank loan rates are climbing by the hour. Apart from this practical problem, there are several issues of principle at stake in the Eurozone crisis and the attempts to rescue the European Monetary Union.

Mario Monti was quoted in a Reuters article, Monti says EU hinges on summit talks outcome: report, defending the strengthening of financial and political ties within the Eurozone as a way to save the Euro:

“Europeans know where they’re going… the markets are convinced that having given birth to the euro, the will to make it indissoluble and irrevocable is there and will be strengthened by other steps towards integration.”

Can the Euro be made “indissoluble and irrevocable”? Can anything be made indissoluble and irrevocable? I think not, and this is a matter of principle to which I attach great importance.

I have several times quoted Edward Gibbon on the impossibility of present legislators binding the acts of future legislators:

“In earthly affairs, it is not easy to conceive how an assembly of legislators can bind their successors invested with powers equal to their own.”

Edward Gibbon, History of the Decline and Fall of the Roman Empire, Vol. VI, Chapter LXVI, “Union of the Greek and Latin Churches,” Part III

Since I have quoted this several times (in The Imperative of Regime Survival, The Institution of Language, and The Chilean Model, e.g.), implicitly maintaining that it states an important principle, I am now going to give this principle a name: Gibbon’s Principle of Inalienable Autonomy for Political Entities, or, more briefly, Gibbon’s Principle.

As I have tried to make explicit, Gibbon’s Principle holds for political entities, but I have also quoted a passage from Sartre that presents essentially the same idea for individuals rather than for political entities:

“I cannot count upon men whom I do not know, I cannot base my confidence upon human goodness or upon man’s interest in the good of society, seeing that man is free and that there is no human nature which I can take as foundational. I do not know where the Russian revolution will lead. I can admire it and take it as an example in so far as it is evident, today, that the proletariat plays a part in Russia which it has attained in no other nation. But I cannot affirm that this will necessarily lead to the triumph of the proletariat: I must confine myself to what I can see. Nor can I be sure that comrades-in-arms will take up my work after my death and carry it to the maximum perfection, seeing that those men are free agents and will freely decide, tomorrow, what man is then to be. Tomorrow, after my death, some men may decide to establish Fascism, and the others may be so cowardly or so slack as to let them do so. If so, Fascism will then be the truth of man, and so much the worse for us. In reality, things will be such as men have decided they shall be. Does that mean that I should abandon myself to quietism? No. First I ought to commit myself and then act my commitment, according to the time-honoured formula that “one need not hope in order to undertake one’s work.” Nor does this mean that I should not belong to a party, but only that I should be without illusion and that I should do what I can. For instance, if I ask myself ‘Will the social ideal as such, ever become a reality?’ I cannot tell, I only know that whatever may be in my power to make it so, I shall do; beyond that, I can count upon nothing.”

Jean-Paul Sartre, “Existentialism is a Humanism” (lecture from 1946, translated by Philip Mairet)

This I will now also name with a principle: Sartre’s Principle of Inalienable Autonomy for Individuals, or, more briefly, Sartre’s Principle.

If that weren’t already enough principles for today, I am going to formulate another principle, and although this one is my own I’m not going to name it after myself after the fashion of the names I’ve given to Gibbon’s Principle and Sartre’s Principle. This additional principle is The Principle of the Political Primacy of the Individual (admittedly awkward — I will try to think of a better name for this): political autonomy is predicated upon individual autonomy. In other words, Gibbon’s Principle carries the force that it does because of Sartre’s Principle, and this makes Sartre’s Principle the more fundamental.

At present I am not going to argue for The Principle of the Political Primacy of the Individual; I will simply assume that Gibbon’s Principle supervenes upon Sartre’s Principle. But I wanted to make clear that I understand that there are those who would reject this principle, and that there are arguments on both sides of the question. There is no established literature on this principle so far as I know, as I am not aware that anyone has previously formulated it in an explicit form, but I can easily imagine arguments taken from classic sources that bear on both sides of the principle (i.e., its affirmation or its denial).

Because, as Sartre said, “men are free agents and will freely decide,” the Euro cannot be made “indissoluble and irrevocable,” and the attempt to make it seem so is pure folly. For in order to maintain this appearance, we must be dishonest with ourselves; we must make claims and assertions that we know to be false. This cannot be a robust foundation for any political effort. If, tomorrow, a deeper economic and political union of the Eurozone becomes the truth of Europe, this does not mean that the day after tomorrow it will remain the truth of Europe.

And this brings us to yet another principle, and this principle is a negative formulation of a principle that I have formulated in the past, the principle of historical viability. According to the principle of historical viability, an existent must change as the world changes or it will be eliminated from history. This means that entities that remain in existence must be so malleable that they can change in their essence, for if they fail to change, they experience adverse selection.

A negative formulation of the principle of historical viability might be called the principle of historical calamity: any existent so constituted that it cannot change is doomed to extinction, and sooner rather than later. In other words, any effort that is made to make the Euro “indissoluble and irrevocable” not only will fail to make the Euro indissoluble and irrevocable, but will in fact make the Euro all the more vulnerable to historical forces that would destroy it.

When I previously discussed Gibbon’s Principle and Sartre’s Principle (before I had named these principles as such) in The Imperative of Regime Survival, I cited an effort in Cuba to incorporate Castro’s vision of Cuba’s socio-economic system into the constitution as a permanent feature of the government of Cuba that would presumably hold until the end of time. This would be laughable were it not the source of so much human suffering and misery.

Well, the Europeans aren’t imposing any misery on themselves on the level of that which has been imposed upon the Cuban people by their elites, but the folly in each class of elites is essentially the same: the belief that those in power today, at the present moment, are in a privileged position to dictate the only correct institutional model for all time and eternity. In other words, the End of History has arrived.

Why not make the Euro an open, flexible, and malleable institution that can respond to political, social, economic, and demographic changes? Sir Karl Popper famously wrote about The Open Society and Its Enemies — ought not an open society to have open institutions? And would not open institutions be those formulated with an eye toward continuous evolution in the light of further and future experience?

To deny Gibbon’s Principle and Sartre’s Principle is to count oneself among the enemies of open societies and open institutions.

. . . . .

Sunday


Science often (though not always or exclusively) involves a quantitative approach to phenomena. As the phenomena of the world are often (though not always or exclusively) continuous, the continuum of phenomena must be broken up into discrete chunks of experience, however imperfect the division. If we are to quantify knowledge, we must have distinctions, and distinctions must be interpolated at some particular point in a continuum.

The truncation principle is the principled justification of this practice, and the truncation fallacy is the claim that distinctions in the name of quantification are illegitimate. The claim of the illegitimacy of a given distinction is usually based on an ideal standard of distinctions having to be based on a sharply-bounded concept that marks an exhaustive division that admits of no exceptions. This is an unreasonable standard for human experience or its systematization in scientific knowledge.

One of my motivations (though not my only motivation) for formulating the truncation principle was the obvious application to historical periodization. Historians have always been forced to confront the truncation fallacy, and although I am not aware of any previous name for the conceptual problems involved in historical periodization, these problems have been ever-present in the background of historical thought.

Here is an implicit exposition of the problems of the truncation principle by Marc Bloch, one of the most eminent members of the Annales school of historians (which also included Fernand Braudel, of whom I have written on many occasions), and who was killed by the Gestapo while working for the French resistance during the Second World War:

“…it is difficult to imagine that any of the sciences could treat time as a mere abstraction. Yet, for a great number of those who, for their own purposes, chop it up into arbitrary homogenous segments, time is nothing more than a measurement. In contrast, historical time is a concrete and living reality with an irreversible onward rush… this real time is, in essence, a continuum. It is also perpetual change. The great problems of historical inquiry derive from the antithesis of these two attributes. There is one problem especially, which raises the very raison d’être of our studies. Let us assume two consecutive periods taken out of the uninterrupted sequence of the ages. To what extent does the connection which the flow of time sets between them predominate, or fail to predominate, over the differences born out of the same flow?”

Marc Bloch, The Historian’s Craft, translated by Peter Putnam, New York: Vintage, 1953, Chapter I, sec. 3, “Historical Time,” pp. 27-29

Bloch, then, sees time itself, the structure of time, as the source both of historical continuity and historical discontinuity. For Bloch the historian, time is the truncation principle, as for some metaphysicians space (or time, for that matter) simply is the principle of individuation.

The truncation principle and the principle of individuation are closely related. What makes an individual an individual? When it is cut off from the rest of the world and designated as an individual. I haven’t thought this through yet, so I will reserve further remarks until I’ve made an effort to review the history of the principium individuationis.

The “two attributes” of continuity and change are both functions of time; both the connection and the differences between any two “arbitrary homogenous segments” are due to the action of time, according to Bloch.

The truncation principle, however, has a wider application than time. To express the truncation principle in terms of time invites a formulation (or an example) in terms of space, and there is an excellent example ready to hand: that of the color spectrum of visible light. There is a convention of dividing the color spectrum into red, orange, yellow, green, blue, indigo, and violet. But this is not the only convention. Because the word “indigo” is becoming almost archaic, one now sees the color spectrum decomposed into red, orange, yellow, green, blue, and purple.

Both decompositions of the color spectrum, and any others that might be proposed, constitute something like, “arbitrary homogenous segments.” The decomposition of the color spectrum is justified by the truncation principle, but the principle does not privilege any one decomposition over any other. All distinctions are equal, and if any one distinction is taken to be more equal than others, it is only because this distinction has the sanction of tradition.
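A small sketch may make the point tangible (the code and its cut points are mine; the nanometre boundaries are conventional round figures, not physical constants). Two rival decompositions of the visible spectrum classify the same wavelength under different names, and nothing in the continuum itself privileges either scheme:

    # Two conventional decompositions of the visible spectrum (wavelengths in nm).
    # Each entry gives the lower boundary of a named band; the final entry is a
    # sentinel marking the upper end of the visible range.
    WITH_INDIGO    = [(380, "violet"), (450, "indigo"), (485, "blue"), (500, "green"),
                      (565, "yellow"), (590, "orange"), (625, "red"), (750, None)]
    WITHOUT_INDIGO = [(380, "purple"), (450, "blue"), (495, "green"),
                      (570, "yellow"), (590, "orange"), (620, "red"), (750, None)]

    def classify(wavelength_nm: float, scheme) -> str:
        """Truncate the continuum: name the band containing the wavelength."""
        for (lo, name), (hi, _) in zip(scheme, scheme[1:]):
            if lo <= wavelength_nm < hi:
                return name
        raise ValueError("outside the visible range")

    # The same physical stimulus, two names; neither decomposition is privileged.
    print(classify(460, WITH_INDIGO))     # indigo
    print(classify(460, WITHOUT_INDIGO))  # blue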

. . . . .

Saturday


Truncated sphere: can we appeal to any principle in our truncations?

We can make a distinction among distinctions between ad hoc and principled distinctions. The former category — ad hoc distinctions — may ultimately prove to be based on a principle, but that principle is unknown as long as the distinction remains an ad hoc distinction. This suggests a further distinction among distinctions between ad hoc distinctions that really are ad hoc, and which are based on no principle, and ad hoc distinctions that are really principled distinctions but the principle in question is not yet known, or not yet formulated, at the time the distinction is made. So there you have a principled distinction between distinctions.

A perfect evocation of ad hoc distinctions is to be found in the opening paragraph of the Preface to Foucault’s The Order of Things:

This book first arose out of a passage in Borges, out of the laughter that shattered, as I read the passage, all the familiar landmarks of my thought — our thought, the thought that bears the stamp of our age and our geography — breaking up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things, and continuing long afterwards to disturb and threaten with collapse our age-old distinction between the Same and the Other. This passage quotes a ‘certain Chinese encyclopedia’ in which it is written that ‘animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’. In the wonderment of this taxonomy, the thing we apprehend in one great leap, the thing that, by means of the fable, is demonstrated as the exotic charm of another system of thought, is the limitation of our own, the stark impossibility of thinking that.

Such distinctions are comic, though Foucault recognizes that our laughter is uneasy: even as we immediately recognize the ad hoc character of these distinctions, we realize that the principled distinctions we routinely employ may not be so principled as we supposed.

Foucault continues this theme for several pages, and then gives another formulation — perhaps, given his interest in mental illness, an illustration that is closer to reality than Borges’ Chinese encyclopedia:

“It appears that certain aphasiacs, when shown various differently coloured skeins of wool on a table top, are consistently unable to arrange them into any coherent pattern; as though that simple rectangle were unable to serve in their case as a homogeneous and neutral space in which things could be placed so as to display at the same time the continuous order of their identities or differences as well as the semantic field of their denomination. Within this simple space in which things are normally arranged and given names, the aphasiac will create a multiplicity of tiny, fragmented regions in which nameless resemblances agglutinate things into unconnected islets; in one corner, they will place the lightest-coloured skeins, in another the red ones, somewhere else those that are softest in texture, in yet another place the longest, or those that have a tinge of purple or those that have been wound up into a ball. But no sooner have they been adumbrated than all these groupings dissolve again, for the field of identity that sustains them, however limited it may be, is still too wide not to be unstable; and so the sick mind continues to infinity, creating groups then dispersing them again, heaping up diverse similarities, destroying those that seem clearest, splitting up things that are identical, superimposing different criteria, frenziedly beginning all over again, becoming more and more disturbed, and teetering finally on the brink of anxiety.”

Foucault here writes that, “the sick mind continues to infinity,” in other words, the process does not terminate in a definite state-of-affairs. This implies that the healthy mind does not continue to infinity: rational thought must make concessions to human finitude. While I find the use of the concept of the pathological in this context questionable, and I have to wonder if Foucault was unwittingly drawn into the continental anti-Cantorian tradition (Brouwerian intuitionism and the like, though I will leave this aside for now), there is some value to the idea that a scientific process (such as classification) must terminate in a finite state-of-affairs, even if only tentatively. I will try to show, moreover, that there is an implicit principle in this attitude, and that it is in fact a principle that I have discussed previously.

The quantification of continuous data requires certain compromises. Two of these compromises are finite precision errors (also called rounding errors) and finite dimension errors (also called truncation). Rounding errors should be pretty obvious: finite parameters cannot abide infinite decimal expansions, and so we set a limit of six decimal places, or twenty, or more — but we must set a limit. The difference between an actual figure and the limited decimal expansion of the same figure is called a finite precision error. Finite dimension errors result from the need to arbitrarily introduce gradations into a continuum. Using the real number system, any continuum can be faithfully represented, but this representation would require infinite decimal expansions, so we see that there is a deep consonance between finite precision errors and finite dimension errors. Thus, for example, we measure temperature by degrees, and the arbitrariness of this measure is driven home to us by the different scales we can use for this measurement. And if we could specify temperature using real numbers (including transcendental numbers) we would not have to compromise. But engineering and computers and even human minds need to break things up into manageable finite quantities, so we speak of 3 degrees C, or even 3.14 degrees C, but we don’t try to work with pi degrees C. Thus the increments of temperature, or of any other measurement, involve both finite precision errors and finite dimension errors.
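Both compromises can be exhibited in a few lines of code (a sketch of mine, with only illustrative figures): the stored value of pi is already a rounding of the actual figure, and a temperature reading forced onto a scale of discrete increments incurs a finite dimension error over and above any rounding.

    import math

    # Finite precision error: pi has an infinite decimal expansion, so any
    # stored approximation discards a residue.
    pi_6 = round(math.pi, 6)   # 3.141593, a six-place rounding
    print(math.pi - pi_6)      # the residue the rounding discards

    # Finite dimension error: a continuous reading forced onto gradations.
    def to_increments(temp_c: float, step: float = 0.1) -> float:
        """Report a continuous temperature as the nearest available gradation."""
        return round(temp_c / step) * step

    reading = 3.14159               # we speak of 3.1 degrees C, not pi degrees C
    print(to_increments(reading))   # 3.1 (itself carrying a finite precision error)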

In so far as quantification is necessary to the scientific method, finite dimension errors are necessary to the scientific method. In several posts (e.g., Axioms and Postulates in Strategy) I have cited Carnap’s tripartite distinction among scientific concepts, the three being classificatory, comparative, and quantitative concepts. Carnap characterizes the emergence of quantitative scientific concepts as the most sophisticated form of scientific thought, but in reviewing Carnap’s scientific concepts in the light of finite precision errors and finite dimension errors, it is immediately obvious that classificatory concepts and comparative concepts do not necessarily involve finite precision errors and finite dimension errors. It is only with the introduction of quantitative concepts that science becomes sufficiently precise that its precision forces compromises upon us. However, I should point out that classificatory concepts routinely force us to accept finite dimension errors, although they do not involve finite precision errors. The example given by Foucault, quoted above, illustrates the inherent tension in classificatory concepts.

We accept finite precision errors and finite dimension errors as the price of doing science, and indeed as the price of engaging in rational thought. As Foucault implied in the above quote, the healthy and sane mind must draw lines and define limits and call a halt to things. Sometimes these limits are close to being arbitrary. We retain the ambition of “carving nature at the joints,” but we accept that we can’t always locate the joint but at times must cleave the carcass of nature regardless.

For this willingness to draw lines and establish limits and to call a halt to proceedings I will give the name The Truncation Principle, since it is in virtue of cutting off some portion of the world and treating it as though it were a unified whole that we are able to reason about the world.

As I mentioned above, I have discussed this problem previously, and in my discussion I noted that I wanted to give an exposition of a principle and a fallacy, but that I did not have a name for it yet, so I called it An Unnamed Principle and an Unnamed Fallacy. Now I have a name for it, and I will use this name, i.e., the truncation principle, from now on.

Note: I was tempted to call this principle the “baby retention principle” or even the “hang on to your baby principle” since it is all about the commonsense notion of not throwing out the baby with the bathwater.

In An Unnamed Principle and an Unnamed Fallacy I initially formulated the principle as follows:

The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.

What I most want to highlight is that when someone points out there are gray areas that seem to elude classification by any clear cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.

A distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.

I hope to be able to elaborate on this formulation as I continue to think about the truncation principle and its applications in philosophical, formal, and scientific thought.

Usually when we hear “truncation” we immediately think of the geometrical exercise of regularly cutting away parts of the regular (Platonic) solids, yielding truncated polyhedra and converging on rectified polyhedra. This is truncation in space. Truncation in time, on the other hand, is what is more commonly known as historical periodization. How exactly one historical period is to be cut off from another is always problematic, not least due to the complexity of history and the sheer number of outliers that seem to falsify any attempt at periodization. And yet, we need to break history up into comprehensible chunks. When we do so, we engage in temporal truncation.
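Temporal truncation can be sketched in exactly the same form as the spectrum example above (the cut points here are textbook conventions chosen by me for illustration; historians dispute every one of them). The sketch also exhibits the outlier problem: years on either side of a boundary are classified differently even though almost nothing distinguishes them.

    # A conventional periodization of Western history; each entry gives the
    # first year of a period, with a sentinel closing the final period.
    PERIODS = [(-3000, "Ancient"), (476, "Medieval"),
               (1453, "Early Modern"), (1789, "Modern"), (2100, None)]

    def period_of(year: int) -> str:
        """Assign a year to a period: a truncation in time."""
        for (lo, name), (hi, _) in zip(PERIODS, PERIODS[1:]):
            if lo <= year < hi:
                return name
        raise ValueError("outside the periodization")

    print(period_of(1452))  # Medieval
    print(period_of(1454))  # Early Modern, though 1454 differed little from 1452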

All the problems of philosophical logic that present themselves to the subtle and perceptive mind when contemplating a spatial truncation, as, for example, in defining the Pacific Ocean — where exactly does it end in relation to the Indian Ocean? — occur in spades in making a temporal truncation. Yet if rational inquiry is to begin (and here we do not even raise the question of where rational inquiry ends) we must make such truncations, and our initial truncations are crude and mostly ad hoc concessions to human finitude. Thus I introduce the truncation principle as an explicit justification of truncations as we employ them throughout reasoning.

And, as if we hadn’t already laid up enough principles and distinctions for today, here is a principle of principles of distinctions: every principled distinction implies a fallacy that takes the form of neglecting this distinction. With an ad hoc distinction there is no question of fallacy, because there is no principle to violate. Where there is a principle involved, however, the violation of the principle constitutes a fallacy.

Contrariwise, every fallacy implies a principled distinction that ought to have been made. If we observe the appropriate principled distinctions, we avoid fallacies, and if we avoid fallacies we appropriately distinguish that which ought to be distinguished.

. . . . .

Thursday


Yesterday in A Review of Iranian Capabilities I mentioned the current foreign policy debate over the idea of a preventative war against Iran and recounted some of Iran’s known capabilities.

Reflecting on these attempts to make a case for or against preventative war with Iran, I was led back in my thoughts to a post I wrote last summer about what I called The Possible War. In this post I tried to emphasize that ex post facto criticisms of conduct in war — like criticisms of the Allies’ strategic bombing of Germany during the Second World War — presume a parity of capability and opportunity that almost never obtains in fact. Military powers do not engage in ideal wars that meet certain standards; they fight the war that they are able to fight, and this is the possible war.

Moving beyond a description of the possible war, the idea can be formulated as a principle, the principle of possible wars, and the principle is this: in any given conflict, each party to the conflict will fight the war that it is possible for that party to fight. In other words, no party to a conflict is going to fight a war that it is impossible for it to fight. In other words again, no party to a conflict is going to fight a losing war on the basis of peer-to-peer engagement if there is a non-peer strategy that will win the war. This sort of thing makes good poetry, as in The Charge of the Light Brigade, but in so far as it ensures failure in a campaign, it exerts a strong negative selection over military powers that pursue such policies.

The military resources of a given political entity (whether state or non-state entity) will always be employed so as to maximize advantage: the most effective available means will be brought to bear against the adversary’s most vulnerable available target. This is what makes war brutal and ugly, and this is why it has been said since ancient times that inter arma enim silent leges (in time of war, the laws fall silent).

There is a sense in which this principle of possible wars is simply an extension of the classic twin principles of mass and economy of forces. Each party to a conflict concentrates as much force as it can at a point it believes the adversary to be most vulnerable, and the enemy is simultaneously trying to do the same thing. If we think of concentration as concentration of effort, rather than mere numbers of battalions, and we think of vulnerability as any way in which an enemy can be defeated, and not merely a point on the line that is insufficiently defended, then we have the principle of possible war.

War is not always and inevitably brutal and ugly, and the principle of possible wars helps us to understand why this is the case. Previously in Civilization and War as Social Technologies I discussed how in particular historical circumstances warfare can become highly ritualized and stylized. There I cited the non-Western examples of Samurai sword fighting, retained in Japan long after the rest of the world was fighting with guns, and the Aztec Flower Battle, which combined religious rituals of sacrifice with the honor and prestige requirements of combat. However, there are Western precedents for ritualized combat as well, as when, in the ancient world, each party to a conflict would choose an individual champion and the issue was decided by single combat.

Another example of semi-ritualized forms of combat in Western history might include the early modern Condottieri wars in the Italian peninsula. Before the large scale armies of the French and the Spanish crossed the Alps to pillage and plunder Italy, the peninsula was dominated by wealthy city-states who hired mercenary armies under Condottieri captains to wage war against each other. With two mercenary armies facing each other on the battlefield, there was a strong incentive to minimize casualties, and there are some remarkable stories from the era of nearly bloodless battles.

Another example would be the maneuver warfare of small, professional European armies during the Enlightenment, who sometimes managed to fight limited wars with a minimal impact on non-combatants. This may well have been a cultural response to the horrific slaughter of the Thirty Years War.

In these latter two examples, limited wars were the possible war because a sufficient number of social conventions and normative presuppositions were shared by all parties to the conflict, who were willing to abide by the results of the contest even when a more ruthless approach might have secured a Pyrrhic victory. Under these socio-political conditions, limited wars were possible wars because all parties recognized that it was in their enlightened self-interest not to escalate wars beyond a certain threshold. Such social conventions touching even upon the conduct of war can only be effective in a suitably homogenous cultural region.

After the escalating total wars leading up to the middle of the twentieth century, limited wars emerged again out of fear of crossing the nuclear threshold. Parties to the conflicts were willing to abide by the issue of these limited wars because the alternative was mutually assured destruction. Also, all parties to proxy wars knew they would have another chance at achieving their goals in another theater when the proxy war shifted to another region of the world. Thus limited wars became possible wars because the alternative was unthinkable.

. . . . .
