The Genocidal Species

15 March 2014

Saturday


hominid-evolution

Homo sapiens is the genocidal species. I have long had it on my mind to write about this. I have the idea incorporated in an unpublished manuscript, but I don’t know if it will ever see the light of day, so I will give a brief exposition here. What does it mean to say that Homo sapiens is the genocidal species (or, if you prefer, a genocidal animal)?

Early human history is a source of controversy that exceeds the controversy over the scientific issues at stake. It is not difficult to understand why this is the case. Controversies over human origins are about us, what we are as a species, notwithstanding the obvious fact that we are in no way limited by our past, and we may become many things that have no precedent in our long history. Moreover, the kind of evidence that we have of human origins is not such as to provide us with the kind of narrative that we would like to have of our early ancestors. We have the evidence of scientific historiography, but no poignant human interest stories. In so far as our personal experience of life paradoxically provides the big picture narrative by which we understand the world (a point I tried to make in Kierkegaard and Futurism), the absence of a personal account of our origins is an ellipsis of great consequence.

To assert that humanity is a genocidal species is obviously a tendentious, if not controversial, claim to make. I make this claim partly because it is controversial, because we have seen the human past treated with excessive care and caution, because, as I said above, it is about us. We don’t like to think of ourselves as intrinsically genocidal in virtue of our biology. Indeed, when a controversial claim such as this is made, one can count on such a claim being dismissed not on grounds of evidence, or the lack thereof, but because it is taken to imply biological determinism. According to this reasoning, an essentialist reading of our history shows us that we are genocidal, therefore we cannot be anything other than genocidal. Apart from being logically flawed, this response misses the point and fails to engage the issue.

Yet, in saying that man is a genocidal species, I am obviously making an implicit reference to a long tradition of pronouncing humanity to be this or that, as when Plato said that man is a featherless biped. This, by the way, provides a rare glimpse into Plato’s naturalism. There is a story that, hearing this definition, Diogenes of Sinope plucked a chicken and brought it to Plato’s Academy, saying, “Here is Plato’s man.” (Perhaps he should have said, “Ecce homo!”) This, in turn, reveals Diogenes’ non-naturalism (as uncharacteristic as Plato’s naturalism). Plato is supposed to have responded by adding to his definition, “with broad, flat nails.”

Aristotle, most famously of all, said that man is by nature a political animal. This has been variously translated from the Greek as, “Man is by nature an animal that lives in a polis,” and, “Man is by nature a social animal.” This I do not dispute. However, once we recognize that Homo sapiens is a social or political animal (and Aristotle, as the Father of the Occidental sciences, would have enthusiastically approved of the transition from “man” to “Homo sapiens”), we must then take the next step and ask what exactly is the nature of human sociability, or human political society. What does it mean for Homo sapiens to be a political animal?

If Clausewitz was right, political action is one pole of a smoothly graduated continuum, the other pole of which is war, because, according to Clausewitz, war is the continuation of policy by other means (cf. The Clausewitzean Continuum). This claim is equivalent to the claim that politics is the continuation of war by other means (the Foucauldian inversion of Clausewitz). Thus war and politics are substitutable salva veritate, so that Homo sapiens the political animal is also Homo sapiens the military animal.

I don’t know if anyone has ever said, man is a military animal, but Freud came close to this in a powerful passage that I have quoted previously (in A Note on Social Contract Theory):

“…men are not gentle creatures who want to be loved, and who at the most can defend themselves if they are attacked; they are, on the contrary, creatures among whose instinctual endowments is to be reckoned a powerful share of aggressiveness. As a result, their neighbor is for them not only a potential helper or sexual object, but also someone who tempts them to satisfy their aggressiveness on him, to exploit his capacity for work without compensation, to use him sexually without his consent, to seize his possessions, to humiliate him, to cause him pain, to torture and to kill him. Homo homini lupus. Who, in the face of all his experience of life and of history, will have the courage to dispute this assertion? As a rule this cruel aggressiveness waits for some provocation or puts itself at the service of some other purpose, whose goal might also have been reached by milder measures. In circumstances that are favorable to it, when the mental counter-forces which ordinarily inhibit it are out of action, it also manifests itself spontaneously and reveals man as a savage beast to whom consideration towards his own kind is something alien.”

Is it unimaginable that it is this aggressive instinct, at least in part, that made it possible for Homo sapiens to out-compete every other branch of the hominid tree, and to leave itself as the only remaining hominid species? We are, existentially speaking, El último hombre — the last man standing.

What was the nature of the competition by which Homo sapiens drove every other hominid to extinction? Over the multi-million-year history of hominids on Earth, it seems likely that this competition assumed every possible form at one time or another. Some anthropologists have observed that a differential in reproductive success only marginally favoring our species over other hominids would have, over time, guaranteed our demographic dominance. This gives us the comforting picture of one hominid species peacefully and very slowly supplanting another. No doubt some of Homo sapiens’ triumphs were of this nature, but there must also have been, at some time in the deep time of our past, violent and brutal episodes when we actively drove our fellow hominids into extinction — much as, throughout the later history of Homo sapiens, one community frequently massacred another.

A recent book on genocide, The Specter of Genocide: Mass Murder in Historical Perspective (edited by Robert Gellately, Clark University, and Ben Kiernan, Yale University), is limited in its “historical perspective” to the twentieth century. I think we must go much deeper into our history. In an even larger evolutionary framework than that employed above, if we take the conception of humanity as a genocidal species in the context of Peter Ward’s Medea Hypothesis, according to which life itself is biocidal, then humanity’s genocidal instincts are merely a particular case (with the added element of conscious agency) of a universal biological imperative. Here is how Ward defines his Medea Hypothesis:

Habitability of the Earth has been affected by the presence of life, but the overall effect of life has been and will be to reduce the longevity of the Earth as a habitable planet. Life itself, because it is inherently Darwinian, is biocidal, suicidal, and creates a series of positive feedbacks to Earth systems (such as global temperature and atmospheric carbon dioxide and methane content) that harm later generations. Thus it is life that will cause the end of itself, on this or any planet inhabited by Darwinian life, through perturbation and changes of either temperature, atmospheric gas composition, or elemental cycles to values inimical to life.

Ward, Peter, The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? Princeton and Oxford: Princeton University Press, 2009, p. 35

Ward goes on to elaborate his Medea Hypothesis in greater detail in the following four hypotheses:

1. All species increase in population not only to the carrying capacity as defined by some or a number of limiting factors, but to levels beyond that capacity, thus causing a death rate higher than would otherwise have been dictated by limiting resources.

2. Life is self-poisoning in closed systems. The byproduct of species metabolism is usually toxic unless dispersed away. Animals produce carbon dioxide and liquid and solid waste. In closed spaces this material can build up to levels lethal either through direct poisoning or by allowing other kinds of organisms living at low levels (such as the microbes living in animal guts and carried along with fecal wastes) to bloom into populations that also produce toxins from their own metabolisms.

3. In ecosystems with more than a single species there will be competition for resources, ultimately leading to extinction or emigration of some of the original species.

4. Life produces a variety of feedbacks in Earth systems. The majority are positive, however.

Ward, Peter, The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? Princeton and Oxford: Princeton University Press, 2009, pp. 35-36
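Ward’s first hypothesis can be made concrete with a toy calculation. Here, purely as an illustrative sketch of my own (it is not taken from Ward’s book), is a minimal model in Python assuming a discrete logistic population in which crowding is felt one generation late; all parameters are illustrative assumptions. With the lag, the population overshoots its carrying capacity and then suffers a die-off steeper than limiting resources alone would dictate, which is the pattern hypothesis 1 describes.

def delayed_logistic(n0, r, K, steps):
    """Population trajectory where crowding is felt one generation late."""
    history = [n0, n0]  # population at t-1 and t
    for _ in range(steps):
        n_prev, n = history[-2], history[-1]
        # growth is limited by the previous generation's density, not the current one
        n_next = n + r * n * (1 - n_prev / K)
        history.append(max(n_next, 0.0))
    return history[1:]

trajectory = delayed_logistic(n0=10.0, r=1.1, K=1000.0, steps=40)
print(f"carrying capacity: 1000, peak population: {max(trajectory):.0f}")
# The peak exceeds K: overshoot, followed by a crash and a death rate higher
# than the limiting resources alone would have dictated (Ward's hypothesis 1).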

The experience of industrial-technological civilization has added a new dimension to hypothesis 2 above, as industrial processes and their wastes have been added to biological processes and their wastes, leading to forms of poisoning that do not occur unless facilitated by civilization. Moreover, a corollary to hypothesis 3 above (call it 3a, if you like) might be formulated such that those species within an ecosystem that seek to fill the same niche (i.e., that feed off the same trophic level) will be in more direct competition than those species feeding off distinct trophic levels. In this way, multiple hominid species that found themselves in the same ecosystem would be trying to fill the same niche, leading to extinction or emigration. Once Homo sapiens achieved extensive totality in the distribution of its species range, however, there was nowhere else for competitors to emigrate, so those that were out-competed simply went extinct.
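Corollary 3a can likewise be illustrated with a minimal sketch (again my own illustration, not Ward’s), using the standard Lotka-Volterra competition equations: two populations draw on nearly the same niche (competition coefficients close to 1), one has only a marginal advantage in carrying capacity, and with nowhere to emigrate the weaker competitor dwindles toward extinction. All parameters here are illustrative assumptions.

def compete(n1, n2, r=0.1, K1=1000.0, K2=950.0, a12=0.98, a21=0.98, steps=10000):
    """Discrete-time Lotka-Volterra competition between two populations."""
    for _ in range(steps):
        dn1 = r * n1 * (1 - (n1 + a12 * n2) / K1)  # species 1 crowded by both populations
        dn2 = r * n2 * (1 - (n2 + a21 * n1) / K2)  # species 2 likewise
        n1 = max(n1 + dn1, 0.0)
        n2 = max(n2 + dn2, 0.0)
    return n1, n2

n1_final, n2_final = compete(n1=100.0, n2=100.0)
print(f"species 1: {n1_final:.1f}, species 2: {n2_final:.1f}")
# With near-total niche overlap (a12 = a21 = 0.98), a marginal advantage in
# carrying capacity (K1 > K2) is enough to exclude species 2 entirely.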

Ward was not the first to focus on the destructive aspects of life. I have previously quoted the great biologist Ernst Haeckel, who defined ecology as the science of the struggle for existence (cf. Metaphysical Ecology Reformulated), and of course in the same vein there is the whole tradition of nature red in tooth and claw. Such visions of nature no longer hold the attraction that they exercised in the nineteenth century, and such phrases have been criticized, but it may be that these expressions of the deadly face of nature did not go far enough.

There is a sense in which all life is genocidal, and this is the Medea Hypothesis; what distinguishes human beings is that we have made genocide planned, purposeful, systematic, and conscious. The genocidal campaigns that have punctuated modern history, and especially those of the twentieth century, represent the conscious implementation of Medean life. We knowingly engage in genocide. Genocide is now a policy option for political societies, and in so far as we are political animals all policy options are “on the table,” so to speak. It is this that makes us the uniquely genocidal species.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .


Thursday


The Classical Greek Intellectual Foundations of Agrarian-Ecclesiastical Civilization


One of Voltaire’s most famous witticisms was that the Holy Roman Empire was neither holy, nor Roman, nor an empire. Such contradictions abound in history; as Barbara Tuchman noted, we should expect them rather than be offended by them: “Contradictions… are part of life, not merely a matter of conflicting evidence. I would ask the reader to expect contradictions, not uniformity.” (I just happened to notice today that Michael Shermer quotes this passage in a YouTube video.) In this spirit of historical contradiction it could be observed that the intellectual framework of agrarian-ecclesiastical civilization was neither agrarian nor ecclesiastical, but rather reflected the high point of Greek civilization in classical antiquity.

The intellectual space of agrarian-ecclesiastical civilization — the paradigm if you prefer Kuhnian language, or the epistēmē if you prefer the terminology of Foucault — was the result of what we might call the “world-builders” of classical antiquity, of whom I would like to call attention to three: Aristotle, Euclid, and Ptolemy.

Aristotle, Euclid, and Ptolemy were the architects of the “closed world” that Alexandre Koyré famously contrasted to the infinite universe that was to emerge (slowly, gradually, and at times painfully, as Koyré would demonstrate in detail) from the scientific revolution as played out in the work of Copernicus, Kepler, Galileo, and many others (the architects of the infinite universe):

The infinite cannot be traversed, argued Aristotle; now the stars turn around, therefore… But the stars do not turn around; they stand still, therefore… It is thus not surprising that in a rather short time after Copernicus some bold minds made the step that Copernicus refused to make, and asserted that the celestial sphere, that is the sphere of the fixed stars of Copernican astronomy, does not exist, and that the starry heavens, in which the stars are placed at different distances from the earth, “extendeth itself infinitely up.”

Alexandre Koyré, From the Closed World to the Infinite Universe, Baltimore, Md.: The Johns Hopkins Press, 1957, p. 35

Aristotle was the comprehensive philosopher who not only had respect for empirical observation (something Plato consistently devalued) but also formulated a system of deductive logic that made it possible for him to connect empirical observations together into a theoretical structure with great explanatory power. Aristotle, then, did not deal with isolated facts, but with theories. Each new fact, each new observation, can in this way be fit within the overall structure of a theory which in Aristotle extends from the summum genus at the top to the infima species at the bottom. There is a place for everything and everything is in its place. The much later conception of a “great chain of being” — an idea central to later agrarian-ecclesiastical civilization — has its origins in the Aristotelian construct.

Euclid and Ptolemy, each comprehensive within his own discipline, were nowhere near as comprehensive as Aristotle; it was Aristotle’s philosophy that was the system of the world to which Euclid and Ptolemy contributed. Although Aristotle distinguished many sciences later recognized as independent intellectual disciplines, only two of them came to be systematically developed in antiquity (with the possible exception of Aristotle’s own research in biology). Mathematics and astronomy were those two sciences, developed in antiquity as sciences recognizable as such, and still recognizable as sciences today.

While later thought, especially medieval thought, made much of the theory of the syllogism found in Aristotle’s Prior Analytics, Aristotle’s theory of science in the Posterior Analytics received much less attention. It was, nevertheless, the theoretical basis of Euclid’s systematic exposition of geometry on the basis of first principles. Euclid brought Aristotle’s world-building and logical rigor into mathematics, and wrote a book on geometry that was used as a textbook well into the twentieth century. We can today read ancient Greek mathematicians as contemporaries, and we can learn something from them; we can similarly read Ptolemy’s treatise on astronomy, the Almagest, as a serious work of astronomy, though we would have less to learn from it than from ancient mathematics.

Aristotle, Euclid, and Ptolemy date (roughly) from what Jaspers called the Axial Age; while peoples elsewhere in the world of maturing agrarian-ecclesiastical civilization were creating religions, the Greeks were creating philosophy of science, and this proved to be a lasting contribution. This was the axialization of Western civilization during the period of agrarian-ecclesiastical civilization.

Aristotle provided the philosophical foundations for the thought of later Western civilization up until the scientific revolution, and even after modern science began to change the world, Aristotle’s influence continued to echo in the work of later scientists. Even up into the early modern period, when we see the first signs of modern science taking shape in Galileo’s work on physics and cosmology, scientists were still writing their treatises in the Euclidean manner. Galileo’s early works on motion and mechanics are almost scholastic in tone, but are not as well remembered as his Sidereal Messenger or Dialogue Concerning the Two Chief World Systems. Even Newton’s Principia is laid out more geometrico.

The emergence of industrial-technological civilization from agrarian-ecclesiastical civilization was a process that began with the scientific revolution and continues to this day as the consequences of the industrial revolution continue to unfold, continuing to change the world in which we live. The transitional periods between macro-historical periods — which I have called macro-historical revolutions — are themselves periods of hundreds of years in duration. In fact, the first such macro-historical revolution, which inaugurated the macro-historical division of agrarian-ecclesiastical civilization, may have been a transition measurable in thousands of years.

In my immediately previous post, The Agrarian-Ecclesiastical Thesis, I suggested that, given the counter-market, counter-developmental mechanisms institutionalized in agrarian-ecclesiastical civilization, its characteristic failure is to allow a revolution to take place. The long history of agrarian-ecclesiastical civilization — which might be stretched to as much as 15,000 years, depending upon when we date the first domestication of crops and the first settled, quasi-urban villages enabled by domesticated agriculture — witnessed many revolutions, all of which failed except for the last, which issued in the catastrophic collapse of agrarian-ecclesiastical civilization and the emergence of industrial-technological civilization.

Given that I have called contemporary civilization “industrial-technological civilization” and the civilization that preceded it “agrarian-ecclesiastical civilization,” and given that the latter so closely conforms to the distinction between economic infrastructure and ideological superstructure, am I trying to make a point about the overall structure of civilizations, even civilizations that inhabit distinct macro-historical divisions?

The source of Marx’s distinction between economic infrastructure (or economic base) and ideological superstructure is to be found in his A Contribution to The Critique of Political Economy. It is worth revisiting Marx’s formulation. The crucial passage is as follows:

In the social production which men carry on they enter into definite relations that are indispensable and independent of their will; these relations of production correspond to a definite stage of development of their material powers of production. The sum total of these relations of production constitutes the economic structure of society — the real foundation, on which rise legal and political superstructures and to which correspond definite forms of social consciousness. The mode of production in material life determines the general character of social, political, and spiritual processes of life. It is not the consciousness of men that determines their existence, but, on the contrary, their social existence determines their consciousness.

Marx, Karl, A Contribution to The Critique of Political Economy, translated from the Second German Edition by N. I. Stone, Chicago: Charles H. Kerr & Company, 1911, Author’s Preface, pp. 11-12

Marx’s formulation is a straightforward social implementation of a materialist theory of the relation of mind to body, so that we can say at least that Marx was a consistent materialist. Marx’s consistent materialism yields consistent results in the analysis of societies, which in some instances seems to be highly successful and offers us some insight. But not always. No schema can be quite true when stretched to fit every possible instance, and this is true of Marx’s consistent materialism. It collapses when confronted by societies in which there is no distinction between economics and ideology (each of these terms broadly construed).

It would be an interesting intellectual exercise to formulate a binomial nomenclature of civilizations characterizing each in terms of its economic infrastructure and ideological superstructure, but this is too schematic to be quite true. One point I have tried to argue several times (but for which I still lack a definitive formulation) is that distinct civilizations are not distinct implementations of one and the same idea of civilization, but rather distinct civilizations embody distinct ideas as to the nature and aims of civilization. So while “agrarian-ecclesiastical civilization” nicely fits the economic infrastructure/ideological superstructure model, “industrial-technological civilization” does not fit as nicely. While there is a sense in which technology has become an ideology, it is in no sense an ideological superstructure in the same way that institutionalized religion served as the ideological superstructure of agrarian-ecclesiastical civilization.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

The Preemption Hypothesis

20 October 2012

Saturday


Three Little Words: “Where are they?”

In The Visibility Presumption I examined some issues in relation to the response to the Fermi paradox by those who claim that a technological singularity would likely overtake any technologically advanced civilization. I don’t see how the technological singularity visited upon an alien species makes them any less visible (in the sense of “visible” relevant to SETI) nor any less likely to be interested in exploration, adventure, or the quest for scientific knowledge — and finding us would constitute a major scientific discovery for some xenobiological species that had matured into a peer industrial-technological civilization.

The more I think about the Fermi paradox — and I have been thinking a lot about it lately — and the more I contextualize the Fermi paradox in my own emerging theory of civilization — which is a theory I am attempting to formulate in the purest tradition of Russellian generality so that it is equally applicable to human civilization and to any non-human civilization — the more I have come to think that our civilization is relatively isolated in the cosmos, being perhaps one of the few civilizations, or the only civilization, in the Milky Way, and one among only a handful of civilizations in the local cluster of galaxies or our supercluster.

Having an opinion on the Fermi paradox, and even making an attempt to argue for a particular position, does not however relieve one of the intellectual responsibility of exploring all aspects of the paradox. I have also come to think, while reflecting on the Fermi paradox, that the paradox itself has been fruitful in pushing those who care to think about it toward better formulations of the nature and consequences of industrial-technological civilization and of interstellar civilization — whether that of a supposed xenocivilization, or that of ourselves now and in the future.

The human experience of economic and technological growth in the wake of the industrial revolution has made us aware that if there are other peer species in the universe, and if these peer species undergo a process of the development of civilization anything like our own, then these peer species may also have experienced or will experience the escalating exponential growth of economic organization and technological complexity that we have experienced. Looking at our own civilization, again, it seems that the natural telos of continued economic and technological development — for we see no natural or obvious impediment to such continued development — is for human civilization to extend itself beyond the confines of the Earth and to establish itself throughout the solar system and eventually throughout the galaxy and beyond. This natural teleology has been called “The Expansion Hypothesis” by John M. Smart. Smart credits the expansion hypothesis to Kardashev, and while it is implicit in Kardashev, Kardashev himself does not formulate the idea explicitly and does not use the term “expansion hypothesis.”

Aristotle as depicted by Raphael in the Vatican stanze.

The natural teleology of civilization

I have taken the term “natural teleology” from contemporary philosophical expositions of Aristotle’s distinction between final causes and efficient causes. We can get something of a flavor of Aristotle’s idea of natural teleology (without going too deep into the controversy over final causes) from this paragraph from the second book of Aristotle’s Physics:

We also speak of a thing’s nature as being exhibited in the process of growth by which its nature is attained. The ‘nature’ in this sense is not like ‘doctoring’, which leads not to the art of doctoring but to health. Doctoring must start from the art, not lead to it. But it is not in this way that nature (in the one sense) is related to nature (in the other). What grows qua growing grows from something into something. Into what then does it grow? Not into that from which it arose but into that to which it tends. The shape then is nature.

Aristotle is a systematic philosopher, in whose work any one doctrine is related to many other doctrines, so that an excerpt really doesn’t do him justice; if the reader cares to, he or she can look into this more deeply by reading Aristotle and his commentators. But I must say this much in elaboration: the idea of natural teleology is problematic because it suggests a teleological conception of the whole of nature and all of its parts, and ever since Darwin we have understood that many claims to natural teleology are simply the expression of anthropic bias.

Still, kittens grow into cats and puppies grow into dogs (if they live to maturity), and it is pointless to deny this. What is important here is to tightly circumscribe the idea of natural teleology so that we don’t throw out the baby with the bathwater. The difficulty comes in distinguishing the baby from the bathwater in which the baby is immersed. Unless we want to end up with the idea of a natural teleology for human beings and the lives they live — this was the “human nature” that Sartre emphatically denied — we must deny final causes to agents, or find some other principle of distinction.

Are civilizations a natural kind for which we can posit a natural teleology, i.e., a form or a nature toward which they naturally tend as they grow and develop? My answer to this is ambiguous, but it is a principled ambiguity: yes and no. Yes, because some aspects of civilization are clearly developmental, as when an institution is growing toward its fulfillment; no, because other aspects of civilization are clearly non-developmental and discontinuous. But civilization is so complex a whole that there is no simple way to separate the developmental and the non-developmental aspects of any one given civilization.

When we examine high points of civilization like Athens under Pericles or Florence during the Renaissance, we can recognize after the fact the slow build up to these cultural heights, which cannot clearly be distinguished from economic, civil, urban, and military development. The natural teleology of a civilization is the attainment of excellence in its particular mode of being, just as Aristotle said that the great-souled man aims at excellence in his life, but the path to that excellence is as varied as the different lives of individuals and the different histories of civilizations (Sam Harris might call them distinct peaks on the moral landscape).

Now, I don’t regard this brief exposition of the natural teleology of civilization as anything like a definitive formulation, but a definitive formulation of something so complex and subtle would require years of work. I will save that for another time, counting instead on the reader’s charity (if not indulgence) to grant me the idea that, at least in some respects, civilizations tend toward fulfilling an apparent telos implicit in their developmental history.

Early industrialization often had an incongruous if not surreal character, as in this painting of traditional houses silhouetted against the Madeley Wood Furnaces at Coalbrookdale; the incongruity and surrealism are a function of historical preemption.

The Preemption Hypothesis

What I am going to suggest here as another response to the Fermi paradox will sound to some like just another version of the technological singularity response, but I want to try to show that what I am suggesting is a more general conception than that — a potential structural failure of civilization, as it were — and as a more comprehensive concept the technological singularity response to the Fermi paradox can be subsumed under it as a particular instance of civilizational preemption.

The more general conception of a response to the silentium universi I call the preemption hypothesis. According to the preemption hypothesis, the ordinary course of development of industrial-technological civilization — which, if extrapolated, would seem to point to a nearly inevitable expansion of that civilization beyond its home planet and eventually across interstellar space as its natural teleology — is preempted by the emergence of a completely different kind of civilization, a radically different kind of civilization, or by post-civilization, so that the expected natural teleology of the preempted civilization is interrupted and never comes to fruition.

Thus “the lights go out” for a given alien civilization not because that civilization destroys itself (the Doomsday argument, Solution no. 27 in Webb’s book) and not because it collapses into permanent stagnation or even catastrophic civilizational failure (existential risks outlined by Nick Bostrom), and not because it completes a natural cycle of growth, maturity, decay, and death, but rather because it moves on to the next stage of social institution that lies beyond civilization. In simplest terms, the preemption hypothesis is that industrial-technological civilization, for which the expansion hypothesis holds, is preempted by post-civilization, for which the expansion hypothesis no longer holds. Post-civilization is a social institution derived from civilization but no longer recognizably civilization.

The idea of a technological singularity is one kind of potential preemption of industrial-technological civilization, but certainly not the only possible kind of preemption. There are many possible forms of civilizational preemption, and any attempted list of possible preemptions is limited only by our imagination and our parochial conception of civilization, the latter being informed exclusively by human civilization. It is entirely possible, as another example of preemption, that once a civilization attains a certain degree of technological development, everyone recognizes the pointlessness of the whole endeavor, all the machines are shut down, and the entire population turns to philosophical contemplation as the only worthy undertaking in life.

Acceleration and Preemption

I have previously argued that civilizations come to maturity in an Axial Age. The Axial Age is a conception due to Karl Jaspers, but I have suggested a generalization that holds for any society that achieves a sufficient degree of development and maturity. What Jaspers postulated for agricultural civilizations, and understood to be a turning point for the world entire, I believe holds for most civilizations, and I believe that each stage in the overall development of civilization may have such a turning point.

Also, the history of human civilization reveals an acceleration. Nomadic hunter-gatherer society required hundreds of thousands of years before it matured into a condition capable of producing the great cave paintings of the upper Paleolithic (which I call the Axialization of the Nomadic Paradigm). The agricultural civilizations that superseded Paleolithic societies with the Neolithic Agricultural Revolution required thousands of years to mature to the point of producing what Jaspers called an Axial Age (The Axial Age for Jaspers).

Industrial civilization has not yet produced an industrialized axialization (though we may look back someday and understand one to have been achieved in retrospect), but the early modern civilization that seemed to be producing a decisively different way of life than the medieval period that preceded it experienced a catastrophic preemption: it did not come to fulfillment on its own terms. In Modernism without Industrialism I argued that modern civilization was effectively overtaken by the sudden and catastrophic emergence of industrialization, which set civilization on an entirely new course.

At each stage of the development of human society the maturation of that society, measured by the ability of that society to give a coherent account of itself in a comprehensive cosmological context (also known as mythology), has come sooner than the last, with the abortive civilization of modernism, Enlightenment, and the scientific revolution derailed and suddenly superseded by a novel and unprecedented development from within civilization. Modernism was preempted by accelerating events, and, specifically, by accelerating technology. It is possible that there are other forms of accelerating development that could derail or preempt that course of development that at present appears to be the natural teleology of industrial-technological civilization.

The Dystopian Hypothesis

Because the most obvious forms of the preemption hypothesis, in terms of the prospects for civilization most widely discussed today, would include the technological singularity, transhumanism, and The Transcension Hypothesis, and also because of the human ability (probably reinforced by the survival value of optimism) to look on the bright side of things, we may lose sight of equally obvious suboptimal forms of preemption. Suboptimal forms of civilizational preemption, in which civilization does not pass on to developments of greater complexity and more technically difficult achievement, could be separately identified as the dystopian hypothesis.

In Miserable and Unhappy Civilizations I suggested that the distinction Freud made between neurotic misery and ordinary human unhappiness can be extended to a distinction between a civilization in the grip of neurotic misery and a civilization experiencing ordinary civilizational unhappiness. I cited the example of the religious wars of early modern Europe as an example of a civilization experiencing neurotic misery (and later went on to suggest that contemporary Islam is a civilization in the grip of neurotic misery). It is possible that neurotic misery at the civilizational level could be perpetuated across time and space so that neurotic misery became the enduring condition of civilization. (This might be considered an instance of what Nick Bostrom called “flawed realization” in his analysis of existential risk.)

It would likely be the case that a neurotically miserable civilization — which we might also call a dystopian civilization, or a suboptimal civilization — would be incapable of anything beyond perpetuating its miserable existence from one day to the next. The dystopian hypothesis could be assimilated to solution no. 23 in Webb’s book, “They have no desire to communicate,” but there may be many reasons that a civilization lacks a desire to communicate over interstellar distances with other civilizations, so I think that the dystopian lack of motivation deserves its own category as a response to the Fermi paradox.

Whether or not chronic and severe dystopianism could be considered a post-civilization institution and therefore a preemption of industrial-technological civilization is open to question. I will think about this.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .


Areté and Selection

12 January 2012

Thursday


Aristotle as depicted in a medieval woodcut. Aristotle was a central influence shaping the intellectual life of the Middle Ages, the direct ancestor to our own civilization.

In several posts I have written about the Aristotelian conception of excellence, i.e., Areté (ἀρετή in the original Greek, and sometimes translated as “virtue,” just like Machiavelli’s Virtù). Throughout Aristotle’s ethics there is a clear implication that human beings take pleasure in achieving excellence, and in so doing experience a proper sense of pride in their accomplishment. Here is Aristotle’s take on this:

“…the Good of man is the active exercise of his soul’s faculties in conformity with excellence or virtue, or if there be several human excellences or virtues, in conformity with the best and most perfect among them…”

Aristotle, Nicomachean Ethics, Rackham translation of I.7.1098a

Excellence is not only a virtue, and the good for man, but is also desirable:

“…the activities of the part of the soul that is by nature superior must be preferable for those persons who are capable of attaining either all the soul’s activities or two out of the three; since that thing is always most desirable for each person which is the highest to which it is possible for him to attain.”

In regard to the “two out of three” reference in the above, the Aristotle text at Perseus Digital Library has this footnote:

i.e. the two lower ones, the three being the activities of the theoretic reason, of the practical reason, and of the passions that although irrational are amenable to reason.

Aristotle, Politics, Book 7, 1333a

This has been called Aristotle’s principle of perfection by Fred Miller, who cites a different translation of this same passage. In his Stanford Encyclopedia of Philosophy article on Aristotle’s Political Theory, Miller includes a list of presuppositions of Aristotle’s Politics, which names the Principle of teleology as the first of Aristotle’s presuppositions:

Principle of teleology Aristotle begins the Politics by invoking the concept of nature (see Political Naturalism). In the Physics Aristotle identifies the nature of a thing above all with its end or final cause (Physics II.2.194a28–9, 8.199b15–18). The end of a thing is also its function (Eudemian Ethics II.1.1219a8), which is its defining principle (Meteorology IV.12.390a10–11). On Aristotle’s view plants and animals are paradigm cases of natural existents, because they have a nature in the sense of an internal causal principle which explains how it comes into being and behaves (Phys. II.1.192b32–3). For example, an acorn has an inherent tendency to grow into an oak tree, so that the tree exists by nature rather than by craft or by chance. The thesis that human beings have a natural function has a fundamental place in the Eudemian Ethics II.1, Nicomachean Ethics I.7, and Politics I.2. The Politics further argues that it is part of the nature of human beings that they are political or adapted for life in the city-state. Thus teleology is crucial for the political naturalism which is at the foundation of Aristotle’s political philosophy. (For discussion of teleology see the entry on Aristotle’s biology.)

Now, there is no question that Aristotle was as Greek as any other Greek, and was very much a man of his time, even while being one of the greatest philosophers who ever lived. I make this obvious statement only because I must follow it with the observation that Aristotle’s very Greek philosophy was eventually appropriated by medieval European philosophers, and there it took on a second life, providing the theoretical framework for Scholasticism.

When we consider the role of Aristotle in medieval scholastic theology, and the ongoing role of medieval civilization in the constitution of our own industrial-technological civilization, we can see how hard it has been for us to overcome teleological thinking. Medieval civilization is the direct ancestor to our civilization; there was no catastrophic break between the Middle Ages and Modernity, but rather a smooth and continuous tradition that left many aspects of medievalism intact well into modern times.

Because of Aristotle’s pervasive teleology, he formulated his ethics and his politics teleologically, and transmitted them in this form to posterity, and it was in this form that medieval philosophers received them. In a Greek context I don’t think that Aristotle’s teleology had quite the meaning that it came to have, and indeed scholastic philosophers developed Aristotle’s distinction between potency and act into an entire metaphysics in its own right. In the context of a civilization almost entirely constructed upon an eschatological basis, Aristotle’s teleology and his conception of potential take on a meaning that they did not have for the Ancient Greeks.

Medieval philosophers more or less ran wild with Aristotle, and I think it is this potent admixture of Aristotelian teleology and medieval eschatology that gives us what Arthur Lovejoy famously called the Great Chain of Being — the metaphysical idea of an exhaustive hierarchy in which there is a place for everything, and everything is to be found in its proper place. Linnaeus eventually naturalized the great chain of being as a system of taxonomy, and the Linnaean system continues to be used today, with evolutionary phylogeny only gradually forcing revisions to cladistic systematics.

There was no need for Aristotle to look for any alternative formulation of his ethical and political views, and his metaphysics were adequate to the scientific knowledge of his time. But since Aristotle’s time we have learned a lot, and one of the most important things we have learned is how to explain the natural world, and our place within it, without recourse to teleology. Now, I really believe that if Aristotle himself could see the naturalistic account of the world produced by contemporary science he would be enthusiastic beyond words. But the Aristotle read through Scholastic spectacles might well be horrified. There is nothing in Aristotle that suggests to me that he was committed to any anti-naturalistic mode of thought, but Aristotelian doctrines do become anti-naturalistic in the hands of the Schoolmen.

Thus it is no surprise that Aristotle did not recognize that his conception of Areté has strongly selective connotations. If human beings find it desirable to engage in activities that are “the highest to which it is possible for him to attain,” I think you will find that people truly enjoy doing these things. You will also find that people generally don’t much enjoy doing things that they are not at all good at doing (acknowledging important recreational exceptions — I enjoy swimming but am in no sense good at it). On the whole, then, individuals will be attracted to activities at which they excel, while they will be indifferent to, or perhaps even distance themselves from, activities at which they do not excel.

Moreover, when you become highly competent in some activity, you want to engage in this activity with others who are also highly competent. For example, if you are really good at tennis, you will want to find other really good tennis players in order to play a really satisfying game. You would not be able to have a really good game with someone who knew nothing about tennis. And so there is an elaborate system of rating tennis players that can be used to pair players of a similar skill level. I once read a quote from an institutionalized philosopher (I think it was Richard Cartwright, but I’m not certain; I’ll look up the quote later) saying that he did not like to discuss philosophy with those who had not studied the subject. Same idea as the tennis ratings. An institutionalized philosopher is like a seeded tennis player.

The preference of the individual for activities at which the individual excels, further escalated by the preference of groups of individuals for others who have attained a similar level of excellence, produces a strong selective effect in human communities. This is one source — one source among many — of what has been called “the great sort.” People sort themselves into communities by temperament and inclination. Individual temperament and inclination tend to lead a person toward a particular community. One of the characteristics of a community is that it tends to be good at something, like Switzerland is good at clock and watch making. Something as large as a nation-state will contain considerable diversity, so there are probably many Swiss who have never assembled a watch in their life, but something as small as an academic or sports clique can be aggressively exclusive to a particular interest or talent.

There is something essentially anti-democratic — or, at least, non-egalitarian — about cliques, and, by extension, Aristotelian Areté. In its best exemplification it produces most of what is valuable in human civilization; in its worst exemplification it is insufferably aristocratic in the worst sense: ossified and unimaginative. Thus it has become one of the great problems of the age of popular sovereignty to understand and to cultivate excellence for its own sake. Popular culture is littered with failed examples of the cultivation of excellence in democratic societies. There are pathetic attempts to identify the talented when they are young, but this is almost always distorted by family connections and prejudice. And there are the monetary rewards that have created the contemporary “culture industry” and its “commodity music” (as well as “commodity painting” and commodity arts of all kinds), which is without any true cultural value. And there is the popular culture fascination with fame. The potent mixture of fame and money has meant that people become involved in cinema because they want to be rich and famous, and not because they can make great movies.

This is a problem, and it is an admittedly unresolved problem. Egalitarianism subordinates excellence to the lowest common denominator, and leaves us with a civilization not worthy of the name; aristocracy at times manages to advance excellence, but much more often it declines into mere inherited privilege, as bereft of cultural value as civilization today.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

North Korea and Areté

18 December 2011

Sunday


Aristotle as depicted by Raphael in the Vatican stanze.

Aristotle said that excellence is not an act, but a habit.

Aristotle is famous for saying, among other things, that excellence is not an act, but a habit. The Greek word for excellence — areté, ἀρετή — is also translated as “virtue,” so the same Aristotelian quote is sometimes rendered virtue is not an act but a habit. These variant translations are justified, as they point to the close interrelationship between excellence and virtue in Aristotle.

Kim Jong-il impoverished his people and left the world a worse place than he found it.

Aristotle was a common sense philosopher before there was any such thing as common sense philosophy, and his moral psychology is equally commonsensical. Aristotle maintained that people enjoy doing the things that they are good at doing, and so people make an effort at getting good at doing certain things so that they can enjoy these activities all the more. I think that this is largely correct.

The elder Kim looking frail a few months before dying; the younger Kim Jong-un, heir apparent, looking scared.

It would not be overstating the case to say that many individuals actively seek out opportunities for cultivating excellence. These opportunities can vary dramatically from place to place and time to time. Certain socio-economic systems will be richer in opportunities for certain kinds of excellence, so we find excellence unevenly distributed across history and geography.

The blackout of North Korea is both literal and metaphorical. If it was not the Hermit Kingdom in the past, under its communist autocrats it certainly has become a Hermit Kingdom today.

If Aristotle’s moral psychology is more or less correct, it would then stand to reason that we will find excellence-seeking individuals at all times and places, so that these efforts toward excellence are likely to be directed into whatever channels happen to be available.

They know how to goosestep in the DPRK.

Today the news has brought word that Kim Jong-il, the North Korean despot, has died. I have written repeatedly about Kim Jong-il and North Korea, as these provide a radical example of state failure. Even while North Korea is the paradigm case of a failed state, there is a sense in which its rulers have chosen to rule over just such a failed state, though we usually think of failure as an accident. This is failure by design. But what then is the design? In a word: the military.

While the Kim family has been the despotic focus of attention in North Korea, the country is really ruled by the military. And while it is often reported that North Korea maintains an enormous military of a million men under arms, it is rarely reported how the North Korean military is not merely large, but is also an innovative, aggressive, and essentially meritocratic institution (assuming you also know to say the right thing and not say the wrong thing).

Sometimes dictators will create a bloated military of conscripts for bragging rights, but this does not accurately describe the North Korean military. People who study such things say that the North Korean military is an impressive institution in terms of its discipline, organization, and training. While they cannot count on having the most advanced technology and the most sophisticated weapons systems, they can train relentlessly and by all reports they do.

Knowing this to be the case, I would guess that one of the few opportunities to pursue excellence in North Korea would be by way of entering the military. Another opportunity would be to be a gymnast, dancer, or other performer in the enormous spectacles that were staged for the “Dear Leader.” Those are narrow options, but in a nation-state in which saying the wrong thing can mean a life sentence to the gulag for you and your family, it is best not to even try to pursue excellence in literature, art, entertainment, or anything else that might “send a message” and therefore be considered dangerous or subversive. Sports are relatively safe, and we all recall how the Eastern Bloc Warsaw Pact nation-states cultivated extensive sports training programs during the Cold War.

If the only (safe) outlets for a people’s pursuit of excellence are the military or sports, this is going to profoundly affect the cultural life of a country. It is also going to channel a lot of very clever and innovative people into the military who would not, under other circumstances, choose a career in the military. The talents of these intelligent men and women, indirectly conscripted through the suppression of other activities by which they might have pursued other forms of excellence, are in North Korea at the service of the military and therefore at the service of the state. These are the people who rule North Korea.

How will the military rule North Korea after the death of Kim Jong-il? Will they allow his inexperienced son, Kim Jong-un, to assume his place as a figurehead, and continue to rule the country to the greater glory of the DPRK military at the expense of all else? Some self-perpetuating institutions do exactly this; they have an overriding incentive to maintain the system that has put them in control and which disproportionately benefits them at the expense of their countrymen.

There are problems, however. A military establishment of more than a million soldiers is a sufficiently large organization for factions to emerge, and for those factions to be quite large. If, say, each son of Kim Jong-il could command the loyalty of a third of the army, each would still have military forces far larger than those of most nation-states. Internal power struggles have almost certainly already begun, and the issue of these struggles is not likely to be decided for months, if not years. Kim Jong-un is still quite young, and much could happen before he has any opportunity to exercise control (or for others to exercise control in his name).

Internal power struggles in the DPRK could be an opportunity for outside powers to intervene, or to use whatever levers they have available to influence the outcome in North Korea. But China and South Korea, the geographical neighbors, will be most concerned about stability and avoiding a flood of refugees should a crisis emerge. Furthermore, China will not want to take any action that might be interpreted as condoning either interference in internal affairs or questioning the legitimacy of a one-party state, since either action could be turned around and used against China in turn.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Saturday


Plato and Aristotle by Raphael

Yesterday in Risk Management: A Personal View I asked the question, in relation to John Rawls’ thought experiment involving choosing a just society from behind a veil of ignorance, “How would Aristotle’s Great Souled Man judge a society from behind a veil of ignorance?” Here is Rawls’ original formulation of his thought experiment:

“…no one knows his place in society, his class position or social status, nor does anyone know his fortune in the distribution of natural assets and abilities, his intelligence, strength, and the like. I shall even assume that the parties do not know their conceptions of the good or their special psychological propensities. The principles of justice are chosen behind a veil of ignorance.”

And here is Aristotle’s description of the Great Souled Man:

“Now the proud man, since he deserves most, must be good in the highest degree; for the better man always deserves more, and the best man most. Therefore the truly proud man must be good. And greatness in every virtue would seem to be characteristic of a proud man. And it would be most unbecoming for a proud man to fly from danger, swinging his arms by his sides, or to wrong another; for to what end should he do disgraceful acts, he to whom nothing is great? If we consider him point by point we shall see the utter absurdity of a proud man who is not good. Nor, again, would he be worthy of honour if he were bad; for honour is the prize of virtue, and it is to the good that it is rendered. Pride, then, seems to be a sort of crown of the virtues; for it makes them greater, and it is not found without them. Therefore it is hard to be truly proud; for it is impossible without nobility and goodness of character. It is chiefly with honours and dishonours, then, that the proud man is concerned; and at honours that are great and conferred by good men he will be moderately pleased, thinking that he is coming by his own or even less than his own; for there can be no honour that is worthy of perfect virtue, yet he will at any rate accept it since they have nothing greater to bestow on him; but honour from casual people and on trifling grounds he will utterly despise, since it is not this that he deserves, and dishonour too, since in his case it cannot be just. In the first place, then, as has been said, the proud man is concerned with honours; yet he will also bear himself with moderation towards wealth and power and all good or evil fortune, whatever may befall him, and will be neither over-joyed by good fortune nor over-pained by evil. For not even towards honour does he bear himself as if it were a very great thing. Power and wealth are desirable for the sake of honour (at least those who have them wish to get honour by means of them); and for him to whom even honour is a little thing the others must be so too. Hence proud men are thought to be disdainful.”

This translation of Aristotle uses “pride” in place of “great souled” or “great minded,” but whatever the language, the idea comes through. Aristotle did not present the great souled man as a thought experiment, but he is an ideal of Aristotelian ethics, and we can treat him as a thought experiment in exemplification of Aristotelian virtue.

What struck me after I wrote that post about Aristotle’s Great Souled Man in relation to risk is that I had combined two distinct thought experiments into one. This in turn suggests further thought experiments. One of my favorite sections of Plato’s Republic is the description of the perfectly just and the perfectly unjust man in Book II:

“Now, if we are to form a real judgment of the life of the just and unjust, we must isolate them; there is no other way; and how is the isolation to be effected? I answer: Let the unjust man be entirely unjust, and the just man entirely just; nothing is to be taken away from either of them, and both are to be perfectly furnished for the work of their respective lives. First, let the unjust be like other distinguished masters of craft; like the skillful pilot or physician, who knows intuitively his own powers and keeps within their limits, and who, if he fails at any point, is able to recover himself. So let the unjust make his unjust attempts in the right way, and lie hidden if he means to be great in his injustice (he who is found out is nobody): for the highest reach of injustice is: to be deemed just when you are not. Therefore I say that in the perfectly unjust man we must assume the most perfect injustice; there is to be no deduction, but we must allow him, while doing the most unjust acts, to have acquired the greatest reputation for justice. If he have taken a false step he must be able to recover himself; he must be one who can speak with effect, if any of his deeds come to light, and who can force his way where force is required his courage and strength, and command of money and friends. And at his side let us place the just man in his nobleness and simplicity, wishing, as Aeschylus says, to be and not to seem good. There must be no seeming, for if he seem to be just he will be honored and rewarded, and then we shall not know whether he is just for the sake of justice or for the sake of honors and rewards; therefore, let him be clothed in justice only, and have no other covering; and he must be imagined in a state of life the opposite of the former. Let him be the best of men, and let him be thought the worst; then he will have been put to the proof; and we shall see whether he will be affected by the fear of infamy and its consequences. And let him continue thus to the hour of death; being just and seeming to be unjust. When both have reached the uttermost extreme, the one of justice and the other of injustice, let judgment be given which of them is the happier of the two.”

I find this passage almost frightening in its unflinching portrayal of corruption masquerading as virtue, and virtue mistaken for vice, but while few of us would qualify as perfectly just or perfectly unjust, I think most people will have had experiences in their life that reflect Plato’s point and give it the ring of truth. In any case, we can take Plato’s perfectly just man and his perfectly unjust man and make another thought experiment by asking how a perfectly just man would choose from behind a veil of ignorance, and how a perfectly unjust man would choose from behind a veil of ignorance.

We can go beyond this and ask how Nietzsche’s Übermensch would choose from behind a veil of ignorance, or how Machiavelli’s Prince would so choose, or how homo economicus might so choose.

We might also take these various philosophical characters and substitute them in other thought experiments, like that of Buridan’s Ass: a jackass is positioned equidistant from two piles of hay, and in the classic version of the thought experiment, the ass starves to death, unable to choose between identical options. We might present the perfectly just man, the perfectly unjust man, the great souled man, Machiavelli’s Prince, Nietzsche’s Übermensch, or homo economicus with the same dilemma and ask how each would fare, as in the sketch below.
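
By way of illustration only, here is a minimal sketch in Python of the substitution just described. The payoffs, the decision rules, and the casting of those rules as philosophical characters are all my own invented stand-ins, not anything found in the original thought experiments; the point is simply that different decision rules fare very differently when confronted with Buridan’s symmetrical dilemma.

import random

# Two piles of hay, deliberately identical in value.
options = {"left pile": 1.0, "right pile": 1.0}

def strict_maximizer(opts):
    """Chooses only if there is a uniquely best option; otherwise starves."""
    best = max(opts.values())
    winners = [name for name, value in opts.items() if value == best]
    return winners[0] if len(winners) == 1 else "starves (cannot break the tie)"

def tie_breaker(opts):
    """Maximizes, but breaks exact ties arbitrarily, and so survives."""
    best = max(opts.values())
    winners = [name for name, value in opts.items() if value == best]
    return random.choice(winners)

def satisficer(opts, threshold=0.5):
    """Takes the first option that is good enough, in whatever order it appears."""
    for name, value in opts.items():
        if value >= threshold:
            return name
    return "starves (nothing is good enough)"

for label, rule in [("strict maximizer", strict_maximizer),
                    ("tie-breaking agent", tie_breaker),
                    ("satisficer", satisficer)]:
    print(f"{label:>18}: {rule(options)}")

The strict maximizer is the classic ass, paralyzed by symmetry; the other two rules escape the dilemma by refusing to treat the tie as a problem at all. One can imagine assigning such rules, with whatever refinements seem apt, to the perfectly unjust man or to homo economicus.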

By the time we come to inserting one philosophical thought experiment inside another, we have reached a pitch of abstraction that may prevent us from thinking coherently. Of what value is such an exercise? What I have suggested might seem a little ridiculous, if not outright silly, but it increases the difficulty of our thought experiments by an order of magnitude. This might be a profitable exercise if it helps us to pick out intrinsic weaknesses in thought experiments, and allows us to go back to the original thought experiments with a clearer idea of what exactly is involved in them.

If we could submit our thought experiments to controlled conditions, we might pursue them more profitably. This is precisely what logic seeks to do. With the appropriate formalized language, all our philosophical thought experiments could be formulated in a rigorous language, and we could be reasonably clear about the consequences. However, in this case we have simply displaced the difficulty: instead of working through the problem intuitively on its own merits, we now face the difficulty of finding or formulating the appropriate formalism.

If we are honest with ourselves, nothing can spare us from the difficulty of thinking clearly about things that are themselves not clear. Thought experiments are the Zen Koans of Western thought, and their contemplation yields for us the Western equivalent of enlightenment. To put one thought experiment inside another is to raise the stakes, making an already difficult exercise all the more difficult. But this is good for us. As Spinoza wrote at the end of his Ethics, “All noble things are as difficult as they are rare.”

. . . . .


Friday


Should risk management consist of three steps?

I have never thought of myself as a person who takes risks. On the contrary, I view my actions as cautious and calculated, since I am not likely to engage in some activity until I judge it to be a sure thing. But others see things otherwise. Some years ago I was talking to a financial adviser and I was brought up short when he characterized me as a risk taker. I think this was primarily because of my attitude to insurance. I have a personal distaste for insurance, probably from a mixture of instinct and experience. This may sound odd, but it is accurate.

Should risk management consist of four steps?

As I see it, the attempt to manage risk is illusory. I act upon this view by minimizing my insurance coverage. (This is what struck my financial adviser as risky.) About the only regrets I have in life are when I paid for an insurance policy when I didn’t strictly have to do so under legal compulsion. It felt a lot like flushing money down a toilet, and this latter exercise would probably be more fun than sending a check to an insurance company. Buying insurance does not give me peace of mind, it only makes me angry.

Should risk management consist of five steps?

One of the reasons I view the attempt to manage risk as illusory is my personal experience with insurance policies that do not pay when you most need them. There are countless stories in the news, some of them tragic, about people who thought they were covered for some eventuality but found out in their hour of need that they were not. Whether it is someone who lost their home to a flood and later found an exclusion for flood damage in the homeowner’s policy, or someone who cannot convince their HMO to pay for some particular medical procedure and who is dying as a result, there is almost always something in the fine print that makes it possible for an insurance company to deny coverage. These episodes are not accidental. Insurance companies have many lawyers who write up policies precisely to minimize the losses of the insurance companies for whom they work, and insurance adjusters can be even more creative in their interpretation of events.

Should risk management consist of six steps?

The state of Oregon requires that Oregon residents maintain insurance on their motor vehicles. I always buy the legal minimum, which means that I get liability insurance, but no collision or comprehensive coverage. I have no life insurance policy, but I have no wife or children so the only beneficiaries could be my sisters or parents or a charity. Also, life insurance feels like blood money to me. I wouldn’t want to receive a payoff, and I wouldn’t want someone to profit from my demise. I think it is a good policy to be worth more alive than dead; it gives others an incentive to keep you around. The moorage where I live requires residents to have fire insurance, and this seems reasonable to me as a fire could easily spread, so again I get the minimum coverage required, which is to say that I have the structure covered but not the contents. How could one replace the contents of one’s home? If you have picked up irreplaceable items throughout your life, and received gifts from family and friends, your possessions are more than widgets that can be replaced. Being paid for their loss would, to me, feel like getting blood money for objects.

Should risk management consist of seven steps?

I also have no health insurance. That’s right, I’m one of those people who make up the statistic of 47 million Americans without health insurance. And I am fully prepared to accept the consequences of my actions (and inaction). I have emergency instructions in my mobile phone that state that I have no health insurance and that no special measures are to be taken to preserve my life, because I can’t afford them. I suppose if I were taken to an emergency room contrary to my wishes, and I died despite any efforts made to save my life, that a diligent collector for the hospital might come after my assets to pay the bill, so they might eventually get their pound of flesh.

Should risk management consist of eight steps?

What would I do if I were diagnosed with a serious condition that required major surgery or some expensive treatment like chemotherapy? One thing I can tell you is what I would not do, and what I would definitely not do is to seek treatment in the US. Policies and litigation have so distorted the market for health care that no ordinary working class person in the US can afford to be sick, but this is not the case everywhere in the world. Since I know people all over the world, if I required major medical treatment, I would send out e-mails to friends in other countries and ask them to find a doctor who speaks English and to get an estimate for the procedure needed. I would without hesitation seek treatment in Korea or Peru, Indonesia or Argentina, before I would seek treatment in the US.

Should risk management consist of nine steps?

Perhaps you think this is odd, and perhaps even odd enough to be pathological. I have had many disagreements with others over insurance. None of the arguments that have been advanced to try to convince me of the folly of my position have changed my mind; they certainly have not changed how I feel about insurance. Then could we at least, at the very minimum, agree that it is rational to minimize risk, and to not take any unnecessary risks? Alas, we cannot even agree on the rational minimization of risk. As I see it, risk (like pain) is a good thing that forces us to think through our course of action carefully, so that the minimization of risk by way of insurance is a moral hazard that lures people unnecessarily into braving risks they would otherwise avoid.

Should risk management consist of n or more steps?

Allow me to relate a little story about the philosophical dimensions of risk. Contemporary legal and political philosophy is dominated (utterly dominated) by the work of John Rawls. Rawls’ claim to fame is a thought experiment. According to Rawls, a just society is a society that would be chosen behind a “veil of ignorance,” that is to say, if we would choose a social system not knowing what place in it we would be born into, then that is a just social system (by our lights). The assumption here is that, if you don’t know the position into which you will be born in a society, you will choose a thoroughly egalitarian system so that the birth lottery does not relegate you to an irremediably marginal role. It has been observed that this thought experiment and its presumed outcome assume that the individual choosing a society from behind a veil of ignorance is risk averse. Of course, not everyone is risk averse, and there may be individuals — perhaps many of them — who might prefer inequitable social arrangements and be willing to take the risk that they will either end up in a privileged position or be able to manipulate the social system sufficiently to their benefit that the initial inequity will not be an insurmountable liability.

John Rawls formulated one of the most influential thought experiments of our time: a just social order is one that would be chosen from behind a veil of ignorance.
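
The observation that the Rawlsian thought experiment quietly assumes risk aversion can be made concrete with a toy calculation. The societies, welfare numbers, and probabilities below are invented purely for illustration and are not Rawls’ own apparatus; they are only meant to show how a maximin chooser and a risk-neutral chooser part ways behind the veil.

# Each society is a list of (probability of being born into a position, welfare of that position).
societies = {
    "egalitarian": [(1.0, 5.0)],
    "inequitable": [(0.1, 50.0), (0.9, 2.0)],
}

def maximin_value(society):
    """A risk-averse chooser attends only to the worst-off position."""
    return min(welfare for _, welfare in society)

def expected_value(society):
    """A risk-neutral chooser weighs each position by the odds of occupying it."""
    return sum(p * welfare for p, welfare in society)

for name, society in societies.items():
    print(f"{name:>12}: maximin = {maximin_value(society):5.2f}, "
          f"expected = {expected_value(society):5.2f}")

With these made-up numbers the maximin chooser prefers the egalitarian society (5.0 against 2.0), while the risk-neutral chooser prefers the inequitable one (an expectation of 6.8 against 5.0). The egalitarian conclusion of the thought experiment leans on the first decision rule rather than the second.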

I don’t take the Rawlsian thought experiment too seriously, partly because almost no one believes in their own impoverishment and immiserization before they have hit rock bottom, and partly because the Rawlsian emphasis upon fairness seems so vulgar that it might have been explicitly conceived in contradistinction to the Aristotelian conception of areté. How would Aristotle’s Great Souled Man judge a society from behind a veil of ignorance? He would value most highly that society in which the highest virtues would attain their highest development. This would not necessarily be an egalitarian society, and it would not be a society without risk.

Great accomplishments, great deeds, and great undertakings are all won in the face of adversity and risk. To eliminate risk from the world would be to eliminate the possibility of greatness and excellence (areté); to minimize risk would be to minimize the possibility of greatness and excellence. There is a sense, then — an Aristotelian sense — in which virtue is predicated upon risk. Even the great works of art, literature, poetry, science, mathematics, and philosophy involve and entail risk. It is a risk to entertain a radical new idea or to present a radically new vision to the world. One risks one’s career, one’s reputation, one’s ability to make a living, one’s friends — in a word, one risks everything in attempting any authentic innovation. And yet almost everything of value in the world comes from such efforts, and from such willingness to court risk, all in the pursuit of being true to oneself.

This is perhaps a somewhat grandiose if not histrionic characterization of risk, but even if we reduce our scope and scale to the most intimate and personal perspective, risk cannot be eradicated from life. To be alive is to be at risk of dying. Whether we choose to think in terms of the legacy we leave in the world or our immediate personal circumstances, it remains true that we cannot control the world, we cannot control our circumstances, we cannot control what others say, think, do, or feel, and sometimes we cannot even control ourselves. The world is an unpredictable place, and, as I have argued many times in several posts, we ought to expect to be blindsided by history. And if we are blindsided by history to our misfortune, we ought also to expect that our insurance policy is not going to cover the loss or make us whole again.

Perhaps risk management need only consist of a single step, and that is the recognition of inescapable risk.

I hope that this personal view of risk management has managed to convince you of one thing. I do not expect to have convinced you that the attempt to manage risk is illusory, but I may have been able to successfully make the point that an individual’s attitude to risk management is deeply embedded in the way that a given individual sees the world. A conception of risk management is predicated upon a particular Weltanschauung. Two individuals with distinct outlooks on life are also likely to have distinct ideas on how best to manage risk — or, as the case may be, how to live with risk and not attempt to wish it away. The point is that there is not a single rational and pragmatic way to manage risk, but that the management of risk will be relative to the criteria of reasonableness and pragmatism that follow from a given Weltanschauung.

. . . . .

. . . . .

Note added 03 March 2011: There was a great article in the Financial Times by John Kay, Don’t blame luck when your models misfire, which includes the following:

“Like practitioners of alchemy and quack medicine, these modellers thrive on our desire to believe impossible things. But the search for objective means of controlling risks that can reliably be monitored externally is as fruitless as the quest to turn base metal into gold. Like the alchemists and the quacks, the risk modellers have created an industry whose intense technical debates with each other lead gullible outsiders to believe that this is a profession with genuine expertise.”

What can I say other than that I agree? This isn’t quite as blunt as my contention that, “the attempt to manage risk is illusory,” but it is not far from it. If the technical debates of risk modellers constitute a profession without genuine expertise, as implied by Kay, it is high time that we declared our independence from technocrats who would presume to control us all for our own good.

. . . . .


Tuesday


Aristotle as portrayed by Raphael

Aristotle claimed that mathematics has no ethos (Metaphysics, Book III, Chap. 2, 996a). Aristotle, of course, was more interested in the empirical sciences than his master Plato, whose Academy presumed and demanded familiarity with geometry — and we must understand that for the ancients, long before the emergence of analytical geometry in the work of Descartes (allowing us to formulate geometry algebraically, hence arithmetically), geometry was always axiomatic thought, rigorously conceived in terms of demonstration. For the Greeks, this was the model and exemplar of all rigorous thought, and for Aristotle this was a mode of thought that lacked an ethos.

Euclid provided the model of formal thought with his axiomatization of geometry. Legend has it that there was a sign over the door of Plato's Academy stating, 'Let no one enter here who has not studied geometry.'

In this, I think, Aristotle was wrong, and I think that Plato would have agreed on this point. But the intuition behind Aristotle’s denial of a mathematical ethos is, I think, a common one. And indeed it has even become a rhetorical trope to appeal to rigorous mathematics as an objective standard free from axiological accretions.

In his famous story within a story about the Grand Inquisitor, Dostoyevsky has the Grand Inquisitor explain how “miracles, mystery, and authority” are used to addle the wits of others.

Our human, all-too-human faculties conspire to confuse us, to addle our wits, when we begin talking about morality, so that the purity and rigor of mathematical and logical thought seem to be called into question if we acknowledge that there is an ethos of formal thought. We easily confuse ourselves with religious, mystical, and ethical ideas, and since the great monument of mathematical thought has been mostly free of this particular species of confusion, to deny an ethos of formal thought can be understood as a strategy to protect and defend the honor of mathematics and logic by preserving it from the morass that envelops most human attempts to think clearly, however heroically undertaken.

Kant famously said that he had to limit knowledge to make room for faith.

Kant famously stated in the Critique of Pure Reason that, “I have found it necessary to deny knowledge in order to make room for faith.” I should rather limit faith to make room for rigorous reasoning. Indeed, I would squeeze out faith altogether, and find myself among the most rigorous of the intuitionists, one of whom has said: “The aim of this program is to banish faith from the foundations of mathematics, faith being defined as any violation of the law of sufficient reason (for sentences). This law is defined as the identification (by definition) of truth with the result of a (present or feasible) proof…”

Western asceticism can be portrayed as demonic torment or as divine illumination; the same diversity of interpretation can be given to ascetic forms of reason.

Though here again, with intuitionism (and various species of constructivism generally), we have rigor, denial, asceticism — intuitionistic logic is no joyful wisdom. (An ethos of formal thought need not be an inspiring and edifying ethos.) It is logic with a frown, disapproving, censorious — a bitter medicine justified only because it offers hope of curing the disease of contradiction, contracted when mathematics was shown to be reducible to set theory, and the latter shown to be infected with paradox (as if the infinite hubris of set theory were not alone enough for its condemnation). Is the intuitionist’s hope justified? In so far as it is hope — i.e., hope and not proof, the expectation that things will go better for the intuitionistic program than for logicism — it is not justified.
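
To make concrete what the intuitionist’s bitter medicine forbids, consider the textbook example of a classically valid but non-constructive argument, one that leans on the law of the excluded middle applied to an undecided proposition:

Claim. There exist irrational numbers $a$ and $b$ such that $a^b$ is rational.

Classical proof. Either $\sqrt{2}^{\sqrt{2}}$ is rational or it is not. If it is rational, take $a = b = \sqrt{2}$. If it is not, take $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$; then

$a^b = \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2,$

which is rational.

The proof never tells us which of the two pairs actually works; it invokes $P \lor \neg P$ for a $P$ we have not decided, and that is precisely the step the intuitionist disallows. (As it happens, $\sqrt{2}^{\sqrt{2}}$ is known to be irrational by the Gelfond–Schneider theorem, but the classical proof neither needs nor supplies that information.)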

Dummett has said that intuitionistic logic and mathematics are to wear their justification on their face:

“From an intuitionistic standpoint, mathematics, when correctly carried on, would not need any justification from without, a buttress from the side or a foundation from below: it would wear its own justification on its face.”

Dummett, Michael, Elements of Intuitionism, Oxford University Press, 1977, p. 2

The hope that contradiction will not arise from intuitionistic methods is clearly no such evident justification. As a matter of empirically and historically verifiable fact, intuitionism has so far resulted in no contradictions, but this could change tomorrow. Intuitionism stands in need of a consistency proof even more than formalism. There is, in its approach, a faith invested in the assumption that infinite totalities caused the paradoxes, and that once we have disallowed reference to them all will go well. This is a perfectly reasonable assumption, but, in so far as it is an article of faith, it is at variance with the aims and methods of intuitionism.

And what is a feasible proof, which our ultra-intuitionist would allow? Have we not with “feasible proof” abandoned proof altogether in favor of probability? Again, we will allow them their inconsistencies and meet them on their own ground. But we shall note that the critics of the logicist paradigm fix their gaze only upon consistency, and in so doing reveal again their stingy, miserly conception of the whole enterprise.

“The Ultra-Intuitionistic Criticism and the Antitraditional program for the foundations of Mathematics” by A. S. Yessenin-Volpin (who was arguing for intellectual freedom in the Soviet Union at the same time that he was arguing for a censorious conception of reason), in Intuitionism and Proof Theory, quoted briefly above, is worth quoting more fully:

The aim of this program is to banish faith from the foundations of mathematics, faith being defined as any violation of the law of sufficient reason (for sentences). This law is defined as the identification (by definition) of truth with the result of a (present or feasible) proof, in spite of the traditional incompleteness theorem, which deals only with a very narrow kinds [sic] of proofs (which I call ‘formal proofs’). I define proof as any fair way of making a sentence incontestable. Of course this explication is related to ethics — the notion fair means ‘free from any coercion or fraud’ — and to the theory of disputes, indicating the cases in which a sentence is to be considered as incontestable. Of course the methods of traditional mathematical logic are not sufficient for this program: and I have to enlarge the domain of means explicitly studied in logic. I shall work in a domain wherein are to be found only special notions of proof satisfying the mentioned explication. In this domain I shall allow as a means of proof only the strict following of definitions and other rules or principles of using signs.

Intuitionism and proof theory: Proceedings of the summer conference at Buffalo, N.Y., 1968, p. 3

What is coercion or fraud in argumentation? We find something of an illustration of this in Gregory Vlastos’ portrait of Socrates: “Plato’s Socrates is not persuasive at all. He wins every argument, but never manages to win over an opponent. He has to fight every inch of the way for any assent he gets, and gets it, so to speak, at the point of a dagger.” (The Philosophy of Socrates, Ed. by Gregory Vlastos, page 2)

According to Gregory Vlastos, Socrates used the kind of 'coercive' argumentation that the intuitionists abhor.

What appeal to logic does not invoke logical compulsion? Is logical compulsion unique to non-constructive mathematical thought? Is there not an element of logical compulsion present also in constructivism? Might it not indeed be the more coercive form of compulsion that is recognized alike by constructivists and non-constructivists?

The breadth of the conception outlined by Yessenin-Volpin is impressive, but the essay goes on to stipulate the harshest measures of finitude and constructivism. One can imagine these Goldwaterite logicians proclaiming: “Extremism in the defense of intuition is no vice, and moderation in the pursuit of constructivist rigor is no virtue.” Brouwer, the spiritual father of intuitionism, even appeals to the Law-and-Order mentality, saying that a criminal who has not been caught is still a criminal. Logic and mathematics, it seems, must be brought into line. They verge on criminality, deviancy, perversion.

Quine was no intuitionist by a long shot, but as a logician he brought a quasi-disciplinary attitude to reason and adopted a tone of disapproval not unlike Brouwer’s.

The same righteous, narrow, anathematizing attitude is at work among the defenders of what is sometimes called the “first-order thesis” in logic. Quine sees a similar deviancy in modal logic (which is closely related to intuitionistic logic, the latter being faithfully translatable into the modal logic S4), which he says was “conceived in sin” — the sin of confusing use and mention. These accusations do little to help us understand logic. We would do well to adopt Foucault’s attitude on these matters: “leave it to our bureaucrats and our police to see that our papers are in order. At least spare us their morality when we write.” (The Archaeology of Knowledge, p. 17)

Foucault had little patience for the kind of philosophical reason that seemed to be asking if our papers are in order, a function he thought best left to the police.
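
The translation behind that parenthetical remark about modal and intuitionistic logic is worth recording. The Gödel–McKinsey–Tarski translation embeds intuitionistic propositional logic into S4; in one standard formulation (a sketch only; variant clauses appear in the literature):

$T(p) = \Box p$ for atomic $p$
$T(A \wedge B) = T(A) \wedge T(B)$
$T(A \vee B) = T(A) \vee T(B)$
$T(A \rightarrow B) = \Box\,(T(A) \rightarrow T(B))$
$T(\neg A) = \Box\,\neg T(A)$

and a formula $A$ is provable intuitionistically if and only if its translation $T(A)$ is provable in S4. Modal necessity here plays the role of intuitionistic provability.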

The philosophical legacy of intuitionism has been profound yet mixed; its influence has been deeply ambiguous (far from the intuitive certainty, immediacy, clarity, and evident justification that it would like to propagate). There is in intuitionism much in harmony with contemporary philosophy of mathematics: its emphasis on practices, its demand for finite constructivity, its anti-philosophical tenor, its opposition to platonism. The Father of Intuitionism, Brouwer, was, like many philosophers, anti-philosophical even while propounding a philosophy. No doubt his quasi-Kantianism put his conscience at rest in the Kantian tradition of decrying metaphysics while practicing it, and his mysticism gave to mathematics a comforting halo (which softens and obscures the hard edges of intuitionist rigor in proof theory) of the kind that some have found in the excesses of platonism.

L. E. J. Brouwer: philosopher of mathematics, mystic, and pessimistic social theorist

In any case, few followers of Brouwer followed him in his Kantianism and mysticism. The constructivist tradition which grew from intuitionism has proved to be philosophically rich, begetting a variety of constructive techniques and as many justifications for them. Even if few mathematicians actually do intuitionistic mathematics, controversies over the significance of constructivism have a great deal of currency in philosophy. And Dummett is explicit about the place of philosophy in intuitionistic logic and mathematics.

The light of reason serves as an inspiration to us as it shines down from above, and it remains an inspiration even when we are not equal to all that it might ideally demand of us.

Intuitionism and constructivism command our respect in the same way that Euclidean geometry commanded the respect of the ancients: we might not demand that all reasoning conform to this model, but it is valuable to know that rigorous standards can be formulated, as an ideal to which we might aspire if nothing else. An ideal of reason is itself an ethos of reason, a norm to which formal thought aspires, and which it hopes to approximate even if it cannot always live up to the most exacting standard that it can recognize for itself.

. . . . .

Studies in Formalism

1. The Ethos of Formal Thought

2. Epistemic Hubris

3. Parsimonious Formulations

4. Foucault’s Formalism

5. Cartesian Formalism

6. Doing Justice to Our Intuitions: A 10 Step Method

7. The Church-Turing Thesis and the Asymmetry of Intuition

8. Unpacking an Einstein Aphorism

9. Methodological and Ontological Parsimony (in preparation)

10. The Spirit of Formalism (in preparation)

. . . . .


Thursday


Aristotle, the Father of Logic.

Science often makes progress when an unphilosophical scientist reads the work of a philosopher, misunderstands it, but nevertheless derives an interesting research program from his misunderstanding of philosophical ideas. In the long run, it is very likely that the scientific misunderstanding of a philosophical idea will have a longer life and prove to be much closer to the truth than the original philosophical idea, which will be immediately disposed of by the next generation of philosophers. In the long run, then, it doesn’t really matter where inspiration comes from.

We might call this the exaptation of ideas. Exaptation — the use of anything for a function distinct from the function that defined the initial conditions of the thing so used — has been a theme that I have returned to several times. Today I would like to consider exaptation at its most abstract level.

Scientific method in the broadest sense of the term — patient, careful, and systematic observation of the object under study — has changed our conception of logic. The method of metamathematics has made of logic the object of a science — as it turns out, a science very much like logic. This science has been variously interpreted. We might employ metamathematics to become more acutely aware of the rules by which we reason, so extending the scope and profundity of formalism. But it seems that we are instead following the traces of positivism which are to be found implicit throughout metamathematics, with its scientistic orientation.

Hilbert, the father of metamathematics, was no abstract thinker. His philosophical observations are the occasional remarks of a working mathematician. The formalist program he proposed still inspires philosophers, but this is no surprise as philosophers today are largely inspired by anti-philosophical doctrines. Hilbert’s emphasis upon a misreading of Kantian intuition — a scientific, empirical mistaking of the materiality of intuition for the materiality of the concrete — took on as time passed more and more the character of physicalism, and today we find thinkers who entertain even the physicalization of logic.

The father of metamathematics was generous to his progeny: Hilbert’s praise of his creation is in the same vein as Aristotle’s self-congratulation in his Sophistical Refutations:

“That our programme, then, has been adequately completed is clear …it was not the case that part of the work had been thoroughly done before, while part had not. Nothing existed at all …on the subject of reasoning we had nothing else of an earlier date to speak of at all, but were kept at work for a long time in experimental researches. If, then, it seems to you after inspection that, such being the situation as it existed at the start, our investigation is in a satisfactory condition compared with the other inquiries that have been developed by tradition, there must remain for all of you, or for our students, the task of extending us your pardon for the shortcomings of the inquiry, and for the discoveries thereof your warm thanks.”

Sophistical Refutations, § 34, Works of Aristotle, 183b – 184b

It would seem that Aristotle and logicians since Aristotle have had a fine opinion of themselves, but whether this high estimate is an instance of the sober logical deliberation so carefully cultivated in their discipline, or a failure of the same, is another matter entirely. We can only observe that the consistency with which logicians have propounded the finality of their subject, only to have the next generation of dialecticians pronounce the effort corrupt and count their own production the genuine article which at long last delivers on the promise of logic to secure truth and certainty, is perhaps more consistent than the logics themselves.

Wittgenstein’s certainty in the unassailable truth of the doctrines of the Tractatus also comes to mind, as does Russell’s assertion that Wittgenstein “had the pride of Lucifer.” That so few philosophers have seemed to notice the Hilbertian distortion of Kant suggests that the claim will be controversial, but I find it difficult to imagine anyone who has read Kant in detail believing that what Hilbert calls intuition (“Anschauung”) is what Kant understood by the same term: Hilbert exapted Kant’s conception of intuition. Hilbert’s anachronistic reading of physicalism into Kant darkly heralds further physicalist mischief which was to come. Philosophers today speak of making their theories “physicalistically acceptable,” and in Mechanization of Reasoning in a Historical Perspective, Witold Marciszewski offers a “physicalization of logic.” Despite its dubious provenance, physicalism may well be a legitimate theory, though its advocates have yet to indicate that they are willing to deal with the hard questions which it poses.

One might be forgiven for supposing that pride and self-satisfaction are necessary prerequisites for the logicians, marks of character especially suited to systematic and rigorous reasoning. Russell, himself a bona fide member of the Peerage, suggested: “There is a certain lordliness which the logician should preserve: he must not condescend to derive arguments from the things he sees about him.” (Introduction to Mathematical Philosophy, p. 192) The logician is to assume a demeanor of lordly indifference to the surrounding world. This may well be the origin of the role of the arbitrary in rigorous reasoning.

Hilbert’s contribution to the tradition of self-congratulatory exposition is as follows: “I believe that in my proof theory I have fully attained what I desired and promised: the world has been rid, once and for all, of the question of the foundations of mathematics as such. The philosophers will be interested that a science like mathematics exists at all. For us mathematicians, the task is to guard it like a relic, so that one day all human knowledge whatsoever will partake of the same precision and clarity. That this must and will occur is my firm conviction.” (“The Grounding of Elementary Number Theory” in From Brouwer to Hilbert, Paolo Mancosu, New York and Oxford: Oxford University Press, 1998, p. 273) Hilbert’s formalist program did in fact become a relic, but not, one suspects, in the sense to which he aspired. It is now a matter of historical interest, known to specialists, but not a living part of mathematics or an on-going research program. This is not to say that nothing has come of Hilbert’s foundationalist enterprise. The perennial aspects of formalism continue to assert themselves today as they were once asserted in Hilbert’s work. The doctrines unique to Hilbert have enjoyed varying degrees of influence. It is the central theme of Hilbert’s foundationalist program, the inspiration and the motivation, the vision of a finite consistency proof for the whole of mathematics, which was defeated by Gödel. The structure has been demolished, though we may build anew with salvaged bricks.
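
The result that defeated the program can be stated compactly. In its usual modern form, Gödel’s second incompleteness theorem says:

If $T$ is a consistent, effectively axiomatized theory containing elementary arithmetic, then $T \nvdash \mathrm{Con}(T)$, where $\mathrm{Con}(T)$ is the arithmetized sentence expressing the consistency of $T$.

In particular, if the finitary methods Hilbert envisioned can themselves be formalized within such a theory $T$, then they cannot prove the consistency of $T$, let alone of the whole of mathematics. (The precise reach of “finitary” remains a matter of interpretation, which is why the scope of the defeat is still debated by historians of the program.)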

Hilbert was a visionary in a precisely definable sense of the term: he envisioned something which did not yet exist and sought its realization, i.e., he tried to make the possible actual, only to be shown (by Gödel) that it was in fact impossible. In so far as Hilbert was a visionary, he was a radical, a subversive, a rebel — for a vision of a better world yet to be realized must be at odds with the imperfect world that is, and so stands opposed to classicism. Although Hilbert was in a certain sense the culmination and apotheosis of classical mathematics, he did not put his faith in classical mathematics, but rather in something beyond classical mathematics — in a mathematics yet to be.

Logicism, by contrast, looks frankly reactionary in its elevation of classical mathematics as the end of the successful logicist theory. The only thing that saved logicism from complete hostility to innovation was its willingness to embrace recent tradition, such as Cantor’s set theory and transfinite numbers, as a part of the classicism to be ordained finally with logical certainty. Logicism, despite its tolerance for Cantor’s actual infinite and his non-constructive methods, was inspired in its central program by a constructivist quest for securing mathematics from contradiction piecemeal, one deduction at a time. The constructivism of the intuitionists, by comparison, is always an assumption that their restriction of methods to the apparently safe will simply not issue in inconsistency, though this is by no means guaranteed a priori. Thus constructivism itself is inspired by a non-constructive, top-down conception of how order and consistency are to be imposed upon mathematics by principles determined not in practice, but prior to practices, which are determined, by definition, by the principles adopted to guide them.

Hilbert too had sought a peculiarly logical certainty: consistency. Interestingly, logicism sought the consistency of mathematics through the construction of mathematics slowly and gradually from simple beginnings. Hilbert sought absolute and complete consistency through a consistency proof which would hold good for all that was to follow in the future. This was a top-down effort, in essence non-constructive. Thus the finitist and constructivist strains in Hilbert’s thought conflict with the central vision and inspiration, which was essentially non-constructive: Hilbert, of course, is the one who famously referred to set theory as “Cantor’s paradise.”

Another example of philosophical self-congratulation and smugly self-satisfied logic is to be found in Yehoshua Bar-Hillel’s paper, “A Prerequisite for Rational Philosophical Discussion,” (Logic and Language, Studies Dedicated to Professor Rudolf Carnap, Dordrecht: D. Reidel, 1962, pp. 1-5) in which he sets forth, unblinkingly and without a trace of embarrassment, the reasons he will not even entertain objections to his principles unless they already agree with his principles. This embodies the familiar strategy of logical monism: to argue for monism from the perspective of monism, employing a monistic logic to prove that logic must be monistic. There is one and only one logic, and (in this context, at least) Yehoshua Bar-Hillel is its prophet.

For Bar-Hillel, there is one and only one way to be rational, and he is completely unwilling to listen to any alternative. He will condescend to discuss his conception of rationality, but only with those who adopt his standards of rationality as the principles by which the discussion is to abide: “. . . I am ready to listen and argue with [a speculative philosopher] only if the meta-language, in which he explains to me his reasons for challenging my standards, itself complies with these standards.” (“A Prerequisite for Rational Philosophical Discussion,” in Logic and Language, Studies Dedicated to Professor Rudolf Carnap, Dordrecht: D. Reidel, 1962, p. 3, italics in original) Clearly, he is not interested in any serious challenge to his views, nor in anything unpredictable and upsetting. Indeed, entering into “dialogue” with Bar-Hillel would be more in the way of reading from a panegyric script in which triumphant reason affirms its own value and veracity. Bar-Hillel is to be congratulated for his honesty, if not for his attitude. Most who are as convinced of their own opinions would not admit as plainly their indifference to any pluralism of reason.

We cannot dictate how others will reason, nor what they will make of our ideas. The exaptation of ideas continues apace. This is an unavoidable aspect of human history. We believe that we are creating a building that will last for the ages, but in fact the materials that we gather and work will be used by others in different constructions. We ought not to fight this. We ought to offer ideas to posterity in the spirit that they will be exapted. Indeed, we ought to consider ourselves fortunate if any of our ideas is exapted, for this is the only way that they will survive. I have argued that the historical viability of an institution only comes with its ability to change intelligently. Ideas are the institutions of the mind, and they too only possess historical viability if they can be adapted and exapted to changing circumstances.

. . . . .


Thursday


Roosting birds at Pass-a-Grille Beach, Florida; the natural world is a fitting point of departure not only for understanding nature through science, but also for understanding science through the philosophy of science.

Yesterday’s meditation upon The Fungibility of the Biome led me to think in very general terms about scientific knowledge. It is one of the remarkable things about contemporary natural science — following rigorously, as it does, the methodological naturalism toward which it has struggled over the past several hundred years since the advent of the Scientific Revolution — that the more complex and sophisticated it becomes, the more closely science is in touch with the details of ordinary experience. This is almost precisely the opposite of what one finds with most intellectual traditions. As an intellectual tradition develops it often becomes involuted and self-involved, veering off in oddball directions and taking unpredictable tangents that take us away from the world and our immediate experience of it, not closer to it. The history of human reason is mostly a history of wild goose chases.

Detail of a pelican from the above photograph.

In fact, Western science began exactly in this way, and in so doing gave us the most obvious example of an involuted, self-referential intellectual tradition that was more interested in building on a particular cluster of ideas than in learning about the world. This we now know as scholasticism, when the clerics and monks of medieval Europe read and re-read, studied and commented upon, the works of Aristotle. For a thousand years, Aristotle was synonymous with natural science.

The scholastics constructed a science upon the basis of Aristotle, rather than upon the world with Aristotle as a point of departure.

Aristotle is not to be held responsible for the non-science that was done in his name and, to add insult to injury, was called science. If Aristotle had been treated as a point of departure rather than as dogma to be defended and upheld as doctrine, medieval history would have been very different. But at that time Western history was not yet prepared for the wrenching change that science, when properly pursued, forces upon us, both in terms of our understanding of the world and the technology it makes possible (and the industry made possible in turn by technology).

Science forces wrenching change upon us because it plays havoc with some of the more absurd notions that we have inherited from our earlier, pre-scientific history. Pre-scientific beliefs suffer catastrophic failure when confronted with their scientific alternatives, however gently the science is presented in the attempt to spare the feelings of those still wedded to the beliefs of the past.

Once we get past our inherited absurdities, as I implied above, we can see the world for what it is, and science puts us ever more closely in touch with the world as it is. Allow me to mention two examples of things that I have recently learned:

Example 1) We now know not only that the earth circles the sun and the sun spins with the Milky Way, but also that this circling and spinning is irregular and imperfect. The earth wobbles in its orbit, and in fact the sun bobs up and down in the plane of the Milky Way as the galaxy spins. This wobbling and bobbing has consequences for life on earth because it changes the climate, sometimes predictably and sometimes unpredictably. But regularity is at least partly a function of the length of time we consider. The impact of extraterrestrial objects on the earth seems like a paradigmatic instance of catastrophism, and the asteroid impact that likely contributed to the demise of the dinosaurs is thought of as a catastrophic punctuation in the history of life, but we now also know that the earth is subject to periods of greater bombardment by extraterrestrial bodies when it is passing through the galactic plane. Viewed from a perspective of cosmological time, asteroid impacts are regular and statistically predictable. And it happens that about 65 million years ago we were passing through the galactic plane and we caught a collision as a result. All of this makes eminently good sense. Matter is present at greater density in the galactic plane, so we are far more likely to experience collisions at such times. All of this accords with ordinary experience.

Example 2) We have had several decades to get used to the idea that the continents and oceans of the earth are not static and unchanging, but dynamic and dramatically different over time. A great many things that remain consistent during the course of one human lifetime have been mistakenly thought to be eternal and unchanging. Now we know that the earth changes and in fact the whole cosmos changes. Even Einstein had to correct himself on this account. His first formulation of general relativity included the cosmological constant in order to maintain the cosmos according to its presently visible structure. Now cosmological evolution is recognized and we detail the lives of stars as carefully as we detail the natural history of a species. Now that we know something of the natural history of our planet, and we know that it changes, we find that it changes according to our ordinary experience. In the midst of an ice age, when much of the world’s water is frozen as ice and is burdening the continental plates, it turns out that the weight of the ice forces the continents lower as they float in the magma beneath them. During the interglacial periods, when much or most of the ice melts, the continents, unburdened of the weight, bob up again and rise relative to the oceanic plates that have not been weighted down with ice. And, in fact, this is how things behave in our ordinary experience. It is perhaps also possible (though I don’t know if this is the case) that the weight of ice, melted and now run into the oceans, becomes additional water weight pressing down on the oceanic plates, which could sink a little as a result.
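
The floating-continent point can even be given a back-of-the-envelope form. Assuming simple Airy-style isostasy and round density figures (roughly $920\ \mathrm{kg/m^3}$ for glacial ice and $3{,}300\ \mathrm{kg/m^3}$ for the upper mantle; figures supplied here only for illustration), buoyant equilibrium requires the added ice load to be balanced by displaced mantle:

$\rho_{\mathrm{ice}}\, h_{\mathrm{ice}} = \rho_{\mathrm{mantle}}\, d \quad\Longrightarrow\quad d = h_{\mathrm{ice}}\,\frac{\rho_{\mathrm{ice}}}{\rho_{\mathrm{mantle}}} \approx 0.28\, h_{\mathrm{ice}}$

so a three-kilometer ice sheet would, at full equilibrium, press the crust down by something on the order of 800 meters, and the unburdened crust rebounds by the same amount once the ice is gone. This is exactly the bathtub physics of a loaded boat riding lower in the water.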

Last night I was reading A Historical Introduction to the Philosophy of Science by John Losee (an excellent book, by the way, that I heartily recommend) and happened across this quote from Larry Laudan (p. 213):

…the degree of adequacy of any theory of scientific appraisal is proportional to how many of the [preferred intuitions] it can do justice to. The more of our deep intuitions a model of rationality can reconstruct, the more confident we will be that it is a sound explication of what we mean by ‘rationality’.

Contemporary Anglo-American analytical philosophers seem to love to employ the locution “deep intuitions” and similar formulations in the way that a few years ago (or a few decades ago) phenomenologists never tired of writing about the “richness of experience.” Certainly experience is rich, and certainly there are deep intuitions, but to have to call attention to either by way of awkward locutions like these points to a weakness in formulating exactly what it is that is rich about experience, and exactly what it is that is deep about a deep intuition.

And this, of course, is the whole problem in a nutshell: what exactly is a deep intuition? What intuitions ought to be considered to be preferred intuitions? I suggest that our preferred intuitions ought to be those most common and ordinary intuitions that we derive from our common and ordinary experience, things like the fact that floating bodies, when weighted down, float a little lower in the water, or whatever medium in which they happen to float. It is in this spirit that we recall the words that Robert Green Ingersoll attributed to Ferdinand Magellan:

“The church says the earth is flat, but I know that it is round, for I have seen the shadow on the moon, and I have more faith in a shadow than in the church”

The quote bears exposition. Almost certainly Magellan never said it, or even anything like it. Nevertheless, we ought to be skeptical for reasons other than those cited by the most familiar skeptics, who like to point out that the church never argued for the flatness of the earth. We ought to be skeptical because Magellan was a deeply pious man, who lost his life before his circumnavigation was completed by his crew, and who lost it because he was so intent upon the conversion to Catholicism of the many peoples he encountered. Eventually he encountered peoples who did not want to be converted, and they took up arms and killed him in an entirely unnecessary engagement. But what remains interesting in the quote, and its implied reference to Galileo’s early observations of the moon, is not so much about flatness as about perfection. Aristotle in particular, and ancient Greek philosophy in general, held that the heavens were a realm of perfection in which all bodies were perfectly spherical and moved in perfectly circular motions through the sky. We now know this to be false, and Galileo was among the first to demonstrate this graphically with his sketches of superlunary mountains.

What does the word “superlunary” refer to? It is a term that derives from pre-Copernican (or, if you will, Ptolemaic) astronomy. When it was believed that the earth was the center of the universe, the closest extraterrestrial body was believed to be the moon (this happened to be correct, even if much in Ptolemaic astronomy was not). Everything below the moon, i.e., everything sublunary, was believed to be tainted and imperfect, contaminated with the dirt of lowly things and the stain of Original Sin, while everything above the moon, i.e., everything superlunary, including all other known extraterrestrial bodies, was believed to be free of this taint and therefore perfect and unblemished. Thus it was deeply radical to observe an “imperfection” on the supposedly perfect spheres beyond the earth, as it was equally radical to discover “new” extraterrestrial bodies that had never been seen before, like the moons of Jupiter.

Both of these heresies point to our previous tendency to attribute an eternal and unchanging status to things beyond the earth. It was believed impossible to discover “new” extraterrestrial bodies because the heavens, after all, were complete, perfect, and unchanging. For the same reason, one should not be able to view anything as irregular as mountains or shadows on extraterrestrial bodies. Once we get beyond the absurd postulate of extraterrestrial perfection, we can see the world with our own eyes, and for what it is. And when we begin to do so, we do not negate the properties of perfection once attributed to the superlunary world so much as we find them to be simply irrelevant. The heavens, like the earth, are neither perfect nor imperfect. They simply are, and they are what they are. To attribute evaluative or normative content or significance to them, such as believing in their perfection, is only to send us off in one of those oddball directions or unpredictable tangents that I mentioned in the first paragraph.

. . . . .
