14 February 2017
Nietzsche’s Big History
One of the most succinct formulations of Big History of which I am aware is a brief paragraph from Nietzsche:
“In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the highest and most mendacious minute of ‘world history’ — yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die.”
“On Truth and Lie in an Extra-Moral Sense,” Friedrich Nietzsche, 1873, a fragment from the Nachlass; translated by Walter Kaufmann
…and in the original German:
In irgend einem abgelegenen Winkel des in zahllosen Sonnensystemen flimmernd ausgegossenen Weltalls gab es einmal ein Gestirn, auf dem kluge Tiere das Erkennen erfanden. Es war die hochmütigste und verlogenste Minute der “Weltgeschichte”: aber doch nur eine Minute. Nach wenigen Atemzügen der Natur erstarrte das Gestirn, und die klugen Tiere mußten sterben.
Über Wahrheit und Lüge im außermoralischen Sinne, Friedrich Nietzsche, 1873, aus dem Nachlaß
This passage has been translated several times, so, for purposes of comparison, here is another translation:
“In some remote corner of the universe that is poured out in countless flickering solar systems, there once was a star on which clever animals invented knowledge. That was the most arrogant and the most untruthful moment in ‘world history’ — yet indeed only a moment. After nature had taken a few breaths, the star froze over and the clever animals had to die.”
ON TRUTH AND LYING IN AN EXTRA-MORAL SENSE (1873), Edited and Translated with a Critical Introduction by Sander L. Gilman, Carole Blair, and David J. Parent, New York and Oxford: OXFORD UNIVERSITY PRESS, 1989
Bertrand Russell, who rarely passed over an opportunity to criticize Nietzsche in the harshest terms, expressed a tragic interpretation of human endeavor that is quite similar to Nietzsche’s capsule big history:
“That Man is the product of causes which had no prevision of the end they were achieving; that his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms; that no fire, no heroism, no intensity of thought and feeling, can preserve an individual life beyond the grave; that all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man’s achievement must inevitably be buried beneath the debris of a universe in ruins–all these things, if not quite beyond dispute, are yet so nearly certain, that no philosophy which rejects them can hope to stand. Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul’s habitation henceforth be safely built.”
Bertrand Russell, “A Free Man’s Worship”
Even closer to Nietzsche, in both style and spirit, is the passage that immediately precedes this in the same essay by Russell, told, as with Nietzsche, in the form of a parable:
“For countless ages the hot nebula whirled aimlessly through space. At length it began to take shape, the central mass threw off planets, the planets cooled, boiling seas and burning mountains heaved and tossed, from black masses of cloud hot sheets of rain deluged the barely solid crust. And now the first germ of life grew in the depths of the ocean, and developed rapidly in the fructifying warmth into vast forest trees, huge ferns springing from the damp mould, sea monsters breeding, fighting, devouring, and passing away. And from the monsters, as the play unfolded itself, Man was born, with the power of thought, the knowledge of good and evil, and the cruel thirst for worship. And Man saw that all is passing in this mad, monstrous world, that all is struggling to snatch, at any cost, a few brief moments of life before Death’s inexorable decree. And Man said: ‘There is a hidden purpose, could we but fathom it, and the purpose is good; for we must reverence something, and in the visible world there is nothing worthy of reverence.’ And Man stood aside from the struggle, resolving that God intended harmony to come out of chaos by human efforts. And when he followed the instincts which God had transmitted to him from his ancestry of beasts of prey, he called it Sin, and asked God to forgive him. But he doubted whether he could be justly forgiven, until he invented a divine Plan by which God’s wrath was to have been appeased. And seeing the present was bad, he made it yet worse, that thereby the future might be better. And he gave God thanks for the strength that enabled him to forgo even the joys that were possible. And God smiled; and when he saw that Man had become perfect in renunciation and worship, he sent another sun through the sky, which crashed into Man’s sun; and all returned again to nebula.
“‘Yes,’ he murmured, ‘it was a good play; I will have it performed again.’”
Here Russell, unlike Nietzsche, gives theological meaning to the spectacle, however heterodox that meaning may be; I can easily imagine someone preferring Russell’s theological version to Nietzsche’s secular version, though both highlight the meaninglessness of human endeavor in a thermodynamic universe.
Our sun — a star among stars — will be a relatively early casualty in the heat death of the universe. While the life of the sun is orders of magnitude beyond the life of the individual human being, as soon as we understood that the sun’s life will pass through predictable stages of stellar evolution, we understood that the sun, like any human being, was born, will shine for a time, and then will die, and, when the sun dies, everything that is dependent upon the light of the sun for life will die also. It is only if we can make ourselves independent of the sun that we will not inevitably share the fate of the sun.
The idea that the sun is a star among stars, and that any star will do in terms of supporting human life, is embodied in a quote attributed to Wernher von Braun by Tom Wolfe and reported in Bob Ward’s book about von Braun:
“The importance of the space program is not surpassing the Soviets in space. The importance is to build a bridge to the stars, so that when the Sun dies, humanity will not die. The Sun is a star that’s burning up, and when it finally burns up, there will be no Earth… no Mars… no Jupiter.”
quoted in Dr. Space: The Life of Wernher von Braun, Bob Ward, Chapter 22, p. 218, with a footnote giving as the source, “Transcript, NBC’s Today program, New York, November 11, 1998”
Wernher von Braun had seized upon the essential insight of existential risk mitigation, as had many involved in the space program from its inception. As soon as one adopts a naturalistic understanding of the place of humanity in the universe, and once technology develops to a point at which its extrapolation offers human beings options and alternatives within the universe, one will draw the same conclusion. Another quote from von Braun makes the same point in another way:
“…man’s newly acquired capability to travel through outer space provides us with a way out of our evolutionary dead alley.”
Bob Ward, Dr. Space: The Life of Wernher von Braun, Annapolis, US: Naval Institute Press, 2013.
I have previously written about the idea that humanity is a solar species, but the fact that humanity and the biosphere from which we derive have been utterly dependent upon solar insolation has been an accident of history. Any sun will do. We can, accordingly, re-conceive humanity as a stellar species, the kind of species that requires a star and its planetary system in order to make a home for itself. In this sense, all species of planetary endemism are stellar species.
Even this idea of migration to another star, and of any other star being as good as the sun, is ultimately too narrow. Our sun, or any star, can be the source of energy that powers our civilization, but it can easily be seen that substitute forms of energy could equally well power the future of our civilization, and that it has merely been an historical contingency — a matter of our planetary endemism — that we have been dependent upon a single star, or upon any star, for our energy needs.
This more radical and farther-reaching vision is embodied in a quote attributed to Ray Bradbury by Oriana Fallaci:
“Don’t let us forget this: that the Earth can die, explode, the Sun can go out, will go out. And if the Sun dies, if the Earth dies, if our race dies, then so will everything die that we have done up to that moment. Homer will die. Michelangelo will die, Galileo, Leonardo, Shakespeare, Einstein will die, all those will die who now are not dead because we are alive, we are thinking of them, we are carrying them within us. And then every single thing, every memory, will hurtle down into the void with us. So let us save them, let us save ourselves. Let us prepare ourselves to escape, to continue life and rebuild our cities on other planets: we shall not be long of this Earth! And if we really fear the darkness, if we really fight against it, then, for the good of all, let us take our rockets, let us get well used to the great cold and heat, the no water, the no oxygen, let us become Martians on Mars, Venusians on Venus, and when Mars and Venus die, let us go to the other solar systems, to Alpha Centauri, to wherever we manage to go, and let us forget the Earth. Let us forget our solar system and our body, the form it used to have, let us become no matter what, lichens, insects, balls of fire, no matter what, all that matters is that somehow life should continue, and the knowledge of what we were and what we did and learned: the knowledge of Homer and Michelangelo, of Galileo, Leonardo, Shakespeare, of Einstein! And the gift of life will continue.”
Oriana Fallaci, If the Sun Dies, New York: Atheneum, 1966, pp. 14-15
Fallaci refers to this as a “prayer,” and indeed we might see this as a prayer or a catechism of the Space Age — not a belief, not merely belief, but an imperative ever-present in the hearts and minds of those who have fully imbibed the spirit of the age and who seek to carry that spirit forward with evangelical fervor, proselytizing to the masses and bringing them to the True Faith through purity of will and vision — another way of saying naïveté.
Do the clever animals have to die? No, not yet. Not if they are clever enough to move on to another planet, another star, another galaxy. Not if they are clever enough to change themselves so that, when the changed conditions of the universe in which they exist no longer allow the lives of clever animals to continue, what the clever animals have achieved can be preserved in some other way, and they themselves can be preserved in another form.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
In my previous post, The Study of Civilization as Rigorous Science, I drew upon examples from both Edmund Husserl and Bertrand Russell — the Godfathers, respectively, of contemporary continental and analytical philosophy — to illustrate some of the concerns involved in constituting a new science de novo, which is what a science of civilization must be.
In particular, I quoted Husserl to the effect that true science eschews “profundity” in favor of Cartesian clarity and distinctness. Since Husserl himself was none-too-clear a writer, his exposition of a distinction between profundity and clarity might not be especially clear. But another example occurred to me. There is a wonderful passage from Bertrand Russell in which he describes the experience of intellectual insight:
“Every one who has done any kind of creative work has experienced, in a greater or less degree, the state of mind in which, after long labour, truth, or beauty, appears, or seems to appear, in a sudden glory — it may be only about some small matter, or it may be about the universe. The experience is, at the moment, very convincing; doubt may come later, but at the time there is utter certainty. I think most of the best creative work, in art, in science, in literature, and in philosophy, has been the result of such a moment. Whether it comes to others as to me, I cannot say. For my part, I have found that, when I wish to write a book on some subject, I must first soak myself in detail, until all the separate parts of the subject matter are familiar; then, some day, if I am fortunate, I perceive the whole, with all its parts duly interrelated. After that, I only have to write down what I have seen. The nearest analogy is first walking all over a mountain in a mist, until every path and ridge and valley is separately familiar, and then, from a distance, seeing the mountain whole and clear in bright sunshine.”
Bertrand Russell, A History of Western Philosophy, CHAPTER XV, “The Theory of Ideas”
Russell returned on several occasions to this metaphor of seeing a mountain whole after having wandered in the fog of the foothills. For example:
“The time was one of intellectual intoxication. My sensations resembled those one has after climbing a mountain in a mist, when, on reaching the summit, the mist suddenly clears, and the country becomes visible for forty miles in every direction.”
Bertrand Russell, The Autobiography of Bertrand Russell: 1872-1914, Chapter 6, “Principia Mathematica”
“Philosophical progress seems to me analogous to the gradually increasing clarity of outline of a mountain approached through mist, which is vaguely visible at first, but even at last remains in some degree indistinct. What I have never been able to accept is that the mist itself conveys valuable elements of truth. There are those who think that clarity, because it is difficult and rare, should be suspect. The rejection of this view has been the deepest impulse in all my philosophical work.”
Bertrand Russell, The Basic Writings of Bertrand Russell, Preface
Russell’s description of intellectual illumination employing the metaphor of seeing a mountain whole is an example of what I have called the epistemic overview effect — being able to place the parts of knowledge within a larger epistemic whole gives us a context for understanding that is not possible when confined to any parochial, local, or limited perspective.
If we employ Russell’s metaphor to illustrate Husserl’s distinction between the profound and the pellucid, we immediately see that an exposition confined to wandering in the foothills, enshrouded in mist and fog, has the character of profundity; but when the sun breaks through, the fog lifts, and the mist evaporates, we see clearly and distinctly that which we had before known only imperfectly, and at that point we are able to give an exposition in terms of Cartesian clarity and distinctness. Russell’s insistence that he never thought the mist contained any valuable elements of truth is of a piece with Husserl’s eschewal of profundity.
Just so, a science of civilization should surprise us with unexpected vistas when we see the phenomenon of civilization whole after having familiarized ourselves with each individual part of it separately. When the moment of illumination comes, dispelling the mists of profundity, we realize that it is no loss at all to let go of the profundity that has, up to that time, been our only guide. The definitive formulation of a concept, a distinction, or a principle can suddenly cut through the mists that we did not even realize were clouding our thoughts, revealing to us the perfect clarity that had eluded us up to that time. As the rejection of the mist was, for Russell, “the deepest impulse in all my philosophical work,” so too it is the deepest impulse in my attempt to understand civilization.
. . . . .
. . . . .
. . . . .
. . . . .
8 June 2015
In several posts I have discussed the need for a science of civilization (cf., e.g., The Future Science of Civilizations), and this is a theme I intend to continue to pursue in future posts. It is no small matter to constitute a new science where none has existed, and to constitute a new science for an object of knowledge as complex as civilization is a daunting task.
The problem of constituting a science of civilization, de novo for all intents and purposes, may be seen in the light of Husserl’s attempt to constitute (or re-constitute) philosophy as a rigorous science, which was a touchstone of Husserl’s work. Here is a passage from Husserl’s programmatic essay, “Philosophy as Strict Science” (variously translated) in which Husserl distinguishes between profundity and intelligibility:
“Profundity is the symptom of a chaos which true science must strive to resolve into a cosmos, i.e., into a simple, unequivocal, pellucid order. True science, insofar as it has become definable doctrine, knows no profundity. Every science, or part of a science, which has attained finality, is a coherent system of reasoning operations each of which is immediately intelligible; thus, not profound at all. Profundity is the concern of wisdom; that of methodical theory is conceptual clarity and distinctness. To reshape and transform the dark gropings of profundity into unequivocal, rational propositions: that is the essential act in methodically constituting a new science.”
Edmund Husserl, “Philosophy as Rigorous Science” in Phenomenology and the Crisis of Philosophy, edited by Quentin Lauer, New York: Harper, 1965 (originally “Philosophie als strenge Wissenschaft,” Logos, vol. I, 1911)
Recently re-reading this passage from Husserl’s essay I realized that much of what I have attempted in the way of “methodically constituting a new science” of civilization has taken the form of attempting to follow Husserl’s pursuit of “unequivocal, rational propositions” that eschew “the dark gropings of profundity.” I think much of the study of civilization, immersed as it is in history and historiography, has been subject more often to profound meditations (in the sense that Husserl gives to “profound”) than conceptual clarity and distinctness.
The Cartesian demand for clarity and distinctness is especially interesting in the context of constituting a science of civilization given Descartes’ famous disavowal of history (on which cf. the quote from Descartes in Big History and Scientific Historiography); if an historical inquiry is the basis of the study of civilization, and history consists of little more than fables, then a science of civilization becomes rather dubious. The emergence of scientific historiography, however, is relevant in this context.
The structure of Husserl’s essay is strikingly similar to the first lecture in Russell’s Our Knowledge of the External World. Both Russell and Husserl take up major philosophical movements of their time (and although the two were contemporaries, each took different examples — Husserl, naturalism, historicism, and Weltanschauung philosophy; Russell, idealism, which he calls “the classical tradition,” and evolutionism), primarily, it seems, to show how philosophy had gotten off on the wrong track. The two works can profitably be read side-by-side, as Russell is close to being an exemplar of the naturalism Husserl criticized, while Husserl is close to being an exemplar of the idealism that Russell criticized.
Despite the fundamental difference between Husserl and Russell, each had an idea of rigor, each attempted to realize that rigor in his philosophical work, and each thought of that rigor as bringing the scientific spirit into philosophy. (In Kierkegaard and Russell on Rigor I discussed Russell’s conception of rigor and its surprising similarity to Kierkegaard’s thought.) Interestingly, however, the two did not criticize each other directly, though they were contemporaries and each knew of the other’s work.
The new science Russell was involved in constituting was mathematical logic, which, Roman Ingarden explicitly tells us, Husserl found inadequate for the task of a scientific philosophy:
“It is maybe unexpected and surprising that Husserl who was trained as a mathematician did not seek salvation for philosophy in the mathematical method which had from time to time stood out like a beacon as an ideal worthy of imitation by philosophers. But mathematical logic could not satisfy him… above all he fought for responsibility in philosophical research and devoted many years to the elaboration of a method which, according to him, was to secure for philosophy the status of a science.”
Roman Ingarden, On the Motives which Led Husserl to Transcendental Idealism, Translated from the Polish by Arnor Hannibalsson, Den Haag: Martinus Nijhoff, 1975, p. 9.
Ingarden’s discussion of Husserl is instructive, in so far as he notes the influence of mathematical method upon Husserl’s thought, but also that Husserl did not try to employ a mathematical method directly in philosophy. Rather, Husserl invested his philosophical career in the formulation of a new methodology that would allow the values of rigorous scientific practice to be expressed in philosophy and through a philosophical method — a method that might be said to be parallel to or mirroring the mathematical method, or derived from the same thematic motives as those that inform mathematical methodology.
The same question is posed in considering the possibility of a rigorously scientific method in the study of civilization. If civilization is sui generis, is a sui generis methodology necessary to the formulation of a rigorous theory of civilization? Even if that methodology is not what we today know as the methodology of science, or even if that methodology does not precisely mirror the rigorous method of mathematics, there may be a way to reason rigorously about civilization, though it has yet to be given an explicit form.
The need to think rigorously about civilization I took up implicitly in Thinking about Civilization, Suboptimal Civilizations, and Addendum on Suboptimal Civilizations. (I considered the possibility of thinking rigorously about the human condition in The Human Condition Made Rigorous.) Ultimately I would like to make my implicit methodology explicit and so to provide a theoretical framework for the study of civilization.
Since theories of civilization have been, for the most part, either implicit or vague or both, there has been little theoretical framework to give shape or direction to the historical studies that have been central to the study of civilization to date. Thus the study of civilization has been a discipline adrift, without a proper research program, and without an explicit methodology.
There are at least two sides to the rigorous study of civilization: theoretical and empirical. The empirical study of civilization is familiar to us all in the form of history, but history studied as history, as opposed to history studied for what it can contribute to the theory of civilization, are two different things. One of the initial fundamental problems of the study of civilization is to disentangle civilization from history, which involves a formal rather than a material distinction, because both the study of civilization and the study of history draw from the same material resources.
How do we begin to formulate a science of civilization? It is often said that, while science begins with definitions, philosophy culminates in definitions. There is some truth to this, but when one is attempting to create a new discipline one must be both philosopher and scientist simultaneously, practicing a philosophical science or a scientific philosophy that approaches a definition even as it assumes a definition (admittedly vague) in order for the inquiry to begin. Husserl, clearly, and Russell also, could be counted among those striving for a scientific philosophy, while Einstein and Gödel could be counted as among those practicing a philosophical science. All were engaged in the task of formulating new and unprecedented disciplines.
This division of labor between philosophy and science points to what Kant would have called the architectonic of knowledge. Husserl conceived this architectonic categorically, while we would now formulate the architectonic in hypothetico-deductive terms, and it is Husserl’s categorical conception of knowledge that ties him to the past and at times gives his thought an antiquated cast, but this is merely an historical contingency. Many of Husserl’s formulations are dated and openly appeal to a conception of science that no longer accords with what we would likely today think of as science, but in some respects Husserl grasps the perennial nature of science and what distinguishes the scientific mode of thought from non-scientific modes of thought.
Husserl’s conception of science is rooted in the conception of science already emergent in the ancient world in the work of Aristotle, Euclid, and Ptolemy, and which I described in Addendum on the Agrarian-Ecclesiastical Thesis. Russell’s conception of science is that of industrial-technological civilization, jointly emergent from the scientific revolution, the political revolutions of the eighteenth century, and the industrial revolution. With the overthrow of scholasticism as the basis of university curricula (which took hundreds of years following the scientific revolution before the process was complete), a new paradigm of science was to emerge and take shape. It was in this context that Husserl and Russell, Einstein and Gödel, pursued their research, employing a mixture of established traditional ideas and radically new ideas.
In a thorough re-reading of Husserl we could treat his conception of science as an exercise to be updated as we went along, substituting an hypothetico-deductive formulation for each and every one of Husserl’s categorical formulations, ultimately converging upon a scientific conception of knowledge more in accord with contemporary conceptions of scientific knowledge. At the end of this exercise, Husserl’s observation about the difference between science and profundity would still be intact, and would still be a valuable guide to the transformation of a profound chaos into a pellucid cosmos.
This ideal, and even more so the realization of this ideal, ultimately may not prove to be possible. Husserl himself in his later writings famously said, “Philosophy as science, as serious, rigorous, indeed apodictically rigorous, science — the dream is over.” (It is interesting to compare this metaphor of a dream to Kant’s claim that he was awoken from his dogmatic slumbers by Hume.) The impulse to science returns, eventually, even if the idea of an apodictically rigorous science has come to seem a mere dream. And once the impulse to science returns, the impulse to make that science rigorous will reassert itself in time. Our rational nature asserts itself in and through this impulse, which is complementary to, rather than contradictory of, our animal nature. To pursue a rigorous science of civilization is ultimately as human as the satisfaction of any other impulse characteristic of our species.
. . . . .
. . . . .
. . . . .
. . . . .
27 May 2015
Is it possible for human beings to care about the fate of strangers? This is at once a profound philosophical question and an immediately practical question. The most famous response to this question is perhaps that of John Donne:
“No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bells tolls; it tolls for thee.”
John Donne, Devotions upon Emergent Occasions, XVII. Nunc lento sonitu dicunt, morieris. Now, this bell tolling softly for another, says to me: Thou must die.
Emmanuel Levinas spoke of “the community of those with nothing in common,” in an attempt to get at the human concern for other human beings of whom we know little or nothing. More recently, there is this from Bill Gates:
“When I talk to friends about global health, I often run into a strange paradox. The idea of saving one person’s life is profound and thrilling. But I’ve found that when you talk about saving millions of lives — it sounds almost meaningless. The higher the number, the harder it is to wrap your head around.”
Bill Gates, opening paragraph of An AIDS Number That’s Almost Too Big to Believe
Gates presents this as a paradox, but in social science it is a well-known and well-studied cognitive bias known as the identifiable victim effect. One researcher who has studied this cognitive bias is Paul Slovic, whose work was discussed by Sam Harris in the following passage:
“…when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.”
“Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this ‘psychic numbing’ explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed ‘genocide neglect’ — our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering — represents one of the more perplexing and consequential failures of our moral intuition.”
“Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.”
Sam Harris, The Moral Landscape, Chapter 2
Skip down another paragraph and Harris adds this:
“The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention.”
While Harris has not hesitated to court controversy, and speaks the truth plainly enough as he sees it, by failing to place what he characterizes as norms of moral reasoning in an evolutionary context, he presents us with a paradox (the above section of the book is subtitled “Moral Paradox”). Really, this kind of cognitive bias appears paradoxical only when compared to a relatively recent conception of morality liberated from parochial in-group concerns.
For our ancestors, focusing on a single individual whose face is known had a high survival value for a small nomadic band, whereas a broadly humanitarian concern for all human beings would have been disastrous in equal measure. Today, in the context of industrial-technological civilization we can afford to love humanity; if our ancestors had loved humanity rather than particular individuals they knew well, they likely would have gone extinct.
Our evolutionary past has ill prepared us for the perplexities of population ethics in which the lives of millions may rest on our decisions. On the other hand, our evolutionary past has well prepared us for small group dynamics in which we immediately recognize everyone in our in-group and with equal immediacy identify anyone who is not part of our in-group and who therefore belongs to an out-group. We continue to behave as though our decisions were confined to a small band of individuals known to us, and the ability of contemporary telecommunications to project particular individuals into our personal lives as though we knew them, as if they were part of our in-group, plays into this cognitive bias.
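The pattern Slovic describes, in which concern grows far more slowly than the number of lives at stake, can be caricatured as a concave valuation function. The power-law form and the exponent in the sketch below are illustrative assumptions chosen only to exhibit the shape of the bias, not Slovic’s fitted model:

```python
# Toy model of "psychic numbing": perceived concern grows sublinearly
# with the number of lives at stake. The power-law form and the
# exponent 0.3 are illustrative assumptions, not empirical estimates.

def perceived_concern(n: int, alpha: float = 0.3) -> float:
    """Concave (sublinear) valuation of n lives at stake, n >= 1."""
    return n ** alpha

# A normatively rational valuation would be linear in n; under the
# concave model, each additional life adds less and less concern.
if __name__ == "__main__":
    for n in (1, 2, 100, 1_000_000):
        print(f"{n:>9} lives -> perceived concern {perceived_concern(n):.2f}")
```

On this toy model a million lives at stake command only about sixty times the concern of a single life, which is the shape of the moral failure Harris describes; Slovic’s data suggest the real curve can even bend downward as the numbers grow.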
While the explicit formulation of the identifiable victim effect is recent, the principle has been well known for hundreds of years at least, and has been as compellingly described in historical literature as in recent social science, as, for example, in Adam Smith:
“Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”
Adam Smith, Theory of Moral Sentiments, Part III, chapter 3, paragraph 4
And immediately after Hume made his famous claim that, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them,” he illustrated the claim with an observation similar to Smith’s:
“Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.”
David Hume, A Treatise of Human Nature, Book II, Part III, section 3
Bertrand Russell has well described how the expression of this cognitive bias can take on the conceit of moral superiority in the context of romanticism:
“Cultivated people in eighteenth-century France greatly admired what they called la sensibilité, which meant a proneness to emotion, and more particularly to the emotion of sympathy. To be thoroughly satisfactory, the emotion must be direct and violent and quite uninformed by thought. The man of sensibility would be moved to tears by the sight of a single destitute peasant family, but would be cold to well-thought-out schemes for ameliorating the lot of peasants as a class. The poor were supposed to possess more virtue than the rich; the sage was thought of as a man who retires from the corruption of courts to enjoy the peaceful pleasures of an unambitious rural existence.”
Bertrand Russell, A History of Western Philosophy, Part II. From Rousseau to the Present Day, CHAPTER XVIII “The Romantic Movement”
Russell’s account of romanticism provides some of the missing rationalization whereby a cognitive bias clearly at variance with norms of moral reasoning is justified as being the “higher” moral ground. Harris seems to suggest that, as soon as this violation of moral reasoning is pointed out to us, we will change. But we don’t change, for the most part. Our rationalizations change, but our behavior rarely does. And indeed studies of cognitive bias have revealed that even when experimental subjects are informed of cognitive biases that should be obvious ex post facto, most will continue to defend choices that unambiguously reflect cognitive bias.
I have personally experienced the attitude described by Russell (despite the fact that I have not lived in eighteenth-century France) more times than I care to recall, though I find myself temperamentally on the side of those formulating well-thought-out schemes for ameliorating the lot of the destitute as a class, rather than those moved to tears by the sight of a single destitute family. These personal experiences suggest to me, anecdotally, that if you attempt to live by the quasi-utilitarianism advocated by Russell and Harris, others will regard you as cold, unfeeling, and lacking in the milk of human kindness.
The cognitive bias challenge to presumptive norms of moral reasoning is also a profound challenge to existential risk mitigation, since existential risk mitigation deals in the largest numbers of human lives saved, but is a well-thought-out scheme for ameliorating the lot of human beings as a class, and may therefore have little emotional appeal compared to putting an individual’s face on a problem and then broadcasting that face repetitively.
We have all heard that the past is a foreign country, and that they do things differently there. (This line comes from the 1953 novel The Go-Between by L. P. Hartley.) We are the past of some future that has yet to occur, and we will in turn be a foreign country to that future. And, by the same token, the future is a foreign country, and they do things differently there. Can we care about these foreigners with their foreign ways? Can we do more than care about them, and actually change our behavior in the present in order to ensure an ongoing future, however foreign that future is from our parochial concerns?
In Bostrom’s paper “Existential Risk Prevention as Global Priority” (Global Policy, Volume 4, Issue 1, February 2013) the author gives a lower bound of 10^16 potential future lives saved by existential risk mitigation (though he also gives “a lower bound of 10^54 human-brain-emulation subjective life-years” as a possibility), but if the “collapse of compassion” is a function of the numbers involved, the higher the numbers we cite for individuals saved as a result of existential risk mitigation, the less will the average individual of today care.
Would it be possible to place an identifiable victim in the future? This is difficult, but we are all familiar with appeals to the world we leave to our children, and these are attempts to connect identifiable victims with actions that may prejudice the ability of human beings in the future to live lives of value commensurate with our own. It would be possible to construct some grand fiction, like Plato’s “noble lie” in order to interest the mass of the public in existential risk mitigation, but this would not be successful unless it became some kind of quasi-religious belief exempted from falsification that becomes the receptacle of our collective hopes. This does not seem very plausible (or sustainable) to me.
Are we left, then, to take the high road? To try to explain in painstaking (and off-putting) detail the violation of moral norms involved in our failure to adequately address existential risks, thereby putting our descendants in mortal danger? Certainly if an attempt to place an identifiable victim in the future is doomed to failure, we have no remaining option but the attempt at a moral intervention and relentless moral education that could transform the moral lives of humanity.
I do not think either of the above approaches to resolving the identifiable victim challenge to existential risk mitigation would be likely to be successful. I can put this more strongly yet: I think both approaches would almost certainly result in a backlash and would therefore be counter-productive to existential risk mitigation efforts. The only way forward that I can see is to present existential risk mitigation under the character of the adventure and exploration made possible by a spacefaring civilization that would, almost as an unintended consequence, secure the redundancy and autonomy of extraterrestrial centers of human civilization.
Human beings (at least as I know them) have a strong distaste for moral lectures and do not care to be told to do anything for their own good, but if you present them with the possibility of adventure and excitement that promises new experiences to every individual, and possibly even the prospect of the extraterrestrial equivalent of a buried treasure, or a pot of gold at the end of the rainbow, you might enlist the selfishness and greed of individuals in a great cause on behalf of Earth and all its inhabitants, so that each individual is moved, as it were, by an invisible hand to promote an end which was no part of his intention.
. . . . .
. . . . .
Existential Risk: The Philosophy of Human Survival
13. Existential Risk and Identifiable Victims
. . . . .
. . . . .
. . . . .
. . . . .
23 May 2015
In my recent post on Proxy War in Yemen I asserted that the concept of a proxy war, while primarily associated with the Cold War, can be applied to the war now being fought indirectly between Saudi Arabia and Iran in Yemen. A narrow conception of proxy wars would not have this application, and would be more confined to its original introduction and usage. Thus it could be rightly said that I was applying a broad conception of a proxy war. This was my intent.
What has been said above of proxy wars can also be said of war in general: that there are narrow and broad conceptions. Narrow conceptions are usually a function of a particular historical context of usage. If you asked an inhabitant of Periclean Athens to define war, they might have answered that war was a clash between hoplites from different city-states facing each other as a phalanx. For such a narrow conception of war, the innovations that Alexander introduced into the Macedonian phalanx might pose a definitional challenge: is it or is it not a phalanx, and is war employing this instrument a war, or something related to war through descent with modification?
In many contexts I have pursued the exposition of what I call the extended sense of a concept, in which a familiar concept is systematically subjected to variation, extrapolation, extension, and generalization in order to see how comprehensive a conception can be made. I have been influenced in this respect by Bertrand Russell, whose imperative to generalization I previously quoted in The Science of Time and The Genealogy of the Technium:
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
Open-textured concepts are best suited to Russellian generalization. What is an open-textured concept? Here is one account:
“According to Austin and Wittgenstein, words have clear conditions of application only against a background of ‘normal circumstances’ corresponding to the type of context in which the words were used in the past. There is no ‘convention’ to guide us as to whether or not a particular expression applies in some extraordinary situation. This is not because the meaning of the word is ‘vague’, but because the application of words ultimately depends on there being a sufficient similarity between the new situation of use and past situations. The relevant dimensions of similarity are not fixed once and for all; this is what generates ‘open texture’ (Waismann 1951).”
Routledge Encyclopedia of Philosophy, London and New York: Routledge, 1998, “Pragmatics”
More briefly, Stephen Barker wrote of open texture: “Our tendencies concerning the use of the word form a loosely knit pattern which does not definitely provide for all possibilities.” (Philosophy of Mathematics, “Introduction: The Open Texture of Language” p. 11) Barker goes on to use the Copernican analysis of celestial motion as an example of open texture. If “move” means to change position relative to Earth, then certainly the Earth cannot, by definition, move. But what Copernicus did was to extend our conception of movement beyond the concept of movement that was limited to the special case of the surface of the Earth. One could say that Copernicus formulated an extended concept of motion.
It seems to me that war is a perfect example of an open-textured concept, and one that can readily (and indeed has been repeatedly) extended by changed circumstances. As civilization has grown, war has grown — in scope, scale, fatality, and complexity. The growth of war has been twofold: 1) growth in the absolute size of war (quantitative), and 2) growth in the complexity and sophistication of war (qualitative). Once we understand that war is an open-textured concept, the Russellian imperative comes into play, and the philosophical impulse is to generalize war to the greatest possible extent and thus to arrive at an extended conception of warfare.
Recently in VE Day: Seventy Years I suggested the possibility of the existential viability of warfare, which sounds like an odd way to speak of war, as though we were concerned to maintain war in existence, when many if not most individuals view the extirpation of war as the goal of civilization. But war and civilization are coextensive, and this implies that the viability of war is linked to the viability of civilization. In the long ten thousand year history of agricultural civilization warfare took many different and distinct forms. These different forms of warfare were driven by both quantitative and qualitative growth in war. The advent of industrialized warfare (cf. A Century of Industrialized Warfare) forced us once again to expand the scope and scale of what we call war.
Industrialized warfare coincided with the social consequences of industrialization — the growth of conurbations, mass communications, rapid transportation, and popular sovereignty, inter alia — and all of these developments forced warfare to become mass war fought by mass man. Industrialization allowed for a rapid increase in scale that outstripped qualitative development, and this almost exclusively quantitative increase in warfare gave us the concept of total war. (The idea of total war preceded that of industrialization, but I would argue that the term only came into its proper significance in the wake of mass war, i.e., that industrialized mass war is the natural teleology of the concept of total war.)
Industrialized total war did not persist long; if it had, we would have destroyed ourselves. Thus the rapid development of total war executed a perfect dialectical inversion and gave us the contemporary conception of limited war. We don’t even talk in terms of “limited war” any more because all wars are limited. An unlimited war today — total war — would be too devastating to contemplate. During the Cold War, a common euphemism for the MAD scenario of a massive nuclear exchange was “the unthinkable.” Of course, some did think the unthinkable, and they in turn became symbolic of an unmentionable engagement with the unthinkable (Curtis LeMay and Herman Kahn come to mind in this respect). The strange world of pervasive yet limited conflict to which we have now become accustomed has no place for total war, but it is perhaps no less strange than the paradigm of warfare that preceded it, consisting of mass conscript armies engaged in total industrialized warfare between nation-states.
Yet we have found countless ways to wage limited wars, with new conceptions of war appearing regularly with changes in technology and social organization. There is proxy war, guerrilla war, irregular war, asymmetrical warfare, swarm warfare, and so on. Perhaps the most recent extension of the concept of war is that of hybrid warfare, which has received much attention lately. (Russian actions in east Ukraine are often characterized in terms of hybrid warfare.) It is arguable that the many “experiments” with limited war following the end of the period of industrialized total war have qualitatively expanded and extended our conception of war in a way parallel to the quantitative expansion and extension of our conception of war driven by industrialization. Thus hybrid war, or some successor to hybrid war that is yet to be visited upon us (through descent with modification), may be understood as the qualitative form of total war.
Hybrid warfare is an illustration of how the scope and scale of warfare are related and can come to permeate society even when war is not “total” in the sense used prior to nuclear weapons (i.e., the quantitative sense of total war). The duration of the local and limited wars we have managed to fight under the nuclear umbrella is limited only by the willingness of participants to engage in long-term low-intensity warfare. We have learned much from this experience. While the world wars of the first half of the twentieth century taught us that democratic nation-states could field armies of millions and project unprecedented power for a few years’ duration, the local and limited wars of the second half of the twentieth century taught us that democratic nation-states cannot sustain long-term warfare. Whatever the initial war enthusiasm, the populace grows tired of it, and eventually turns against it. If wars are to be fought, they must be fought within the political constraints of the form of social organization available in any given historical period.
On the other side, national insurgencies often possess a willingness to continue fighting virtually indefinitely (there has been insurgent conflict in Colombia for almost a half century, i.e., the entire period of post-industrialized total war), but when these groups come to realize that, despite their nationalist aspirations, they have been used as the pawns in someone else’s war (i.e., they have been serving someone else’s national aspirations), they are as likely to switch sides as not. Moreover, civil governance following long civil wars — regardless of which side in the conflict wins, if in fact any side wins — is almost always disastrous, and low-intensity warfare is essentially traded for high-intensity civil strife. Police do the killing instead of soldiers (but many of the police are former soldiers).
As warfare becomes pervasively represented throughout the culture, it represents the return (for it has occurred many times in human history) of warfare as a cultural activity, something I discussed in an early post Civilization and War as Social Technologies, i.e., war is a social technology, like civilization, that allows us to do certain things and to accomplish certain ends. For example, war is a decision procedure among nation-states who can agree upon nothing except that they will not allow a local and limited war to grow into a general and total war.
Warfare has, once again, adapted to changed conditions and thereby demonstrated its existential viability when war itself has risen to the level of an existential risk to the species and our civilization.
. . . . .
. . . . .
. . . . .
. . . . .
15 February 2015
In my first post on the overview effect, The Epistemic Overview Effect, I compared a comprehensive overview of knowledge to the perspective-altering view of the whole of the Earth in one glance. Later in The Overview Effect in Formal Thought I discussed the need for a comprehensive overview in formal thought no less than in scientific knowledge. I also discussed the overview in relation to interoception in Our Knowledge of the Internal World.
This account of the overview effect in various domains of knowledge leaves an ellipsis in my formulation of the overview effect, namely, the overview effect in specifically empirical knowledge, i.e., the overview effect in science other than the formal sciences. What would constitute an overview of empirical knowledge? The totality of facts? An awareness of the overall structure of the empirical sciences? A bird’s eye view of the universe entire? (The latter something I recently suggested in A Brief History of the Stelliferous Era.)
A subjective experience is always presented in a personal context, and when that subjective experience is of the overview effect the individual life serves as the “big picture” context by which individual and isolated experiences derive their value. The overview effect, as documented to date, is a personal experience, therefore idiographic, and therefore also idiosyncratic to a certain extent. The traditionally idiographic character of the historical sciences, then, has been uniquely well-adapted to being given an exposition in overview, and so we have the recent branch of historiography called big history. Big history in particular gives an overview of the historical sciences even as the historical sciences are employed to give an overview of history. There is a twofold task here: to interpret all the physical sciences historically (in idiographic terms) so that their epistemic contributions can be integrated into the historical sciences, and to move the historical sciences closer to the nomothetic rigor of the traditionally ahistorical physical sciences. We will truly have a comprehensive overview of scientific knowledge when the idiographic historical sciences and the nomothetic ahistorical sciences meet in the middle. This constitutes an ideal of scientific knowledge that has not yet been attained.
Every individual has an overview of their own life — or, rather, every individual with a minimal degree of insight has an overview of their own life — and this is the setting for any other overview of which the individual becomes aware, including the overview effect itself. (Individuals also, partly in virtue of their personal overview of their own life, possess what I have called the human overview, such that in the experience of meeting another person we can usually rapidly place that person within a social, cultural, ethnic, and historical context.) In the future, the personal experience of the overview effect may be harnessed for the production of knowledge understood more broadly than the knowledge engendered by purely personal experience. All empirical knowledge is ultimately derived from personal experience, has its origins in personal experience, but once personal experience has been exapted through idealization and quantification for the purpose of the production of empirical knowledge, it loses its personal and experiential character and becomes impersonal and objective.
It may sound overly subtle at first to make a distinction between personal experience and empirical knowledge, but the distinction is worth noting, and in any theoretical context it is important to observe the distinction. Experience is idiographic; empirical knowledge is nomothetic. Thus personal experience of the overview effect to date is an idiographic overview effect; the possibility of the empirical sciences converging upon an overview effect would be a nomothetic overview effect. If this nomothetic overview effect of scientific knowledge can be further extended by rendering the ahistorical nomothetic sciences in terms of the historical sciences, and the overview effect of scientific knowledge can be given a history in which we have an overview of each stage of development, we can get a glimpse of the possibilities for comprehensive knowledge, and what the future may hold for scientific knowledge.
Science has always been in the business of attempting to provide an overview of the world, but the approach of science has always been a form of objectivity that attempts to alienate personal experience. One sees this most clearly in classical antiquity, when the most abstract of sciences flourished — viz. mathematics — while the other sciences languished, partly because the theoretical framework for constructing objective knowledge out of personal experience did not yet exist. Hundreds of years of the development of scientific thought have subsequently provided this framework, but the paradigm produced by science has come at a certain cost. We are still today struggling with that legacy and its costs.
One way to approach the role of personal experience in empirical knowledge is by way of Bertrand Russell’s distinction between knowledge by acquaintance and knowledge by description (“Knowledge by Acquaintance and Knowledge by Description” in Mysticism and Logic and Other Essays). The task that Russell set himself in this paper — “…what it is that we know in cases where we know propositions about ‘the so-and-so’ without knowing who or what the so-and-so is” — is closely related to the cluster of problems addressed by his theory of descriptions. Russell’s distinction implies two other permutations: the case in which we have neither knowledge by acquaintance nor knowledge by description, which is epistemically uninteresting, and the case in which we have both knowledge by acquaintance and knowledge by description. In the latter case, knowledge by description has been confirmed by knowledge by acquaintance, but for the purposes of his exposition of the distinction Russell makes it quite clear that he wants to focus on instances of knowledge by description in which knowledge is only by description.
I am going to make my own use of Russell’s distinction, but will not attempt to retain any fidelity to the metaphysical context of Russell’s exposition of the distinction. Russell’s exposition of his distinction is wrapped up in a particular metaphysical theory that is no longer as common as it was a hundred years ago, but I am going to interpret Russell in terms of a naive scientific realism, so that when we see the Earth we really do see the Earth, and the Earth is not merely a logical construction out of sense data. (If I, or anyone, wanted to devote an entire book to Russell’s metaphysic in relation to his distinction between acquaintance and description this could easily be done. Indeed, an exposition of the Earth as a logical construction out of sense data would be an interesting intellectual exercise, and I can easily imagine a professor assigning this to his students as a project.)
Russell wrote of knowledge by acquaintance: “I say that I am acquainted with an object when I have a direct cognitive relation to that object, i.e. when I am directly aware of the object itself. When I speak of a cognitive relation here, I do not mean the sort of relation which constitutes judgment, but the sort which constitutes presentation.” Thus in the overview effect, I have a direct cognitive relation to the whole of the Earth, not in terms of judgment, but as a presentation. Intuitively, I think that Russell’s formulation works quite well as an explication of the epistemic significance of the overview effect.
Russell described knowledge by description as follows:
I shall say that an object is “known by description” when we know that it is “the so-and-so,” i.e. when we know that there is one object, and no more, having a certain property; and it will generally be implied that we do not have knowledge of the same object by acquaintance. We know that the man with the iron mask existed, and many propositions are known about him; but we do not know who he was. We know that the candidate who gets most votes will be elected, and in this case we are very likely also acquainted (in the only sense in which one can be acquainted with some one else) with the man who is, in fact, the candidate who will get most votes, but we do not know which of the candidates he is, i.e. we do not know any proposition of the form “A is the candidate who will get most votes” where A is one of the candidates by name. We shall say that we have “merely descriptive knowledge” of the so-and-so when, although we know that the so-and-so exists, and although we may possibly be acquainted with the object which is, in fact, the so-and-so, yet we do not know any proposition “a is the so-and-so,” where a is something with which we are acquainted.
There are a lot of interesting philosophical questions implicit in Russell’s exposition of knowledge by description; I am not going to pursue these at present, but will take Russell at his word. In the context of the overview effect, “the so-and-so” is “the planet on which human beings live,” and we know (to employ a Russellian formulation) that there is one and only one planet upon which human beings live, and moreover this planet is Earth. In fact, we know that it was a considerable achievement of scientific knowledge to come to the understanding that human beings live on a planet, and all this knowledge was achieved through knowledge by description. For the vast majority of human history, we were acquainted with the Earth, yet we did not know the proposition “x is the planet upon which human beings live” where x was something with which we were acquainted. This is almost as perfect an example as there could be of knowledge by description in the absence of knowledge by acquaintance.
In Russell’s distinction, idiographic personal experience is a kind of knowledge — knowledge by acquaintance — but is distinct from knowledge by description. What Russell called “knowledge by description” is a special case of non-constructive knowledge. Non-constructive reasoning is the logic of the big picture and la longue durée (cf. Six Theses on Existential Risk) — the scientific (in contradistinction to the personal) approach to the overview effect. Just as science has always been in the business of seeking an overview, so too science has long been in the business of elaborating knowledge by description, because in many cases this is the only way we can begin a scientific investigation, though in such cases we always begin with the hope that our knowledge by description can eventually be transformed into knowledge by acquaintance. In other words, we hope to become acquainted with the objects of knowledge we describe. Knowledge by description is here the theoretical framework of scientific knowledge in search of instances of acquaintance — evidence, experience, and experiment — to confirm the theory.
Although Russell was not a constructivist per se, his position in this essay is unambiguously constructive in so far as the thesis he maintains is that, “Every proposition which we can understand must be composed wholly of constituents with which we are acquainted” (italics in original). Russell’s foundation of knowledge in the personal experience of knowledge by acquaintance demonstrates that Russell and Kierkegaard have in common not only a conception of rigor, but also a commitment to the ultimate epistemic authority of individual experience.
Part of the importance of the overview effect is that it is a personal vision, such as I described in Kierkegaard and Futurism. The individuality of a personal vision is a function of the subjectivity of the individual, hence how the effect is experienced is as significant, if not more significant, than what is experienced.
An interesting result of this inquiry is not only to bring further philosophical resources to the analysis of the overview effect, but also to point the way to the further development of science. I have often emphasized that science is not a finished edifice of knowledge, but that science itself continues to grow, not only in the sense of continually producing scientific knowledge, but also in the sense of continuing to revise the scientific method itself. One of the most common objections one encounters when talking about science among those who take little account of science is the impersonal nature of scientific knowledge, and even a rejection of that same objectivity that it has been the pride of science to have attained. To fully appreciate the overview effect as a moment in the development of scientific knowledge is to understand that it may not only give us a new perspective on the world in which we live, but also a new perspective on how we attain knowledge of this world.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
27 December 2014
The human mind is a strange and complex entity, and while the mind possesses unappreciated subtlety (of the kind I attempted to describe in The Human Overview), rigorous thinking does not come naturally to it. Rigor is a hard-won achievement, not a gift. If we want to achieve some measure of conceptual clarity we must make a particular effort to think rigorously. This is not easy. If you let the mind do what comes naturally and easily to it, you will probably not be thinking rigorously, and you will probably not attain conceptual clarity.
But what is rigor? To ask this question puts us in a position not unlike Saint Augustine who asked, “What, then, is time?” If no one asks me, I know what rigor is. If I wish to explain it to him who asks, I do not know. What distinguishes rigorous thinking from ordinary thinking? And what distinguishes a rigorous life from an ordinary life? Is there any relation between the formal and existential senses of rigor?
As a first and rough approximation, we could say that rigor is the implementation of a precise idea of precision. Whether a precise idea of precision can be applied to the human condition (a question I have addressed in The Human Condition Made Rigorous) is the question of whether the formal sense of rigor is basic, and existential rigor an implementation of formal rigor in life.
Kierkegaard concerned himself with what I am here calling existential rigor, i.e., the idea of living a rigorous life. One of the central themes that runs through Kierkegaard’s substantial corpus is the question of how one becomes an authentic Christian in an inauthentic Christian society (though this is not how Kierkegaard himself expressed the problem that preoccupied him). Kierkegaard expresses himself in the traditional Christian idiom of suffering for the truth, but Kierkegaard’s suffering is not pointless or meaningless: it is conducive to existential rigor:
“My purpose is to make it difficult to become a Christian, yet not more difficult than it is, nor to make it difficult for stupid people, and easy for clever pates, but qualitatively difficult, and essentially difficult for every man equally, for essentially it is equally difficult for every man to relinquish his understanding and his thinking, and to keep his soul fixed upon the absurd; it is comparatively more difficult for a man if he has much understanding — if one will keep in mind that not everyone who has lost his understanding over Christianity thereby proves that he has any.”
KIERKEGAARD’S CONCLUDING UNSCIENTIFIC POSTSCRIPT, Translated from the Danish by DAVID F. SWENSON, PROFESSOR OF PHILOSOPHY AT THE UNIVERSITY OF MINNESOTA, Completed after his death and provided with Introduction and Notes by WALTER LOWRIE, PRINCETON: PRINCETON UNIVERSITY PRESS, p. 495
The whole of Kierkegaard’s book Attack Upon Christendom is an explicit attack upon “official” Christianity, which he saw as too safe, too comfortable, too well-connected to the machinery of the state. In Kierkegaard’s Denmark, no one was suffering in order to bear witness to the truth of Christianity:
“…hundreds of men are introduced who instead of following Christ are snugly and comfortably settled, with family and steady promotion, under the guise that their activity is the Christianity of the New Testament, and who live off the fact that others have had to suffer for the truth (which precisely is Christianity), so that the relationship is completely inverted, and Christianity, which came into the world as the truth men die for, has now become the truth upon which they live, with family and steady promotion — ‘Rejoice then in life while thy springtime lasts’.”
Søren Kierkegaard, Attack Upon Christendom, Princeton: Princeton University Press, 1946, p. 42
And from Kierkegaard’s journals…
“Could you not discover some way in which you too could help the age? Then I thought, what if I sat down and made everything difficult? For one must try to be useful in every possible way. Even if the age does not need ballast I must be loved by all those who make everything easy; for if no one is prepared to make it difficult it becomes all too easy — to make things easy.”
Søren Kierkegaard, The Soul of Kierkegaard: Selections from His Journals, 1845, p. 93
Kierkegaard is full of such passages, and if you read him through you will probably find more compelling instances of this idea than the quotes I have plucked out above.
Kierkegaard called into question the easy habits of belief that we follow mostly without questioning them; Russell called into question the intuitions that come naturally to us, to the human mind, and which we mostly do not question. Both Kierkegaard and Russell thought there was value in doing things the hard way, not in order to court difficulty for its own sake, but rather for the different perspective it affords us by not simply doing what comes naturally, but having to think things through for ourselves.
Russell’s approach to rigor is superficially antithetical to that of Kierkegaard. While Kierkegaard was interested in the individual and his individual existence, Russell was interested in universal logical principles that had nothing to do with individual existence. William James once wrote to Russell, “My dying words to you are ‘Say good-by to mathematical logic if you wish to preserve your relations with concrete realities!'” Russell’s response was perfect deadpan: “As for the advice to say goodbye to mathematical logic if I wish to preserve my relation with concrete realities, I am not wholly inclined to dispute its wisdom. But I should push it farther, & say that it would be well to give up all philosophy, & abandon the student’s life altogether. Ten days of standing for Parliament gave me more relations to concrete realities than a lifetime of thought.”
Nevertheless, beyond these superficial differences, both Kierkegaard and Russell understood, each in his own way, that the easy impulse must be resisted. A passage from Bertrand Russell that I previously quoted in The Overview Effect in Formal Thought makes this point for formal rigor:
“The fact is that symbolism is useful because it makes things difficult. (This is not true of the advanced parts of mathematics, but only of the beginnings.) What we wish to know is, what can be deduced from what. Now, in the beginnings, everything is self-evident; and it is very hard to see whether one self-evident proposition follows from another or not. Obviousness is always the enemy to correctness. Hence we invent some new and difficult symbolism, in which nothing seems obvious. Then we set up certain rules for operating on the symbols, and the whole thing becomes mechanical. In this way we find out what must be taken as premiss and what can be demonstrated or defined.”
Bertrand Russell, Mysticism and Logic, “Mathematics and the Metaphysicians”
“There is a good deal of importance to philosophy in the theory of symbolism, a good deal more than at one time I thought. I think the importance is almost entirely negative, i.e., the importance lies in the fact that unless you are fairly self conscious about symbols, unless you are fairly aware of the relation of the symbol to what it symbolizes, you will find yourself attributing to the thing properties which only belong to the symbol. That, of course, is especially likely in very abstract studies such as philosophical logic, because the subject-matter that you are supposed to be thinking of is so exceedingly difficult and elusive that any person who has ever tried to think about it knows you do not think about it except perhaps once in six months for half a minute. The rest of the time you think about the symbols, because they are tangible, but the thing you are supposed to be thinking about is fearfully difficult and one does not often manage to think about it. The really good philosopher is the one who does once in six months think about it for a minute. Bad philosophers never do.”
Bertrand Russell, Logic and Knowledge: Essays 1901-1950, 1956, “The Philosophy of Logical Atomism,” I. “Facts and Propositions,” p. 185
For Russell, the use of symbols in reasoning constitutes a reformulation of the intuitive in a counter-intuitive form, and this makes it possible for us to struggle toward the truth without being distracted by matters that seem so obvious that our cognitive biases lead us toward deceptive obviousness instead of toward the truth. There is another name for this: defamiliarization (which I previously discussed in Reversing the Process of Defamiliarization). Great art defamiliarizes the familiar in order to present it to us again, anew, in unfamiliar terms. In this way we see the world with new eyes. Just so, the reformulation of intuitive thought in counter-intuitive forms presents the familiar to us in unfamiliar terms and we see our reasoning anew with the mind’s eye.
Intuitions have their place in formal thought. I have in the past written of the tension between intuition and formalization that characterizes formal thought, as well as of the place of intuition in philosophical argument (cf. Doing Justice to Our Intuitions: A 10 Step Method). But if intuitions have their place, they also have their limitations, and the making of easy things difficult is a struggle against the limitations of intuition. What Kierkegaard and Russell have in common in their conception of rigor is that of making something ordinarily easy into something difficult in order to overcome the limitations of the natural and the intuitive. All of this may sound rather arcane and confined to academic squabbles, but it is in fact quite directly related to the world situation today.
I have often written about the anonymity and anomie of life in industrial-technological civilization; this is a familiar theme that has been worked through quite extensively in twentieth century sociology, and one could argue that it is also a prominent element in existentialism. But the human condition in the context of our civilization today is not only marked by anonymity and anomie, but also by high and rising standards of living, which usually translates directly into comfort. While we are perhaps more bereft of meaning than ever, we are also more comfortable than ever before in history. This has also been studied in some detail. Occasionally this combination of a comfortable but listless life is called “affluenza.”
Kierkegaard’s defamiliarization of (institutionalized and inauthentic) Christianity was intended to make Christianity difficult for bourgeois worldlings; the militant Islamists of our time want to make Islam difficult and demanding for those who would count themselves Muslims. It is the same demand for existential rigor in each that is the motivation. If it is difficult to understand why young men at the height of their prowess and physical powers can be seduced into extremist militancy, one need only reflect for a moment on the attraction of difficult things and the earned honors of existential rigor. The west has almost completely forgotten the attraction of difficult things. What remains is perhaps the interest in “extreme” sports, in which individuals test themselves against contrived physical challenges, which provides a kind of existential rigor along with bragging rights.
Extremist ideologies offer precisely the two things for which the individual hungers but cannot find in contemporary industrialized society: meaning, and a challenge to his complacency. An elaborately worked out eschatological conception of history shows the individual his special place within the grand scheme of things (this is the familiar ground of cosmic warfare and the eschatological conception of history), but this eschatological vision is not simply handed for free to the new communicant. He must work for it, strive for it, sacrifice for it. And when he has proved himself equal to the demands placed upon him, then he is rewarded with the profoundly satisfying gift of an earned honor: membership in a community of the elect.
This view is not confined to violent extremists. We meet with this whenever someone makes the commonplace remark that we don’t value that which is given away for free, and Spinoza expressed the thought with more eloquence: “All noble things are as difficult as they are rare.” Anyone who feels this pull of difficult things, who desires a challenge, who wants to be tested in order to prove their worth in the only way that truly counts, is an existentialist in action, if not in thought, because it is the existentialist conception of authenticity that is operative in this conception of existential rigor.
We have tended to think of pre-modern societies, mostly agrarian-ecclesiastical civilization, with their rigid social hierarchies and inherited social positions, as paradigmatic examples of inauthentic societies, but we have managed to create a thoroughly inauthentic society in the midst of our industrial-technological civilization. This civilization and its social order may have its origins in the overturning of the inauthentic social order of earlier ages, but, after an initial period of social experimentation, the present social order ossified and re-created many of the inauthentic and hierarchical forms that characterized the overthrown social order.
Inauthentic societies are awash in unearned advantages. I wrote about this earlier in discussing the urban austerity of Simone Weil, the wilderness austerity of Christopher McCandless (also known as Alexander Supertramp), and comparing the two in Weil and McCandless: Another Parallel:
“…the accomplishments of the elite and the privileged are always tainted by the fact that what they have attained has not been earned. But it is apparent that there are always a few honest individuals among the privileged who are acutely aware that their position has not been earned, that it is tainted, and the only way to prove that one can make it on one’s own is to cut one’s ties to one’s privileged background and strike out on one’s own.”
There is a certain sense in which the available and ample comforts of industrial-technological civilization have transformed the greater part of the global population into complacent consumers who accept an inauthentic life. There is another name for this too; Nietzsche called such individuals Last Men.
. . . . .
. . . . .
. . . . .
. . . . .
2 August 2014
A Century of Industrialized Warfare:
Europe Erupts in Popular Support for War
Sunday 02 August 1914
With the general mobilization of the great powers of Europe — news once again rapidly broadcast around the world by the mass media — it was now obvious that the July Crisis was no longer merely a crisis but that a European-wide war was in the near future. With mobilization, men in the millions were moving around their respective countries, and preparing to be transported to the frontier, where battles would soon commence. What was the response of the European populace? Elation. The capitals of Europe erupted with celebrations that we now call the “August Madness.”
Many photographs of the spontaneous demonstrations of public support for the just-declared war can be found at And so it begins… Images from 1914. The most famous image from the August Madness (reproduced above) shows Hitler in a crowd of thousands in Munich. The photograph may be a forgery, but the outpouring of public enthusiasm at the Odeonsplatz in Munich on 02 August 1914, which Hitler, then 25, did in fact attend, was real enough.
Bertrand Russell provided some of the most interesting commentary on the August Madness in his Autobiography. Will Durant called Bertrand Russell, “…an almost mystic communist born out of the ashes of a mathematical logician… He impressed one, in 1914, as cold-blooded, as a temporarily animated abstraction, a formula with legs… the Bertrand Russell who had lain so long buried and mute under the weight of logic and mathematics and epistemology, suddenly burst forth, like a liberated flame, and the world was shocked to find that this slim and anemic-looking professor was a man of infinite courage, and a passionate lover of humanity.” (The Story of Philosophy, Chapter Ten, 3, I-II, the whole passage goes on for several pages and is well worth reading) It was as a passionate lover of humanity that Russell found himself repeatedly shocked by the war hysteria of August 1914. The same day Hitler was celebrating in the Odeonsplatz in Munich, Russell recounted his evening stroll around Trafalgar Square:
I spent the evening walking round the streets, especially in the neighbourhood of Trafalgar Square, noticing cheering crowds, and making myself sensitive to the emotions of passers-by. During this and the following days I discovered to my amazement that average men and women were delighted at the prospect of war. I had fondly imagined, what most pacifists contended, that wars were forced upon a reluctant population by despotic and Machiavellian governments. I had noticed during previous years how carefully Sir Edward Grey lied in order to prevent the public from knowing the methods by which he was committing us to the support of France in the event of war. I naively imagined that when the public discovered how he had lied to them, they would be annoyed; instead of which, they were grateful to him for having spared them the moral responsibility.
Bertrand Russell, The Autobiography of Bertrand Russell, Chapter 8 “The First War”
Russell was both horrified and unable to comprehend the celebratory atmosphere:
The first days of the War were to me utterly amazing. My best friends, such as the Whiteheads, were savagely warlike. Men like J. L. Hammond, who had been writing for years against participation in a European War, were swept off their feet by Belgium. As I had long known from a military friend at the Staff College that Belgium would inevitably be involved, I had not supposed important publicists so frivolous as to be ignorant on this vital matter.
With the advent of mass society, the mass support of the population was necessary for a major war effort, and the European public obligingly provided this support to every nation-state that declared war and began mobilization. This public support for and vicarious participation in the war (at least in its early days) may be considered an additional trigger or escalation that turned what might have been just another localized Balkan war into a global conflict.
Russell admitted that he did not foresee how destructive the war would be, which is as much saying that he, like everyone else, had no idea what a global industrialized war would be like, but already as the war was beginning he was learning lessons from the experience and changing his views on the humanity, the love of which defined his pacifism:
Although I did not foresee anything like the full disaster of the War, I foresaw a great deal more than most people did. The prospect filled me with horror, but what filled me with even more horror was the fact that the anticipation of carnage was delightful to something like ninety per cent of the population. I had to revise my views on human nature. At that time I was wholly ignorant of psycho-analysis, but I arrived for myself at a view of human passions not unlike that of the psychoanalysts. I arrived at this view in an endeavour to understand popular feeling about the War. I had supposed until that time that it was quite common for parents to love their children, but the War persuaded me that it is a rare exception. I had supposed that most people liked money better than almost anything else, but I discovered that they liked destruction even better. I had supposed that intellectuals frequently loved truth, but I found here again that not ten per cent of them prefer truth to popularity. Gilbert Murray, who had been a close friend of mine since 1902, was a pro-Boer when I was not. I therefore naturally expected that he would again be on the side of peace; yet he went out of his way to write about the wickedness of the Germans, and the superhuman virtue of Sir Edward Grey. I became filled with despairing tenderness towards the young men who were to be slaughtered, and with rage against all the statesmen of Europe.
Bertrand Russell lived through the August Madness and saw its direct effect on friends and colleagues whom he had supposed would share his pacifism; rapidly disabused of this notion, he continued with his activism nevertheless and was eventually jailed. While in jail he wrote Introduction to Mathematical Philosophy, which the governor of the prison was obligated to read for seditious tendencies before it was allowed to be published.
By the end of the war, many shared Russell’s gloom, but it took years and the deaths of millions for this to happen, and by that time gloom had changed into something different that would ultimately shape twentieth century Europe in a way not unlike how the Black Death shaped fourteenth century Europe. One may think of such events as mass extinctions in miniature, which give a kind of intimation of what human extinction would look like.
. . . . .
. . . . .
A Century of Industrialized Warfare
8. The August Madness
. . . . .
. . . . .
. . . . .
. . . . .
24 May 2014
In my post on why the future doesn’t get funded I examined the question of unimaginative funding that locks up the better part of the world’s wealth in “safe” investments. In that post I argued that the kind of person who achieves financial success is likely to do so as a result of putting on blinders and following a few simple rules, whereas more imaginative individuals who want adventure, excitement, and experimentation in their lives are not likely to be financially successful, but they are more likely to have a comprehensive vision of the future — precisely what is lacking among the more stable souls who largely control the world’s financial resources.
Of course, the actual context of investment is much more complex than this, and individuals are always more interesting and more complicated than the contrasting caricatures that I have presented. But while the context of investment is more complicated than I have presented it in my previous sketch of venture capital investment, that complexity does not exonerate the unimaginative investors who have a more complex inner life than I have implied. Part of the complexity of the situation is a complexity that stems from self-deception, and I will now try to say something about the role of self-deception on the part of venture capitalists.
One of the problems with venture capital investments, and one of the reasons that I have chosen to write on this topic, is that the financial press routinely glorifies venture capitalists as financial visionaries who are midwives to the future, financing ventures that other more traditional investors and institutional investors would not consider. While it is true that venture capitalists do finance ventures that others will not finance, as I pointed out in the above-linked article, no one takes on risk for risk’s sake, so it is the most predictable and bankable of the ventures that haven’t been funded that get funding from these lenders of last resort.
Venture capitalists, I think, have come to rather enjoy their status in the business community as visionaries, and are often seen playing the role in their portentous pronouncements made in interviews with the Wall Street Journal and other organs of the financial community. By and large, however, venture capitalists are not visionaries. But many of them have gotten lucky, and herein lies the problem. If someone thinks that they understand the market and where it is going, and they make an investment that turns out to be successful, they will take this as proof of their understanding of the mechanisms of the market.
This is actually an old philosophical paradox that was in the twentieth century given the name of the Gettier paradox. Here’s where the idea comes from: many philosophers have defined knowledge as justified true belief (something that I previously discussed in A Note on Plantinga). I myself object to this definition, and hold, in the Scholastic tradition, that something known is not a belief, and something believed cannot be said to be known. So, as I see it, knowledge is no kind of belief at all. Nevertheless, many philosophers persist in defining knowledge as justified true belief, even though there is a problem with this definition. The problem with the definition of knowledge as justified true belief is the Gettier paradox. The Gettier paradox is the existence of counter-examples that are obviously not knowledge, but which are both true and justified.
Before this idea was called the Gettier paradox, Bertrand Russell wrote about it in his book Human Knowledge. When stated in terms of “non-defeasibility conditions” and similar technical ideas, the Gettier paradox sounds rather daunting, but it is actually quite a simple idea, and one that Russell identified with simple examples:
“It is clear that knowledge is a sub-class of beliefs: every case of knowledge is a case of true belief, but not vice versa. It is very easy to give examples of true beliefs that are not knowledge. There is the man who looks at a clock which is not going, though he thinks it is, and who happens to look at it at the moment when it is right; this man acquires a true belief as to the time of day, but cannot be said to have knowledge. There is the man who believes, truly, that the last name of the Prime Minister in 1906 began with a B, but who believes this because he thinks that Balfour was Prime Minister then, whereas in fact it was Campbell Bannerman. There is the lucky optimist who, having bought a lottery ticket, has an unshakeable conviction that he will win, and, being lucky, does win. Such instances can be multiplied indefinitely, and show that you cannot claim to have known merely because you turned out to be right.”
Bertrand Russell, Human Knowledge: Its Scope and Limits, New York: Simon and Schuster, 1964, pp. 154-155
Of Russell’s three examples, I like the first best because it so clearly delineates the idea of justified true belief that fails to qualify as knowledge. You look at a stopped clock that indicates noon, and it happens to be noon. You infer from the hands on the dial that it is noon. That inference is your justification. It is, in fact, noon, so your belief is true. But this justified true belief is based upon accident and circumstance, and we would not wish to reduce all knowledge to accident and circumstance. Russell’s last example involves an “unshakeable conviction,” that is to say, a particular state of belief (what analytical philosophers today might call a doxastic context), so it isn’t as pure an example of justified true belief as the others.
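The logical structure of the stopped-clock case can be made explicit in a small toy model. This is only an illustrative sketch of my own, not anyone’s formal epistemology: the names `Belief`, `is_knowledge`, and the `reliable` flag are hypothetical labels for the intuition that knowledge requires the justification to actually track the truth, rather than the truth holding by accident.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    proposition: str
    true: bool        # does the belief match the facts?
    justified: bool   # does the believer have apparent grounds for it?
    reliable: bool    # did those grounds actually track the facts?

def is_justified_true_belief(b: Belief) -> bool:
    # The classical definition: knowledge = justified true belief.
    return b.true and b.justified

def is_knowledge(b: Belief) -> bool:
    # Gettier cases show JTB is not enough: the truth must not be accidental.
    return is_justified_true_belief(b) and b.reliable

# Russell's stopped clock: it reads noon, and it happens to be noon.
stopped_clock = Belief(
    proposition="It is noon",
    true=True,        # it really is noon
    justified=True,   # a clock face is ordinarily good evidence
    reliable=False,   # but the clock is stopped: being right is an accident
)

assert is_justified_true_belief(stopped_clock)  # JTB is satisfied...
assert not is_knowledge(stopped_clock)          # ...yet it is not knowledge
```

The point of the sketch is simply that the two predicates come apart: any counter-example where `is_justified_true_belief` holds but `is_knowledge` fails is a Gettier case.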
An individual’s understanding of history is often replete with justified true beliefs that aren’t knowledge. We look at the record of the past and we think we understand, and things do seem to turn out as we expected, and yet we still do not have knowledge of the past (or of the present, much less of the future). When we read the tea leaves wrongly, we are right for the wrong reasons, and when we are right for the wrong reasons, our luck will run out, sooner rather than later.
Contemporary history — the present — is no less filled with misunderstandings: we believe that we understand what is happening, we anticipate certain events on the basis of these beliefs, and the events that we anticipate do come to pass. This problem compounds itself, because each prediction borne out raises the confidence of the investor, who is then more likely to trust his judgments in the future. To be right for the wrong reasons is to be deceived into believing that one understands that which one does not understand, while to be wrong for the right reason is to truly understand, and to understand better than before, because one’s views have been corrected and one understands both that they have been corrected and how they have been corrected. Growth of knowledge, in true Popperian fashion, comes from criticism and falsification.
This problem is particularly acute with venture capitalists. A venture capital firm early in its history makes a few good guesses and becomes magnificently wealthy. (We don’t hear about the individuals and firms that fail right off the bat, because they disappear; this is called survivorship bias.) This is the nature of venture capital; you invest in a number of enterprises expecting most to fail, but the one that succeeds succeeds so spectacularly that it more than makes up for the other failures. But the venture capital firm comes to believe that it understands the direction that the economy is headed. They no longer think of themselves as investors, but as sages. These individuals and firms come to exercise an influence over what gets funded and what does not get funded that is closely parallel to the influence that, say, Anna Wintour, has over fashion markets.
Few venture capital firms can successfully follow up on the successes that initially made them fabulously wealthy. Some begin to shift to more conservative investments, and their portfolios can come to look more like that of the sage of Omaha than a collection of risky start-ups. Others continue to try to stake out risky positions, and fail almost as spectacularly as their earlier successes. The obvious example here is the firm of Kleiner Perkins.
Kleiner Perkins focused on a narrow band of technology companies at a time when tech stocks were rapidly increasing, also known as the “tech bubble.” Anyone who invested in tech stocks at this time, prior to the bubble bursting, made a lot of money. Since venture capital focuses on short-term start-up funding, the firm was especially positioned to profit from a boom that quickly spiraled upward before it crashed back down to earth. In short — and this is something everyone should understand without difficulty — they were in the right place at the right time. After massive losses they threw a sop to their injured investors by cutting fees and tried to make it look like they were doing something constructive by restructuring their organization — also known as “rearranging the deck chairs on the Titanic.” But they still haven’t learned their lesson, because instead of taking classic VC risks with truly new ideas, they are relying on people who “proved” themselves at the tech start-ups that Kleiner Perkins glaringly failed to fund, Facebook and Twitter. This speaks more to mortification than confidence. Closing the barn door after the horse has escaped isn’t going to help matters.
Again, this is a very simplified version of events. Actual events are much more complex. Powerful and influential individuals who anticipate events can transform that anticipation into a self-fulfilling prophecy. There are economists who have speculated that it was George Soros’ shorting of the Thai Baht that triggered the Asian financial crisis of 1997. So many people thought that Soros was right that they started selling off Thai Baht, which may have triggered the crisis. Many smaller economies now take notice when powerful investors short their currency, taking preemptive action to head off speculation turning into a stampede. Similarly, if a group of powerful and influential investors together back a new business venture, the mere fact that they are backing it may turn an enterprise that might have failed into a success. This is part of what Keynes meant when he talked about the influence of “animal spirits” on the market.
What Keynes called “animal spirits” might also be thought of as cognitive bias. I don’t think that one can put too much emphasis on the role of cognitive bias in investment decisions, and especially on the role of the substitution heuristic when it comes to pricing risk. In Global Debt Market Roundup I noted this:
It seems that China's transition from an export-led growth model to a consumer-led growth model based on internal markets is re-configuring the global commodities markets, as producers of raw materials and feedstocks are hit by decreased demand while manufacturers of consumer goods stand to gain. I think that this influence on global markets is greatly overstated, as China's hunger for materials for its industry will likely decrease gradually over time (a relatively predictable risk), while the kind of financial trainwreck that comes from disregarding political and economic instability can happen very suddenly, and this is a risk that is difficult to factor in because it is almost impossible to predict. So are economists assessing the risk they know, following what Daniel Kahneman calls a "substitution heuristic," that is, answering a question they can answer because the question at issue is too difficult or intractable to calculate? I believe this to be the case.
Most stock pickers simply don't have what it takes to understand the political dynamics of a large (and especially an unstable) nation-state, so instead of trying to engage in the difficult task of puzzling out the actual risk, an easier question is substituted for the difficult question that cannot be answered. And thus it is that even under political conditions in which wars, revolution, and disruptive social instability could result in an historically unprecedented loss or expropriation of wealth, investors find a way to convince themselves that it is okay to return their money to a region (or to an enterprise) likely to mismanage any funds that are invested. The simpler way to put this is to observe that greed gets ahead of good sense and due diligence.
Keynes thought that animal spirits (i.e., cognitive biases) were necessary to the functioning of the market. Perhaps he was right. Perhaps venture capital also can't function without investors believing themselves to be right, and believing that they understand what is going on, when in fact they are wrong and they do not understand what is going on. But unless good sense and due diligence are allowed to supplement animal spirits, a day of reckoning will come when apparent gains unravel and some unlucky investor or investors are left holding the bag.
. . . . .