Friday


The University of Toronto more than a hundred years ago, in 1910.

When I attempt to look back on my personal history in a spirit of dispassionate scientific inquiry, I find that I readily abandon entire regions of my past in my perhaps unseemly hurry to develop the next idea, eager to see where it leads me. Moreover, contemplating one’s personal history can be a painful and discomfiting experience, so that, in addition to the headlong rush into the future, there is the desire to dissociate oneself from past mistakes, even when these past mistakes were provisional positions, known at the time to be provisional, but which were nevertheless necessary steps in beginning (as well as continuing) the journey of self-discovery, which is at the same time a journey of discovering the world and one’s place in the world.

In my limited attempts to grasp my personal history as an essential constituent of my present identity, among all the abandoned positions of my past I find that I understood two important truths about myself early in life (i.e., in my teenage years), even if I did not formulate them explicitly, but only acted intuitively upon things that I immediately understood in my heart-of-hearts. One of these truths is that I have never been, am not now, and never will be either of the left or of the right. The other is that, despite having been told many times that I should have pursued higher education, and despite the fact that most individuals who share my interests are in academia, I am not cut out for academia, whether temperamentally, psychologically, or socially — notwithstanding the fact that, of necessity, I have had to engage in alienated labor in order to support myself, whereas if I had pursued a career in academia, I might have earned a living by dint of my intellectual efforts.

The autodidact is a man with few if any friends (I could tell you a few stories about this, but I will desist at present). The non-partisan, much less the anti-partisan, is a man with even fewer friends. Adults (unlike childhood friends) tend to segregate along sectional lines, as in agrarian-ecclesiastical civilization we once segregated ourselves even more rigorously along sectarian lines. If you do not declare yourself, you will find yourself outside every ideologically defined circle of friends. And I am not claiming to be in the middle; I am not claiming to strike a compromise between left and right; I am not claiming that I have transcended left and right; I am not claiming that I am a moderate. I claim only that I belong to no doctrinaire ideology.

It has been my experience that, even if you explicitly and carefully preface your remarks with a disavowal of any political party or established ideological position, if you give voice to a view that one side takes to be representative of the other side, they will immediately take your disavowal of ideology to be a mere ruse, and perhaps a tactic in order to gain a hearing for an unacknowledged ideology. The partisans will say, with a knowing smugness, that anyone who claims not to be partisan is really a partisan on the other side — and both sides, left and right alike, will say this. One then finds oneself in overlapping fields of fire. This experience has only served to strengthen my non-political view of the world; I have not reacted against my isolation by seeking to fall into the arms of one side or the other.

This non-political perspective — which I am well aware would be characterized as ideological by others — that eschews any party membership or doctrinaire ideology now coincides with my sense of great retrospective relief that I did not attempt an academic career path. I have watched with horrified fascination as academia has eviscerated itself in recent years. I have thanked my lucky stars, but most of all I have thanked my younger self, for having understood that academia was not for me and for not having taken this path. If I had taken this path, I would myself be subject to the politicization of the academy, which in some schools means compulsory political education, increasingly rigid policing of language, and an institution making itself over, more and more, into the antithesis of the ideal pursuit of knowledge and truth.

But the university is a central institution of western civilization; it is the intellectual infrastructure of western civilization. I can affirm this even as an autodidact who has never matriculated in the university system. I have come to understand, especially in recent years, how it is the western way to grasp the world by way of an analytical frame of mind. The most alien, the most foreign, the most inscrutable otherness can be objectively and dispassionately approached by the methods of scientific inquiry that originated in western civilization. This character of western thought is far older than the scientific revolution, and almost certainly has its origins in the distinctive contribution of the ancient Greeks. As soon as medieval European civilization began to stabilize, the institution of the university emerged as a distinctive form of social organization that continues to this day. Since I value western civilization and its scientific tradition, I must also value the universities that have been the custodians of this tradition. It could even be said that the autodidact is parasitic upon the universities that he spurns: I read the books of academics; I benefit from the scientific research carried on at universities; my life and my thought would not have been possible except for the work that goes on in universities.

It is often said of the Abrahamic religions that they all pray to the same God. So too all who devote their lives to the pursuit of truth pay their respects to the same ancestors: academicians and their institutions look back to Plato’s Academy and Aristotle’s Lyceum, just as do I. We have the same intellectual ancestors, read the same books, and look to the same ideals, even if we approach those ideals differently. In the same way that I am a part of Christian civilization without being a Christian, in an expansive sense I am a part of the intellectual tradition of western civilization represented by its universities, even though I am not of the university system.

As an autodidact, I could easily abandon the western world, move to any place in the world where I was able to support myself, and immerse myself in another tradition, but western civilization means something to me, and that includes the universities of which I have never been a part, just as much as it includes the political institutions of which I have never been a part. I want to know that these sectors of society are functioning in a manner that is consistent with the ideals and aspirations of western civilization, even if I am not part of these institutions.

There are as many autodidacticisms as there are autodidacts; the undertaking is an essentially individual, indeed solitary one, even an individualistic one, and hence also an isolated one. Up until recently, in the isolation of my middle age, I had questioned my avoidance of academia. Now I no longer question this decision of my younger self, but am, rather, grateful that this is something I understood early in my life. But that does not exempt me from an interest in the fate of academia.

All of this is preface to a conflict unfolding in Canada that may call the fate of the academy into question. Elements at the University of Toronto have found themselves in conflict with a professor at the school, Jordan B. Peterson. Prior to this conflict I was not familiar with Peterson’s work, but I have been watching his lectures available on YouTube, and I have become an unabashed admirer of Professor Peterson. He has transcended the disciplinary silos of the contemporary university and brings an integrated approach to the western intellectual tradition.

Both Professor Peterson and his most vociferous critics are products of the contemporary university. The best that the university system can produce now finds itself in open conflict with the worst that the university system can produce. Moreover, the institutional university — by which I mean those who control the institutions and who make its policy decisions — has chosen to side with the worst rather than with the best. Professor Peterson noted in a recent update on his situation that the University of Toronto could have chosen to defend his free speech rights, and could have taken this battle to the Supreme Court of Canada if necessary, but instead the university chose to back those who would silence him. Thus even if the University of Toronto relents in its attempts to rein in the freedom of expression of its staff, it has already revealed what side it is on.

There are others fighting the good fight from within the institutions that have, in effect, abandoned them and turned against them. For example, Heterodox Academy seeks to raise awareness of the lack of viewpoint diversity in contemporary academia. Ranged against those defending the tradition of western scholarship are those who have set themselves up as revolutionaries engaged in the long march through the institutions, and every department that takes a particular pride in training activists rather than scholars, placing indoctrination before education and inquiry.

If freedom of inquiry is driven out of the universities, it will not survive in the rest of western society. When Justinian closed the philosophical schools of Athens in 529 AD (cf. Emperor Justinian’s Closure of the School of Athens) the western intellectual tradition was already on life support, and Justinian merely pulled the plug. It was almost a thousand years before the scientific spirit revived in western civilization. I would not want to see this happen again. And, make no mistake, it can happen again. Every effort to shout down, intimidate, and marginalize scholarship that is deemed to be dangerous, politically unacceptable, or offensive to some interest group, is a step in this direction.

To employ a contemporary idiom, I have no skin in the game when it comes to universities. It may be, then, that it is presumptuous for me to say anything. Mostly I have kept my silence, because it is not my fight. I am not of academia. I do not enjoy its benefits and opportunities, and I am not subject to its disruptions and disappointments. But I must be explicit in calling out the threat to freedom of inquiry. Mine is but a lone voice in the wilderness. I possess no wealth, fame, or influence that I can exercise on behalf of freedom of inquiry within academia. Nevertheless, I add my powerless voice to those who have already spoken out against the attempt to silence Professor Peterson.

. . . . .

Origins of Globalization

20 December 2015

Sunday



The politics of a word

It is unfortunate to have to use the word “globalization,” as it is a word that came into vogue rapidly and passed out of vogue with equal rapidity, and as it passed out of vogue it became spattered with a great many unpleasant associations. I had already noted this shift in meaning in my book Political Economy of Globalization.

In its earliest uses, “globalization” had a positive connotation; while “globalization” could be used in an entirely objective economic sense as a description of the planetary integration of industrialized economies, this idea was almost always delivered with a kind of boosterism. One cannot be surprised that the public rapidly tired of hearing about globalization, and it was perhaps the sub-prime mortgage crisis that delivered the coup de grâce.

In much recent use, “globalization” has taken on a negative connotation, with global trade integration, and the sociopolitical disruption it often causes, blamed for every ill on the planet. Eventually the hysterical condemnation of globalization will go the way of the earlier boosterism, and future generations will wonder what everyone was talking about at the end of the twentieth century and the beginning of the twenty-first. But in the meantime the world will have been changed, and these future generations will not care about globalization only because the process will have converged on its natural end.

Despite this history of unhelpful connotations, I must use the word, because if I did not use it, the relevance of what I am saying would probably be lost. Globalization is important, even if the word has been used in misleading ways; globalization is a civilizational-level transformation that leaves nothing untouched, because at the culmination of the process of globalization lies a new kind of civilization, planetary civilization.

I suspect that the reaction to “planetary civilization” would be very different from the reactions evoked by “globalization,” though the two are related as process to outcome. Globalization is the process whereby parochial, geographically isolated civilizations are integrated into a single planetary civilization. The integration of planetary civilization is being consolidated in our time, but it has its origins about five hundred years ago, when two crucial events began the integration of our planet: the Copernican Revolution and the Columbian exchange.

Copernicus continues to shape not only how we see the universe, but also our understanding of our place within it.

The Copernican Revolution

The intellectual basis of our world as a world, i.e., as a planet, and as one planet among other planets in a planetary system, is the result of the Copernican revolution. The Copernican revolution forces us to acknowledge that the Earth is one planet among planets. The principle has been extrapolated so that we eventually also acknowledged that the sun is one star among stars and our galaxy is one galaxy among galaxies, and eventually we will have to accept that the universe is but one universe among universes, though at the present level of the development of science the universe defines the limit of knowledge, because it represents the possible limits of observation. When we eventually transcend this limit, it will be due not to abandoning empirical evidence as the basis of science, but to extending empirical evidence beyond the limits observed today.

As one planet among many planets, the Earth loses its special status of being central in the universe, only to regain its special status as the domicile of an organism that can uniquely understand its status in the universe, overcoming the native egoism of any biological organism that survives first and asks questions later. Life that begins merely as self-replication and adds capacities until it can feel and eventually reason is probably rare in the universe. The unique moral qualities of a being derived from such antecedents, but able to transcend the exigencies of the moment, are the moral legacy of the Copernican Revolution.

As the beginning of the Scientific Revolution, the Copernican Revolution is also part of a larger movement that would ultimately become the basis of a new civilization. Industrial-technological civilization is a species of scientific civilization; it is science that provides the intellectual infrastructure that ties together scientific civilization. Science is uniquely suited to its unifying role, as it constitutes the antithesis of the various ethnocentrisms that frequently define pre-modern forms of civilization, which thereby exclude even as they expand imperially.

Civilization unified sub specie scientiae is unified in a way that no ethnic, national, or religious community can be organized. Science is exempt from the Weberian process of defining group identity through social deviance, though this is not well understood, and, because it is not well understood, it is often misrepresented. The exclusion of non-science from the scope of science is often assimilated to Weberian social deviance, though it is something else entirely. Science is selective on the basis of empirical evidence, not social convention. While social convention is endlessly malleable, empirical evidence is unforgiving in the demarcation it makes between what falls within the scope of the confirmable or disconfirmable, and what falls outside this scope. Copernicus began the process of bringing the world entire within this scope, and in so doing changed our conception of the world.

An early encounter between the New World and the Old.

The Columbian Exchange

While the Copernican Revolution provided the intellectual basis of the unification of the world as a planetary civilization, the Columbian Exchange provided the material and economic basis of that unification. In the wake of the voyages of discovery of Columbus and Magellan, and of the many others who followed, transatlantic trade immediately began to exchange goods between the Old World and the New World, which had hitherto been geographically isolated from each other. The biological consequences of this exchange were profound, which meant that the impact on biocentric civilization was transformative.

We know the story of what happened — even if we do not know this story in detail — because it is the story that gave us the world that we know today. Human beings, plants, and animals crossed the Atlantic Ocean and changed the ways of life of people everywhere. New products like chocolate and tobacco became cash crops for export to Europe; old products like sugar cane thrived in the Caribbean Basin; invasive species moved in; indigenous species were pushed out or became extinct. Maize and potatoes rapidly spread to the Old World and became staple crops on every inhabited continent.

There is little in the economy of the world today that does not have its origins in the Columbian exchange, or was not prefigured in the Columbian exchange. Prior to the Columbian exchange, long-distance trade was a trickle of luxuries exchanged between peoples who never met each other, at the distant ends of a chain of middlemen that spanned the Eurasian continent. The world we know today, of enormous ships moving countless shipping containers around the world like so many chess pieces on a board, has its origins in the Age of Discovery and the great voyages that connected each part of the world to every other part.


Defining planetary civilization

In my presentation “What kind of civilizations build starships?” (at the 2015 Starship Congress) I proposed that civilizations could be defined (and given a binomial nomenclature) by employing the Marxian distinction between intellectual superstructure and economic infrastructure. This is why I refer to civilizations in hyphenated form, like industrial-technological civilization or agrarian-ecclesiastical civilization. The first term gives the economic infrastructure (what Marx also called the “base”) while the second term gives the intellectual superstructure (which Marx called the ideological superstructure).

In accord with this approach to specifying a civilization, the planetary civilization bequeathed to us by globalization may be defined in terms of its intellectual superstructure by the Copernican revolution and in terms of its economic infrastructure by the Columbian exchange. Thus terrestrial planetary civilization might be called Columbian-Copernican civilization (though I don’t intend to employ this name as it is not an attractive coinage).

Planetary civilization is the civilization that emerges when geographically isolated civilizations grow until all civilizations are contiguous with some other civilization or civilizations. It is interesting to note that this is the opposite of allopatric speciation; biological evolution cannot function in reverse in this way, reintegrating that which has branched off, but the evolution of mind and civilization can bring divergent branches of cultural evolution back together into a new synthesis.


Not the planetary civilization we expected

While the reader is likely to have a different reaction to “planetary civilization” than to “globalization,” both are likely to be misunderstood, though misunderstood in different ways and for different reasons. Discussing “planetary civilization” is likely to evoke utopian visions of our Earth not only intellectually and economically unified, but also morally and politically unified. The world today is in fact unified economically and, somewhat less so, intellectually (in industrialized economies science has become the universal means of communication, and mathematics is the universal language of science), but unification of the planet by trade and commerce has not led to political and moral unification. This is not the planetary civilization once imagined by futurists, and, as with most futurisms, once the future arrives we do not recognize it for what it is.

There is a contradiction in the contemporary critique of globalization that abhors cultural homogenization on the one hand, while on the other hand bemoans the ongoing influence of ethnic, national, and religious regimes that stand in the way of the moral and political unification of humankind. It is not possible to have both. In so far as the utopian ideal of planetary civilization aims at the moral and political unification of the planet, it would by definition result in a cultural homogenization of the world far more destructive of traditional cultures than anything seen so far in human civilization. And in so far as the fait accompli of scientific and commercial unification of planetary civilization fails to develop into moral and political unification, it preserves cultural heterogeneity.

Incomplete globalization, incomplete planetary civilization

The process of globalization is not yet complete. China is nearing the status of a fully industrialized economy, and India is making the same transition, albeit more slowly and by another path. The beginnings of the industrialization of Africa are to be seen, but this process will not be completed for at least a hundred years, and maybe it will require two hundred years.

Imperfect though it is, we have today a planetary civilization (an incomplete planetary civilization) as the result of incomplete globalization, and that planetary civilization will continue to take shape as globalization runs its course. When the processes of globalization are exhausted, planetary civilization will be complete, in so far as it remains exclusively planetary, but if civilization makes the transition to spacefaring before the process of globalization is complete, our civilization will assume no final (or mature) form, but will continue to adapt to changed circumstances.

From these reflections we can extrapolate the possibility of distinct large-scale structures of civilizational development. Civilization might transition from parochial, to planetary, and then to spacefaring, not making the transition to the next stage until the previous stage is complete. That would mean completing the process of globalization and arriving at a mature planetary civilization without developing a demographically significant spacefaring capacity (this seems to be our present trajectory of development). Alternatively, civilizational development might be much more disorderly, with civilizations repeatedly preempted as unprecedented emergents derail orderly development.

. . . . .

Tuesday


William of Ockham, one of the greatest philosophers of the late Middle Ages, is remembered today primarily for his formulation of the principle of parsimony, also called Ockham’s razor.

A medieval logician in the twenty-first century

In the discussion surrounding the unusual light curve of the star KIC 8462852, Ockham’s razor has been mentioned numerous times. I have written a couple of posts interpreting the light curve of KIC 8462852 in light of Ockham’s razor, i.e., KIC 8462852 and Parsimony and Plenitude in Cosmology.

What is Ockham’s razor, exactly? Well, that is a matter of philosophical dispute (and I offer my own more precise definition below), but even if it is difficult to say what Ockham’s razor is exactly, we can say something about what it was originally. Philotheus Boehner, a noted Ockham scholar, wrote of Ockham’s razor:

“It is quite often stated by Ockham in the form: ‘Plurality is not to be posited without necessity’ (Pluralitas non est ponenda sine necessitate), and also, though seldom: ‘What can be explained by the assumption of fewer things is vainly explained by the assumption of more things’ (Frustra fit per plura quod potest fieri per pauciora). The form usually given, ‘Entities must not be multiplied without necessity’ (Entia non sunt multiplicanda sine necessitate), does not seem to have been used by Ockham.”

William of Ockham, Philosophical Writings: A Selection, translated, with an introduction, by Philotheus Boehner, O.F.M., Indianapolis and New York: The Library of Liberal Arts, The Bobbs-Merrill Company, Inc., 1964, Introduction, p. xxi

Most references to (and even most uses of) Ockham’s razor are informal and not very precise. In Maybe It’s Time To Stop Snickering About Aliens, which I linked to in KIC 8462852 Update, Adam Frank wrote of Ockham’s razor in relation to KIC 8462852:

“…aliens are always the last hypothesis you should consider. Occam’s razor tells scientists to always go for the simplest explanation for a new phenomenon. But even as we keep Mr. Occam’s razor in mind, there is something fundamentally new happening right now that all of us, including scientists, must begin considering… the exoplanet revolution means we’re developing capacities to stare deep into the light produced by hundreds of thousands of boring, ordinary stars. And these are exactly the kind of stars where life might form on orbiting planets… So we are already going to be looking at a lot of stars to hunt for planets. And when we find those planets, we are going to look at them for basic signs that life has formed. But all that effort means we will also be looking in exactly the right places to stumble on evidence of not just life but intelligent, technology-deploying life.”

Here the idea of Ockham’s razor is present, but little more than the idea. Rather than merely invoking the idea of Ockham’s razor, and merely assuming what constitutes simplicity and parsimony, if we are going to profitably employ the idea today, we need to develop it more fully in the context of contemporary scientific knowledge. In KIC 8462852 I wrote:

“One can see an emerging adaptation of Ockham’s razor, such that explanations of astrophysical phenomena are first explained by known processes of nature before they are attributed to intelligence. Intelligence, too, is a process of nature, but it seems to be rare, so one ought to exercise particular caution in employing intelligence as an explanation.”

In a recent post, Parsimony and Emergent Complexity, I went a bit further and suggested that Ockham’s razor can be formulated with greater precision in terms of emergent complexity, such that no phenomenon should be explained in terms of a level of emergent complexity higher than that necessary to explain the phenomenon.

De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres) is the seminal work on the heliocentric theory of the Renaissance astronomer Nicolaus Copernicus (1473–1543). The book, first printed in 1543 in Nuremberg, Holy Roman Empire, offered an alternative model of the universe to Ptolemy’s geocentric system, which had been widely accepted since ancient times. (Wikipedia)

De revolutionibus orbium coelestium and its textual history

Like Darwin three centuries later, Copernicus hesitated to publish the big book that explained his big idea, i.e., heliocentrism. Both men, Darwin and Copernicus, understood the impact that their ideas would have, though both probably underestimated the eventual influence of these ideas; both were to transform the world and to leave entire cosmologies as their legacy. The particular details of the Copernican system are less significant than the Copernican idea, i.e., the Copernican cosmology, which, like Ockham’s razor, has gone on to a long career of continuing influence.

Darwin eventually published in his lifetime, prompted by the “Ternate essay” that Wallace sent him, but Copernicus put off publishing until the end of his life. It is said that Copernicus was shown a copy of the first edition of De revolutionibus on his deathbed (though this is probably apocryphal). Copernicus, of course, lived much closer to the medieval world than did Darwin — one could well argue that Toruń and Frombork in the fifteenth and sixteenth centuries were still the medieval world — so we can readily understand Copernicus’ hesitation to publish. Darwin published in a world already transformed by industrialization, already wrenched by unprecedented social change; Copernicus eventually published in a world that, while on the brink of profound change, had not appreciably changed in a thousand years.

Copernicus’ hesitation meant that he did not directly supervise the publication of his manuscript, that he was not able to correct or revise subsequent editions (Darwin revised On the Origin of Species repeatedly, through six distinct editions in his lifetime, not including translations), and that he was not able to respond to the reception of his book. All of these conditions were to prove significant in the reception and propagation of the Copernican heliocentric cosmology.

Copernicus, after long hesitation, was stimulated to pursue the publication of De revolutionibus by his contact with Georg Joachim Rheticus, who traveled to Frombork for the purpose of meeting Copernicus. Rheticus, who had great respect for Copernicus’ achievement, came from the hotbed of renaissance and Protestant scholarship that was Wittenberg. He took Copernicus’ manuscript to Nuremberg to be published by a noted scientific publisher of the day, but he did not stay to oversee the entire publication of the work. This job was handed off to Andreas Osiander, a Protestant theologian who sought to water down the potential impact of De revolutionibus by adding a preface suggesting that Copernicus’ theory should be accepted in the spirit of an hypothesis employed for the convenience of calculation. Osiander did not sign this preface, and many readers of the book, when it eventually came out, took this preface to be the authentic Copernican interpretation of the text.

Osiander’s preface, and Osiander’s intentions in writing the preface (and changing the title of the book), continue to be debated to the present day. This debate cannot be cleanly separated from the tumult surrounding the Protestant Reformation. Luther and the Lutherans were critical of Copernicus — they had staked the legitimacy of their movement on Biblical literalism — but one would have thought that Protestantism would have been friendly to the work of Ockham, given Ockham’s conflict with the Papacy, Ockham’s fideism, and his implicit position as a critic of Thomism. (I had intended to read up on the Protestant interpretation of Ockham prior to writing this post, but I haven’t yet gotten to this.) The parsimony of Copernicus’ formulation of cosmology, then, was a mixed message to the early scientific revolution in the context of the Protestant Reformation.

Both Rheticus and Copernicus’ friend Tiedemann Giese were indignant over the unsigned and unauthorized preface by Osiander. Rheticus, by some accounts, was furious, and felt that the book and Copernicus had been betrayed. He pursued legal action against the printer, but it is not clear that the printer was at fault for the preface. While Rheticus suspected Osiander of being the author of the preface, this was not confirmed until some time later, when Rheticus had moved on to other matters, so Osiander was never pursued legally over the preface.

Nicolaus Copernicus (1473–1543) — Mikołaj Kopernik in Polish, and Nikolaus Kopernikus in German

Copernicus’ Ockham

The most common reason adduced for preferring Copernican cosmology to Ptolemaic cosmology is not that one is true and the other false (though this certainly is a reason to prefer Copernicus) but rather that the Copernican cosmology is the simpler and more straightforward explanation of the observed movements of the stars and the planets. The Ptolemaic system can predict the movements of stars, planets, and the moon (within margins of error acceptable in its time), but it does so by way of a much more complex and cumbersome method than that of Copernicus. Copernicus was radical in the disestablishment of traditional cosmological thought, but once beyond that first radical step of displacing the Earth from the center of the universe (a process we continue to iterate today), the solar system fell into place according to a marvelously simple plan that anyone could understand once it was explained: the sun at the center, and all the planets revolving around it. From the perspective of our rotating and orbiting Earth, the other planets, also orbiting the sun, appear to reverse in their course, but this is a mere artifact of our position as observers. Once Copernicus can convince the reader that, despite the apparent solidity of the Earth, it is in fact moving through space, everything else falls into place.

One of the reasons that theoretical parsimony and elegance played such a significant role in the reception of Copernicus — and even the theologians who rejected his cosmology employed his calculations to clarify the calendar, so powerful was Copernicus’ work — was that the evidence given for the Copernican system was indirect. Even today, only a handful of the entire human population has ever left the planet Earth and looked down on it from above — seeing Earth from the perspective of the overview effect — and so acquired direct evidence of the Earth in space. No one, no single human being, has hovered above the solar system entire and looked down upon it, and so obtained the most direct evidence of the Copernican theory — this is an overview effect that we have not yet attained. (NB: in The Scientific Imperative of Human Spaceflight I suggested the possibility of a hierarchy of overview effects as one moves further out from Earth.)

The knowledge that we have of our solar system, and indeed of the universe entire, is derived from observations and from deductions from those observations. Moreover, seeing the truth of Copernican heliocentrism would require not only an overview in space, but also an overview in time, i.e., one would need to hover over our solar system for hundreds of years to see all the planets revolving around the common center of the sun, and one would have to remain focused, all the while, on observing the solar system in order to have “seen” the entire process — a feat beyond the limitations of the human lifetime, not to mention human consciousness.

Copernicus himself did not mention the principle of parsimony or Ockham’s razor, and certainly did not mention William of Ockham, though Ockham was widely read in Copernicus’ time. The principle of parsimony is implicit, even pervasive, in Copernicus, as it is in all good science. We don’t want to account for the universe with Rube Goldberg-like contraptions as our explanations.

In a much later era of scientific thought — in the scientific thought of our own time — Stephen Jay Gould wrote an essay titled “Is uniformitarianism necessary?” in which he argued that uniformitarianism in geology had simply come to mean that geology follows the scientific method. Similarly, one might well argue that parsimony is no more necessary than uniformitarianism, and that what content of parsimony remains is simply coextensive with the scientific method. To practice science is to reason in accordance with Ockham’s razor, but we need not explicitly invoke or apply Ockham’s razor, because its prescriptions are assimilated into the scientific method. And indeed this idea fits in quite well with casual references to Ockham’s razor such as the one I quoted above. Most scientists do not need to think long and hard about parsimony, because parsimonious formulations are already a feature of the scientific method. If you follow the scientific method, you will practice parsimony as a matter of course.

Copernicus’ Ockham, then, was the Ockham already absorbed into nascent scientific thought. Perhaps it would be better to say that parsimony is implicit in the scientific method, and that Copernicus, in implicitly following a scientific method that had not yet, in his time, been made explicit, was following the internal logic of the scientific method and its parsimonious demands for simplicity.

Andreas Osiander (19 December 1498 – 17 October 1552) was a German Lutheran theologian who oversaw the publication of Copernicus’ De revolutionibus and added an unsigned preface that many attributed to Copernicus.

Osiander’s Ockham

Osiander was bitterly criticized in his own time for his unauthorized preface to Copernicus, though many immediately recognized it as a gambit to allow the reception of Copernicus’ work to involve the least amount of controversy. As I noted above, the Protestant Reformation was in full swing, and the events that would lead up to the Thirty Years’ War were beginning to unfold. Europe was a powder keg, and many felt that it was the better part of valor not to touch a match to any issue that might explode. All the while, others were doing everything in their power to provoke a conflict that would settle matters once and for all.

Osiander not only added the unsigned and unauthorized preface, but also changed the title of the whole work from De revolutionibus to De revolutionibus orbium coelestium, adding a reference to the heavenly spheres that was not in Copernicus. This, too, can be understood as a concession to the intellectually conservative establishment — or it can be seen as a capitulation. But it was the preface, and what the preface claimed as the proper way to understand the work, that was the nub of the complaint against Osiander.

Here is a long extract of Osiander’s unsigned and unauthorized preface to De revolutionibus, not quite the whole thing, but most of it:

“…it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth. But neither of them will understand or state anything certain, unless it has been divinely revealed to him.”

Nicholas Copernicus, On the Revolutions, translation and commentary by Edward Rosen, The Johns Hopkins University Press, Baltimore and London

If we eliminate the final qualification, “unless it has been divinely revealed to him,” Osiander’s preface is a straightforward argument for instrumentalism. Osiander recommends Copernicus’ work because it gives the right results; we can stop there, and need not make any metaphysical claims on behalf of the theory. This ought to sound very familiar to the modern reader, because this kind of instrumentalism has been common in positivist thought, and especially so since the advent of quantum theory. Quantum theory is the most thoroughly confirmed theory in the history of science, confirmed to a degree of precision almost beyond comprehension. And yet quantum theory still lacks an intuitive correlate. Thus we use quantum theory because it gives us the right results, but many scientists hesitate to give any metaphysical interpretation to the theory.

Copernicus was a staunch scientific realist, as were those most convinced of his theory, like Rheticus. He did not propose his cosmology as a mere system of calculation, but insisted that his theory was the true theory describing the motions of the planets around the sun. It follows from this uncompromising scientific realism that other theories are not merely less precise in calculating the movements of the planets, but false. Scientific realism accords with common sense realism in the idea that there is a correct account of the world, and that other accounts which deviate from the correct account are false. But we all know that scientific theories are underdetermined by the evidence. To formulate a law is to go beyond the finite evidence and to be able to predict an infinitude of possible future states of the phenomenon in question.

Scientific realism, then, is an ontologically robust position, and this ontological robustness is a function of the underdetermination of the theory by the evidence. Osiander says of Copernicus’ hypotheses that, “if they provide a calculus consistent with the observations, that alone is enough.” So Osiander is not willing to go beyond the evidence and posit the truth of an underdetermined theory. Moreover, Osiander was willing to maintain empirically equivalent theories, “since different hypotheses are sometimes offered for one and the same motion.” Given empirically equivalent theories that can both “provide a calculus consistent with the observations,” why would one theory be favored over another? Osiander states that the astronomer will prefer the simplest explanation (hence explaining Copernicus’ position) while the philosopher will seek a semblance of truth. Neither, however, can know what this truth is without divine revelation.

Osiander’s Ockham is the convenience of the astronomer to seek the simplest explanation for his calculations; the astronomer is justified in employing the simplest explanation of the most precise method available to calculate and predict the course of the heavens, but he cannot know the truth of his theory unless that truth is guaranteed by some outside and transcendent evidence not available through science — a deus ex machina for the mind.

Copernicus stands at the beginning of the scientific revolution, and he stands virtually alone.

The origins of the scientific revolution in Copernicus

Copernicus’ Ockham was ontological parsimony; Osiander’s Ockham was methodological parsimony. Are we forced to choose between the two, or are we forced to find a balance between ontological and methodological parsimony? These are still living questions in the philosophy of science today, and there is a sense in which it is astonishing that they appeared so early in the scientific revolution.

As noted above, the world of Copernicus was essentially a medieval world. Toruń and Frombork were far from the medieval centers of learning in Paris and Oxford, and about as far from the renaissance centers of learning in Florence and Nuremberg. Nevertheless, the new cosmology that emerged from the scientific revolution, and which is still our cosmology today, continuously revised and improved, can be traced to the Baltic coast of Poland in the late fifteenth and early sixteenth century. The controversy over how to interpret the findings of science can be traced to the same root.

The conventions of the scientific method were established in the work of Copernicus, Galileo, and Newton; it was these seminal thinkers who established the conventions that later science would follow. Like the cosmologies of Copernicus, Galileo, and Newton, the scientific method has also been continuously revised and improved. That Copernicus grasped in essence as much of the scientific method as he did, working in near isolation far from the intellectual centers of western civilization, demonstrates both the power of Copernicus’ mind and the power of the scientific method itself. As implied above, once grasped, the scientific method has an internal logic of its own that directs the development of scientific thought.

The scientific method — methodological naturalism — exists in an uneasy partnership with scientific realism — ontological naturalism. We can see that this tension was present right from the very beginning of the scientific revolution, before the scientific method was ever formulated, and the tension continues down to the present day. Contemporary analytical philosophers discuss the questions of scientific realism in highly technical terms, but it is still the same debate that began with Copernicus, Rheticus, and Osiander. Perhaps we can count the tension between methodological naturalism and ontological naturalism as one of the fundamental tensions of scientific civilization.

. . . . .

Updates and Addenda

This post began as a single sentence in one of my notebooks, and continued to grow as I worked on it. As soon as I posted it I realized that the discussions of scientific realism, instrumentalism, and methodological naturalism in relation to parsimony could be much better. With additional historical and philosophical discussion, this post might well be transformed into an entire book. So for the questioning reader, yes, I understand the inadequacy of what I have written above, and that I have not done justice to my topic.

Shortly after posting the above, Paul Carr pointed out to me that the joint ESA-NASA Ulysses deep-space mission sent a spacecraft to study the poles of the sun, so we have sent a spacecraft out of the plane of the solar system, one that could “look down” on our star and its planetary system, although the mission was not designed for this and had no cameras on board. If we did position a camera “above” our solar system, it would be able to take pictures of our heliocentric solar system. This, however, would still be somewhat indirect evidence — more direct than deductions from observations, but not as direct as seeing with one’s own eyes — like the famous picture of the “blue marble” Earth, which is an overview experience for those of us who have not been into orbit or to the moon, but which is not quite the same as going into orbit or to the moon.

Paul Carr also drew my attention to Astronomy Cast Episode 390: Occam’s Razor and the Problem with Probabilities, with Fraser Cain and Pamela Gay, which discusses Ockham’s razor in relation to positing aliens as a scientific explanation.

. . . . .

Thursday


Starry Night Over the Rhone

2014 IBHA Conference

Yesterday I drove all day long from Portland to San Rafael, California, to attend the second IBHA conference, “Teaching and Researching Big History: Big Picture, Big Questions,” being held at the Dominican University of California. IBHA stands for “International Big History Association,” while “big history” is a contemporary approach to historiography that emphasizes telling the whole story of history from the big bang to the present day, and that unifies scientific and humanistic approaches to history. Several of the leading figures in the field of big history are present, and many of them have spoken of how they came to the idea of big history, and of how they were essentially doing big history long before there was a name for it. I can identify with this, as I was myself groping toward something like big history, which I at one time called integral history.

David Christian

The conference began with a plenary session featuring David Christian, who spoke on “Big History: A Personal Voyage.” David Christian is the most visible face of big history. He began by posing the question, “How do you segue from the smallest scales to the largest scales?” and he gave the first suggestion of an answer by using Van Gogh’s painting “Starry Night over the Rhone” (reproduced above) to show the unity of the eight levels of emergent complexity identified by big historians, from the stars in the sky to the two human figures in the foreground. Christian said that he had been encouraged to give a personal view of his journey to big history, and that for him it began with an initial disillusionment: he began school with great enthusiasm, thinking that this would be a place where big questions would be welcomed, and quickly found out that this was not the case. Big history, he said, gives us a framework in which to meaningfully ask big questions.

Christian also said that “mapping is meaning” — and by “mapping” he meant not only conventional maps, but also “maps of time,” which is the title of one of his books. If it is true that mapping is meaning, this implies that the lack of a map is the lack of meaning. We lack maps of time, hence we lack the meaning we crave. We all know that meaninglessness has been a touchstone of modernity. It was a central theme of existentialism, and Christian referred to Durkheim’s use of “anomie” (from the Greek a-nomos, the negation of law). Christian pointed out that there are two responses to anomie: the conventional response, which is that anomie is part of modernity, so accept it for what it is, and the big history response, which is that we are in the midst of constructing a new conception of the world, so our disorientation is understandable, but will not necessarily be a permanent feature of the human condition from now on.

Christian spoke for more than an hour, so there was a lot to take in, and I can’t even give a sketch of the whole presentation here. It was videotaped, so perhaps by the time you read this it will be available online. I especially like the fact that Christian referred to himself as a “framework thinker.” This strikes me as particularly apt, and I think that all big thinkers who try to see the big picture (and hence are attracted to big history) are framework thinkers.

Robert Conner

The second speaker at the plenary session was Robert Conner, a likeable classicist who covered a lot of ground in his talk. Being a classicist, he formulated his perspective in terms of the Greeks, but the principles were in no sense parochial to the west’s classical antiquity. Conner was especially concerned with the difference between those who see education as a matter of acquiring habits of mind, and those who see education primarily as the communication of a particular story. That is to say, he contrasted history — and, by implication, big history — as an analytical inquiry with history as the preservation of the memory of the past.

Conner developed this theme (by way of a detour through Herodotus and Thucydides) toward the idea of learning and education appropriate to a free people. He framed this in terms of “putting questions to the past that will be useful to us now.” I was a bit surprised, after this, that he did not mention Nietzsche’s essay “The Advantages and Disadvantages of History for Life,” since it covers almost exactly the same ground. It would also have been relevant to bring up T. S. Eliot’s “Tradition and the Individual Talent” in this context, substituting the historical tradition (largely humanistic, rather than scientific) for the literary and poetic tradition that interested Eliot.

David LePoire discussing energy flows.

Complexity (1)

After the plenary session the conference broke up into five rooms with presentations going on concurrently (which ensures that attendees will miss a large part of the program because you can’t be in two different rooms at once, though you can move, which is disruptive). I chose to go to the room with the theme of complexity, featuring presentations by David LePoire, David Baker, and J. Daniel May.

David LePoire spoke on “Two Contrasting Views of Energy and Time Scales,” in which he discussed (among other topics) how higher energy flows into systems can force a reorganization of these energy flows by way of a bifurcation. I’m not at all sure that I understood LePoire (though I picked up a list of his papers so that I can review them at some later date), but I took this to mean that a system that has been stable may become unstable when too much energy begins to flow through it, and that at this point it bifurcates into two systems, at least one of which is at a higher level of emergent complexity that is able to remain stable and to thrive at these higher energy levels. If this is what LePoire meant, it seems perfectly sensible to me, and all the discussion (see below) about civilization and energy levels then suggests that once we pump too much energy through civilization, civilization will bifurcate, perhaps producing what I have elsewhere called a post-civilizational institution that can presumably remain stable at these higher energy levels.

David Baker spoke on “The Darwinian Algorithm: An Extension of Rising Complexity as a Unifying Theme of Big History,” which was concerned with universal Darwinism, which I take to be equivalent to what is elsewhere called universal selection theory. The influence of Eric Chaisson was apparent again here — Chaisson’s name comes up repeatedly, and many expressed disappointment that he is not at this conference — as Baker described how he used Chaisson’s free energy rate density to formulate universal Darwinism in a big history context. There was a lot of discussion about this after the talk, but what was most interesting to me was that Baker formulated Chaisson’s ideas on energy flows in the language of Kardashev, though without mentioning Kardashev by name. Paraphrasing from memory, he said that a Type I civilization would utilize the energy flows of an entire planet, a Type II civilization would utilize the energy flows of an entire star, and a Type III civilization would utilize the energy flows of an entire galaxy. As I have a particular interest in collecting variations on the theme of Kardashev’s civilization types, I was particularly interested to hear this substitution of “energy flows” for the quantitative approach that Kardashev took to civilization and energy. Indeed, I have now come to realize that Kardashev’s civilization types may be considered an early, non-constructive approach to civilization’s use of energy, whereas the big history approaches now being pursued in the shadow of Chaisson may be thought of as constructive expressions of the same essential idea.
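As an aside of my own, and not anything Baker presented: the quantitative scale for which “energy flows” is here standing in is usually given in Carl Sagan’s continuous interpolation of Kardashev’s types, which rates a civilization commanding a total power P, measured in watts, as

K = (\log_{10} P - 6) / 10

so that P of roughly 10^16 watts yields K = 1 (planetary energy flows), roughly 10^26 watts yields K = 2 (stellar), and roughly 10^36 watts yields K = 3 (galactic); on this formula, present-day terrestrial civilization, at something like 10^13 watts, comes out near K = 0.7.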

J. Daniel May, not in the printed program, spoke on “Complexity by the Numbers.” May is an instructor in big history at Dominican University (which has a required course on big history for all students), and he was concerned with the practical pedagogical problem of getting students to understand the unifying theme of emergent complexity; to this end he had been collecting clear examples of qualitative change linked to the quantitative change of a single metric. I thought that this was a very effective approach. He cited examples such as the decrease of the temperature of the early universe and the emergence of matter, the mass of a proto-stellar nebula and the kind of star that forms from it, and the direct and familiar relationship between the number of protons in the nucleus of an atom and the different properties of the different elements.
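
May’s last example is easy to make concrete (the snippet below is my own illustration, not from his presentation): stepping a single quantitative metric, the number of protons, by whole units yields a qualitatively different element at every step:

# Atomic number, a single quantitative metric, fixes elemental identity.
ELEMENTS = {1: "hydrogen", 2: "helium", 3: "lithium", 6: "carbon",
            8: "oxygen", 26: "iron", 79: "gold"}

for protons, name in sorted(ELEMENTS.items()):
    print(f"Z = {protons:2d} -> {name}")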

Theories of Thresholds

Closely related to the problem of emergent complexity is the problem of thresholds in big history. This session was supposed to consist of three speakers, one by Skype from Moscow, but the Skype connection didn’t work out, so there were two presentations, “An Alternative scheme of Thresholds and historical turning points” by William McGaughey and “Using Marshall Hodgson’s Concept of Transmutations to Advance our Understanding of Thresholds in the Human Historical Experience” by John Mears. Because the third speaker could not be connected via Skype, the two presentations were followed by an extended question and answer session that was both interesting and enlightening.

John Mears raised a number of traditional historiographical problems in a big history context, especially concerning what he called, “the unavoidable problem of periodization” and “the inherent pitfalls of periodization.” I can sympathize with this, as I have struggled with periodization myself. Mears mentioned some of his minor differences over periodization with other big historians — he cited a particular example from the new big history textbook, which did not include Chaisson’s transition from the “energy era” of the universe to the “matter era” — but acknowledged in a very open way that there are many possible approaches to big history periodization. This fit in well with William McGaughey’s presentation, which was concerned to describe a periodization that concluded with the rapid rise of automation and artificial intelligence — a topic much discussed in technology circles today, especially in relation to technological unemployment.

Mears also discussed the need for a more rigorous theoretical framework for big history, and this is something with which I strongly agree, and one of the things I hoped to learn by attending this conference was who is working on just this problem, and how they are going about it. This was an implicit theme in other presentations, but Mears made it fully explicit, though without giving a definitive formulation of an answer to the problem.

Opening Reception

After the initial day of presentations there was an evening reception for all involved, with many interesting conversations going on simultaneously. I was disappointed to have to miss so many presentations that sounded interesting because of the format of the conference. While a single session severely limits the number of presentations that can be made, splitting up the conference into five or six groups really fragments things. I think it would be better to keep the division to two or three concurrent sessions.

My overall reflection on the first day of the conference concerned the ongoing division between scientific and humanistic historiography, which is precisely what big history is supposed to overcome. In the extensive discussion after the “Theories of Thresholds” presentations, the traditional historiographical question was asked — Is history a science, or does it belong to the humanities? — and, despite this being a gathering of historians, the question was not taken up in its historical context. History began as a literary genre, then it became one among the humanities, and now it is becoming a science. All of these approaches still exist side by side.

There is a division among participants between those coming from a primarily science background and those with a more traditional background in history, where “traditional” here means “humanities-based historiography.” Big historians are determined to bridge these diverse backgrounds, and to emerge from the “silos” of academic specialization — but it hasn’t happened yet.

. . . . .

Studies in Grand Historiography

1. The Science of Time

2. Addendum on Big History as the Science of Time

3. The Epistemic Overview Effect

4. 2014 IBHA Conference Day 1

5. 2014 IBHA Conference Day 2

6. 2014 IBHA Conference Day 3

7. Big History and Historiography

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Friday


A Century of Industrialized Warfare:

Ernst Jünger

Ernst Jünger is Mobilized


Saturday 01 August 1914

On Thursday 30 July 1914 Russia announced general mobilization. The next day, on Friday 31 July 1914, Germany declared Kriegsgefahr Zustand (danger of war) while France authorized full mobilization. One hundred years ago today, on Saturday 01 August 1914, with Russia failing to respond to Germany’s ultimatum to demobilize, Germany began full mobilization and declared war on Russia. The events that had been building through the July Crisis now broke in full force, and the major powers of Europe were mobilizing and declaring war. Among the fates of emperors, nations, and millions of people, one young soldier was mobilized, Ernst Jünger, whose life was to coincide with much of the violent twentieth century.

Ernst Jünger remains today a controversial figure, but also an influential one — much like Heidegger, who read Jünger carefully and even conducted a seminar on Jünger’s work. Jünger outlived both the First and Second World Wars, in which he fought, and continued to write, leaving a substantial literary corpus. He was sufficiently rehabilitated to appear with both French and German leaders at events commemorating the First World War. His masterpiece, In Stahlgewittern, translated as Storm of Steel, was a celebration of the “frontline experience” (Fronterlebnis) in all its horror and power. The book was much revised throughout Jünger’s life and appeared in many editions; the later editions carry the simple dedication, “To the Fallen,” as Jünger came to be seen as the voice of the frontline soldier of the First World War regardless of nationality.

But while Jünger’s reputation rested on his first and most powerful book, he was much more than a soldier who left a single compelling memoir. Between the wars Jünger wrote a number of provocative works — most never translated into English — and came to be seen as part of the “Conservative Revolution.” Whether the phrase “Conservative Revolution” is a term of art employed to distinguish Jünger from the Nazis, and to distance him from them, or whether there was a real difference between Nazi writers and writers of the Conservative Revolution, remains controversial today — again, for much the same reason that Heidegger remains controversial today.

Hugo Ott’s book on Heidegger, Martin Heidegger: A Political Life, only mentions Jünger in passing a few times, including this quasi-exculpatory passage from a de-nazification committee:

Prior to the revolution of 1933 the philosopher Martin Heidegger lived in a totally unpolitical intellectual world, but maintained friendly contacts (in part through his sons) with the youth movement of the day and with certain literary spokesmen for Germany’s youth — such as Ernst Jünger — who were heralding the end of the bourgeois-capitalist age and the dawning of a new German socialism. He looked to the National Socialist revolution to bring about a spiritual renewal of German life on a national-ethnic basis, and at the same time, in common with large sections of the German intelligentsia, a healing of social differences and the salvation of Western culture from the dangers of Communism. He had no clear grasp of the parliamentary-political processes that led up to the seizure of power by the National Socialists; but he believed in the historical mission of Hitler to bring about the spiritual and intellectual transformation that he himself envisaged.

Report of the Denazification Commission, Sept. 1945, Members: Prof. v. Dietze (chairman), Ritter, Oehlkers, Allgeier, Lampe. Quoted in Ott, Hugo, Martin Heidegger: A Political Life, New York: Basic Books, 1993, p. 324

In contrast, the most damning book yet written about Heidegger, Emmanuel Faye’s Heidegger: The Introduction of Nazism into Philosophy in Light of the Unpublished Seminars of 1933-1935, devotes several detailed pages to Jünger and Jünger’s influence on Heidegger. Faye’s reading of Jünger turns him into an enthusiastic Nazi, and this is not the reading usually given of Jünger’s relation to Nazism.

Whether Jünger is admired or deplored, he is one of the inescapable figures of the twentieth century, and it is his relationship to global industrialized warfare that has made Jünger into a pivotal figure. Many wrote on war and their experience of war; only Jünger fully revealed the changed character of war that reflected a new form of civilization.

The frontline experience that was central to Jünger’s Storm of Steel, and which was the bond of the quasi-fascist Freikorps in Germany during the inter-war period, deserves to be given an exposition as a countervailing account of the battlefield experience of the First World War. One of the most common claims made about the combat experience of the First World War is that it was exclusively an experience of terror and misery, in contrast to the possible adventure, edification, glory, and personal engagement of past combat environments. According to this narrative, the industrialization of war eliminated the possibility of honorable single combat, and the men who went to war were reduced to mere widgets in the war machine. From the First World War we have tiny figures clambering over enormous guns, which required crews of hundreds who operated this machinery dispassionately and without any personal connection to what they were doing, much as pilots for the first time bombed targets on the ground without seeing the lives they took. Killing became automated and impersonal.

What this conventional reading fails to tell us points to a fundamental and crucial aspect of the change that came to combat with the industrialization of war. Prior to the First World War, the structure of armies was a perfect mirror of the social structure of society. Not only was there the obvious distinction between officer corps, all of them aristocrats, and the foot soldiers, drawn from the lower classes of society, but even among the officers there was a feudal hierarchy. The higher one’s family in the peerage, the higher one could rise in military ranks, and the most desired spots in the army were reserved for those with the best connections. Thus highly coveted positions like being a mounted cavalry officer were only given to the sons of the “best” families, and in pre-industrialized warfare, the cavalry charge was the “highlight” of a battle in which the greatest glory was to be won.

When the First World War began, many believed it would be a replay of the Franco-Prussian war, complete with cavalry charges with swords drawn. In some places, the war did in fact start out like that, but this was not the primary experience of warfare after industrialization. The typical experience of a soldier in the Great War was to be one of many millions of men in the trenches. Most did not distinguish themselves in this uncompromising environment, but they slogged through and fought as best they could under the circumstances.

The fact that the first global industrialized war was a mass war predicated upon the mobilization of millions of men — the full participation of mass society in the war — meant that millions of men were exposed to the same stimulus, and different men responded differently to this stimulus. War exercised a selective effect in combat that could never effectively come into play with the rigidly feudal armed forces of ages past. While for the vast majority of men in the trenches the war was miserable, in addition to being an unprecedented horror, there were some few men who “found” themselves in combat, and who came to relish the excitement of trench raids and risking their lives. In Maslovian terms, for some men, war is a peak experience. It certainly seemed to have been so for Jünger.

It is often asserted that the last form of the personal duel in industrialized warfare was the experience of fighter pilots in dogfights — and, curiously, we sometimes read this side-by-side with the claim that air warfare is dehumanizing, impersonal, and technical. Everyone has heard of the Red Baron, and many have heard of the great aviation aces of the Second World War, but “aces” emerged in all forms of combat — in tanks, in submarines, and among frontline soldiers. These were men who intuitively mastered the new technologies and took to them as if by instinct. The personal duel, and the sense of honor intrinsic to this form of combat, lived on in global industrialized war, but it became a marginal experience, an outlier in the midst of the millions of men who went to war and who were in no sense suited for killing. The few who took to modern industrialized warfare represent only a very small fraction of the many millions who fought and died with no taste for war.

The distinctive Fronterlebnis, and those who flourished in this violent atmosphere, was not the typical experience of war, but it was a new experience of war, emergent from the changed social conditions under which the war was fought, and Jünger was its prophet.

. . . . .

1914 to 2014

. . . . .

A Century of Industrialized Warfare

0. A Century of Industrialized Warfare

1. Assassination in Sarajevo

2. Headlines around the World

3. The July Crisis

4. A Blank Check for Austria-Hungary

5. Serbia and Austria-Hungary Mobilize

6. Austria-Hungary Declares War on Serbia

7. Ernst Jünger is Mobilized

. . . . .

twentieth century war collage

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Thursday


Leopold von Ranke (1795 – 1886)

In George Orwell’s dystopian classic Nineteen Eighty-Four there occurs a well known passage that presents a frightening totalitarian vision of history:

“And if all others accepted the lie which the Party imposed — if all records told the same tale — then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. ‘Reality control’, they called it: in Newspeak, ‘doublethink’.”

George Orwell, Nineteen Eighty-Four, Part One, Chapter 3

What Orwell called, “…an unending series of victories over your own memory,” is something anticipated by Nietzsche, who, however, placed it in the context of pride rather than dissimulation:

“I have done that,” says my memory. “I cannot have done that,” says my pride, and remains inexorable. Eventually — memory yields.

Friedrich Nietzsche, Beyond Good and Evil: Prelude to a Philosophy of the Future, section 68

The phrase above identified as the “party slogan” — Who controls the past, controls the future: who controls the present controls the past — is often quoted out of context to give the misleading impression that this was asserted by Orwell as his own position. This is, rather, the Orwellian formulation of the Stalinist position. (Stalin reportedly hated both Nineteen Eighty-Four and Animal Farm.) The protagonist of Nineteen Eighty-Four, Winston Smith, is himself part of the totalitarian machinery, rewriting past newspaper articles so that they conform to current party doctrine, and re-touching photographs to erase individuals who had fallen out of favor — both of which Stalin presided over in fact.

The idea that the control over history entails control over the future, and the control over history is a function of control in the present, constitutes a political dimension to history. Winston Churchill (who is said to have enjoyed Nineteen Eighty-Four as much as Stalin loathed it) himself came close to this when he said that, “History will be kind to me for I intend to write it.” This political dimension to history is one of which Orwell and other authors have repeatedly made us aware. There is another political dimension to history that is more difficult to fully appreciate, because it requires much more knowledge of the past to understand.

More than mere knowledge of the past, which seems empirically unproblematic, it also requires an understanding of the theoretical context of historiography in order to fully appreciate the political dimension of history. The name of Leopold von Ranke is not well known outside historiography, but Ranke has had an enormous influence in historiography and this influence continues today even among those who have never heard his name. Here is the passage that made Ranke’s historiographical orientation — the idea of objective and neutral history that we all recognize today — the definitive expression of a tradition of historiographical thought:

“History has had assigned to it the office of judging the past and of instructing the present for the benefit of future ages. To such high offices the present work does not presume; it seeks only to show what actually happened.”

Leopold von Ranke, History of the Latin and Teutonic Nations

The deceptively simple phrase, what actually happened (in German: wie es eigentlich gewesen), became a slogan if not a rallying cry among historians. The whole of the growth of scientific historiography, to which I have referred in many recent posts — Scientific Historiography and the Future of Science and Addendum on Big History as the Science of Time among them — is entirely predicated upon the idea of showing what actually happened.

Sometimes, however, there is a dispute about what actually happened, and the historical record is incomplete or ambiguous, so that to get the whole story we must attempt to fill in the ellipses employing what R. G. Collingwood called the historical a priori imagination (cf. The A Priori Futurist Imagination). Historical extrapolation, placed in this Collingwoodian context, makes it clear that the differing ways in which the historical record is filled in and filled out is due to the use of different a priori principles of extrapolation.

I have noted that diachronic extrapolation is a particular problem in futurism, since it develops historical trends in isolation and thereby marginalizes the synchrony of events. So, too, diachronic extrapolation is a problem in historiography, as it fills in the ellipses of history by a straight-forward parsimonious extrapolation — as though one could unproblematically apply Ockham’s razor to history. (The symmetry of diachronic extrapolation in history and futurism nicely reveals how futurism is the history of the future and history the futurism of the past.) The political dimension of history is one of the synchronic forces that represents interaction among contemporaneous events, and this is the dimension of history that is lost when we lose sight of contemporaneous events.

There were always contemporaneous socio-political conflicts that defined the terms and the parameters of past debates; in many cases, we have lost sight of these past political conflicts, and we read the record of the debate on a level of abstraction and generality that it did not have as it occurred. In a sense, we read a sanitized version of history — not purposefully sanitized (although this is sometimes the case), not sanitized for propagandistic effect, but sanitized only due to our limited knowledge, our ignorance, our forgetfulness (at times, a Nietzschean forgetfulness).

Many historical conflicts that come down to us, while formulated in the most abstract and formal terms, were at the time political “hot button” issues. We remember the principles today, and sometimes we continue to debate them, but the local (if not provincial) political pressures that created these conflicts have often all but disappeared, and considerable effort is required to return to these debates and to recover the motivating forces. I have noted in many posts that particular civilizations are associated with particular problem sets, and following the dissolution of a particular civilization, the problems, too, are not resolved but simply become irrelevant — as, for example, the Investiture Controversy, which was important to agrarian-ecclesiastical civilization, but which has no parallel in industrial-technological civilization.

Some of these debates (like that of the Investiture Controversy) are fairly well known, and extensive scholarly research has gone into elucidating the political conflicts of the time that contributed to these debates. However, the fact that many of these past ideas — defunct ideas — are no longer relevant to the civilization in which we live makes it difficult to fully appreciate them as visceral motives in the conduct of public policy.

Among the most well-known examples of politicized historiography is what came to be called the Black Legend, which characterized the Spanish in the worst possible light. In fact, the Spanish were cruel and harsh masters, but that does not mean that every horrible thing said about them was true. But it is all too easy to believe the worst about people about whom one has reason to believe the worst, and to embroider stories with imagined details that become darker and more menacing over time. During the period of time in which the Black Legend originates, Spain was a world empire with no parallel, enforcing its writ in the New World, across Europe, and even in Asia (notably in the Philippines, named for the Spanish monarch Philip II). As the superpower of its day, Spain was inevitably going to be the target of smears, which only intensified as Spain became the leading Catholic power in the religious wars that so devastated Europe in the early modern period. Catholics called Protestants heretics, and Protestants called the Pope the Antichrist; in this context, political demonization was literal.

There are many Black Legends in history, often the result of conscious and purposeful propagandistic effort. There are also, it should be noted, white legends, also the work of intentional propaganda. White legends whitewash a chequered history — exactly the task that Stalin set for Soviet civilization and which Winston Smith undertook for Oceania.

. . . . .

Philip II of Spain (1527-1598)

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Tuesday


blind justice 1

Exemplary Justice and Show Trials

Exemplary justice is a very old idea, and it has its origins in the inability of a political entity to effectively enforce its writ. Thus the idea of exemplary justice grows out of an intrinsic limitation of early political societies. In brief, exemplary justice is to make an example of an individual. The horrific punishments that we read about in history are largely a function of exemplary justice: it was so unusual to capture an individual guilty of a crime that particularly brutal punishments were meted out as a deterrent. Thus the potential criminal would know that his risk of being caught was low, but that, if caught, the punishment would be so horrible that the low probability of capture was balanced by the disproportionate consequences of capture.
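
The implicit reasoning can be put in expected-cost terms (a toy model of my own, not a claim about how any historical legal system actually calculated): if the chance of capture falls by a factor of ten, the severity of punishment must rise tenfold to hold the expected penalty constant.

def expected_penalty(p_capture: float, severity: float) -> float:
    """Expected cost of an offense to the would-be offender."""
    return p_capture * severity

# A 50% chance of a mild penalty deters as much, in expectation, as a
# 5% chance of a penalty ten times as severe.
assert expected_penalty(0.50, 10.0) == expected_penalty(0.05, 100.0)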

It is surprisingly difficult to find contemporary sources discussing exemplary justice; contemporary philosophers of law and politics have had little or nothing to say on the topic. You will not find an entry on “exemplary justice” in any of the major dictionaries of philosophy (such works as I have cited in many previous posts), yet I found an exemplary characterization of exemplary justice from almost a hundred years ago:

“…exemplary justice, as is well known, aims to establish in the social mind a permanent association between the criminal deed and some painful consequence, in order to prevent the repetition of a similar deed in the future. This form of justice pays no regard to the offender; its attention is fixed only on the needs and welfare of society.”

Gustave A. Feingold, “The Association Reflex and Moral Development” in The Journal of Genetic Psychology, Volume 23, 1916, p. 473

Although the contemporary silence on exemplary justice might lead one to suppose that it no longer plays a role in contemporary society, in which the proportionality of retributive justice is carefully calibrated to the nature of the crime, there is one form of exemplary justice, however, that came of age in the twentieth century, and that is the show trial. The use of mass media — newspapers, magazines, radio, and television — to inflame public opinion was central to the mobilization of mass sentiment against an offender whose crime subverted principles upon which a given regime was founded.

The most notorious show trials of the twentieth century were stage-managed by the most notorious political regimes of the twentieth century — Soviet communism, Nazi Germany, and communist China under Mao. However, there is a sense in which we can consider the Scopes Trial as a show trial, so such events are not unique to dysfunctional regimes. This recent innovation in exemplary justice demonstrates that, despite its antiquity, the idea of exemplary justice continues to be relevant in our time and cannot be dismissed as a defunct idea.

Civil Disobedience and Popular Ideology

Even as the idea of exemplary justice has largely fallen out of public consciousness, another idea has taken its place, one closely related to exemplary justice, though the resemblance has not been widely recognized. I am speaking of civil disobedience. Unlike exemplary justice, the idea of civil disobedience is relatively recent, having its origins in the nineteenth century, and, quite specifically, in Henry David Thoreau’s essay, “On the Duty of Civil Disobedience.”

Unlike the idea of exemplary justice, civil disobedience is widely treated in contemporary literature. Here is a concise definition from a relatively recent source:

civil disobedience, a deliberate violation of the law, committed in order to draw attention to or rectify perceived injustices in the law or policies of a state.

The Cambridge Dictionary of Philosophy, 2nd Edition, Editor: Robert Audi, Cambridge et al.: Cambridge University Press, 1999, pp. 144-145

Civil disobedience, although a recent idea, proved to be one of the ideas that shaped the second half of the twentieth century. Mohandas Gandhi was influenced by Thoreau, and put Thoreau’s idea into practice as a mass movement in a country where the colonized masses so greatly outnumbered the colonizing forces that civil disobedience changed the direction of India’s modern history. After Gandhi, Martin Luther King, Jr. employed civil disobedience in the civil rights struggle in the United States, successfully turning public opinion against segregation laws, which might also be said to have changed the direction of US history.

There are few ideologies that have shaped the fate of nation-states in the twentieth century, as I have pointed out in several posts, especially in relation to environmentalism, which is one of those few ideologies (cf. Ideology in our Time). While civil disobedience is not precisely an ideology, it is not entirely independent of ideology. Civil disobedience can only be effective when the campaign against formal legal institutions has the sympathy of a sufficient number of individuals that social change can be effected by the direct action of these individuals. Thus the content of civil disobedience reflects populist sentiment.

Exemplary Justice and Civil Disobedience

There is a sense in which exemplary justice and civil disobedience are each the mirror image of the other. Civil disobedience could be called exemplary defiance of the law, in order to more explicitly contrast it with the exemplary enforcement of the law. One might say that civil disobedience aims to establish in the social mind a permanent association between injustice and some socially painful consequence.

Exemplary justice is the response of formal, legal institutions to their inability to enforce their writ; civil disobedience is the response of those subject to formal, legal institutions to the inability of those institutions to enforce their writ. Both, thus, are predicated upon the intrinsic limitations of political societies, though the first approaches this from the perspective of the state while the second approaches this from the perspective of the population of the state.

Both of these ideas implicitly recognize Weber’s definition of the state as the legal monopoly on violence; exemplary justice celebrates this legal monopoly on violence, using it to social ends beyond the limits of the use of this violence, while civil disobedience exploits the legal monopoly on violence by not even seeking to employ violence but rather to employ non-violence. If the state is a legal monopoly on violence, it does not also hold a monopoly on non-violence, leaving non-violent civil disobedience open as an avenue of protest against the state.

When one sovereign nation-state seeks to force another sovereign nation-state to do its will (a close approximation of Clausewitz’s definition of war, “War therefore is an act of violence intended to compel our opponent to fulfill our will”), it goes to war, or otherwise inflicts damage on the other nation-state. Each sovereign nation-state, reserving to itself a legal monopoly of violence, is free to use violence on other sovereign nation-states, and this is what we call war. The anarchic international system allows for the possibility of war through the de facto legitimization of redundant monopolies on violence.

Civil disobedience is parallel to war in its use of mass mobilization, and might be defined as, “an act of non-violence intended to compel our opponent to fulfill our will.”

The shift from state power to popular will is revelatory of the growth of popular sovereignty, which has been definitive of the modern era since the revolutions that shook the Western world, from the American Revolution of 1776 to the French Revolution of 1789 and then the revolutions throughout Latin America that resulted in decolonization and the formation of independent nation-states there.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Thursday


Life Lessons from Morally Compromised Philosophers

What are we to make of Heidegger? Was he a mere apologist for the Nazis, as Hegel was taken to be an apologist for Prussianism? Can the philosopher be salvaged from the ruin of the man, as one book recently put it?

With particular attention to the Heidegger case


I began this blog with the idea that I would write about current events from a philosophical perspective and said in my initial post that I wanted to see history through the prism of ideas. This continues to be my project, however imperfectly conceived or unevenly executed. It is a project that necessitates engagement both with the world and with philosophy simultaneously. And so it is that my posts have ranged widely over warfare and the history of ideas, inter alia, and as a consequence of this dual mandate I have often found myself reading and citing sources that are not the common run of reading for philosophers. Some philosophers, however, are both influential and controversial, and Martin Heidegger has become one such philosopher. Heidegger’s influence in philosophy has only grown since his death (primarily in Continental thought), but the controversy about his involvement with Nazism has kept pace and grown along with Heidegger’s reputation.

It may help my readers in the US to understand the impact of the Heidegger controversy to compare it to the intersection of evil and ideals in an iconic American thinker, taking as our example a man more familiar than Heidegger, who was an iconic continental thinker. Take Thomas Jefferson, for example. Some years ago (in 1998, to be specific) I saw two television documentaries about the life of Thomas Jefferson. The first was a typical laudatory television documentary about one of the American founding fathers (I didn’t take notes at the time, so I don’t know which documentary this was, but it may well have been the 1997 Ken Burns film about Jefferson, which I recently re-watched to confirm my memory of its ambiguous treatment of Jefferson’s relationship to his slaves), which touched upon the possibility of Jefferson fathering children by his slave Sally Hemings, while not taking the idea very seriously.

Then in 1998 the news came out of DNA tests that proved conclusively that Jefferson had fathered the children of his slave Sally Hemings, and the scientific nature of the evidence rapidly made inroads among Jefferson scholars, who had been slow to acknowledge Jefferson’s “shadow family” (as such families were once called in the Ante-Bellum South). The consensus of Jefferson scholars changed so rapidly that it makes one’s head spin — but only after two hundred years of denial. And there remain those today who continue to deny Jefferson’s paternity of Sally Hemings’ children.

Not long after this news was made public, I saw another documentary about Jefferson in which the whole issue was treated very differently; the perspective of this documentary accepted as unproblematic Jefferson’s paternity of Sally Hemings’ children, and examined Jefferson’s life and ideas in the light of this “shadow family.” I don’t think that Jefferson suffered at all from this latter documentary treatment; he definitely came across less as an icon and more as a fallible human being, which is not at all objectionable. It is, in fact, more human, and more believable.

Though Jefferson did not suffer in my estimation because he was revealed to be human, all-too-human, there is nevertheless something deeply disturbing about the image of Jefferson sitting down to dinner with his white family while being served at dinner by the mulatto children he sired with his slaves, and it is deeply disturbing in a way that is not at all unlike the way that it is deeply disturbing to know that when Heidegger met Karl Löwith in 1936 near Rome (two years after Heidegger left his Rectorship in Freiburg), Heidegger wore a Nazi swastika pin on his lapel the entire time, knowing that Löwith was a Jew who had been forced to flee Nazi Germany. One cannot but wonder, on a purely human level, apart from any ideology, how one person could be so utterly unconcerned with the well-being of another.

It would be disingenuous to attempt to defend the indefensible by making the claim that all intellectuals of Jefferson’s time were conflicted over slavery; this simply was not the case. Schopenhauer, for example, consistently wrote against slavery and never showed the slightest sign of wavering on the issue, but, of course, Schopenhauer’s income did not depend on slaves, while Jefferson’s did.

We know that Jefferson struggled mightily with the question of slavery in his later years, tying himself in knots, as conflicted men do, trying to square the actual record of his life with his ideals. It is easy to dismiss individuals, even those who have struggled with the contradictions in their lives, as mere hypocrites, but the charge of hypocrisy, while carrying great emotional weight, is the least interesting charge that can be made against a man’s ideas. As I wrote in my Variations on the Theme of Life, “The world is mendacious through and through; mendacity is the human condition. To renounce hypocrisy is to renounce the world and to institute an asceticism that cannot ever be realized in practice.” (section 169)

Heidegger does not seem to have been conflicted about his Nazism in the way that Jefferson was conflicted about slavery. Many years after the Second World War, when the record of the Nazi death camps was known to all, Heidegger could still refer to the “inner truth and greatness of this movement,” while in the meeting with Löwith mentioned above Heidegger was quite explicit that his political engagement with Nazism was a direct consequence of his philosophical views.

One obvious and well-trodden path for handling a philosopher’s political “indiscretions” is to hold that a philosopher’s theoretical works are a thing apart, elevated above the world like Plato’s Forms — one might even say sublated in the Hegelian sense: at once elevated, suspended, and canceled. This strategy allows one to read any philosopher and ignore any detail of life that one chooses. I don’t think that this approach contributes to intellectual honesty.

I myself was once among those who read philosophers for their philosophical ideas only, and while I was never a Heidegger enthusiast or a Heidegger defender, I thought of Heidegger’s political engagement with Nazism as mostly irrelevant to his philosophy. At some point, which I don’t clearly recall, I became intensely interested in Heidegger’s Nazism, and there was a flood of books telling the whole sorry story to feed my interest: Heidegger And Nazism by Victor Farias, which was the book that opened Heidegger’s Nazi past to scrutiny, On Heidegger’s Nazism and Philosophy by Tom Rockmore, The Heidegger Controversy: A Critical Reader edited by Richard Wolin, Heidegger’s Crisis: Philosophy and Politics in Nazi Germany by Hans Sluga, Heidegger, philosophy, Nazism by Julian Young, The Shadow of that Thought by Dominique Janicaud, and, most recent and perhaps the most devastating of them all, Heidegger: The Introduction of Nazism into Philosophy in Light of the Unpublished Seminars of 1933-1935 by Emmanuel Faye.

Even with all this material now available on Heidegger’s Nazi past, Heidegger still has his apologists and defenders. Beyond the steadfast apologists for Heidegger — who are perhaps more compromised than Heidegger himself — there are a variety of strategies to excuse Heidegger from his involvement with the Nazis, as when Heidegger’s Nazism is called an “episode” or a “period,” or characterized as “compromise, opportunism, or cowardice” (as in Julian Young’s Heidegger, philosophy, Nazism, p. 4). Young also uses the terms conviction, commitment, and flirtation, though Young ultimately exculpates Heidegger, asserting that, “…neither the early philosophy of Being and Time, nor the later, post-war philosophy, nor even the philosophy of the mid-1930s — works such as the Introduction to Metaphysics with respect to which critics often feel themselves to have an open-and-shut case — stand in any essential connection to Nazism.” (Op. cit., p. 5)

Heidegger’s engagement with fascism represents the point at which Heidegger’s ideas demonstrate their relationship to the ordinary business of life, and this is a conjuncture of the first importance. This is, indeed, identical to the task I set myself in writing this blog: to demonstrate the relationship between life and ideas. And Heidegger, I came to realize, was a particularly clear and striking case of the intersection of life and thought, though not the kind of example that most philosophers would want to claim as their own. I can fully understand why a philosopher would simply prefer to distance themselves from Heidegger and, while not denying Heidegger’s Nazism, would choose not to talk about it either. But that Heidegger thereby becomes a problem for philosophy and philosophers is precisely what makes him interesting. We philosophers must claim Heidegger as one of our own, even if we are sickened by his Nazism, which was no mere “flirtation” or “episode,” but constituted a life-long commitment.

Heidegger was not merely a Nazi ideologue, but also briefly a Nazi official. The Nazification of the professions was central to the strategy of Nazi social revolution (with its own professional institution, the Ahnenerbe), and a willing collaborator such as Heidegger, prepared to Nazify a university, was a valuable asset to the Nazi party. Ultimately, however, Heidegger was embroiled in an internal conflict within the Nazi party, and when the SA was purged and many of its leaders killed on the Night of the Long Knives, the Strasserist SA faction lost out decisively, and Heidegger with it. Thereafter Heidegger was watched by the Nazi party, and Heidegger’s defenders have used this party surveillance to argue that Heidegger was regarded as a subversive by the Nazi party. He was a subversive, in fact, but only because he represented a faction of Nazism that had been suppressed. Heidegger continued as a Nazi party member, and paid his party dues right up to the end of the war. We see, then, that the SA purge was not merely a brutal struggle for power within the Nazi party, but also an episode in the history of ideas. This is interesting and important, even if it is also horrific.

The more carefully we study Heidegger’s philosophy, and read it in relation to his life, the more we can understand the relation of even the most subtle and sophisticated philosophy to ideological commitment and to the ordinary business of life. And it wasn’t only Heidegger who compromised himself. There is Frege’s political diary, less well known than Heidegger’s political views, and the much more famous case of Sartre and Camus. There are at least two book-length studies of the public quarrel and falling-out between Sartre and Camus (Sartre and Camus: A Historic Confrontation and Camus and Sartre: The Story of a Friendship and the Quarrel that Ended It by Ronald Aronson). Camus most definitely comes off looking better in this quarrel, with Sartre, the sophisticated technical philosopher, looking like a party-line communist, and Camus, the writer, the literary man, showing true independence of spirit. The political lives of Camus and Sartre have been written about extensively, but even so Heidegger remains an interesting case because of the impenetrable complexity of his thought and the manifest horrors of the regime he served. There ought to be a disconnect here, but there isn’t, and this, again, is interesting and important even if it is horrific.

I have had to ask myself if my interest in Heidegger’s Nazism is prurient (in so far as there is a purely intellectual sense of “prurient”). There is something a little discomfiting about becoming fascinated by studying a great philosopher’s engagement with fascism. I am not innocent in this either. I, too, am a morally compromised philosopher. Perhaps the most I can hope for is to be aware of what I am involved in by making a careful study of philosophy’s involvement in politics. Naïveté strikes me as inexcusable in this context. I hope I have not been naïve.

I have not scrupled to read, to think about, and to quote individuals who were not only ideologically associated with crimes of unprecedented magnitude, but who have personally carried out capital crimes. In the case of Theodore “Ted” Kaczynski, who was personally responsible for several murders, I have carefully read his manifesto, Industrial Society and its Future (read it several times through, in fact), have thought about it, and have quoted it. Others who have been influenced by Kaczynski’s work and have publicly discussed it have felt the need to apologize for it, like scientists who consider using the research of Nazi doctors. But an apology feels like an excuse. I don’t want to make excuses.

Heidegger, like Nazism itself, is a lesson from history. We can benefit from studying Heidegger by learning how the most sophisticated philosophical justifications can be formulated for the most vulgar and the most reprehensible of purposes. But we cannot learn the lesson without studying the lesson. Studying the lessons of history may well corrupt us. That is a danger we must confront, and a risk we must take.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Hegel and the Overview Effect

25 September 2013

Wednesday


G. W. F. Hegel

Hegel is not remembered as the clearest of philosophical writers, and certainly not the shortest, but among his massive, literally encyclopedic volumes Hegel also left us one very short gem of an essay, “Who Thinks Abstractly?” which communicates one of the most interesting ideas from Hegel’s Phenomenology of Mind. The idea is simple but counter-intuitive: we assume that knowledgeable individuals employ more abstractions, while the common run of men content themselves with simple, concrete ideas and statements. Hegel makes the point that the simplest ideas and terms, which tend to be used by the least knowledgeable among us, also tend to be the most abstract, and that as a person gains knowledge of some aspect of the world, abstract terms like “tree” or “chair” or “cat” take on concrete immediacy, previous generalities are replaced by details and specificity, and one’s perspective becomes less abstract. (I wrote about this previously in Spots Upon the Sun.)

We can go beyond Hegel himself by asking a perfectly Hegelian question: who thinks abstractly about history? The equally obvious Hegelian response would be that the historian speaks the most concretely about history, and it must be those who are least knowledgeable about history who speak and think the most abstractly about history.

Previously in An Illustration of the Truncation Principle I quoted a passage from the Annales school historian Marc Bloch:

“…it is difficult to imagine that any of the sciences could treat time as a mere abstraction. Yet, for a great number of those who, for their own purposes, chop it up into arbitrary homogenous segments, time is nothing more than a measurement. In contrast, historical time is a concrete and living reality with an irreversible onward rush… this real time is, in essence, a continuum. It is also perpetual change. The great problems of historical inquiry derive from the antithesis of these two attributes. There is one problem especially, which raises the very raison d’être of our studies. Let us assume two consecutive periods taken out of the uninterrupted sequence of the ages. To what extent does the connection which the flow of time sets between them predominate, or fail to predominate, over the differences born out of the same flow?”

Marc Bloch, The Historian’s Craft, translated by Peter Putnam, New York: Vintage, 1953, Chapter I, sec. 3, “Historical Time,” pp. 27-29

The abstraction of historical thought implicit in Hegel and explicit in Marc Bloch is, I think, more of a problem than we commonly realize. Once we look at the problem through Hegelian spectacles, it becomes obvious that most of us think abstractly about history without realizing how abstract our historical thought is. We talk in general terms about history and historical events because we lack the knowledge to speak in detail about exactly what happened.

Why should it be any kind of problem at all that we think abstractly about history? People say that the past is dead, and that it is better to let sleeping dogs lie. Why not forget about history and get on with the business of the present? All of this sounds superficially reasonable, but it is dangerously misleading.

Abstract thinking about history creates the conditions under which the events of contemporary history — that is to say, current events — are conceived abstractly despite our manifold opportunities for concrete and immediate experience of the present. This is precisely Hegel’s point in “Who Thinks Abstractly?” when he invites the reader to consider the humanity of the condemned man who is easily dismissed as a murderer, a criminal, or a miscreant. We think in such abstract terms not only of local events, but also, if not especially, of distant events and large events that we cannot experience personally, so that massacres and famines and atrocities are mere massacres, mere famines, and mere atrocities because they are never truly real for us.

There is an important exception to all this abstraction, and it is the exception that shapes us: one always experiences the events of one’s own life with concrete immediacy, and it is the concreteness of personal experience contrasted to the abstractness of everything else not immediately experienced that is behind much (if not all) egocentrism and solipsism.

Thus while it is entirely possible to view the sorrows and reversals of others as abstractions, it is almost impossible to view one’s own sorrows and reversals in life as abstractions, and as a result of the contrast between our own vividly experienced pain and the abstract idea of pain in the life of another we have a very different idea of all that takes place in the world outside our experience as compared to the small slice of life we experience personally. This observation has been made in another context by Elaine Scarry, who in The Body in Pain: The Making and Unmaking of the World rightly observed that one’s own pain is a paradigm of certain knowledge, while the pain of another is a paradigm of doubt.

Well, this is exactly why we need to make the effort to see the big picture, because the small picture of one’s own life distorts the world so severely. But given our bias in perception, and the unavoidable point of view that our own embodied experience gives to us, is this even possible? Hegel tried to arrive at the big picture by seeing history whole. In my post The Epistemic Overview Effect I called this the “overview effect in time” (without referencing Hegel).

Another way to rise above one’s anthropic and individualist bias is the overview effect itself: seeing the planet whole. Frank White, who literally wrote the book on the overview effect, The Overview Effect: Space Exploration and Human Evolution, commented on my post in which I discussed the overview effect in time, and suggested that I look up his other book, The Ice Chronicles, which also takes up the overview effect in relation to time.

I have since obtained a copy of this book, and here are some representative passages that touch on the overview effect in relation to planetary science and especially glaciology:

“In the past thirty-five years, we have grown increasingly fascinated with our home planet, the Earth. What once was ‘the world’ has been revealed to us as a small planet, a finite sphere floating in a vast, perhaps infinite, universe. This new spatial consciousness emerged with the initial trips into Low Earth Orbit…, and to the moon. After the Apollo lunar missions, humans began to understand that the Earth is an interconnected unity, where all things are related to one another, and that what happens on one part of the planet affects the whole system. We also saw that the Earth is a kind of oasis, a place hospitable to life in a cosmos that may not support living systems, as we know them, anywhere else. This is the experience that has come to be called ‘The Overview Effect’.”

Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 15

…and…

“The view of the whole Earth serves as a natural symbol for the environmental movement. It leaves us unable to ignore the reality that we are living on a finite ‘planet,’ and not a limitless ‘world.’ That planet is, in the words of another astronaut, a lifeboat in a hostile space, and all living things are riding in it together. This realization formed the essential foundation of an emerging environmental awareness. The renewed attention on the Earth that grew out of these early space flights also contributed to an intensified interest in both weather and climate.”

Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 20

…and…

“Making the right choices transcends the short-term perspectives produced by human political and economic considerations; the long-term habitability of our home planet is at stake. In the end, we return to the insights brought to us by our astronauts and cosmonauts as they took humanity’s first steps in the universe: We live in a small, beautiful oasis floating through a vast and mysterious cosmos. We are the stewards of this ‘good Earth,’ and it is up to us to learn how to take good care of her.”

Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 214

It is interesting to note in this connection that glaciology yielded one of the earliest forms of scientific dating techniques, varve chronology, originating in Sweden in the nineteenth century. Varve chronology dates sedimentary layers by the annual layers of alternating coarse and fine sediments from glacial runoff — making it something like dendrochronology, except with annual sediment layers instead of tree rings.

Scientific historiography can give us a taste of the overview effect, though considerable effort is required to acquire the knowledge, and it is not likely to have the visceral impact of seeing the overview effect with your own eyes. Even an idealistic philosophy like that of Hegel, as profoundly different as this is from the empiricism of scientific historiography, can give a taste of the overview effect by making the effort to see history whole and therefore to see ourselves within history, as a part of an ongoing process. Probably the scientists of classical antiquity would have been delighted by the overview effect, if only they had had the opportunity to experience it. Certainly they had an inkling of it when they proved that the Earth is spherical.

There are many paths to the overview effect; we need to widen these paths even as we blaze new trails, so that the understanding of the planet as a finite and vulnerable whole is not merely an abstract item of knowledge, but also an immediately experienced reality.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Saturday


It is difficult to find an authentic expression of horror, due to its close resemblance to both fear and disgust, but one readily recognizes horror when one sees it.

In several posts I have referred to moral horror and the power of moral horror to shape our lives and even to shape our history and our civilization (cf., e.g., Cosmic Hubris or Cosmic Humility?, Addendum on the Avoidance of Moral Horror, and Against Natural History, Right and Left). Being horrified on a uniquely moral level is a sui generis experience that cannot be reduced to any other experience, or any other kind of experience. Thus the experience of moral horror must not be denied (which would constitute an instance of failing to do justice to our intuitions), but at the same time it cannot be uncritically accepted as definitive of the moral life of humanity.

Our moral intuitions tell us what is right and wrong, but they do not tell us what is or is not (i.e., what exists or what does not exist). This is the upshot of the is-ought distinction, which, like moral horror, must not be taken as an absolute principle, even if it is a rough and ready guide in our thinking. It is perfectly consistent, if discomfiting, to explicitly acknowledge the moral horrors of the world, and not to deny that they exist even while acknowledging that they are horrific. Sometimes the claim is made that the world itself is a moral horror. Joseph Campbell attributes this view to Schopenhauer, saying that according to Schopenhauer the world is something that never should have been.

Apart from being a central theme of mythology, the moral horror of the world is also to be found in science. There is a famous quote from Darwin that illustrates the acknowledgement of moral horror:

“There seems to me too much misery in the world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice.”

Letter from Charles Darwin to Asa Gray, 22 May 1860

This quote from Darwin underlines another point repeatedly made by Joseph Campbell: that different individuals and different societies draw different lessons from the same world. For some, the sufferings of the world constitute an affirmation of divinity, while for Darwin and others, the sufferings of the world constitute a denial of divinity. That, however, is not the point I would like to make today.

Far more common than the acceptance of the world’s moral horrors as they are is the denial of moral horrors, and especially the denial that moral horrors will occur in the future. On one level, a pragmatic level, we like to believe that we have learned our lessons from the horrors of our past, and that we will not repeat them precisely because we perpetrated horrors in the past and came to realize that they were horrors.

To insist that a moral horror cannot happen because it would offend our sensibilities to acknowledge it is a fallacy. Specifically, the moral horror fallacy is a special case of the argumentum ad baculum (argument to the cudgel, or appeal to the stick), which is in turn a special case of the argumentum ad consequentiam (appeal to consequences).

Here is one way to formulate the fallacy:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, such-and-such will not take place.

For “such-and-such” you can substitute “transhumanism” or “nuclear war” or “human extinction” and so on. The inference is fallacious only when a shift is made from is to ought or from ought to is. If we confine our inference exclusively to what is, or exclusively to what ought to be, we do not have a fallacy. For example:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, we must not allow such-and-such to take place.

…is not fallacious. It is, rather, a moral imperative. If you do not want a moral horror to occur, then you must not allow it to occur. This is what Kant called a hypothetical imperative. This is a formulation entirely in terms of what ought to be. We can also formulate this in terms of what is:

Such-and-such constitutes a moral horror,
Moral horrors do not occur,
Therefore, such-and-such does not occur.

This is a valid inference, although it is unsound. That is to say, it commits no formal fallacy, but it does commit a material fallacy: moral horrors do, in fact, occur, so the premise stating that moral horrors do not occur is a false premise, and the conclusion drawn from this false premise is unwarranted. (Only by denying that moral horrors do, in fact, take place could one maintain the soundness of this inference.)
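To make the logical form explicit, here is a minimal sketch in Lean 4; the predicate names are mine, introduced only for illustration. That the proof checks shows the inference to be formally valid; what fails is the material truth of the second premise:

-- `Horror x` reads “x is a moral horror”; `Occurs x` reads “x occurs”.
-- Lean certifies the validity of the argument, but the argument is
-- unsound, because the premise `noHorrors` is false of the actual world.
example (Thing : Type) (Horror Occurs : Thing → Prop) (a : Thing)
    (isHorror : Horror a)                      -- premise 1: a is a moral horror
    (noHorrors : ∀ x, Horror x → ¬ Occurs x)   -- premise 2: moral horrors do not occur
    : ¬ Occurs a :=                            -- conclusion: a does not occur
  noHorrors a isHorror

Since the conclusion is false of the actual world while the form is valid, one of the premises must be given up; and as the first premise is granted in every instance of the fallacy, it is the second that must go.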

Moral horrors can and do happen; they have been visited upon us numerous times. After the Holocaust everyone said “never again,” yet subsequent history has not spared us further genocides, nor will it spare us further genocides and atrocities in the future. We cannot infer from our desire to be spared further atrocities that they will not come to pass.

More interesting than the fact that moral horrors continue to be perpetrated by the enlightened and technologically advanced human societies of the twenty-first century is the fact that the moral life of humanity evolves, so that the moral horrors of the future, to which we look forward with fear and trembling, sometimes cease to be moral horrors by the time they are upon us.

Malthus famously argued that, because population growth outstrips the production of food (Malthus was particularly concerned with human beings, but he held this to be a universal law affecting all life), humanity must end in misery or vice. By “misery” Malthus understood mass starvation — which I am sure most of us today would agree is misery — and by “vice” Malthus meant birth control. In other words, Malthus viewed birth control as a moral horror comparable to mass starvation. This is not a view that is widely held today.

A great many unprecedented events have occurred since Malthus wrote his Essay on the Principle of Population. The industrialization of agriculture not only provided the world with plenty of food for an unprecedented increase in human population, it did so while farming was reduced to a marginal sector of the economy. And in the meantime birth control has become commonplace — we speak of it today as an aspect of “reproductive rights” — and few regard it as a moral horror. However, in the midst of this moral change and abundance, starvation continues to be a problem, and perhaps even more of a moral horror because there is plenty of food in the world today. Where people are starving, it is only a matter of distribution, and this is primarily a matter of politics.

I think that in the coming decades and centuries there will be many developments that we today regard as moral horrors, but that when we experience them they will not be quite as horrific as we thought. Take, for instance, transhumanism. Francis Fukuyama wrote a short essay in Foreign Policy magazine, Transhumanism, in which he identified transhumanism as the world’s most dangerous idea. While Fukuyama does not commit the moral horror fallacy in any explicit way, it is clear that he sees transhumanism as a moral horror. In fact, many do. But in the fullness of time, when our minds will have changed as much as our bodies, if not more, transhumanism is not likely to appear so horrific.

On the other hand, as I noted above, we will continue to experience moral horrors of unprecedented kinds, and probably also of unprecedented scope and scale. With the human population at seven billion and climbing, our civilization may well experience wars and diseases and famines that kill billions, even while civilization itself continues despite such depredations.

We should, then, be prepared for moral horrors — for some that are truly horrific, and others that turn out to be less than horrific once they are upon us. What we should not try to do is to infer from our desires and preferences in the present what must be or what will be. And the good news in all of this is that we have the power to change future events, to make the moral horrors that occur less horrific than they might have been, and to prepare ourselves intellectually to accept change that might have, once upon a time, been considered a moral horror.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .
