4 December 2015
In the last of his essays, essay XIII of Book III, “Of Experience,” Montaigne wrote:
“The mind has not willingly other hours enough wherein to do its business, without disassociating itself from the body, in that little space it must have for its necessity. They would put themselves out of themselves, and escape from being men. It is folly; instead of transforming themselves into angels, they transform themselves into beasts; instead of elevating, they lay themselves lower. These transcendental humours affright me, like high and inaccessible places; and nothing is hard for me to digest in the life of Socrates but his ecstasies and communication with demons; nothing so human in Plato as that for which they say he was called divine; and of our sciences, those seem to be the most terrestrial and low that are highest mounted; and I find nothing so humble and mortal in the life of Alexander as his fancies about his immortalisation.”
Michel Eyquem de Montaigne, Essays, Book III, “Of Experience”
In writing of “transcendental humors” Montaigne has brilliantly co-opted the medieval physiology of “humors” and gone beyond it: in this passage he has managed to transcend his era even while employing language and concepts that his readers would have immediately recognized.
In medieval western medicine it was believed that the body possessed four “vital humors”: blood, phlegm, yellow bile, and black bile. This was not only a medical idea, but also a psychological idea, as differences in temperament were ascribed to an excess or deficiency of a given humor. We retain traces of these ideas in our language, as when we describe an individual as “sanguine” or “phlegmatic.” These humors were human, all-too-human. This may sound a bit strange, but if the medieval imagination had comprehended the possibility of other beings on other worlds, it seems likely that such an imagination would have posited other, alien humors that would have determined both the physical constitution and mental temperament of these other beings, and speculation on the character of ETI would have taken the form of suggesting what other kinds of humors there might possibly be.
It is possible that we, too, may be able to transcend the limits of our time even while continuing to employ the familiar linguistic and conceptual infrastructure that is as deeply embedded in contemporary thought as Montaigne’s linguistic and conceptual infrastructure was embedded in the thought of his time. It is an uncommon insight, but not an impossible insight, that throws away the ladder after having climbed up it.
Perhaps this passage from Montaigne so appeals to me because it is so similar to my own way of thought. In my Variations on the Theme of Life I wrote (in section 572):
Biology of religion.–The more human, all-too-human a given phenomenon, the more certain it is to be called sacred or holy.
It almost sounds as though I am purposefully paraphrasing Montaigne, but when I wrote this I was not familiar with the passage from Montaigne quoted above.
One might well be accused of a “category error” for studying religion in terms of biology, though in recent years this has become much more common, as quite a number of books on the evolutionary psychology of religion have appeared. As we can see above, the idea is already present in Montaigne, and it occurs throughout Nietzsche, even if it is not as explicit there as we would hope; Nietzsche lacked the detailed scientific background that would have made it possible for him to fully appreciate, and to fully develop, the idea.
We are now getting to the point at which such ideas can be formulated explicitly and given clear and unambiguous scientific content. But our linguistic and conceptual infrastructure, while it provides the basis of the possibility of our intellectual development and progress, remains limited, and moments of great insight are necessary to transcend the prejudices of our age and to begin to comprehend the ideas that, some hundreds of years from now, our descendants will be able to formulate in an explicit way.
Perhaps it is better for us at the present time that we cannot yet formulate our most elusive ideas explicitly. I am reminded of a passage from H. P. Lovecraft that I recently quoted in The Cosmos Primeval:
“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.”
H. P. Lovecraft, “The Call of Cthulhu,” first paragraph
Lovecraft was half right, but he (like many others) failed to see, or refused to acknowledge (in Lovecraft’s case perhaps as a matter of principle), the possibility of progress in knowledge. While it is true that some go mad and some flee, and others exist on the cusp of madness and sanity, still others are able to look squarely at terrifying vistas of reality and to stare into the face of Medusa without turning to stone.
We are always engaged in the business of slowly and painstakingly assembling dissociated bits of knowledge into a larger and more comprehensive scheme, even if we are not aware that our thirst for comprehension and clarity (which thirst must certainly be accounted among the transcendental humors) is pushing us toward a revelation for which we are not prepared. Most of this occurs on an historical scale of time, so that the frightening outlines of the world to come are only dimly discerned by us, and, as Lovecraft implied, this may be a mercy. But every once in a while, under the influence of especially strong transcendental humors, we may find ourselves suddenly face-to-face with the Medusa, quite unexpectedly. Such moments are definitive.
I have often quoted a passage from Kurt Gödel (most recently in Folk Concepts and Scientific Progress) about the possibility of progress in knowledge:
“Turing… gives an argument which is supposed to show that mental procedures cannot go beyond mechanical procedures. However, this argument is inconclusive. What Turing disregards completely is the fact that mind, in its use, is not static, but is constantly developing, i.e., that we understand abstract terms more and more precisely as we go on using them, and that more and more abstract terms enter the sphere of our understanding. There may exist systematic methods of actualizing this development, which could form part of the procedure. Therefore, although at each stage the number and precision of the abstract terms at our disposal may be finite, both (and, therefore, also Turing’s number of distinguishable states of mind) may converge toward infinity in the course of the application of the procedure.”
“Some remarks on the undecidability results” (Italics in original) in Gödel, Kurt, Collected Works, Volume II, Publications 1938-1974, New York and Oxford: Oxford University Press, 1990, p. 306.
Without any intention of belittling Gödel, it is perhaps worthwhile to note in this context that Gödel himself lived on the verge of madness, and that his mental health deteriorated to the point that he essentially starved himself to death, like some western equivalent of an Indian Yogi (or, if you prefer, a starving Buddha, representations of which always have the same haunted eyes that one sees in the photographs of the logician). One can imagine Montaigne transported into another place or time, writing essays on Gödel or a starving Buddha, neither of which he ever encountered, but each of which I think would have piqued his interest, as they represent those transcendental humors that have both plagued humanity with self-imposed ascetic rigors and which have equally advanced civilization in the most unexpected ways.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
28 November 2015
As yet we have too little evidence of civilization to understand civilizational processes. This sounds like a mere platitude, but it is a platitude to which we can give content by pointing out the relative lack of content of our conception of civilization.
On a scale below that of macro-historical transitions (which latter I previously called macro-historical revolutions), we have many examples: many examples of the origins of civilization, many examples of the ends of civilizations, and many examples of the transitions that occur within the development and evolution of civilization. In other words, we have a great deal of evidence when it comes to individual civilizations, but we have very little evidence — insufficient evidence to form a judgment — when it comes to civilization as such (what I previously, very early in the history of this blog, called The Phenomenon of Civilization).
On the scale of macro-historical change, we have only a single instance in the history of terrestrial civilization, i.e., only a single data point on which to base any theory about macro-historical intra-civilizational change, and that is the shift from agricultural civilization (agrarian-ecclesiastical civilization) to industrial civilization (industrial-technological civilization). Moreover, the transition from agricultural to industrial civilization is still continuing today, and is not yet complete, as in many parts of the world industrialization is marginal at best and subsistence agriculture is still the economic mainstay.
Prior to this there was a macro-scale transition with the advent of civilization itself — the transition from hunter-gatherer nomadism to agrarian-ecclesiastical civilization — but this was not an intra-civilizational change, i.e., this was not a fundamental change in the structure of civilization, but the origins of civilization itself. Thus we can say that we have had multiple macro-scale transitions in human history, but human history is much longer than the history of civilization. When civilization emerges within human history it is a game-changer, and we are forced to re-conceptualize human history in terms of civilization.
Parallel to agrarian-ecclesiastical civilization, but a little later in emergence and development, was pastoral-nomadic civilization, which proved to be the greatest challenge to face agrarian-ecclesiastical civilization until the advent of industrialization (cf. The Pastoralist Challenge to Agriculturalism). Pastoral-nomadic civilization seems to have emerged independently in central Asia shortly after the domestication of the horse (and then, again independently, in the Great Plains of North America when horses were re-introduced), probably among peoples practicing subsistence agriculture without having produced the kinds of civilization found in centers of civilization in the Old World — the Yellow River Valley, the Indus Valley, and Mesopotamia.
Pastoral-nomadic civilization, as it followed its developmental course, was not derived from any great civilization, so there was no intra-civilizational transition at its advent, and when it ultimately came to an end it did not end with a transition into a new kind of civilization, but was rather supplanted by agricultural civilization, which slowly encroached on the great grasslands that were necessary for the pasturage of the horses of pastoral-nomadic peoples. So while pastoral-nomadic civilization was a fundamentally different kind of civilization — as different from agricultural civilization as agricultural civilization is different from industrial civilization — the particular circumstances of the emergence and eventual failure of pastoral-nomadic civilization in human history did not yield additional macro-historical transitions that could have provided evidence for the study of intra-civilizational macro-historical change (though it certainly does provide evidence for the study of intra-civilizational conflict).
We would be right to be extremely skeptical of any predictions about the future transition of our civilization into some other form of civilization when we have so little information to go on. All of this is civilization beyond the prediction wall. The view from within a civilization (i.e., the view that we have of ourselves in our own civilization) places too much emphasis upon slight changes to basic civilizational structures. We see this most clearly in mass media publications which present every new fad as a “sea change” that heralds a new age in the history of the world; of course, newspapers and magazines (and now their online equivalents) must adopt this shrill strategy in order to pay the bills, and no one employed at these publications necessarily needs to believe the hyperbole being sold to a gullible public. The most egregious futurism of the twentieth century was a product of precisely the same social mechanism, so that we should not be surprised that it was as inaccurate as it was. (On media demand-driven futurism cf. The Human Future in Space)
. . . . .
. . . . .
. . . . .
. . . . .
18 November 2015
One of the most interesting aspects of our civilization today — what I call industrial-technological civilization — is that its emergence can be pinpointed in space and time to a much greater degree of precision than most major historical developments. Industrial-technological civilization comes into being following the industrial revolution, and the industrial revolution has its origins in England in the last quarter of the eighteenth century. Because the industrial revolution originated in England, England was the first industrialized society, though Germany was not far behind, and many of the fundamental scientific discoveries that intensified the ongoing industrial revolution had their origins in Germany.
It is no coincidence that, a hundred years after the industrial revolution, the British Empire had rapidly become the largest empire in human history. A Wikipedia article on the largest empires lists the British Empire as number one, covering more than twenty percent of the world’s land area and including about twenty percent of the world’s total population within its borders. (The greatest extent of the British Empire is given as 1922, so if we allow the validity of the idea of the “long nineteenth century” this means that this period of the greatest extent of the empire was roughly a century after industrialization when British power reached its zenith; it also was not a coincidence that the rise of British power occurred during the “long nineteenth century” which constituted the stable geopolitical context of Britain’s rise to global superpower status.) The British Empire had become, “The empire on which the sun never sets,” because its global reach meant that there was always some part of the empire in which it was daytime.
It is at least arguable that the British with their empire simply sought to do what all previous empire builders had sought to do. Why were they successful, or disproportionately successful, as compared with other empires? Empires in previous ages ran into the geographical limits of their technologies. In earlier history, once the idea of empire had its proof of concept in antiquity with empires such as the Akkadian Empire and the Assyrian Empire, and the possibilities of empire were first glimpsed, we see throughout history the rise of empires that expand spatially until their institutions of power can no longer sustain imperial control and the empire collapses internally. The rise and fall of empires is like the regular respiration of (agrarian) history.
And then something suddenly changed. The British expanded their empire at the first moment in history when there were steam-powered ships, turreted battleships, trains, global telecommunications through the telegraph, and mass media newspapers. The limitations of the technology of administration and social control had suddenly been removed (or, at least, greatly mitigated), and the British were the first to take advantage of this because they were the first industrialized society and so the first to exploit these technologies on a large scale. The British had stumbled onto their moment in history. John Robert Seeley wrote in his The Expansion of England (1883) that, “we seem, as it were, to have conquered half the world in a fit of absence of mind.” This improbable quote has been repeated so many times because it captures the haphazard and almost accidental character of British empire building.
Because the British Empire rapidly reached the extent of the globe, and had nowhere further to expand, this first experiment in global technological empire was also the last experiment in global technological empire. By the end of the twentieth century the British Empire had devolved its possessions, mostly peacefully, and its former subject peoples mostly enjoy self-determination, for better or worse. The British (unknowingly) exploited a singular historical opportunity to construct an empire not subject to the constraints of limited transportation and communications that hobbled earlier imperial efforts (one might even call this a “singularity” if the word had not already been overused in every imaginable way). No matter how often the terms “empire” and “imperial” are employed today as terms of abuse, no other political entity has moved into the vacuum left by the British Empire, because it left no power vacuum in its wake. The institutions of popular sovereignty and nation-states filled the void with very different power structures than that of empire.
It would be instructive to engage in a detailed comparative study of the devolution of the Hapsburg Empire and the British Empire, as in each case we have an empire that originated in medieval European kingship, surviving into the modern world and playing a major role in world history. Despite their similarities, the Hapsburg Empire vanished almost without a trace, whereas the British Empire lives on in a modified form as the Commonwealth. The Hapsburg Empire unwound almost in an instant with the end of the First World War, whereas the British Empire gradually unwound over many decades, through dozens of managed transitions to independence. There is something to be learned from the latter example that the world has failed to learn in its rush to condemn colonialism from an assumed position of moral superiority.
. . . . .
. . . . .
. . . . .
. . . . .
14 November 2015
An horrific attack has been perpetrated in Paris, France. The operation appears to have been a coordinated and simultaneous attack, employing both guns and explosives, against at least six targets: La Belle Equipe, Le Carillon bar and Le Petit Cambodge restaurant on the same street, La Casa Nostra restaurant, Stade de France, and the Bataclan concert venue. More than a hundred twenty have been killed in the violence. Islamic State (ISIS) has publicly claimed responsibility for the attack. French President Francois Hollande has stated that the attack is an “act of war” by Islamic State against the French.
Strategic Forecasting (Stratfor) has already correctly noted that, even if ISIS has claimed responsibility, we do not yet know the extent of their involvement in the planning, financing, and execution of the attack:
“While the Islamic State has claimed credit for the attack, it is still uncertain to what degree the Islamic State core organization was responsible for planning, funding or directing it. It is not clear whether the attackers were grassroots operatives encouraged by the organization, like Paris Kosher Deli gunman Ahmed Coulibaly, if the operatives were professional terrorist cadres dispatched by the core group, or if the attack was some combination of the two.” (Stratfor Alert)
Islamic State is ideologically committed, by its interpretation of Jihad, to wage expansionist, aggressive war against Dar al-Harb, i.e., the House (or territory) of War. This is not an accident that follows from the brutality and violence of the ISIS campaign in Syria and Iraq, but a religious duty, as they understand it, to make war on the infidel. Thus Islamic State has a motivation to claim the attacks on Paris as their own work even if their involvement was peripheral, since this gives the appearance that they are “taking the fight to the enemy.”
I have previously discussed the ideology and historical consciousness of Islamic State in the following posts:
These posts go into much greater detail regarding the prophetic aspirations of Islamic State, which latter has motivations both to be involved in such an attack, and to claim responsibility for such an attack even if they were not directly behind it. Only subsequent investigation will determine the extent to which Islamic State was involved, and the extent to which this was an “act of war” of ISIS against France.
One interesting detail has emerged from the initial press coverage of the attack: a Syrian passport was found next to the body of one of the suicide bombers involved in the attack. If it is confirmed that terrorists are using the flow of refugees as a human conduit to infiltrate Europe with militants, this greatly complicates the problem of how Europe will deal with the war refugee problem, which is already a political hot potato (like genocide, this is a problem from hell). There is a limit to which European elite opinion can constrain the public narrative over war refugees.
It is to be expected that press coverage in the coming weeks will focus on the attack as a reprisal against French participation in the conflict in Syria. This is probably true, but it is only part of the story. Paris is one of the capitals of the western world, and particularly symbolic of the Enlightenment and the whole tradition of western civilization that emerged from the Enlightenment. The forces of reactionary traditionalism improve their standing with their constituency when they attack targets representative of the enemy’s highest achievements of civilization: this is an ideological aspect of war that is often overlooked in geopolitical, economic, and military assessments of conflict.
Not only is Paris a symbolic target, but if those sectors of society convinced of the reality of an apocalyptic confrontation can prod a major adversary into pouring more resources into the conflict in Mesopotamia and the Levant, and thus becoming the more deeply entrenched, this provides enhanced opportunities to engineer the desired apocalyptic battle.
. . . . .
. . . . .
. . . . .
. . . . .
12 November 2015
It caused quite a stir today when it was announced that the Russians had accidentally released some details of a proposed submersible weapons system (the Status-6, or Статус-6 in Russian) when television coverage of a conference among defense chiefs broadcast a document being held by one of the participants. This was first brought to my attention by a BBC story, Russia reveals giant nuclear torpedo in state TV ‘leak’. The BBC story led me to Russia may be planning to develop a nuclear submarine drone aimed at ‘inflicting unacceptable damage’ by Jeremy Bender, which in turn led me to Is Russia working on a massive dirty bomb? on the Russian strategic nuclear forces blog, which latter includes links to a television news segment on Youtube, where you can see (at 1:48) the document in question. A comment on the article includes a link to a Russian language media story, Кремль признал случайным показ секретного оружия по Первому каналу и НТВ (“The Kremlin called the showing of a secret weapon on Channel One and NTV accidental”), that discusses the leak.
This news story is only in its earliest stages, and there are already many conflicting accounts as to exactly what was leaked and what it means. There is also the possibility that the “leak” was intentional, and meant for public consumption, both domestic and international. There is nothing yet on Janes or Stratfor about this, both of which sources I would consider more reliable on defense than the BBC or any mainstream media outlet. There is a story on DefenseOne, Russia: We Didn’t Mean to Show Everyone Our Massive New Nuclear Torpedo, but this seems to be at least partly derivative of the BBC story.
The BBC story suggested that the new Russian torpedo could carry a “dirty bomb,” or possibly a cobalt bomb, as well as suggesting that it could carry a 100-megaton warhead. These possible warhead configurations constitute the extreme ends of the spectrum of nuclear devices. A “dirty bomb” that is merely a dirty bomb and not a nuclear warhead is a conventional explosive that scatters radioactive material. Such a device has long been a concern for anti-terrorism policy, because the worry is that it would be easier for terrorists to gain access to nuclear materials than to a nuclear weapon. Scattering radioactive elements in a large urban area would not be a weapon of mass destruction, but it has been called a “weapon of mass disruption,” as it would doubtless be attended by panic as the 24/7 news cycle escalated the situation to apocalyptic proportions.
At the other end of the scale of nuclear devices, either a cobalt bomb or a 100-megaton warhead would be considered doomsday weapons, and there are no nation-states in the world today constructing such devices. The USSR made some 50-100 MT devices, most famously the Tsar Bomba, the most powerful nuclear device ever detonated, but no longer produces these weapons and is unlikely to retain any in its stockpile. It was widely thought that these enormous weapons were intended as “counterforce” assets, as, given the technology of the time (i.e., the low level of accuracy of missiles at this time), it would have required a warhead of this size to take out a missile silo on the other side of the planet. The US never made such large weapons, but its technology was superior, so if the US was also building counterforce missiles at this time, they could have gotten by with smaller yields. The US arsenal formerly included significant numbers of the B53, with a yield of about 9 MT, and before that the B41, with a yield of about 25 MT, but the US dismantled the last B53 in 2011 (cf. The End of a Nuclear Era).
Nuclear weapons today are being miniaturized, and their delivery systems are being given precision computerized guidance systems, so the reasons for building massively destructive warheads the only purpose of which is to participate in a MAD (mutually assured destruction) scenario have disappeared (mostly). A cobalt bomb (as distinct from a dirty bomb, with which it is sometimes confused, as both a dirty bomb and a cobalt bomb can be considered radiological weapons) would be a nuclear warhead purposefully configured to maximize radioactive fallout. In the case of the element cobalt, its dispersal by a nuclear weapon would result in the radioactive isotope cobalt-60, a high intensity gamma ray emitter with a half-life of 5.26 years — remaining highly radioactive for a sufficient period of time that it would likely poison any life that survived the initial blast of the warhead. The cobalt bomb was first proposed by physicist Leó Szilárd in the spirit of a warning as to the direction that nuclear technology could take, ultimately converging upon human extinction, which became a Cold War touchstone (cf. Existential Lessons of the Cold War).
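The persistence of cobalt-60 contamination follows directly from the exponential decay law. As a back-of-the-envelope sketch (my own illustration, not from any source cited above), the fraction of initial activity remaining after a given time is 0.5 raised to the number of elapsed half-lives:

```python
HALF_LIFE_YEARS = 5.26  # half-life of cobalt-60

def fraction_remaining(years: float) -> float:
    """Fraction of the initial cobalt-60 activity left after `years`,
    per the exponential decay law: (1/2) ** (elapsed / half-life)."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

if __name__ == "__main__":
    for t in (1, 5.26, 10, 25, 50):
        print(f"after {t:5.2f} years: {fraction_remaining(t):.4%} of initial activity")
```

The point of the arithmetic is that even after five half-lives (roughly 26 years) about 3% of the initial activity remains, which for an intense gamma emitter is still a serious hazard; this is why the cobalt bomb figured in Cold War extinction scenarios rather than in tactical ones.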
The discussion of the new Russian weapon Status-6 (Статус-6) in terms of dirty bombs, cobalt bombs, and 100 MT warheads is an anachronism. If a major power were to build a new nuclear device today, they would want to develop what have been called fourth generation nuclear weapons, which is an umbrella term to cover a number of innovative nuclear technologies not systematically researched due to both the end of the Cold War and the nuclear test ban treaty. (On the Limited Nuclear Test Ban Treaty and the Comprehensive Nuclear-Test-Ban Treaty cf. The Atomic Age Turns 70) Thus this part of the story so far is probably very misleading, but the basic idea of a nuclear device on a drone submersible is what we need to pay attention to here. This is important.
I am not surprised by this development, because I predicted it. In WMD: The Submersible Vector of January 2011 I suggested the possibility of placing nuclear weapons in drone submersibles, which could then be quietly infiltrated into the harbors of major port cities (or military facilities, although these would be much more difficult to infiltrate stealthily and to keep hidden), there to wait for a signal to detonate. By this method it would be possible to deprive an adversary of major cities, ports, and military facilities in one fell swoop. The damage that could be inflicted by such a first strike would be just as devastating as the first strikes contemplated during the Cold War, when first strikes were conceived as a massive strike by ICBMs coming over the pole. Only now, with US air superiority so far in advance of other nation-states, it makes sense to transfer the nuclear strategic strike option to below the world’s oceans. Strategically, this is a brilliant paradigm shift, and one can see a great many possibilities for its execution and the possible counters to such a strategy.
During the Cold War, the US adopted a strategic defense “triad” consisting of nuclear weapons deliverable by ground-based missiles (ICBMs), jet bombers (initially the subsonic B-52, and later supersonic bombers such as the B-1 and B-2), and submarine launched ballistic missiles (SLBMs). Later this triad was supplemented by nuclear-tipped cruise missiles, which represent the beginning of a disruptive change in nuclear strategy, away from massive bombardment to precision strikes.
The Russians depended on ground-based ICBMs, of which they possessed more, but, in the earlier stages of the Cold War Russian ICBMs were rather primitive, subject to failure, and able to carry only a single warhead. As Soviet technology caught up with US technology, and the Russians were able to build reliable missile boats and MIRVs for their ICBMs, the Russians too began to converge upon a triad of strategic defense, adding supersonic bombers (the Tu-22M “Backfire” and then the Tu-160 “Blackjack”) and missile boats to their ground-based missiles. For a brief period of the late Cold War, there was a certain limited nuclear parity that roughly corresponded with détente.
This rough nuclear parity was upset by political events and continuing technological changes, the latter almost always led by the US. An early US lead in computing technology once again led to a generational divide between US and Soviet technology, with the Soviet infrastructure increasingly unable to keep up with technological advances. The introduction of SDI (Strategic Defense Initiative) threatened to further destabilize nuclear parity, and in particular was perceived as a threat to the stability of MAD. Long after the Cold War is over, the US continues to pursue missile defense, which has been a remarkably powerful political tool, but despite several decades of greatly improved technology, cannot deliver on its promises. So SDI upset the applecart of MAD, but still cannot redeem its promissory note. This is an important detail, because the weapons system that the Russians are contemplating with Status-6 (Статус-6) can be built with contemporary technologies. Thus even if the US could extend its air superiority to space, in addition to fielding an effective missile defense system, none of this would be an adequate counter to a Russian submersible strategic weapon, except in a second strike capacity.
As I noted above, there would be many ways in which to build out this submersible drone strategic capability, and many ways to counter it, which suggests the possibility of a new arms race, although this time without Russia being ideologically crippled by communism (which during the Cold War prevented the Soviet Union from achieving parity with western scientific and economic strength). A “slow” strategic capability could be constructed based something like what I described in WMD: The Submersible Vector, involving infiltration and sequestered assets, or a “fast” strategic capability closer to what was revealed in the Russian document that sparked the story about Status-6, in which the submersibles could fan out and position themselves in hours or days. Each of these strategic assets would suggest different counter measures.
What we are now seeing is the familiar Cold War specter of a massive nuclear exchange displaced from our skies into the oceans. If the Russians thought of it, and I thought of it, you can be certain that all the defense think tanks of the world’s major nation-states have thought of it also, and have probably gamed some of the obvious scenarios that could result.
. . . . .
Addendum Added Sunday 15 November 2015: In what way is a nuclear-tipped drone submersible different from a conventional nuclear torpedo? Contemporary miniaturization technology makes it possible to have a precision guided submersible that is very small — small enough that such a weapon might conceivably bury itself in the mud on the bottom of a waterway and so be impossible to detect, even visually by divers alerted to search for suspicious objects on the bottom (as presumably happens in military harbors). Also, the Status-6 was given a range of some 6,000 nautical miles, which means that these weapons could be released by a mothership almost anywhere in the world’s oceans, and travel from that point to their respective targets. Such weapons could be dropped from the bottom of a ship, and would not necessarily have to be delivered by submarine. Once the drones were on their way, they would be almost impossible to find because of their small size. The key vulnerability would be the need for some telecommunications signaling to the weapon. If the decision had already been made to strike, and those making the decision were sufficiently confident that they would not change their minds, such drones could be launched programmed to detonate, and therefore with no need for a telecommunications link. Alternatively, drones could be launched programmed to detonate, but the detonation could be suppressed by remote command, which would be a one-time signal and not an ongoing telecommunications link to the drone. This presents obvious vulnerabilities as well — what if the detonation suppression signal were blocked? — but any weapons system will have vulnerabilities. It would be a relatively simple matter to have the device configurable as either fail-safe or fail-deadly, with the appropriate choice made at the time of launch.
. . . . .
Note Added Saturday 14 November 2015: Since writing the above, an article has appeared on Jane’s, Russian state TV footage reveals ‘oceanic multi-purpose’ torpedo-based nuclear system, by Bruce Jones, London, IHS Jane’s Defence Weekly, though it adds little to what is already known.
. . . . .
10 November 2015
A medieval logician in the twenty-first century
In the discussion surrounding the unusual light curve of the star KIC 8462852, Ockham’s razor has been mentioned numerous times. I have written a couple of posts interpreting the light curve of KIC 8462852 in light of Ockham’s razor, i.e., KIC 8462852 and Parsimony and Plenitude in Cosmology.
What is Ockham’s razor exactly? Well, that is a matter of philosophical dispute (and I offer my own more precise definition below), but even if it is difficult to say what Ockham’s razor is exactly, we can say something about what it was originally. Philotheus Boehner, a noted Ockham scholar, wrote of Ockham’s razor:
“It is quite often stated by Ockham in the form: ‘Plurality is not to be posited without necessity’ (Pluralitas non est ponenda sine necessitate), and also, though seldom: ‘What can be explained by the assumption of fewer things is vainly explained by the assumption of more things’ (Frustra fit per plura quod potest fieri per pauciora). The form usually given, ‘Entities must not be multiplied without necessity’ (Entia non sunt multiplicanda sine necessitate), does not seem to have been used by Ockham.”
William of Ockham, Philosophical Writings: A Selection, translated, with an Introduction, by Philotheus Boehner, O.F.M., Indianapolis and New York: The Library of Liberal Arts, THE BOBBS-MERRILL COMPANY, INC., 1964, Introduction, p. xxi
Most references to (and even most uses of) Ockham’s razor are informal and not very precise. In Maybe It’s Time To Stop Snickering About Aliens, which I linked to in KIC 8462852 Update, Adam Frank wrote of Ockham’s razor in relation to KIC 8462852:
“…aliens are always the last hypothesis you should consider. Occam’s razor tells scientists to always go for the simplest explanation for a new phenomenon. But even as we keep Mr. Occam’s razor in mind, there is something fundamentally new happening right now that all of us, including scientists, must begin considering… the exoplanet revolution means we’re developing capacities to stare deep into the light produced by hundreds of thousands of boring, ordinary stars. And these are exactly the kind of stars where life might form on orbiting planets… So we are already going to be looking at a lot of stars to hunt for planets. And when we find those planets, we are going to look at them for basic signs that life has formed. But all that effort means we will also be looking in exactly the right places to stumble on evidence of not just life but intelligent, technology-deploying life.”
Here the idea of Ockham’s razor is present, but little more than the idea. Rather than merely invoking the idea of Ockham’s razor, and merely assuming what constitutes simplicity and parsimony, if we are going to profitably employ the idea today, we need to develop it more fully in the context of contemporary scientific knowledge. In KIC 8462852 I wrote:
“One can see an emerging adaptation of Ockham’s razor, such that explanations of astrophysical phenomena are first explained by known processes of nature before they are attributed to intelligence. Intelligence, too, is a process of nature, but it seems to be rare, so one ought to exercise particular caution in employing intelligence as an explanation.”
In a recent post, Parsimony and Emergent Complexity I went a bit further and suggested that Ockham’s razor can be formulated with greater precision in terms of emergent complexity, such that no phenomenon should be explained in terms of a level of emergent complexity higher than that necessary to explain the phenomenon.
De revolutionibus orbium coelestium and its textual history
Like Darwin many centuries later, Copernicus hesitated to publish his big book to explain his big idea, i.e., heliocentrism. Both men, Darwin and Copernicus, understood the impact that their ideas would have, though both probably underestimated the eventual influence of these ideas; both were to transform the world and leave as a legacy entire cosmologies. The particular details of the Copernican system are less significant than the Copernican idea, i.e., the Copernican cosmology, which, like Ockham’s razor, has gone on to a long career of continuing influence.
Darwin eventually published in his lifetime, prompted by the “Ternate essay” that Wallace sent him, but Copernicus put off publishing until the end of his life. It is said that Copernicus was shown a copy of the first edition of De revolutionibus on his deathbed (though this is probably apocryphal). Copernicus, of course, lived much closer to the medieval world than did Darwin — one could well argue that Toruń and Frombork in the fifteenth and sixteenth centuries were the medieval world — so we can readily understand Copernicus’ hesitation to publish. Darwin published in a world already transformed by industrialization, already wrenched by unprecedented social change; Copernicus eventually published in a world that, while on the brink of profound change, had not appreciably changed in a thousand years.
Copernicus’ hesitation meant that he did not directly supervise the publication of his manuscript, that he was not able to correct or revise subsequent editions (Darwin revised On the Origin of Species repeatedly for six distinct editions in his lifetime, not including translations), and that he was not able to respond to the reception of his book. All of these conditions were to prove significant in the reception and propagation of the Copernican heliocentric cosmology.
Copernicus, after long hesitation, was stimulated to pursue the publication of De revolutionibus by his contact with Georg Joachim Rheticus, who traveled to Frombork for the purpose of meeting Copernicus. Rheticus, who had great respect for Copernicus’ achievement, came from the hotbed of renaissance and Protestant scholarship that was Nuremberg. He took Copernicus’ manuscript to Nuremberg to be published by a noted scientific publisher of the day, but Rheticus did not stay to oversee the entire publication of the work. This job fell to Andreas Osiander, a Protestant theologian who sought to water down the potential impact of De revolutionibus by adding a preface that suggested that Copernicus’ theory should be accepted in the spirit of an hypothesis employed for the convenience of calculation. Osiander did not sign this preface, and many readers of the book, when it eventually came out, thought that this preface was the authentic Copernican interpretation of the text.
Osiander’s preface, and Osiander’s intentions in writing the preface (and changing the title of the book) continue to be debated to the present day. This debate cannot be cleanly separated from the tumult surrounding the Protestant Reformation. Luther and the Lutherans were critical of Copernicus — they had staked the legitimacy of their movement on Biblical literalism — but one would have thought that Protestantism would have been friendly to the work of Ockham, given Ockham’s conflict with the Papacy, Ockham’s fideism, and his implicit position as a critic of Thomism. (I had intended to read up on the Protestant interpretation of Ockham prior to writing this post, but I haven’t yet gotten to this.) The parsimony of Copernicus’ formulation of cosmology, then, was a mixed message to the early scientific revolution in the context of the Protestant Reformation.
Both Rheticus and Copernicus’ friend Tiedemann Giese were indignant over the unsigned and unauthorized preface by Osiander. Rheticus, by some accounts, was furious, and felt that the book and Copernicus had been betrayed. He pursued legal action against the printer, but it is not clear that it was the printer who was at fault for the preface. While Rheticus suspected Osiander as the author of the preface, this was not confirmed until some time later, when Rheticus had moved on to other matters, so Osiander was never pursued legally over the preface.
The most common reason adduced for preferring Copernican cosmology to Ptolemaic cosmology is not that one is true and the other is false (though this certainly is a reason to prefer Copernicus) but rather that the Copernican cosmology is the simpler and more straightforward explanation for the observed movements of the stars and the planets. The Ptolemaic system can predict the movements of stars, planets, and the moon (within margins of error acceptable for its time), but it does so by way of a much more complex and cumbersome method than that of Copernicus. Copernicus was radical in the disestablishment of traditional cosmological thought, but once beyond that first radical step of displacing the Earth from the center of the universe (a process we continue to iterate today), the solar system fell into place according to a marvelously simple plan that anyone could understand once it was explained: the sun at the center, and all the planets revolving around it. From the perspective of our rotating and orbiting Earth, the other planets also orbiting the sun appear to reverse in their course, but this is a mere artifact of our position as observers. Once Copernicus can convince the reader that, despite the apparent solidity of the Earth, it is in fact moving through space, everything else falls into place.
One of the reasons that theoretical parsimony and elegance played such a significant role in the reception of Copernicus — and even the theologians who rejected his cosmology employed his calculations to clarify the calendar, so powerful was Copernicus’ work — was that the evidence given for the Copernican system was indirect. Even today, only a handful of the entire human population has ever left the planet Earth and looked down on it from above — seeing Earth from the perspective of the overview effect — and so acquired direct evidence of the Earth in space. No one, no single human being, has hovered above the solar system entire and looked down upon it and so obtained the most direct evidence of the Copernican theory — this is an overview effect that we have not yet attained. (NB: in The Scientific Imperative of Human Spaceflight I suggested the possibility of a hierarchy of overview effects as one moved further out from Earth.)
The knowledge that we have of our solar system, and indeed of the universe entire, is derived from observations and deduction from observations. Moreover, seeing the truth of Copernican heliocentrism would not only require an overview in space, but an overview in time, i.e., one would need to hover over our solar system for hundreds of years to see all the planets revolving around their common center, the sun, and one would have to, all the while, remain focused on observing the solar system in order to be able to have “seen” the entire process — a feat beyond the limitations of the human lifetime, not to mention human consciousness.
Copernicus himself did not mention the principle of parsimony or Ockham’s razor, and certainly did not mention William of Ockham, though Ockham was widely read in Copernicus’ time. The principle of parsimony is implicit, even pervasive, in Copernicus, as it is in all good science. We don’t want to account for the universe with Rube Goldberg-like contraptions as our explanations.
In a much later era of scientific thought — in the scientific thought of our own time — Stephen Jay Gould wrote an essay titled “Is uniformitarianism necessary?” in which he argued for the view that uniformitarianism in geology had simply come to mean that geology follows the scientific method. Similarly, one might well argue that parsimony is no more necessary than uniformitarianism, and that what content of parsimony remains is simply coextensive with the scientific method. To practice science is to reason in accordance with Ockham’s razor, but we need not explicitly invoke or apply Ockham’s razor, because its prescriptions are assimilated into the scientific method. And indeed this idea fits in quite well with the casual references to Ockham’s razor such as that I quoted above. Most scientists do not need to think long and hard about parsimony, because parsimonious formulations are already a feature of the scientific method. If you follow the scientific method, you will practice parsimony as a matter of course.
Copernicus’ Ockham, then, was the Ockham already absorbed into nascent scientific thought. Perhaps it would be better to say that parsimony is implicit in the scientific method, and Copernicus, in implicitly following a scientific method that had not yet, in his time, been made explicit, was following the internal logic of the scientific method and its parsimonious demands for simplicity.
Osiander was bitterly criticized in his own time for his unauthorized preface to Copernicus, though many immediately recognized it as a gambit to allow the reception of Copernicus’ work to involve the least amount of controversy. As I noted above, the Protestant Reformation was in full swing, and the events that would lead up to the Thirty Years’ War were beginning to unfold. Europe was a powder keg, and many felt that it was the better part of valor not to touch a match to any issue that might explode. All the while, others were doing everything in their power to provoke a conflict that would settle matters once and for all.
Osiander not only added the unsigned and unauthorized preface, but also changed the title of the whole work from De revolutionibus to De revolutionibus orbium coelestium, adding a reference to the heavenly spheres that was not in Copernicus. This, too, can be understood as a concession to the intellectually conservative establishment — or it can be seen as a capitulation. But it was the preface, and what the preface claimed as the proper way to understand the work, that was the nub of the complaint against Osiander.
Here is a long extract of Osiander’s unsigned and unauthorized preface to De revolutionibus, not quite the whole thing, but most of it:
“…it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth.
But neither of them will understand or state anything certain, unless it has been divinely revealed to him.”
Nicholas Copernicus, On the Revolutions, Translation and Commentary by Edward Rosen, THE JOHNS HOPKINS UNIVERSITY PRESS, Baltimore and London
If we eliminate the final qualification, “unless it has been divinely revealed to him,” Osiander’s preface is a straightforward argument for instrumentalism. Osiander recommends Copernicus’ work because it gives the right results; we can stop there, and need not make any metaphysical claims on behalf of the theory. This ought to sound very familiar to the modern reader, because this kind of instrumentalism has been common in positivist thought, and especially so since the advent of quantum theory. Quantum theory is the most thoroughly confirmed theory in the history of science, confirmed to a degree of precision almost beyond comprehension. And yet quantum theory still lacks an intuitive correlate. Thus we use quantum theory because it gives us the right results, but many scientists hesitate to give any metaphysical interpretation to the theory.
Copernicus was a staunch scientific realist, as were those most convinced of his theory, like Rheticus. He did not propose his cosmology as a mere system of calculation, but insisted that his theory was the true theory describing the motions of the planets around the sun. It follows from this uncompromising scientific realism that other theories are not merely less precise in calculating the movements of the planets, but false. Scientific realism accords with common sense realism when it comes to the idea that there is a correct account of the world, and other accounts that deviate from the correct account are false. But we all know that scientific theories are underdetermined by the evidence. To formulate a law is to go beyond the finite evidence and to be able to predict an infinitude of possible future states of the phenomenon predicted.
Scientific realism, then, is an ontologically robust position, and this ontological robustness is a function of the underdetermination of the theory by the evidence. Osiander argues of Copernicus’ theory that, “if they provide a calculus consistent with the observations, that alone is enough.” So Osiander is not willing to go beyond the evidence and posit the truth of an underdetermined theory. Moreover, Osiander was willing to maintain empirically equivalent theories, “since different hypotheses are sometimes offered for one and the same motion.” Given empirically equivalent theories that can both “provide a calculus consistent with the observations,” why would one theory be favored over another? Osiander states that the astronomer will prefer the simplest explanation (hence explaining Copernicus’ position) while the philosopher will seek a semblance of truth. Neither, however, can know what this truth is without divine revelation.
Osiander’s Ockham is the convenience of the astronomer to seek the simplest explanation for his calculations; the astronomer is justified in employing the simplest explanation of the most precise method available to calculate and predict the course of the heavens, but he cannot know the truth of his theory unless that truth is guaranteed by some outside and transcendent evidence not available through science — a deus ex machina for the mind.
The origins of the scientific revolution in Copernicus
Copernicus’ Ockham was ontological parsimony; Osiander’s Ockham was methodological parsimony. Are we forced to choose between the two, or can we find a balance between ontological and methodological parsimony? These are still living questions in the philosophy of science today, and there is a sense in which it is astonishing that they appeared so early in the scientific revolution.
As noted above, the world of Copernicus was essentially a medieval world. Toruń and Frombork were far from the medieval centers of learning in Paris and Oxford, and about as far from the renaissance centers of learning in Florence and Nuremberg. Nevertheless, the new cosmology that emerged from the scientific revolution, and which is still our cosmology today, continuously revised and improved, can be traced to the Baltic coast of Poland in the late fifteenth and early sixteenth century. The controversy over how to interpret the findings of science can be traced to the same root.
The conventions of the scientific method were established in the work of Copernicus, Galileo, and Newton, the seminal thinkers of the scientific revolution. Like the cosmologies of Copernicus, Galileo, and Newton, the scientific method has also been continuously revised and improved. That Copernicus grasped in essence as much of the scientific method as he did, working in near isolation far from the intellectual centers of western civilization, demonstrates both the power of Copernicus’ mind and the power of the scientific method itself. As implied above, once grasped, the scientific method has an internal logic of its own that directs the development of scientific thought.
The scientific method — methodological naturalism — exists in an uneasy partnership with scientific realism — ontological naturalism. We can see that this tension was present right from the very beginning of the scientific revolution, before the scientific method was ever formulated, and the tension continues down to the present day. Contemporary analytical philosophers discuss the questions of scientific realism in highly technical terms, but it is still the same debate that began with Copernicus, Rheticus, and Osiander. Perhaps we can count the tension between methodological naturalism and ontological naturalism as one of the fundamental tensions of scientific civilization.
. . . . .
Updates and Addenda
This post began as a single sentence in one of my notebooks, and continued to grow as I worked on it. As soon as I posted it I realized that the discussions of scientific realism, instrumentalism, and methodological naturalism in relation to parsimony could be much better. With additional historical and philosophical discussion, this post might well be transformed into an entire book. So for the questioning reader, yes, I understand the inadequacy of what I have written above, and that I have not done justice to my topic.
Shortly after posting the above Paul Carr pointed out to me that the joint ESA-NASA Ulysses deep-space mission sent a spacecraft to study the poles of the sun, so we have sent a spacecraft out of the plane of the solar system, which could “look down” on our star and its planetary system, although the mission was not designed for this and had no cameras on board. If we did position a camera “above” our solar system, it would be able to take pictures of our heliocentric solar system. This, however, would still be indirect evidence — more direct than deductions from observations, but not as direct as seeing this with one’s own eyes — like the famous picture of the “blue marble” Earth, which is an overview experience for those of us who have not been into orbit or to the moon, but which is not quite the same as going into orbit or to the moon.
Paul Carr also drew my attention to Astronomy Cast Episode 390: Occam’s Razor and the Problem with Probabilities, with Fraser Cain and Pamela Gay, which discusses Ockham’s razor in relation to positing aliens as a scientific explanation.
. . . . .
2 November 2015
Historians can always reach further back into the past in order to find ever-more-distant antecedents to the world of today. This is one of the persistent problems of periodization, and it often results in different historians employing different periodizations of the same temporal continuum. There are periodizations that involve greater and lesser consensus. There is a significant degree of consensus that the industrial revolution begins with James Watt’s steam engine developed from 1763 to 1775. Watt’s steam engine, of course, does not appear out of nowhere. It was preceded by the use of much less efficient Newcomen engines used to pump water from mine shafts. It was also preceded by hundreds of years of medieval industry that employed wind and water power to run machinery, so that it was “merely” a matter of installing one of Watt’s new steam engines in an existing mechanical infrastructure that made the industrial revolution possible. Of course, the reality of the historical process is much more detailed — and much more interesting — than that. The steam engine was a trigger, and large scale economic and social forces were already in play that made it possible for the industrial revolution to transform civilization.
The life of Sir Richard Arkwright reveals the search for historical antecedents in particular clarity — as well as revealing the complexity of the historical process — as Arkwright spent the greater part of his life inventing textile machinery and building mills, some of which were horse powered and most of which were water powered. In 1790 Arkwright built the first textile factory powered by a Boulton and Watt steam engine in Nottingham, England. Arkwright was a man of many plans, who always had another new project into which he poured his apparently abundant energies. The industrial application of the steam engine was only one of many of Arkwright’s projects. Men like Arkwright prepared the ground for the industrial revolution through a thousand developments that preceded it. Everything had to be in place for the steam engine to be exploited in the way that it was — a capitalist economy as described by Adam Smith on the eve of the industrial revolution, legal institutions that respected private property, nascent industry powered by wind and water, literacy, science in its modern form, and so on.
The steam engine might have come about merely by tinkering — its construction was not predicated upon the most advanced scientific knowledge of the time, or the application of this science — and it might have stayed within the realm of tinkering, confined to a social class that did not receive an education in science. Instead, something unprecedented happened. The development of the steam engine led to theorizing about the steam engine, which in turn led to the development of a fundamental science that is still with us today, long after steam engines have ceased to play a significant role in our civilization. Other technologies replaced the steam engine, and the technologies that replaced the steam engine were replaced with later technologies, and so on through several generations of technologies. But the science that grew out of the study of steam engines is with us still in the form of thermodynamics, and thermodynamics is central to contemporary science.
Indeed, we have passed from the study of ideal steam engines to the study of the universe entire in terms of thermodynamics, so that the scope of thermodynamics has relentlessly expanded since its introduction, even while the applications of steam engines have been reduced in scope until they are a marginal technology. How is this unprecedented? No Greek philosopher ever wrote a theoretical treatise on Hero’s steam turbine, and if a Greek philosopher had done so, there simply was not enough of a background of scientific knowledge to do so coherently. Archimedes did write several treatises on practical matters, and there was enough mathematics in classical antiquity to give a mathematical treatment of certain problems that might be characterized as physics, but Archimedes remained an individual working mostly in isolation. His work did not become a scientific research program (in the Lakatosian sense); he was not a member of a community of researchers sharing results and working jointly on experiments.
There is a striking resemblance between the industrial revolution and the British agricultural revolution. In most feudal societies of the time — and almost every society at the time was feudal to some degree — the land-owning classes that controlled the agricultural economy that was the engine of society would not work with their hands. To work with one’s hands was to acknowledge that one was a laborer or a tradesman, and this would be a considerable reduction in social status for an aristocrat. What is distinctive about England is that a few aristocrats became passionately interested in the ordinary business of life, and they threw themselves into this engagement in a way that cast aside the traditional taboo against the upper classes working with their hands. A figure who somewhat resembles Arkwright is Sir Thomas Coke of Norfolk, an aristocrat who did not scruple to mix with his tenant farmers, and who actively participated in agricultural reforms. The selective breeding of stock became progressively more scientific over time, and influenced Darwin, who devoted the opening chapter of On the Origin of Species to “Variation under Domestication,” which is concerned with selective breeding.
The core of scientific civilization as we know it is the patient and methodical application of the scientific method to industrial processes (including the processes of industrial agriculture). All civilizations have had technologies; all civilizations have had industries. Only scientific civilizations apply science to technology and industry in a systematic way. The tightly-coupled STEM cycle of our industrial-technological civilization has led to more technological change in the past century than occurred in the previous ten thousand years. Thus technology has experienced exponential growth, but only because this growth was driven by the application of science.
The role of science in industrial-technological civilization may be less evident than the role of technology, and indeed some desire the technology but are suspicious of the science, and seek to decouple the two. While some technologies pose some moral dilemmas, these dilemmas can be met (if unsatisfactorily met) simply by limiting the application of the technology. The ideas of science are not so easily limited, and they pose an intellectual threat — an existential threat — to ideological complacency.
The scientific civilization that has been created in the wake of the industrial revolution is so productive that it enables non-survival behavior orders of magnitude beyond the non-survival behavior of earlier civilizations. Human intellectual capacity gives us a survival margin not possessed by other species, so that even in a non-civilized condition human societies can engage in non-survival behavior. Here is a passage from Sam Harris on non-survival behavior that suggests the meaning I am getting at:
“Many social scientists incorrectly believe that all long-standing human practices must be evolutionarily adaptive: for how else could they persist? Thus, even the most bizarre and unproductive behaviors — female genital excision, blood feuds, infanticide, the torture of animals, scarification, foot binding, cannibalism, ceremonial rape, human sacrifice, dangerous male initiations, restricting the diet of pregnant and lactating mothers, slavery, potlatch, the killing of the elderly, sati, irrational dietary and agricultural taboos attended by chronic hunger and malnourishment, the use of heavy metals to treat illness, etc. — have been rationalized, or even idealized, in the fire-lit scribblings of one or another dazzled ethnographer. But the mere endurance of a belief system or custom does not suggest that it is adaptive, much less wise. It merely suggests that it hasn’t led directly to a society’s collapse or killed its practitioners outright.”
Sam Harris, The Moral Landscape, Introduction
As a result of the productive powers of scientific civilization, science can remain a marginal activity, largely walled off from the general public, while continuing to revolutionize the production processes of industry. This process of walling off science from the general public partly occurs due to the public’s discomfort with and distrust of science, but it also occurs partly due to the desire of scientists to continue their work without having to justify it to the general public, as the process of public justification inevitably becomes a social and political process in which the values unique to science easily become lost (This will be the topic of a future post, currently being drafted, on science communication to the public).
This social disconnect sets up an image of embattled scientists trying to carry on the work of scientific civilization in the face of what Ortega y Gasset called the revolt of the masses. A public indifferent to, or even hostile to, science decides, through its representatives, what sciences get funded and how much they get funded, and their social choices decide the social standing of the sciences and scientists. Can scientific civilization endure when those responsible for its continuation are increasingly marginal in social and political thought?
The house of industrial-technological civilization cannot long stand divided against itself. But taking the long view that was seen to be necessary to understanding the industrial revolution — that the steam engine was a trigger that occurred in the context of a civilization ripe for transformation — we must wonder what pervasive yet subtle changes are taking place today that may be triggered by the advent of some new invention that will transform civilization. While I think that scientific civilization has a long run ahead of it, scientific civilization can take many forms, of which industrial-technological civilization is but one early example. We live in the midst of industrial-technological civilization, so its institutions feel permanent and unchangeable to us, even as the most passing acquaintance with history will demonstrate that almost everything we take for granted today is historically unprecedented.
. . . . .
. . . . .
. . . . .
. . . . .
26 October 2015
Between the advent of cognitive modernity, perhaps seventy thousand years ago (more or less), and the advent of settled agricultural civilization, about ten thousand years ago, there is a period of fifty thousand years or more of human history — an order of magnitude of history beyond the historical period, sensu stricto, i.e., the period of written records formerly presumed coextensive with civilization — that we have only recently begun to recover by the methods of scientific historiography. This pre-Holocene world was a world of the “ice age” and of “cave men.” These ideas have become so confused in popular culture that I must put them in scare quotes, but in some senses they are accurate, if occasionally misleading.
One way in which the idea of an “Ice Age” is misleading is that it implies that our warmer climate today is the norm and an ice age is a passing exception to that norm. This is the reverse of the case. For the past two and a half million years the planet has been passing through the Quaternary Period, which mostly consists of long (about 100,000 year) periods of glaciation punctuated by shorter (about 10,000 year) interglacial periods (also called warming periods) during which the global climate warms and the polar ice sheets retreat. I have pointed out elsewhere that, although human ancestors have been present throughout the entire Quaternary, and so have therefore experienced several cycles of glaciation and interglacials, the present interglacial (the Holocene) is the first warming period since cognitive modernity, and we find the beginnings of civilization as soon as this present warming period begins. Thus the Holocene Epoch is dominated, from an anthropocentric perspective, by civilization; the Quaternary Period before the Holocene Epoch is, again from an anthropocentric perspective, human history before civilization: history before history.
We should remind ourselves that this very alien world is the precursor to our world, and that its inhabitants are our direct ancestors. In other words, this is us. This is our history, even if we have only recently become accustomed to thinking of prehistory as history no less than the historical period sensu stricto. The Upper Paleolithic, with its ice age, cave bears, cave men, painted animals seen in flickering torchlight, and thousands upon thousands of years of a winter that does not end was a human world — the human world of the Upper Paleolithic — that we can only with effort recover as our own and come to feel its formative power to shape what we have become. In technical terms, this human world of the Upper Paleolithic was our environment of evolutionary adaptedness (EEA). It is this world that made us what we are today.
One website has this very evocative passage describing the world of the Upper Paleolithic:
“The longest war ever fought by humans was not fought against other humans, but against another species — Ursus spelaeus, the Cave Bear. For several hundred thousand years our stone age ancestors fought pitched and bloody battles with these denizens of the most precious commodity on earth — habitable caves. Without these shelters homo sapiens would have had little chance of surviving the Ice Ages, the winter storms, and the myriad of predators that lurked in the dark.”
While there isn’t direct scientific evidence for this compellingly dramatic way of thinking about the Upper Paleolithic (though I was very tempted to title this post “The 100,000 Year War”), it can accurately be said that human/cave bear interactions did occur during the most recent glacial maximum. Both human beings and cave bears are warm-blooded mammals, and caves would have provided a measure of protection and warmth that would have endured literally for thousands or tens of thousands of years during this climatological “bottleneck” for mammals, whereas no human-built shelter could have survived these conditions for this period of time. Another species as ill-suited for cold weather as homo sapiens would have simply moved on or gone extinct, but we had our big brains by this time, and this made it possible for early man to fight tenaciously to keep a grip on life even in an environment in which they had to fight cave bears for the few available shelters.
Human beings would have survived elsewhere on the planet in any event, because the equatorial belt was still plenty warm at the time, but the fact that some human beings survived in caves in glaciated Europe is a testament both to their cognitive modernity and their stubbornness. It becomes a little easier to understand how and why early human beings squeezed into caves through passages that cause contemporary archaeologists to experience not a little claustrophobia, when we understand that human beings were routinely inhabiting caves, and probably had to explore them in some depth to make sure they wouldn’t have any unpleasant surprises when a cave bear woke up from its hibernation in the spring.
Unlike human beings, cave bears probably could not have survived elsewhere — they were a species endemic to a particular climate and a particular range and did not have the powers of behavioral adaptation possessed by human beings. The caves of ice age Eurasia were their world, and they spent enough time in these shelters that the walls of caves have a distinctive sheen that is called “Bärenschliffe”:
The “Bärenschliffe” are smooth, polished and often shining surfaces, thought to be caused by passing bears, rubbing their fur along the walls. These surfaces do not only occur in narrow passages, where the bear would come into contact with the walls, but also at corners or rocks in wider passages.
“Trace fossils from bears in caves of Germany and Austria” by Wilfried Rosendahl and Doris Döppes, Scientific Annals, School of Geology Aristotle University of Thessaloniki, Special volume 98, p. 241-249, Thessaloniki, 2006.
Some of these caves are said to be polished “like marble” (I haven’t visited any of these caves myself, so I am reporting what I have read in the literature), so that one must imagine cave bears passing through the narrow passages of their caves for thousands of years, brushing against the wall with their fur until the rough stone is made smooth. The human beings who later took over these caves would have run their hand along these smooth walls, noted the niches where the bears hibernated, and wondered if another bear would come to claim the cave they had claimed.
There is a particularly interesting cave in Switzerland, Drachenloch (which means “dragon’s lair,” as cave bear skulls were once thought to have been the skulls of dragons), in which early human beings seem to have stacked cave bear skulls in a stone “vault” in the floor of the cave. Certainly these two mammal species — ursus spelaeus and homo sapiens — would have known each other by all their shared signs of cave habitation. Indeed, they would have smelled each other.
Mythology scholar Joseph Campbell many times pointed out the fundamental mythological differences between hunter-gatherer peoples and settled agricultural peoples; in the case of the Upper Paleolithic, we have hunter-gatherers and only hunter-gatherers — that is to say, tens of thousands of years of a belief system emergent from a hunting culture with virtually no alternatives. Given the tendency of hunting peoples to animism, and of viewing other species as spiritually significant — metaphysical peers, as it were — one would expect that hunters who fought and killed cave bears in order to take over their shelters would have revered these animals in a religious sense, and this religious reverence for the slain foe (of any species) could explain the prevalence of apparent cave bear altars in caves inhabited by human beings during the Upper Paleolithic.
The human world of the Upper Paleolithic would also have been a world shared with other hominid species — an experience we do not have today, being the sole surviving hominid (perhaps as the result of being a genocidal species) — and most especially shared with Neanderthals. Recent genetic research has demonstrated that there was limited interbreeding between homo sapiens and Neanderthals (cf., e.g., Neanderthals had outsize effect on human biology), but it is likely that these communities were mostly separate. If we reflect on the still powerful effect of in-group bias in our cosmopolitan world, how much stronger must in-group bias have been among these small communities of homo sapiens, homo neanderthalensis, and Denisova hominins? One suspects that strong taboos were associated with other species, and rivals in hunting.
It is likely that Neanderthals evolved in the Levant or Europe from human ancestors who left Africa prior to the speciation of Homo sapiens. The Neanderthals were specifically adapted to life in the cold climates of Eurasia during the last glacial maximum. However, such is the power of intelligence as an adaptive tool that the modern human beings who left Africa were able to displace Neanderthals in their own environment, much as homo sapiens displaced a great many other species (and much as they displaced cave bears from their caves). While Neanderthals had larger brains than Homo sapiens, made tools, and wore clothing after a fashion, they did not pass through a selective filter that (would have) resulted in the Neanderthal equivalent of cognitive modernity.
Homo sapiens made better tools and better clothing, and, in the depths of the last glacial maximum, better tools and better clothing constituted the margin between survival and extinction. Perhaps the most significant invention in hominid history after the control of fire was the bone needle, which allowed for the sewing of form-fitting clothing. With form-fitting clothing our prehistoric ancestors were able to make their way through the world of the last glacial maximum and to occupy every biome and every continent on the planet (with the exception of Antarctica).
While “lost worlds” and inexplicable mysteries are a favorite feature of historical popularization, the lost human world of the Upper Paleolithic is being recovered for us by scientific historiography. We are, as a result, reclaiming a part of our identity lost for the ten thousand years of civilization since the advent of the Holocene. The mystery of human origins is gradually becoming less mysterious, and will become less mysterious still, the more that we learn.
. . . . .
. . . . .
. . . . .
. . . . .
23 October 2015
Some Lessons from Ineffective Interventions
Nation-states and other mainstream political entities often find their policy options constrained by public opinion, legal limitations, treaty obligations, and the moral scruples of individual leaders, and so in order to act with fewer constraints they cultivate relationships with militant groups that can act as proxies and which can take on missions that the regular forces of a nation-state cannot be tasked to accomplish. During the Cold War, the Soviet Union supported an array of militant proxies by identifying their struggle with global communist revolution and so exapting local struggles as new theaters for the Cold War, and they did this very effectively. At the present time, Iran has proved itself masterful in the use of militant proxies to affect outcomes throughout its sphere of influence, even as its economy has suffered from international sanctions. One can only admire this bravura performance, especially in comparison to the lackluster US efforts to mobilize militant proxies. What is it about the US political process that has meant that the militant proxies selected by the US have been largely ineffective?
The relation between a militant proxy and its state sponsor is what we would today call a “mutually beneficial relationship.” The relationship to a militant proxy with military objectives that are politically unacceptable (especially for a democracy) grants plausible deniability to the sponsor, who can then act with fewer constraints from behind a veil of secrecy, and it provides resources for the militant group. However, the relationship is often a troubled one. Militant proxies are often extraordinarily difficult to control and constrain, even when a state sponsor of such a proxy can pull the plug on its funding. It was said that Mullah Mohammad Omar was a “rigid man who defied even his patrons.” While the Taliban were not properly a militant proxy organization, the precursors of the Taliban functioned as US militant proxies employed against the Soviet occupation of Afghanistan, and constituted one of the few successes for the US in proxy warfare — though this success came at a great cost. The description of Mullah Omar by his Pakistani “handlers” prophetically fits the profile of an unmanageable proxy.
Most militant proxies do not view themselves as proxies, but as stand-alone militant groups with their own ends, aims, objectives, and motivations. If a state sponsor gives them arms, matériel, supplies, training, and advisers, such groups rarely feel beholden to these sponsors. Should the sponsor object to its other objectives or its methods, the attitude of the militant proxy is often the equivalent of a shoulder shrug and a dismissive, “Let them fund the revolution.” There is nothing more commonplace in geopolitics than a nation-state patron of a militant proxy believing itself to be in control of a situation, only to discover that it cannot force the cooperation of its militant proxy at a sensitive political moment. And if a militant proxy can make itself strong enough through the temporary receipt of aid from a state sponsor, it can accept this aid in a pure spirit of cynical opportunism, making the calculation that once the aid is cut off due to lack of cooperation of the militant proxy with the sponsor state’s agenda, the group can then function on its own.
Perhaps the most well-known and effective militant proxy of our time is Hezbollah, which runs a virtual state-within-a-state in Lebanon, controlling much of the country, its politics, and a considerable geographical region. Hezbollah has long been one of the most effective and efficient militant proxies for Iran, and until recently also acted as a militant proxy of Syria; Syria was a crucial conduit for Iranian aid to reach Hezbollah in Lebanon. However, since Syria’s descent into civil war, Hezbollah has acted on behalf of the Syrian government as an agent in the internal struggle, rather than as an instrument of external force projection. That is to say, Hezbollah has proved itself such an effective fighting force that it not only defends its own interests, but now returns to defend the interests of its former sponsor, now under duress and in need of sponsorship itself.
During the Vietnam war, the CIA virtually created a militant proxy from Hmong tribesmen. Because the US could not openly operate in Laos, a militant proxy was the weapon of choice to expand the war against Vietnam’s communists to the Pathet Lao communists in Laos. By most accounts the Hmong were effective fighters, but they were the military equivalent of astroturf: not grass roots, but essentially created by the CIA for US purposes. As long as the money flowed to pay and supply the fighters, they fought. The Hmong, then, turned out to be ineffective not because they couldn’t fight, but because they were more mercenaries than militant proxies. There is an interesting lesson in this observation: a truly effective militant proxy should have its own agenda, but, as we have already seen, recalcitrant proxies can be dangerous, and so there must be a balance between the militant agenda and the sponsor’s agenda.
After the US withdrawal from Indochina, Cold War proxy wars came closer to US shores as a number of guerrilla wars were fought in Central America, with the US backing a number of militant proxies in the region, most famously the Nicaraguan Contras, who fought against the Sandinista government of Nicaragua. The Contras did manage to disrupt the region, but they mounted few effective military operations, and no decisive operations. One would have thought, what with US resources flowing into a conflict so close to its borders, that it would have created a truly formidable militant proxy. This did not happen. In Latin America, militant proxies both communist and anti-communist became deeply involved in the drug trade in order to finance their operations. There is another important lesson to be learned from this failure: personal greed often trumps ideological fervor, and if the head of a militant proxy sees an opportunity to transform himself into a drug lord, he may well do so.
In the war in Iraq to unseat Saddam Hussein, with the Kurdish Peshmerga, the US had, in a rare instance, partnered with a militant proxy that really could deliver the goods. The Kurds fought effectively and seemed to possess the right balance between serving their own ends and serving the agenda of a sponsor. If the US had promised the Kurds a state of their own in exchange for an effective settlement of the conflict in Kurdish lands (or, perhaps I should say, the aspirational map of Kurdish sovereignty), it is highly likely that this could have been achieved. And by “a state of their own” I mean a real de jure nation-state, and not the de facto state the Kurds now possess. But the US is a world power, and the use of US power to guarantee a Kurdish state would have offended too many US allies. The overall strategic goals of the US and the Kurds were and are incompatible. Almost certainly both the US and the Kurds knew from the beginning that their alignment had to be temporary.
The Kurds have been in the news again lately because they have proved themselves among the few regional forces that have been combat effective against ISIS, and because of a particularly compelling photograph of a female Kurdish fighter that became briefly famous on the internet. However, the temporary and effective convergence of interests between the US and the Kurds was not fully exploited and the moment for this has probably passed.
In the tumult and confusion of regional instability in Mesopotamia, the US is now supporting the Free Syrian Army in Syria. Unfortunately, the Free Syrian Army is not making progress, but they have said the kind of things that US political leaders like to hear, and so they receive support. This is the “strategy” for Syria, but Syria cannot be treated in isolation from the rest of the region.
We can no longer speak of a US strategy in Iraq, Afghanistan, and Syria, because these states are nation-states in name only. The facts on the ground belie the official maps and the seats for delegates to the UN. We must speak of US policy regionally, apart from any hollowed-out and ineffective nation-state; and regionally, it must be said, US strategy has been a catastrophic failure. Worse yet, the US is continuing old and repeatedly failed policies, as though the situation on the ground can be rescued and turned around if the US will simply keep doing what it has been doing. In other words, the US is digging itself in deeper by the day.
One of the problems in Mesopotamia and the Levant is the failure of the US to support effective militant proxies, and its willingness to support ineffective militant proxies, so that even as it is spending money and political capital on forces that cannot win, it is not spending these same resources on forces that could win if given the chance. Thus the US is experiencing an opportunity cost in the region with profound consequences. If the US supported the Kurds instead of the Free Syrian Army it might actually accomplish something, but this apparently comes with unacceptable political costs — as though the costs of failure could more easily be borne.
The bottom line is that US militant proxies are selected for ideological reasons rather than for reasons of combat effectiveness or shared military objectives. This is a disastrous mistake. Trying to select winners on the battlefield is a lot like a nation-state attempting to choose winners in the marketplace: states are notoriously bad at picking winners, and when they attempt to use the power they possess as a nation-state to enforce their choice (i.e., when they try to turn a loser into a winner by the methods available to nation-states) they usually fail. Not only that, they fail at great cost.
. . . . .
. . . . .
. . . . .
. . . . .
13 October 2015
Case Studies in Civilization:
Civilizations of the Tropical Rainforest Biome
In an earlier post, Riparian Civilizations, I outlined some of the commonalities of civilizations that had their origins in fertile river valleys — most notably the civilizations of Mesopotamia, i.e., the Fertile Crescent bounded by the Tigris and Euphrates rivers, the civilization of ancient Egypt, based on the annual flooding of the Nile, the Yellow River Valley civilization (the source of Chinese civilization), and the Indus Valley civilization (the source of civilization in the Indian subcontinent).
While these early civilizations occurred in warm latitudes within or near the equatorial belt, they were not in tropical rain forests. The biome of a river valley can vary according to rainfall and temperature, even within the tropics. The Congo basin is dominated by tropical rain forests, while the Nile Valley is a canyon that cuts through a desert biome, and so shares properties of the desert and of the river. Mesopotamia has (or had) extensive wetlands fed by its rivers, which became the domain of the Marsh Arabs, who adopted a unique way of life specially suited to this environment. But, again, this was not a tropical rain forest, though Mesopotamia lies in a warm climatic zone.
In addition to spatial distinctions among biomes, i.e., recognizing biomes confined to a given geographical region, temporal distinctions must also be made, both because of changing biomes over time due to climatological shifts, and changing human abilities to inhabit and settle a given biome, largely a function of increasing technology. Thus a distinction can be made between civilizations that originate within a given biome and civilizations that acclimate to a given biome. The colonial civilizations that came to Brazil in the early modern period, and to the Congo and SE Asia in the nineteenth century, were transplanted civilizations that adapted to and acclimated to a tropical rainforest biome, and can legitimately be called rainforest civilizations, but none of these civilizations originated in a tropical rainforest biome.
We are fortunate to have the terrestrial example of two civilizations of completely independent origins, both of the tropical rainforest biome, though in opposite hemispheres: Mayan civilization in the western hemisphere and Khmer civilization in the eastern hemisphere. In the best tradition of settled agricultural civilizations, both the Mayans and the Khmer left monumental architecture. Indeed, the pyramids of Central America and the temples of Angkor Wat, made picturesque by their reclamation by the tropical rain forest that was the incubator of these civilizations, overgrown by vines and their foundations tumbled by the roots of gigantic trees, have become iconic tourist draws in their respective regions of the world. The riches of past civilizations have now been passed down as a kind of legacy to the present peoples, mostly ethnically continuous with the peoples who built these civilizations, whose descendants now derive a modest income from tourist traffic.
We do not yet possess a complete seriation of civilization in the western hemisphere. We know that maize cultivation began in the Rio Balsas valley in what is now southern Mexico, a semi-arid tropical biome (and the native range of the teosinte grasses that were transformed by ancient agriculturalists into maize), and so may be assimilated to the paradigm of riparian civilizations. Mayan civilization, however, was concentrated in the rain forests of Central America. How exactly Mayan civilization was related to its northern neighbor, thousands of years its senior, is not yet fully understood.
Genetic sequencing of maize is a source of recent knowledge about the origins of maize, hence of origins of settled agriculturalism in the western hemisphere, but this work is ongoing at present. Moreover, while maize was an important crop for the Maya, and the Mayan corn god plays an important role in Mayan mythology, it was not the sole staple of the Maya. Maize was one of the “Three Sisters”, along with squash and beans, which together constituted a nutritionally balanced diet, and the cultivation of these crops together was ecologically sustainable due to complementary biochemical interaction with the soil.
We also lack a complete seriation of civilization in Asia, of which a seriation of civilization in Indochina would be an appendage. Khmer civilization rose from a pre-existing context of minor kingdoms in Indochina, and seems to draw upon both Indian and Chinese civilizational origins (though primarily Indian and Hindu), though it should be noted that recent archaeological work in the Malay archipelago suggests that civilization may have independently originated on the island of Java as well (depending upon the antiquity of Gunung Padang), in which case Khmer civilization would constitute a florid syncretism of Indian, Chinese, and Javanese cultural antecedents. Indeed, this is true whether or not civilization independently arose in Java, as the Khmer civilization is many thousands of years younger than these other examples.
The biome in which a civilization arises not only dictates the species available for harvesting and domestication, but also shapes the way in which peoples harvest energy from their environments. Agriculture is one way in which human beings harvest energy from their environments, and different forms of agriculture emerge in distinct biomes. The tropical rainforest biome offers enormous biodiversity, but in tropical civilization we still find the same reliance on a handful of staple crops as we find in civilizations originating in other biomes. Civilization is, in a certain narrow sense (a narrow sense compatible with the biological definition of civilization mentioned below), a voluntary truncation of biodiversity. Hunter-gatherers almost always have a much more varied diet than settled agricultural peoples, who are usually dependent on less than a dozen staple food crops.
The biological definition of civilization as a coevolving cohort of species (cf. section 6 of my Transhumanism and Adaptive Radiation) not only gives us a new tool with which to analyze civilization, but also a suggestive way to compare civilizations. The comparison of civilizations from similar biomes and the contrast of civilizations from distinct biomes is one of these tools. With this method we approach the equivalent of symmetry for the social sciences. Thus we have something to learn from the various ways that riparian civilizations have come to exploit the resources of river systems, and presumably we will have something to learn from the ways that civilizations of the tropical rain forest biome have exploited the high biodiversity of climax communities of tropical rain forests.
Since there is no winter in a tropical rainforest, in Mesoamerica it is possible to raise three crops of maize in a year, and in Indochina it is possible to raise three or four crops of rice in a year. Tropical rainforests thus offer to a civilization the unique opportunity to support the high population densities of cities and ceremonial centers via continuous, year-round food production. However, none of this can happen without water storage and irrigation. Both Mayan and Khmer civilizations might be characterized as hydrological civilizations, since they were predicated upon the careful management of water for irrigation, and both constructed major engineering works (perhaps not as visually impressive as their monumental architecture, but much more interesting from a scientific point of view) to store and to distribute water. The rainforest of Indochina, it should be noted, is a monsoon rainforest, with about six months of rain and six months of drought, so that in order to keep up food production through the months of drought, significant irrigation is necessary, which the Khmer achieved through use of the waters of the Siem Reap river.
Compared to civilizations originating in river valleys, civilizations originating in tropical rain forests are comparatively rare. I have here discussed the two most obvious examples. It is interesting also that both of these civilizations, while they came to full maturity and endured for significant periods of time — many centuries, such as is necessary for a civilization to reach full maturity — seem to have collapsed internally, and not due to contacts with other civilizations. There are, of course, many theories about the collapse of Maya civilization; this has become a perennial archaeological riddle. Current theories favor drought or climate change. I am less familiar with the causes of Khmer decline. But whatever the cause of the decline of the Maya and the Khmer, they were not, for the most part, conquered and subdued. Their cities and temples were abandoned and reclaimed by the jungle, not burned and thrown down.
There are still Mayan people speaking the Mayan language in Mesoamerica, and Khmer people in Indochina; the collapse of these civilizations must have led to at least a partial dispersal of the populations from the great urban centers, which remain in ruins, but whatever catastrophes (or slow decline, if that was the case) befell these civilizations, the people who built them are still to be found in the region. The civilizations became extinct, but the populations did not. The difficulty of building a civilization in a tropical rain forest biome constitutes a significant challenge, and this climatological and biological challenge to civilization may be the reason, or one reason among many, that so few civilizations originated in the tropical rain forest, and, of these two here examined, both came to a natural end.
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .