Saturday


Knowledge relevant to the Fermi paradox will expand if human knowledge continues to expand, and we can expect human knowledge to continue to expand for as long as civilization in its contemporary form endures. Thus the development of scientific knowledge, once the threshold of modern scientific method is attained (which, in terrestrial history, was the scientific revolution), is a function of “L” in the Drake equation, i.e., a function of the longevity of civilization. It is possible that there could be a qualitative change in the nature of civilization that would mean the continuation of civilization but without the continuing expansion of scientific knowledge. However, if we take “L” in the big picture, a civilization may undergo qualitative changes throughout its history, some of which would be favorable to the expansion of scientific knowledge, and some of which would be unfavorable to the same. Under these conditions, scientific knowledge will tend to increase over the long term up to the limit of possible scientific knowledge (if there is such a limit).

At least part of the paradox of the Fermi paradox is due to our limited knowledge of the universe of which we are a part. With the expansion of our scientific knowledge the “solution” to the Fermi paradox may be slowly revealed to us (which could include the “no paradox” solution to the paradox, i.e., the idea that the Fermi paradox isn’t really paradoxical at all if we properly understand it, which is an understanding that may dawn on us gradually), or it may hit us all at once if we have a major breakthrough that touches upon the Fermi paradox. For example, a robust SETI signal confirmed to emanate from an extraterrestrial source might open up the floodgates of scientific knowledge through interstellar idea diffusion from a more advanced civilization. This isn’t a likely scenario, but it is a scenario in which we not only confirm that we are not alone in the universe, but also in which we learn enough to formulate a scientific explanation of our place in the universe.

The growth of scientific knowledge could push our understanding of the Fermi paradox in several different directions, which again points to our relative paucity of knowledge of our place in the universe. In what follows I want to construct one possible direction of the growth of scientific knowledge and how it might inform our ongoing understanding of the Fermi paradox and its future formulations.

At the present stage of the acquisition of scientific knowledge and the methodological development of science (which includes the development of technologies that expand the scope of scientific research), we are aware of ourselves as the only known instance of life, of consciousness, of intelligence, of technology, and of civilization in the observable universe. These emergent complexities may be represented elsewhere in the universe, but we do not have any empirical evidence of these emergent complexities beyond Earth.

Suppose, then, that scientific knowledge expands along with human civilization. Suppose we arrive at the geologically complex moons of Jupiter and Saturn, whether in the form of human explorers or in the form of automated spacecraft, and despite sampling several subsurface oceans and finding them relatively clement toward life, they are all nevertheless sterile. And suppose that we extensively research Mars and find no subsurface, deep-dwelling microorganisms on the Red Planet. Suppose we search our entire solar system high and low and there is no trace of life anywhere except on Earth. The solar system, in this scenario, is utterly sterile except for Earth and the microbes that may float into space from the upper atmosphere.

Further suppose that, even after we discover a thoroughly sterile solar system, all of the growth of scientific knowledge either confirms or is consistent with the present body of scientific knowledge. That is to say, we add to our scientific knowledge throughout the process of exploring the solar system, but we don’t discover anything that overturns our scientific knowledge in a major way. There may be “revolutionary” expansions of knowledge, but no revolutionary paradigm shifts that force us to rethink science from the ground up.

At this stage, what are we to think? The science that brought us to see the potential problem represented by the Fermi paradox is confirmed, meaning that our understanding of biology, the origins of life, and the development of planets in our solar system is refined but not changed, but we don’t find any other life even in environments in which we would expect to find life, as in clement subsurface oceans. I think this would sharpen the feeling of the paradoxicalness of the Fermi paradox without shedding much light on an improved formulation of the problem that would seem less paradoxical, but it wouldn’t sharpen the paradox to a degree that would force a paradigm shift and a reassessment of our place in the universe, i.e., it wouldn’t force us to rethink the astrobiology of the human condition.

Let us take this a step further. Suppose our technology improves to the point that we can visit a number of nearby planetary systems, again, whether by human exploration or by automated spacecraft. Suppose we visit a dozen nearby stars in our galactic neighborhood and we find a few planets that would be perfect candidates for living worlds with a biosphere — in the habitable zone of their star, geologically complex with active plate tectonics, liquid surface water, appropriate levels of stellar insolation without deadly levels of radiation or sterilizing flares, etc. — and these worlds are utterly sterile, without even so much as a microbe to be found. No sign of life. And no sign of life in any other nooks and crannies of these other planetary systems, which will no doubt also have subsurface oceans beyond the frost line, and other planets that might give rise to other forms of life.

At this stage in the expansion of our scientific knowledge, we would probably begin to think that the Fermi paradox was to be resolved by the rarity of the origins of life. In other words, the origins of life is the great filter. We know that there is a lot of organic chemistry in the universe, but what doesn’t take place very often is the integration of organic molecules into self-replicating macro-molecules. This would be a reasonable conclusion, and might prove to be an additional spur to studying the origins of life on Earth. Again, our deep dive both into other planets and into the life sciences confirms what we know about science and finds no other life (in the present thought experiment).

While there would be a certain satisfaction in narrowing the focus of the Fermi paradox to the origins of life, if the growth of scientific knowledge continues to confirm the basic outlines of what we know about the life sciences, it would still be a bit paradoxical that the life sciences understood in a completely naturalistic manner would render the transition from organic molecules to self-replicating macro-molecules so rare. In addition to prompting a deep dive into origins of life research, there would probably also be a lot of number-crunching in order to attempt to nail down the probability of an origins of life event taking place given all the right elements are available (and in this thought experiment we are stipulating that all the right elements and all the right conditions are in place).

Suppose, now, that human civilization becomes a spacefaring supercivilization, in possession of technologies so advanced that we are more-or-less empowered to explore the universe at will. In our continued exploration of the universe and the continued growth of scientific knowledge, the same scenario as previously described continues to obtain: our scientific knowledge is refined and improved but not greatly upset, but we find that the universe is utterly and completely sterile except for ourselves and other life derived from the terrestrial biosphere. This would be “proof” of a definitive kind that terrestrial life is unique in the universe, but would this finding resolve the Fermi paradox? Wouldn’t it be a lot like cutting the Gordian knot to assert that the Fermi paradox was resolved because only a single origins of life event occurred in the universe? Wouldn’t we want to know why the origins of life was such a hurdle? We would, and I suspect that origins of life research would be pervasively informed by a desire to understand the rarity of the event.

Suppose that we ran the numbers on the kind of supercomputers that a supercivilization would have available to it, and we found that, even though our application of probability to the life sciences indicated that origins of life events should, strictly speaking, be very rare, they shouldn’t be so rare that there was only a single, unique origins of life event in the history of the universe. Given the age and the extent of the universe, which is very old and vast beyond human comprehension, life should have originated, say, a half dozen times. However, at this point we are a spacefaring supercivilization, and we can empirically confirm that there is no other life in the universe. We would not have missed another half dozen instances of life, and yet our science points to their existence. However, a half dozen compared to no other instances of life isn’t yet even an order of magnitude difference, so it doesn’t bother us much.

We can ratchet up this scenario as we have ratcheted up the previous scenarios: probability and biology might converge upon a likelihood of a dozen instances of other origins of life events, or a hundred such instances, and so on, until the orders of magnitude pile up and we have a paradox on our hands again, despite having exhaustive empirical evidence of the universe and its sterility.
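The escalation described above can be made concrete with a toy model. If independent origins-of-life events are treated as a Poisson process with some expected count, the probability of a completely sterile universe (apart from ourselves) falls off exponentially as that expected count grows. The numbers below are illustrative assumptions, not estimates; this is a minimal sketch of why the paradox sharpens as the orders of magnitude pile up.

```python
import math

def prob_no_other_life(expected_events: float) -> float:
    """Poisson probability of observing zero events given an expected count.

    Assumes origins-of-life events are independent and occur at a fixed
    expected rate over the observable universe -- a toy model, not a claim
    about real astrobiological statistics.
    """
    return math.exp(-expected_events)

# As the expected count climbs by orders of magnitude, an empirically
# sterile universe becomes astronomically improbable under the model.
for lam in (0.5, 6, 12, 100):
    print(f"expected events: {lam:>5} -> P(sterile) = {prob_no_other_life(lam):.3e}")
```

Even at an expected half dozen events, a sterile universe retains a probability of roughly a quarter of a percent; at a hundred expected events it is vanishingly improbable, which is the point at which the empirical finding and the theoretical expectation become genuinely paradoxical.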

At what point in the escalation of this scenario do we begin to question ourselves and our scientific understanding in a more radical way? At what point does the strangeness of the universe begin to point beyond itself, and we begin to consider non-naturalistic solutions to the Fermi paradox, when, by some ways of understanding the paradox, it has been fully resolved, and should be regarded as such by any reasonable person? At what point should a rational person consider as a possibility that a universe empty of life except for ourselves might be the result of supernatural creation? At what point would we seriously consider the naturalistic equivalent of supernatural creation, say, in a scenario such as the simulation hypothesis? It might make more sense to suppose that we are an experiment in cosmic isolation conducted by some greater intelligence, than to suppose that the universe entire is sterile except for ourselves.

I should be clear that I am not advocating a non-naturalistic solution to the Fermi paradox. However, I find it an interesting philosophical question that there might come a point at which the resolution of a paradox requires that we look beyond naturalistic explanations, and perhaps we may have to, in extremis, reconsider the boundary between the naturalistic and the non-naturalistic. I have been thinking about this problem a lot lately, and it seems to me that the farther we depart from the ordinary business of life, when we attempt to think about scales of space and time inaccessible to human experience (whether the very large or the very small), the line between the naturalistic and the non-naturalistic becomes blurred, and perhaps it ultimately ceases to be meaningful. In order to solve the problem of the universe and our place within the universe (if it is a problem), we may have to consider a solution set that is larger than that dictated by the naturalism of science on a human scale. This is not a call for supernaturalistic explanations for scientific problems, but rather a call to expand the scope of science beyond the bounds with which we are currently comfortable.

. . . . .

Grand Strategy Annex

. . . . .

Saturday


The truncated icosahedron geometry employed for the symmetrical shockwave compression of fission implosion devices.

The simplest nuclear weapon is commonly known as a gun-type device, because it achieves critical mass by forcing together two sub-critical masses of uranium through a mechanism very much like a gun that shoots a smaller wedge-shaped sub-critical mass into a larger sub-critical mass. This was the design of the “Little Boy” Hiroshima atomic bomb. The next level of complexity in nuclear weapon design was the implosion device, which relied upon conventional explosives to symmetrically compress a larger reflector/tamper sphere of U-238 into a smaller sphere of Pu-239, with a polonium-beryllium “Urchin” initiator at the very center. The scientists of the Manhattan Project were so certain that the gun-type device would work that they didn’t even bother to test it, so the first nuclear device to be tested, and indeed the first nuclear explosion on the planet, was the Gadget device designed to be the proof of concept of the more sophisticated implosion design. It worked, and this design was used for the “Fat Man” atomic bomb dropped on Nagasaki.

These early nuclear weapon designs (conceptually familiar, but all the engineering designs are still very secret) are usually called First Generation nuclear weapons. The two-stage thermonuclear devices (fission primaries to trigger fusion secondaries, though most of the explosive yield still derives from fission) designed and tested a few years later, known as the Teller-Ulam design (and tested with the Ivy Mike device), were called Second Generation nuclear weapons. A number of ideas were floated for Third Generation nuclear weapon designs, and probably many were tested before the Nuclear Test Ban Treaty came into effect (and for all practical purposes brought an end to the rapid development of nuclear weapon design). One of the design concepts for Third Generation nuclear weapons was that of a shaped charge that could direct the energy of the explosion, rather than dissipating the blast in an omnidirectional explosion. There are also a lot of concepts for Fourth Generation nuclear weapons, though many of these ideas are on the cutting edge of technology and cannot legally be tested, so it is likely that little will come of them as long as the current test ban regime remains in place.

According to Kosta Tsipis, “Nuclear weapons designed to maximize certain of their properties and to suppress others are considered to constitute a third generation in the sense that their design goes beyond the basic, even though sophisticated, design of modern thermonuclear weapons.” These are sometimes also referred to as “tailored effects.” Examples of tailored effects include enhanced radiation warheads (the “neutron bomb”), so-called “salted” nuclear weapons like the proposed cobalt bomb, electro-magnetic pulse weapons (EMP), and the X-ray laser. We will here be primarily interested in enhancing the directionality of a nuclear detonation, as in the case of the Casaba-Howitzer, shaped nuclear charges, and the X-ray laser.

What I would like to propose as a WMD is the use of multiple shaped nuclear charges directing their blast at a common center. This is like a macroscopic implementation of the implosion employed in first generation nuclear weapons. The implosion in the Gadget device and the Fat Man bomb employed 32 simultaneous high explosive charges, arranged according to the geometry of a truncated icosahedron, which would result in a nicely symmetrical convergence on the central trigger without having to scale up to an unrealistic number of high explosive charges for an even more evenly symmetrical implosion. (The actual engineering is a bit more complicated, as a combination of rapid explosions and slower explosions was needed for the optimal convergence of the implosion on the trigger.) This could be employed at a macroscopic scale by directional nuclear charges arranged around a central target. I call this a meta-implosion device. In a “conventional” nuclear strike, the explosive force is dissipated outward from ground zero. With a meta-implosion device, the explosive force would be focused inward toward ground zero, which would experience a sharply higher blast pressure than elsewhere as a result of the constructive interference of multiple converging shockwaves.
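Why 32 charges? A truncated icosahedron (the familiar soccer-ball solid) has 12 pentagonal and 20 hexagonal faces, and placing one charge on each face center gives a near-uniform distribution of 32 detonation points around a sphere. The face, edge, and vertex counts below are pure combinatorics, checked against Euler’s polyhedron formula; this is a small sketch of the geometry, not of any engineering detail.

```python
# A truncated icosahedron: 12 pentagons + 20 hexagons = 32 faces,
# one explosive lens per face in the historical implosion design.
pentagons, hexagons = 12, 20

faces = pentagons + hexagons                    # 32 charge positions
edges = (pentagons * 5 + hexagons * 6) // 2     # each edge shared by 2 faces
vertices = (pentagons * 5 + hexagons * 6) // 3  # 3 faces meet at each vertex

# Euler's polyhedron formula V - E + F = 2 confirms the counts are consistent.
assert vertices - edges + faces == 2

print(f"{faces} faces ({pentagons} pentagons + {hexagons} hexagons), "
      f"{edges} edges, {vertices} vertices")
```

The same combinatorial check explains why the design stops at 32: the truncated icosahedron is the most face-rich Archimedean solid whose faces are all nearly the same size, so adding more charges buys little additional symmetry.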

A partially assembled implosion device of a first generation nuclear weapon.

The reader may immediately think of the Casaba-Howitzer as a similar idea, but what I am suggesting is a bit different. You can read a lot about the Casaba-Howitzer at The Nuclear Spear: Casaba Howitzer, which is contextualized in even more information on Winchell Chung’s Atomic Rockets site. If you were to surround a target with multiple Casaba-Howitzers and fire at a common center at the same time you would get something like the effect I am suggesting, but this would require far more infrastructure. What I am suggesting could be assembled as a deliverable weapons system engineered as an integrated package.

A cruise missile would be a good way to deliver a meta-implosion device to its target.

There are already weapon designs that release multiple bomblets near a target with each individual bomblet precision targeted (the CBU-103 Combined Effects Munition, more commonly known as a cluster bomb). This could be scaled up in a cruise missile package, so that a cruise missile approaching its target could open up and release 12 to 16 miniaturized short-range cruise missiles, which could then, by means of GPS or similar precision location technology, arrange themselves around the target in a hemisphere and simultaneously detonate their directed charges toward ground zero. Both precision timing and precision location would be necessary to optimize shockwave convergence, but with technologies like atomic clocks and dual frequency GPS (and quantum positioning in the future) such performance is possible.
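The timing requirement can be illustrated with a back-of-the-envelope calculation. If the directed blasts propagate at roughly constant speed, a charge farther from ground zero must fire earlier; delaying each charge by the difference between the farthest range and its own range, divided by the propagation speed, makes all fronts arrive at the common center at the same instant. The positions and propagation speed below are made-up illustrative numbers, not weapons data.

```python
import math

def trigger_delays(positions, target, v):
    """Per-charge firing delays (seconds) for simultaneous arrival at target.

    Assumes a constant propagation speed v for every directed blast,
    which is a simplifying assumption for illustration only.
    """
    dists = [math.dist(p, target) for p in positions]
    d_max = max(dists)
    # The farthest charge fires immediately (zero delay); nearer ones wait.
    return [(d_max - d) / v for d in dists]

# Four hypothetical charges ringed around a target at the origin,
# at ranges of 100-160 m, with an assumed 3000 m/s propagation speed.
charges = [(100, 0, 0), (0, 130, 0), (-160, 0, 0), (0, -100, 0)]
delays = trigger_delays(charges, (0, 0, 0), v=3000.0)
print([round(t * 1000, 3) for t in delays])  # delays in milliseconds
```

With these illustrative numbers the required delays are on the order of tens of milliseconds, while the tolerance for clean constructive interference would be far tighter, which is why atomic-clock timing and precision positioning are the enabling technologies here.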

A meta-implosion device could also be delivered by drones flown out of a van.

A similar effect could be obtained, albeit a bit more slowly but also more quietly and more subtly, with the use of drones. A dozen or so drones could be released either from the air or launched from the ground, arrange themselves around the target, and then detonate simultaneously. Where a target can be approached more easily with a small truck, even an ordinary delivery van (perhaps disguised as some local business), than with a cruise missile, which could set off air defense warnings, this would be the preferred method of deployment, although the drones would have to be relatively large because they would have to carry a miniaturized nuclear weapon, precision timing, and precision location devices. There are a few commercially available drones today that can lift 20 kg, which is probably just about the lower limit of a miniaturized package such as I have described.

The most elegant deployment of a meta-implosion device would be against a hardened target in exoatmospheric space. Currently there isn’t anything flying that is large enough or hardened enough to merit being the target of such a device, but in a future war in space a meta-implosion device could be deployed against a hard target with a full spherical implosion converging on the target. For ground-based targets, a hemisphere with the target at the center would be the preferred deployment.

In the past, a nation-state pursuing a counter-force strategy, i.e., a nuclear strategy based on eliminating the enemy’s nuclear forces, hence the targeting of nuclear missiles, had to employ very large and very destructive bombs because nuclear missile silos were hardened to survive all but a near miss with a nuclear weapon. Now the age of land-based ICBMs is over for the most advanced industrialized nation-states, and there is no longer any reason to build silos for land-based missiles, therefore no reason to pursue this particular kind of counter-force strategy. SLBMs and ALCMs are now sufficiently sophisticated that they are more accurate than the most accurate land-based ICBMs of the past, and they are far more difficult to find and to destroy because they are small and mobile and can be hidden.

However, hardened, high-value targets like the missile silos of the past would be precisely the kind of target one would employ a meta-implosion device to destroy. And while ICBM silos are no longer relevant, there are plenty of hardened, high-value targets out there. A decapitation strike against a leadership target where the location of the bunker is known (as in the case of Cheyenne Mountain Complex or Kosvinsky Kamen) is such an example.

This is, of course, what “bunker buster” bombs like the B61 were designed to do. However, earth penetrating bunker buster bombs, while less indiscriminate than above ground bursts, are still nuclear explosions in the ground that release their energy in an omnidirectional burst (or perhaps along an axis). The advantage of a meta-implosion device would be that the focused blast pressures would collapse any weak spots in a target area, and, when you’re talking about a subterranean bunker, even an armored door would constitute a weak spot.

I haven’t seen any discussion anywhere of a device such as I have described above, though I have no doubt that the idea has been studied already.

. . . . .

Tuesday


It has long been my impression that one of the unacknowledged problems of industrialized civilization is that the individuals who ascend to the highest positions of influence and political power are the worst kind of people — the kind of people whom, if you met them on a personal basis, you would thereafter seek to avoid. I have not heretofore attempted an exposition of this impression because I could not express it concisely nor offer a causal mechanism to explain it. Moreover, my impression is merely anecdotal, and might be better explained as the sour grapes of someone not successful in the context of contemporary social institutions. Nevertheless, I cannot shake the feeling that most politicians and celebrities (the people with power in our society) are unpleasant, self-serving social climbers whose only redeeming quality is that, usually, they are not openly malevolent.

Having recently learned the meaning of the term “the managerial state” (also known as anarcho-tyranny, but I will use the aforementioned term) I find that I can use this concept to give an exposition of the idea that industrialized civilization promotes the worst kind of person into positions of influence and authority. Intuitively we can understand that the managerial state is a bureaucratic institution characterized by technocratic management; the anarcho-tyranny part comes into the equation because the managerial state, through selective enforcement of the laws, aids and abets criminality while coming down hardest on law-abiding citizens. If this sounds strange and improbable to you, I ask you to search your memory, and I would be surprised if you cannot think of someone whose life was destroyed, or nearly destroyed, due to some infraction that was enforced as though it were an instance of exemplary justice, even while obvious criminals were allowed to go unmolested because of their wealth, their influence, or some other “mitigating” factor. If you have never heard of any such episode, then you are fortunate. I suspect that most people have experienced these injustices, if only obliquely.

What kind of person — what kind of bureaucratic manager — would thrive in the managerial state? Here we have a ready answer, familiar to us since classical antiquity: Plato’s perfectly unjust man. In an earlier post, Experimenting with Thought Experiments, I discussed the section of Plato’s Republic in which he contrasts the perfectly just man — who has the reality of justice but the appearance of injustice — and the perfectly unjust man — who has the reality of injustice but the appearance of justice. Thus the Platonic metaphysics of appearance and reality, which has shaped all subsequent western metaphysics, is invoked in order to provide an exposition of moral virtue and vice in a social context.

The perfectly unjust man would thrive in the role of apparently virtuous manager of the state while in reality exclusively serving the interests of the managerial class, who retain their authority by doing the bare minimum in terms of maintaining the institutions of society while turning the full force of their talents and interest to the greater glory of the technocratic elite.

The existence of the managerial state, then, engenders the conditions in which the perfectly unjust man can thrive, as though a petri dish were specially prepared to cultivate this species. The managerial state, in turn, appears in industrialized civilization partly because of the technocratic demands placed upon the leadership (charismatic and dynastic authority are likely no longer sufficient for the management of the industrialized state), and partly because the increasingly scientific character of society encourages the rationalization of institutions, which in turn selects for an early maturation of the institutions of industrialized society.

I have here painted a very unflattering portrait of contemporary political power, but that I would do so starting from the premise that industrialized civilization raises the worst people to the top should come as no surprise. For a countervailing view we might take the many recent pronouncements of Jordan Peterson. I wrote a post about Peterson when he was first coming into wide public recognition, Why Freedom of Inquiry in Academia Matters to an Autodidact. Since that time Peterson has rocketed to notoriety, and has had many opportunities to present his views.

One of the themes that Peterson returns to time and again (I’ve listened to a lot of his lectures, though by no means all of them) is that the hierarchies that characterize western civilization are hierarchies of competence and not hierarchies of tyranny established through the naked exercise of power. The proof of this is that our society functions rather well: water comes out of the tap, electricity is there when we turn on the switch, and our institutions are probably less corrupt than the analogous institutions of other societies. I more-or-less agree with Peterson on this, except that I regard our hierarchies as more of a mixed bag. We have some hierarchies of competence, and some hierarchies that have more to do with birth, wealth, family, and, worst of all, dishonesty and cunning.

In traditional western civilization — by which I mean western civilization prior to the three revolutions of science, popular sovereignty, and industrialization — power was secured either through the naked exercise of force, or through dynastic pan-generational inheritance. In a dynastic political system (like that of contemporary North Korea), you get a mixed bag: some generations get good kings and some generations get lousy kings. Given the knowledge that the heir to the throne was not always the best leader, feudal systems developed a wide distribution of power and a battery of alternative institutions through which power could be exercised in the event of a weak, stupid, insane, or feckless king.

The feudal system called itself “aristocracy,” which literally means “rule by the best,” and this is precisely what is meant by hierarchies of competence: rule by the best. But the people who actually lived in feudal systems knew that the best were not necessarily or inevitably at the apex of the political system, and so they prepared themselves with institutions that could survive poor kingship. Each generation had the luck of the draw in terms of the king they got, but since this was a known weakness of the system, it could be mitigated to some degree, and it was.

One of the problems of industrialized civilization has been the simultaneous and uncritical embrace of popular sovereignty, which is at least as easily manipulable as feudal institutions, and arguably is more manipulable than feudalism. By throwing ourselves headlong into popular sovereignty, and, at least in the case of the US, slowly dismantling those institutions that once insulated us from the brunt of popular politics (thus accelerating the progress of popular sovereignty), we have few of the protections that feudalism had built into its institutions to limit the reach of incompetent leadership.

The perfectly unjust man is no analogue of an incompetent king: he is good at what he does. Plato called the perfectly unjust man, “great in his injustice.” Just so, the perfectly unjust man is a competent manager of the managerial state, but being a competent manager of a managerial state is not the ideal of democracy. And yet democracy, the more it seeks an illusory perfect egalitarianism, and deconstructs the last of the institutions that limit and balance power (for even the unlimited exercise of popular sovereignty is a dystopian tyranny), the more the managerial state comes into the possession of those temperamentally constituted to thrive within its institutions: the perfectly unjust men. This is my response to hierarchies of competence: yes, perfectly unjust men are competent, but they are not the ideal of leadership for civilization. They may even be the antithesis of the leadership that civilization needs. And now they have the stranglehold on power and will not be forced out without a struggle.

. . . . .

Saturday


The famous Virgin of the Navigators by Alejo Fernández, on display in Sevilla, is, like the Tordesillas meridian, one of the great ideological expressions of the Age of Discovery.

Some time ago I wrote Modernism without Industrialism: Europe 1500-1800, in which I identified the period from approximately 1500 to 1800 in western civilization as a distinctive kind of civilization, which I have in subsequent posts simply called “modernism without industrialism.” For present purposes it doesn’t matter whether this is a distinctive kind of civilization, co-equal with other distinctive kinds of civilization, or whether it is a developmental stage in a larger civilization under which it is subsumed. This is an interesting question in the theory of civilization, but it will not bear upon what I have to say here. Whether we take the period 1500-1800 as a stage in the larger development of western civilization, or as a civilization in its own right, it is a period that can be described in terms of properties that do not apply to other periods.

The first two of what I have elsewhere called the “three revolutions” — the scientific revolution, the political revolutions, and the industrial revolution — transformed this period in novel ways, and the last of these three revolutions terminates the period decisively, as a new way of life resulting from industrialization pre-empts the interesting social experiment of modernism without industrialism. Without the preemption of industrialization, this experiment might have continued, and the world today would be a different place than the world we know — it would be a counterfactual planetary high-level equilibrium trap.

There is another great revolution that occurred in this period, and that is the Age of Discovery, when explorers and merchants and conquerors set out from Europe and circled the planet for the first time since our Paleolithic ancestors settled the entire planet without knowing what they had done. That the explorers and merchants and conquerors of the Age of Discovery knew what they were doing is evidenced by the maps they made and what they wrote about their experiences. They discovered that humanity is one species on one planet, and they knew that this is what they had discovered, though assimilating that knowledge was another matter; we still struggle with this knowledge today.

The Age of Discovery bounds the beginning of this period as the industrial revolution bounds its ending, and, to a large extent, defines the period, since exploration and discovery initiated humanity’s recognition of itself as a whole and of the planet as a whole. Europe had been building out a shipping capacity in excess of its internal needs since the late Middle Ages, and it was the exaptation of this shipping infrastructure with its attendant technologies and expertise that made the Age of Discovery possible. Once the proof of concept was provided by Columbus, Magellan, and the other early explorers, initiating the Columbian Exchange, the planet opened to global commerce with astonishing rapidity.

What this global transportation infrastructure meant was that this distinctive period of civilization might be called the First Planetary Civilization, since throughout this period trade and communications take place on a global scale, and this in turn makes global empires possible for the first time. There were, of course, many survivals from the medieval period that characterize this first planetary civilization, but there were perhaps as many novel features as well. This was a civilization in possession of science, though science at a small scale, not yet exploited for human purposes to the extent that science today is exploited. This was a civilization in which merchants and industries had a distinctive place, and the political system was no longer dominated by rural manorial estates and their local feudal lords. Planetary-scale concerns now shaped the policies of increasingly centralized regimes that would only become more centralized as the period drew to a close in the time of the Sun King. And while political regimes were marked by increasing centralization and the rationalization of institutions, it was also a time of great lawlessness, as the expansion of European civilization into the western hemisphere was also an age of piracy.

Since the industrial revolution we have also had a planetary civilization, but the planetary civilization that began to take form in the wake of the industrial revolution is distinct from the first planetary civilization that characterized the period from 1500 to 1800. The planetary civilization we have been building since the industrial revolution might be called the second planetary civilization, and it has been marked by the spread of popular sovereignty and Enlightenment ideals (and, I would argue, the gradual adoption of the Enlightenment project as the central project of planetary civilization), the mechanization and then the electrification of the global transportation and communications network (further accelerating the rapidity of commerce), the planetary propagation of cultural and social influences, and the rise of commerce and industry to a position rivaling that of nation-states. Merchants no longer merely have a place in civilization, but they often dictate to others the place that they will hold in the social order.

Are these successive first and second planetary civilizations accidents of terrestrial history, which could be and probably are different wherever other civilizations are to be found in the universe (if they are to be found)? Or are these first and second planetary civilizations sufficiently distinctive as kinds of civilization that they ought to be present in any taxonomy of civilizations because they are likely to be exemplified wherever there are worlds with civilizations? One of the ways to approach the problem mentioned above — whether the First Planetary Civilization of 1500-1800 is a kind of civilization in its own right, or a developmental stage in a larger formation of civilization — would be to identify as a distinctive kind of civilization any formation of civilization that can be formalized to the point of potential applicability to any civilization anywhere, whether on Earth or elsewhere. In this way, a scientific theory of civilization that is sufficiently comprehensive to address any and all civilizations can shed light on the particular problems of human civilization, even if that was not the motivation for formulating a science of civilization.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

project astrolabe logo smaller

. . . . .

Sunday


Chinese politics was dominated by Mao Zedong from 1949 to 1976, and for more than a decade before that if we count the period from the Long March forward. Mao was effectively President for Life of China, though he wasn’t called that. However, he was called “The Red Emperor.” After the chaos of the Cultural Revolution some effort was made to regularize the political system after Mao’s death, and, to a certain extent, China managed to present itself to the world as a “normal” nation-state under the rule of law (not under military rule, or in the grip of a warlord or a strongman) and with a political succession that, while entirely internal to the communist party, seemed to follow certain rules. There was a semi-orderly succession process from Deng Xiaoping to Jiang Zemin to Hu Jintao to Xi Jinping.

This façade of orderly political succession was maintained throughout a period of spectacular economic growth for China. Given that economic growth at this pace can result in extreme social dislocation, it reflected well on the Communist Party of China’s firm grasp on power that it was able to preside over this orderly succession of political power under economic and social conditions that would prove challenging even to a stable and well-established political system. In consequence, the CPC seemed to be a source of strength, stability, and order for China at a time when much else was in flux.

The apparent solidity of the CPC and its own internal mechanisms for orderly political succession have now been revealed to be illusory. It has been clear for some time that Xi Jinping is the strongest political figure in China since Deng Xiaoping, but we now must see him as the strongest figure in China since Mao Zedong. This month, the CPC eliminated term limits for the president and vice president, re-appointing Xi Jinping as president with no term limit. What this means is that a sufficiently powerful individual can bend the CPC to his will, so that power is vested in the individual rather than in the party or its offices. And given that the CPC is the central political institution of China, that it can bend to the will of one man points to the weakness of CPC institutions. In other words, China is much more vulnerable than it appears on the surface.

To a remarkable extent the western press have given Xi uncritical coverage during his rise to power. A few China specialists have discussed how the elements of the Shanghai clique were pushed aside in Xi’s rise to power, and some of the internal machinations of the state machinery, but much less than the issue deserves, given that China is now the second largest economy in the world and the largest nation-state on the planet in terms of population. Most notable of all is the uncritical coverage of Xi’s “anti-corruption” drive, which has given Xi the moral high ground in cleaning house and consolidating power. I have not read a single account in the western press that has observed that the anti-corruption efforts in China have left Xi’s inner circle entirely untouched. But who is going to take a stand in favor of corruption? Consolidating power by punishing rivals for corruption is a winning strategy.

Now that we know that China is a nation-state secondarily, and primarily the domain of a strongman, all that follows will depend on Xi himself. If Xi cares about the Chinese people and their welfare, he will use his power to strengthen the institutions of the country and will make it possible for an orderly political succession after he leaves power. But Xi could just as easily transform China into the largest kleptocracy on the planet, or into a tyranny, or any number of suboptimal outcomes. The stakes are high. The lives of more than a billion persons are in play. Much of the world’s manufacturing is sourced from China; rare is the supply chain that does not incorporate China at some point.

Even if Xi proves to be an honest and competent leader, China’s position in the world economic system is placed at risk merely by the revelation of the weakness of its institutions. China has put a lot of effort into trying to convince western businesses that China is a stable place to do business, where assets would not be arbitrarily expropriated and international legal norms would be respected. There is no reason to believe that this will suddenly change, but the weakness of the CPC is (or ought to be) a red flag for every business operating in China. The economy is stable at present, but that could change with a single executive decision on the part of Xi.


Saturday


Ribera painted several imaginary portraits of ancient philosophers.

Protagoras of Abdera, by Jusepe de Ribera

In the spirit of my Extrapolating Plato’s Definition of Being, in which I took a short passage from Plato and extrapolated it beyond its originally intended scope, I would like to take a famous line from Protagoras and also extrapolate this beyond its originally intended scope. The passage from Protagoras I have in mind is his most famous bon mot:

“Man is the measure of all things, of the things that are, that they are, and of the things that are not, that they are not.”

…and in the original Greek…

“πάντων χρημάτων μέτρον ἐστὶν ἄνθρωπος, τῶν μὲν ὄντων ὡς ἔστιν, τῶν δὲ οὐκ ὄντων ὡς οὐκ ἔστιν”

Presocratic scholarship has focused on the relativism of Protagoras’ μέτρον, especially in comparison to the strong realism of Plato, but I don’t take the two to be mutually exclusive. On the contrary, I think we can better understand Plato through Protagoras and Protagoras through Plato.

Firstly, the Protagorean dictum reveals at once both the inherent naturalism of Greek philosophy, which is the spirit that continues to motivate the western philosophical tradition (which Bertrand Russell once commented is all, essentially, Greek philosophy), and the ontologizing nature of Greek thought, which is another persistent theme of western philosophy, though less often noticed than the naturalistic theme. Plato, despite his otherworldly realism, is part of this inherent naturalism of Greek philosophy, which in our own day has become explicitly naturalistic. Indeed, western philosophy since ancient Greece might be characterized as the convergence upon a fully naturalistic conception of the world, though this has been a long and bumpy road.

The naturalism of Greek thought, in turn, points to the proto-scientific character of Greek philosophy. The closest approximation to modern scientific thought prior to the scientific revolution is to be found in works such as Archimedes’ statics and Eratosthenes of Cyrene’s estimate of the circumference of the earth. If these examples are not already fully scientific inquiries, they are at least proto-science, from which a fully scientific method might have emerged under different historical conditions.

Plato and Protagoras were both guilty of a certain degree of mysticism, but strong traces of the scientific naturalism of Greek thought are expressed in their work. Protagoras’ μέτρον in particular can be understood as an early step in the direction of quantificational concepts. Quantification is central to scientific thought (in my podcast The Cosmic Archipelago, Part II, I offered a variation on the familiar Cartesian theme of cogito, ergo sum, suggesting that, from the perspective of science, we could say I measure, therefore I am), and when we think of quantification we think of measurement in the sense of gradations on a standard scale. However, the most fundamental form of quantification is revealed by counting, and counting is essentially the determination of whether something exists or not. Thus the Protagorean μέτρον — specifically, the things that are, that they are, and the things that are not, that they are not — is a quantificational schema for determining existence relative to a human observer. Protagoras’ μέτρον is a postulate of counting, and without counting there would be no mathematicized natural science.

All scientific knowledge as we know it is human scientific knowledge, and all of it is therefore anthropocentric in a way that is not necessarily a distortion. For human beings to have knowledge of the world in which they find themselves, they must have knowledge that the human mind can assimilate. Our epistemic concepts are the framework we have erected in order to make sense of the world, and these concepts are human creations. That does not mean that they are wrong, even if they have been frequently misleading. The Pyrrhonian skeptic exploits this human, all-too-human weakness in our knowledge, claiming that because our concepts are imperfect, no knowledge whatsoever is possible. This is a strawman argument. Knowledge is possible, but it is human knowledge. Protagoras made this explicit. (This is one of the themes of my Cosmic Archipelago series.)

Taking Plato and Protagoras together — that is, taking Plato’s definition of being and Protagoras’ doctrine of measure — we probably come closer to the originally intended meaning of both Plato and Protagoras than if we treat them in isolation, a fortiori if we treat them as antagonists. Plato’s definition of being — the power to affect or be affected — and Protagoras’ dictum — that man is the measure of all things, which we can take to mean that quantification begins with a human observer — naturally coincide when the power to affect or be affected is understood relative to the human power to affect or be affected.

Since human knowledge begins with a human observer and human experience, knowledge necessarily also follows from that which affects a human being or that which a human being can affect. The role of experimentation in science since the scientific revolution takes this ontological interaction of affecting and being affected, makes it systematic, and derives all natural knowledge from this principle. Human beings formulate scientific experiments, and in so doing affect the world in building an experimental apparatus and running the experiment. The experiment, in turn, affects human beings as the scientist observes the experiment running and records how it affects him, i.e., what he observes in the world as a result of his intervention in the course of events.

Plato and Protagoras taken together as establishing an initial ontological basis for quantification lay the metaphysical groundwork for scientific naturalism, even if neither philosopher was a scientific naturalist in the strict sense.

. . . . .

I have previously discussed Protagoras’ μέτρον in Ontological Ruminations: Six Protagorean Propositions on the Nature of Man and the World and A Non-Constructive World.


Biological Bias

3 March 2018

Saturday


What does it mean to be a biological being? It means, among other things, that one sees the world from a biological perspective, thinks in terms of concepts amenable to a biological brain, understands oneself and one’s species in its biological context, which is the biosphere of our homeworld, and that one persists in a mode of being distinctive to biological beings (which mode of being we call life). To be a biological being is to be related to the world through one’s biology; one has biological desires, biological aversions, biological imperatives, biological expectations, and biological intentions. Human beings are biological beings, and so are subject to all of these conditions of biological being.

When we think in terms of human bias — and we are subject to many biases as human beings — we usually focus on exclusively human biases, our anthropocentrism, our anthropic bias, but we are also subject to biases that follow from the other ontological classes of beings of which we are members. We are human beings, but we are also cognitive beings (i.e., intelligent agents), linguistic beings, mammalian beings, biological beings, physical beings, Stelliferous Era beings, and so on. This litany may be endless; whether or not we are aware of it, we may belong to an infinitude of ontological classes in virtue of the kind of beings that we are.

Another example of a bias to which human beings are subject but which is not exclusively anthropic, is what I have called terrestrial bias. Some time ago in Terrestrial Bias: Thought Experiments I asked, “…is there, among human beings, any sense of identification with the life of Earth? Is there a terrestrial bias, or will there be a terrestrial bias when we are able to compare our response to terrestrial life to our response to extraterrestrial life?” As I write this it occurs to me that a distinction can be made between planetary bias, to which any being of planetary endemism would be subject, and terrestrial bias understood as a bias specific to Earth, to which only life on Earth would be subject. In making this distinction, we understand that terrestrial bias is a special case of planetary bias, which latter is the more comprehensive concept.

Similarly, anthropic bias is a special case of the more comprehensive concept of intelligent agent bias. Again, we can distinguish between intelligent agent bias and anthropic bias, with intelligent agent bias being the more comprehensive concept under which anthropic bias falls. However, intelligent agents could also include artificial agents, who would be peers of human intelligent agents in respect of intelligence, but which would not share our biological bias. The many biases, then, which attend and inform human cognition, are nested within more comprehensive biases, as well as overlapping with the biases of other agents that might potentially exist and which would share some of our biases but which would not fall under exactly the same more comprehensive concepts. In Wittgensteinian terms, there is a complicated network of biases that overlap and intersect (cf. Philosophical Investigations, sec. 66); these biases correspond to a complicated network of ontological classes that overlap and intersect.

Our biological biases overlap and intersect with our other biases, such as our biases as the result of being human (anthropic bias) or our biases in virtue of being composed of matter (material or physical bias). Biological bias occupies a point midway between these two ontological classes. Our anthropic bias is exclusive to human beings, but we share our biological bias with every living thing on Earth, and perhaps with living things elsewhere in the cosmos, while we share our material bias much more widely with dust and gas and stars, except that these latter beings, not being intelligent agents, cannot exercise judgment or act as agents, so that their bias can only be manifested passively. One might well characterize the Platonic definition of being — the capacity to affect or be affected — as the passive exercise of bias, with each class of beings affecting and being affected by other beings of the same class as peers.

I have sought to exhibit and disentangle the overlapping and intersecting of biological biases in a number of posts related to biophilia and biophobia, including:

Biocentrism and Biophilia

The Biocentric Thesis

The Scope of Biophilia

Not all biases are catastrophic distortions of reasoning. In Less than Cognitive Bias I made a distinction between anthropic biases that characterize the human condition without necessarily adversely affecting rational judgment, and anthropic biases that do undermine our ability to reason rationally. And in The Human Overview I sketched out the complexity of ordinary human communication, which is dense in subtle biases, some of which compromise our rationality, but many of which are crucial to our ability to rapidly reason about our circumstances — a skill with high survival value, a skill at which human beings excel, and one that will not soon be modeled by artificial intelligence on account of its subtlety. A tripartite distinction can be made, then, among biases that compromise our reason, biases that are neutral in regard to our ability to reason, and biases that augment our ability to reason.

Our biological biases coincide to a large extent with our evolutionary psychology, and, in so far as our evolutionary psychology enabled us to survive in our environment of evolutionary adaptedness, our biological biases augment our ability to reason cogently and to act effectively in biological contexts — though only in what might be called peer biological contexts, as far as our particular scale of biological individuality allows us to identify with other biological individuals as peers. Our peer biological biases do not allow us to interact effectively at the level of the microbiome or at the level of the biosphere, with the result that considerable scientific effort has been required for us to understand and to interact effectively at these biological scales.

A similar applicability of bias may be true more widely of our other biases, which help us in some circumstances while hurting us in other circumstances. Certainly our anthropic biases have helped us to survive, and that is why we possess them in such robust forms, though they have helped us to survive as a species of planetary endemism. In the event of humanity breaking out of our homeworld as a spacefaring civilization, our anthropic, homeworld, and planetary endemism biases may not serve us as well in cosmological contexts. However, we know what to do about this. The cultivation of science and rigorous reasoning has allowed us to transcend many of our biases without actually losing our biases. Instead of viewing this as a human, all-too-human failure, we should think of this as a human strength: we can, when we apply ourselves, selectively transcend our biases, but when we need them, they are there for us, and they will be there for us until we actually alter ourselves biologically. Thus there is a biological “way out” from biological biases, but we might want to think twice before pursuing this way out, as our biological biases may well prove to be an asset (and perhaps an asset in unexpected, instinctive ways) when we eventually explore other biospheres and encounter another form of biology.

What Carl Sagan called the “deprovincialization” of biology may also take place at the level of human evolutionary psychology. If so, we shouldn’t desire to transcend or eliminate our biological biases so much as we should desire to augment and expand them in order to overcome what we will eventually learn about our terrestrial and homeworld biases from the biology of other worlds.


Friday


Herodotus, the Father of History

In Rational Reconstructions of Time I described a series of intellectual developments in historiography in which big history appeared in the penultimate position as a recent historiographical innovation. There is another sense, however, in which there have always been big histories — that is to say, histories that take us from the origins of our world through the present and into the future — and we can identify a big history that represents many of the major stages through which western thought has passed. In what follows I will focus on western history, in so far as any regional focus is relevant, as “history” is a peculiarly western idea, originating in classical antiquity among the Greeks, and with its later innovations all emerging from western thought.

Saint Augustine, author of City of God

Shortly after Christianity emerged, a Christian big history was formulated across many works by many different authors, but I will focus on Saint Augustine’s City of God. Christianity takes up the mythological material of the earlier seriation of western civilization and codifies it in the light of the new faith. Augustine presented an over-arching vision of human history that corresponded to the salvation history of humanity according to Christian thought. Some scholars have argued that western Christianity is distinctive in its insistence upon the historicity of its salvation history. If this is true, then Augustine’s City of God is Exhibit “A” in the development of this idea, tracing the dual histories of the City of God and the City of Man, each of which punctuates the other in actual moments of historical time when the two worlds are inseparable for all their differences. Here, the world behind the world is always vividly present, and, in a Platonic way (for Augustine was a Christian Platonist), is more real than the world we take for the real world.

Immanuel Kant, author of Universal Natural History and Theory of the Heavens

The Christian vision of history we find in Saint Augustine passed through many modifications but in its essentials remained largely intact until the Enlightenment, when the combined force of the scientific revolution and political turmoil began to dissolve the institutional structures of agricultural civilization. Here we have the remarkable work of Kant, better known for his three critiques, but who also wrote his Universal Natural History and Theory of the Heavens. The idea of a universal natural history extends the idea of natural history to the whole of the cosmos, and to human endeavor as well, and more or less coincides with the contemporary conception of big history, at least in so far as the scope and character of big history is concerned. Kant deserves a place in intellectual history for this if for nothing else. In other words, despite his idealist philosophy (formulated decades after his Universal Natural History), Kant laid the foundations of a naturalistic historiography for the whole of natural history. Since then, we have only been filling in the blanks.

Marie Jean Antoine Nicolas de Caritat, marquis de Condorcet, author of Sketch for a Historical Picture of the Progress of the Human Spirit

The Marquis de Condorcet took this naturalistic conception of universal history and interpreted it within the philosophical context of the Encyclopédistes and the French Philosophes (being far more empiricist and materialist than Kant), in writing his Esquisse d’un tableau historique des progrès de l’esprit humain (Sketch for a Historical Picture of the Progress of the Human Mind), in ten books, the tenth book of which explicitly concerns itself with the future progress of the human mind. I may be wrong about this, but I believe this to be the first sustained effort at historiographical futurism in western thought. And Condorcet wrote this work while on the run from French revolutionary forces, having been branded a traitor by the revolution he had served. That Condorcet wrote his big history of progress and optimism while hiding from the law is a remarkable testimony to both the man and the idea to which he bore witness.

Johann Gottfried von Herder, author of Reflections on the Philosophy of History of Mankind

After the rationalism of the Enlightenment, European intellectual history took a sharp turn in another direction, and it was romanticism that was the order of the day. Kant’s younger contemporary, Johann Gottfried Herder, wrote his Ideen zur Philosophie der Geschichte der Menschheit (Ideas upon Philosophy and the History of Mankind, or Reflections on the Philosophy of History of Mankind, or any of the other translations of the title), as well as several essays on related themes (cf. the essays, “How Philosophy Can Become More Universal and Useful for the Benefit of the People” and “This Too a Philosophy of History for the Formation of Humanity”), at this time. In some ways, Herder’s romantic big history closely resembles the big histories of today, as he begins with what was known of the universe — the best science of the time, as it were — though he continues on in a way to justify regional nationalistic histories, which is in stark contrast to the big history of our time. We could learn from Herder on this point, if only we could be truly scientific in our objectivity and set aside the ideological conflicts that have arisen from nationalistic conceptions of history, which still today inform perspectives in historiography.

Otto Neurath, author of Foundations of the Social Sciences

In a paragraph that I have previously quoted in Scientific Metaphysics and Big History there is a plan for a positivist big history as conceived by Otto Neurath:

“…we may look at all sciences as dovetailed to such a degree that we may regard them as parts of one science which deals with stars, Milky Ways, earth, plants, animals, human beings, forests, natural regions, tribes, and nations — in short, a comprehensive cosmic history would be the result of such an agglomeration… Cosmic history would, as far as we are using a Universal Jargon throughout all branches of research, contain the same statements as our unified science. The language of our Encyclopedia may, therefore, be regarded as a typical language of history. There is no conflict between physicalism and this program of cosmic history.”

Otto Neurath, Foundations of the Social Sciences, Chicago and London: The University of Chicago Press, 1970 (originally published 1944), p. 9

To my knowledge, no one wrote this positivist big history, but it could have been written, and perhaps it should have been written. I can imagine an ambitious but eccentric scholar completely immersing himself or herself in the intellectual milieu of early twentieth century logical positivism and logical empiricism, and eventually coming to write, ex post facto, the positivist big history imagined by Neurath but not at that time executed. One might think of such an effort as a truly Quixotic quest, or as the fulfillment of a tradition of writing big histories on the basis of current philosophical thought.

From this thought experiment in the ex post facto writing of a history not written in its own time we can make an additional leap. I have noted elsewhere (The Cosmic Archipelago, Part III: Reconstructing the History of the Observable Universe) that scientific historiography has reconstructed the histories of peoples who did not write their own histories. This could be done in a systematic way. An exhaustive scientific research program in historiography could take the form of writing the history of every time and place from the perspective of every other time and place. We would have the functional equivalent of this research program if we had a big history written from the perspective of every time and place for which a distinctive perspective can be identified, because each big history from each identifiable perspective would be a history of the world entire, and thus would subsume under it all regional and parochial histories.

I previously proposed an idea of a similarly exhaustive historiography of the kind that could only be written once the end was known. In my Who will read the Encyclopedia Galactica? I suggested that Freeman Dyson’s eternal intelligences could busy themselves as historiographers through the coming trillions of years when the civilizations of the Stelliferous Era are no more, and there can be no more civilizations of this kind because there are no longer planets being warmed by stellar insolation, hence no more civilizations of planetary endemism.

It is a commonplace of historiographical thought that each generation must write and re-write the past for its own purposes and from its own point of view. Gibbon’s Enlightenment history of the later Roman Empire is distinct in temperament and outlook from George Ostrogorsky’s History of the Byzantine State. While an advanced intelligence in the post-Stelliferous Era would want to bring its own perspective to the histories of the civilizations of the Stelliferous Era, it would also want a complete “internal” account of these civilizations, in the spirit of thought experiments in writing histories that could have or should have been written during particular periods, but which, for one reason or another, never were written. If we imagine eternal intelligences (at least while sufficient energy remains in the universe) capable of running detailed simulations of the past, this could be a source of the immersive scholarship that would make it possible to write the unwritten big histories of ages that produced a distinctive philosophical perspective, but which did not produce a historian (or the idea of a big history) that could execute the idea in historical form.

There is a sense in which these potentially vast unwritten histories, the unactualized rivals to Gibbon’s Decline and Fall of the Roman Empire, are like the great unbuilt buildings, conceived and sketched by architects, but for which there was neither the interest nor the wherewithal to build them. I am thinking, above all, of Étienne-Louis Boullée’s Cenotaph for Isaac Newton, but I could just as well cite the unbuilt cities of Antonio Sant’Elia, the skyscraper designed by Antonio Gaudí, or Frank Lloyd Wright’s mile-high skyscraper (cf. Planners and their Cities, in which I discuss other great unbuilt projects, such as Le Corbusier’s Voisin Plan for Paris and Wright’s Broadacre City). Just as I have here imagined unwritten histories eventually written, so too I have imagined these great unbuilt buildings someday built. Specifically, I have suggested that a future human civilization might retain its connection to the terrestrial past without duplicating the past by building structures proposed for Earth but never built on Earth.

History is an architecture of the past. We construct a history for ourselves, and then we inhabit it. If we don’t construct our own history, someone else will construct our history for us, and then we live in the intellectual equivalent of The Projects, trying to make a home for ourselves in someone else’s vision of our past. It is not likely that we will feel entirely comfortable within a past conceived by another who does not share our philosophical presuppositions.

From the perspective of big history, and from the perspective of what I call formal historiography, history is also an architecture of the future, which we inhabit with our hopes and fears and expectations and intentions of the future. And indeed we might think of big history as a particular kind of architecture — a bridge that we build between the past and the future. In this way, we can understand why and how most ages have written big histories for themselves out of the need to bridge past and future, between which the present is suspended.

. . . . .

. . . . .

Studies in Grand Historiography

1. The Science of Time

2. Addendum on Big History as the Science of Time

3. The Epistemic Overview Effect

4. 2014 IBHA Conference Day 1

5. 2014 IBHA Conference Day 2

6. 2014 IBHA Conference Day 3

7. Big History and Historiography

8. Big History and Scientific Historiography

9. Philosophy for Industrial-Technological Civilization

10. Is it possible to specialize in the big picture?

11. Rational Reconstructions of Time

12. History in an Extended Sense

13. Scientific Metaphysics and Big History

14. Copernican Historiography

. . . . .

Grand Strategy Annex

. . . . .


The Snapshot Effect

22 January 2018

Monday


Images will always be with us, but the age of the snapshot, understood in its cultural and technological context, now belongs to the past. Or, if not to the past, then to antiquarians and enthusiasts who will keep the technology of the snapshot alive even as it passes out of the popular mind. The snapshot inhabited the era that intervened between the age of cameras as large, bulky, specialized equipment that required a certain expertise to operate, and today’s universal presence of cameras and consequent universal availability of images — images often made available on the same electronic device that captured them. The snapshot — presumably named for the onomatopoeic mechanical sound of the camera shutter that went “Snap!” as one took the “shot” — is, then, predicated upon a particular degree of finitude: images more common and more spontaneous than a daguerreotype, but also less common and of more value than a smartphone selfie.

The most famous photographers of the snapshot era — for example, Henri Cartier-Bresson — became known for their candid and spontaneous images of ordinary life, a sort of still-life version of cinéma vérité. Never before had so much of ordinary life been captured and preserved. Painters had always been interested in genre scenes, and the early photographers who lugged around their heavy and complex gear often followed the interest and example of these painters, but these images were relatively rare. In the age of the snapshot, images of ordinary people engaged in ordinary pursuits became as ordinary as the people and the pursuits themselves.

Part of what we mean, then, when we refer casually to a “snapshot,” is this sense of an image that spontaneously captures an ordinary moment of history, without formality or pretense, but with a documentarian’s fidelity. And once the moment is past, it remains only in the snapshot, almost a random moment fixed in time, while the persons and the events and the circumstances that once came together in the confluence of the snapshot are now gone or changed beyond recognition.

It is partly this meaning that I want to tap into when I use the term “snapshot effect” to convey a particular idea about the human relationship to time and to history. Human life is long compared to the life of a mayfly, but it is quite short compared to the life of a redwood, and shorter still when measured against evolutionary, geological, or cosmological scales of time. What the individual human being experiences — what the individual sees, hears, feels, and so on — is as a snapshot in comparison to the world of which it is a fleeting image. A snapshot may or may not be representative of what it purports to represent; it may be a good likeness or a poor likeness. Because a snapshot is a moment snatched out of a continuum, we can only judge its fidelity if we compare it to a sufficient number of comparable moments taken from the same continuum. But the image often has the impact that it has precisely because it is a moment snatched out of time and stripped of all context. Often we resist a survey that would reveal the representativeness of the snapshot because to do so would be to deprive ourselves of the power of the isolated image.
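The claim that a snapshot’s fidelity can only be judged against a sufficient number of comparable moments from the same continuum can be given a toy quantitative illustration. The sketch below is my own assumption, chosen only for the purpose of the illustration (the distribution and sample sizes are arbitrary):

```python
import random

random.seed(42)

# A "continuum" of moments: a skewed process in which most values are small
# but occasional values are large (a lognormal distribution, chosen only
# for illustration).
def moment() -> float:
    return random.lognormvariate(0, 1)

TRUE_MEAN = 1.6487  # e^(1/2), the analytic mean of lognormal(0, 1)

# A single snapshot: one moment snatched out of the continuum.
snapshot = moment()

# A survey: many comparable moments taken from the same continuum.
survey = [moment() for _ in range(100_000)]
survey_mean = sum(survey) / len(survey)

# Only the survey lets us judge the snapshot's fidelity: the survey mean
# converges toward the true mean, while the lone snapshot may land anywhere.
print(f"one snapshot: {snapshot:.3f}")
print(f"survey mean:  {survey_mean:.3f}")
print(f"true mean:    {TRUE_MEAN:.3f}")
```

With a hundred thousand “moments” the survey mean settles near the analytic mean, while a single snapshot carries no such guarantee — the snapshot effect in miniature.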

I am going to use the term “snapshot effect,” then, to refer to the temporally narrow nature (and perhaps also the fragmentary nature) of human perception. We see not the world, but a snapshot of the world. We see not the object, but the side of the face that happens to be turned toward us when we glance in its direction. We hear not the narrative of a life, but a snippet of conversation that relates only a fragment of a single experience. We taste not the crop of strawberries, but the single strawberry that dissolves on our tongue, and judge the quality of the year’s produce by this experience. Even the grandest of grand views of the world are snapshots: to look into the night sky is to experience a snapshot of cosmology, and to recognize a geological formation is a snapshot of deep time. These snapshots reveal more than a casual glance, especially if they are attended by understanding, but they still exclude far more than they include.

Any rational individual, and especially any individual trained in the sciences, learns to control for the limited evidence available to us. But even as we carefully set our trap for limited evidence by rigorously controlling the conditions of our observations — observations that count toward scientific knowledge, whereas our ordinary observations do not, because they are not so controlled — we also grant ourselves license to derive generalities from these observations. Ordinary experience is but a snapshot of the world; scientific experience derived from controlled conditions is an even more fragmentary snapshot of the world.

Because of the snapshot effect, we have recourse to principles that generalize the limited evidence to which we are privileged. The cosmological principle legitimizes our extrapolation from limited evidence to the universe entire. The principle of mediocrity legitimizes our extrapolation from a possibly exceptional moment to a range of ordinary cases and the most likely course of events. Conservation principles assure us that we can generalize from our limited experience of matter and energy to the behavior of the universe entire.

A recognition of the snapshot effect has long been with us, though called by other names. It has been a truism of philosophy, equally acknowledged by diverse (if not antagonistic) schools of thought, that our experiences constitute only a small slice of the actuality of the world. To cite two examples from the twentieth century, here, to start, is Bertrand Russell:

“…let us concentrate attention on the table. To the eye it is oblong, brown and shiny, to the touch it is smooth and cool and hard; when I tap it, it gives out a wooden sound. Any one else who sees and feels and hears the table will agree with this description, so that it might seem as if no difficulty would arise; but as soon as we try to be more precise our troubles begin. Although I believe that the table is ‘really’ of the same colour all over, the parts that reflect the light look much brighter than the other parts, and some parts look white because of reflected light. I know that, if I move, the parts that reflect the light will be different, so that the apparent distribution of colours on the table will change. It follows that if several people are looking at the table at the same moment, no two of them will see exactly the same distribution of colours, because no two can see it from exactly the same point of view, and any change in the point of view makes some change in the way the light is reflected.”

Bertrand Russell, The Problems of Philosophy, Chap. I, “Appearance and Reality”

Russell represents the tradition that would become Anglo-American analytical philosophy, temperamentally and usually also theoretically disjoint from European continental philosophy, which might well be represented by Jean-Paul Sartre. Nevertheless, Sartre opens his enormous treatise Being and Nothingness with a passage that closely echoes that of Russell quoted above:

“…an object posits the series of its appearances as infinite. Thus the appearance, which is finite, indicates itself in its finitude, but at the same time in order to be grasped as an appearance-of-that-which-appears, it requires that it be surpassed toward infinity. This new opposition, the ‘finite and the infinite,’ or better, ‘the infinite in the finite,’ replaces the dualism of being and appearance. What appears in fact is only an aspect of the object, and the object is altogether in that aspect and altogether outside of it.”

Jean-Paul Sartre, Being and Nothingness, translated by Hazel Barnes, Introduction: The Pursuit of Being, “I. The Phenomenon,” p. xlvii

Both Russell and Sartre in the passages quoted above are wrestling with the ancient western metaphysical question of appearance and reality. Both recognize a multiplicity of appearances and a presumptive unity of the objects of which the appearances are a manifestation. Seen in this light, the snapshot effect is a recognition that we see only an appearance and not the reality, and this reflection in turn embeds this simple observation in a metaphysical context that has been with us since the Greeks created western philosophy.

The snapshot effect means that our experiences are appearances, but our appreciation of appearances has grown since the time of Parmenides and Plato, and we see Russell and Sartre alike struggling to make out exactly why we should attach an ontological import to appearances — snapshots, as it were — when we know that they do not exhaust reality, and sometimes betray it.

The ontology of time and of history ought to concern us as much as the ontology of objects implicitly schematized by Russell and Sartre. A snapshot of time is an appearance of time, and as an appearance it does not exhaust the reality of time. Nevertheless, we struggle to do justice to this appearance — just as we struggle to do justice to our intuitions, for, indeed, a snapshot of time is an instance of sensible intuition — because the moment abstracted from time is still an authentic manifestation of time.

The “snapshot effect,” then, will be the term I will use to refer to the fact that human perceptions are a mere snapshot, perhaps representative or perhaps not, but perceptions which we tend to treat as normative, though we rarely take the trouble even to attempt to understand the extent to which our snapshot views of the world are, in fact, normative. There is, then, not only a metaphysical aspect to the snapshot effect, but also an axiological aspect to the snapshot effect, as our valuations are likely to be tied to, if not derived from, a snapshot in this sense.

. . . . .


Thursday


The Science and Technology of Civilization

In several contexts I have observed that there is no science of civilization, i.e., that there is no science that takes civilization as its unique object of inquiry. I wrote a short paper, Manifesto for the Scientific Study of Civilization, in which I outlined how I would begin to address this deficit in our knowledge. (And I’ve written several blog posts on the same, such as The Study of Civilization as Rigorous Science, Addendum on the Study of Civilization as Rigorous Science, and The Study of Civilization as Formal Science and as Adventure Science, inter alia.) Suppose we were to undertake a science of civilization (whether by my plan or some other plan) and thus began to assemble reliable scientific knowledge of civilization. Would we be content only to understand civilization, or would we want to employ our scientific knowledge in order to effect changes, in the same way that scientific knowledge of other aspects of human life has facilitated more effective action?

Can we distinguish between a science of civilization and technologies of civilization? What is the difference between a science and a technology? One of the ways to distinguish science from technology is that science seeks knowledge, understanding, and explanation as ends in themselves, while technology employs scientific knowledge, understanding, and explanation in order to attain some end or aim. Roughly, science has no purpose beyond itself, while technology is conceived specifically for some purpose. Thus if we wish to use scientific knowledge of civilization not only to understand what civilization is, but also to shape, direct, and develop civilization in particular ways, we would need to go beyond formulating a science of civilization and also to construct technologies of civilization.

This distinction, while helpful, implies that technologies follow from science as applications of that science. This implication is misleading because technologies can appear in isolation from any science (other than the most rudimentary forms of knowledge). Epistemically, science precedes technology, but in terms of historical order, technology long preceded science. Our ancestors were already shaping stone tools millions of years ago, and by the time civilization emerged in human history and the first glimmerings of science can be discerned, technology was already well advanced. However, the greatest disruption in the history of civilization (to date) has been the industrial revolution, which marks the point in human history at which the historical order of technology followed by science was reversed by the systematic application of science to industry. Since that time the most powerful technologies have been derived by following the epistemic order: starting with science and only then, after attaining scientific knowledge, applying that knowledge to the building of technology.

Social Engineering for Preferred Outcomes

If we were to formulate a science of civilization today, it would be a science formulated in this post-industrialization historical context, and we would expect that we could converge on a body of knowledge about civilization that could then be applied reflexively to civilization as technologies in order to achieve whatever results are desired (within the scope of what is possible, assuming that there are intrinsic modal limits to civilization). At the same time, thinking of civilization in this way, and looking back over the historical record, we can easily see that there have been many technologies of civilization (i.e., technologies of civilization preceding a science of civilization) in use from the beginnings of large-scale social organization. (In an earlier post I called these social technologies, among which we can count civilization itself.)

Almost all civilizations have intervened in social outcomes in a heavy-handed way through social engineering. The Inquisition, for example, was a form of social engineering intended to limit, to contain, to punish, and to expunge religious non-conformity. While this is perhaps an extreme example of social engineering through religious institutions, since most central projects of civilizations have been religious in character, most of human history has been marked by the use of religious institutions to shape and direct social life. Or, to take an example less likely to be controversial (religious examples are controversial both because those who continue to identify with Axial Age religious faiths would see this discussion as an affront to their beliefs, and also because religiously-based social engineering could be taken to be a soft target), law can be understood as a technology of civilization. From the earliest attempts at the regulation of social life, as, for example, with the Code of Hammurabi, to the present day, systems of law have been central to shaping large-scale social organization.

The Structure of Civilization through the Lens of Social Technologies

Elsewhere I have suggested that civilization can be understood as an institution of institutions. This is a very low-resolution conception, but it has its uses. In the same spirit we can say that civilization is a social technology of social technologies, and this, too, is a very low-resolution concept. I have also proposed that a civilization can be defined as an economic infrastructure linked to an intellectual superstructure by a central project (for example in my 2017 Icarus Interstellar Starship Congress presentation, The Role of Lunar Civilization in Interstellar Buildout). This conception of civilization is a bit more articulated, as it gives specific classes of social institutions that jointly constitute the social institution of civilization, and specifies how these classes of institutions are related to each other.

In revisiting the question of civilization from the perspective of a science of civilization that might make technologies of civilization available, I have come to realize that the definition one gives of the structure of civilization will reflect (in part) the concepts employed in the analysis of civilization. What I have previously identified as the economic infrastructure and intellectual superstructure of civilization could mostly be classed under the concept of technologies of civilization, and this can be employed to present a structural model of civilization slightly different from the one I have previously presented.

As noted above, technologies are purposive, and in order to organize purposive activity it is necessary to define or otherwise specify these purposes. This is the function of the central project of a civilization. From this perspective, the structure of civilization is a central project that delineates purposes, while all the other institutions of civilization are social technologies that implement the purposes of the central project. This account of the structure of civilization does not contradict my definition of civilization in terms of superstructure and infrastructure joined by a central project, but it does give a distinctly different emphasis.
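This structural account — a central project delineating purposes, with every other institution a social technology implementing them — can be given a schematic formalization. The following sketch is purely illustrative; the class names and the example are my own assumptions, not a settled vocabulary:

```python
from dataclasses import dataclass

# A schematic formalization of the structural model sketched above: a
# civilization is a central project (which delineates purposes) together
# with a set of social technologies that implement those purposes.

@dataclass
class SocialTechnology:
    name: str
    purposes_served: list[str]

@dataclass
class CentralProject:
    purposes: list[str]

@dataclass
class Civilization:
    central_project: CentralProject
    institutions: list[SocialTechnology]

    def unimplemented_purposes(self) -> list[str]:
        """Purposes delineated by the central project that no
        institution yet implements."""
        served = {p for t in self.institutions for p in t.purposes_served}
        return [p for p in self.central_project.purposes if p not in served]

# Example: a schematic civilization with a religious central project,
# whose law and church implement the project's purposes.
medieval = Civilization(
    central_project=CentralProject(purposes=["salvation", "social order"]),
    institutions=[
        SocialTechnology("law", ["social order"]),
        SocialTechnology("church", ["salvation"]),
    ],
)
print(medieval.unimplemented_purposes())  # → [] (every purpose is implemented)
```

The design choice worth noting is that the central project is the only component that states purposes; every other institution is characterized entirely by the purposes it serves, which is the emphasis of the account above.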

Partial and Complete Definitions of Civilization

There are many definitions of civilization that have been proposed. Civilization is a multivariant phenomenon (it is characterized by many different properties), and so each time we look at civilization a bit differently, we tend to see something a bit different. I have been thinking about civilization for many years, writing up my ideas in fragmentary form on this blog, and continually revisiting these ideas and testing them for adequacy in the light of later formulations. In the above I have tried to show how different definitions of civilization (especially definitions of varying degrees of resolution) can be compatible and do not necessarily point to contradiction. However, this does not entail that all definitions of civilization are compatible.

Formally, we will want to know which definitions of civilization are different ways of looking at the same thing, and thus ultimately compatible if we can fit them together properly within an overarching framework, and which definitions are not singling out the same thing, either because they fail to single out anything, or because they fail to single out civilization specifically. Someone may set out to define civilization and end up defining culture or society instead (and perhaps conflating culture, society, and civilization). Others may set out to define civilization and end up producing an incoherent definition that doesn’t allow us to converge upon civilizations in any reliable theoretical way. More often, attempts at defining civilization end up defining some part or aspect or property of civilization, but fail to illuminate civilization on the whole.

Partial definitions of civilization mean that the definition does not yet capture the big picture of civilization, but partial definitions can still be very helpful. As we have seen above, the institutions that jointly shape civil society can be divided into a class of institutions of the economic infrastructure (the ways and means of civilization) and a class of institutions of the intellectual superstructure (the exposition of the ends and aims of civilization), though all of these institutions can also be seen as falling within the same class of social technologies employed to implement the central project of a civilization.

. . . . .
