14 October 2013
The Escalation of Integration
in Combined Arms Operations
In separate posts that made no attempt at a comprehensive treatment I have written about the past, present, and possible future military use of swarms, drones, and decoys. I realize now that a tactical doctrine that could integrate swarm, drone, and decoy weapons systems and their tactics would be a powerful conceptual tool for future combat scenarios, and possibly also would point the way to an extended conception of combined arms operations that transcends that concept as it is known today.
If the reader is familiar with some of my other posts, you may be aware that I have some interest in what I call extended conceptions and have written about them on several occasions, most specifically in relation to an extended conception of ecology that I call metaphysical ecology and an extended conception of history that I call metaphysical history. You can readily understand, then, the intrinsic interest that I find in an extended conception of combined arms operations. From a philosophical point of view, we have an intellectual obligation to push our ideas to the very limit of their coherency and applicability in order to explore their outermost possibilities. That is what I have suggested (or attempted to suggest) in relation to ecology and history, and that is what I am suggesting here. But even a sketch of an extended conception of warfare — call it metaphysical warfare, if you like — would be beyond the parameters of a blog post, so at the present moment I will confine myself to the mostly practical consequences for combined arms operations in the light of an extended conception of warfare, though I hope to return to this topic in more detail later. In fact, I hope someday to literally write the book on metaphysical warfare, but that remains a project for the future.
One of the distinctive aspects of combined arms operations is to recognize both the individual strengths and weaknesses of a given weapons system and its particular doctrine of employment in the battlespace, and then to integrate individual weapons systems, in their doctrinal context, with other weapons systems that can, in combination, uniquely facilitate the strengths of a given weapons system while compensating (to the degree possible) for its weaknesses. This is a principle that admits of generalization both to smaller scales and to larger scales. It brings a certain unity to our conception of combined arms warfare when we can see this single principle expressed at different orders of magnitude in space and time.
An illustration of what I mean by combined arms warfare “expressed at different orders of magnitude in space and time” (and, I might add, integrated within and across different orders of magnitude, diachronically and synchronically) can be seen at the microscopic level with the trend toward integrated avionics in the F-22 and F-35A, which seamlessly bring together mission systems and vehicle systems in a tightly integrated package — this is combined arms (better, integrated arms) within a single weapons system. At the macroscopic level, combined arms warfare goes beyond the integration of many distinct weapons systems and naturally seeks the integration of distinct forces — this is usually called “inter-operability” — so that inter-service rivalries and differences in training, doctrine, and tactics among the services of one nation-state (in the case of the US, this means Army, Navy, Air Force, Marines, and the Coast Guard) and among multi-national forces do not become obstacles to unity of command and clarity of the objective.
Neil Warner provides a clear definition of inter-operability that illustrates this macroscopic scale of combined arms, converging on the interoperability of distinct forces:
“Interoperability can be defined as the ability of systems, units or forces to provide to and accept services from other systems, units or forces and to use the services so exchanged to enable them to operate effectively together. Interoperability cannot solely be thought of on an information system level, but must include doctrine, people, procedures and training.”
Neil Warner, ADI Limited, Interoperability – An Australian View, 7th International Command and Control Research and Technology Symposium
Given the realities of interservice rivalries and the disproportionate control that each service may have over particular classes of weapons systems (e.g., the Air Force has more jets than the Navy, but the Navy still does have jets), ideal interoperability must not only integrate the forces of distinct nation-states but also the various forces of a single nation-state.
Between the polar extremes of microscopic integration of individual weapons systems and the macroscopic integration of entire armed forces there lies the middle ground, which is what most people mean when they talk about combined arms operations — the integration of soldiers on the ground with man-portable systems, mobile fire, armored assets, air assets and so on in a single battleplan in which all act in concert under a unified command to achieve a clearly defined objective.
Combined arms operations are as old as warfare, which is in turn as old as civilization. The most famous examples of combined arms operations were those of mobile mechanized units with close air support that came of age during the Second World War and which are still the basis of military doctrine in our time. Rapid technological advances in weapons systems in recent decades, however, point toward a new era of combined arms operations.
In terms of air power, we are all aware of the rapid success of drones in both surveillance and combat roles; there have been many recent discussions of swarm warfare (something I have attempted to contribute to myself in The Swarming Attack); and decoys are, like combined arms operations, as old as war itself. I think that these three elements — swarms, drones, and decoys — will come together in a very powerful way in future military operations. Drones are more effective when sent out in swarms and accompanied by decoys to increase the numbers of the swarm; decoys are more effective when accompanied by drones and flying in a swarm; swarms are more effective when they combine drones and decoys into an indistinguishable whole that descends upon an enemy like a plague of locusts.
Already we have seen the utility of drones, and many have forecast that the F-35 will be the last generation of human-piloted fighter aircraft. Just recently, an F-16 was fitted out as a drone and was flown without a pilot. It ought to be possible, in theory, to do exactly the same thing with an F-22 or an F-35. Drone warfare is not something that is coming soon; it is here now. But drones are vulnerable (as are all pieces of hardware), and the best drones are expensive and complex pieces of equipment. It would make sense to deploy a few expensive drones with offensive capabilities alongside a much larger number of cheaper drones that would be indistinguishable from the drones with offensive capabilities. A few combat-capable drones together with a much larger number of decoys would constitute a swarm of drones and decoys, and a swarm has combat advantages of its own that would make this combined arms weapons system of drones and decoys all the more powerful.
Combined arms operations of swarms, drones, and decoys need not be limited to air assets. Most of the considerations I mentioned above in relation to aerial swarms, drones, and decoys are equally true for naval swarms, drones, and decoys — something that I discussed in Small Boat Swarms: Strategic or Tactical? and Flying Boat Swarms? Recent reports have also discussed DARPA’s Maximum Mobility and Manipulation program, which includes a variety of distinct robots for land-based warfare (cf. Pentagon-funded Atlas robot refuses to be knocked over by Matthew Wall, Technology reporter, BBC News), including both two- and four-legged robots, some built to carry heavy loads and others built for speed. Land-based robots could also be deployed according to the combined arms principles of swarms, drones, and decoys.
While the robotization of warfare — drone aircraft, drone naval vessels (both surface and subsurface), self-driving vehicles, robots on two legs and four legs — presents significant opportunities for the most technologically advanced nation-states, the deployment of these systems would require a highly robust control architecture, without which unity of command would be impossible. The growing list of acronyms describing the kind of control architecture necessary for automated combined arms operations has gone from command and control (C2) to command, control, and communications (C3) to C4 (adding computers) to C4I to C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance). What this culminates in is now called the networked battlespace or netcentric warfare (something that I discussed in Epistemological Warfare).
Future wars will always be parallel wars, with one war being prosecuted in the actual battlespace and another war being prosecuted in parallel in the virtual battlespace (i.e., cyberwarfare or netcentric warfare). There has always been a parallel prosecution of wars on the homefront and on the front line, with the homefront being a war of propaganda, information, and ideology, while the front line is a war of men and machines thrown up against each other.
The opening of a virtual front is closely analogous to the advent of air power, which added the need for command of the air to the already familiar needs of command of the ground and command of the seas. Douhet’s visionary treatise, The Command of the Air, set this out with astonishing prescience. It is impossible for me to read Douhet without being impressed by his clarity of vision of the future. This is a rare ability. And yet we know that by the time of the Second World War (and even more so today) the command of the air is not merely another front: command of the air is central to warfare as we know it today.
The fact that I wrote that it would be the virtual battlespace that hosts a parallel fight betrays my now-archaic point of view: the primary battle may well be in the virtual battlespace, while the actual combat in the actual battlespace is that which is fought in parallel. A first strike could come in the virtual battlespace; an ambush could come in the virtual battlespace; a war of attrition could be fought in the virtual battlespace. Command of cyberspace may prove to be as central to future warfare as command of the air is to contemporary warfare. This introduces yet another conception of integrated warfare: the integration of actual and virtual battlespaces.
Each party to a conflict will seek to secure its own C4ISR capabilities while compromising the C4ISR capabilities of its adversary or adversaries. Each will develop its own strategies, tactics, and doctrines for this new front, and it is to be expected that, in the attempt to overwhelm the enemy’s computer and communications systems, we will see the electronic equivalent of B. H. Liddell Hart’s “expanding torrent” in cyberspace, seeking the disruption of enemy computer networks.
It may be taken as axiomatic that computing power is finite. Although the upper bound of computing systems is not known, and may never be known, the fact that there is an upper limit is known. (I will observe that this is a non-constructive assertion, which demonstrates that non-constructivist thought is not abstruse but often has a direct applicability to experience.) A finite computing system can be overwhelmed. If a system is 99% effective, a swarm totaling 100 drones and decoys may result in one getting through; if a system is 99.9% effective, a swarm of 1,000 may result in one getting through, and so on. If you know the limitations of your enemy’s targeting computers, you can defeat them numerically.
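To make this arithmetic concrete, here is a minimal sketch in Python — with purely hypothetical effectiveness figures and swarm sizes, and the simplifying assumption that each attacker is engaged independently — of how the expected number of “leakers” and the probability that at least one attacker penetrates scale with swarm size:

# A minimal sketch of the saturation arithmetic above. The effectiveness
# figures and swarm sizes are hypothetical, for illustration only, and
# each engagement is assumed to be independent.

def expected_leakers(swarm_size: int, effectiveness: float) -> float:
    """Expected number of attackers that penetrate a defense that
    intercepts each attacker with probability `effectiveness`."""
    return swarm_size * (1.0 - effectiveness)

def prob_at_least_one_leaker(swarm_size: int, effectiveness: float) -> float:
    """Probability that at least one attacker penetrates."""
    return 1.0 - effectiveness ** swarm_size

for effectiveness, swarm_size in [(0.99, 100), (0.999, 1000)]:
    print(f"{effectiveness:.3f} effective vs. swarm of {swarm_size}: "
          f"expected leakers = {expected_leakers(swarm_size, effectiveness):.1f}, "
          f"P(at least one leaker) = {prob_at_least_one_leaker(swarm_size, effectiveness):.2f}")

On these assumptions, a 99% effective defense facing 100 attackers expects to let one through, and the probability that at least one penetrates is roughly two in three; a real defense is further constrained by finite magazines and finite track capacity, which only strengthens the point about saturation.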
In many cases, the operational parameters of a computerized targeting system may be known, or can be estimated with a high degree of accuracy. Continuous improvements in technology will continuously augment the parameters of updated or newly designed computerized targeting systems, but even the latest and greatest technology will remain finite. This finitude is a vulnerability that can be exploited. In fact, Leibniz defined metaphysical evil in terms of finitude. We can do better than a definition, however: we can quantify the metaphysical evil (i.e., the finitude) of a weapons system. More importantly — and this is one of those rare cases in which comparative concepts may be more significant than quantitative concepts — we can introduce comparative measures of finitude. If one party to a conflict can simply get the better of its adversary in a comparative measure of computing finitude, it will win the C4ISR battle, though that does not yet guarantee a win on parallel fronts, much less winning the war.
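As a toy illustration of such a comparative measure, the following sketch ranks two hypothetical targeting systems by the smallest swarm that saturates each of them — invented parameters, under the same independence assumption as the sketch above, not the specifications of any actual system:

import math

# A toy comparative measure of "finitude": the smallest swarm that pushes
# the probability of at least one leaker past a chosen threshold. The
# interception probabilities below are invented; engagements are again
# assumed to be independent.

def saturation_size(effectiveness: float, threshold: float = 0.5) -> int:
    """Smallest swarm size for which P(at least one leaker) exceeds
    the threshold."""
    return math.ceil(math.log(1.0 - threshold) / math.log(effectiveness))

system_a = 0.99   # hypothetical per-attacker interception probability
system_b = 0.999

print(saturation_size(system_a))  # 69  -- the more finite system
print(saturation_size(system_b))  # 693 -- tenfold harder to saturate

Whichever party can field swarms larger than its adversary’s saturation size, while keeping its own saturation size out of the adversary’s reach, gets the better of the comparison — which is the sense in which a comparative measure of finitude matters more here than any absolute figure.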
. . . . .
. . . . .
. . . . .
10 October 2013
Life Lessons from Morally Compromised Philosophers
With particular attention to the Heidegger case
I began this blog with the idea that I would write about current events from a philosophical perspective and said in my initial post that I wanted to see history through the prism of ideas. This continues to be my project, however imperfectly conceived or unevenly executed. It is a project that necessitates engagement both with the world and with philosophy simultaneously. And so it is that my posts have ranged widely over warfare and the history of ideas, inter alia, and as a consequence of this dual mandate I have often found myself reading and citing sources that are not the common run of reading for philosophers. Some philosophers, however, are both influential and controversial, and Martin Heidegger has become one such philosopher. Heidegger’s influence in philosophy has only grown since his death (primarily in Continental thought), but the controversy about his involvement with Nazism has kept pace and grown along with Heidegger’s reputation.
It may help my readers in the US to understand the impact of the Heidegger controversy to compare it to the intersection of evil and ideals in an iconic American thinker — a man more familiar to Americans than Heidegger, who was an iconic continental thinker. Take Thomas Jefferson, for example. Some years ago (in 1998, to be specific) I saw two television documentaries about the life of Thomas Jefferson. The first was a typical laudatory television documentary about one of the American founding fathers (I didn’t take notes at the time, so I don’t know which documentary this was, but it may well have been the 1997 Ken Burns film about Jefferson, which I recently re-watched to confirm my memory of its ambiguous treatment of Jefferson’s relationship to his slaves), which touched upon the possibility of Jefferson fathering children by his slave Sally Hemings, while not taking the idea very seriously.
Then in 1998 the news came out of DNA tests that proved conclusively that Jefferson had fathered the children of his slave Sally Hemings, and the scientific nature of the evidence rapidly made inroads among Jefferson scholars, who had been slow to acknowledge Jefferson’s “shadow family” (as such families were once called in the antebellum South). The consensus of Jefferson scholars changed so rapidly that it makes one’s head spin — but only after two hundred years of denial. And there remain those today who continue to deny Jefferson’s paternity of Sally Hemings’ children.
Not long after this news was made public, I saw another documentary about Jefferson in which the whole issue was treated very differently; the perspective of this documentary accepted as unproblematic Jefferson’s paternity of Sally Hemings’ children, and examined Jefferson’s life and ideas in the light of this “shadow family.” I don’t think that Jefferson suffered at all from this latter documentary treatment; he definitely came across less as an icon and more as a fallible human being, which is not at all objectionable. It is, in fact, more human, and more believable.
Though Jefferson did not suffer in my estimation because he was revealed to be human, all-too-human, there is nevertheless something deeply disturbing about the image of Jefferson sitting down to dinner with his white family while being served at dinner by the mulatto children whom he sired with his slaves, and it is deeply disturbing in a way that is not at all unlike the way that it is deeply disturbing to know that when Heidegger met Karl Löwith in 1936 near Rome (two years after Heidegger left his Rectorship in Freiburg), Heidegger wore a Nazi swastika pin on his lapel the entire time, knowing that Löwith was a Jew who had been forced to flee Nazi Germany. One cannot but wonder, on a purely human level, apart from any ideology, how one person could be so utterly unconcerned with the well-being of another.
It would be disingenuous to attempt to defend the indefensible by making the claim that all intellectuals of Jefferson’s time were conflicted over slavery; this simply was not the case. Schopenhauer, for example, consistently wrote against slavery and never showed the slightest sign of wavering on the issue, but, of course, Schopenhauer’s income did not depend on slaves, while Jefferson’s did.
We know that Jefferson struggled mightily with the question of slavery in his later years, tying himself in knots, as most conflicted men do, trying to square the actual record of his life with his ideals. It is easy to dismiss individuals, even those who have struggled with the contradictions in their life, as mere hypocrites, but the charge of hypocrisy, while carrying great emotional weight, is the least interesting charge that can be made against a man’s ideas. As I wrote in my Variations on the Theme of Life, “The world is mendacious through and through; mendacity is the human condition. To renounce hypocrisy is to renounce the world and to institute an asceticism that cannot ever be realized in practice.” (section 169)
Heidegger does not seem to have been conflicted about his Nazism in the way that Jefferson was conflicted about slavery. Many years after the Second World War, when the record of Nazi death camps was known to all, Heidegger could still refer to the “inner truth and greatness of this movement,” while in the meeting with Löwith mentioned above Heidegger was quite explicit that his political engagement with Nazism was a direct consequence of his philosophical views.
One obvious and well-trodden path for handling a philosopher’s political “indiscretions” is to hold that a philosopher’s theoretical works are a thing apart, elevated above the world like Plato’s Forms — one might even say sublated in the Hegelian sense: at once elevated, suspended, and canceled. This strategy allows one to read any philosopher and ignore any detail of life that one chooses. I don’t think that this constitutes a good contribution to intellectual honesty.
I myself was once among those who read philosophers for their philosophical ideas only, and while I was never a Heidegger enthusiast or a Heidegger defender, I thought of Heidegger’s political engagement with Nazism as mostly irrelevant to his philosophy. At some point I don’t clearly recall, I became intensely interested in Heidegger’s Nazism, and there was a flood of books telling the whole sorry story to feed my interest: Heidegger and Nazism by Victor Farias, which was the book that opened up Heidegger’s Nazi past to scrutiny, On Heidegger’s Nazism and Philosophy by Tom Rockmore, The Heidegger Controversy: A Critical Reader edited by Richard Wolin, Heidegger’s Crisis: Philosophy and Politics in Nazi Germany by Hans Sluga, Heidegger, Philosophy, Nazism by Julian Young, The Shadow of That Thought by Dominique Janicaud, and, most recent and perhaps the most devastating of them all, Heidegger: The Introduction of Nazism into Philosophy in Light of the Unpublished Seminars of 1933-1935 by Emmanuel Faye.
Even with all this material now available on Heidegger’s Nazi past, Heidegger still has his apologists and defenders. Beyond the steadfast apologists for Heidegger — who are perhaps more compromised than Heidegger himself — there are a variety of strategies to excuse Heidegger from his involvement with the Nazis, as when Heidegger’s Nazism is called an “episode” or a “period,” or characterized as “compromise, opportunism, or cowardice” (as in Julian Young’s Heidegger, Philosophy, Nazism, p. 4). Young also uses the terms conviction, commitment, and flirtation, though Young ultimately exculpates Heidegger, asserting that, “…neither the early philosophy of Being and Time, nor the later, post-war philosophy, nor even the philosophy of the mid-1930s — works such as the Introduction to Metaphysics with respect to which critics often feel themselves to have an open-and-shut case — stand in any essential connection to Nazism.” (Op. cit., p. 5)
Heidegger’s engagement with fascism represents the point at which Heidegger’s ideas demonstrate their relationship to the ordinary business of life, and this is a conjuncture of the first importance. This is, indeed, identical to the task I set myself in writing this blog: to demonstrate the relationship between life and ideas. And Heidegger, I came to realize, was a particularly clear and striking case of the intersection of life and thought, though not the kind of example that most philosophers would want to claim as their own. I can fully understand why a philosopher would simply prefer to distance themselves from Heidegger and, while not denying Heidegger’s Nazism, would choose not to talk about it either. But that Heidegger thereby becomes a problem for philosophy and philosophers is precisely what makes him interesting. We philosophers must claim Heidegger as one of our own, even if we are sickened by his Nazism, which was no mere “flirtation” or “episode,” but constituted a life-long commitment.
Heidegger was not merely a Nazi ideologue, but also briefly a Nazi official. The Nazification of the professions was central to the strategy of Nazi social revolution (with its own professional institution, the Ahnenerbe), and a willing collaborator such as Heidegger, prepared to Nazify a university, was a valuable asset to the Nazi party. Ultimately, however, Heidegger was embroiled in an internal conflict within the Nazi party, and when the SA was purged and many of its leaders killed on the Night of the Long Knives, the Strasserist SA faction lost out decisively, and Heidegger with it. Thereafter Heidegger was watched by the Nazi party, and Heidegger defenders have used this party surveillance to argue that Heidegger was regarded as a subversive by the Nazi party. He was a subversive, in fact, but only because he represented a faction of Nazism that had been suppressed. Heidegger continued as a Nazi party member, and paid his party dues right up to the end of the war. We see, then, that the SA purge was not merely a brutal struggle for power within the Nazi party, but also an episode in the history of ideas. This is interesting and important, even if it is also horrific.
The more carefully we study Heidegger’s philosophy, and read it in relation to his life, the more we can understand the relation of even the most subtle and sophisticated philosophy to ideological commitment and to the ordinary business of life. And it wasn’t only Heidegger who compromised himself. There is Frege’s political diary, less well known than Heidegger’s political views, and the much more famous case of Sartre and Camus. There are at least two book-length studies of the public quarrel and falling-out between Sartre and Camus (Sartre and Camus: A Historic Confrontation and Camus and Sartre: The Story of a Friendship and the Quarrel that Ended It by Ronald Aronson). Camus most definitely comes off looking better in this quarrel, with Sartre, the sophisticated technical philosopher, looking like a party-line communist, and Camus, the writer, the literary man, showing true independence of spirit. The political lives of Camus and Sartre have been written about extensively, but even so Heidegger remains an interesting case because of the impenetrable complexity of his thought and the manifest horrors of the regime he served. There ought to be a disconnect here, but there isn’t, and this, again, is interesting and important even if it is horrific.
I have had to ask myself if my interest in Heidegger’s Nazism is prurient (in so far as there is a purely intellectual sense of “prurient”). There is something a little discomfiting about becoming fascinated by the study of a great philosopher’s engagement with fascism. I am not innocent in this either. I, too, am a morally compromised philosopher. Perhaps the most I can hope for is to be aware of what I am involved in by making a careful study of philosophy’s involvement in politics. Naïveté strikes me as inexcusable in this context. I hope I have not been naïve.
I have not scrupled to read, to think about, and to quote individuals who were not only ideologically associated with crimes of unprecedented magnitude, but who have personally carried out capital crimes. In the case of Theodore “Ted” Kaczynski, who was personally responsible for several murders, I have carefully read his manifesto, Industrial Society and Its Future (read it several times through, in fact), have thought about it, and have quoted it. Others who have been influenced by Kaczynski’s work and have publicly discussed it have felt the need to apologize for it, like scientists who consider using the research of Nazi doctors. But an apology feels like an excuse. I don’t want to make excuses.
Heidegger, like Nazism itself, is a lesson from history. We can benefit from studying Heidegger by learning how the most sophisticated philosophical justifications can be formulated for the most vulgar and the most reprehensible of purposes. But we cannot learn the lesson without studying the lesson. Studying the lessons of history may well corrupt us. That is a danger we must confront, and a risk we must take.
. . . . .
. . . . .
. . . . .
6 October 2013
What is astrobiology?
I suppose that “astrobiology” could be called one of those “ten dollar” words, but despite being a long word of six syllables and a dozen letters, it can be defined quite simply.
Astrobiology has been called “the study of life in space” (Mix, Life in Space: Astrobiology for Everyone, 2009), and it has been said that “Astrobiology… removes the distinction between life on our planet and life elsewhere.” (Plaxco and Gross, Astrobiology: A Brief Introduction, 2006) Taking these sententious formulations together — astrobiology as the study of life in space, which removes the distinction between life on our planet and life elsewhere — gives us a new perspective with which to view life on Earth (and beyond).
There are, of course, longer and more detailed definitions of astrobiology. There are two in particular that I have cited in previous posts:
“The study of the living universe. This field provides a scientific foundation for a multidisciplinary study of (1) the origin and distribution of life in the universe, (2) an understanding of the role of gravity in living systems, and (3) the study of the Earth’s atmospheres and ecosystems.”
from the NASA strategic plan of 1996, quoted in Steven J. Dick and James E. Strick, The Living Universe: NASA and the Development of Astrobiology, 2005
“Astrobiology is the study of the origin, evolution, distribution, and future of life in the universe. This multidisciplinary field encompasses the search for habitable environments in our Solar System and habitable planets outside our Solar System, the search for evidence of prebiotic chemistry and life on Mars and other bodies in our Solar System, laboratory and field research into the origins and early evolution of life on Earth, and studies of the potential for life to adapt to challenges on Earth and in space.”
from the NASA astrobiology website
I cited these two definitions of astrobiology from NASA in Eo-, Eso-, Exo-, Astro- and other posts in which I used parallel formulations to define astrocivilization.
Learning to take the astrobiological point of view
Astrobiology is island biogeography writ large.
This is one of the few “tweets” I’ve written that was “re-tweeted” multiple times (I’m not very popular on Twitter.) After I wrote this I began a more extensive blog post on this theme, but didn’t finish it; the topic rapidly became too large and started to look like a book rather than a post. Then last month I posted this on Twitter:
In the same way that Darwin provided a new perspective on life, astrobiology provides a novel perspective that allows us to see life anew.
Recently I’ve also been referring to astrobiology with increasing frequency in my blog posts, and I referenced astrobiology in my 2012 presentation at the 100YSS symposium in Houston and just last month in my presentation at the Icarus Interstellar Starship Congress in Dallas.
It will be apparent to the reader, then, that the idea of astrobiology has been slowly growing on me for the past few years, and the more I think about it, the more I come to realize the fundamentally new perspective that astrobiology offers on life and its evolution. Moreover, astrobiology is also suggestive of the future of life, and of what we will discover about life as we explore the cosmos.
Astrobiology: the Fourth Revolution in the Life Sciences
The more I think about astrobiology, the more I realize that, like earlier revolutions in the life sciences, the astrobiological point of view gives a novel perspective on familiar facts, and in so doing it potentially orients science in a new direction. For this reason I now see astrobiology as the fourth of four revolutions that instantiated the life sciences in their present form and continue to shape the way that we think about biology and the living world.
Here is my list of the four major revolutions in biological thought that have shaped the life sciences:
● Natural selection Independently discovered by Charles Darwin and Alfred Russel Wallace, natural selection gave sharpness of focus to many vague evolutionary ideas that were being circulated in the nineteenth century. With natural selection, biology had a theory by which to work, that could unify biological thought in a way that had not previously been possible. Of the Darwinian revolution Harald Brüssow wrote, “How can biologists cope conceptually and technically with this enormous species number? A deep sigh of relief came for biologists already in 1859 with the publication of Charles Darwin’s book ‘On the Origin of Species’. Suddenly, biologists had a unifying theory for their branch of science. One could even argue that the holy grail of a great unifying theory was achieved by Darwin and Wallace at a time when Maxwell was unifying physics, the older sister of biology, at the level of the electromagnetic field theory.” (“The not so universal tree of life or the place of viruses in the living world” Phil. Trans. R. Soc. B, 2009, 364, 2263–2274)
● Genetics After Darwin and Wallace came Gregor Mendel, who solved fundamental problems in the theory of inheritance and so greatly strengthened the Darwinian theory of descent with modification. As Darwin had provided the mechanism for the overall structure of life, Mendel provided the mechanism that made natural selection possible. Mendel’s work, contemporaneous with Darwin’s, was forgotten and not rediscovered until the early twentieth century. It was not until the middle of the twentieth century that Crick and Watson were able to delineate the structure of DNA, which made it possible to describe Mendelian genetics on a molecular level, thus making molecular biology possible.
● Evo-devo Evo-devo, which is a contraction of evolutionary developmental biology, once again went back to the roots of biology (as Darwin had done by formulating a fundamental theory, and as Mendel had done by his careful study of inheritance in pea plants), and returned the study of embryology to the center of attention of evolutionary biology. Studying the embryology of organisms with the tools of molecular biology gave (and continues to give) new insights into the fine structure of life’s evolution. Before evo-devo, few if any suspected that the homology that Darwin and others noted on a macro-biological scale (the structural similarity of the hand of a man, the wing of a bat, and the flipper of a dolphin) would be reducible to homology on a genetic level, but evo-devo has demonstrated this in remarkable ways, and in so doing has further underlined the unity of all terrestrial life.
● Astrobiology Astrobiology now lifts life out of its exclusively terrestrial context and studies life in its cosmological context. We have known for some time that climate is a major driver of evolution, and that climatology is in turn largely driven by the vicissitudes of the Earth as the Earth orbits the sun, exchanges material with other bodies in our solar system, and the solar system entire bobs up and down in the plane of the Milky Way galaxy. Our understanding of life gains immensely by being placed in the cosmological context, which forces us both to think big, in terms of the place of life in the universe, and to think small, in terms of the details of the origins of life on Earth and its potential relation to life elsewhere in the universe.
This is obviously a list of revolutions in biological thought compiled by an outsider, i.e., by someone who is not a biologist. Others might well compile different lists. For example, I can easily imagine someone putting the Woesean revolution on a short list of revolutions in biological thought. Woese was largely responsible for replacing the tripartite division of animals, plants, and fungi with the tripartite division of the biological domains of Bacteria, Archaea and Eukarya. (There remains the question of where viruses fit in to this scheme, as discussed in the Brüssow paper cited above.)
Since I have included molecular phylogeny among the developments of evo-devo (in the graphic at the bottom of this post), I have implicitly placed Woese’s work within the evo-devo revolution, since it was the method of molecular phylogeny that made it possible to demonstrate that plants, animals, and fungi are all closely related biologically, while the truly fundamental division in terrestrial life is between the eukarya (which includes plants, animals, and fungi, all multicellular organisms), bacteria, and archaea. If any biologists happen to read this, I hope you will be a bit indulgent toward my efforts, though I certainly encourage you to leave a comment if I have made any particularly egregious errors.
Toward a Radical Biology
Darwin mentioned the origins of life only briefly and in passing. There is the famous reference to, “some warm little pond with all sorts of ammonia and phosphoric salts, — light, heat, electricity &c. present” in his letter to Joseph Hooker, and there is the famous passage at the end of his Origin of Species which I discussed in Darwin’s Cosmology:
“Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”
Darwin, of course, had nothing to go on at this point. Trying to understand or explain the origins of life without molecular biology would be like trying to explain the nature of water without the atomic and molecular theory of matter: the conceptual infrastructure to circumscribe the most basic elements of life did not yet exist. (The example of trying to define water without the atomic theory of matter is employed by Robert M. Hazen in his lectures on the Origins of Life.)
Just as Darwin pressed biology beyond the collecting and comparison of beetles in the backyard, and opened up deep time to biology (and, vice versa, biology to deep time), so astrobiology presses forward with the project of evolutionary biology, pursuing the natural origins of life to its chemical antecedents. Astrobiology is a radical biology in the same way that Darwin’s was a radical biology in his time: both go to the root of the matter to the extent possible given the theoretical, scientific, and technological parameters of thought. It is in this radical sense that astrobiology is integral with origins of life research; it is in this sense that the two are one.
The humble origins of radical ideas
The radical biology of Darwin did not start out as such. In his early life, Darwin considered becoming a country parson, and when Darwin left on his voyage on the Beagle as Captain Fitzroy’s gentleman companion, he held mostly conventional views. It is easy to imagine an alternative history in which Darwin retained his conventional views, went on to become a country parson, and gave Sunday sermons that were mostly moral homilies punctuated by the occasional quote from scripture to illustrate the moral lesson with a story from the tradition he nominally represented. Such a Darwin from an alternative history would have continued to collect beetles during the week and would have maintained his interest in natural history.
Just as Darwin came out of the context of English natural history (which, before Darwin, gave us those classic works of teleology, Paley’s Natural Theology and Chambers’ Vestiges of the Natural History of Creation — a work that the young Darwin greatly admired), so too astrobiology comes out of the context of a later development of natural history — the scientific search for the origins of life and for extraterrestrial life. While the search for extraterrestrial life is “big science” of an order of magnitude only possible for an institution like NASA, in this respect it stands in the humble tradition of natural history, since we must send robots to Mars and the other planets until we can go there ourselves with a shovel and rock hammer. From such humble beginnings sometimes emerge radical consequences.
I think we are already beginning to see the potentially radical character of astrobiology, and that this development in biology promises a paradigm shift almost of the scope and magnitude of natural selection. Indeed, both natural selection and astrobiology can be understood as further (and radical) contextualizations of the theme of man’s place in nature. When Darwin wrote, he contextualized human history in the most comprehensive conception of nature then possible; today astrobiology must contextualize not only human history but also the totality of life on Earth in a much more comprehensive cosmological context.
As our knowledge of the world (which was once very small, and very parochial) steadily expands, we are eventually forced to extend and refine our concepts in order to adequately account for the world that we now know. Natural selection and astrobiology are steps in the extension and refinement of our conception of life, and of the place of life in the world. Life simpliciter is, after all, a “folk” concept. Indeed, “life” is folk biology and “world” is folk cosmology. Astrobiology brings together these folk concepts and attempts to bring scientific rigor to them.
The biology of the future
Astrobiology is laying the foundations for the biology of the future. Here and now on Earth, without having surveyed life on other worlds, astrobiologists are attempting to formulate concepts adequate to understanding life at the largest and the smallest scales. Once we take these conceptions along with us when we eventually explore alien worlds — including alien worlds close to home, such as Mars and the ocean beneath the ice of Europa — it is to be expected that further revolutions in the life sciences will come about as a result of attempting to understand what we eventually find in the light of the concepts we have preemptively developed in order to understand biology beyond the surface of the Earth.
Future revolutions in biology will likely have the same radical character as natural selection, genetics, evo-devo, and astrobiology. Future naturalists will do what naturalists do best: they will spend their time in the field finding new specimens and describing them for science, and in the process of the slow and incremental accumulation of scientific knowledge new ideas will suggest themselves. Perhaps someone laid low by some alien fever, like Wallace tossing and turning as he suffered from a fever in the Indonesian archipelago, will, in a moment of insight, rise from their sick bed long enough to dash off a revolutionary paper, sending it off to another naturalist, now settled and meditating over his own experiences of new and unfamiliar forms of life.
The naturalists of alien forms of life will not necessarily have the same point of view as that of astrobiologists — and that is all to the good. Science thrives when it is enriched by new perspectives. At present, the revolutionary new perspective is astrobiology, but that will not likely remain true indefinitely.
. . . . .
. . . . .
. . . . .
. . . . .
25 September 2013
Hegel is not remembered as the clearest of philosophical writers, and certainly not the shortest, but among his massive, literally encyclopedic volumes Hegel also left us one very short gem of an essay, “Who Thinks Abstractly?” that communicates one of the most interesting ideas from Hegel’s Phenomenology of Mind. The idea is simple but counter-intuitive: we assume that knowledgeable individuals employ more abstractions, while the common run of men content themselves with simple, concrete ideas and statements. Hegel makes the point that the simplest ideas and terms that tend to be used by the least knowledgeable among us also tend to be the most abstract, and that as a person gains knowledge of some aspect of the world, abstract terms like “tree” or “chair” or “cat” take on concrete immediacy, previous generalities are replaced by details and specificity, and one’s perspective becomes less abstract. (I wrote about this previously in Spots Upon the Sun.)
We can go beyond Hegel himself by asking a perfectly Hegelian question: who thinks abstractly about history? The equally obvious Hegelian response would be that the historian speaks the most concretely about history, and it must be those who are least knowledgeable about history who speak and think the most abstractly about history.
“…it is difficult to imagine that any of the sciences could treat time as a mere abstraction. Yet, for a great number of those who, for their own purposes, chop it up into arbitrary homogenous segments, time is nothing more than a measurement. In contrast, historical time is a concrete and living reality with an irreversible onward rush… this real time is, in essence, a continuum. It is also perpetual change. The great problems of historical inquiry derive from the antithesis of these two attributes. There is one problem especially, which raises the very raison d’être of our studies. Let us assume two consecutive periods taken out of the uninterrupted sequence of the ages. To what extent does the connection which the flow of time sets between them predominate, or fail to predominate, over the differences born out of the same flow?”
Marc Bloch, The Historian’s Craft, translated by Peter Putnam, New York: Vintage, 1953, Chapter I, sec. 3, “Historical Time,” pp. 27-29
The abstraction of historical thought implicit in Hegel and explicit in Marc Bloch is, I think, more of a problem than we commonly realize. Once we look at the problem through Hegelian spectacles, it becomes obvious that most of us think abstractly about history without realizing how abstract our historical thought is. We talk in general terms about history and historical events because we lack the knowledge to speak in detail about exactly what happened.
Why should it be any kind of problem at all that we think abstractly about history? People say that the past is dead, and that it is better to let sleeping dogs lie. Why not forget about history and get on with the business of the present? All of this sounds superficially reasonable, but it is dangerously misleading.
Abstract thinking about history creates the conditions under which the events of contemporary history — that is to say, current events — are conceived abstractly despite our manifold opportunities for concrete and immediate experience of the present. This is precisely Hegel’s point in “Who Thinks Abstractly?” when he invites the reader to consider the humanity of the condemned man who is easily dismissed as a murderer, a criminal, or a miscreant. But we think in such abstract terms not only of local events, but also, if not especially, of distant events and large events that we cannot experience personally, so that massacres and famines and atrocities are mere massacres, mere famines, and mere atrocities because they are never truly real for us.
There is an important exception to all this abstraction, and it is the exception that shapes us: one always experiences the events of one’s own life with concrete immediacy, and it is the concreteness of personal experience contrasted to the abstractness of everything else not immediately experienced that is behind much (if not all) egocentrism and solipsism.
Thus while it is entirely possible to view the sorrows and reversals of others as abstractions, it is almost impossible to view one’s own sorrows and reversals in life as abstractions, and as a result of the contrast between our own vividly experienced pain and the abstract idea of pain in the life of another we have a very different idea of all that takes place in the world outside our experience as compared to the small slice of life we experience personally. This observation has been made in another context by Elaine Scarry, who in The Body in Pain: The Making and Unmaking of the World rightly observed that one’s own pain is a paradigm of certain knowledge, while the pain of another is a paradigm of doubt.
Well, this is exactly why we need to make the effort to see the big picture, because the small picture of one’s own life distorts the world so severely. But given our bias in perception, and the unavoidable point of view that our own embodied experience gives to us, is this even possible? Hegel tried to arrive at the big picture by seeing history whole. In my post The Epistemic Overview Effect I called this the “overview effect in time” (without referencing Hegel).
Another way to rise above one’s anthropic and individualist bias is the overview effect itself: seeing the planet whole. Frank White, who literally wrote the book on the overview effect, The Overview Effect: Space Exploration and Human Evolution, commented on my post in which I discussed the overview effect in time and suggested that I look up his other book, The Ice Chronicles, which takes up the overview effect in relation to time.
I have since obtained a copy of this book, and here are some representative passages that touch on the overview effect in relation to planetary science and especially glaciology:
“In the past thirty-five years, we have grown increasingly fascinated with our home planet, the Earth. What once was ‘the world’ has been revealed to us as a small planet, a finite sphere floating in a vast, perhaps infinite, universe. This new spatial consciousness emerged with the initial trips into Low Earth Orbit…, and to the moon. After the Apollo lunar missions, humans began to understand that the Earth is an interconnected unity, where all things are related to one another, and that what happens on one part of the planet affects the whole system. We also saw that the Earth is a kind of oasis, a place hospitable to life in a cosmos that may not support living systems, as we know them, anywhere else. This is the experience that has come to be called ‘The Overview Effect’.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 15
“The view of the whole Earth serves as a natural symbol for the environmental movement. It leaves us unable to ignore the reality that we are living on a finite ‘planet,’ and not a limitless ‘world.’ That planet is, in the words of another astronaut, a lifeboat in a hostile space, and all living things are riding in it together. This realization formed the essential foundation of an emerging environmental awareness. The renewed attention on the Earth that grew out of these early space flights also contributed to an intensified interest in both weather and climate.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 20
“Making the right choices transcends the short-term perspectives produced by human political and economic considerations; the long-term habitability of our home planet is at stake. In the end, we return to the insights brought to us by our astronauts and cosmonauts as they took humanity’s first steps in the universe: We live in a small, beautiful oasis floating through a vast and mysterious cosmos. We are the stewards of this ‘good Earth,’ and it is up to us to learn how to take good care of her.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 214
It is interesting to note in this connection that glaciology yielded one of the earliest forms of scientific dating techniques, varve chronology, originating in Sweden in the nineteenth century. Varve chronology dates sedimentary layers by the annual layers of alternating coarse and fine sediments from glacial runoff — making it something like dendrochronology, except with glacial sediments instead of tree rings.
Scientific historiography can give us a taste of the overview effect, though considerable effort is required to acquire the knowledge, and it is not likely to have the visceral impact of seeing the overview effect with your own eyes. Even an idealistic philosophy like that of Hegel, as profoundly different as this is from the empiricism of scientific historiography, can give a taste of the overview effect by making the effort to see history whole and therefore to see ourselves within history, as a part of an ongoing process. Probably the scientists of classical antiquity would have been delighted by the overview effect, if only they had had the opportunity to experience it. Certainly they had an inkling of it when they proved that the Earth is spherical.
There are many paths to the overview effect; we need to widen these paths even as we blaze new trails, so that the understanding of the planet as a finite and vulnerable whole is not merely an abstract item of knowledge, but also an immediately experienced reality.
. . . . .
. . . . .
. . . . .
20 September 2013
What happens when an individual achieves a new level of perspective taking? Is one perspective displaced by another? Is a new perspective added to existing perspectives?
Firstly, what do I mean by “perspective taking” in this context? In a couple of posts I’ve discussed perspective taking as a theme of developmental psychology — The Hierarchy of Perspective Taking and The Overview Effect as Perspective Taking — but I haven’t tried to rigorously define it. The short version is that perspective taking is putting yourself in the position of the other and seeing the world from the other’s point of view (or perspective, hence perspective taking). This is, as I have previously remarked, one of the most basic thought experiments in moral philosophy.
Kant implicitly appeals to perspective taking in his categorical imperative, in which he asserts that one ought to act as though the principle that guides one’s actions should be made a universal law. In other words, you ask yourself, “What would the consequences be if everyone acted as I am acting?” This, in turn, supposes that one can place oneself into the position of the other and imagine how the principle of your action would be interpreted and put into practice by others.
It could also be argued that Kant’s categorical imperative is implicit in the regime of popular sovereignty, which has its origins in Kant’s time but which only in the twentieth century became the universally acclaimed normative standard for political entities. Everyone knows that their individual vote does not count, because if you subtracted your vote from the total in any election, it would not affect the outcome. Nevertheless, if everyone came to this conclusion and acted upon it, no one would vote and democracy would collapse.
To return to perspective taking, there are much more sophisticated formulations than the off-the-cuff version I have given above. There is an interesting paper that discusses the conception of perspective taking — THE DEVELOPMENT OF SOCIO-MORAL MEANING MAKING: DOMAINS, CATEGORIES, AND PERSPECTIVE-TAKING — the authors of which, Monika Keller and Wolfgang Edelstein, converge on this definition:
“…perspective-taking is taken to represent the formal structure of coordination of the perspectives of self and other as they relate to the different categories of people’s naive theories of action. The differentiation and coordination of the categories of action and the self-reflexive structure of this process are basic to those processes of development and socialization in which children come to reconstruct the meaning of social interaction in terms of both what is the case and what ought to be the case in terms of morally responsible action. In order to achieve the task of establishing consent and mutually acceptable lines of action in situations of conflicting claims and expectations, a person has to take into account the intersubjective aspects of the situation that represent the generalizable features, as well as the subjective aspects that represent the viewpoints of the persons involved in the situation. In its fully developed form, this complex process of regulation and interaction calls for the existence and operation of complex socio-moral knowledge structures and a concept of self as a morally responsible agent. The ability to differentiate and coordinate the perspectives of self and other thus is a necessary condition both in the development of socio-moral meaning making and in the actual process of solving situations of conflicting claims.”
“THE DEVELOPMENT OF SOCIO-MORAL MEANING MAKING: DOMAINS, CATEGORIES, AND PERSPECTIVE-TAKING,” by Monika Keller and Wolfgang Edelstein, in W. M. Kurtines & J. L. Gewirtz (Eds.) (1991). Handbook of moral behavior and development: Vol. 2. Research (pp. 89-114). Hillsdale, NJ: Erlbaum.
I recommend the entire paper, which discusses, among other matters, the attempts by others to formulate an adequate explication of perspective taking. But I have an ulterior motive for this discussion of perspective taking. The real reason I have engaged in this inquiry about perspective taking is because of my recent posts about the overview effect — The Epistemic Overview Effect and The Overview Effect as Perspective Taking — in which I treat the overview effect as a visceral trigger for perspective taking on a global (and even trans-planetary) scale.
Thinking about the overview effect as perspective taking, I considered the possibility that taking a new global or even trans-planetary perspective might involve either dispensing with a former perspective in order to replace it with a novel perspective (which I will call the displacement model), or adding a new perspective to already existing perspectives (which I will call the augmentation model). (And here I want to cite Siggi Becker and Mark Lambertz, who commented on my earlier overview effect post on Facebook, and spurred me into thinking about what it means for one to achieve a new perspective on the world.)
For cognitive scientists and sociologists, perspective taking is cumulative, especially in the case of moral development. There is an entire literature devoted to Robert L. Selman’s five stages of perspective taking (which is very much influenced by Piaget) and Lawrence Kohlberg’s six stages of moral development (three stages — pre-conventional, conventional and post-conventional — each broken into two divisions).
There are, however, definite limits to this Piagetian cognitive basis for the development of the moral life of the individual. Without some degree of empathy for the other, the entire cognitive approach to moral development falls apart, because one might systematically pursue the development of one’s perspective taking ability only to more successfully exploit and manipulate others through a more effective cunning than that provided by a purely egocentric approach to interaction with others. Thus we arrive at the Schopenhauerian conception that compassion is the basis of morality.
Max Scheler systematically critiqued this classic Schopenhauerian position in his book The Nature of Sympathy. Scheler concluded that compassion alone is insufficient for morality, thus undermining Schopenhauer’s position; but though compassion may not be a sufficient condition for morality, it may yet be a necessary condition. Perhaps it is compassion and perspective taking together that make morality possible. These philosophical issues have also been taken up in the spirit of social science by Carol Gilligan’s work on the ethics of care. I only touch on these issues here in passing, since any serious consideration of these works and their authors would require substantial exposition.
Perspective taking is central to Lawrence Kohlberg’s theory of moral development, and what Kohlberg calls “disequilibrium” (which serves as a spur to moral development) might also be called “disorientation,” or, more specifically, “moral disorientation.” And it is disorienting when one achieves a new perspective, especially when one does so suddenly, as in the case of the visceral trigger provided by the overview effect. Plato describes such a disorienting experience beautifully in his famous allegory of the cave — the philosopher is twice disoriented, initially as he ascends from the cave of shadows up into the real world, and again when he descends into the cave of shadows in order to attempt to enlighten those still chained below. A powerful experience leaves one feeling disoriented, and much of this disorientation is due to the collapse of a familiar system of thought that gave one a sense of one’s place in the world, and its replacement by a new system of thought that is not yet familiar.
If we focus too much on the cumulative and continuous aspects of perspective taking, on the assumption that each level of development must build upon the immediately previous level, we may lose sight of the disruptive nature of perspective taking — and moral development is not a primrose path. As individuals confront moral dilemmas they are forced to consider difficult questions and sometimes to give hard answers to them. This is central to the moral growth of the person, and it is often quite uncomfortable, attended by anxiety and inner conflict. One often feels that one must fight one’s way through a problem in order to surmount it. This is very much like Wittgenstein’s description of throwing away the ladder once one has climbed up.
If the overview effect constitutes a new level of perspective taking, and if perspective taking is central to moral development, then the perspective taking of the overview effect constitutes a stage in human moral development — and it constitutes that stage of moral development that coincides with civilization’s expansion beyond terrestrial boundaries.
. . . . .
. . . . .
. . . . .
18 September 2013
I now realize that in my previous post on The Epistemic Overview Effect I failed to make an obvious connection with some earlier threads of my thought. Specifically, I failed to see or to develop the connection between the overview effect and what some developmental psychologists call “perspective taking.”
In The Hierarchy of Perspective Taking I discussed the developmental psychology of Jean Piaget, Erik Erikson, and Lev Vygotsky. In that post I attempted to show how perspective taking transcends the life of the individual and applies as well to entire civilizations — which distinction might be called that between ontogenetic perspective taking and phylogenetic perspective taking. There I wrote:
“Piagetian cognitive development in terms of perspective taking can easily be extended throughout the human lifespan (and beyond) by the observation that there are always new perspectives to take. As civilization develops and grows, becoming ever more comprehensive as it does so, the human beings who constitute this civilization are forced to formulate always more comprehensive conceptions in order to take the measure of the world being progressively revealed to us. Each new idea that takes the measure of the world at a greater order of magnitude presents the possibility of a new perspective on the world, and therefore the possibility of a new achievement in terms of perspective taking.”
Re-reading this passage in light of the overview effect — the view of the earth entire experienced by astronauts and cosmonauts, as well as the change in perspective that a few of these observers have had as a result of seeing the earth whole with their own eyes — I would now add to my exposition of a hierarchy of perspective taking that the expansion and extension of civilization not only produces new ideas and conceptions, but also new experiences. Technology makes it possible to experience aspects of the world directly that were impossible to experience prior to the advent of industrial-technological civilization.
The overview effect is a paradigmatic case of technologically-facilitated experience. While I could say that those who have so far been fortunate enough to experience the overview effect are “forced” as a result to formulate new conceptions of the world (as I used this idiom of being “forced” previously), it would be better to say, as I put it more recently in The Epistemic Overview Effect, that the experience is a trigger that inspires an effort to formulate a conception of the world adequate to the experience.
While the overview effect itself is likely a powerful experience, merely the idea that others are experiencing an overview can itself be a powerful experience. This involves the most fundamental of all ethical thought experiments: the attempt to place ourselves in the position of the other, and so to experience the otherness of the other and the otherness of ourselves. When we believe that we have understood the other’s point of view, it is not unusual to say, “I can see your perspective.”
Perspective taking in the form of taking the perspective of the other is a key achievement in the development of an ethical perspective of the individual life. Some never achieve this level of insight, and some come to an adequate appreciation of the perspective of the other only late in life.
In the Swedish film My Life as a Dog there is a beautiful evocation of such ethical perspective taking in the life of a young boy, by way of the theme of the Russian space dog Laika, which recurs as a motif to which the young protagonist returns time and again as an example of perspective. Here are some of the voiceovers from the protagonist’s narration:
“And what about Laika, the space dog? They put her in the Sputnik and sent her into space. They attached wires to her heart and brain to see how she felt. I don’t think she felt too good. She spun around up there for five months until her doggy bag was empty. She starved to death. It’s important to have something like that to compare things to.”
“It’s strange how I can’t stop thinking about Laika. People shouldn’t think so much. ‘Time heals all wounds,’ Mrs. Arvidsson says. Mrs. Arvidsson says some wise things. You have to try to forget.”
“…I’ve been kinda lucky. I mean, compared to others. You have to compare, so you can get a little distance from things. Like Laika. She really must have seen things in perspective.”
Laika did indeed see things in perspective, and may well have experienced the overview effect before any human being. The young boy in My Life as a Dog understands this, intuiting Laika’s perspective, and is able to better judge his own station in life by comparing his situation to that of Laika.
As long as our industrial-technological civilization continues in its development (i.e., as long as it does not succumb to the existential risks of flawed realization or permanent stagnation), we individuals contextualized within this civilization can continue our development, and this development will be facilitated by the technologies produced by this civilization, which will give us new experiences, and these new experiences will afford us new perspectives on the world.
Recently there have been many news stories about Voyager-1 being the first human artifact to leave the solar system (cf. Voyager probe ‘leaves Solar System’ by Jonathan Amos, Science correspondent, BBC News). Meditations upon the achievement of Voyager-1 have taken the form of a perspective taking on our solar system entire. We are inspired to contemplate our perspective on the world by imaginatively taking the point of view of Voyager-1. Some day, a human being will travel as far or farther than Voyager-1, and will look back and see our sun at a distance, as we once looked back and saw the earth for the first time at a distance.
Our technologically-facilitated perspective taking will not end there. There are grander views yet to contemplate, and grander conceptions of nature that will follow from a direct, visceral experience of these grander views. As wonderful as the Earth must appear from space, and as transformative as seeing this must be, further in the future there will be the possibility of flying far enough beyond the Milky Way that we will be able to turn around and look back at our home galaxy. Knowing it to be our home (and by that time having come to a kind of astronautical familiarity with the Earth, our solar system, and the Orion Spur of the Milky Way), we will be moved by the sight of our entire galaxy seen whole, in one glance of the eye, hanging suspended and seemingly motionless against the blackness of space unrelieved by stars — for the only companions to our galaxy from this extra-galactic point of view will be other galaxies, and this astonishing perspective may well spur us toward a yet more comprehensive, therefore more adequate, conception of the universe.
. . . . .
. . . . .
. . . . .
. . . . .
14 September 2013
The Overview Effect
The “overview effect” is so named for the view of the earth entire — an “overview” of the earth — enjoyed by astronauts and cosmonauts, as well as the change in perspective that a few of these privileged observers have had as a result of seeing the earth whole with their own eyes.
One of these astronauts, Edgar Mitchell, who was on the 1971 Apollo 14 mission and was the sixth human being to walk on the moon, has been instrumental in bringing attention to the overview effect, and has written a book about his experiences as an astronaut and how they affected his perception and perspective, The Way of the Explorer: An Apollo Astronaut’s Journey Through the Material and Mystical Worlds. A short film has been made about the overview effect, and an institution, The Overview Institute, has been established to study and to promote it.
Here is an extract from the declaration of The Overview Institute:
For more than four decades, astronauts from many cultures and backgrounds have been telling us that, from the perspective of Earth orbit and the Moon, they have gained such a vision. There is even a common term for this experience: “The Overview Effect”, a phrase coined in the book of the same name by space philosopher and writer Frank White. It refers to the experience of seeing firsthand the reality of the Earth in space, which is immediately understood to be a tiny, fragile ball of life, hanging in the void, shielded and nourished by a paper-thin atmosphere. From space, the astronauts tell us, national boundaries vanish, the conflicts that divide us become less important and the need to create a planetary society with the united will to protect this “pale blue dot” becomes both obvious and imperative. Even more so, many of them tell us that from the Overview perspective, all of this seems imminently achievable, if only more people could have the experience!
We have a hint of the overview effect when we see pictures of the Earth as a “blue marble” and as a “pale blue dot”; those who have had the opportunity to see the Earth as a blue marble with their own eyes have presumably been affected by this vision to a greater extent than we can understand from seeing the photographs. Here is another description of the overview effect:
When people leave the surface of the Earth and travel into Low Earth Orbit, to a space station, or the moon, they see the planet differently. My colleague at the Overview Institute, David Beaver, likes to emphasize that they not only see the Earth from space but also in space. He has also been a strong proponent that we describe what then happens as a change in world view.
Deep Space: The Philosophy of the Overview Effect, Frank White
In the same essay White then quotes himself from his book, The Overview Effect: Space Exploration and Human Evolution, on the same theme:
“Mental processes and views of life cannot be separated from physical location. Our “world view” as a conceptual framework depends quite literally on our view of the world from a physical place in the universe.”
Frank White has sought to give a systematic exposition of the overview effect in his book, The Overview Effect: Space Exploration and Human Evolution, which seeks to develop a philosophy of space travel derived from the personal experience of space by space travelers.
The Spatial Overview
There is no question in my mind that sometimes you have to see things for yourself. I have invoked this argument numerous times in writing about travel — no amount of eloquent description or stunning photographs can substitute for the experience of seeing a place for yourself with your own eyes. This is largely a matter of context: being in a place, experiencing a place as a presence, requires one’s own presence, and one’s own presence can be realized only as the result of a journey. A journey contextualizes an experience within the experiences required to reach the object of the journey. The very fact that one must travel in order to reach a destination alters the experience of the destination itself.
To be present in a landscape means that all of one’s senses are engaged: one not only sees, but one sees with the whole of one’s peripheral vision, and when one turns one’s body in order to take in more of the landscape, one not only sees more of the landscape, but one feels one’s body turn; one smells the air; one hears the distinctive reverberations of the most casual sounds — all of the things that remind us that this is not an illusion but possesses all the chance qualities that mark a real, concrete experience.
I have remarked in other posts that one of the distinctive trends in contemporary philosophy of mind is that of emphasizing the embodiedness of the mind, and in this context the embodied mind is a mind that is inseparable from its sensory apparatus and its sensory apparatus is inseparable from the world with which it is engaged. When our eyes hurt as we look at the sun we are reminded by this visceral experience of sight — one might say overwhelming sight — that we experience the world in virtue of a sensory apparatus that is made of essentially the same materials as the world — that there is an ontological reciprocity of eye that sees and sun that shines, and it is only because the two share the same world and are made of the same materials that they stand in a relation of cause and effect to each other. We are part of the world, of the world, and in the world.
Presumably, then, to be present in space and to feel oneself kinaesthetically in space — most obviously, the feeling of a micro-gravity environment once off the surface of the earth — is part of the experience of the overview effect, as is the dramatic journey into orbit, which must remind the viewer of the difficulty of attaining the perspective of seeing the world whole. This is the overview effect in space.
The Temporal Overview
There is also the possibility of an overview effect in time. For the same reason that we might insist that some experiences must be had for oneself, and that one must be present spatially in a spatial landscape in order to appreciate that landscape for what it is, we might also insist that a person who has lived a long life and who has experienced many things has a certain kind of understanding of the temporal landscape of life, and it is only through a conscious knowledge of the experience of time and history that we can attain an overview of time.
The movement in contemporary historiography called Big History (which I have written about several times, e.g., in The Science of Time and Addendum on Big History as the Science of Time) is an attempt to achieve an overview experience of time and history.
I have observed elsewhere that we find ourselves swimming in the ocean of history, but this very immersion in history often prevents us from seeing history whole — an interesting contrast to the spatial overview experience, in which contextualization in a particular space is necessary to its appreciation and understanding. But contextualization in a particular time — which we would otherwise call parochialism — tends to limit our historical perspective, and we must actively make an effort to free ourselves from our temporal and historical contextualization in order to see time and history whole.
It is the effort to free ourselves from temporal parochialism, and from the particularities and peculiarities of our own time, that gives us a perspective on history that is not tied to any one history but embraces the whole of time as the context of many different histories. This is the overview effect in time.
The Epistemic Overview
I would like to suggest that there is also an epistemic overview effect. It is not enough to be told about knowledge in the way that newspaper and magazine articles might tell a popular audience about a new scientific discovery, or in the way that textbooks tell students about the wider world. While in some cases this may be sufficient, and we must rely upon the reports of others because we cannot construct the whole of knowledge on our own, in many cases knowledge must be gained firsthand in order for its proper significance to be appreciated.
Elsewhere (in P or not-P) I have illustrated the distinction between a constructive and a non-constructive point of view as being something like the difference between climbing up a mountain, clambering over every rock until one achieves the summit (constructive), versus taking a helicopter and being set down on the summit from above (non-constructive). (I have taken this example over from French mathematician Alain Connes.) With this image in mind, being blasted off into space and seeing the mountain from orbit is a paradigmatically non-constructive experience, and it is difficult to imagine how it could be made a constructive experience.
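For readers who want the mathematical sense of this distinction, the standard folklore illustration (not Connes’s helicopter, just the textbook example) is a non-constructive existence proof, which delivers its result without ever telling you which path led there:

```latex
% Claim: there exist irrational numbers a and b such that a^b is rational.
% Non-constructive proof: consider x = \sqrt{2}^{\sqrt{2}}.
%   Case 1: x is rational. Then a = b = \sqrt{2} witnesses the claim.
%   Case 2: x is irrational. Then take a = x and b = \sqrt{2}, so that
%     a^b = \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2,
%     which is rational.
% The witnesses exist in either case, yet the proof never reveals which
% case holds -- we are set down on the summit without having made the climb.
```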
Well, there are ways to make it constructive. Once space technology becomes widely distributed and accessible, if a person were to build their own SSTO from off-the-shelf parts and then pilot themselves into orbit, that would be something like a constructive experience of the overview effect. And if we go on to create a vibrant and vigorous spacefaring civilization, making it into orbit will only be the first of many steps, so that a constructive experience of space travel will be to “climb” one’s way from the surface of the earth through the solar system and beyond, touching every transitional point in between. It has been said that a journey of a thousand miles begins with a single step — this is very much a constructivist perspective. And it holds true that a journey of a million miles or a billion miles begins with a single step, and that first step of a cosmic voyage is the step that takes us beyond the surface of the earth.
Despite the importance and value of the constructivist perspective, it has its limitations, just as the oft-derided non-constructive point of view has its particular virtues and its significance. Non-constructive methods can reveal to us knowledge that is disruptive because it is forced upon us suddenly, in one fell swoop. Such an experience is memorable; it leaves an impression, and quite possibly it leaves much more of an impression than a painstakingly gradual revelation of exactly the same perspective.
This is the antithesis of the often-cited example of the frog placed in a pot of water who doesn’t jump out as the water is slowly brought to a boil. The frog in this scenario is a victim of constructivist gradualism; if the frog had had a non-constructive perspective on the hot water in which he was being boiled to death, he might have jumped out and saved himself. And perhaps this is exactly what we need as human beings: a non-constructive (and therefore disruptive) perspective on the familiar life that has crept over us day-by-day, step-by-step, and bit-by-bit.
An epistemic overview of knowledge can give us a disruptive conception of the totality of knowledge that is not unlike the disruptive experience of the overview effect in space, which allows us to see the earth whole, and the disruptive experience of time that allows us to see history whole. Moreover, I would argue that the epistemic overview is the ultimate category — the summum genus — that must contextualize the overview effect in space and in time. However, it is important to point out that the immediate visceral experience of the overview effect may be the trigger that is required for an individual to begin to seek the epistemic overview that will give meaning to his experiences.
. . . . .
. . . . .
. . . . .
11 September 2013
It has been a dozen years now since 11 September 2001. Like that day, today is a beautifully clear and pleasant September day. That such an event should be associated in my memory with nice weather is not unlike the memory, now almost a hundred years old, of the summer of 1914, just before the Guns of August, when Europeans reported one of their most pleasant summers ever, as though to drive home the stark horror of all that followed that beautiful summer.
I last wrote about September 11 two years ago, on the tenth anniversary, in Ephemera and Pseudo-Events, when I explored the nature of anniversaries as “pseudo-events” that are created by media participation. This year media participation seems pretty low key, making the anniversary perhaps less of a pseudo-event. Moreover, the anniversary of September 11 is also the anniversary of the coup that ousted Salvador Allende from power in Chile, and the news in the sources I read (I should mention that I don’t get my news from US sources) gave almost as much play to the 40th anniversary of the Pinochet coup as to the terrorist attacks on the US.
September 11 not only marked a turning point for US geopolitical involvement in the world in the post-Cold War era, it also marked a decisive turning point in the narrative by which we understand these events and their place in history. New terms and political catch-phrases entered our vocabulary and were relentlessly repeated by media outlets until they became meaningless almost as rapidly as they were introduced. A lot can happen in a dozen years — three or four presidents, for example, though in fact the post-9/11 political environment has yielded only two.
As rapidly as events occurred in the wake of September 11, events have continued to succeed each other with astonishing rapidity, and for all the day-to-day continuity that one experiences when swimming in the ocean of history, we can already begin to see the dissolution of the political patterns of the first decade of the 21st century and the emergence of patterns that will define the second decade of the 21st century. The US has sought to execute a “strategic pivot” to the Asia-Pacific region, even as the apparent clarity of purpose in Afghanistan and Iraq yields to the irremediable ambiguities of Libya and Syria.
It is a worthwhile thought experiment to attempt to see one’s own time in historical perspective, but this is admittedly very difficult. As I noted above, the onward rush of events in the present does not allow us to lose sight of the continuity of history, but we know that when we look back on previous centuries (which is itself an arbitrary historical periodization) we tend to break up the centuries into decades and make sweeping generalizations about each decade (perhaps an even more arbitrary historical periodization) as though each were lived separately, in isolation from the decade immediately preceding and immediately following.
What will be said, a hundred years from now (or five hundred years from now), about the first two decades of the twenty-first century? How will they be compared and contrasted in university examinations? What will our descendants say about how we lived, and how different it was to be alive in 2013 as compared to 2003? One obvious narrative structure would be to consign US political history to the presidents in office, so that the first decade of the twenty-first century will be thought of as the Bush years, defined by 9/11 and the response thereto, while the second decade of the twenty-first century will be thought of as the Obama years, when Americans wanted to distance themselves from the radical democratization initiatives of the Bush years and return to a more traditional isolationist stance in relation to the larger world.
This is one particularly obvious narrative, but one of the things that makes it obvious is its traditionalist focus on political leaders and military engagements — the dreaded grammar-school triumvirate of “names, dates, places” — whereas historiography has turned decisively away from top-down narratives in favor of bottom-up narratives that focus on the ordinary lives of ordinary people. But who is ordinary? In the context of the succession of presidents, any one president is ordinary, so context must be taken into account.
How could we arrive at a bottom-up narrative structure for contemporary history since the end of the Cold War? Or must we change our perspective even more, acknowledging that on the micro-historical level things change little and slowly, so that periodizations must look to macro-historical forces and structures that are so much larger than the Cold War, and what preceded and followed it, that such events barely register in the lives of ordinary individuals? In this context, what would seem to matter is the slow erosion of the position of the middle class, widening income disparity (just yesterday it was reported that US income inequality is at a record high), and the large-scale change in the structure of the labor market influencing the kind of jobs that are available, how much they pay, and how long they last.
Of course, no one is going to be satisfied with any one narrative or another exclusively. Part of the complexity of history is the collision of competing narratives. While in the history textbooks one narrative may triumph to the exclusion of others, the conflict from which the triumph emerges inevitably alters the triumphant narrative so that it becomes a kind of synthesis of the trends of the age.
. . . . .
. . . . .
. . . . .
. . . . .
8 September 2013
The Life of Civilization
Tenth in a Series on Existential Risk
What makes a civilization viable? What makes a species viable? What makes an individual viable? To put the question in its most general form, what makes a given existent viable?
These are the questions that we must ask in the pursuit of the mitigation of existential risk. The most general question — what makes an existent viable? — is the most abstract and theoretical question, and as soon as I posed this question to myself in these terms, I realized that I had attempted to answer this earlier, prior to the present series on existential risk.
In January 2009 I wrote, generalizing from a particular existential crisis in our political system:
“If we fail to do what is necessary to perpetuate the human species and thus precipitate the end of the world indirectly by failing to do what was necessary to prevent the event, and if some alien species should examine the remains of our ill-fated species and their archaeologists reconstruct our history, they will no doubt focus on the problem of when we turned the corner from viability to non-viability. That is to say, they would want to try to understand the moment, and hence possibly also the nature, of the suicide of our species. Perhaps we have already turned that corner and do not recognize the fact; indeed, it is likely impossible that we could recognize the fact from within our history that might be obvious to an observer outside our history.”
This poses the viability of civilization in stark terms, and I can now see in retrospect that I was feeling my way toward a conception of existential risk and its moral imperatives before I was fully conscious of doing so.
From the beginning of this blog I started writing about civilizations — why they rise, why they fall, and why some remain viable for longer than others. My first attempt to formulate the above stark dilemma facing civilization in the form of a principle, in Today’s Thought on Civilization, was as follows:
a civilization fails when it fails to change when the world changes
This formulation in terms of the failure of civilization immediately suggests a formulation in terms of the success (or viability) of a civilization, which I did not formulate at that time:
A civilization is viable when it successfully changes when the world changes.
I also stated in the same post cited above that the evolution of civilization has scarcely begun, which continues to be my point of view and informs my ongoing efforts to formulate a theory of civilization on the basis of humanity’s relatively short experience of civilized life.
In any case, in the initial formulation given above I have, like Toynbee, taken the civilization as the basic unit of historical study. I continued in this vein, writing a series of posts about civilization, The Phenomenon of Civilization, The Phenomenon of Civilization Revisited, Revisiting Civilization Revisited, Historical Continuity and Discontinuity, Two Conceptions of Civilization, A Note on Quantitative Civilization, inter alia.
I moved beyond civilization-specific formulations of what I would come to call the principle of historical viability in a later post:
…the general principle enunciated above has clear implications for historical entities less comprehensive than civilizations. We can both achieve a greater generality for the principle, as well as to make it applicable to particular circumstances, by turning it into the following schema: “an x fails when it fails to change when the world changes” where the schematic letter “x” is a variable for which we can substitute different historical entities ceteris paribus (as the philosophers say). So we can say, “A city fails when it fails to change…” or “A union fails when it fails to change…” or (more to the point at present), “A political party fails when it fails to change when the world changes.”
And in Challenge and Response I elaborated on this further development of what it means to be historically viable:
…my above enunciated principle ought to be amended to read, “An x fails when it fails to change as the world changes” (instead of “…when the world changes”). In other words, the kind of change an historical entity must undergo in order to remain historically viable must be in consonance with the change occurring in the world. This is, obviously, or rather would be, a very difficult matter to nail down in quantitative terms. My schema remains highly abstract and general, and thus glides over any number of difficulties vis-à-vis the real world. But the point here is that it is not so much a matter of merely changing in parallel with the changing world, but changing how the world changes, changing in the way that the world changes.
It was also in this post that I first called this the principle of historical viability.
I now realize that what I then called historical viability might better be called existential viability — at least, by reformulating my principle of historical viability again and calling it the principle of existential viability, I can assimilate these ideas to my recent formulations of existential risk. Seeing historical viability through the lens of existential risk and existential viability allows us to formulate the following relationship between the latter two:
Existential viability is the condition that follows from the successful mitigation of existential risk.
Thus the achievement of existential risk mitigation is existential viability. So when we ask, “What makes an existent viable?” we can answer, “The successful mitigation of risks to that existent.” This gives us a formal framework for understanding existential viability as the successful mitigation of existential risk, but it tells us nothing about the material conditions that contribute to existential viability. Determining the material conditions of existential viability will be a matter both of empirical study and of the formulation of a theoretical infrastructure adequate to the conditions that bear upon civilization. Neither of these exists yet, but we can make some rough observations about the material conditions of existential viability.
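Though these formulations are philosophical rather than mathematical, the schematic character of the principle invites a compact rendering. Here is a minimal sketch in first-order notation — my own symbolization, offered only as a clarifying paraphrase of the formulations above, not as anything from the earlier posts:

```latex
% F(x,t): historical entity x fails at time t
% C(x,t): x changes as the world changes at time t (the amended formulation)
% M(x):   the existential risks to x are successfully mitigated
% V(x):   x is existentially viable

% The principle of historical viability:
%   "An x fails when it fails to change as the world changes."
\forall x \, \forall t \; \bigl( \neg C(x,t) \rightarrow F(x,t) \bigr)

% Existential viability as the outcome of existential risk mitigation:
\forall x \; \bigl( M(x) \rightarrow V(x) \bigr)
```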
Different qualities in different places at different times have contributed to the viability of existents. This is one of the great lessons of natural selection: evolution is not about a ladder of progress, but about what organism is best adapted to the particular conditions of a particular area at a particular time. When the “organism” in question is civilization, the lesson of natural selection remains valid: civilizations do not describe a ladder of progress, but those civilizations that have survived have been those best adapted to the particular conditions of a particular region at a particular time. Existential risk mitigation is about making civilization part of evolution, i.e., part of the long term history of the universe.
To acknowledge the position of civilization in the long term history of the universe is to recognize that a change has come about in civilization as we know it, and this change is primarily the consequence of the advent of industrial-technological civilization: civilization is now global; populations across the planet, once isolated by geographical barriers, now communicate instantaneously and trade and travel nearly instantaneously. A global civilization means that civilization is no longer selected on the basis of local conditions at a particular place at a particular time — which was true of past civilizations. Civilization is now selected globally, and this means placing the earth that is the bearer of global civilization in a cosmological context of selection.
What selects a planet for the long term viability of the civilization that it bears? This is essentially a question of astrobiology, a point that I attempted to make in my recent presentation at the Icarus Interstellar Starship Congress and in my post on Paul Gilster’s Centauri Dreams, Existential Risk and Far Future Civilization.
An astrobiological context suggests what we might call an astroecological context, and I have many times pointed out the relevance of ecology to questions of civilization. Pursuing the idea of existential viability may offer a new perspective for the application of methods developed for the study of the complex systems of ecology to the complex systems of civilization. And civilizations are complex systems if they are anything.
There is a growing branch of mathematical ecology called viability theory, with obvious application to the viability of the complex systems of civilization. We can immediately see this applicability and relevance in the following passage:
“Agent-based complex systems such as economics, ecosystems, or societies, consist of autonomous agents such as organisms, humans, companies, or institutions that pursue their own objectives and interact with each other and their environment (Grimm et al. 2005). Fundamental questions about such systems address their stability properties: How long will these systems exist? How much do their characteristic features vary over time? Are they sensitive to disturbances? If so, will they recover to their original state, and if so, why, from what set of states, and how fast?”
Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society (Understanding Complex Systems), edited by Guillaume Deffuant and Nigel Gilbert, p. 3
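To make the applicability a little more concrete, here is a minimal sketch of the kind of computation viability theory performs — a grid approximation of a “viability kernel,” the set of states from which some admissible control policy can keep a system within its constraints. The toy model (a logistic resource stock under a mandatory minimum harvest), the parameters, and the discretization are all my own illustrative assumptions, not anything drawn from Deffuant and Gilbert:

```python
import numpy as np

# Toy viability-kernel computation, in the spirit of the grid-based
# approximation algorithms used in viability theory.
# State: a renewable resource stock x with logistic growth; control: a
# harvest rate u that, by assumption, cannot fall below a minimum rate.
# Constraint set K: the stock must remain within [1, 10].

r, capacity = 0.5, 10.0                   # logistic growth parameters
u_controls = np.linspace(0.5, 1.5, 41)    # admissible harvest rates (u >= 0.5)
x_grid = np.linspace(0.0, 12.0, 241)      # discretized state space
spacing = x_grid[1] - x_grid[0]
dt = 1.0                                  # Euler time step

viable = (x_grid >= 1.0) & (x_grid <= 10.0)  # start from the constraint set K

def step(x, u):
    """One Euler step of the controlled dynamics dx/dt = r x (1 - x/capacity) - u."""
    return x + dt * (r * x * (1.0 - x / capacity) - u)

# Iteratively discard states from which no admissible control keeps the
# successor inside the current viable set; the fixed point approximates
# the viability kernel of K.
while True:
    successors = step(x_grid[:, None], u_controls[None, :])  # all (x, u) pairs
    idx = np.clip(np.round(successors / spacing).astype(int), 0, len(x_grid) - 1)
    survives = viable[idx].any(axis=1) & viable
    if np.array_equal(survives, viable):
        break
    viable = survives

kernel = x_grid[viable]
if kernel.size:
    print(f"approximate viability kernel: [{kernel.min():.2f}, {kernel.max():.2f}]")
else:
    print("viability kernel is empty: no state in K can be sustained")
```

Low stock levels drop out of the kernel: where growth cannot keep pace with the minimum harvest, no policy keeps the system within its constraints. In the vocabulary developed above, such states are existentially non-viable — the formal analogue of a civilization that can no longer change as the world changes.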
Civilization itself is an agent-based complex system like “economics, ecosystems, or societies.” Another innovative approach to complex systems and their viability is to be found in the work of Hartmut Bossel. Here is an extract from the abstract of his paper “Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets”:
Performance assessment in holistic approaches such as integrated natural resource management has to deal with a complex set of interacting and self-organizing natural and human systems and agents, all pursuing their own “interests” while also contributing to the development of the total system. Performance indicators must therefore reflect the viability of essential component systems as well as their contributions to the viability and performance of other component systems and the total system under study. A systems-based derivation of a comprehensive set of performance indicators first requires the identification of essential component systems, their mutual (often hierarchical or reciprocal) relationships, and their contributions to the performance of other component systems and the total system. The second step consists of identifying the indicators that represent the viability states of the component systems and the contributions of these component systems to the performance of the total system. The search for performance indicators is guided by the realization that essential interests (orientations or orientors) of systems and actors are shaped by both their characteristic functions and the fundamental and general properties of their system environments (e.g., normal environmental state, scarcity of resources, variety, variability, change, other coexisting systems). To be viable, a system must devote an essential minimum amount of attention to satisfying the “basic orientors” that respond to the properties of its environment. This fact can be used to define comprehensive and system-specific sets of performance indicators that reflect all important concerns.
…and in more detail from the text of his paper…
● Obtaining a conceptual understanding of the total system. We cannot hope to find indicators that represent the viability of systems and their component systems unless we have at least a crude, but essentially realistic, understanding of the total system and its essential component systems. This requires a conceptual understanding in the form of at least a good mental model.
● Identifying representative indicators. We have to select a small number of representative indicators from a vast number of potential candidates in the system and its component systems. This means concentrating on the variables of those component systems that are essential to the viability and performance of the total system.
● Assessing performance based on indicator states. We must find measures that express the viability and performance of component systems and the total system. This requires translating indicator information into appropriate viability and performance measures.
● Developing a participative process. The previous three steps require a large number of choices that necessarily reflect the knowledge and values of those who make them. In holistic management, it is therefore essential to bring in a wide spectrum of knowledge, experience, mental models, and social and environmental concerns to ensure that a comprehensive indicator set and proper performance measures are found.
“Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets,” Hartmut Bossel, Ecology and Society, Vol. 5, No. 2, Art. 12, 2001
Another dimension can be added to this applicability and relevance by the work of Xabier E. Barandiaran and Matthew D. Egbert on the role of norms in complex systems involving agents. Here is an extract from the abstract of their paper:
“One of the fundamental aspects that distinguishes acts from mere events is that actions are subject to a normative dimension that is absent from other types of interaction: natural agents behave according to intrinsic norms that determine their adaptive or maladaptive nature. We briefly review current and historical attempts to naturalize normativity from an organism-centred perspective that conceives of living systems as defining their own norms in a continuous process of self-maintenance of their individuality. We identify and propose solutions for two problems of contemporary modelling approaches to viability and normative behaviour in this tradition: 1) How to define the topology of the viability space beyond establishing normatively-rigid boundaries, so as to include a sense of gradation that permits reversible failure; and 2) How to relate, in models of natural agency, both the processes that establish norms and those that result in norm-following behaviour.”
The authors’ definition of a viability space in the same paper is of particular interest:
Viability space: the space defined by the relationship between: a) the set of essential variables representing the components, processes or relationships that determine the system’s organization and, b) the set of external parameters representing the environmental conditions that are necessary for the system’s self-maintenance
“Norm-establishing and norm-following in autonomous agency,” Xabier E. Barandiaran (IAS-Research Centre for Life, Mind, and Society, Dept. of Logic and Philosophy of Science, UPV/EHU University of the Basque Country, Spain) and Matthew D. Egbert (Center for Computational Neuroscience and Robotics, University of Sussex, Brighton, U.K.)
Clearly, an adequate account of the existential viability of civilization would want to address the “essential variables representing the components, processes or relationships that determine” the civilization’s structure, as well as the “external parameters representing the environmental conditions that are necessary” for the civilization’s self-maintenance.
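The definition quoted above is nearly formal already; a minimal symbolization — my own notation, extrapolated from the authors’ prose, not anything from their paper — might read:

```latex
% E: space of essential variables (the components, processes, and
%    relationships that determine the system's organization)
% P: space of external parameters (the environmental conditions necessary
%    for the system's self-maintenance)
% S(e,p): the system maintains itself under essential state e and conditions p

\mathcal{V} \;=\; \{\, (e, p) \in E \times P \;:\; S(e, p) \,\}

% The viability space V is the region of the joint state-environment space
% within which the system's self-maintenance is possible.
```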
In working through the conception of existential risk in the series of posts I have written here I have come to realize how comprehensive the idea of existential risk is, which gives it a particular utility in discussing the big picture and the human future. In so far as existential viability is the condition that results from the successful mitigation of existential risk, then the idea of existential viability is at least as comprehensive as that of existential risk.
In formulating this initial exposition of existential viability I have been struck by the conceptual synchronicities that have emerged: recent work in viability theory suggests the possibility of the mathematical modeling of civilization; the work of Barandiaran and Egbert on viability space has shown me the relevance of artificial life and artificial intelligence research; the key role of the concept of viability in ecology makes recent ecological studies (such as Assessing Viability and Sustainability, cited above) relevant to existential viability and therefore also to existential risk; formulations of ecological viability and sustainability, and the recognition that ecological systems are complex systems, demonstrate the relevance of complexity theory; and the ecological relevance to existential concerns points to the possibility of applying what I have written earlier about metaphysical ecology and ecological temporality to existential risk and existential viability, which in turn demonstrates the relevance of Bronfenbrenner’s work to this intellectual milieu. I dare say that the idea of existential viability has itself a kind of viability and resilience due to its many connections to many distinct disciplines.
. . . . .
. . . . .
Existential Risk: The Philosophy of Human Survival
10. Existential Risk and Existential Viability
. . . . .
. . . . .
. . . . .
. . . . .