18 December 2013
What does it mean for a body of knowledge to be founded in fact? This is a central question in the philosophy of science: do the facts suggest hypotheses, or are hypotheses used to give meanings to facts? These questions are also posed in history and the philosophy of history. Is history a body of knowledge founded on facts? What else could it be? But do the facts of history suggest historical hypotheses, or do our historical hypotheses give meaning and value to historical facts, without which the bare facts would add up to nothing?
Is history a science? Can we analyze the body of historical knowledge in terms of facts and hypotheses? Is history subject to the same constraints and possibilities as science? An hypothesis is an opportunity — an opportunity to transform facts in the image of meaning; facts are limitations that constrain hypotheses. An hypothesis is an epistemic opportunity — an opportunity to make sense of the world — and therefore an hypothesis is also at the same time an epistemic risk — a risk of interpreting the world incorrectly and misunderstanding events.
The ancient question of whether history is an art or a science would seem to have been settled by the emergence of scientific historiography, which clearly is a science, but this does not answer the question of what history was before scientific historiography. One might reasonably maintain that scientific historiography was the implicit telos of all previous historiographical study, but this fails to acknowledge the role of historical narratives in shaping our multiple human identities — personal, cultural, ethnic, political, mythological.
If Big History should become the basis of some future axialization of industrial-technological civilization, then scientific historiography too will play a constitutive role in human identity, and while other and older identity narratives presently coexist and furnish different individuals with a different sense of their place in the world, we have already seen the beginnings of an identity shaped by science.
There is a sense in which the scientific historian today knows much more about the past than those who lived in the past and experienced that past as an immediate yet fragmentary present. One might infer the possibility of a total knowledge of the past through the cumulative knowledge of scientific historiography — a condition denied to those who actually lived in the past — although this “total” knowledge must fall short of the peculiar kind of knowledge derived from immediate personal experience, as contemplated in the thought experiment known as “Mary’s room.”
In the thought experiment known as Mary’s room, also called the knowledge argument, we imagine a condition of total knowledge and compare this with the peculiar kind of knowledge that is derived from experience, in contradistinction to the kind of knowledge we come to through science. Here is the source of the Mary’s room thought experiment:
“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. [...] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?”
Frank Jackson, “Epiphenomenal Qualia” (1982)
Philosophers disagree on whether Mary learns anything upon leaving Mary’s room. As a thought experiment, it is intended not to give us a definitive answer to a circumstance that is never likely to occur in fact, but to sharpen our intuitions and refine our formulations. We can try to do the same with formulations of an ideal totality of knowledge derived from scientific historiography. There is a sense in which scientific historiography allows us to know much more about the past than those who lived in the past. To echo a question of Thomas Nagel, was there something that it was like to be in the past? Are there, or were there, historical qualia? Does the total knowledge of history afforded by scientific historiography fall short of capturing historical qualia?
In the Mary’s room thought experiment the agent in question is human and the experience is imposed colorblindness. Many people live with colorblindness without the condition greatly impacting their lives, so in this context it is plausible that Mary learns nothing upon the lifting of her imposed colorblindness, since the gap between these conditions is not as intuitively obvious as the gap between agents of a fundamentally different kind (as, e.g., distinct species) or between experiences of a fundamentally different kind, in which it is not plausible that the lifting of an imposed limitation on experience results in no significant impact on one’s life.
We can sharpen the formulation of Mary’s room, and thus potentially sharpen our own intuitions, by taking a more intense experience than that of color vision. We can also alter the sense of this thought experiment by considering the question across distinct species or across the division between minds and machines. For example, if a machine learned everything that there is to know about eating would that machine know what it was like to eat? Would total knowledge after the manner of Mary’s knowledge of color suffice to exhaust knowledge of eating, even in the absence of an actual experience of eating? I doubt that many would be convinced that learning about eating without the experience of eating would be sufficient to exhaust what there is to know about eating. Thomas Nagel’s thought experiment in “What is it like to be a bat?” alluded to above poses the knowledge argument across species.
We can give this same thought experiment yet another twist if we reverse the roles of minds and machines, and ask of machine experience, should machine consciousness emerge, the questions we have asked of human experience (or bat experience). If a human being learned everything there is to know about AI and machine consciousness, would such a human being know what it is like to be a machine? Could knowledge of machines exhaust uniquely machine experience?
The kind of total scientific knowledge of the world implicit in scientific historiography is not unlike what Pierre Simon LaPlace had in mind when he posited the possibility of predicting the entire state of the universe, past or future, on the basis of an exhaustive knowledge of the present. LaPlace’s argument is also a classic determinist position:
“We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it — an intelligence sufficiently vast to submit these data to analysis — it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. The human mind offers, in the perfection which it has been able to give to astronomy, a feeble idea of this intelligence. Its discoveries in mechanics and geometry, added to that of universal gravity, have enabled it to comprehend in the same analytical expressions the past and future states of the system of the world. Applying the same method to some other objects of its knowledge, it has succeeded in referring to general laws observed phenomena and in foreseeing those which given circumstances ought to produce. All these efforts in the search for truth tend to lead it back continually to the vast intelligence which we have just mentioned, but from which it will always remain infinitely removed. This tendency, peculiar to the human race, is that which renders it superior to animals; and their progress in this respect distinguishes nations and ages and constitutes their true glory.”
Pierre Simon, Marquis de Laplace, A Philosophical Essay on Probabilities, with an introductory note by E. T. Bell, New York: Dover Publications, Inc., Chapter II
While such a LaPlacean calculation of the universe would lie beyond the capability of any human being, it might someday lie within the capacity of another kind of intelligence. What LaPlace here calls “an intelligence sufficiently vast to submit these data to analysis” suggests the possibility of a sufficiently advanced (i.e., sufficiently large and fast) computer that could make this calculation, thereby achieving a kind of computational omniscience.
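LaPlace’s vast intelligence admits of a minimal modern formalization, a sketch in Hamiltonian terms rather than anything LaPlace himself wrote. If the state of the universe at time $t_0$ is the complete list of positions and momenta $(q(t_0), p(t_0))$ of its constituents, then Hamilton’s equations

\[
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}
\]

determine a unique trajectory $(q(t), p(t))$ for every time $t$, past or future alike. Computational omniscience, on this reading, would be the capacity to integrate these equations for every particle at once.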
Long before we reach the point of an “intelligence explosion” (the storied “technological singularity”), in which machines surpass the intelligence of human beings and each generation of machine is able to build a yet more intelligent successor, the computational power at our disposal will for all practical purposes exhaust the world and will thus have attained computational omniscience. We have already begun to converge upon this kind of total knowledge of the cosmos with the Bolshoi Cosmological Simulations and similar efforts with other supercomputers.
It is this kind of reasoning in regard to the future of cosmological simulations that has led to contemporary formulations of the “Simulation Hypothesis” — the hypothesis that we are, ourselves, at this moment, living in a computer simulation. According to the simulation argument, cosmological simulations become so elaborate and are refined to such a fine-grained level of detail that the simulation eventually populates itself with conscious agents, i.e., ourselves. Here, the map really does coincide with the territory, at least for us. The entity or entities conducting such a grand simulation, and presumably standing outside the whole simulation observing, can see the simulation for the simulation that it is. (The connection between cosmology and the simulation argument is nicely explained in the episode “Are We Real?” of the television series “What We Still Don’t Know” hosted by noted cosmologist Martin Rees.)
One way to formulate the idea of omniscience is to define omniscience as knowledge of the absolute infinite. The absolute infinite is an inconsistent multiplicity (in Cantorian terms). There is a certain reasonableness in this, as the logical principle of explosion, also known as ex falso quodlibet (namely, the principle that anything follows from a contradiction), means that an inconsistent multiplicity that incorporates contradictions is far richer than any consistent multiplicity. In so far as omniscience could be defined as knowledge of the absolute infinite, few would, I think, be willing to argue for the possibility of computational omniscience, so below we will pursue this from another angle, but I wanted to mention this idea of defining omniscience as knowledge of the absolute infinite because it strikes me as interesting.
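For readers who have not seen the principle of explosion spelled out, the standard derivation takes only a few lines; from a contradiction, any proposition B whatever follows:

    1. A ∧ ¬A     (premise: the contradiction)
    2. A          (from 1, ∧-elimination)
    3. ¬A         (from 1, ∧-elimination)
    4. A ∨ B      (from 2, ∨-introduction, for arbitrary B)
    5. B          (from 3 and 4, disjunctive syllogism)

This is why an inconsistent multiplicity is “richer” than any consistent multiplicity: it proves everything. But no more of this for now.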
The claim of computational omniscience must be qualified, since computational omniscience can exhaust only that portion of the world exhaustible by computational means; computational omniscience is the kind of omniscience that we encountered in the “Mary’s room” thought experiment, which might plausibly be thought to exhaust the world, or which might with equal plausibility be seen as falling far short of all that might be known of some body of knowledge.
Computational omniscience is distinct from omniscience simpliciter; while exhaustive in one respect, it fails to capture certain aspects of the world. Computational omniscience may be defined as the computation of all that is potentially computable, which leaves aside that which is not computable. The non-computable aspects of the world include, but are not limited to, non-computable functions, quantum indeterminacy, that which is non-quantifiable (for whatever reason), the qualitative dimension of conscious experience (i.e., qualia), and that which is inferred but not observable. These are pretty significant exceptions. What is left over? What part of the world is computable? This is a philosophical question that we must ask once we understand that computability has limits and that these limits may be distinct from the limits of human intelligence. Just as conscious biological agents face intrinsic epistemic limits, so non-biological agents would also face intrinsic epistemic limits — in so far as a non-biological agent can be considered an epistemic agent — but these limitations on biological and non-biological agents are not necessarily the same.
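The paradigm case of a non-computable function is the halting problem. Here is a minimal sketch of Turing’s diagonal argument in Python; the function names are illustrative only, not an actual library:

    def halts(f, x):
        """Hypothetical total decider: returns True iff f(x) terminates.
        No correct implementation can exist; stubbed for the sketch."""
        raise NotImplementedError

    def diagonal(f):
        # If the decider says diagonal(diagonal) halts, loop forever;
        # if it says it loops, halt immediately. Either answer is wrong.
        if halts(f, f):
            while True:
                pass
        else:
            return

    # halts(diagonal, diagonal) can be neither True nor False without
    # contradiction, so no such decider is computable.

Whatever else computational omniscience may exhaust, it cannot exhaust questions of this form.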
The ultimate inadequacy of computational omniscience points to the possibility of limited omniscience — though one might well assert that omniscience that is limited is not really omniscience at all. The limited omniscience of a computer capable of computing the fate of the known universe may be compared to recent research on what Daniel Kahneman calls the bounded rationality of human minds. Artificial intelligence is likely to be a bounded intelligence that exemplifies bounded rationality, although its boundaries will not necessarily coincide precisely with the boundaries that define human bounded rationality.
The idea of limited omniscience has been explored in mathematics, particularly in regard to constructivism. Constructivist mathematicians have formulated principles of omniscience, and, wary of both unrestricted use of tertium non datur and of its complete interdiction in the manner of intuitionism, the limited principle of omniscience has been proposed as a specific way to skirt around some of the problems implicit in the realism of unrestricted tertium non datur.
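As standardly formulated (following Bishop-style constructive analysis), the limited principle of omniscience (LPO) asserts that for every binary sequence $a_1, a_2, a_3, \ldots$

\[
(\forall n)\, a_n = 0 \;\lor\; (\exists n)\, a_n = 1 .
\]

This is a restricted instance of tertium non datur: weak enough to avoid the full realism of the unrestricted principle, yet still rejected by intuitionists, which makes it a useful benchmark for calibrating constructive systems.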
When we allow our mathematical thought to coincide with realities and infinities — an approach that we are assured is practical and empirical, and bound only to yield benefits — we find ourselves mired in paradoxes, and in the interest of freeing ourselves from this conceptual mire we are driven to a position like Einstein’s famous aphorism that, “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We separate and compartmentalize factual realities and mathematical infinities because we find it difficult “to hold two opposing ideas in mind at the same time and still retain the ability to function.”
Indeed, it was Russell’s attempt to bring together Cantorian conceptions of set theory with practical measures of the actual world that begat the definitive paradox of set theory that bears Russell’s name, and the responses to which have in large measure shaped post-Cantorian mathematics. Russell gives the following account of his discovery of his eponymous paradox in his Autobiography:
“Cantor had a proof that there is no greatest number, and it seemed to me that the number of all the things in the world ought to be the greatest possible. Accordingly, I examined his proof with some minuteness, and endeavoured to apply it to the class of all the things there are. This led me to consider those classes which are not members of themselves, and to ask whether the class of such classes is or is not a member of itself. I found that either answer implies its contradictory.”
Bertrand Russell, The Autobiography of Bertrand Russell, Vol. I, 1872-1914, “Principia Mathematica”
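Russell’s account compresses into one line of modern notation: let

\[
R = \{\, x : x \notin x \,\}; \quad \text{then} \quad R \in R \iff R \notin R ,
\]

and either answer to the question of whether $R$ is a member of itself implies its contradictory, exactly as Russell reports.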
None of the great problems of philosophical logic from this era — i.e., the fruitful period in which Russell and several colleagues created mathematical logic — were “solved”; rather, a consensus emerged among philosophers of logic, conventions were established, and, perhaps most importantly, Zermelo’s axiomatization of set theory became the preferred mathematical treatment of set theory, which allowed mathematicians to skirt the difficult issues in philosophical logic and to focus on the mathematics of set theory largely without logical distractions.
It is an irony of intellectual history that the next great revolution in mathematics to follow after set theory — which latter is, essentially, the mathematical theory of the infinite — was to be that of computer science, which constitutes the antithesis of set theory in so far as it is the strictest of strict finitisms. It would be fair to characterize the implicit theoretical position of computer science as a species of ultra-finitism, since computers cannot formulate even the most tepid potential infinite. All computing machines have an upper bound of calculation, and this is a physical instantiation of the theoretical position of ultra-finitism. This finitude follows from embodiment, which a computer shares with the world itself, and which therefore makes ultra-finite computing consistent with an ultra-finite world. In an ultra-finite world, it is possible that the finite may exhaust the finite and computational omniscience may be realized.
The universe defined by the Big Bang and all that followed from the Big Bang is a finite universe, and may in virtue of its finitude admit of exhaustive calculation, though this finite universe of observable cosmology may be set in an infinite context. Indeed, even the finite universe may not be as rigorously finite as we suppose, given that the limitations of our observations are not necessarily the limits of the real, but rather are defined by the limit of the speed of light. Leonard Susskind has rightly observed that what we observe of the universe is like being inside a room, the walls of which are the distant regions of the universe receding from us at superluminal velocity at the point at which they disappear from our view.
Recently in The Size of the World I quoted this passage from Leonard Susskind:
“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”
Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
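Susskind’s “about fifteen billion light years” is, to a first approximation, the Hubble radius, the distance at which the recession velocity implied by Hubble’s law reaches the speed of light. Taking a round value of $H_0 \approx 68$ km/s/Mpc (an assumption for the arithmetic, not a figure from Susskind’s book):

\[
d_H = \frac{c}{H_0} \approx \frac{3.0 \times 10^5 \ \text{km/s}}{68 \ \text{km/s/Mpc}} \approx 4{,}400 \ \text{Mpc} \approx 14.4 \ \text{billion light years} .
\]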
Susskind’s observation has not yet been sufficiently appreciated (as I previously noted in The Size of the World). What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole has become unobservable. We might term such empirical realities just beyond our grasp empirical unobservables. Empirical unobservables include (but are presumably not limited to — our “out” clause) all that which lies beyond the event horizon of Susskind’s inside-out black hole, that which lies beneath the event horizon of a black hole as conventionally conceived, and that which lies outside the light cone defined by our present. There may be other empirical unobservables that follow from the structure of relativistic space. There are, moreover, many empirically inaccessible points of view, such as the interiors of stars, which cannot be observed for contingent reasons distinct from the impossibility of observing certain structures of the world hidden from us by the nature of spacetime.
What if the greater part of the universe passes in the oblivion of the empirical unobservables? This is a question posed by a paper that appeared in 2007, The Return of a Static Universe and the End of Cosmology, which garnered some attention because of its quasi-apocalyptic claim of the “end of cosmology” (which sounds a lot like Heidegger’s proclamation of the “end of philosophy” or any number of other proclamations of the “end of x”). This paper was eventually published in Scientific American as The End of Cosmology? An accelerating universe wipes out traces of its own origins by Lawrence M. Krauss and Robert J. Scherrer.
In calling the “end of cosmology” a “quasi-apocalyptic” claim I don’t mean to criticize or ridicule the paper or its argument, which is of the greatest interest. As in the subtitle of the Scientific American article, it appears to be the case that an accelerating universe wipes out traces of its own origins. If a quasi-apocalyptic claim can be scientifically justified, it is legitimate and deserves our intellectual respect. Indeed, the study of existential risk could be considered a scientific study of apocalyptic claims, and I regard this as an undertaking of the first importance. We need to think seriously about existential risks in order to mitigate them rationally to the extent possible.
In my posts on the prediction and retrodiction walls (The Retrodiction Wall and Addendum on the Retrodiction Wall) I introduced the idea of effective history, which is that span of time which lies between the retrodiction wall in the past and the prediction wall in the future. One might similarly define effective cosmology as consisting of that region or those regions of space within the practical limits of observational cosmology, and excluding those regions of space that cannot be observed — not merely what is hidden from us by contingent circumstances, but that which we are incapable of observing because of the very structure of the universe and our place (ontologically speaking) within it.
There are limits to what we can know that are intrinsic to what we might call the human condition, except that this formulation is anthropocentric. The epistemic limits represented by effective history and effective cosmology are limitations that would hold for any sentient, conscious organism emergent from natural history, i.e., would hold for any peer species. Some of these limitations are limitations intrinsic to our biology and to the kind of mind that is emergent from biological organisms. Some of these limitations are limitations intrinsic to the world in which we find ourselves, and to the vantage point within the cosmos from which we view our world. Ultimately, these limitations are one and the same, as the kind of biological beings that we are is a function of the kind of cosmos in which we have emerged, and which has served as the context of our natural history.
Within the domains of effective history and effective cosmology, we are limited further still by the non-quantifiable aspects of the world noted above. Setting aside non-quantifiable aspects of the world, what I have elsewhere called intrinsically arithmetical realities are a paradigm case of what remains computable once we have separated out the non-computable exceptions. (Beyond the domains of effective history and effective cosmology, hence beyond the domain of computational omniscience, there lies the infinite context of our finite world, about which we will say no more at present.) Intrinsically arithmetical realities are intrinsically amenable to quantitative methods and are potentially exhaustible by computational omniscience.
Some have argued that the whole of the universe is intrinsically arithmetical in the sense of being essentially mathematical, as in the “Mathematical Universe Hypothesis” of Max Tegmark. Tegmark writes:
“[The Mathematical Universe Hypothesis] explains the utility of mathematics for describing the physical world as a natural consequence of the fact that the latter is a mathematical structure, and we are simply uncovering this bit by bit.”
Max Tegmark, “The Mathematical Universe”
Tegmark also explicitly formulates two companion principles:
External Reality Hypothesis (ERH): There exists an external physical reality completely independent of us humans.
Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure.
I find these formulations to be philosophically naïve in the extreme, but as a contemporary example of a perennial tradition of philosophical thought Tegmark is worth citing. Tegmark is seeking an explicit answer to Wigner’s famous question about the “unreasonable effectiveness of mathematics.” It is to be expected that some responses to Wigner will take the form that Tegmark represents, but even if our universe is a mathematical structure, we do not yet know how much of that mathematical structure is computable and how much of that mathematical structure is not computable.
In my Centauri Dreams post on SETI, METI, and Existential Risk I mentioned that I found myself unable to identify with either the proponents of unregulated METI or those who argue for the regulation of METI efforts, since I disagreed with key postulates on both sides of the argument. METI advocates typically hold that interstellar flight is impossible, therefore METI can pose no risk. Advocates of METI regulation typically hold that unintentional EM spectrum leakage is not detectable at interstellar distances, therefore METI poses a risk we do not face at present. Since I hold that interstellar flight is possible, and that unintentional EM spectrum radiation is (or will be) detectable, I can’t comfortably align myself with either party in the discussion.
I find myself similarly hamstrung on the horns of a dilemma when it comes to computability, the cosmos, and determinism. Computer scientists and singulatarian enthusiasts of exponentially increasing computer power, ultimately culminating in an intelligence explosion, seem content to assume that the universe is not only computable, presenting no fundamental barriers to computation, but foresee a day when matter itself is transformed into computronium and the whole universe becomes a grand computer. Criticism of such enthusiasts often takes the form of denying the possibility of AI or of machine consciousness, denying that this or that is technically possible, and so on. It seems clear to me that only a portion of the world will ever be computable, but that portion is considerable, and a great many technological developments will fundamentally change our relationship to the world. But no matter how much either human beings or machines are transformed by the continuing development of industrial-technological civilization, non-computable functions will remain non-computable. Thus I cannot count myself either as a singulatarian or a Luddite.
How are we to understand the limitations to computational omniscience imposed by the limits of computation? The transcomputational problem, rather than laying bare human limitations, points to the way in which minds are not subject to computational limits. Minds as minds do not function computationally, so the evolution of mind (which drives the evolution of civilization) embodies different bounds and different limits than the Bekenstein bound and Bremermann’s limit, as well as different possibilities and different opportunities. The evolutionary possibilities of the mind are radically distinct from the evolutionary possibilities of bodies subject to computational limits, even though minds are dependent upon the bodies in which they are embodied.
Bremermann’s limit is 10^93, which is somewhat arbitrary, but whether we draw the line here or elsewhere doesn’t really matter for the principle at stake. Embodied computing must run into intrinsic limits, e.g., from relativity — a computer that exceeded Bremermann’s limit by too much would be subject to relativistic effects that would mean that gains in size would reach a point of diminishing returns. Recent brain research has suggested that the human brain is already close to the biological limit for effective signal transmission within and between the various parts of the brain, so that a larger brain would not necessarily be smarter or faster or more efficient. Indeed, it has been pointed out that elephant and whale brains are larger than human brains, although the encephalization quotient is much higher in human beings despite the difference in absolute brain size.
The possible states of organic bodies easily exceed 10^93. The Wikipedia entry on the transcomputational problem says:
“The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state) the processing of the retina as a whole requires processing of more than 10^300,000 bits of information. This is far beyond Bremermann’s limit.”
This is just the eye alone. The body has far more nerve ending inputs than just those of the eye, and essentially a limitless number of outputs. So exhausting the possible computational states of even a relatively simple organism easily surpasses Bremermann’s limit and is therefore transcomputational. Some very simple organisms might not be transcomputational, given certain quantifiable parameters, but I think most complex life, and certainly things as complex as mammals, are radically transcomputational. Therefore the mind (whatever it is) is embodied in a transcomputational body, the possible states of which no computer could exhaustively calculate. The brain itself is radically transcomputational with its 100 billion neurons (each of which can take at minimum two distinct states, and possibly more).
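The arithmetic behind the retina figure is easy to check. A back-of-the-envelope sketch in Python (the constants are the ones quoted above, not new data):

    from math import log10

    RETINA_CELLS = 1_000_000      # light-sensitive cells, two states each
    TRANSCOMPUTATIONAL = 93       # Bremermann-derived threshold, ~10^93

    # The retina has 2^1,000,000 possible states; compare orders of magnitude.
    retina_exponent = RETINA_CELLS * log10(2)   # ~301,030

    print(f"retina states ~ 10^{retina_exponent:,.0f}")
    print(f"exceeds the threshold by ~10^{retina_exponent - TRANSCOMPUTATIONAL:,.0f}")

The retina alone thus exceeds the 10^93 threshold by some three hundred thousand orders of magnitude.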
Yet even machine embodiments can be computationally intractable (in the same way that organic bodies are computationally intractable), exceeding the possibility of exhaustively calculating every possible material state of the mechanism (on a molecular or atomic level). Thus the emergence of machine consciousness would also supervene upon a transcomputational embodiment. It is, at present, impossible to say whether a machine embodiment of consciousness would be a limitation upon that consciousness (because the embodiment is likely to be less radically transcomputational than the brain) or a facilitation of consciousness (because machines can be arbitrarily scaled up in a way that organic bodies cannot be).
Since the mind stands outside the possibilities of embodied computation, if machine consciousness emerges, machine embodiments will be as non-transparent to machine minds as organic embodiment is non-transparent to organic minds, but the machine minds, non-transparent to their embodiment as they are, will have access to energy sources far beyond any resources an organic body could provide. Such machine consciousness would not be bound by brute force calculation or linear models (as organic minds are not so bound), but would have far greater resources at its command for the development of its consciousness.
Since the body that today embodies mind already far exceeds Bremermann’s limit, and no machine as machine is likely to exceed this limit, machine consciousness emergent from computationally tractable bodies, rather than being super-intelligent in ways that biologically derived minds can never be, may on the contrary be a pale shadow of an organic mind in an essentially transcomputational body. This gives a whole new twist to the much-discussed idea of the mind’s embodiment.
Computation is not the be-all and end-all of mind; it is, in fact, only peripheral to mind as mind. If we had to rely upon calculation to make it through our day, we wouldn’t be able to get out of bed in the morning; most of the world is simply too complex to calculate. But we have a “work around” — consciousness. Marginalized as the “hard problem” in the philosophy of mind, or simply neglected in scientific studies, consciousness enables us to cut the Gordian Knot of transcomputability and to act in a complex world that far exceeds our ability to calculate.
Neither is consciousness the be-all and end-all of mind, although the rise of computer science and the increasing role of computers in our lives has led many to conclude that computation is primary and that it is consciousness that is peripheral. And, to be sure, in some contexts, consciousness is peripheral. In many of the same contexts of our EEA (environment of evolutionary adaptedness) in which calculation is impossible due to complexity, consciousness is also irrelevant because we respond by an instinct that is deeper than and other than consciousness. In such cases, the mechanism of instinct takes over, but this is a biologically specific mechanism, evolved to serve the purpose of differential survival and reproduction; it would be difficult to re-purpose a biologically specific mechanism for any kind of abstract computing task, and not particularly helpful either.
Consciousness is not the be-all and end-all not only because instinct largely circumvents it, but also because machines have a “work around” for consciousness just as consciousness is a “work around” for the limits of computability; mechanism is a “work around” for the inefficiencies of consciousness. Machine mechanisms can perform precisely those tasks that so tax organic minds as to be virtually unsolvable, in a way that is perfectly parallel to the conscious mind’s ability to perform tasks that machines cannot yet even approach — not because machines can’t do the calculations, but because machines don’t possess the “work around” ability of consciousness.
It is when computers have the “work around” capacity that conscious beings have that they will be in a position to effect an intelligence explosion. That is to say, machine consciousness is crucial to AI that is able to perform in the way that AI is expected to perform, though AI researchers tend to be dismissive of consciousness. If the proof of the pudding is in the eating, well, then it is consciousness that allows us to “chunk the proofs” (i.e., to divide the proof into individually manageable pieces) and get to the eating all the more efficiently.
. . . . .
. . . . .
. . . . .
27 November 2013
Immanuel Kant, in an often-quoted passage, spoke of, “…the starry heavens above me and the moral law within me.” Kant might have with equal justification spoken of the formal law within and the starry heavens above. There is a sense in which the formal laws of thought are the moral laws of the mind — in logic, a good thought is a rigorous thought — so that given sufficient latitude of translation, we can interpret Kant in this way — except that we know (as Nietzsche put it) that Kant was a moral fanatic à la Rousseau.
However we choose to interpret Kant, I would like to quote more fully from the passage in the Critique of Practical Reason where Kant invokes the starry heavens above and the moral law within:
“Two things fill the mind with ever new and increasing admiration and awe, the oftener and the more steadily we reflect on them: the starry heavens above and the moral law within. I have not to search for them and conjecture them as though they were veiled in darkness or were in the transcendent region beyond my horizon; I see them before me and connect them directly with the consciousness of my existence. The former begins from the place I occupy in the external world of sense, and enlarges my connection therein to an unbounded extent with worlds upon worlds and systems of systems, and moreover into limitless times of their periodic motion, its beginning and continuance. The second begins from my invisible self, my personality, and exhibits me in a world which has true infinity, but which is traceable only by the understanding, and with which I discern that I am not in a merely contingent but in a universal and necessary connection, as I am also thereby with all those visible worlds. The former view of a countless multitude of worlds annihilates as it were my importance as an animal creature, which after it has been for a short time provided with vital power, one knows not how, must again give back the matter of which it was formed to the planet it inhabits (a mere speck in the universe). The second, on the contrary, infinitely elevates my worth as an intelligence by my personality, in which the moral law reveals to me a life independent of animality and even of the whole sensible world, at least so far as may be inferred from the destination assigned to my existence by this law, a destination not restricted to conditions and limits of this life, but reaching into the infinite.”
Immanuel Kant, Critique of Practical Reason, 1788, translated by Thomas Kingsmill Abbott, Part 2, Conclusion
This passage is striking for many reasons, not least among them the degree to which Kant has assimilated the Copernican revolution, acknowledging Earth as a mere speck in the universe. Also particularly interesting is Kant’s implicit appeal to objectivity and realism, notwithstanding the fact that Kant himself established the tradition of transcendental idealism. Kant in this passage invokes the starry heavens above and the moral law within because they are independent of the individual …
Moreover, Kant identifies both the starry heavens above and the moral law within not only as objective and independent realities, but also as infinitistic. Just as Kant the idealist looks to the stars and the moral law in a realistic spirit, so Kant the proto-constructivist invokes the “…unbounded extent with worlds upon worlds” of the starry heavens and the moral law as, “…reaching into the infinite.” I have earlier and elsewhere observed how Kant’s proto-constructivism nevertheless involves spectacularly non-constructive arguments. In the passage quoted above both Kant’s proto-constructivism and his non-constructive moments are retained in lines such as, “exhibits me in a world which has true infinity,” which by invoking exhibition in intuition toes the constructivist line, while invoking true infinity allows a legitimate role for the non-constructive.
When it comes to constructivism, we can see that Kant is conflicted. He’s not the only one. One might call Aristotle the first constructivist (or, at least, the first proto-constructivist) as the originator of the idea of the potential infinite, and here (i.e., in the context of the above discussion of Kant’s use of the infinite) Aristotelian permissive finitism is particularly relevant. (Aristotle would likely not have had much sympathy for intuitionistic constructivism, with its rejection of tertium non datur.)
The Greek intellectual attitude to the infinite was complex and conflicted. I have written about this previously in Reason in Moderation and Salto Mortale. The Greek quest for harmony, order, and proportion rejected the infinite as something that transgresses the boundaries of good taste and propriety (dismissing the infinite as apeiron, in contradistinction to peras). Nevertheless, Greek philosophers routinely argued from the infinity and eternity of the world.
Here is a famous passage from Democritus, who was perhaps best known among the Greek philosophers for arguing for the infinity of the world, making the doctrine a virtual tenet among ancient atomists:
“Worlds are unlimited and of different sizes. In some worlds there is no Sun and Moon, in others, they are larger than in our world, and in others more numerous. … Intervals between worlds are unequal. In some parts there are more worlds, in others fewer; some are increasing, some at their height, some decreasing; in some parts they are arising, in others failing… There are some worlds devoid of living creatures or plants or any moisture.”
…and Epicurus on the same theme of the infinity of the world…
“…there is an infinite number of worlds, some like this world, others unlike it. For the atoms being infinite in number, as has just been proved, are borne ever further in their course. For the atoms out of which a world might arise, or by which a world might be formed, have not all been expended on one world or a finite number of worlds, whether like or unlike this one. Hence there will be nothing to hinder an infinity of worlds.”
Epicurus, Letter to Herodotus
There were also poetic invocations of the idea of the infinity of the world, which demonstrates the extent to which the idea had penetrated popular consciousness in classical antiquity:
“When Alexander heard from Anaxarchus of the infinite number of worlds, he wept, and when his friends asked him what was the matter, he replied, ‘Is it not a matter for tears that, when the number of worlds is infinite, I have not conquered one?’”
Plutarch, Plutarch’s Morals: Ethical Essays, translated with notes and index by Arthur Richard Shilleto, London: George Bell and Sons, 1898, “On Contentedness of Mind,” section IV
Like poetry, history had particular prestige in the ancient world, and here the theme of the infinity of the world also occurs:
“…Constantius, elated by this extravagant passion for flattery, and confidently believing that from now on he would be free from every mortal ill, swerved swiftly aside from just conduct so immoderately that sometimes in dictation he signed himself ‘My Eternity,’ and in writing with his own hand called himself lord of the whole world — an expression which, if used by others, ought to have been received with just indignation by one who, as he often asserted, laboured with extreme care to model his life and character in rivalry with those of the constitutional emperors. For even if he ruled the infinity of worlds postulated by Democritus, of which Alexander the Great dreamed under the stimulus of Anaxarchus, yet from reading or hearsay he should have considered that (as the astronomers unanimously teach) the circuit of whole earth, which to us seems endless, compared with the greatness of the universe has the likeness of a mere tiny point.”
Ammianus Marcellinus, Roman History, Book XV, section 1
Like the passage from Kant quoted above, this passage is remarkable for its Copernican outlook, which shows that the ancients were not only capable of thinking in infinitistic terms, but also in more-or-less Copernican terms.
Lucretius was a follower of Epicurus, and gave one of the more detailed arguments for the infinity of the world to be found in ancient philosophy:
It matters nothing where thou post thyself,
In whatsoever regions of the same;
Even any place a man has set him down
Still leaves about him the unbounded all
Outward in all directions; or, supposing
A moment the all of space finite to be,
If some one farthest traveller runs forth
Unto the extreme coasts and throws ahead
A flying spear, is’t then thy wish to think
It goes, hurled off amain, to where ’twas sent
And shoots afar, or that some object there
Can thwart and stop it? For the one or other
Thou must admit; and take. Either of which
Shuts off escape for thee, and does compel
That thou concede the all spreads everywhere,
Owning no confines. Since whether there be
Aught that may block and check it so it comes
Not where ’twas sent, nor lodges in its goal,
Or whether borne along, in either view
‘Thas started not from any end. And so
I’ll follow on, and whereso’er thou set
The extreme coasts, I’ll query, “what becomes
Thereafter of thy spear?” ‘Twill come to pass
That nowhere can a world’s-end be, and that
The chance for further flight prolongs forever
The flight itself. Besides, were all the space
Of the totality and sum shut in
With fixed coasts, and bounded everywhere,
Then would the abundance of world’s matter flow
Together by solid weight from everywhere
Still downward to the bottom of the world,
Nor aught could happen under cope of sky,
Nor could there be a sky at all or sun-
Indeed, where matter all one heap would lie,
By having settled during infinite time.
Lucretius, De rerum natura
The above argument is one that is still likely to be heard today, in various forms. If you go to the edge of the universe and throw a spear, either it is stopped by the boundary of the universe, or it continues on, and, as Lucretius says, For the one or other, Thou must admit. If the spear is stopped, what stopped it? And if it continues on, into what does it continue?
Contemporary relativistic cosmology has a novel answer to this ancient idea: the universe is finite and unbounded, so that space is wrapped back around on itself. What this means for the spear-thrower at the edge of the universe is that if he throws the spear with enough force, it may travel around the cosmos and return to pierce him in the back. There is nothing to stop the spear, because the universe is unbounded, but since the universe is also finite the spear will eventually cross its own path if it continues to travel. I do not myself think that the universe is finite and unbounded in precisely the way that many modern cosmologists argue, but I am not going to go into this interesting problem at the present time.
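The standard model for “finite and unbounded” is the 3-sphere, a textbook example rather than a claim about which topology our universe actually has. The 3-sphere of radius $R$,

\[
S^3 = \{\, x \in \mathbb{R}^4 : \lVert x \rVert = R \,\}, \qquad V(S^3) = 2\pi^2 R^3 ,
\]

has finite volume but no boundary, and a geodesic (the path of the spear) closes on itself after a length of $2\pi R$, which is why a spear thrown hard enough could return to pierce the thrower in the back.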
Other than the response to Lucretius in terms of relativistic cosmology, with its curved spacetime — a material response to the Lucretian argument for the infinity of the world — there is another response, that of intuitionistic constructivism, which denies the law of the excluded middle (tertium non datur) — i.e., a formal response to Lucretius. Lucretius asserted that, For the one or other, Thou must admit, and this is exactly what the intuitionist does not admit. As with the relativistic response to Lucretius, I do not myself agree with the intuitionist response to Lucretius, and I believe that Lucretius’s argument is still valid in spirit, though it must be reformulated in order to be applicable to the world as revealed to us by contemporary science. Consequently, I take it as demonstrable that the universe is infinite, following the view of the ancient natural philosophers.
Within the overall context of Greek thought, within its contending finitist and infinitistic strains, Greek cosmology was non-constructive, and the Greeks asserted (and argued for) the infinity of the world on the basis of non-constructive argument. Perhaps it would even be fair to say that the Greeks assumed the universe to be infinite in extent, and they at times sought to justify this assumption by philosophical argument, while at other times they confined themselves to the sphere of the peras.
Much of contemporary science is constructivist in spirit, though this constructivism is rarely made explicit, except among logicians and mathematicians. By this I mean that the general drift of science ever since the scientific revolution has been toward bottom-up constructions on the basis of quantifiable evidence and away from top-down argument. I made this point previously in Advanced Thinking and A Non-Constructive World, as well as other posts, though I haven’t yet given a detailed formulation of this idea. Yet the emergence of a “quantum logic” in quantum theory that does away with the principle of the excluded middle is a clear expression of the increasing constructivism of science.
In A Non-Constructive World I also made the point that the world appears to have both constructive and non-constructive features. In several posts about constructivism (e.g., P or not-P) I have argued that constructivism and non-constructivism are complementary perspectives on formal thought, and that each needs the other for an adequate account of the world.
In so far as contemporary science is essentially constructive, it lacks a non-constructive perspective on the phenomena it investigates. This is, I believe, intrinsic to science, and to the kind of civilization that emerges from the application of science to the economy (viz. industrial-technological civilization). By the constructive methods of science we can attain ever larger and ever more comprehensive conceptions of the universe — such as I described in my previous post, The Size of the World — but these constructive methods will never reach the infinite universe contemplated by the ancient Greeks.
How could the logical framework employed by a scientist have any effect on what they see in the heavens? Well, constructive science is logically incapable of formulating the idea of an infinite universe in any sense other than an Aristotelian potential infinite. No one can observe the infinite (in the philosophy of mathematics we say that the infinite is “unsurveyable”). And if you cannot produce observational evidence of the infinite, then you cannot formulate a falsifiable theory of an infinite universe. Thus the infinity of the world is, in effect, ruled out by our methods.
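The distinction can be made precise. The potential infinite is the claim that every bound can be exceeded,

\[
(\forall n)(\exists m)\, m > n ,
\]

each instance of which is finitely verifiable, whereas the actual infinite asserts a completed totality, as when the set of natural numbers is assigned the cardinality $\lvert \mathbb{N} \rvert = \aleph_0$. Observation can confirm instances of the former; no observation can survey the latter.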
No one should be surprised at the direct impact the ethos of formal thought has upon the natural sciences; one of the fundamental trends of the scientific revolution has been the mathematization of natural science, and one of the fundamental trends of mathematical rigor since the late nineteenth century has been the arithmetization of analysis, which has been taken as far as the logicization of mathematics. Logic and mathematics have been “finitized” and these finite formal methods have been employed in the rational reconstruction of the sciences.
I look forward to the day when the precision and rigor of formal methods employed in the natural sciences again includes infinitistic methods, and it once again becomes possible to formulate the thesis of the infinity of the world in science — and possible once again to see the world as infinite.
. . . . .
. . . . .
. . . . .
. . . . .
26 October 2013
In my last post, The Retrodiction Wall, I introduced several ideas that I think were novel, among them:
● A retrodiction wall, complementary to the prediction wall, but in the past rather than the future
● A period of effective history lying between the retrodiction wall in the past and the prediction wall in the future; beyond the retrodiction and prediction walls lies inaccessible history that is not a part of effective history
● A distinction between diachronic and synchronic prediction walls, that is to say, a distinction between the prediction of succession and the prediction of interaction
● A distinction between diachronic and synchronic retrodiction walls, that is to say, a distinction between the retrodiction of succession and the retrodiction of interaction
I also implicitly formulated a principle, though I didn’t give it any name, parallel to Einstein’s principle (also without a name) that mathematical certainty and applicability stand in inverse proportion to each other: historical predictability and historical relevance stand in inverse proportion to each other. When I can think of a good name for this I’ll return to this idea. For the moment, I want to focus on the prediction wall and the retrodiction wall as the boundaries of effective history.
In The Retrodiction Wall I made the assertion that, “Effective history is not fixed for all time, but expands and contracts as a function of our knowledge.” An increase in knowledge allows us to push the boundaries of the prediction and retrodiction walls outward, just as a diminution of knowledge means the contraction of the prediction and retrodiction boundaries of effective history.
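A minimal schematic formalization of this claim (the notation is my own, introduced only for clarity): if $K$ measures our knowledge at the present moment $t_0$, then effective history is the interval

\[
E(K) = [\, t_0 - r(K),\; t_0 + p(K) \,],
\]

where the retrodiction depth $r$ and the prediction depth $p$ are nondecreasing functions of $K$, so that the interval expands as knowledge grows and contracts as knowledge is lost.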
We can go farther than this if we interpolate a more subtle and sophisticated conception of knowledge and prediction, and we can find this more subtle and sophisticated understanding in the work of Frank Knight, which I previously cited in Existential Risk and Existential Uncertainty. Knight made a tripartite distinction between prediction (or certainty), risk, and uncertainty. Here is the passage from Knight that I quoted in Addendum on Existential Risk and Existential Uncertainty:
1. A priori probability. Absolutely homogeneous classification of instances completely identical except for really indeterminate factors. This judgment of probability is on the same logical plane as the propositions of mathematics (which also may be viewed, and are viewed by the writer, as “ultimately” inductions from experience).
2. Statistical probability. Empirical evaluation of the frequency of association between predicates, not analyzable into varying combinations of equally probable alternatives. It must be emphasized that any high degree of confidence that the proportions found in the past will hold in the future is still based on an a priori judgment of indeterminateness. Two complications are to be kept separate: first, the impossibility of eliminating all factors not really indeterminate; and, second, the impossibility of enumerating the equally probable alternatives involved and determining their mode of combination so as to evaluate the probability by a priori calculation. The main distinguishing characteristic of this type is that it rests on an empirical classification of instances.
3. Estimates. The distinction here is that there is no valid basis of any kind for classifying instances. This form of probability is involved in the greatest logical difficulties of all, and no very satisfactory discussion of it can be given, but its distinction from the other types must be emphasized and some of its complicated relations indicated.
Frank Knight, Risk, Uncertainty, and Profit, Chap. VII
This passage from Knight’s book (as the entire book) is concerned with applications to economics, but the kernel of Knight’s idea can be generalized beyond economics to generally represent different stages in the acquisition of knowledge: Knight’s a priori probability corresponds to certainty, or that which is so exhaustively known that it can be predicted with precision; Knight’s statistical probability corresponds with risk, or partial and incomplete knowledge, or that region of human knowledge where the known and unknown overlap; Knight’s estimates correspond to unknowns or uncertainty.
Knight formulates his tripartite distinction between certainty, risk, and uncertainty exclusively in the context of prediction, and just as Knight’s results can be generalized beyond economics, so too Knight’s distinction can be generalized beyond prediction to also embrace retrodiction. In The Retrodiction Wall I generalized John Smart’s exposition of a prediction wall in the future to include a retrodiction wall in the past, both of which together define the boundaries of effective history. These two generalizations can be brought together.
A prediction wall in the future or a retrodiction wall in the past are, as I noted, functions of knowledge. That means we can understand this “boundary” not merely as a threshold that is crossed, but also as an epistemic continuum that stretches from the completely unknown (the inaccessible past or future that lies utterly beyond the retrodiction or prediction wall) through an epistemic region of prediction risk or retrodiction risk (where predictions or retrodictions can be made, but are subject to at least as many uncertainties as certainties), to the completely known, in so far as anything can be completely known to human beings, and therefore well understood by us and readily predictable.
Introducing and integrating distinctions between prediction and retrodiction walls, and among prediction, risk and uncertainty, gives a much more sophisticated and therefore epistemically satisfying structure to our knowledge and how it is contextualized in the human condition. The facts that we find ourselves, in medias res, living in a world that we must struggle to understand, and that this understanding is an acquisition of knowledge that takes place in time, which is asymmetrical as regards the past and future, are important features of how we engage with the world.
This process of making our model of knowledge more realistic by incorporating distinctions and refinements is not yet finished (nor is it ever likely to be). For example, the unnamed principle alluded to above — that of the inverse relation between historical predictability and relevance — suggests that the prediction and retrodiction walls can be penetrated unevenly, and that our knowledge of the past and future is not consistent across space and time, but varies considerably. An inquiry that could demonstrate this in any systematic and schematic way would be more complicated than the above, so I will leave this for another day.
. . . . .
. . . . .
. . . . .
23 October 2013
Prediction in Science
One of the distinguishing features of science as a system of thought is that it makes testable predictions. The fact that scientific predictions are testable suggests a methodology of testing, and we call the scientific methodology of testing experiment. Hypothesis formation, prediction, experimentation, and resultant modification of the hypothesis (confirmation, disconfirmation, or revision) are all essential elements of the scientific method, which constitutes an escalating spiral of knowledge as the scientific method systematically exposes predictions to experiment and modifies its hypotheses in the light of experimental results, which leads in turn to new predictions.
The escalating spiral of knowledge that science cultivates naturally pushes that knowledge into the future. Sometimes scientific prediction is even formulated in reference to “new facts” or “temporal asymmetries” in order to emphasize that predictions refer to future events that have not yet occurred. In constructing an experiment, we create a new set of facts in the world, and then interpret these facts in the light of our hypothesis. It is this testing of hypotheses by experiment that establishes the concrete relationship of science to the world, and this is also a source of limitation, for experiments are typically designed in order to focus on a single variable, and to that end an attempt is made to control for the other variables. (A system of thought that is not limited by the world is not science.)
Alfred North Whitehead captured this artificial feature of scientific experimentation in a clever line that points to the difference between scientific predictions and predictions of a more general character:
“…experiment is nothing else than a mode of cooking the facts for the sake of exemplifying the law. Unfortunately the facts of history, even those of private individual history, are on too large a scale. They surge forward beyond control.”
Alfred North Whitehead, Adventures of Ideas, New York: The Free Press, 1967, Chapter VI, “Foresight,” p. 88
There are limits to prediction, and not only those pointed out by Whitehead. The limits to prediction have been called the prediction wall. Beyond the prediction wall we cannot penetrate.
The Prediction Wall
John Smart has formulated the idea of a prediction wall in his essay, “Considering the Singularity,” as follows:
With increasing anxiety, many of our best thinkers have seen a looming “Prediction Wall” emerge in recent decades. There is a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex and yet amazingly stable technological systems, erected like vast beehives, systems tended to by large swarms of only partially aware human beings, each of which has only a very limited conceptualization of the new technological environment that we have constructed.
Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn’t technologically dependent these days?), where the ten-year business plans of the 1950′s have been replaced with ten-week (quarterly) plans of the 2000′s, and where planning beyond two years in some fields may often be unwise speculation. But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In “Science Fiction Without the Future,” 2001, Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It’s as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.) © 2003 by John Smart (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)
I would like to suggest that there are at least two prediction walls: synchronic and diachronic. The prediction wall formulated above by John Smart is a diachronic prediction wall: it is the onward-rushing pace of events, one following the other, that eventually defeats our ability to see any recognizable order or structure in the future. The kind of prediction wall to which Whitehead alludes is a synchronic prediction wall, in which it is the outward eddies of events in the complexity of the world’s interactions that make it impossible for us to give a complete account of the consequences of any one action. (Cf. Axes of Historiography)
Retrodiction and the Historical Sciences
Science does not live by prediction alone. While some philosophers of science have questioned the scientificity of the historical sciences because they could not make testable (and therefore falsifiable) predictions about the future, it is now widely recognized that the historical sciences don’t make predictions, but they do make retrodictions. A retrodiction is a prediction about the past.
The Oxford Dictionary of Philosophy by Simon Blackburn (p. 330) defines retrodiction thusly:
retrodiction The hypothesis that some event happened in the past, as opposed to the prediction that an event will happen in the future. A successful retrodiction could confirm a theory as much as a successful prediction.
As with predictions, there is also a limit to retrodiction, and this is the retrodiction wall. Beyond the retrodiction wall we cannot penetrate.
I haven’t been thinking about this idea for long enough to fully understand the ramifications of a retrodiction wall, so I’m not yet clear about whether we can distinguish diachronic retrodiction from synchronic retrodiction. Or, rather, it would be better to say that the distinction can certainly be made, but that I cannot think of good contrasting examples of the two at the present time.
We can define a span of accessible history that extends from the retrodiction wall in the past to the prediction wall in the future as what I will call effective history (by analogy with effective computability). Effective history can be defined in a way that is closely parallel to effectively computable functions, because all of effective history can be “reached” from the present by means of finite, recursive historical methods of inquiry.
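The analogy with effective computability suggests a toy formalization, sketched below; the model and all its numbers are purely illustrative assumptions of mine, not anything argued for above. Events are nodes, methods of inference (retrodictive toward the past, predictive toward the future) are edges with confidence factors, and effective history is whatever the present can reach by finitely many chained inferences before confidence decays below a threshold, which also captures the continuum from certainty through risk to the walls.

```python
from collections import deque

# Toy model: nodes are events; directed edges are methods of inference,
# each tagged with a confidence factor in [0, 1]. All values invented.
edges = {
    "present":         [("written_records", 0.9), ("next_decade", 0.8)],
    "written_records": [("archaeology", 0.7)],
    "archaeology":     [("big_bang", 0.3)],
    "big_bang":        [("before_big_bang", 0.0)],  # the retrodiction wall
    "next_decade":     [("next_century", 0.4)],
    "next_century":    [("singularity", 0.05)],     # the prediction wall
}

def effective_history(start: str, threshold: float = 0.1) -> dict:
    """Events reachable from the present with compounded confidence
    above the threshold, by finitely many chained inferences."""
    reached = {start: 1.0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt, factor in edges.get(node, []):
            conf = reached[node] * factor
            if conf > threshold and conf > reached.get(nxt, 0.0):
                reached[nxt] = conf
                queue.append(nxt)
    return reached

print(effective_history("present"))
# 'before_big_bang' and 'singularity' never appear: they lie beyond the walls.
```

On this toy model the walls are not absolute barriers but the points at which compounded confidence decays to nothing, and the span of effective history grows or shrinks as the confidence factors, which is to say our knowledge, change.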
Effective history is not fixed for all time, but expands and contracts as a function of our knowledge. At present, the retrodiction wall is the Big Bang singularity. If anything preceded the Big Bang singularity we are unable to observe it, because the Big Bang itself effectively obliterates any observable signs of any events prior to itself. (Testable theories have been proposed that suggest the possibility of some observable remnant of events prior to the Big Bang, as in conformal cyclic cosmology, but this must at present be regarded as only an early attempt at such a theory.)
Prior to the advent of scientific historiography as we know it today, the retrodiction wall was fixed at the beginning of the historical period narrowly construed as written history, and at times the retrodiction wall has been quite close to the present. When history experiences one of its periodic dark ages that cuts it off from its historical past, little or nothing may be known of a past that was once familiar to everyone in a given society.
The emergence of agrarian-ecclesiastical civilization effectively obliterated human history before itself, in a manner parallel to the Big Bang. We know that there were caves that prehistorical peoples visited generation after generation for time out of mind, over tens of thousands of years — much longer than the entire history of agrarian-ecclesiastical civilization, and yet all of this was forgotten as though it had never happened. This long period of prehistory was entirely lost to human memory, and was not recovered again until scientific historiography discovered it through scientific method and empirical evidence, and not through the preservation of human memory, from which prehistory had been eradicated. And this did not occur until after agrarian-ecclesiastical civilization had lapsed and entirely given way to industrial-technological civilization.
We cannot define the limits of the prediction wall as readily as we can define the limits of the retrodiction wall. Predicting the future in terms of overall history has been more problematic than retrodicting the past, and equally subject to ideological and eschatological distortion. The advent of modern science compartmentalized scientific predictions and made them accurate and dependable — but at the cost of largely severing them from overall history, i.e., human history and the events that shape our lives in meaningful ways. We can make predictions about the carbon cycle and plate tectonics, and we are working hard to be able to make accurate predictions about weather and climate, but, for the most part, our accurate predictions about the future dispositions of the continents do not shape our lives in the near- to mid-term future.
I have previously quoted a famous line from Einstein: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We might paraphrase this Einstein line in regard to the relation of mathematics to the world, and say that as far as scientific laws of nature predict events, these events are irrelevant to human history, and in so far as predicted events are relevant to human beings, scientific laws of nature cannot predict them.
Singularities Past and Future
As the term “singularity” is presently employed — as in the technological singularity — the recognition of a retrodiction wall in the past complementary to the prediction wall in the future provides a literal connection between the historiographical use of “singularity” and the use of the term “singularity” in cosmology and astrophysics.
Theorists of the singularity hypothesis place a “singularity” in the future which constitutes an absolute prediction wall beyond which history is so transformed that nothing beyond it is recognizable to us. This future singularity is not the singularity of astrophysics.
If we recognize the actual Big Bang singularity in the past as the retrodiction wall for cosmology — and hence, by extension, for Big History — then an actual singularity of astrophysics is also at the same time an historical singularity.
. . . . .
I have continued my thoughts on the retrodiction wall in Addendum on the Retrodiction Wall.
. . . . .
. . . . .
. . . . .
10 October 2013
Life Lessons from Morally Compromised Philosophers
With particular attention to the Heidegger case
I began this blog with the idea that I would write about current events from a philosophical perspective and said in my initial post that I wanted to see history through the prism of ideas. This continues to be my project, however imperfectly conceived or unevenly executed. It is a project that necessitates engagement both with the world and with philosophy simultaneously. And so it is that my posts have ranged widely over warfare and the history of ideas, inter alia, and as a consequence of this dual mandate I have often found myself reading and citing sources that are not the common run of reading for philosophers. Some philosophers, however, are both influential and controversial, and Martin Heidegger has become one such philosopher. Heidegger’s influence in philosophy has only grown since his death (primarily in Continental thought), but the controversy about his involvement with Nazism has kept pace and grown along with Heidegger’s reputation.
It may help my readers in the US to understand the impact of the Heidegger controversy to compare it to the intersection of evil and ideals in an iconic American thinker, taking as our example a man more familiar than Heidegger, who was an iconic continental thinker. Take Thomas Jefferson, for example. Some years ago (in 1998, to be specific) I saw two television documentaries about the life of Thomas Jefferson. The first was a typical laudatory television documentary about one of the American founding fathers (I didn’t take notes at the time, so I don’t know which documentary this was, but it may well have been the 1997 Ken Burns film about Jefferson, which I recently re-watched to confirm my memory of its ambiguous treatment of Jefferson’s relationship to his slaves), which touched upon the possibility of Jefferson fathering children by his slave Sally Hemmings, while not taking the idea very seriously.
Then in 1998 the news came out of DNA tests that proved conclusively that Jefferson had fathered the children of his slave Sally Hemmings, and the scientific nature of the evidence rapidly made inroads among Jefferson scholars, who had been slow to acknowledge Jefferson’s “shadow family” (as such families were once called in the Ante-Bellum South). The consensus of Jefferson scholars changed so rapidly that it makes one’s head spin — but only after two hundred years of denial. And there remain those today who continue to deny Jefferson’s paternity of Sally Hemmings’ children.
Not long after this news was made public, I saw another documentary about Jefferson in which the whole issue was treated very differently; the perspective of this documentary accepted as unproblematic Jefferson’s paternity of Sally Hemmings’ children, and examined Jefferson’s life and ideas in the light of this “shadow family.” I don’t think that Jefferson suffered at all from this latter documentary treatment; he definitely came across less as an icon and more as a fallible human being, which is not at all objectionable. It is, in fact, more human, and more believable.
Though Jefferson did not suffer in my estimation because he was revealed to be human, all-too-human, there is nevertheless something deeply disturbing about the image of Jefferson sitting down to dinner with his white family while being served at dinner by the mulatto children he sired with his slave, and it is deeply disturbing in a way that is not at all unlike the way that it is deeply disturbing to know that when Heidegger met Karl Löwith in 1936 near Rome (two years after Heidegger left his Rectorship in Freiburg), Heidegger wore a Nazi swastika pin on his lapel the entire time, knowing that Löwith was a Jew who had been forced to flee Nazi Germany. One cannot but wonder, on a purely human level, apart from any ideology, how one person could be so utterly unconcerned with the well-being of another.
It would be disingenuous to attempt to defend the indefensible by making the claim that all intellectuals of Jefferson’s time were conflicted over slavery; this simply was not the case. Schopenhauer, for example, consistently wrote against slavery and never showed the slightest sign of wavering on the issue, but, of course, Schopenhauer’s income did not depend on slaves, while Jefferson’s did.
We know that Jefferson struggled mightily with the question of slavery in his later years, as is the case with most conflicted men, tying themselves in knots trying to square the actual record of their lives with their ideals. It is easy to dismiss individuals, even those who have struggled with the contradictions in their life, as mere hypocrites, but the charge of hypocrisy, while carrying great emotional weight, is the least interesting charge that can be made against a man’s ideas. As I wrote in my Variations on the Theme of Life, “The world is mendacious through and through; mendacity is the human condition. To renounce hypocrisy is to renounce the world and to institute an asceticism that cannot ever be realized in practice.” (section 169)
Heidegger does not seem to have been conflicted about his Nazism in the way that Jefferson was conflicted about slavery. Many years after the Second World War, when the record of Nazi death camps was known to all, Heidegger could still refer to the “inner truth and greatness of this movement,” while in the meeting with Löwith mentioned above Heidegger was quite explicit that his political engagement with Nazism was a direct consequence of his philosophical views.
One obvious and well-trodden path for handling a philosopher’s political “indiscretions” is to hold that a philosopher’s theoretical works are a thing apart, elevated above the world like Plato’s Forms — one might even say sublated in the Hegelian sense: at once elevated, suspended, and canceled. This strategy allows one to read any philosopher and ignore any detail of life that one chooses. I don’t think that this constitutes a good contribution to intellectual honesty.
I myself was once among those who read philosophers for their philosophical ideas only, and while I was never a Heidegger enthusiast or a Heidegger defender, I thought of Heidegger’s political engagement with Nazism as mostly irrelevant to his philosophy. At some point that I don’t clearly recall, I became intensely interested in Heidegger’s Nazism, and there was a flood of books telling the whole sorry story to feed my interest: Heidegger And Nazism by Victor Farias, which was the book that opened Heidegger’s Nazi past to scrutiny, On Heidegger’s Nazism and Philosophy by Tom Rockmore, The Heidegger Controversy: A Critical Reader edited by Richard Wolin, Heidegger’s Crisis: Philosophy and Politics in Nazi Germany by Hans Sluga, Heidegger, philosophy, Nazism by Julian Young, The Shadow of that Thought by Dominique Janicaud, and, most recent and perhaps the most devastating of them all, Heidegger: The Introduction of Nazism into Philosophy in Light of the Unpublished Seminars of 1933-1935 by Emmanuel Faye.
Even with all this material now available on Heidegger’s Nazi past, Heidegger still has his apologists and defenders. Beyond the steadfast apologists for Heidegger — who are perhaps more compromised than Heidegger himself — there are a variety of strategies to excuse Heidegger from his involvement with the Nazis, as when Heidegger’s Nazism is called an “episode” or a “period,” or characterized as “compromise, opportunism, or cowardice” (as in Julian Young’s Heidegger, philosophy, Nazism, p. 4). Young also uses the terms conviction, commitment, and flirtation, though Young ultimately exculpates Heidegger, asserting that, “…neither the early philosophy of Being and Time, nor the later, post-war philosophy, nor even the philosophy of the mid-1930s — works such as the Introduction to Metaphysics with respect to which critics often feel themselves to have an open-and-shut case — stand in any essential connection to Nazism.” (Op. cit., p. 5)
Heidegger’s engagement with fascism represents the point at which Heidegger’s ideas demonstrate their relationship to the ordinary business of life, and this is a conjuncture of the first importance. This is, indeed, identical to the task I set myself in writing this blog: to demonstrate the relationship between life and ideas. And Heidegger, I came to realize, was a particularly clear and striking case of the intersection of life and thought, though not the kind of example that most philosophers would want to claim as their own. I can fully understand why a philosopher would simply prefer to distance themselves from Heidegger and, while not denying Heidegger’s Nazism, would choose not to talk about it either. But that Heidegger thereby becomes a problem for philosophy and philosophers is precisely what makes him interesting. We philosophers must claim Heidegger as one of our own, even if we are sickened by his Nazism, which was no mere “flirtation” or “episode,” but constituted a life-long commitment.
Heidegger was not merely a Nazi ideologue, but also briefly a Nazi official. The Nazification of the professions was central to the strategy of Nazi social revolution (with its own professional institution, the Ahnenerbe), and a willing collaborator such as Heidegger, prepared to Nazify a university, was a valuable asset to the Nazi party. Ultimately, however, Heidegger was embroiled in an internal conflict within the Nazi party, and when the SA was purged and many of its leaders killed on the Night of the Long Knives, the Strasserist SA faction lost out decisively, and Heidegger with them. Thereafter Heidegger was watched by the Nazi party, and Heidegger defenders have used this party surveillance to argue that Heidegger was regarded as a subversive by the Nazi party. He was a subversive, in fact, but only because he represented a faction of Nazism that had been suppressed. Heidegger continued as a Nazi party member, and paid his party dues right up to the end of the war. We see, then, that the SA purge was not merely a brutal struggle for power within the Nazi party, but also an episode in the history of ideas. This is interesting and important, even if it is also horrific.
The more carefully we study Heidegger’s philosophy, and read it in relation to his life, the more we can understand the relation of even the most subtle and sophisticated philosophy to ideological commitment and to the ordinary business of life. And it wasn’t only Heidegger who compromised himself. There is Frege’s political diary, less well known than Heidegger’s political views, and the much more famous case of Sartre and Camus. There are at least two book-length studies of the public quarrel and falling-out between Sartre and Camus (Sartre and Camus: A Historic Confrontation and Camus and Sartre: The Story of a Friendship and the Quarrel that Ended It by Ronald Aronson). Camus most definitely comes off looking better in this quarrel, with Sartre, the sophisticated technical philosopher, looking like a party-line communist, and Camus, the writer, the literary man, showing true independence of spirit. The political lives of Camus and Sartre have been written about extensively, but even so Heidegger remains an interesting case because of the impenetrable complexity of his thought and the manifest horrors of the regime he served. There ought to be a disconnect here, but there isn’t, and this, again, is interesting and important even if it is horrific.
I have had to ask myself if my interest in Heidegger’s Nazism is prurient (in so far as there is a purely intellectual sense of “prurient”). There is something a little discomfiting about becoming fascinated by studying a great philosopher’s engagement with fascism. I am not innocent in this either. I, too, am a morally compromised philosopher. Perhaps the most I can hope for is to be aware of what I am involved in by making a careful study of philosophy’s involvement in politics. Naïveté strikes me as inexcusable in this context. I hope I have not been naïve.
I have not scrupled to read, to think about, and to quote individuals who were not only ideologically associated with crimes of unprecedented magnitude, but who have personally carried out capital crimes. In the case of Theodore “Ted” Kaczynski, who was personally responsible for several murders, I have carefully read his manifesto, Industrial Society and its Future (read it several times through, in fact), have thought about it, and have quoted it. Others who have been influenced by Kaczynski’s work and have publicly discussed it have felt the need to apologize for it, like scientists who consider using the research of Nazi doctors. But an apology feels like an excuse. I don’t want to make excuses.
Heidegger, like Nazism itself, is a lesson from history. We can benefit from studying Heidegger by learning how the most sophisticated philosophical justifications can be formulated for the most vulgar and the most reprehensible of purposes. But we cannot learn the lesson without studying the lesson. Studying the lessons of history may well corrupt us. That is a danger we must confront, and a risk we must take.
. . . . .
. . . . .
. . . . .
6 October 2013
What is astrobiology?
I suppose that “astrobiology” could be called one of those “ten dollar” words, but despite being a long word of six syllables and a dozen letters, it can be defined quite simply.
Astrobiology has been called “the study of life in space” (Mix, Life in Space: Astrobiology for Everyone, 2009), and it has been said that “Astrobiology… removes the distinction between life on our planet and life elsewhere.” (Plaxco and Gross, Astrobiology: A Brief Introduction, 2006) Taking these sententious formulations together (astrobiology as the study of life in space, which removes the distinction between life on our planet and life elsewhere) gives us a new perspective with which to view life on Earth (and beyond).
There are, of course, longer and more detailed definitions of astrobiology. There are two in particular that I have cited in previous posts:
“The study of the living universe. This field provides a scientific foundation for a multidisciplinary study of (1) the origin and distribution of life in the universe, (2) an understanding of the role of gravity in living systems, and (3) the study of the Earth’s atmospheres and ecosystems.”
from the NASA strategic plan of 1996, quoted in Steven J. Dick and James E. Strick, The Living Universe: NASA and the Development of Astrobiology, 2005
“Astrobiology is the study of the origin, evolution, distribution, and future of life in the universe. This multidisciplinary field encompasses the search for habitable environments in our Solar System and habitable planets outside our Solar System, the search for evidence of prebiotic chemistry and life on Mars and other bodies in our Solar System, laboratory and field research into the origins and early evolution of life on Earth, and studies of the potential for life to adapt to challenges on Earth and in space.”
from the NASA astrobiology website
I cited these two definitions of astrobiology from NASA in Eo-, Eso-, Exo-, Astro- and other posts in which I used parallel formulations to define astrocivilization.
Learning to take the astrobiological point of view
Astrobiology is island biogeography writ large.
This is one of the few “tweets” I’ve written that was “re-tweeted” multiple times (I’m not very popular on Twitter.) After I wrote this I began a more extensive blog post on this theme, but didn’t finish it; the topic rapidly became too large and started to look like a book rather than a post. Then last month I posted this on Twitter:
In the same way that Darwin provided a new perspective on life, astrobiology provides a novel perspective that allows us to see life anew.
Recently I’ve also been referring to astrobiology with increasing frequency in my blog posts, and I referenced astrobiology in my 2012 presentation at the 100YSS symposium in Houston and just last month in my presentation at the Icarus Interstellar Starship Congress in Dallas.
It will be apparent to the reader, then, that the idea of astrobiology has been slowly growing on me for the past few years, and the more I think about it, the more I come to realize the fundamentally new perspective that astrobiology offers on life and its evolution. Moreover, astrobiology is also suggestive of the future of life, and of what we will discover about life the more we explore the cosmos.
Astrobiology: the Fourth Revolution in the Life Sciences
The more I think about astrobiology, the more I realize that, like earlier revolutions in the life sciences, the astrobiological point of view gives a novel perspective on familiar facts, and in so doing it potentially orients science in a new direction. For this reason I now see astrobiology as the fourth of four revolutions that instantiated the life sciences in their present form and continue to shape the way that we think about biology and the living world.
Here is my list of the four major revolutions in biological thought that have shaped the life sciences:
● Natural selection Independently discovered by Charles Darwin and Alfred Russel Wallace, natural selection gave sharpness of focus to many vague evolutionary ideas that were being circulated in the nineteenth century. With natural selection, biology had a theory by which to work, that could unify biological thought in a way that had not previously been possible. Of the Darwinian revolution Harald Brüssow wrote, “How can biologists cope conceptually and technically with this enormous species number? A deep sigh of relief came for biologists already in 1859 with the publication of Charles Darwin’s book ‘On the Origin of Species’. Suddenly, biologists had a unifying theory for their branch of science. One could even argue that the holy grail of a great unifying theory was achieved by Darwin and Wallace at a time when Maxwell was unifying physics, the older sister of biology, at the level of the electromagnetic field theory.” (“The not so universal tree of life or the place of viruses in the living world” Phil. Trans. R. Soc. B, 2009, 364, 2263–2274)
● Genetics After Darwin and Wallace came Gregor Mendel, who solved fundamental problems in the theory of inheritance and so greatly strengthened the Darwinian theory of descent with modification. As Darwin had provided the mechanism for the overall structure of life, Mendel provided the mechanism of inheritance that makes natural selection possible. Mendel’s work, contemporaneous with Darwin’s, was forgotten and not rediscovered until the early twentieth century. It was not until the middle of the twentieth century that Crick and Watson were able to delineate the structure of DNA, which made it possible to describe Mendelian genetics on a molecular level, thus making possible molecular biology.
● Evo-devo Evo-devo, which is a contraction of evolutionary developmental biology, once again went back to the roots of biology (as Darwin had done by formulating a fundamental theory, and as Mendel had done by his careful study of inheritance in pea plants), and returned the study of embryology to the center of attention of evolutionary biology. Studying the embryology of organisms with the tools of molecular biology gave (and continues to give) new insights into the fine structure of life’s evolution. Before evo-devo, few if any suspected that the homology that Darwin and others noted on a macro-biological scale (the structural similarity of the hand of a man, the wing of a bat, and the flipper of a dolphin) would be reducible to homology on a genetic level, but evo-devo has demonstrated this in remarkable ways, and in so doing has further underlined the unity of all terrestrial life.
● Astrobiology Astrobiology now lifts life out of its exclusively terrestrial context and studies life in its cosmological context. We have known for some time that climate is a major driver of evolution, and that climatology is in turn largely driven by the vicissitudes of the Earth as the Earth orbits the sun and exchanges material with other bodies in our solar system, and as the solar system entire bobs up and down in the plane of the Milky Way galaxy. Our understanding of life gains immensely by being placed in the cosmological context, which forces us both to think big, in terms of the place of life in the universe, and to think small, in terms of the details of the origins of life on Earth and its potential relation to life elsewhere in the universe.
This is obviously a list of revolutions in biological thought compiled by an outsider, i.e., by someone who is not a biologist. Others might well compile different lists. For example, I can easily imagine someone putting the Woesean revolution on a short list of revolutions in biological thought. Woese was largely responsible for replacing the tripartite division of animals, plants, and fungi with the tripartite division of the biological domains of Bacteria, Archaea, and Eukarya. (There remains the question of where viruses fit into this scheme, as discussed in the Brüssow paper cited above.)
Since I have included molecular phylogeny among the developments of evo-devo (in the graphic at the bottom of this post), I have implicitly placed Woese’s work within the evo-devo revolution, since it was the method of molecular phylogeny that made it possible to demonstrate that plants, animals, and fungi are all closely related biologically, while the truly fundamental division in terrestrial life is between the eukarya (which includes plants, animals, and fungi, all of them multicellular organisms), bacteria, and archaea. If any biologists happen to read this, I hope you will be a bit indulgent toward my efforts, though I certainly encourage you to leave a comment if I have made any particularly egregious errors.
Toward a Radical Biology
Darwin mentioned the origins of life only briefly and in passing. There is the famous reference to, “some warm little pond with all sorts of ammonia and phosphoric salts, — light, heat, electricity &c. present” in his letter to Joseph Hooker, and there is the famous passage at the end of his Origin of Species which I discussed in Darwin’s Cosmology:
“Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”
Darwin, of course, had nothing to go on at this point. Trying to understand or explain the origins of life without molecular biology would be like trying to explain the nature of water without the atomic and molecular theory of matter: the conceptual infrastructure to circumscribe the most basic elements of life did not yet exist. (The example of trying to define water without the atomic theory of matter is employed by Robert M. Hazen in his lectures on the Origins of Life.)
Just as Darwin pressed biology beyond the collecting and comparison of beetles in the backyard, and opened up deep time to biology (and, vice versa, biology to deep time), so astrobiology presses forward with the project of evolutionary biology, pursuing the natural origins of life to its chemical antecedents. Astrobiology is a radical biology in the same way that Darwin’s was a radical biology in its time: both go to the root of the matter to the extent possible given the theoretical, scientific, and technological parameters of thought. It is in this radical sense that astrobiology is integral with origins of life research; it is in this sense that the two are one.
The humble origins of radical ideas
The radical biology of Darwin did not start out as such. In his early life, Darwin considered becoming a country parson, and when Darwin left on his voyage on the Beagle as Captain Fitzroy’s gentleman companion, he held mostly conventional views. It is easy to imagine an alternative history in which Darwin retained his conventional views, went on to become a country parson, and gave Sunday sermons that were mostly moral homilies punctuated by the occasional quote from scripture to illustrate the moral lesson with a story from the tradition he nominally represented. Such a Darwin from an alternative history would have continued to collect beetles during the week and would have maintained his interest in natural history.
Just as Darwin came out of the context of English natural history (which, before Darwin, gave us those classic works of teleology, Paley’s Natural Theology and Chambers’ Vestiges of the Natural History of Creation — a work that the young Darwin greatly admired), so too astrobiology comes out of the context of a later development of natural history — the scientific search for the origins of life and for extraterrestrial life. While the search for extraterrestrial life is “big science” of an order of magnitude only possible for an institution like NASA, in this respect it stands in the humble tradition of natural history, since we must send robots to Mars and the other planets until we can go there ourselves with a shovel and rock hammer. From such humble beginnings sometimes emerge radical consequences.
I think we are already beginning to see the potentially radical character of astrobiology, and that this development in biology promises a paradigm shift almost of the scope and magnitude of natural selection. Indeed, both natural selection and astrobiology can be understood as further (and radical) contextualizations of the theme of man’s place in nature. When Darwin wrote, he contextualized human history in the most comprehensive conception of nature then possible; today astrobiology must contextualize not only human history but also the totality of life on Earth in a much more comprehensive cosmological context.
As our knowledge of the world (which was once very small, and very parochial) steadily expands, we are eventually forced to extend and refine our concepts in order to adequately account for the world that we now know. Natural selection and astrobiology are steps in the extension and refinement of our conception of life, and of the place of life in the world. Life simpliciter is, after all, a “folk” concept. Indeed, “life” is folk biology and “world” is folk cosmology. Astrobiology brings together these folk concepts and attempts to bring scientific rigor to them.
The biology of the future
Astrobiology is laying the foundations for the biology of the future. Here and now on Earth, without having surveyed life on other worlds, astrobiologists are attempting to formulate concepts adequate to understanding life at the largest and the smallest scales. Once we take these conceptions along with us when we eventually explore alien worlds — including alien worlds close to home, such as Mars and the ocean beneath the ice of Europa — it is to be expected that further revolutions in the life sciences will come about as a result of attempting to understand what we eventually find in the light of the concepts we have preemptively developed in order to understand biology beyond the surface of the Earth.
Future revolutions in biology will likely have the same radical character as natural selection, genetics, evo-devo, and astrobiology. Future naturalists will do what naturalists do best: they will spend their time in the field finding new specimens and describing them for science, and in the process of the slow and incremental accumulation of scientific knowledge new ideas will suggest themselves. Perhaps someone laid low by some alien fever, like Wallace tossing and turning as he suffered from a fever in the Indonesian archipelago, will, in a moment of insight, rise from their sick bed long enough to dash off a revolutionary paper, sending it off to another naturalist, now settled and meditating over his own experiences of new and unfamiliar forms of life.
The naturalists of alien forms of life will not necessarily have the same point of view as that of astrobiologists — and that is all to the good. Science thrives when it is enriched by new perspectives. At present, the revolutionary new perspective is astrobiology, but that will not likely remain true indefinitely.
. . . . .
. . . . .
. . . . .
. . . . .
25 September 2013
Hegel is not remembered as the clearest of philosophical writers, and certainly not the shortest, but among his massive, literally encyclopedic volumes Hegel also left us one very short gem of an essay, “Who Thinks Abstractly?” that communicates one of the most interesting ideas from Hegel’s Phenomenology of Mind. The idea is simple but counter-intuitive: we assume that knowledgeable individuals employ more abstractions, while the common run of men content themselves with simple, concrete ideas and statements. Hegel makes the point that the simplest ideas and terms that tend to be used by the least knowledgeable among us also tend to be the most abstract, and that as a person gains knowledge of some aspect of the world, abstract terms like “tree” or “chair” or “cat” take on concrete immediacy, previous generalities are replaced by details and specificity, and one’s perspective becomes less abstract. (I wrote about this previously in Spots Upon the Sun.)
We can go beyond Hegel himself by asking a perfectly Hegelian question: who thinks abstractly about history? The equally obvious Hegelian response would be that the historian speaks the most concretely about history, and it must be those who are least knowledgeable about history who speak and think the most abstractly about history.
“…it is difficult to imagine that any of the sciences could treat time as a mere abstraction. Yet, for a great number of those who, for their own purposes, chop it up into arbitrary homogenous segments, time is nothing more than a measurement. In contrast, historical time is a concrete and living reality with an irreversible onward rush… this real time is, in essence, a continuum. It is also perpetual change. The great problems of historical inquiry derive from the antithesis of these two attributes. There is one problem especially, which raises the very raison d’être of our studies. Let us assume two consecutive periods taken out of the uninterrupted sequence of the ages. To what extent does the connection which the flow of time sets between them predominate, or fail to predominate, over the differences born out of the same flow?”
Marc Bloch, The Historian’s Craft, translated by Peter Putnam, New York: Vintage, 1953, Chapter I, sec. 3, “Historical Time,” pp. 27-29
The abstraction of historical thought implicit in Hegel and explicit in Marc Bloch is, I think, more of a problem than we commonly realize. Once we look at the problem through Hegelian spectacles, it becomes obvious that most of us think abstractly about history without realizing how abstract our historical thought is. We talk in general terms about history and historical events because we lack the knowledge to speak in detail about exactly what happened.
Why should it be any kind of problem at all that we think abstractly about history? People say that the past is dead, and that it is better to let sleeping dogs lie. Why not forget about history and get on with the business of the present? All of this sounds superficially reasonable, but it is dangerously misleading.
Abstract thinking about history creates the conditions under which the events of contemporary history — that is to say, current events — are conceived abstractly despite our manifold opportunities for concrete and immediate experience of the present. This is precisely Hegel’s point in “Who Thinks Abstractly?” when he invites the reader to consider the humanity of the condemned man who is easily dismissed as a murderer, a criminal, or a miscreant. And we think in such abstract terms not only of local events, but also, if not especially, of distant events and large events that we cannot experience personally, so that massacres and famines and atrocities are mere massacres, mere famines, and mere atrocities because they are never truly real for us.
There is an important exception to all this abstraction, and it is the exception that shapes us: one always experiences the events of one’s own life with concrete immediacy, and it is the concreteness of personal experience contrasted to the abstractness of everything else not immediately experienced that is behind much (if not all) egocentrism and solipsism.
Thus while it is entirely possible to view the sorrows and reversals of others as abstractions, it is almost impossible to view one’s own sorrows and reversals in life as abstractions, and as a result of the contrast between our own vividly experienced pain and the abstract idea of pain in the life of another we have a very different idea of all that takes place in the world outside our experience as compared to the small slice of life we experience personally. This observation has been made in another context by Elaine Scarry, who in The Body in Pain: The Making and Unmaking of the World rightly observed that one’s own pain is a paradigm of certain knowledge, while the pain of another is a paradigm of doubt.
Well, this is exactly why we need to make the effort to see the big picture, because the small picture of one’s own life distorts the world so severely. But given our bias in perception, and the unavoidable point of view that our own embodied experience gives to us, is this even possible? Hegel tried to arrive at the big picture by seeing history whole. In my post The Epistemic Overview Effect I called this the “overview effect in time” (without referencing Hegel).
Another way to rise above one’s anthropic and individualist bias is the overview effect itself: seeing the planet whole. Frank White, who literally wrote the book on the overview effect, The Overview Effect: Space Exploration and Human Evolution, commented on my post in which I discussed the overview effect in time and suggested that I look up his other book, The Ice Chronicles, which discusses the overview effect in time.
I have since obtained a copy of this book, and here are some representative passages that touch on the overview effect in relation to planetary science and especially glaciology:
“In the past thirty-five years, we have grown increasingly fascinated with our home planet, the Earth. What once was ‘the world’ has been revealed to us as a small planet, a finite sphere floating in a vast, perhaps infinite, universe. This new spatial consciousness emerged with the initial trips into Low Earth Orbit…, and to the moon. After the Apollo lunar missions, humans began to understand that the Earth is an interconnected unity, where all things are related to one another, and that what happens on one part of the planet affects the whole system. We also saw that the Earth is a kind of oasis, a place hospitable to life in a cosmos that may not support living systems, as we know them, anywhere else. This is the experience that has come to be called ‘The Overview Effect’.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 15
“The view of the whole Earth serves as a natural symbol for the environmental movement. It leaves us unable to ignore the reality that we are living on a finite ‘planet,’ and not a limitless ‘world.’ That planet is, in the words of another astronaut, a lifeboat in a hostile space, and all living things are riding in it together. This realization formed the essential foundation of an emerging environmental awareness. The renewed attention on the Earth that grew out of these early space flights also contributed to an intensified interest in both weather and climate.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 20
“Making the right choices transcends the short-term perspectives produced by human political and economic considerations; the long-term habitability of our home planet is at stake. In the end, we return to the insights brought to us by our astronauts and cosmonauts as they took humanity’s first steps in the universe: We live in a small, beautiful oasis floating through a vast and mysterious cosmos. We are the stewards of this ‘good Earth,’ and it is up to us to learn how to take good care of her.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 214
It is interesting to note in this connection that glaciology yielded one of the earliest forms of scientific dating techniques, which is varve chronology, originating in Sweden in the nineteenth century. Varve chronology dates sedimentary layers by the annual layers of alternating coarse and fine sediments from glacial runoff — making it something like dendrochronology, except that one counts sediment layers instead of tree rings.
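In principle the counting is as simple as the sketch below, though in practice varve series from different sites must be cross-correlated; every value here is invented for illustration.

```python
# Each varve is one annual couplet of coarse (summer melt) and fine
# (winter) sediment. Given a core whose topmost layer formed in a
# known year, counting downward assigns a calendar year to each layer.
varve_thicknesses_mm = [2.1, 1.8, 2.5, 1.2, 3.0, 2.2]  # invented data

TOP_YEAR = 1890  # assumed year of the topmost layer's deposition

for depth_index, thickness in enumerate(varve_thicknesses_mm):
    year = TOP_YEAR - depth_index
    print(f"layer {depth_index}: {thickness:.1f} mm, deposited ~{year}")
```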
Scientific historiography can give us a taste of the overview effect, though considerable effort is required to acquire the knowledge, and it is not likely to have the visceral impact of seeing the overview effect with your own eyes. Even an idealistic philosophy like that of Hegel, as profoundly different as this is from the empiricism of scientific historiography, can give a taste of the overview effect by making the effort to see history whole and therefore to see ourselves within history, as a part of an ongoing process. Probably the scientists of classical antiquity would have been delighted by the overview effect, if only they had had the opportunity to experience it. Certainly they had an inkling of it when they proved that the Earth is spherical.
There are many paths to the overview effect; we need to widen these paths even as we blaze new trails, so that the understanding of the planet as a finite and vulnerable whole is not merely an abstract item of knowledge, but also an immediately experienced reality.
. . . . .
. . . . .
. . . . .
30 July 2013
One of the most famous thought experiments of twentieth century philosophy of mind is presented in Thomas Nagel’s paper “What is it like to be a bat?” Nagel’s point was that consciousness involves a point of view, and that means that there is something that it is like to be a given conscious organism. Here is the opening paragraph of Nagel’s paper:
Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.
The choice of a bat for this thought experiment is interesting. As a mammal, the bat shares much with us in its relation to the world, but its fundamental mechanism of finding its way around — echolocation — is sharply distinct from our primate experience of the world, dominated as it is by vision. Thus while what it is like to be a bat overlaps considerably with what it is like to be a hominid, there are also substantial divergences between being a bat and being a hominid. A bat has a different sensory apparatus than a hominid, and the bat’s distinctive sonar sensory apparatus presumably shapes its cognitive architecture in distinctive ways.
As a philosopher I have a great fascination with the sensory organs of other species, which seem to me both to pose epistemological problems and to suggest really interesting thought experiments. In my post on Kantian Critters I argued that if human beings must have recourse to the transcendental aesthetic in order to sort out the barrage of sense perception that the brain and central nervous system receive, then other terrestrial species, constituted as they are much like ourselves, must also have recourse to some transcendental aesthetic of their own (or, if you prefer Husserl to Kant, and phenomenology to idealism, other species must employ their own passive synthesis). This interpretation of Kant obviously presupposes a naturalistic point of view, which Kant did not have, but if we grant this scientific realism, the Kantian insight regarding the transcendental aesthetic remains valid and may moreover be extrapolated beyond human beings.
Distinctive transcendental aesthetics of distinct species would follow from distinct sensory apparatus and the distinctive cognitive architecture required to take advantage of this sensory apparatus. This implies that distinct species “see” the world differently, with “see” here understood in a comprehensive sense and not in a purely visual sense. Although bats rely on sonar, they “see” the world in this comprehensive sense, even if their eyes are not as good as our hominid eyes, and not nearly as good as the eyes of an eagle. A couple of ethologists, Dorothy L. Cheney and Robert M. Seyfarth, have written several books on the Weltanschauung of other species, How Monkeys See the World: Inside the Mind of Another Species and Baboon Metaphysics: The Evolution of a Social Mind.
Does a primate have more in common, Weltanschauung-wise (if you know what I mean), with a flying mammal such as a bat (since any two mammals have much life experience in common) or with a terrestrial reptile such as a serpent? Primates don’t know what it is like to fly with their own wings, but they also don’t know what it is like to move along the ground by slithering. Does a primate have more in common, again, Weltanschauung-wise, with a reptile that has given up its legs or with an octopus that never had any legs? We might be able to refine these questions a bit more by a more careful consideration of particular sensory organs and the particular cognitive architecture that both is driven by the development of the organ and makes the fullest exploitation of that organ for survival and reproductive advantage possible.
Among the most intriguing sense organs possessed by other species but not by homo sapiens is the pit of the pit viper, which is a rudimentary sensing organ for heat. Since pit vipers are predators who typically eat small, furry animals with a high metabolism and presumably also a high body temperature, being able to sense the body heat of one’s prey would be a substantial selective advantage.
Because the pit of the pit viper represents such a great selective advantage, one would expect that the pit will evolve, driven by this selective pressure. To paraphrase what Richard Dawkins said of wings, one percent of an infrared sensing organ represents a one percent selective advantage, and so on. Thus a one percent improvement of an existing pit would represent another one percent selective advantage. While it would be difficult to observe such subtle advantage in the lives of individual organisms, when it comes to species whose members number in the millions, that one percent will eventually make a significant difference in differential survival and reproduction. A statistical study would reveal what a study of individuals would likely obscure.
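How quickly a one percent advantage compounds is easy to check with a toy calculation. The sketch below is a deterministic single-locus model of my own devising (no drift, no changing environment, all numbers invented), not population genetics done properly, but it shows the statistical point: a variant invisible in any individual life approaches fixation within a few thousand generations.

```python
def variant_frequency(p0: float, advantage: float, generations: int) -> float:
    """Deterministic one-locus selection: the variant has relative
    fitness 1 + advantage versus 1 for the rest of the population."""
    p, w = p0, 1.0 + advantage
    for _ in range(generations):
        p = p * w / (p * w + (1.0 - p))
    return p

# A 1% advantage, starting from one carrier in a million:
for g in (0, 1000, 2000, 3000):
    print(f"after {g:4d} generations: "
          f"frequency {variant_frequency(1e-6, 0.01, g):.4f}")
```

At generation zero the variant is statistically invisible; two thousand generations later it dominates the population — exactly the kind of difference that only a statistical view, and never the observation of individuals, can detect.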
There is a sense in which the pit of the pit viper is like an eye for perceiving infrared radiation. The infrared radiation spectrum lies just beyond the visible spectrum at the red end, so having a pit like a pit viper in addition to color vision would be like being able to see additional colors beyond red. Having a slightly different visible spectrum is not uncommon among other species. Many insects see a little way into the ultraviolet spectrum (at the opposite end of our visible spectrum from red), and flowers are said to present colorful displays to insects in the ultraviolet spectrum that we cannot see (except for an account I heard some years ago of a man whose eye was injured and who, as a result of the injury, was able to see a little way into the ultraviolet beyond the visible spectrum).
The eye itself, whatever portion of the electromagnetic spectrum it accesses, is a wonderful example of the power of an adaptation. The eye is so useful that it has emerged independently several times in the course of the evolution of life on Earth. I don’t know much about the details, but insect eyes, mollusc eyes, and vertebrate eyes (as well as several other instances) are each the result of separate and independent emergence of the eye. The mollusc eye and the vertebrate eye represent an astonishing example of convergent evolution, since the structure of the two instances of eyes is so similar. The eye is of course a provocative evolutionary example because of a famous passage from Darwin himself, who wrote about “organs of extreme perfection”:
“To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree. Yet reason tells me, that if numerous gradations from a perfect and complex eye to one very imperfect and simple, each grade being useful to its possessor, can be shown to exist; if further, the eye does vary ever so slightly, and the variations be inherited, which is certainly the case; and if any variation or modification in the organ be ever useful to an animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, can hardly be considered real. How a nerve comes to be sensitive to light, hardly concerns us more than how life itself first originated; but I may remark that several facts make me suspect that any sensitive nerve may be rendered sensitive to light, and likewise to those coarser vibrations of the air which produce sound.”
Of this quote Richard Dawkins wrote in The God Delusion:
“Darwin’s fulsomely free confession turned out to be a rhetorical device. He was drawing his opponents towards him so that his punch, when it came, struck the harder. The punch, of course, was Darwin’s effortless explanation of exactly how the eye evolved by gradual degrees. Darwin may not have used the phrase ‘irreducible complexity’, or ‘the smooth gradient up Mount Improbable’, but he clearly understood the principle of both.”
Partly due to this Darwin quote, the evolution of the eye has been the topic of some very interesting research that has helped to clarify the development of the eye. There is a wonderful documentary on evolution whose first episode was titled Darwin’s Dangerous Idea (presumably intended to echo Daniel Dennett’s well-known book of the same title), and which includes an excellent segment on the evolution of the eye that you can watch on YouTube. In this documentary the work of Dan-Eric Nilsson of the University of Lund is shown, and he demonstrates in a particularly clear and concrete way the step-by-step process of improving vision through the increasing complexity of the eye. When I was watching this documentary recently I was thinking about how the pit of the pit viper resembles the early stages of the evolution of the eye.
The pit of the pit viper is a depressed, folded area lined with infrared-sensitive nerve endings that allows limited directional sensitivity. The pit at present seems to correspond to one of the earliest stages in the evolution of the vertebrate eye, sometimes called a “cup eye,” so in the long-term future of the pit there would seem to be much room for improvement. Of course, the details of infrared (IR) perception are different from the details of human visible-spectrum perception, but not so different that we cannot imagine a similar series of stepwise improvements to the infrared pit that might, in many millions of years, yield sharp, clear, and directional infrared vision. If this infrared vision became sufficiently effective, it is possible that brain and body resources might be redirected to focus on the pits, and the eyes could eventually degrade into vestigial organs, as has happened in moles and many cave-dwelling species. After all, snakes gave up their legs, so there’s no reason they shouldn’t also give up their eyes if they have something better to fall back on.
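The power of such stepwise improvement can be made concrete with a toy geometric model (my own simplification, offered only for illustration, and not Nilsson’s actual model): a receptor at the bottom of a cup-shaped pit accepts radiation over an angle of roughly 2·atan(aperture / (2·depth)), so each slight deepening of the pit sharpens the directional sense a little further.

    import math

    # Toy geometric model (an illustrative assumption, not Nilsson's
    # model): a receptor at the bottom of a cup-shaped pit accepts
    # radiation over an angle of about 2*atan(aperture / (2*depth)),
    # so gradually deepening the pit gradually sharpens directionality.

    aperture_mm = 1.0
    for depth_mm in (0.1, 0.5, 1.0, 2.0, 5.0):
        acceptance = 2 * math.atan(aperture_mm / (2 * depth_mm))
        print(f"depth {depth_mm:4.1f} mm -> acceptance {math.degrees(acceptance):5.1f} degrees")

A nearly flat pit registers little more than “warmth somewhere ahead,” while a deep pit begins to resolve direction: the first step on the road from pit to eye.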
There is another possibility, and that is the evolutionary advantage that might be obtained by adding a pair of fully functional IR “eyes” to a pair of fully functional visible-spectrum eyes. Such a development would be biologically costly, and it is much more likely that a pit viper would choose one evolutionary path or the other, not both. Yet there are some rare instances of biologically costly organs (or clusters of organs) that have been successful despite the cost. The brain is a good example — or, rather, large, complex brains that evolve under particular selection pressures but are later exapted for intelligence.
Natural selection is a great economist, and often reduces organisms to the simplest structure compatible with their function. This is one of the reasons we find the shapes of plants and the bodies of animals both elegant and beautiful. The economy of nature has meant that large brains, and the intelligence that large brains make possible, are rare. Yet despite their rarity and their biological expense, large, complex brains have, like the eye, emerged more than once in evolutionary history. Interestingly enough, complex eyes and large, complex brains are found together not only in primates but also in molluscs.
The octopus (among other molluscs) came by its large, complex brain because it went down the evolutionary path of camouflage, and the camouflage of some molluscs became so elaborate that almost every cell on the surface of the organism’s skin is individually controlled, which means a nerve connected to every spot of color on (or under) the skin, and a nervous system capable of handling all this. It requires a lot of processing power to put on the kind of displays seen on the skin of octopuses and cuttlefish, and an evolutionary spiral that favored the benefits of camouflage also then drove the development of a large, complex brain that could optimize the use of camouflage.
The octopus also has remarkably sophisticated eyes — eyes that are, in some respects, very similar to primate eyes, yet more elegant in structure. Our eyes are “wired” from the front, which gives us a blind spot where the optic nerve passes through the retina; mollusc eyes are “wired” from the back and consequently suffer from no blind spot. (“Wired” is in scare quotes here because it is a metaphor for the eyes’ connection to the nervous system: while electrical signals travel along nerves, the connections between distinct nerve cells are primarily biochemical, not electrical.)
How an octopus sees the world is as fascinating an inquiry as what it is like to be a bat — or a serpent, for that matter. Both the octopus and the arboreal primate live in a three-dimensional habitat, and this may have something to do with their common development of sharp eyesight and large brains, although there are vastly greater numbers of organisms in the sea and in the trees with far smaller brains and far less cognitive processing power. (A recent study reported in The New York Times suggests a link between spatial ability and intellectual innovation, and while the study was primarily concerned with the ontogenesis of creativity, it is possible that the apparatus of spatial perception and the cognitive architecture that facilitates this perception are phylogenetically linked to intellectual creativity.) This simply shows us that intelligence is one strategy among many for survival, and not the most common strategy.
A large, complex brain is very costly in a biological sense. In a typical human being, the brain represents less than three percent of total body weight, yet it consumes about twenty percent of the body’s resources — a very big chunk of metabolism that could otherwise be directed toward running faster or jumping higher or reaching farther. Nothing as unlikely as the brain’s disproportionate consumption of resources would come about unless this expenditure bequeathed some survival or reproductive advantage to the organism bearing such a high cost of ownership. The brain isn’t a luxury that produces poetry and art; it is a survival machine, optimized (in hominids) by more than five million years of development to make human beings effective hunters and foragers. The brain was so successful, in fact, that it made it possible for human beings to take over the planet entire and convert it to serving human needs. Thus the relatively rare and costly strategy of developing a large, complex brain paid off in this particular case. (One may think of it as a high-risk/high-reward strategy.)
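The disproportion is easy to quantify from the round figures above (and they are only round figures):

    # Back-of-the-envelope arithmetic using the approximate figures above.
    brain_mass_fraction = 0.025   # brain: less than 3% of body weight
    brain_energy_fraction = 0.20  # brain: about 20% of the body's resources

    ratio = brain_energy_fraction / brain_mass_fraction
    print(f"The brain consumes ~{ratio:.0f}x its mass share of the body's energy.")

Gram for gram, the brain is roughly eight times more expensive to run than the body as a whole, which is what makes it so improbable an organ.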
If the evolution of the brain and the exaptation of intelligence to produce civilization had not resulted in the disproportionate evolutionary success of a single species, it seems likely that we would have seen intelligence emerge repeatedly in evolutionary history, much as eyes have evolved repeatedly. On other worlds with other natural histories, under conditions where intelligence does not allow a single species to dominate (possibly due to some selection pressure that does not operate on Earth), it is possible that evolution results in the repeated emergence of intelligence, just as on Earth evolution has resulted in the repeated emergence of eyes. On Earth, intelligence preempted other developments, which means that not only human history but also natural history was irremediably changed.
In The Preemption Hypothesis I argued that industrialization preempted other developments in the history of civilization (for more on this also see my post Human Agency and the Exaptation of Selection). This current line of thought makes me realize that purely biological preemption is also a force shaping history. Consciousness, and then intelligence arising from biochemically based consciousness, is one such preemption of our evolutionary history. Another preemption of natural history that has operated repeatedly is that of mass extinction. But whereas historical preemptions such as the development of large, complex brains or industrialization represent a preemption of greater complexity, mass extinctions represent a preemption of decreased complexity.
It seems that “weedy” species that are especially hardy and resilient tend to survive the rigors of mass extinctions; the more delicate and refined productions of natural selection, which are dependent upon mature ecosystems and their many specialized niches, do not fare as well when these mature ecosystems are subject to pressure and possible catastrophic failure. One could think of mass extinctions, and indeed of all historical preemptions that favor simplicity over complexity, as a catastrophic “reset” of the evolutionary process. Events such as mass extinctions can favor rudimentary organisms that are sufficiently hardy to survive catastrophic changes, but, as we have seen, there is also the possibility of historical preemptions that favor greater complexity. The Cambrian Explosion, for example, might be considered another instance of an historical preemption.
There is a tension in the structure of history between continuity and preemption. In the particular case of the earth, the continuity of natural history has been interrupted by the preemption of intelligence and then industrialization. These preemptions of greater complexity — in contradistinction to preemptions of lesser complexity, as in the case of mass extinctions — may provide for the possibility of the continuity of earth-originating life beyond the terrestrial biosphere. In the case of an otherwise sterile universe, the intelligence/industrialization preemption would be the basis of a new explosion or radiation of earth-originating life in the Milky Way. In the case of a universe already living, it may be that only intelligence and industrial-technological civilization are novelties in the natural history of the universe.
Whatever happens on the largest scale of life, as long as life continues to evolve on the earth, its development is likely to be marked by both continuity and preemptive developments. In thinking about the pit viper, I suggested above that it might eventually, over many millions of years, develop a fully functional pair of IR eyes in addition to its visible-spectrum eyes. This suggestion points to an interesting possibility. In so far as complex life is allowed to develop in continuity, with a minimum of preemptions, the specialization and refinement of existing mechanisms of survival may give rise to species of greater complexity than those we know today. While mass extinctions have repeatedly cleared the ground and left a more or less blank slate for the radiation of resilient, weedy species, this may not always be the case.
As our earth and the solar system of which it is a part become older, catastrophic events may become less common. For example, stray bodies that might collide with the earth, while common in the early solar system, eventually end up colliding with something or getting swept out of the path of the earth’s orbit by the gravity of Jupiter. If, moreover, civilization expands extraterrestrially and seeks to protect the earth as an existential risk mitigation measure, life on earth may become even more secure and even less subject to disruption and preemption than in the past. New species might eventually come into being with a delicate complexity of sensory organs and an accompanying cognitive architecture that facilitates these senses. Imagine species with a whole range of sensory organs that complement each other, none of the former mainstays reduced to vestigial status: this might possibly be the future of life on Earth.
Eventually the most interesting question may not be, “What is it like to be a serpent?” but, “What will it be like to be a serpent?”
. . . . .
The reader can compare my earlier post, The Future of the Pit Viper, which was the origin and inspiration of this post.
. . . . .
27 July 2013
Ninth in a Series on Existential Risk:
How we understand what exactly is at risk.
How we understand existential risk, then, affects what we understand to be a risk and what we understand to be a reward.
It is possible to clarify this claim, or at least to lay out in greater detail the conceptualization of existential risk, and it is worthwhile to pursue such a clarification.
We cannot identify risk-taking behavior or risk-averse behavior unless we can identify instances of risk. Any given individual is likely to identify risks differently from any other individual, and the greater the difference between any two individuals, the greater the difference is likely to be in their identification of risks. Similarly, any given community or society is likely to identify risks differently from any other community or society, and the greater the differences between two communities, the greater the difference is likely to be between the existential risks identified by the two.
This difference in the assessment of risk can, at least in part, be put down to the role of knowledge in determining the distinction between prediction, risk, and uncertainty, as discussed in Existential Risk and Existential Uncertainty and Addendum on Existential Risk and Existential Uncertainty: distinct individuals, communities, societies, and indeed civilizations are in possession not only of distinct knowledge, but also of distinct kinds of knowledge. The distinct epistemic profiles of different societies result in distinct understandings of existential risk.
Consider, for example, the kind of knowledge that is widespread in agrarian-ecclesiastical civilization in contradistinction to industrial-technological civilization: in the former, many people know the intimate details of farming, but few are literate; in the latter, many are literate, but few know how to farm. The macro-historical division of civilization in which a given population is to be found profoundly shapes the epistemic profile of the individuals and communities that fall within a given macro-historical division.
Moreover, knowledge is bound up with the ideological, religious, and philosophical ideas and assumptions that provide its foundation within a given macro-historical division of civilization. The intellectual foundations of agrarian-ecclesiastical civilization (something I explicitly discussed in Addendum on the Agrarian-Ecclesiastical Thesis) differ profoundly from the intellectual foundations of industrial-technological civilization.
Differences in knowledge, and differences in the conditions of the possibility of knowledge, among distinct individuals and civilizations mean that the boundaries between prediction, risk, and uncertainty are differently constructed. In agrarian-ecclesiastical civilization, the religious ideology that lies at the foundation of all knowledge gives certainty (and therefore predictability) to things not seen, while consigning all of this world to an unpredictable (therefore uncertain) vale of tears in which any community might find itself facing starvation as the result of a bad harvest. The naturalistic philosophical foundations of knowledge in industrial-technological civilization have stripped away all certainty in regard to things not seen, but by systematically expanding knowledge they have greatly reduced uncertainty in this world, converting many uncertainties into risks and some risks into certain predictions.
Differences in knowledge can also partly explain differences in risk perception among individuals: the greater one’s knowledge, the more one faces calculable risks rather than uncertainties, and predictable consequences rather than risks. Moreover, the kind of knowledge one possesses will govern the kind of risk one perceives and the kind of predictions that one can make with a degree of confidence in the outcome.
While there is much that can be explained by differences in knowledge, and by differences between kinds of knowledge (a literary scholar will be certain of different epistemic claims than a biologist), there is also much that cannot be explained by knowledge, and these differences in risk perception are the most fraught and problematic, because they are due to moral and ethical differences between individuals, between communities, and between civilizations.
One might well ask — Who would possibly object to preventing human extinction? There are many interesting moral questions hidden within this apparently obvious question. Can we agree on what constitutes human viability in the long term? Can we agree on what is human? Would some successor species to humanity count as human, and therefore as an extension of human viability, or must human viability be attached to a particular idea of the Homo sapiens genome, frozen in time in its present form? And we must also keep in mind that many today view human actions as so egregious that the world would be better off without us; such persons, even if in the minority, might well affirm that human extinction would be a good thing.
Let us consider, for a moment, a couple of Nick Bostrom’s formulations of existential risk:
“An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.”
“…an existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or the permanent and drastic failure of that life to realise its potential for desirable development. In other words, an existential risk jeopardises the entire future of humankind.”
Nick Bostrom, “Existential Risk Prevention as Global Priority,” Global Policy 4:1 (2013), Durham University and John Wiley & Sons, Ltd.
What exactly would constitute the “drastic failure of that life to realise its potential for desirable development”? What exactly is permanent stagnation? Flawed realization? Subsequent ruination? What is human potential? Does it include transhumanism?
For some, the very idea of transhumanism is a moral horror, and a paradigm case of flawed realization. For others, transhumanism is a necessary condition of the full realization of human potential. Thus one might imagine an exciting human future of interstellar exploration and expanding knowledge of the world, and understand this to be an instance of permanent stagnation because human beings do not augment themselves and become something more than, or something different from, what we are today. And, honestly, such a scenario does involve an essentially stagnant conception of humanity. Another might imagine a future of continual human augmentation and experimentation, but one more or less populated by beings — however advanced — who engage in essentially the same pursuits as those we pursue today, so that while the concept of humanity has not remained stagnant, the pursuits of humanity are essentially mired in permanent stagnation.
Similar considerations hold for civilizations as for individuals: there are vastly different conceptions of what constitutes a viable civilization and of what constitutes the good for civilization. Future forms of civilization that depart too far from the Good may be characterized as instances of flawed realization, while future forms of civilization that don’t depart at all from contemporary civilization may be characterized as instances of permanent stagnation. The extinction of earth-originating intelligent life, or the subsequent ruination of our civilization, may seem more straightforward, but what constitutes earth-originating intelligent life is vulnerable to the questions above about human successor species, and subsequent ruination may be judged by some to be preferable to the continuation of civilization on its present trajectory.
Sometimes these moral differences among peoples are exemplified in distinct civilizations. The kinds of existential risk recognized within agrarian-ecclesiastical civilization are profoundly different from the kinds of existential risk now being recognized by industrial-technological civilization. We can see earlier conceptions of existential risk as deviant, limited, or flawed compared to those conceptions made possible by the role of science within our civilization, but we should also realize that, if we could revive representatives of agrarian-ecclesiastical civilization and give them a tour of our world today, they would certainly recognize features of our world of which we are most proud as instances of flawed realization (once we had explained to them what “flawed realization” means). For a further investigation of this idea I strongly recommend that the reader peruse Reinhart Koselleck’s Futures Past: On the Semantics of Historical Time.
It would be well worth the effort to pursue possible quantitative measures of human extinction, permanent stagnation, flawed realization, and subsequent ruination, but if we do so we must do so in the full knowledge that this is as much a moral and philosophical inquiry as it is a scientific and theoretical inquiry; we cannot separate the desirability of future outcomes from the evaluative nature of our desires.
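As one illustration of why such measures cannot be value-neutral, consider a toy expected-value model (my own sketch, not anything proposed by Bostrom) in which every number, above all the value assigned to each class of outcome, is itself a moral judgment:

    # Toy expected-value model of the future (illustrative assumptions only).
    # Both the probabilities and, more importantly, the values assigned to
    # each of Bostrom's outcome classes are moral judgments, not data.

    outcomes = {
        # outcome class:        (assumed probability, assumed value)
        "flourishing":          (0.70, 1.0),
        "permanent stagnation": (0.15, 0.3),
        "flawed realization":   (0.10, 0.2),
        "subsequent ruination": (0.04, 0.1),
        "extinction":           (0.01, 0.0),
    }

    expected_value = sum(p * v for p, v in outcomes.values())
    print(f"Expected value of the future: {expected_value:.3f}")

A transhumanist and a bioconservative, contemplating exactly the same set of futures, would assign different values to each class of outcome and so compute different expected values; the arithmetic is trivial, but the valuations are not.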
Like the sailors on the Pequod who each look into the gold doubloon nailed to the mast and see themselves and their personal concerns within, just so when we look into the mirror that is the future, we see our own hopes and fears, notwithstanding the fact that, when the future arrives, our concerns will be long washed away by the passage of time, replaced by the hopes and fears of future men and women (or the successors of men and women).
. . . . .
Existential Risk: The Philosophy of Human Survival
9. Conceptualization of Existential Risk
. . . . .