How to Live on a Planet

27 September 2017

Wednesday


Humanity is learning, slowly, how to live on a planet. What does it mean to live on a planet? Why is this significant? How has our way of living on a planet changed over time? How exactly does an intelligent species capable of niche-construction on a planetary scale go about revising its approach to niche construction to make this process consistent with the natural history and biospheric evolution of its homeworld?

Once upon a time the Earth was unlimited and inexhaustible for human beings for all practical purposes. Obviously, Earth was not actually unlimited and inexhaustible, but for a few tens of thousands or hundreds of thousands of hunter-gatherers distributed across the planet in small bands, this was an ecosystem that they could not have exhausted even if they had sought to do so. Human influence over the planet at this time was imperceptible; our ancestors were simply one species among many in the terrestrial biosphere. Even before civilization this began to change, as our ancestors have been implicated in the extinction of ice age megafauna. The evidence for this is still debated, but human populations had become sufficiently large and sufficiently organized by the upper Paleolithic that their hunting could plausibly have driven anthropogenic extinctions.

In this earliest (and longest) period of human history, we did not know that we lived on a planet. We did not know what a planet was, the relation of a planet to a star, or the place of stars in the galaxy. The Earth for us at this time was not a planet, but a world, and the world was effectively endless. Only with the advent of civilization and written language were we able to accumulate knowledge trans-generationally, slowly working out that we lived on a planet orbiting a star. This process required several thousand years, and for most of these thousands of years the size of our homeworld was so great that human efforts seemed not to make even a dent in the biosphere. It seemed that the forests could not be exhausted of trees or the oceans exhausted of fish. But all that has changed.

In the past few hundred years, the scope and scale of human activity, together with the size of the human population, has grown until we have found ourselves at the limits of Earth’s resources. We actively manage and limit the use of resources, because if we did not, the seven billion and growing human population would strip the planet clean and leave nothing. This process had already started in the Middle Ages, when many economies were forced to manage strategic resources like timber for shipbuilding, but the process has come to maturity in our time, as we are able to describe and explain scientifically the impact of the human population on our homeworld. We have, today, the conceptual framework necessary to understand that we live on a planet, so that we understand the limitations on our use of resources theoretically as well as practically. When earlier human activities resulted in localized extinctions and shortages, we could not put this in the context of the big picture; now we can.

Today we know what a planet is; we know what we are; we know the limitations dictated by a planet for the organisms constituting its ecosystems. This knowledge changes our relationship to our homeworld. Many definitions have been given for the Anthropocene. One way in which we could define the Anthropocene in this context is that it is that period in terrestrial history when human beings learn to live on Earth as a planet. Generalized beyond this anthropocentric formulation, this becomes the period in the history of a life-bearing planet in which the dominant intelligent species (if there is one) learns to live on its planet as a planet.

In several posts I have written about the transition of the terrestrial energy grid from fossil fuels to renewable resources (cf. The Human Future in Space, The Conversion of the Terrestrial Power Grid, and Planetary Constraints 9). This process has already started, and it can be expected to play out over a period of time at least equal to the period of time we have been exploiting fossil fuels.

I recently happened upon the article How to Run the Economy on the Weather by Kris De Decker, which discusses in detail how economies and technologies prior to the industrial revolution were adapted to the intermittency of wind and water, and the adaptability of such habits to contemporary technologies. And I recall that some years ago when I was in Greece, specifically on the island of Rhodes, every house had a solar water heater on the roof (and, of course, sunshine is plentiful in Greece), and everyone seemed to accept as a matter of course that you must shower while the sun is out. A combination of very basic behavioral changes supplemented by contemporary technology could facilitate the transition of the terrestrial power grid with little or no decline in standards of living. This is part of what it means to learn to live on a planet.

As we come to better understand biology, astrobiology, ecology, geology, and cosmology, and we thus come to better understand our homeworld and ourselves, we will learn more about how to live on a planet. But the expansion of our knowledge of exoplanets and astrobiology will be predicated upon our ability to travel to other worlds in order to study them, and if we are fortunate enough to endure for such a time and to achieve such things, then we will have to learn how to live in a universe.

The visible universe is finite. Though the visible universe may be part of an infinitistic cosmology (or even an infinitistic multiverse), the expansion of the universe has created a cosmological horizon beyond which we cannot see. I have previously quoted a passage from Leonard Susskind to this effect:

“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”

Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438

We know, then, scientifically, that the universe is effectively finite as our homeworld is finite, but the universe is so large in comparison to the scale of human activity, indeed, so large even in comparison to the aspirational scale of human activity, that the universe is endless for all practical purposes. Though we are already learning how to live on a planet, in relation to the universe at large we are like our hunter-gatherer ancestors dwarfed by a world that was, for them, effectively endless.

Only at the greatest reach of the scale of supercivilizations will we — if we last that long and achieve that scale of development — run into the limits of our home galaxy, and then into the limits of the universe, at which time we will have to learn how to live in a universe. I implied as much in an illustration that I created for my Centauri Dreams post, Stagnant Supercivilizations and Interstellar Travel (reproduced below), in which I showed a schematic representation of the carrying capacity of the universe. At this scale of activity we would be engaging in cosmological niche construction in order to make a home for ourselves in the universe, as we are now engaging in planetary-scale niche construction as we learn how to live on a planet.

. . . . .

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .


Computational Omniscience

18 December 2013

Wednesday



What does it mean for a body of knowledge to be founded in fact? This is a central question in the philosophy of science: do the facts suggest hypotheses, or are hypotheses used to give meanings to facts? These questions are also posed in history and the philosophy of history. Is history a body of knowledge founded on facts? What else could it be? But do the facts of history suggest historical hypotheses, or do our historical hypotheses give meaning and value to historical facts, without which the bare facts would add up to nothing?

Is history a science? Can we analyze the body of historical knowledge in terms of facts and hypotheses? Is history subject to the same constraints and possibilities as science? An hypothesis is an opportunity — an opportunity to transform facts in the image of meaning; facts are limitations that constrain hypotheses. An hypothesis is an epistemic opportunity — an opportunity to make sense of the world — and therefore an hypothesis is also at the same time an epistemic risk — a risk of interpreting the world incorrectly and misunderstanding events.

The ancient question of whether history is an art or a science would seem to have been settled by the emergence of scientific historiography, which clearly is a science, but this does not answer the question of what history was before scientific historiography. One might reasonably maintain that scientific historiography was the implicit telos of all previous historiographical study, but this fails to acknowledge the role of historical narratives in shaping our multiple human identities — personal, cultural, ethnic, political, mythological.

If Big History should become the basis of some future axialization of industrial-technological civilization, then scientific historiography too will play a constitutive role in human identity, and while other and older identity narratives presently coexist and furnish different individuals with a different sense of their place in the world, we have already seen the beginnings of an identity shaped by science.

There is a sense in which the scientific historian today knows much more about the past than those who lived in the past and experienced that past as an immediate yet fragmentary present. One might infer the possibility of a total knowledge of the past through the cumulative knowledge of scientific historiography — a condition denied to those who actually lived in the past — although this “total” knowledge must fall short of the peculiar kind of knowledge derived from immediate personal experience, as contemplated in the thought experiment known as “Mary’s room.”

In the thought experiment known as Mary’s room, also called the knowledge argument, we imagine a condition of total knowledge and compare this with the peculiar kind of knowledge that is derived from experience, in contradistinction to the kind of knowledge we come to through science. Here is the source of the Mary’s room thought experiment:

“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?”

Frank Jackson, “Epiphenomenal Qualia” (1982)

Philosophers disagree on whether Mary learns anything upon leaving Mary’s room. As a thought experiment, it is intended not to give a definitive answer to a circumstance that is never likely to occur in fact, but to sharpen our intuitions and refine our formulations. We can try to do the same with formulations of an ideal totality of knowledge derived from scientific historiography. There is a sense in which scientific historiography allows us to know much more about the past than those who lived in the past. To echo a question of Thomas Nagel, was there something that it was like to be in the past? Are there, or were there, historical qualia? Is the total knowledge of history afforded by scientific historiography short of capturing historical qualia?

In the Mary’s room thought experiment the agent in question is human and the experience is imposed colorblindness. Many people live with colorblindness without the condition greatly impacting their lives, so in this context it is plausible that Mary learns nothing upon the lifting of her imposed colorblindness, since the gap between these conditions is not as intuitively obvious as the gap between agents of a fundamentally different kind (as, e.g., distinct species) or between experiences of a fundamentally different kind, in which cases it is not plausible that the lifting of an imposed limitation on experience would have no significant impact on one’s life.

We can sharpen the formulation of Mary’s room, and thus potentially sharpen our own intuitions, by taking a more intense experience than that of color vision. We can also alter the sense of this thought experiment by considering the question across distinct species or across the division between minds and machines. For example, if a machine learned everything that there is to know about eating would that machine know what it was like to eat? Would total knowledge after the manner of Mary’s knowledge of color suffice to exhaust knowledge of eating, even in the absence of an actual experience of eating? I doubt that many would be convinced that learning about eating without the experience of eating would be sufficient to exhaust what there is to know about eating. Thomas Nagel’s thought experiment in “What is it like to be a bat?” alluded to above poses the knowledge argument across species.

We can give this same thought experiment yet another twist if we reverse the roles of minds and machines, asking of machine experience, should machine consciousness emerge, the questions we have asked of human experience (or bat experience). If a human being learned everything there is to know about AI and machine consciousness, would such a human being know what it is like to be a machine? Could knowledge of machines exhaust uniquely machine experience?

The kind of total scientific knowledge of the world implicit in scientific historiography is not unlike what Pierre Simon Laplace had in mind when he posited the possibility of predicting the entire state of the universe, past or future, on the basis of an exhaustive knowledge of the present. Laplace’s argument is also a classic determinist position:

“We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it — an intelligence sufficiently vast to submit these data to analysis — it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. The human mind offers, in the perfection which it has been able to give to astronomy, a feeble idea of this intelligence. Its discoveries in mechanics and geometry, added to that of universal gravity, have enabled it to comprehend in the same analytical expressions the past and future states of the system of the world. Applying the same method to some other objects of its knowledge, it has succeeded in referring to general laws observed phenomena and in foreseeing those which given circumstances ought to produce. All these efforts in the search for truth tend to lead it back continually to the vast intelligence which we have just mentioned, but from which it will always remain infinitely removed. This tendency, peculiar to the human race, is that which renders it superior to animals; and their progress in this respect distinguishes nations and ages and constitutes their true glory.”

Pierre Simon, Marquis de Laplace, A Philosophical Essay on Probabilities, with an introductory note by E. T. Bell, New York: Dover Publications, Inc., Chapter II

While such a Laplacean calculation of the universe would lie beyond the capability of any human being, it might someday lie within the capacity of another kind of intelligence. What Laplace here calls “an intelligence sufficiently vast to submit these data to analysis” suggests the possibility of a sufficiently advanced (i.e., sufficiently large and fast) computer that could make this calculation, thereby achieving a kind of computational omniscience.

Long before we reach the point of the storied “technological singularity,” when machines surpass the intelligence of human beings and each generation of machine is able to build a yet more intelligent successor (i.e., an “intelligence explosion”), the computational power at our disposal will for all practical purposes exhaust the world and will thus have attained computational omniscience. We have already begun to converge upon this kind of total knowledge of the cosmos with the Bolshoi Cosmological Simulations and similar efforts with other supercomputers.

It is this kind of reasoning in regard to the future of cosmological simulations that has led to contemporary formulations of the “Simulation Hypothesis” — the hypothesis that we are, ourselves, at this moment, living in a computer simulation. According to the simulation argument, cosmological simulations become so elaborate and are refined to such a fine-grained level of detail that the simulation eventually populates itself with conscious agents, i.e., ourselves. Here, the map really does coincide with the territory, at least for us. The entity or entities conducting such a grand simulation, and presumably standing outside the whole simulation observing, can see the simulation for the simulation that it is. (The connection between cosmology and the simulation argument is nicely explained in the episode “Are We Real?” of the television series “What We Still Don’t Know” hosted by noted cosmologist Martin Rees.)

One way to formulate the idea of omniscience is to define omniscience as knowledge of the absolute infinite. The absolute infinite is an inconsistent multiplicity (in Cantorian terms). There is a certain reasonableness in this, as the logical principle of explosion, also known as ex falso quodlibet (namely, the principle that anything follows from a contradiction), means that an inconsistent multiplicity that incorporates contradictions is far richer than any consistent multiplicity. In so far as omniscience could be defined as knowledge of the absolute infinite, few would, I think, be willing to argue for the possibility of computational omniscience, so below we will pursue this from another angle, but I wanted to mention this idea of defining omniscience as knowledge of the absolute infinite because it strikes me as interesting. But no more of this for now.

The claim of computational omniscience must be qualified, since computational omniscience can exhaust only that portion of the world exhaustible by computational means; computational omniscience is the kind of omniscience that we encountered in the “Mary’s room” thought experiment, which might plausibly be thought to exhaust the world, or which might with equal plausibility be seen as falling far short of all that might be known of some body of knowledge.

Computational omniscience is distinct from omniscience simpliciter; while exhaustive in one respect, it fails to capture certain aspects of the world. Computational omniscience may be defined as the computation of all that is potentially computable, which leaves aside that which is not computable. The non-computable aspects of the world include, but are not limited to, non-computable functions, quantum indeterminacy, that which is non-quantifiable (for whatever reason), the qualitative dimension of conscious experience (i.e., qualia), and that which is inferred but not observable. These are pretty significant exceptions. What is left over? What part of the world is computable? This is a philosophical question that we must ask once we understand that computability has limits and that these limits may be distinct from the limits of human intelligence. Just as conscious biological agents face intrinsic epistemic limits, so also non-biological agents would face intrinsic epistemic limits — in so far as a non-biological agent can be considered an epistemic agent — but these limitations on biological and non-biological agents are not necessarily the same.
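To make the first of these exceptions concrete, here is a minimal sketch in Python of the classic diagonal argument that at least one perfectly well-defined function is non-computable; the halts oracle is a hypothetical function assumed only for the sake of the argument, and no correct implementation of it can exist:

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical oracle: True iff the given program halts on the given input."""
    raise NotImplementedError("No total, correct halting oracle can exist.")

def diagonal(program_source: str) -> None:
    """Do the opposite of whatever the oracle predicts about self-application."""
    if halts(program_source, program_source):
        while True:  # the oracle says we halt, so loop forever
            pass
    return  # the oracle says we loop, so halt immediately

# Running the oracle on diagonal applied to its own source yields a contradiction
# either way, so the halting function is not computable, and any computational
# omniscience must leave it out.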

The ultimate inadequacy of computational omniscience points to the possibility of limited omniscience — though one might well assert that omniscience that is limited is not really omniscience at all. The limited omniscience of a computer capable of computing the fate of the known universe may be compared to recent research on what Daniel Kahneman calls the bounded rationality of human minds. Artificial intelligence is likely to be a bounded intelligence that exemplifies bounded rationality, although its boundaries will not necessarily coincide precisely with the boundaries that define human bounded rationality.

The idea of limited omniscience has been explored in mathematics, particularly in regard to constructivism. Constructivist mathematicians, wary both of the unrestricted use of tertium non datur and of its complete interdiction in the manner of intuitionism, have formulated principles of omniscience; the limited principle of omniscience has been proposed as a specific way to skirt some of the problems implicit in the realism of unrestricted tertium non datur.
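For reference, a standard statement of the limited principle of omniscience (LPO) for binary sequences, given here for illustration rather than quoted from any particular source, is:

\[
\text{LPO:}\qquad \forall \alpha \in \{0,1\}^{\mathbb{N}}\ \bigl(\, \forall n\, (\alpha_n = 0) \;\lor\; \exists n\, (\alpha_n = 1) \,\bigr)
\]

Classically this is a triviality, an instance of tertium non datur; constructively it is rejected as a “principle of omniscience,” since no finite computation can survey an entire infinite sequence.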

When we allow our mathematical thought to coincide with realities and infinities — an approach that we are assured is practical and empirical, and bound only to yield benefits — we find ourselves mired in paradoxes, and in the interest of freeing ourselves from this conceptual mire we are driven to a position like Einstein’s famous aphorism that, “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We separate and compartmentalize factual realities and mathematical infinities because we find it difficult “to hold two opposing ideas in mind at the same time and still retain the ability to function.”

Indeed, it was Russell’s attempt to bring together Cantorian conceptions of set theory with practical measures of the actual world that begat the definitive paradox of set theory that bears Russell’s name, and the responses to which have in large measure shaped post-Cantorian mathematics. Russell gives the following account of his discovery of his eponymous paradox in his Autobiography:

Cantor had a proof that there is no greatest number, and it seemed to me that the number of all the things in the world ought to be the greatest possible. Accordingly, I examined his proof with some minuteness, and endeavoured to apply it to the class of all the things there are. This led me to consider those classes which are not members of themselves, and to ask whether the class of such classes is or is not a member of itself. I found that either answer implies its contradictory.

Bertrand Russell, The Autobiography of Bertrand Russell, Vol. II, 1872-1914, “Principia Mathematica”
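In modern notation (not Russell’s own), the class he describes and the contradiction it generates can be stated in a single line:

\[
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \bigl( R \in R \;\Longleftrightarrow\; R \notin R \bigr)
\]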

None of the great problems of philosophical logic from this era — i.e., the fruitful period in which Russell and several colleagues created mathematical logic — were “solved”; rather, a consensus emerged among philosophers of logic, conventions were established, and, perhaps most importantly, Zermelo’s axiomatization of set theory became the preferred mathematical treatment of set theory, which allowed mathematicians to skirt the difficult issues in philosophical logic and to focus on the mathematics of set theory largely without logical distractions.

It is an irony of intellectual history that the next great revolution in mathematics to follow after set theory — which latter is, essentially, the mathematical theory of the infinite — was to be that of computer science, which constitutes the antithesis of set theory in so far as it is the strictest of strict finitisms. It would be fair to characterize the implicit theoretical position of computer science as a species of ultra-finitism, since computers cannot formulate even the most tepid potential infinite. All computing machines have an upper bound of calculation, and this is a physical instantiation of the theoretical position of ultra-finitism. This finitude follows from embodiment, which a computer shares with the world itself, and which therefore makes ultra-finite computing consistent with an ultra-finite world. In an ultra-finite world, it is possible that the finite may exhaust the finite and that computational omniscience may be realized.
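As a trivial but concrete illustration of this upper bound (a minimal sketch in Python, specific to no particular machine and no part of the argument above), even the floating-point numbers of an ordinary programming language have a largest finite value, beyond which arithmetic simply overflows:

import sys

largest = sys.float_info.max  # about 1.7976931348623157e308 for IEEE-754 doubles
print(largest)
print(largest * 2)  # prints inf: the representation has been exhausted

# Python's integers are unbounded in principle, but any actual computation with
# them is still bounded by finite memory and finite time, which is the physical
# instantiation of ultra-finitism described above.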

The universe defined by the Big Bang and all that followed from the Big Bang is a finite universe, and may in virtue of its finitude admit of exhaustive calculation, though this finite universe of observable cosmology may be set in an infinite context. Indeed, even the finite universe may not be as rigorously finite as we suppose, given that the limitations of our observations are not necessarily the limits of the real, but rather are defined by the limit of the speed of light. Leonard Susskind has rightly observed that what we observe of the universe is like being inside a room, the walls of which are the distant regions of the universe receding from us at superluminal velocity at the point at which they disappear from our view.

Recently in The Size of the World I quoted this passage from Leonard Susskind:

“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”

Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438

This observation has not yet been sufficiently appreciated (as I previously noted in The Size of the World). What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole has become unobservable. We might term such empirical realities just beyond our grasp empirical unobservables. Empirical unobservables include (but are presumably not limited to — our “out” clause) all that which lies beyond the event horizon of Susskind’s inside-out black hole, that which lies beneath the event horizon of a black hole as conventionally conceived, and that which lies outside the lightcone defined by our present. There may be other empirical unobservables that follow from the structure of relativistic space. There are, moreover, many empirically inaccessible points of view, such as the interiors of stars, which cannot be observed for contingent reasons distinct from the impossibility of observing certain structures of the world hidden from us by the nature of spacetime structure.

What if the greater part of the universe passes in the oblivion of the empirical unobservables? This is a question that was posed by a paper that appeared in 2007, The Return of a Static Universe and the End of Cosmology, which garnered some attention because of its quasi-apocalyptic claim of the “end of cosmology” (which sounds a lot like Heidegger’s proclamation of the “end of philosophy” or any number of other proclamations of the “end of x“). This paper was eventually published in Scientific American as The End of Cosmology? An accelerating universe wipes out traces of its own origins by Lawrence M. Krauss and Robert J. Scherrer.

In calling the “end of cosmology” a “quasi-apocalyptic” claim I don’t mean to criticize or ridicule the paper or its argument, which is of the greatest interest. As the subtitle of the Scientific American article suggests, it appears to be the case that an accelerating universe wipes out traces of its own origins. If a quasi-apocalyptic claim can be scientifically justified, it is legitimate and deserves our intellectual respect. Indeed, the study of existential risk could be considered a scientific study of apocalyptic claims, and I regard this as an undertaking of the first importance. We need to think seriously about existential risks in order to mitigate them rationally to the extent possible.

In my posts on the prediction and retrodiction walls (The Retrodiction Wall and Addendum on the Retrodiction Wall) I introduced the idea of effective history, which is that span of time which lies between the retrodiction wall in the past and the prediction wall in the future. One might similarly define effective cosmology as consisting of that region or those regions of space within the practical limits of observational cosmology, and excluding those regions of space that cannot be observed — not merely what is hidden from us by contingent circumstances, but that which we are incapable of observing because of the very structure of the universe and our place (ontologically speaking) within it.

There are limits to what we can know that are intrinsic to what we might call the human condition, except that this formulation is anthropocentric. The epistemic limits represented by effective history and effective cosmology are limitations that would hold for any sentient, conscious organism emergent from natural history, i.e., would hold for any peer species. Some of these limitations are intrinsic to our biology and to the kind of mind that is emergent from biological organisms. Some of these limitations are intrinsic to the world in which we find ourselves, and to the vantage point within the cosmos from which we view our world. Ultimately, these limitations are one and the same, as the kind of biological beings that we are is a function of the kind of cosmos in which we have emerged, and which has served as the context of our natural history.

Within the domains of effective history and effective cosmology, we are limited further still by the non-quantifiable aspects of the world noted above. Setting aside non-quantifiable aspects of the world, what I have elsewhere called intrinsically arithmetical realities are a paradigm case of what remains computable once we have separated out the non-computable exceptions. (Beyond the domains of effective history and effective cosmology, hence beyond the domain of computational omniscience, there lies the infinite context of our finite world, about which we will say no more at present.) Intrinsically arithmetical realities are intrinsically amenable to quantitative methods and are potentially exhaustible by computational omniscience.

Some have argued that the whole of the universe is intrinsically arithmetical in the sense of being essentially mathematical, as in the “Mathematical Universe Hypothesis” of Max Tegmark. Tegmark writes:

“[The Mathematical Universe Hypothesis] explains the utility of mathematics for describing the physical world as a natural consequence of the fact that the latter is a mathematical structure, and we are simply uncovering this bit by bit.”

The Mathematical Universe by Max Tegmark

Tegmark also explicitly formulates two companion principles:

External Reality Hypothesis (ERH): There exists an external physical reality completely independent of us humans.

…and…

Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure.

I find these formulations to be philosophically naïve in the extreme, but as a contemporary example of a perennial tradition of philosophical thought Tegmark is worth citing. Tegmark is seeking an explicit answer to Wigner’s famous question about the “unreasonable effectiveness of mathematics.” It is to be expected that some responses to Wigner will take the form that Tegmark represents, but even if our universe is a mathematical structure, we do not yet know how much of that mathematical structure is computable and how much of that mathematical structure is not computable.

In my Centauri Dreams post on SETI, METI, and Existential Risk I mentioned that I found myself unable to identify with either the proponents of unregulated METI or those who argue for the regulation of METI efforts, since I disagreed with key postulates on both sides of the argument. METI advocates typically hold that interstellar flight is impossible, therefore METI can pose no risk. Advocates of METI regulation typically hold that unintentional EM spectrum leakage is not detectable at interstellar distances, therefore METI poses a risk we do not face at present. Since I hold that interstellar flight is possible, and that unintentional EM spectrum radiation is (or will be) detectable, I can’t comfortably align myself with either party in the discussion.

I find myself similarly caught on the horns of a dilemma when it comes to computability, the cosmos, and determinism. Computer scientists and singularitarian enthusiasts of exponentially increasing computing power ultimately culminating in an intelligence explosion seem content to assume that the universe is not only computable, and presents no fundamental barriers to computation, but foresee a day when matter itself is transformed into computronium and the whole universe becomes a grand computer. Criticism of such enthusiasts often takes the form of denying the possibility of AI or denying the possibility of machine consciousness, denying that this or that is technically possible, and so on. It seems clear to me that only a portion of the world will ever be computable, but that portion is considerable, and a great many technological developments will fundamentally change our relationship to the world. But no matter how much either human beings or machines are transformed by the continuing development of industrial-technological civilization, non-computable functions will remain non-computable. Thus I cannot count myself either as a singularitarian or a Luddite.

How are we to understand the limitations to computational omniscience imposed by the limits of computation? The transcomputational problem, rather than laying bare human limitations, points to the way in which minds are not subject to computational limits. Minds as minds do not function computationally, so the evolution of mind (which drives the evolution of civilization) embodies different bounds and different limits than the Bekenstein bound and Bremermann’s limit, as well as different possibilities and different opportunities. The evolutionary possibilities of the mind are radically distinct from the evolutionary possibilities of bodies subject to computational limits, even though minds are dependent upon the bodies in which they are embodied.

Bremermann’s limit is 10^93, which is somewhat arbitrary, but whether we draw the line here or elsewhere doesn’t really matter for the principle at stake. Embodied computing must run into intrinsic limits, e.g., from relativity — a computer that exceeded Bremermann’s limit by too much would be subject to relativistic effects that would mean that gains in size would reach a point of diminishing returns. Recent brain research has suggested that the human brain is already close to the biological limit for effective signal transmission within and between the various parts of the brain, so that a larger brain would not necessarily be smarter or faster or more efficient. Indeed, it has been pointed out that elephant and whale brains are larger than human brains, although the encephalization quotient is much higher in human beings despite the difference in absolute brain size.
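The 10^93 figure can be reconstructed with a back-of-the-envelope calculation, sketched below in Python; the constants are the commonly cited round values (assumptions for illustration, not figures taken from this post): Bremermann’s bound of roughly 2 × 10^47 bits per second per gram, applied to a hypothetical computer with the mass of the Earth running for on the order of ten billion years, yields about 10^93 bits processed in total.

import math

RATE_BITS_PER_G_PER_S = 2e47  # Bremermann's bound: bits per second per gram (rounded)
EARTH_MASS_G = 6e27           # mass of the Earth in grams (rounded)
EARTH_AGE_S = 1e10 * 3.15e7   # ten billion years, in seconds (rounded)

total_bits = RATE_BITS_PER_G_PER_S * EARTH_MASS_G * EARTH_AGE_S
print(f"log10(total bits) = {math.log10(total_bits):.1f}")  # about 92.6, i.e. on the order of 10^93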

The functioning of organic bodies easily surpasses 10^93 possible states. The Wikipedia entry on the transcomputational problem says:

“The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state) the processing of the retina as a whole requires processing of more than 10^300,000 bits of information. This is far beyond Bremermann’s limit.”

This is just the eye alone. The body has far more nerve ending inputs than just those of the eye, and an essentially limitless number of outputs. So exhausting the possible computational states of even a relatively simple organism easily surpasses Bremermann’s limit and is therefore transcomputational. Some very simple organisms might not be transcomputational, given certain quantifiable parameters, but I think most complex life, and certainly things as complex as mammals, are radically transcomputational. Therefore the mind (whatever it is) is embodied in a transcomputational body, whose possible states no computer could exhaustively calculate. The brain itself is radically transcomputational with its 100 billion neurons (each of which can take at minimum two distinct states, and possibly more).
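The arithmetic behind these figures is easy to check; the sketch below uses the round counts already mentioned (a million retinal cells, one hundred billion neurons, two states each) and works in logarithms to avoid writing out the astronomically large numbers themselves:

import math

LOG10_OF_2 = math.log10(2)

retina_cells = 1_000_000       # light-sensitive cells, two states each
neurons = 100_000_000_000      # neurons in the brain, at least two states each

retina_states_log10 = retina_cells * LOG10_OF_2  # about 301,030
brain_states_log10 = neurons * LOG10_OF_2        # about 30,103,000,000

print(f"retina: about 10^{retina_states_log10:,.0f} possible states")
print(f"brain:  about 10^{brain_states_log10:,.0f} possible states")
print("transcomputational threshold: 10^93")     # both exceed it by an enormous margin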

Yet even machine embodiments can be computationally intractable (in the same way that organic bodies are computationally intractable), exceeding the possibility of exhaustively calculating every possible material state of the mechanism (on a molecular or atomic level). Thus the emergence of machine consciousness would also supervene upon a transcomputational embodiment. It is, at present, impossible to say whether a machine embodiment of consciousness would be a limitation upon that consciousness (because the embodiment is likely to be less radically transcomputational than the brain) or a facilitation of consciousness (because machines can be arbitrarily scaled up in a way that organic bodies cannot be).

Since the mind stands outside the possibilities of embodied computation, if machine consciousness emerges, machine embodiments will be as non-transparent to machine minds as organic embodiment is non-transparent to organic minds, but the machine minds, non-transparent to their embodiment as they are, will have access to energy sources far beyond any resources an organic body could provide. Such machine consciousness would not be bound by brute force calculation or linear models (as organic minds are not so bound), but would have far greater resources at its command for the development of its consciousness.

Since the body that today embodies mind already far exceeds Bremermann’s limit, and no machine as machine is likely to exceed this limit, machine consciousness emergent from computationally tractable bodies, rather than being super-intelligent in ways that biologically derived minds can never be, may on the contrary be a pale shadow of an organic mind in an essentially transcomputational body. This gives a whole new twist to the much-discussed idea of the mind’s embodiment.

Computation is not the be-all and end-all of mind; it is, in fact, only peripheral to mind as mind. If we had to rely upon calculation to make it through our day, we wouldn’t be able to get out of bed in the morning; most of the world is simply too complex to calculate. But we have a “work around” — consciousness. Marginalized as the “hard problem” in the philosophy of mind, or simply neglected in scientific studies, consciousness enables us to cut the Gordian Knot of transcomputability and to act in a complex world that far exceeds our ability to calculate.

Neither is consciousness the be-all and end-all of mind, although the rise of computer science and the increasing role of computers in our lives has led many to conclude that computation is primary and that it is consciousness that is peripheral. And, to be sure, in some contexts, consciousness is peripheral. In many of the same contexts of our EEA (environment of evolutionary adaptedness) in which calculation is impossible due to complexity, consciousness is also irrelevant because we respond by an instinct that is deeper than and other than consciousness. In such cases, the mechanism of instinct takes over, but this is a biologically specific mechanism, evolved to serve the purpose of differential survival and reproduction; it would be difficult to re-purpose a biologically specific mechanism for any kind of abstract computing task, and not particularly helpful either.

Consciousness is not the be-all and end-all not only because instinct largely circumvents it, but also because machines have a “work around” for consciousness just as consciousness is a “work around” for the limits of computability; mechanism is a “work around” for the inefficiencies of consciousness. Machine mechanisms can perform precisely those tasks that so tax organic minds as to be virtually unsolvable, in a way that is perfectly parallel to the conscious mind’s ability to perform tasks that machines cannot yet even approach — not because machines can’t do the calculations, but because machines don’t possess the “work around” ability of consciousness.

It is when computers have the “work around” capacity that conscious beings have that they will be in a position to effect an intelligence explosion. That is to say, machine consciousness is crucial to AI that is able to perform in the way that AI is expected to perform, though AI researchers tend to be dismissive of consciousness. If the proof of the pudding is in the eating, well, then it is consciousness that allows us to “chunk the proofs” (i.e., to divide the proof into individually manageable pieces) and get to the eating all the more efficiently.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

The Size of the World

24 November 2013

Sunday


The Sloan Digital Sky Survey points to a large scale structure of the universe dominated by hyperclusters, which appear to be structures that exceed the upper size limit of structures as predicted by contemporary cosmology.


The world, we are learning every day, is a very large place. Or perhaps I should say that the universe is a very large place. It is also a very complex and strange place. J. B. S. Haldane famously said that, “I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.” (Possible Worlds and Other Papers, 1927, p. 286) In other words, human beings, no matter how valiantly they attempt to understand the universe, may not be cognitively equipped to understand it; our minds may not be the kind of minds that can understand the kind of place that the world is.

This idea of our inability to understand the world in which we find ourselves (an admirably humble Copernican insight that we might call metaphysical modesty, and which stands in contrast to epistemic hubris) has received many glosses since Haldane’s time. Most notable (notable, at least, from my perspective) are the evolutionary gloss, the quantum physics gloss, and the philosophical gloss. I will consider each of these in turn.

In terms of evolution, there is no reason to suppose that descent with modification in a context of a struggle for vital resources on the plains of Africa (the environment of evolutionary adaptedness, or EEA) is going to produce minds capable of understanding higher dimensional spatial manifolds or quantum physics at microscopic scales that differ radically from the macroscopic scales of ordinary human perception. Alvin Plantinga (about whom I wrote some time ago in A Note on Plantinga, inter alia) has used this argument for theological purposes. However, there is no intrinsic reason that a mind born in the mud and the muck cannot raise itself above its origins and come to contemplate the world in Copernican terms. The evolutionary argument cuts both ways, and since we have ourselves as the evidence of an organism that can raise itself from strictly survival behavior to forms of thought that have nothing to do with survival, from the perspective of the weak anthropic principle this is proof enough that natural selection can result in such a mind.

In terms of quantum theory, we are all familiar with famous quotes from the leading lights of quantum theory as to the essential incomprehensibility of that theory. For example, Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” However, I have observed (in The limits of my language are the limits of my world and elsewhere) that recent research is making strides in working around the epistemic limitations of quantum theory, revealing its uncertainties to be not absolute and categorical, but rather subject to careful and painstaking narrowing that renders the uncertainty a little less uncertain. I anticipate two developments that will emerge from the further elaboration of quantum theory: 1) the finding of ways to gradually and incrementally chip away at an absolutist conception of uncertainty (as just mentioned), and 2) the formulation of more adequate intuitions to make quantum theory more palatable to the human mind.

In terms of philosophy, Colin McGinn’s book Problems in Philosophy: The Limits of Inquiry formulates a position which he calls Transcendental Naturalism:

“Philosophy is an attempt to get outside the constitutive structure of our minds. Reality itself is everywhere flatly natural, but because of our cognitive limits we are unable to make good on this general ontological principle. Our epistemic architecture obstructs knowledge of the real nature of the objective world. I shall call this thesis transcendental naturalism, TN for short.” (pp. 2-3)

I have previously written about McGinn’s work in Transcendental Non-Naturalism and Naturalism and Object Oriented Ontology, inter alia. Our ability to get outside the constitutive structure of our minds is severely limited at best, and so our ability to understand the world as it is is limited at best.

While our cognitive abilities are admittedly limited (for all the reasons discussed above, as well as other reasons not discussed), these limits are not absolute, but rather admit of revision. McGinn’s position as stated above implies a false dichotomy between staying within the constitutive structure of our minds and getting outside it. This is a classic case of facing the sheer cliff of Mount Improbable: while it is impossible to get outside our cognitive architecture in one fell swoop, we can little by little transgress the boundaries of our cognitive architecture, each time ever-so-slightly expanding our capacities. Incrementally over time we improve our ability to stand outside those limits that once marked the boundaries of our cognitive architecture. Thus in an ironic twist of intellectual history, the evolutionary argument, rather than demonstrating metaphysical modesty, is the key to limiting the limitations on the human mind.

All of this is related to one of the central problems in the philosophy of science of our time — the whole Kuhnian legacy that is the framework of so much contemporary philosophy of science. Copernican revelations and revolutions, which formerly disturbed our anthropocentric bias every few hundred years, now occur with alarming frequency. The difference today, of course, is that science is much more advanced than it was with past Copernican revelations and revolutions — it has much more advanced instrumentation available to it (as a result of the STEM cycle), and we have a much better idea of what to look for in the cosmos.

It was a shock to almost everyone to have it scientifically demonstrated that the universe is not static and eternal, but dynamic and changing. It was a shock when quantum theory demonstrated the world to be fundamentally indeterministic. This is by now a very familiar narrative. In fact, it is so familiar that it has been expropriated (dare I say exapted?) by obscurantists and irrationalists of our time, who point at continual changes in scientific knowledge as “proof” that science doesn’t give us any “truth” because it changes. The assumption here is that change in scientific knowledge demonstrates the weakness of science; in fact, change in scientific knowledge is the strength of science. Scientific knowledge is what I have elsewhere called an intelligent institution in so far as it is institutionalized knowledge, but that institution is formulated with internal mechanisms that facilitate the re-shaping of the institution itself over time. That mechanism is the scientific method.

It is important to see that the overturning of familiar conceptions of the world — some of which are ancient and some of which are not — is not arbitrary. Less comprehensive conceptions are being replaced by more comprehensive conceptions. The more comprehensive our perspective on the world, the greater the number of anomalies we must face, and the greater the number of anomalies we face the more likely it is that our theories will be overturned, or at least partially falsified. But to ask whether theory change is rational or irrational is to have the wrong debate. It is misleading because what ought to concern us is how well our theories account for the ever-larger world that is revealed to us through our ever-more comprehensive methods of science, and not how well our theories conform to our presuppositions about rationality. As we get the science right, reason will follow, shaping new intuitions and formulating new theories.

Our ability to discover (and to understand) ever greater scales of the universe is contingent upon our growing intellectual capabilities, which are cumulative. Just as in the STEM cycle science begets technologies that beget industries that create better scientific instruments, so too on a purely intellectual level the intellectual capabilities of one generation are the formative context of the intellectual capabilities of the next generation, which allows the later generation to exceed the earlier generation. Concepts are the tools of the mind, and we use our familiar concepts to create the next generation of concepts, which latter are both more refined and more powerful than the former, in the same way as we use each generation of tools to build the next generation of tools, which makes each generation of tools better than the last (except for computer software — but I expect that this degradation in the practicability of computer software is simply the software equivalent of planned obsolescence).

Our current generation of tools — both conceptual and technological — are daily revealing to us the inadequacy of our past conceptions of the world. Several recent discoveries have in particular called into question our understanding of the size of the world, especially in so far as the world is defined in terms of its origins in the Big Bang. For example, the discovery of hyperclusters suggests physical structures of the universe that are larger than the upper limit set to physical structures by contemporary cosmological theories (cf. ‘Hyperclusters’ of the Universe — “Something is Behaving Very Strangely”).

In a similar vein, writing of the recent discovery of a “large quasar group” (LQG) as much as four billion light years across, the article The Largest Discovered Structure in the Universe Contradicts Big-Bang Theory Cosmology states:

“This LQG challenges the Cosmological Principle, the assumption that the universe, when viewed at a sufficiently large scale, looks the same no matter where you are observing it from. The modern theory of cosmology is based on the work of Albert Einstein, and depends on the assumption of the Cosmological Principle. The principle is assumed, but has never been demonstrated observationally ‘beyond reasonable doubt’.”

This formulation gets the order of theory and observation wrong. The cosmological principle is not a principle that can be proved or disproved by evidence; it is a theoretical idea that is used to give structure and meaning to observations, to organize observations into a theoretical whole. The cosmological principle belongs to theoretical cosmology; recent discoveries such as hyperclusters and large quasar groups belong to observational cosmology. While the two — i.e., theoretical and observational — cannot be separated in the practice of science, it is also true that they are not identical. Theoretical methods are distinct from observational methods, and vice versa.

Thus the cosmological principle may be helpful or unhelpful in organizing our knowledge of the cosmos, but it is not the kind of thing that can be falsified in the same way that, for example, a theory of planetary formation can be falsified. That is to say, the cosmological principle might be opposed to (falsified by) another principle that negates the cosmological principle, but this anti-cosmological principle will similarly belong to an order not falsifiable by empirical observations.

The discoveries of hyperclusters and LQGs are particularly problematic because they question some of the fundamental assumptions and conclusions of Big Bang cosmology, which is, essentially, the only large scale cosmological model in contemporary science. Big Bang cosmology is the explanation for the structure of the cosmos that was formulated in response to the discovery of the red shift, which implies that, on the largest observable scales, the universe is expanding. It is important to add the qualification, “on the largest observable scales” because stars within a given galaxy are circulating around the galaxy, and while a given star may be moving away from another given star, it is also likely to be moving toward yet some other star. And, even at larger scales, not all galaxies are receding from each other. It is fairly well known that galaxies collide and commingle; the Helmi stream of our own Milky Way is the result of a long past galactic collision, and at some far time in the future the Milky Way itself will merge with the larger Andromeda galaxy, and be absorbed by it.

Cosmology during the period of the big bang theory (a period in which we still find ourselves today) is in some respects like biology before Darwin. Almost all biology before Darwin was essentially theological, but no one had a better idea so biology had to wait to become a science capable of methodologically naturalistic formulations until after Darwin. The big bang theory was, on the contrary, proposed as a scientific theory (not merely bequeathed to us by pre-scientific tradition), and most scientists working within the big bang tradition have formulated the Big Bang in meticulously naturalistic terms. Nevertheless, once the steady state theory was overthrown, no one really had an alternative to the big bang theory, so all cosmology centered on the Big Bang for lack of imagination of alternatives — but also due to the limitations of the scientific instruments, which at the time of the triumph of the big bang theory were much more modest than they are today.

As disconcerting as it was to discover that the cosmos did not embody an eternal order, that it is expanding and had a history of violent episodes, and that it was much larger than an “island universe” comprising only the Milky Way, the observations that we need to explain today are no less disconcerting in their own way.

Here is how Leonard Susskind describes our contemporary observations of the expanding universe:

“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”

Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
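As a rough check on the figure Susskind cites (a back-of-envelope estimate of my own, not anything in his text), the naive Hubble-law horizon, the distance at which the recession velocity implied by Hubble’s law reaches the speed of light, comes out close to fifteen billion light years:

\[
d_{H} = \frac{c}{H_{0}} \approx \frac{3.0 \times 10^{5}\ \text{km/s}}{70\ \text{km/s/Mpc}} \approx 4.3 \times 10^{3}\ \text{Mpc} \approx 1.4 \times 10^{10}\ \text{light years}
\]

This is only the crude Hubble-law estimate; the true cosmological event horizon in an accelerating universe differs in detail, but the order of magnitude agrees with the figure Susskind gives.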

Susskind’s observation has not yet been sufficiently appreciated. What lies beyond the cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole becomes unobservable, and that places such matters beyond the reach of science understood narrowly as observation. But as I have noted above, while in the practice of science we cannot disentangle the theoretical and the observational, the two are not the same. While our observations come to an end at the cosmic horizon, our principles encounter no such boundary. Thus we naturally extrapolate our science beyond the boundaries of observation, but if we get our scientific principles wrong, anything beyond the boundary of observation will be wrong as well, and will be incapable of correction by observation.

Science in the narrow sense must, then, come to an end with observation. But this does not satisfy the mind. One response is to deny the mind its satisfaction and refuse to pass beyond observation. Another response is to fill the void with mythology and fiction. Yet another response is to take up the principles on their own merits and consider them in the light of reason. This response is the philosophical response, and we see that it is a rational response to the world that is continuous with science even when it passes beyond science.

. . . . .

Sunday


A few days ago in The Truth is Out There I twice made reference to anti-philosophy among scientists. I wrote, for example, the following:

“While Ferris frequently invokes the kind of anti-philosophy that I have become accustomed to encountering in the writings of scientists, he also cites philosophers as diverse as Hegel and Wittgenstein…”

And…

“…despite the fashionable anti-philosophy of many scientists, which often leads them to say unkind things about purely philosophical inquiry, I see the enterprises of science and philosophy as parallel undertakings…”

What do I mean by the “anti-philosophy” of many scientists? Usually, and unfortunately, it takes the form of ad hominem abuse of philosophers, combined with the cribbing of ideas that the scientists do not understand and often do not even realize they are cribbing. I will give two examples. Here is Leonard Susskind:

“…many physicists throughout the second half of the twentieth century considered the pursuit of such a unifying theory to be worthless, fit only for crackpots and philosophers.”

Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, 2008

And here is Stephen Hawking:

“We have known for twenty-five years that Einstein’s general theory of relativity predicts that time must have had a beginning in a singularity fifteen billion years ago. But the philosophers have not yet caught up with the idea.”

Stephen W. Hawking, Black Holes and Baby Universes and Other Essays, 1994

It would be relatively easy to multiply quotes of this character; they are regrettably common, and one must wonder why, because philosophers do not even register on the radar of the popular mind. Why should we find denunciations of philosophers and philosophy in popularizations of science by eminent physicists? I have a hard time imagining that either Susskind or Hawking would make comments like these about, say, novelists or biologists.

I have chosen the quotes from Susskind and Hawking strategically, since each represents a different side of a long-running scientific controversy, a controversy that is related in Susskind’s book cited above. Though these two physicists found themselves on opposite sides of a scientific controversy, they apparently have common ground in their use of philosophers as straw men.

I am listening to Susskind’s book now, and while I enjoy it, I can feel the limitations that arise from anti-philosophy. What happens when you reject Western civilization’s storehouse of carefully thought out ideas? You end up citing science fiction authors to make your point, as Susskind employs Heinlein’s “grok” in the opening pages of his book. There is a vast philosophical literature on intuitive knowledge and understanding, but Susskind prefers to neglect this and employs “grok” instead. No doubt he believes this to be clearer.

There is a sense in which Susskind’s reference to Heinlein is appropriate, since I recall that Heinlein himself was anti-philosophical. When I was a child I read a great many science fiction novels, and Heinlein was among my favorites, but I can remember, even then, thirty years ago and before I discovered philosophy, wondering why Heinlein had bothered to malign philosophy. In fact, it was just this attitude, encountered in many diverse sources, that eventually made me curious enough to begin reading philosophy myself. I discovered something else, something unexpected, when I did: I found that I was thinking for myself, and that I felt no particular obligation to follow the thoughts of others unless they gave me good reason to do so.

It has become a commonplace in contemporary intellectual discourse to note (and to bemoan the fact) that intelligent and educated people see no stigma attached to saying that they know nothing of mathematics. Even here we can cite Heinlein again: “Anyone who cannot cope with mathematics is not fully human. At best he is a tolerable subhuman who has learned to wear shoes, bathe, and not make messes in the house.” Well, it also seems to be true that many scientists not only attach no stigma to ignorance of philosophy but take a perverse pride in their science being “uncontaminated” by philosophy, not realizing that this ignorance leads them to make elementary philosophical errors on the basis of unexamined philosophical presuppositions, without ever noticing or being the least bit troubled by it.

The problem is not that scientists make philosophical errors and philosophical assumptions; the problem is that they fail to acknowledge that they do so. Mathematicians make a particular effort to make their assumptions explicit. This is called axiomatization. But philosophical assumptions lie even deeper than mathematical assumptions, and are therefore all the more difficult to make explicit. An effort is required. But without the effort, we literally don’t know what we’re doing.
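As an illustration of what such explicitness looks like in mathematics (a standard textbook example, not anything drawn from the passage above), the axioms for a group state every assumption made about a set G with an operation ∗:

\begin{align*}
\text{Closure:} &\quad a * b \in G \quad \text{for all } a, b \in G \\
\text{Associativity:} &\quad (a * b) * c = a * (b * c) \quad \text{for all } a, b, c \in G \\
\text{Identity:} &\quad \text{there is an } e \in G \text{ such that } e * a = a * e = a \text{ for all } a \in G \\
\text{Inverses:} &\quad \text{for each } a \in G \text{ there is an } a^{-1} \in G \text{ with } a * a^{-1} = a^{-1} * a = e
\end{align*}

Every theorem about groups rests on these four assumptions and nothing more; philosophical presuppositions, lying deeper still, admit of no comparably explicit inventory without the deliberate effort mentioned above.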

Louis Althusser wrote a book about the spontaneous philosophy of scientists, and I have always thought that this was a particularly apt phrase. Scientists come up with a theory on the spot, as it were, and such theories are as easily discarded. It is easy to see how this serves scientific practice. Too careful and studied a reliance on a research program dictated by a philosophical theory would probably quickly turn sterile. This does not, however, excuse either ignorance or ad hominem attacks.

Scientists are instinctive phenomenologists, insofar as they share with Husserl a desire to formulate their knowledge utterly free from presuppositions, or, at the very least, free from philosophical presuppositions. But the ideal of presuppositionless knowledge is itself a philosophical undertaking, so it becomes a problematic enterprise for scientists. The alternative to making one’s presuppositions explicit is to leave them implicit, and when we add anti-philosophy to implicit presuppositions we have a situation in which it becomes unacceptable to acknowledge a presupposition even if, in the back of one’s mind, one begins to be dimly conscious that there is more going on in scientific experiment and theory than pure observation. The scientist who denies the role of philosophy in knowledge is thus put in a position antithetical to that of the mathematician, being committed, as he is, to denying and obscuring his presuppositions. There is a sense, then, in which fashionable anti-philosophy is a rejection of the very idea of rigorous axiomatic thinking.

. . . . .