Saturday


Knowledge relevant to the Fermi paradox will expand if human knowledge continues to expand, and we can expect human knowledge to continue to expand for as long as civilization in its contemporary form endures. Thus the development of scientific knowledge, once the threshold of modern scientific method is attained (which, in terrestrial history, was the scientific revolution), is a function of “L” in the Drake equation, i.e., a function of the longevity of civilization. It is possible that there could be a qualitative change in the nature of civilization that would mean the continuation of the civilization without the continuing expansion of scientific knowledge. However, if we take “L” in the big picture, a civilization may undergo qualitative changes throughout its history, some of which would be favorable to the expansion of scientific knowledge, and some of which would be unfavorable to the same. Under these conditions, scientific knowledge will tend to increase over the long term up to the limit of possible scientific knowledge (if there is such a limit).

At least part of the paradox of the Fermi paradox is due to our limited knowledge of the universe of which we are a part. With the expansion of our scientific knowledge, the “solution” to the Fermi paradox may be slowly revealed to us (which could include the “no paradox” solution, i.e., the idea that the Fermi paradox isn’t really paradoxical at all if we properly understand it, an understanding that may dawn on us gradually), or it may hit us all at once if we have a major breakthrough that touches upon the Fermi paradox. For example, a robust SETI signal confirmed to emanate from an extraterrestrial source might open up the floodgates of scientific knowledge through interstellar idea diffusion from a more advanced civilization. This isn’t a likely scenario, but it is a scenario in which we not only confirm that we are not alone in the universe, but also in which we learn enough to formulate a scientific explanation of our place in the universe.

The growth of scientific knowledge could push our understanding of the Fermi paradox in several different directions, which again points to our relative paucity of knowledge of our place in the universe. In what follows I want to construct one possible direction of the growth of scientific knowledge and how it might inform our ongoing understanding of the Fermi paradox and its future formulations.

At the present stage of the acquisition of scientific knowledge and the methodological development of science (which includes the development of technologies that expand the scope of scientific research), we are aware of ourselves as the only known instance of life, of consciousness, of intelligence, of technology, and of civilization in the observable universe. These emergent complexities may be represented elsewhere in the universe, but we do not have any empirical evidence of these emergent complexities beyond Earth.

Suppose, then, that scientific knowledge expands along with human civilization. Suppose we arrive at the geologically complex moons of Jupiter and Saturn, whether in the form of human explorers or in the form of automated spacecraft, and despite sampling several subsurface oceans and finding them relatively clement toward life, they are all nevertheless sterile. And suppose that we extensively research Mars and find no subsurface, deep-dwelling microorganisms on the Red Planet. Suppose we search our entire solar system high and low and there is no trace of life anywhere except on Earth. The solar system, in this scenario, is utterly sterile except for Earth and the microbes that may float into space from the upper atmosphere.

Further suppose that, even after we discover a thoroughly sterile solar system, all of the growth of scientific knowledge either confirms or is consistent with the present body of scientific knowledge. That is to say, we add to our scientific knowledge throughout the process of exploring the solar system, but we don’t discover anything that overturns our scientific knowledge in a major way. There may be “revolutionary” expansions of knowledge, but no revolutionary paradigm shifts that force us to rethink science from the ground up.

At this stage, what are we to think? The science that brought us to see the potential problem represented by the Fermi paradox is confirmed, meaning that our understanding of biology, the origins of life, and the development of planets in our solar system is refined but not changed, yet we don’t find any other life even in environments in which we would expect to find life, as in clement subsurface oceans. I think this would sharpen the feeling of the paradoxicalness of the Fermi paradox without shedding much light on an improved formulation of the problem that would seem less paradoxical, but it wouldn’t sharpen the paradox to a degree that would force a paradigm shift and a reassessment of our place in the universe, i.e., it wouldn’t force us to rethink the astrobiology of the human condition.

Let us take this a step further. Suppose our technology improves to the point that we can visit a number of nearby planetary systems, again, whether by human exploration or by automated spacecraft. Suppose we visit a dozen nearby stars in our galactic neighborhood and we find a few planets that would be perfect candidates for living worlds with a biosphere — in the habitable zone of their star, geologically complex with active plate tectonics, liquid surface water, appropriate levels of stellar insolation without deadly levels of radiation or sterilizing flares, etc. — and these worlds are utterly sterile, without even so much as a microbe to be found. No sign of life. And no sign of life in any other nooks and crannies of these other planetary systems, which will no doubt also have subsurface oceans beyond the frost line, and other planets that might give rise to other forms of life.

At this stage in the expansion of our scientific knowledge, we would probably begin to think that the Fermi paradox was to be resolved by the rarity of the origins of life. In other words, the origins of life is the great filter. We know that there is a lot of organic chemistry in the universe, but what doesn’t take place very often is the integration of organic molecules into self-replicating macro-molecules. This would be a reasonable conclusion, and might prove to be an additional spur to studying the origins of life on Earth. Again, our deep dive both into other planets and into the life sciences confirms what we know about science and finds no other life (in the present thought experiment).

While there would be a certain satisfaction in narrowing the focus of the Fermi paradox to the origins of life, if the growth of scientific knowledge continues to confirm the basic outlines of what we know about the life sciences, it would still be a bit paradoxical that the life sciences understood in a completely naturalistic manner would render the transition from organic molecules to self-replicating macro-molecules so rare. In addition to prompting a deep dive into origins of life research, there would probably also be a lot of number-crunching in order to attempt to nail down the probability of an origins of life event taking place given all the right elements are available (and in this thought experiment we are stipulating that all the right elements and all the right conditions are in place).

Suppose, now, that human civilization becomes a spacefaring supercivilization, in possession of technologies so advanced that we are more-or-less empowered to explore the universe at will. In our continued exploration of the universe and the continued growth of scientific knowledge, the same scenario as previously described continues to obtain: our scientific knowledge is refined and improved but not greatly upset, but we find that the universe is utterly and completely sterile except for ourselves and other life derived from the terrestrial biosphere. This would be “proof” of a definitive kind that terrestrial life is unique in the universe, but would this finding resolve the Fermi paradox? Wouldn’t it be a lot like cutting the Gordian knot to assert that the Fermi paradox was resolved because only a single origins of life event occurred in the universe? Wouldn’t we want to know why the origins of life was such a hurdle? We would, and I suspect that origins of life research would be pervasively informed by a desire to understand the rarity of the event.

Suppose that we ran the numbers on the kind of supercomputers that a supercivilization would have available to it, and we found that, even though our application of probability to the life sciences indicated that origins of life events should, strictly speaking, be very rare, they shouldn’t be so rare that there was only a single, unique origins of life event in the history of the universe. Say that, given the age and the extent of the universe, which is very old and vast beyond human comprehension, life should have originated a half dozen times. By this point, however, we are a spacefaring supercivilization, and we can empirically confirm that there is no other life in the universe. We would not have missed another half dozen instances of life, and yet our science predicts them. However, a half dozen predicted instances compared to no other observed instances of life isn’t yet even an order of magnitude difference, so it doesn’t bother us much.

We can ratchet up this scenario as we have ratcheted up the previous scenarios: probability and biology might converge upon a likelihood of a dozen instances of other origins of life events, or a hundred such instances, and so on, until the orders of magnitude pile up and we have a paradox on our hands again, despite having exhaustive empirical evidence of the universe and its sterility.
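The sense in which the orders of magnitude “pile up” can be made concrete with a toy calculation (my own illustration; the Poisson model and the expected values below are assumptions, not claims from the scenario): if origins of life events occur independently and rarely, their number is approximately Poisson-distributed, and the probability of a truly sterile universe given an expected number of events lam is e^(-lam).

```python
import math

def prob_sterile(lam: float) -> float:
    """Probability of zero origins of life events, given that the expected
    number of such events in the universe is lam (toy Poisson model)."""
    return math.exp(-lam)

# Illustrative expected values only: a half dozen, a dozen, a hundred.
for lam in (6, 12, 100):
    print(f"expected events: {lam:>3}  P(universe sterile): {prob_sterile(lam):.3e}")
```

As the predicted number of events grows, the probability that an exhaustively surveyed yet sterile universe is consistent with our science collapses toward zero, which is how the paradox would reassert itself despite complete empirical evidence.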

At what point in the escalation of this scenario do we begin to question ourselves and our scientific understanding in a more radical way? At what point does the strangeness of the universe begin to point beyond itself, and we begin to consider non-naturalistic solutions to the Fermi paradox, when, by some ways of understanding the paradox, it has been fully resolved, and should be regarded as such by any reasonable person? At what point should a rational person consider as a possibility that a universe empty of life except for ourselves might be the result of supernatural creation? At what point would we seriously consider the naturalistic equivalent of supernatural creation, say, in a scenario such as the simulation hypothesis? It might make more sense to suppose that we are an experiment in cosmic isolation conducted by some greater intelligence, than to suppose that the universe entire is sterile except for ourselves.

I should be clear that I am not advocating a non-naturalistic solution to the Fermi paradox. However, I find it an interesting philosophical question that there might come a point at which the resolution of a paradox requires that we look beyond naturalistic explanations, and perhaps we may have to, in extremis, reconsider the boundary between the naturalistic and the non-naturalistic. I have been thinking about this problem a lot lately, and it seems to me that the farther we depart from the ordinary business of life, when we attempt to think about scales of space and time inaccessible to human experience (whether the very large or the very small), the line between the naturalistic and the non-naturalistic becomes blurred, and perhaps it ultimately ceases to be meaningful. In order to solve the problem of the universe and our place within the universe (if it is a problem), we may have to consider a solution set that is larger than that dictated by the naturalism of science on a human scale. This is not a call for supernaturalistic explanations for scientific problems, but rather a call to expand the scope of science beyond the bounds with which we are currently comfortable.

. . . . .


Computational Omniscience

18 December 2013

Wednesday



What does it mean for a body of knowledge to be founded in fact? This is a central question in the philosophy of science: do the facts suggest hypotheses, or are hypotheses used to give meanings to facts? These questions are also posed in history and the philosophy of history. Is history a body of knowledge founded on facts? What else could it be? But do the facts of history suggest historical hypotheses, or do our historical hypotheses give meaning and value to historical facts, without which the bare facts would add up to nothing?

Is history a science? Can we analyze the body of historical knowledge in terms of facts and hypotheses? Is history subject to the same constraints and possibilities as science? An hypothesis is an opportunity — an opportunity to transform facts in the image of meaning; facts are limitations that constrain hypotheses. An hypothesis is an epistemic opportunity — an opportunity to make sense of the world — and therefore an hypothesis is also at the same time an epistemic risk — a risk of interpreting the world incorrectly and misunderstanding events.

The ancient question of whether history is an art or a science would seem to have been settled by the emergence of scientific historiography, which clearly is a science, but this does not answer the question of what history was before scientific historiography. One might reasonably maintain that scientific historiography was the implicit telos of all previous historiographical study, but this fails to acknowledge the role of historical narratives in shaping our multiple human identities — personal, cultural, ethnic, political, mythological.

If Big History should become the basis of some future axialization of industrial-technological civilization, then scientific historiography too will play a constitutive role in human identity, and while other and older identity narratives presently coexist and furnish different individuals with a different sense of their place in the world, we have already seen the beginnings of an identity shaped by science.

There is a sense in which the scientific historian today knows much more about the past than those who lived in the past and experienced that past as an immediate yet fragmentary present. One might infer the possibility of a total knowledge of the past through the cumulative knowledge of scientific historiography — a condition denied to those who actually lived in the past — although this “total” knowledge must fall short of the peculiar kind of knowledge derived from immediate personal experience, as contemplated in the thought experiment known as “Mary’s room.”

In the thought experiment known as Mary’s room, also called the knowledge argument, we imagine a condition of total knowledge and compare this with the peculiar kind of knowledge that is derived from experience, in contradistinction to the knowledge we come to through science. Here is the source of the Mary’s room thought experiment:

“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?”

Frank Jackson, “Epiphenomenal Qualia” (1982)

Philosophers disagree on whether Mary learns anything upon leaving Mary’s room. As a thought experiment, it is intended not to give a definitive answer to a circumstance that is never likely to occur in fact, but to sharpen our intuitions and refine our formulations. We can try to do the same with formulations of an ideal totality of knowledge derived from scientific historiography. There is a sense in which scientific historiography allows us to know much more about the past than those who lived in the past. To echo a question of Thomas Nagel, was there something that it was like to be in the past? Are there, or were there, historical qualia? Does the total knowledge of history afforded by scientific historiography fall short of capturing historical qualia?

In the Mary’s room thought experiment the agent in question is human and the experience is imposed colorblindness. Many people live with colorblindness without the condition greatly impacting their lives, so in this context it is plausible that Mary learns nothing upon the lifting of her imposed colorblindness, since the gap between these conditions is not as intuitively obvious as the gap between agents of a fundamentally different kind (as, e.g., distinct species) or between experiences of a fundamentally different kind, in which it is not plausible that the lifting of an imposed limitation on experience results in no significant impact on one’s life.

We can sharpen the formulation of Mary’s room, and thus potentially sharpen our own intuitions, by taking a more intense experience than that of color vision. We can also alter the sense of this thought experiment by considering the question across distinct species or across the division between minds and machines. For example, if a machine learned everything that there is to know about eating would that machine know what it was like to eat? Would total knowledge after the manner of Mary’s knowledge of color suffice to exhaust knowledge of eating, even in the absence of an actual experience of eating? I doubt that many would be convinced that learning about eating without the experience of eating would be sufficient to exhaust what there is to know about eating. Thomas Nagel’s thought experiment in “What is it like to be a bat?” alluded to above poses the knowledge argument across species.

We can give this same thought experiment yet another twist if we reverse the roles of minds and machines, asking of machine experience, should machine consciousness emerge, the questions we have asked of human experience (or bat experience). If a human being learned everything there is to know about AI and machine consciousness, would such a human being know what it is like to be a machine? Could knowledge of machines exhaust uniquely machine experience?

The kind of total scientific knowledge of the world implicit in scientific historiography is not unlike what Pierre Simon Laplace had in mind when he posited the possibility of predicting the entire state of the universe, past or future, on the basis of an exhaustive knowledge of the present. Laplace’s argument is also a classic determinist position:

“We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it — an intelligence sufficiently vast to submit these data to analysis — it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. The human mind offers, in the perfection which it has been able to give to astronomy, a feeble idea of this intelligence. Its discoveries in mechanics and geometry, added to that of universal gravity, have enabled it to comprehend in the same analytical expressions the past and future states of the system of the world. Applying the same method to some other objects of its knowledge, it has succeeded in referring to general laws observed phenomena and in foreseeing those which given circumstances ought to produce. All these efforts in the search for truth tend to lead it back continually to the vast intelligence which we have just mentioned, but from which it will always remain infinitely removed. This tendency, peculiar to the human race, is that which renders it superior to animals; and their progress in this respect distinguishes nations and ages and constitutes their true glory.”

Pierre Simon, Marquis de Laplace, A Philosophical Essay on Probabilities, with an introductory note by E. T. Bell, Dover Publications, Inc., New York, Chapter II

While such a Laplacean calculation of the universe would lie beyond the capability of any human being, it might someday lie within the capacity of another kind of intelligence. What Laplace here calls “an intelligence sufficiently vast to submit these data to analysis” suggests the possibility of a sufficiently advanced (i.e., sufficiently large and fast) computer that could make this calculation, thereby achieving a kind of computational omniscience.
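Laplace’s “intelligence” can be illustrated in miniature (a sketch under strong simplifying assumptions: a single deterministic oscillator, nothing like the actual universe): given complete knowledge of the present state and a time-reversible rule of evolution, both the future and the past are computable from the present.

```python
def step(x: float, v: float, dt: float = 0.01) -> tuple[float, float]:
    """One velocity-Verlet step for a unit-mass harmonic oscillator (F = -x),
    a deterministic and time-reversible rule of evolution."""
    v_half = v - 0.5 * dt * x
    x_new = x + dt * v_half
    v_new = v_half - 0.5 * dt * x_new
    return x_new, v_new

def step_back(x: float, v: float, dt: float = 0.01) -> tuple[float, float]:
    """Recover the past: reverse time by flipping velocity, stepping, flipping back."""
    x_prev, v_neg = step(x, -v, dt)
    return x_prev, -v_neg

# From the "present" state, compute 1000 steps of future, then recover the past.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = step(x, v)
for _ in range(1000):
    x, v = step_back(x, v)
# x, v are now back (to within rounding error) at the initial state (1.0, 0.0).
```

For such a toy system, “nothing would be uncertain and the future, as the past, would be present to its eyes”; the non-computable and indeterministic aspects of the real world are precisely what this miniature leaves out.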

Long before we have reached the storied “technological singularity,” in which machines surpass the intelligence of human beings and each generation of machine is able to build a yet more intelligent successor (i.e., an “intelligence explosion”), the computational power at our disposal will for all practical purposes exhaust the world, and we will thus have attained computational omniscience. We have already begun to converge upon this kind of total knowledge of the cosmos with the Bolshoi Cosmological Simulations and similar efforts with other supercomputers.

It is this kind of reasoning in regard to the future of cosmological simulations that has led to contemporary formulations of the “Simulation Hypothesis” — the hypothesis that we are, ourselves, at this moment, living in a computer simulation. According to the simulation argument, cosmological simulations become so elaborate and are refined to such a fine-grained level of detail that the simulation eventually populates itself with conscious agents, i.e., ourselves. Here, the map really does coincide with the territory, at least for us. The entity or entities conducting such a grand simulation, and presumably standing outside the whole simulation observing, can see the simulation for the simulation that it is. (The connection between cosmology and the simulation argument is nicely explained in the episode “Are We Real?” of the television series “What We Still Don’t Know” hosted by noted cosmologist Martin Rees.)

One way to formulate the idea of omniscience is to define omniscience as knowledge of the absolute infinite. The absolute infinite is an inconsistent multiplicity (in Cantorian terms). There is a certain reasonableness in this, as the logical principle of explosion, also known as ex falso quodlibet (namely, the principle that anything follows from a contradiction), means that an inconsistent multiplicity that incorporates contradictions is far richer than any consistent multiplicity. In so far as omniscience could be defined as knowledge of the absolute infinite, few would, I think, be willing to argue for the possibility of computational omniscience, so below we will pursue this from another angle, but I wanted to mention this idea of defining omniscience as knowledge of the absolute infinite because it strikes me as interesting. But no more of this for now.
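The principle of explosion invoked here has a standard textbook formulation, which can be stated in one line (this is the general logical principle, not anything specific to Cantor):

```latex
\[
p \land \neg p \vdash q
\qquad \text{(ex falso quodlibet: from a contradiction, any } q \text{ whatever follows)}
\]
```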

The claim of computational omniscience must be qualified, since computational omniscience can exhaust only that portion of the world exhaustible by computational means; computational omniscience is the kind of omniscience that we encountered in the “Mary’s room” thought experiment, which might plausibly be thought to exhaust the world, or which might with equal plausibility be seen as falling far short of all that might be known of some body of knowledge.

Computational omniscience is distinct from omniscience simpliciter; while exhaustive in one respect, it fails to capture certain aspects of the world. Computational omniscience may be defined as the computation of all that is potentially computable, which leaves aside that which is not computable. The non-computable aspects of the world include, but are not limited to, non-computable functions, quantum indeterminacy, that which is non-quantifiable (for whatever reason), the qualitative dimension of conscious experience (i.e., qualia), and that which is inferred but not observable. These are pretty significant exceptions. What is left over? What part of the world is computable? This is a philosophical question that we must ask once we understand that computability has limits and that these limits may be distinct from the limits of human intelligence. Just as conscious biological agents face intrinsic epistemic limits, so non-biological agents would also face intrinsic epistemic limits — in so far as a non-biological agent can be considered an epistemic agent — but these limitations on biological and non-biological agents are not necessarily the same.

The ultimate inadequacy of computational omniscience points to the possibility of limited omniscience — though one might well assert that omniscience that is limited is not really omniscience at all. The limited omniscience of a computer capable of computing the fate of the known universe may be compared to recent research on what Daniel Kahneman calls the bounded rationality of human minds. Artificial intelligence is likely to be a bounded intelligence that exemplifies bounded rationality, although its boundaries will not necessarily coincide precisely with the boundaries that define human bounded rationality.

The idea of limited omniscience has been explored in mathematics, particularly in regard to constructivism. Constructivist mathematicians have formulated principles of omniscience, and, wary of both unrestricted use of tertium non datur and of its complete interdiction in the manner of intuitionism, the limited principle of omniscience has been proposed as a specific way to skirt around some of the problems implicit in the realism of unrestricted tertium non datur.
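The limited principle of omniscience (LPO) referred to here is usually stated, following Bishop, for binary sequences; it is the weak fragment of tertium non datur that constructivists single out for study:

```latex
\[
\textbf{(LPO)}\qquad
\forall \alpha \in \{0,1\}^{\mathbb{N}}:\;
\bigl(\forall n.\ \alpha_n = 0\bigr) \;\lor\; \bigl(\exists n.\ \alpha_n = 1\bigr)
\]
```

Classically this is trivial; constructively it is not acceptable, since affirming either disjunct for an arbitrary sequence would require a completed survey of infinitely many terms.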

When we allow our mathematical thought to coincide with realities and infinities — an approach that we are assured is practical and empirical, and bound to only yield benefits — we find ourselves mired in paradoxes, and in the interest of freeing ourselves from this conceptual mire we are driven to a position like Einstein’s famous aphorism that, “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We separate and compartmentalize factual realities and mathematical infinities because we have difficulty, “to hold two opposing ideas in mind at the same time and still retain the ability to function.”

Indeed, it was Russell’s attempt to bring together Cantorian conceptions of set theory with practical measures of the actual world that begat the definitive paradox of set theory that bears Russell’s name, the responses to which have in large measure shaped post-Cantorian mathematics. Russell gives the following account of his discovery of his eponymous paradox in his Autobiography:

Cantor had a proof that there is no greatest number, and it seemed to me that the number of all the things in the world ought to be the greatest possible. Accordingly, I examined his proof with some minuteness, and endeavoured to apply it to the class of all the things there are. This led me to consider those classes which are not members of themselves, and to ask whether the class of such classes is or is not a member of itself. I found that either answer implies its contradictory.

Bertrand Russell, The Autobiography of Bertrand Russell, Vol. I, 1872-1914, “Principia Mathematica”

None of the great problems of philosophical logic from this era — i.e., the fruitful period in which Russell and several colleagues created mathematical logic — were “solved”; rather, a consensus emerged among philosophers of logic, conventions were established, and, perhaps most importantly, Zermelo’s axiomatization of set theory became the preferred mathematical treatment of set theory, which allowed mathematicians to skirt the difficult issues in philosophical logic and to focus on the mathematics of set theory largely without logical distractions.

It is an irony of intellectual history that the next great revolution in mathematics to follow after set theory — which latter is, essentially, the mathematical theory of the infinite — was to be that of computer science, which constitutes the antithesis of set theory in so far as it is the strictest of strict finitisms. It would be fair to characterize the implicit theoretical position of computer science as a species of ultra-finitism, since computers cannot formulate even the most tepid potential infinite. All computing machines have an upper bound of calculation, and this is a physical instantiation of the theoretical position of ultra-finitism. This finitude follows from embodiment, which a computer shares with the world itself, and which therefore makes ultra-finite computing consistent with an ultra-finite world. In an ultra-finite world, it is possible that the finite may exhaust the finite and computational omniscience may be realized.
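The ultra-finitism of embodied computing can be put in elementary terms (a back-of-the-envelope sketch; the particle count is a commonly cited rough figure, used here only for illustration): a machine with n bits of state has at most 2^n distinct configurations, so there is a largest number it can represent and hence an upper bound on its calculation.

```python
def state_count(n_bits: int) -> int:
    """Number of distinct configurations of a machine with n_bits of state."""
    return 2 ** n_bits

# An 8-bit register admits exactly 256 states. Even a hypothetical machine
# using every particle in the observable universe (roughly 10**80, a rough
# figure) as one bit of state would have a state count of 2**(10**80):
# unimaginably large, but strictly finite -- an upper bound of calculation.
assert state_count(8) == 256
```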

The universe defined by the Big Bang and all that followed from the Big Bang is a finite universe, and may in virtue of its finitude admit of exhaustive calculation, though this finite universe of observable cosmology may be set in an infinite context. Indeed, even the finite universe may not be as rigorously finite as we suppose, given that the limitations of our observations are not necessarily the limits of the real, but rather are defined by the limit of the speed of light. Leonard Susskind has rightly observed that what we observe of the universe is like being inside a room, the walls of which are the distant regions of the universe receding from us at superluminal velocity at the point at which they disappear from our view.

Recently in The Size of the World I quoted this passage from Leonard Susskind:

“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”

Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
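Susskind’s figure of “about fifteen billion light years” is roughly the Hubble radius, c/H0, the distance at which the recession velocity implied by Hubble’s law reaches the speed of light. A quick check (the value of H0 used here is a representative assumption, and this is only a rough sketch of the horizon, not a full relativistic treatment):

```python
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per megaparsec (assumed value)
LY_PER_MPC = 3.2616e6     # light-years per megaparsec

hubble_radius_mpc = C_KM_S / H0
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9
print(f"Hubble radius: about {hubble_radius_gly:.1f} billion light-years")
```

The result, on the order of fourteen billion light-years, agrees with Susskind’s round figure.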

This observation has not yet been sufficiently appreciated (as I previously noted in The Size of the World). What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole has become unobservable. We might term such empirical realities just beyond our grasp empirical unobservables. Empirical unobservables include (but are presumably not limited to — our “out” clause) all that which lies beyond the event horizon of Susskind’s inside-out black hole, that which lies beneath the event horizon of a black hole as conventionally conceived, and that which lies outside the lightcone defined by our present. There may be other empirical unobservables that follow from the structure of relativistic space. There are, moreover, many empirically inaccessible points of view, such as the interiors of stars, which cannot be observed for contingent reasons distinct from the impossibility of observing certain structures of the world hidden from us by the nature of spacetime structure.

What if the greater part of the universe passes in the oblivion of the empirical unobservables? This is a question posed by a paper that appeared in 2007, The Return of a Static Universe and the End of Cosmology, which garnered some attention because of its quasi-apocalyptic claim of the “end of cosmology” (which sounds a lot like Heidegger’s proclamation of the “end of philosophy” or any number of other proclamations of the “end of x“). This paper was eventually published in Scientific American as The End of Cosmology? An accelerating universe wipes out traces of its own origins by Lawrence M. Krauss and Robert J. Scherrer.

In calling the “end of cosmology” a “quasi-apocalyptic” claim I don’t mean to criticize or ridicule the paper or its argument, which is of the greatest interest. As in the subtitle of the Scientific American article, it appears to be the case that an accelerating universe wipes out traces of its own origins. If a quasi-apocalyptic claim can be scientifically justified, it is legitimate and deserves our intellectual respect. Indeed, the study of existential risk could be considered a scientific study of apocalyptic claims, and I regard this as an undertaking of the first importance. We need to think seriously about existential risks in order to mitigate them rationally to the extent possible.

In my posts on the prediction and retrodiction walls (The Retrodiction Wall and Addendum on the Retrodiction Wall) I introduced the idea of effective history, which is that span of time which lies between the retrodiction wall in the past and the prediction wall in the future. One might similarly define effective cosmology as consisting of that region or those regions of space within the practical limits of observational cosmology, and excluding those regions of space that cannot be observed — not merely what is hidden from us by contingent circumstances, but that which we are incapable of observing because of the very structure of the universe and our place (ontologically speaking) within it.

There are limits to what we can know that are intrinsic to what we might call the human condition, except that this formulation is anthropocentric. The epistemic limits represented by effective history and effective cosmology are limitations that would hold for any sentient, conscious organism emergent from natural history, i.e., would hold for any peer species. Some of these limitations are intrinsic to our biology and to the kind of mind that is emergent from biological organisms. Some of these limitations are intrinsic to the world in which we find ourselves, and to the vantage point within the cosmos from which we view our world. Ultimately, these limitations are one and the same, as the kind of biological beings that we are is a function of the kind of cosmos in which we have emerged, and which has served as the context of our natural history.

Within the domains of effective history and effective cosmology, we are limited further still by the non-quantifiable aspects of the world noted above. Setting aside non-quantifiable aspects of the world, what I have elsewhere called intrinsically arithmetical realities are a paradigm case of what remains computable once we have separated out the non-computable exceptions. (Beyond the domains of effective history and effective cosmology, hence beyond the domain of computational omniscience, there lies the infinite context of our finite world, about which we will say no more at present.) Intrinsically arithmetical realities are intrinsically amenable to quantitative methods and are potentially exhaustible by computational omniscience.

Some have argued that the whole of the universe is intrinsically arithmetical in the sense of being essentially mathematical, as in the “Mathematical Universe Hypothesis” of Max Tegmark. Tegmark writes:

“[The Mathematical Universe Hypothesis] explains the utility of mathematics for describing the physical world as a natural consequence of the fact that the latter is a mathematical structure, and we are simply uncovering this bit by bit.”

The Mathematical Universe by Max Tegmark

Tegmark also explicitly formulates two companion principles:

External Reality Hypothesis (ERH): There exists an external physical reality completely independent of us humans.

…and…

Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure.

I find these formulations to be philosophically naïve in the extreme, but as a contemporary example of a perennial tradition of philosophical thought Tegmark is worth citing. Tegmark is seeking an explicit answer to Wigner’s famous question about the “unreasonable effectiveness of mathematics.” It is to be expected that some responses to Wigner will take the form that Tegmark represents, but even if our universe is a mathematical structure, we do not yet know how much of that mathematical structure is computable and how much of that mathematical structure is not computable.

In my Centauri Dreams post on SETI, METI, and Existential Risk I mentioned that I found myself unable to identify with either the proponents of unregulated METI or those who argue for the regulation of METI efforts, since I disagreed with key postulates on both sides of the argument. METI advocates typically hold that interstellar flight is impossible, therefore METI can pose no risk. Advocates of METI regulation typically hold that unintentional EM spectrum leakage is not detectable at interstellar distances, therefore METI poses a risk we do not face at present. Since I hold that interstellar flight is possible, and that unintentional EM spectrum radiation is (or will be) detectable, I can’t comfortably align myself with either party in the discussion.

I find myself similarly caught on the horns of a dilemma when it comes to computability, the cosmos, and determinism. Computer scientists and singularitarian enthusiasts of exponentially increasing computer power culminating in an intelligence explosion seem content to assume that the universe is not only computable, presenting no fundamental barriers to computation, but foresee a day when matter itself is transformed into computronium and the whole universe becomes a grand computer. Criticism of such enthusiasts often takes the form of denying the possibility of AI or of machine consciousness, denying that this or that is technically possible, and so on. It seems clear to me that only a portion of the world will ever be computable, but that portion is considerable, and a great many technological developments will fundamentally change our relationship to the world. But no matter how much either human beings or machines are transformed by the continuing development of industrial-technological civilization, non-computable functions will remain non-computable. Thus I cannot count myself either as a singularitarian or a Luddite.

How are we to understand the limitations to computational omniscience imposed by the limits of computation? The transcomputational problem, rather than laying bare human limitations, points to the way in which minds are not subject to computational limits. Minds as minds do not function computationally, so the evolution of mind (which drives the evolution of civilization) embodies different bounds and different limits than the Bekenstein bound and Bremermann’s limit, as well as different possibilities and different opportunities. The evolutionary possibilities of the mind are radically distinct from the evolutionary possibilities of bodies subject to computational limits, even though minds are dependent upon the bodies in which they are embodied.

Bremermann’s limit is 10^93, which is somewhat arbitrary, but whether we draw the line here or elsewhere doesn’t really matter for the principle at stake. Embodied computing must run into intrinsic limits, e.g., from relativity — a computer that exceeded Bremermann’s limit by too much would be subject to relativistic effects that would mean that gains in size would reach a point of diminishing returns. Recent brain research has suggested that the human brain is already close to the biological limit for effective signal transmission within and between its various parts, so that a larger brain would not necessarily be smarter or faster or more efficient. Indeed, it has been pointed out that elephant and whale brains are larger than human brains, although the encephalization quotient is much higher in human beings despite the difference in absolute brain size.
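The 10^93 figure is arbitrary only in its choice of parameters, not in its derivation: Bremermann arrived at it by multiplying his quantum limit of roughly 2 × 10^47 bits per gram per second by the mass of the Earth and an assumed age of ten billion years. A back-of-the-envelope sketch of that arithmetic (the round figures here are Bremermann’s, not exact physical constants):

```python
import math

# Bremermann's round figures: ~2e47 bits per gram per second,
# an Earth-mass computer (~6e27 g), running for 1e10 years.
bits_per_gram_per_sec = 2e47
earth_mass_g = 6e27
seconds = 1e10 * 3.15e7  # ten billion years in seconds

# Total bits processable by an Earth-mass computer over that span
total_bits = bits_per_gram_per_sec * earth_mass_g * seconds
print(f"~10^{math.log10(total_bits):.0f} bits")  # on the order of 10^93
```

Draw the line at a larger mass or a longer span and the exponent shifts by only a few orders of magnitude, which is why the exact placement of the limit does not matter for the principle at stake.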

The functioning of organic bodies easily exceeds 10^93 possible states. The Wikipedia entry on the transcomputational problem says:

“The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state) the processing of the retina as a whole requires processing of more than 10^300,000 bits of information. This is far beyond Bremermann’s limit.”

This is just the eye alone. The body has far more nerve-ending inputs than those of the eye, and an essentially limitless number of outputs. So exhausting the possible computational states of even a relatively simple organism easily surpasses Bremermann’s limit and is therefore transcomputational. Some very simple organisms might not be transcomputational, given certain quantifiable parameters, but I think most complex life, and certainly things as complex as mammals, are radically transcomputational. Therefore the mind (whatever it is) is embodied in a transcomputational body, whose possible states no computer could exhaustively calculate. The brain itself is radically transcomputational with its 100 billion neurons (each of which can take at minimum two distinct states, and possibly more).
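The arithmetic behind the retina example is easy to verify: a million binary cells yield 2^1,000,000 possible states, and taking the base-10 logarithm shows why this dwarfs Bremermann’s limit. A minimal sketch:

```python
import math

retina_cells = 1_000_000  # light-sensitive cells, two states each

# log10 of the number of possible retinal states (2 ** 1_000_000),
# computed without materializing the astronomically large integer
log10_states = retina_cells * math.log10(2)
print(f"~10^{log10_states:,.0f} possible states")  # ~10^301,030

# Bremermann's limit, ~10^93, is negligible by comparison
print(log10_states > 93)  # True
```

The same calculation for 100 billion neurons at two states each gives 2^(10^11) states, an exponent some five orders of magnitude larger still, which is the sense in which the brain is radically transcomputational.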

Yet even machine embodiments can be computationally intractable (in the same way that organic bodies are computationally intractable), exceeding the possibility of exhaustively calculating every possible material state of the mechanism (on a molecular or atomic level). Thus the emergence of machine consciousness would also supervene upon a transcomputational embodiment. It is, at present, impossible to say whether a machine embodiment of consciousness would be a limitation upon that consciousness (because the embodiment is likely to be less radically transcomputational than the brain) or a facilitation of consciousness (because machines can be arbitrarily scaled up in a way that organic bodies cannot be).

Since the mind stands outside the possibilities of embodied computation, if machine consciousness emerges, machine embodiments will be as non-transparent to machine minds as organic embodiment is non-transparent to organic minds, but the machine minds, non-transparent to their embodiment as they are, will have access to energy sources far beyond any resources an organic body could provide. Such machine consciousness would not be bound by brute force calculation or linear models (as organic minds are not so bound), but would have far greater resources at its command for the development of its consciousness.

Since the body that today embodies mind already far exceeds Bremermann’s limit, and no machine as machine is likely to exceed this limit, machine consciousness emergent from computationally tractable bodies may, rather than being super-intelligent in ways that biologically derived minds can never be, on the contrary be a pale shadow of an organic mind in an essentially transcomputational body. This gives a whole new twist to the much-discussed idea of the mind’s embodiment.

Computation is not the be-all and end-all of mind; it is, in fact, only peripheral to mind as mind. If we had to rely upon calculation to make it through our day, we wouldn’t be able to get out of bed in the morning; most of the world is simply too complex to calculate. But we have a “work around” — consciousness. Marginalized as the “hard problem” in the philosophy of mind, or simply neglected in scientific studies, consciousness enables us to cut the Gordian Knot of transcomputability and to act in a complex world that far exceeds our ability to calculate.

Neither is consciousness the be-all and end-all of mind, although the rise of computer science and the increasing role of computers in our lives have led many to conclude that computation is primary and that it is consciousness that is peripheral. And, to be sure, in some contexts consciousness is peripheral. In many of the same contexts of our EEA in which calculation is impossible due to complexity, consciousness is also irrelevant because we respond by an instinct that is deeper than and other than consciousness. In such cases the mechanism of instinct takes over, but this is a biologically specific mechanism, evolved to serve the purpose of differential survival and reproduction; it would be difficult to re-purpose a biologically specific mechanism for any kind of abstract computing task, and not particularly helpful either.

Consciousness is not the be-all and end-all not only because instinct largely circumvents it, but also because machines have a “work around” for consciousness just as consciousness is a “work around” for the limits of computability; mechanism is a “work around” for the inefficiencies of consciousness. Machine mechanisms can perform precisely those tasks that so tax organic minds as to be virtually unsolvable, in a way that is perfectly parallel to the conscious mind’s ability to perform tasks that machines cannot yet even approach — not because machines can’t do the calculations, but because machines don’t possess the “work around” ability of consciousness.

It is when computers have the “work around” capacity that conscious beings have that they will be in a position to effect an intelligence explosion. That is to say, machine consciousness is crucial to AI that is able to perform in the way that AI is expected to perform, though AI researchers tend to be dismissive of consciousness. If the proof of the pudding is in the eating, well, then it is consciousness that allows us to “chunk the proofs” (i.e., to divide the proof into individually manageable pieces) and get to the eating all the more efficiently.

. . . . .

Grand Strategy Annex

. . . . .
