Wednesday


old computer

Technologies may be drivers of change or facilitators of change, the latter employed by the former: technologies that are facilitators of change are the tools that enable the development of the technologies in the vanguard of economic, social, and political change. A newly introduced technology can provide a competitive advantage when one business enterprise has mastered it while other business enterprises have not. Once a technology has been mastered by all elements of the economy, it ceases to provide a competitive advantage to any one firm and is equally possessed and employed by all. At that point of mature development, a technology also ceases to be a driver of change and becomes a facilitator of change.

Any technology that has become a part of the infrastructure may be considered a facilitator of change rather than a driver of change. Civilization requires an infrastructure; industrial-technological civilization requires an industrial-technological infrastructure. We are all familiar with infrastructure such as roads, bridges, ports, railroads, schools, and hospitals. There is also the infrastructure that we think of as “utilities” — water, sewer, electricity, telecommunications, and now computing — which we build into our built environment, retrofitting old buildings and sometimes entire older cities in order to bring them up to the standards of technology assumed by the industrialized world today.

All of the technologies that now constitute the infrastructure of industrial-technological civilization were once drivers of change. Before the industrial revolution, the building of ports and shipping united coastal communities in many regions of the world; the Romans built a network of roads and bridges; in medieval Europe, schools and hospitals became a routine part of the structure of cities; early in the industrial revolution, railroads became the first mechanized form of rapid overland transportation. Consider how the transcontinental railroad in North America and the trans-Siberian railway in Russia knitted together entire continents, and their role as transformative technologies should be clear.

Similarly, the technologies we think of as utilities were once drivers of change. Hot and cold running water and indoor plumbing, still absent in much of the world, did not become common in the industrialized world until the past century, but early agricultural and urban centers only came into being with the management of water resources, which reached a height in the most sophisticated cities of classical antiquity, with water supplied by aqueducts and sewage taken away by underground drainage systems that were superior to many in existence today. With the advent of natural gas and electricity as fuels for home and industry, industrial cities were retrofitted for these services, and have since been retrofitted again for telecommunications, and now computers.

The most recent technology to have had a transformative effect on socioeconomic life is computing. In the past several decades — since the end of the Second World War, when the first digital, programmable electronic computers were built for code breaking (the Colossus in the UK) — computer technology has grown exponentially and eventually affected almost every aspect of life in industrialized nation-states. During this period, computing has been a driver of change across socioeconomic institutions. Building a faster and more sophisticated computer has been an end in itself for technologists and computer science researchers. While this will continue to be the case for some time, computing has begun to make the transition from being a driver of change in and of itself to being a facilitator of change in other areas of technological innovation. In other words, computers are becoming a part of the infrastructure of industrial-technological civilization.

The transformation of the transformative technology of computing from a driver of change into a facilitator of change for other technologies has been recognized for more than ten years. In 2003 an article by Nicholas G. Carr, "IT Doesn't Matter," stirred up a significant controversy when it was published. More recently, Mark R. DeLong, in "Research computing as substrate," calls computing a substrate instead of an infrastructure, though the idea is much the same. DeLong writes of computing: "It is a common base that supports and nurtures research work and scholarly endeavor all over the university." Although computing is a focus of research work and scholarly endeavor in its own right, it also serves a larger supporting role, not only in the university, but also throughout society.

Although today we still fall far short of computational omniscience, the computer revolution has happened, as evidenced by the pervasive presence of computers in contemporary socioeconomic institutions. Computers have been rapidly integrated into the fabric of industrial-technological civilization, to the point that those of us born before the computer revolution, and who can remember a world in which computers were a negligible influence, can nevertheless only with difficulty remember what life was like without computers.

Despite, then, what technology enthusiasts tell us, computers are not going to revolutionize our world a second time. We can imagine faster computers, smaller computers, better computers, computers with more storage capacity, and computers running innovative applications that make them useful in unexpected ways, but the pervasive use of computers already achieved gives us a baseline for predicting future computer capacities, and these capacities will differ in degree from those of earlier computers, but not in kind. We already know what it is like to see exponential growth in computing technology, and so we can account for it; computers have ceased to be a disruptive technology, and will not become a disruptive technology a second time.

Recently quantum computing made the cover of TIME magazine, together with a number of hyperbolic predictions about how quantum computing will change everything (the quantum computer is called “the infinity machine”). There have been countless articles about how “big data” is going to change everything also. Similar claims are made for artificial intelligence, and especially for “superintelligence.” An entire worldview has been constructed — the technological singularity — in which computing remains an indefinitely disruptive technology, the development of which eventually brings about the advent of the Millennium — the latter suitably re-conceived for a technological age.

Predictions of this nature are made precisely because a technology has become widely familiar, which is almost a guarantee that the technology in question is now part of the infrastructure of the ordinary business of life. One can count on being understood when one makes predictions about the future of the computer, in the same way that one might have been understood in the late nineteenth or early twentieth century if making predictions about the future of railroads. But in so far as this familiarity marks the transition in the life of a technology from being a driver of change to being a facilitator of change, such predictions are misleading at best, and flat out wrong at worst. The technologies that are going to be drivers of change in the coming century are not those that have devolved to the level of infrastructure; they are (or will be) unfamiliar technologies that can only be understood with difficulty.

The distinction between technologies that are drivers of change and technologies that are facilitators of change (like almost all distinctions) admits of a certain ambiguity. In the present context, one of these ambiguities concerns what constitutes a computing technology. Are computing applications distinct from computing? What of technologies for which computing is indispensable, and which could not have come into being without computers? This line of thought can be pursued backward: computers could not exist without electricity, so should computers be considered anything new, or merely an extension of electrical power? And electrical power could not have come about without the steam- and fossil-fueled industry that preceded it. This can be pursued back to the first stone tools, and the argument can be made that nothing new has happened, in essence, since the first chipped flint blade.

Perhaps the most obvious point of dispute in this analysis is the possibility of machine consciousness. I will acknowledge without hesitation that the emergence of machine consciousness would be a potentially revolutionary development, and it would constitute a disruptive technology. Machine consciousness, however, is frequently conflated with artificial intelligence and with superintelligence, and these must be distinguished. Artificial intelligence of a rudimentary form is already crucial to the automation of industry; machine consciousness would be the artificial production, in a machine substrate, of the kind of consciousness that we personally experience as our own identity, and which we infer to be at the basis of the action of others (what philosophers call the problem of other minds).

What makes the possibility of machine consciousness interesting to me, and potentially revolutionary, is that it would constitute a qualitatively novel emergent from computing technology, and not merely another application of computing. Computers stand in the same relationship to electricity that machine consciousness would stand in relation to computing: a novel and transformational technology emergent from an infrastructural technology, that is to say, a driver of change that emerges from a facilitator of change.

The computational infrastructure of industrial-technological civilization is more or less in place at present, a familiar part of our world, like the early electrical grids that appeared in the industrialized world once electricity became sufficiently commonplace to become a utility. Just as the electrical grid has been repeatedly upgraded, and will continue to be upgraded for the foreseeable future, so too the computational infrastructure of industrial-technological civilization will be continually upgraded. But the upgrades to our computational infrastructure will be incremental improvements that will no longer be major drivers of change either in the economy or in sociopolitical institutions. Other technologies will emerge that will take that role, and they will emerge from an infrastructure that is no longer driving socioeconomic change, but is rather the condition of the possibility of this change.

. . . . .

Colossus

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .


Computational Omniscience

18 December 2013

Wednesday


hal9000

What does it mean for a body of knowledge to be founded in fact? This is a central question in the philosophy of science: do the facts suggest hypotheses, or are hypotheses used to give meanings to facts? These questions are also posed in history and the philosophy of history. Is history a body of knowledge founded on facts? What else could it be? But do the facts of history suggest historical hypotheses, or do our historical hypotheses give meaning and value to historical facts, without which the bare facts would add up to nothing?

Is history a science? Can we analyze the body of historical knowledge in terms of facts and hypotheses? Is history subject to the same constraints and possibilities as science? An hypothesis is an opportunity — an opportunity to transform facts in the image of meaning; facts are limitations that constrain hypotheses. An hypothesis is an epistemic opportunity — an opportunity to make sense of the world — and therefore an hypothesis is also at the same time an epistemic risk — a risk of interpreting the world incorrectly and misunderstanding events.

The ancient question of whether history is an art or a science would seem to have been settled by the emergence of scientific historiography, which clearly is a science, but this does not answer the question of what history was before scientific historiography. One might reasonably maintain that scientific historiography was the implicit telos of all previous historiographical study, but this fails to acknowledge the role of historical narratives in shaping our multiple human identities — personal, cultural, ethnic, political, mythological.

If Big History should become the basis of some future axialization of industrial-technological civilization, then scientific historiography too will play a constitutive role in human identity, and while other and older identity narratives presently coexist and furnish different individuals with a different sense of their place in the world, we have already seen the beginnings of an identity shaped by science.

There is a sense in which the scientific historian today knows much more about the past than those who lived in the past and experienced that past as an immediate yet fragmentary present. One might infer the possibility of a total knowledge of the past through the cumulative knowledge of scientific historiography — a condition denied to those who actually lived in the past — although this "total" knowledge must fall short of the peculiar kind of knowledge derived from immediate personal experience, as contemplated in the thought experiment known as "Mary's room."

In the thought experiment known as Mary's room, also called the knowledge argument, we imagine a condition of total knowledge and compare it with the peculiar kind of knowledge that is derived from experience, in contradistinction to the kind of knowledge we come to through science. Here is the source of the Mary's room thought experiment:

“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?”

Frank Jackson, “Epiphenomenal Qualia” (1982)

Philosophers disagree on whether Mary learns anything upon leaving Mary's room. As a thought experiment, it is intended not to give us a definitive answer to a circumstance that is never likely to occur in fact, but to sharpen our intuitions and refine our formulations. We can try to do the same with formulations of an ideal totality of knowledge derived from scientific historiography. There is a sense in which scientific historiography allows us to know much more about the past than those who lived in the past. To echo a question of Thomas Nagel, was there something that it was like to be in the past? Are there, or were there, historical qualia? Does the total knowledge of history afforded by scientific historiography fall short of capturing historical qualia?

In the Mary's room thought experiment the agent in question is human and the experience is imposed colorblindness. Many people live with colorblindness without the condition greatly impacting their lives, so in this context it is plausible that Mary learns nothing upon the lifting of her imposed colorblindness; the gap between these conditions is not as intuitively obvious as the gap between agents of a fundamentally different kind (e.g., distinct species) or between experiences of a fundamentally different kind, where it is not plausible that the lifting of an imposed limitation on experience would have no significant impact on one's life.

We can sharpen the formulation of Mary’s room, and thus potentially sharpen our own intuitions, by taking a more intense experience than that of color vision. We can also alter the sense of this thought experiment by considering the question across distinct species or across the division between minds and machines. For example, if a machine learned everything that there is to know about eating would that machine know what it was like to eat? Would total knowledge after the manner of Mary’s knowledge of color suffice to exhaust knowledge of eating, even in the absence of an actual experience of eating? I doubt that many would be convinced that learning about eating without the experience of eating would be sufficient to exhaust what there is to know about eating. Thomas Nagel’s thought experiment in “What is it like to be a bat?” alluded to above poses the knowledge argument across species.

We can give this same thought experiment yet another twist if we reverse the roles of minds and machines, and ask of machine experience, should machine consciousness emerge, the questions we have asked of human experience (or bat experience). If a human being learned everything there is to know about AI and machine consciousness, would such a human being know what it is like to be a machine? Could knowledge of machines exhaust uniquely machine experience?

The kind of total scientific knowledge of the world implicit in scientific historiography is not unlike what Pierre Simon Laplace had in mind when he posited the possibility of predicting the entire state of the universe, past or future, on the basis of an exhaustive knowledge of the present. Laplace's argument is also a classic statement of determinism:

“We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it — an intelligence sufficiently vast to submit these data to analysis — it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. The human mind offers, in the perfection which it has been able to give to astronomy, a feeble idea of this intelligence. Its discoveries in mechanics and geometry, added to that of universal gravity, have enabled it to comprehend in the same analytical expressions the past and future states of the system of the world. Applying the same method to some other objects of its knowledge, it has succeeded in referring to general laws observed phenomena and in foreseeing those which given circumstances ought to produce. All these efforts in the search for truth tend to lead it back continually to the vast intelligence which we have just mentioned, but from which it will always remain infinitely removed. This tendency, peculiar to the human race, is that which renders it superior to animals; and their progress in this respect distinguishes nations and ages and constitutes their true glory.”

Pierre Simon, Marquis de Laplace, A Philosophical Essay on Probabilities, with an introductory note by E. T. Bell, New York: Dover Publications, Inc., Chapter II

While such a Laplacean calculation of the universe would lie beyond the capability of any human being, it might someday lie within the capacity of another kind of intelligence. What Laplace here calls "an intelligence sufficiently vast to submit these data to analysis" suggests the possibility of a sufficiently advanced (i.e., sufficiently large and fast) computer that could make this calculation, thereby achieving a kind of computational omniscience.
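Laplace's conjecture can be given a compact modern gloss (the notation is mine, not Laplace's). If the complete state of the universe at time $t$ is $s_t$ and its dynamics are a deterministic map $F$, then

$$s_{t+n} = F^{n}(s_t), \qquad s_{t-n} = F^{-n}(s_t) \quad \text{(when } F \text{ is invertible)},$$

so an intelligence that knew $s_t$ exactly and could iterate $F$ would have both the past and the future "present to its eyes." Computational omniscience, in these terms, is the claim that some physical machine could actually carry out the iterations.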

Long before we reach the point at which machines surpass the intelligence of human beings and each generation of machine is able to build a yet more intelligent successor — the "intelligence explosion" of the storied "technological singularity" — the computational power at our disposal will for all practical purposes exhaust the world and will thus have attained computational omniscience. We have already begun to converge upon this kind of total knowledge of the cosmos with the Bolshoi Cosmological Simulations and similar efforts on other supercomputers.

It is this kind of reasoning in regard to the future of cosmological simulations that has led to contemporary formulations of the “Simulation Hypothesis” — the hypothesis that we are, ourselves, at this moment, living in a computer simulation. According to the simulation argument, cosmological simulations become so elaborate and are refined to such a fine-grained level of detail that the simulation eventually populates itself with conscious agents, i.e., ourselves. Here, the map really does coincide with the territory, at least for us. The entity or entities conducting such a grand simulation, and presumably standing outside the whole simulation observing, can see the simulation for the simulation that it is. (The connection between cosmology and the simulation argument is nicely explained in the episode “Are We Real?” of the television series “What We Still Don’t Know” hosted by noted cosmologist Martin Rees.)

One way to formulate the idea of omniscience is to define omniscience as knowledge of the absolute infinite. The absolute infinite is an inconsistent multiplicity (in Cantorian terms). There is a certain reasonableness in this, as the logical principle of explosion, also known as ex falso quodlibet (namely, the principle that anything follows from a contradiction), means that an inconsistent multiplicity that incorporates contradictions is far richer than any consistent multiplicity. In so far as omniscience could be defined as knowledge of the absolute infinite, few would, I think, be willing to argue for the possibility of computational omniscience, so we will below pursue this from another angle, but I wanted to mention this idea of defining omniscience as knowledge of the absolute infinite because it strikes me as interesting. But no more of this for now.
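For the record, the derivation behind the principle of explosion is short, and runs roughly as follows (this is the classic argument attributed to C. I. Lewis). From a contradiction $P \wedge \neg P$, an arbitrary proposition $Q$ follows:

$$\begin{aligned} 1.\;& P \wedge \neg P &&\text{premise} \\ 2.\;& P &&\text{from 1, simplification} \\ 3.\;& P \vee Q &&\text{from 2, addition} \\ 4.\;& \neg P &&\text{from 1, simplification} \\ 5.\;& Q &&\text{from 3 and 4, disjunctive syllogism} \end{aligned}$$

Since $Q$ is arbitrary, an inconsistent multiplicity "contains" every proposition whatever, which is the sense in which it is richer than any consistent multiplicity.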

The claim of computational omniscience must be qualified, since computational omniscience can exhaust only that portion of the world exhaustible by computational means; computational omniscience is the kind of omniscience that we encountered in the “Mary’s room” thought experiment, which might plausibly be thought to exhaust the world, or which might with equal plausibility be seen as falling far short of all that might be known of some body of knowledge.

Computational omniscience is distinct from omniscience simpliciter; while exhaustive in one respect, it fails to capture certain aspects of the world. Computational omniscience may be defined as the computation of all that is potentially computable, which leaves aside that which is not computable. The non-computable aspects of the world include, but are not limited to, non-computable functions, quantum indeterminacy, that which is non-quantifiable (for whatever reason), the qualitative dimension of conscious experience (i.e., qualia), and that which is inferred but not observable. These are pretty significant exceptions. What is left over? What part of the world is computable? This is a philosophical question that we must ask once we understand that computability has limits and that these limits may be distinct from the limits of human intelligence. Just as conscious biological agents face intrinsic epistemic limits, so non-biological agents would face intrinsic epistemic limits — in so far as a non-biological agent can be considered an epistemic agent — but these limitations on biological and non-biological agents are not necessarily the same.
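The paradigm case of a non-computable function is the halting function, and the reason for its non-computability can be sketched in a few lines of Python. The function halts below is hypothetical; the point of Turing's diagonal argument is precisely that no correct implementation of it can exist:

```python
def halts(program, data):
    """Hypothetical oracle: True if program(data) halts, False if it
    runs forever. No total, correct implementation can exist."""
    raise NotImplementedError

def diagonal(program):
    """Do the opposite of whatever halts() predicts about a program
    run on its own source."""
    if halts(program, program):
        while True:   # halts() said "halts", so loop forever
            pass
    else:
        return        # halts() said "loops forever", so halt at once

# Consider diagonal(diagonal). If halts(diagonal, diagonal) returns True,
# then diagonal(diagonal) loops forever; if it returns False, then
# diagonal(diagonal) halts. Either way the oracle is wrong about at least
# one case, so no such oracle can be computed.
```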

The ultimate inadequacy of computational omniscience points to the possibility of limited omniscience — though one might well assert that omniscience that is limited is not really omniscience at all. The limited omniscience of a computer capable of computing the fate of the known universe may be compared to research on the bounded rationality of human minds (Herbert Simon's term, central to the work of Daniel Kahneman). Artificial intelligence is likely to be a bounded intelligence that exemplifies bounded rationality, although its boundaries will not necessarily coincide precisely with the boundaries that define human bounded rationality.

The idea of limited omniscience has been explored in mathematics, particularly in regard to constructivism. Constructivist mathematicians have formulated principles of omniscience, and, wary both of the unrestricted use of tertium non datur and of its complete interdiction in the manner of intuitionism, they have proposed the limited principle of omniscience as a specific way to skirt some of the problems implicit in the realism of unrestricted tertium non datur.
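Bishop's limited principle of omniscience (LPO) is usually stated for binary sequences: for any sequence $(a_n)$ with each $a_n \in \{0, 1\}$,

$$\forall n\,(a_n = 0) \;\vee\; \exists n\,(a_n = 1).$$

Classically this is a triviality; constructively it is a genuine claim to omniscience, since affirming the left disjunct would require surveying all infinitely many terms of the sequence.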

When we allow our mathematical thought to coincide with realities and infinities — an approach that we are assured is practical and empirical, and bound only to yield benefits — we find ourselves mired in paradoxes, and in the interest of freeing ourselves from this conceptual mire we are driven to a position like Einstein's famous aphorism that, "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." We separate and compartmentalize factual realities and mathematical infinities because we have difficulty holding "two opposing ideas in mind at the same time" while still retaining "the ability to function."

Indeed, it was Russell's attempt to bring together Cantorian set theory with practical measures of the actual world that begat the definitive paradox of set theory that bears Russell's name, responses to which have in large measure shaped post-Cantorian mathematics. Russell gives the following account of his discovery of his eponymous paradox in his Autobiography:

Cantor had a proof that there is no greatest number, and it seemed to me that the number of all the things in the world ought to be the greatest possible. Accordingly, I examined his proof with some minuteness, and endeavoured to apply it to the class of all the things there are. This led me to consider those classes which are not members of themselves, and to ask whether the class of such classes is or is not a member of itself. I found that either answer implies its contradictory.

Bertrand Russell, The Autobiography of Bertrand Russell, Vol. I, 1872-1914, "Principia Mathematica"
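The paradox Russell recounts can be stated in a single line. Let $R = \{\,x : x \notin x\,\}$; then

$$R \in R \iff R \notin R,$$

and either answer implies its contradictory, exactly as Russell reports.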

None of the great problems of philosophical logic from this era — i.e., the fruitful period in which Russell and several colleagues created mathematical logic — were “solved”; rather, a consensus emerged among philosophers of logic, conventions were established, and, perhaps most importantly, Zermelo’s axiomatization of set theory became the preferred mathematical treatment of set theory, which allowed mathematicians to skirt the difficult issues in philosophical logic and to focus on the mathematics of set theory largely without logical distractions.

It is an irony of intellectual history that the next great revolution in mathematics after set theory — which is, essentially, the mathematical theory of the infinite — was that of computer science, which constitutes the antithesis of set theory in so far as it is the strictest of strict finitisms. It would be fair to characterize the implicit theoretical position of computer science as a species of ultra-finitism, since computers cannot formulate even the most tepid potential infinite. All computing machines have an upper bound of calculation, and this is a physical instantiation of the theoretical position of ultra-finitism. This finitude follows from embodiment, which a computer shares with the world itself, and which therefore makes ultra-finite computing consistent with an ultra-finite world. In an ultra-finite world, it is possible that the finite may exhaust the finite and computational omniscience be realized.
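The upper bound in question is easy to make precise: a machine with $N$ bits of memory has at most $2^N$ distinct configurations, so any computation on it either halts within $2^N$ steps or revisits a configuration and loops forever. Every physically embodied computer is, in this sense, a finite-state machine, whatever idealized role the Turing machine with its unbounded tape may play in theory.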

The universe defined by the Big Bang and all that followed from it is a finite universe, and may in virtue of its finitude admit of exhaustive calculation, though this finite universe of observable cosmology may be set in an infinite context. Indeed, even the finite universe may not be as rigorously finite as we suppose, given that the limitations of our observations are not necessarily the limits of the real, but are rather defined by the limit of the speed of light. Leonard Susskind has rightly observed that what we observe of the universe is like being inside a room, the walls of which are the distant regions of the universe receding from us at superluminal velocity at the point at which they disappear from our view.

Recently in The Size of the World I quoted this passage from Leonard Susskind:

“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”

Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438

This observation has not yet been sufficiently appreciated (as I previously noted in The Size of the World). What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole has become unobservable. We might term such empirical realities just beyond our grasp empirical unobservables. Empirical unobservables include (but are presumably not limited to — our “out” clause) all that which lies beyond the event horizon of Susskind’s inside-out black hole, that which lies beneath the event horizon of a black hole as conventionally conceived, and that which lies outside the lightcone defined by our present. There may be other empirical unobservables that follow from the structure of relativistic space. There are, moreover, many empirically inaccessible points of view, such as the interiors of stars, which cannot be observed for contingent reasons distinct from the impossibility of observing certain structures of the world hidden from us by the nature of spacetime structure.

What if the greater part of the universe passes in the oblivion of the empirical unobservables? This question was posed by a paper that appeared in 2007, "The Return of a Static Universe and the End of Cosmology," which garnered some attention because of its quasi-apocalyptic claim of the "end of cosmology" (which sounds a lot like Heidegger's proclamation of the "end of philosophy," or any number of other proclamations of the "end of x"). This paper was eventually published in Scientific American as "The End of Cosmology? An accelerating universe wipes out traces of its own origins," by Lawrence M. Krauss and Robert J. Scherrer.

In calling the "end of cosmology" a "quasi-apocalyptic" claim I don't mean to criticize or ridicule the paper or its argument, which is of the greatest interest. As the subtitle of the Scientific American article has it, it appears to be the case that an accelerating universe wipes out traces of its own origins. If a quasi-apocalyptic claim can be scientifically justified, it is legitimate and deserves our intellectual respect. Indeed, the study of existential risk could be considered a scientific study of apocalyptic claims, and I regard this as an undertaking of the first importance. We need to think seriously about existential risks in order to mitigate them rationally to the extent possible.

In my posts on the prediction and retrodiction walls (The Retrodiction Wall and Addendum on the Retrodiction Wall) I introduced the idea of effective history, which is that span of time lying between the retrodiction wall in the past and the prediction wall in the future. One might similarly define effective cosmology as consisting of that region or those regions of space within the practical limits of observational cosmology, excluding those regions of space that cannot be observed — not merely what is hidden from us by contingent circumstances, but that which we are incapable of observing because of the very structure of the universe and our place (ontologically speaking) within it.

There are limits to what we can know that are intrinsic to what we might call the human condition, except that this formulation is anthropocentric. The epistemic limits represented by effective history and effective cosmology are limitations that would hold for any sentient, conscious organism emergent from natural history, i.e., for any peer species. Some of these limitations are intrinsic to our biology and to the kind of mind that is emergent from biological organisms. Some are intrinsic to the world in which we find ourselves, and to the vantage point within the cosmos from which we view our world. Ultimately, these limitations are one and the same, as the kind of biological beings that we are is a function of the kind of cosmos in which we have emerged, and which has served as the context of our natural history.

Within the domains of effective history and effective cosmology, we are limited further still by the non-quantifiable aspects of the world noted above. Setting aside these non-quantifiable aspects, what I have elsewhere called intrinsically arithmetical realities are a paradigm case of what remains computable once we have separated out the non-computable exceptions. (Beyond the domains of effective history and effective cosmology, hence beyond the domain of computational omniscience, there lies the infinite context of our finite world, about which we will say no more at present.) Intrinsically arithmetical realities are intrinsically amenable to quantitative methods and are potentially exhaustible by computational omniscience.

Some have argued that the whole of the universe is intrinsically arithmetical in the sense of being essentially mathematical, as in the “Mathematical Universe Hypothesis” of Max Tegmark. Tegmark writes:

“[The Mathematical Universe Hypothesis] explains the utility of mathematics for describing the physical world as a natural consequence of the fact that the latter is a mathematical structure, and we are simply uncovering this bit by bit.”

Max Tegmark, "The Mathematical Universe"

Tegmark also explicitly formulates two companion principles:

External Reality Hypothesis (ERH): There exists an external physical reality completely independent of us humans.

…and…

Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure.

I find these formulations to be philosophically naïve in the extreme, but as a contemporary example of a perennial tradition of philosophical thought Tegmark is worth citing. Tegmark is seeking an explicit answer to Wigner’s famous question about the “unreasonable effectiveness of mathematics.” It is to be expected that some responses to Wigner will take the form that Tegmark represents, but even if our universe is a mathematical structure, we do not yet know how much of that mathematical structure is computable and how much of that mathematical structure is not computable.

In my Centauri Dreams post on SETI, METI, and Existential Risk I mentioned that I found myself unable to identify with either the proponents of unregulated METI or those who argue for the regulation of METI efforts, since I disagreed with key postulates on both sides of the argument. METI advocates typically hold that interstellar flight is impossible, therefore METI can pose no risk. Advocates of METI regulation typically hold that unintentional EM spectrum leakage is not detectable at interstellar distances, therefore METI poses a risk we do not face at present. Since I hold that interstellar flight is possible, and that unintentional EM spectrum radiation is (or will be) detectable, I can’t comfortably align myself with either party in the discussion.

I find myself similarly hamstrung on the horns of a dilemma when it comes to computability, the cosmos, and determinism. Computer scientists and singulatarian enthusiasts of exponentially increasing computer power, ultimately culminating in an intelligence explosion, seem content to assume that the universe is computable and presents no fundamental barriers to computation, and even foresee a day when matter itself is transformed into computronium and the whole universe becomes a grand computer. Criticism of such enthusiasts often takes the form of denying the possibility of AI, or denying the possibility of machine consciousness, or denying that this or that is technically possible, and so on. It seems clear to me that only a portion of the world will ever be computable, but that portion is considerable, and a great many technological developments will fundamentally change our relationship to the world. But no matter how much either human beings or machines are transformed by the continuing development of industrial-technological civilization, non-computable functions will remain non-computable. Thus I cannot count myself either as a singulatarian or a Luddite.

How are we to understand the limitations to computational omniscience imposed by the limits of computation? The transcomputational problem, rather than laying bare human limitations, points to the way in which minds are not subject to computational limits. Minds as minds do not function computationally, so the evolution of mind (which drives the evolution of civilization) embodies different bounds and different limits than the Bekenstein bound and Bremermann’s limit, as well as different possibilities and different opportunities. The evolutionary possibilities of the mind are radically distinct from the evolutionary possibilities of bodies subject to computational limits, even though minds are dependent upon the bodies in which they are embodied.

Bremermann's limit is 10^93, which is somewhat arbitrary, but whether we draw the line here or elsewhere doesn't really matter for the principle at stake. Embodied computing must run into intrinsic limits, e.g., from relativity — a computer that exceeded Bremermann's limit by too much would be subject to relativistic effects that would mean that gains in size would reach a point of diminishing returns. Recent brain research has suggested that the human brain is already close to the biological limit for effective signal transmission within and between the various parts of the brain, so that a larger brain would not necessarily be smarter or faster or more efficient. Indeed, it has been pointed out that elephant and whale brains are larger than human brains, although the encephalization quotient is much higher in human beings despite the difference in absolute brain size.
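For the record, the usual derivation of the 10^93 figure, as I understand it, runs as follows. Bremermann's limit proper is a rate, roughly $1.36 \times 10^{50}$ bits per second per kilogram of computer; a computer with the mass of the Earth (about $6 \times 10^{24}$ kg), computing for a span comparable to the age of the Earth (a few times $10^{17}$ seconds), could therefore process

$$1.36 \times 10^{50} \times 6 \times 10^{24} \times 10^{17} \;\approx\; 10^{92}\ \text{bits},$$

on the order of $10^{92}$ to $10^{93}$ bits in all, which is the origin of the round $10^{93}$ threshold beyond which a problem is called transcomputational.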

The functioning of organic bodies easily exceeds 10^93 possible states. The Wikipedia entry on the transcomputational problem says:

"The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state) the processing of the retina as a whole requires processing of more than 10^300,000 bits of information. This is far beyond Bremermann's limit."

This is just the eye alone. The body has far more nerve-ending inputs than those of the eye, and an essentially limitless number of outputs. So exhausting the possible computational states of even a relatively simple organism easily surpasses Bremermann's limit and is therefore transcomputational. Some very simple organisms might not be transcomputational, given certain quantifiable parameters, but I think most complex life, and certainly things as complex as mammals, are radically transcomputational. Therefore the mind (whatever it is) is embodied in a transcomputational body whose possible states no computer could exhaustively calculate. The brain itself is radically transcomputational, with its 100 billion neurons (each of which can take at minimum two distinct states, and possibly more).
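A quick back-of-the-envelope check of the retina figure (a sketch; the million-cell count and the two-states-per-cell assumption come from the quote above):

```python
from math import log10

cells = 10**6                # light-sensitive cells in the retina (approx.)
# With two states per cell there are 2**cells joint states of the retina.
# The number itself is too large to print usefully, but its order of
# magnitude is easy to compute:
order = cells * log10(2)
print(f"2^(10^6) is about 10^{order:.0f}")   # about 10^301030, versus 10^93
```

So the quoted figure of 10^300,000 is, if anything, slightly conservative.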

Yet even machine embodiments can be computationally intractable (in the same way that organic bodies are computationally intractable), exceeding the possibility of exhaustively calculating every possible material state of the mechanism (on a molecular or atomic level). Thus the emergence of machine consciousness would also supervene upon a transcomputational embodiment. It is, at present, impossible to say whether a machine embodiment of consciousness would be a limitation upon that consciousness (because the embodiment is likely to be less radically transcomputational than the brain) or a facilitation of consciousness (because machines can be arbitrarily scaled up in a way that organic bodies cannot be).

Since the mind stands outside the possibilities of embodied computation, if machine consciousness emerges, machine embodiments will be as non-transparent to machine minds as organic embodiment is non-transparent to organic minds; but machine minds, non-transparent though their embodiment is to them, will have access to energy sources far beyond any resources an organic body could provide. Such machine consciousness would not be bound by brute force calculation or linear models (as organic minds are not so bound), but would have far greater resources at its command for the development of its consciousness.

Since the body that today embodies mind already far exceeds Bremermann's limit, and no machine as machine is likely to exceed this limit, machine consciousness emergent from computationally tractable bodies, rather than being super-intelligent in ways that biologically derived minds can never be, may on the contrary be a pale shadow of an organic mind in an essentially transcomputational body. This gives a whole new twist to the much-discussed idea of the mind's embodiment.

Computation is not the be-all and end-all of mind; it is, in fact, only peripheral to mind as mind. If we had to rely upon calculation to make it through our day, we wouldn't be able to get out of bed in the morning; most of the world is simply too complex to calculate. But we have a "work around" — consciousness. Marginalized as the "hard problem" in the philosophy of mind, or simply neglected in scientific studies, consciousness enables us to cut the Gordian Knot of transcomputability and to act in a complex world that far exceeds our ability to calculate.

Neither is consciousness the be-all and end-all of mind, although the rise of computer science and the increasing role of computers in our lives have led many to conclude that computation is primary and that it is consciousness that is peripheral. And, to be sure, in some contexts consciousness is peripheral. In many of the same contexts of our EEA (environment of evolutionary adaptedness) in which calculation is impossible due to complexity, consciousness is also irrelevant, because we respond by an instinct that is deeper than and other than consciousness. In such cases the mechanism of instinct takes over, but this is a biologically specific mechanism, evolved to serve the purpose of differential survival and reproduction; it would be difficult to re-purpose a biologically specific mechanism for any kind of abstract computing task, and not particularly helpful either.

Consciousness is not the be-all and end-all not only because instinct largely circumvents it, but also because machines have a “work around” for consciousness just as consciousness is a “work around” for the limits of computability; mechanism is a “work around” for the inefficiencies of consciousness. Machine mechanisms can perform precisely those tasks that so tax organic minds as to be virtually unsolvable, in a way that is perfectly parallel to the conscious mind’s ability to perform tasks that machines cannot yet even approach — not because machines can’t do the calculations, but because machines don’t possess the “work around” ability of consciousness.

It is when computers have the "work around" capacity that conscious beings have that they will be in a position to effect an intelligence explosion. That is to say, machine consciousness is crucial to AI that is able to perform in the way that AI is expected to perform, though AI researchers tend to be dismissive of consciousness. If the proof of the pudding is in the eating, well, then it is consciousness that allows us to "chunk the proofs" (i.e., to divide the proof into individually manageable pieces) and get to the eating all the more efficiently.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Friday


Alonzo Church and Alan Turing

What is the Church-Turing Thesis? The Church-Turing Thesis is an idea from theoretical computer science that emerged from research in the foundations of logic and mathematics. Also called Church's Thesis, Church's Conjecture, and the Church-Turing Conjecture, among other names, it ultimately bears upon what can be computed, and thus, by extension, upon what a computer can do (and what a computer cannot do).

Note: For clarity's sake, I ought to point out that Church's Thesis and Church's Theorem are distinct. Church's Theorem is an established theorem of mathematical logic, proved by Alonzo Church in 1936, to the effect that there is no decision procedure for logic (i.e., there is no method for determining whether an arbitrary formula in first order logic is a theorem). But the two — Church's Theorem and Church's Thesis — are related: both follow from the exploration of the possibilities and limitations of formal systems and the attempt to define these in a rigorous way.

Even to state Church's Thesis is controversial. There are many formulations, and many of these alternative formulations come straight from Church and Turing themselves, who framed the idea differently in different contexts. Also, when you hear computer science types discuss the Church-Turing Thesis you might think that it is something like an engineering problem, but it is essentially a philosophical idea. What the Church-Turing Thesis is not is as important as what it is: it is not a theorem of mathematical logic, it is not a law of nature, and it is not a limit of engineering. We could say that it is a principle, because the word "principle" is ambiguous and thus covers the various formulations of the thesis.

There is an article on the Church-Turing Thesis at the Stanford Encyclopedia of Philosophy, one at Wikipedia (of course), and even a website dedicated to a critique of the Stanford article, Alan Turing in the Stanford Encyclopedia of Philosophy. All of these are valuable resources on the Church-Turing Thesis, and well worth reading to gain some orientation.

One way to formulate Church's Thesis is that all effectively computable functions are general recursive. Both "effectively computable functions" and "general recursive" are technical terms, but there is an important difference between these technical terms: "effectively computable" is an intuitive conception, whereas "general recursive" is a formal conception. Thus one way to understand Church's Thesis is that it asserts the identity of a formal idea and an informal idea.

One of the reasons that there are many alternative formulations of the Church-Turing Thesis is that there are several formally equivalent formulations of recursiveness: recursive functions, Turing computable functions, Post computable functions, representable functions, lambda-definable functions, and Markov normal algorithms among them. All of these are formal conceptions that can be rigorously defined. For the other term of the identity that is Church's Thesis, there are also several alternative formulations of effectively computable functions, including other intuitive notions like that of an algorithm or a procedure that can be implemented mechanically.
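To give these equivalences a concrete flavor, here is one and the same function, addition on the natural numbers, captured in two of the frameworks just mentioned: as a recursive definition and as a lambda-definable function via Church numerals. This is only an illustrative sketch in Python (the names are mine), not a proof of equivalence:

```python
# 1. Addition as a recursive definition:
#    add(m, 0) = m;  add(m, n+1) = add(m, n) + 1
def add(m, n):
    return m if n == 0 else add(m, n - 1) + 1

# 2. Addition as a lambda-definable function. A Church numeral n is
#    the higher-order function applying f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
church_add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Read a Church numeral back off as a Python int."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)

assert add(2, 3) == 5
assert to_int(church_add(two)(three)) == 5
```

Church's Thesis is, in effect, the claim that the intuitive notion of effective computability is captured by any, and hence all, of these formal definitions.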

These may seem like recondite matters with little or no relationship to ordinary human experience, but I am surprised how often I find the same theoretical conflict played out in the most ordinary and familiar contexts. The dialectic of the formal and the informal (i.e., the intuitive) is much more central to human experience than is generally recognized. For example, the conflict between intuitively apprehended free will and apparently scientifically unimpeachable determinism is a conflict between an intuitive and a formal conception that both seem to characterize human life. Compatibilist accounts of determinism and free will may be considered the “Church’s thesis” of human action, asserting the identity of the two.

It should be understood here that when I discuss intuition in this context I am talking about the kind of mathematical intuition I discussed in Adventures in Geometrical Intuition, although the idea of mathematical intuition can be understood as perhaps the narrowest formulation of that intuition that is the polar concept standing in opposition to formalism. Kant made a useful distinction between sensory intuition and intellectual intuition that helps to clarify what is intended here, since the very idea of intuition in the Kantian sense has become lost in recent thought. Once we think of intuition as something given to us in the same way that sensory intuition is given to us, only without the mediation of the senses, we come closer to the operative idea of intuition as it is employed in mathematics.

Mathematical thought, and formal accounts of experience generally speaking, seek, of course, to capture our intuitions, but this formal capture of the intuitive is itself an intuitive and essentially creative process, even when it culminates in the formulation of a formal system that is essentially inaccessible to intuition (at least in parts of that formal system). What this means is that intuition can know itself, and know itself to be an intuitive grasp of some truth, but formality can only know itself as formality and cannot cross over the intuitive-formal divide in order to grasp the intuitive, even when it captures intuition in an intuitively satisfying way. We cannot even understand the idea of an intuitively satisfying formalization without an intuitive grasp of all the relevant elements. As Spinoza said that the true is the criterion both of itself and of the false, so we can say that the intuitive is the criterion both of itself and of the formal. (And given that, today, truth is primarily understood formally, this is a significant claim to make.)

The above observation can be formulated as a general principle such that the intuitive can grasp all of the intuitive and a portion of the formal, whereas the formal can grasp only itself. I will refer to this as the principle of the asymmetry of intuition. We can see this principle operative both in the Church-Turing Thesis and in popular accounts of Gödel's theorem. We are all familiar with popular and intuitive accounts of Gödel's theorem (since the formal accounts are so difficult), and it is not unusual for claims to be made for the limitative theorems that go far beyond what they formally demonstrate.

All of this holds also for the attempt to translate traditional philosophical concepts into scientific terms — the most obvious example being free will, supposedly accounted for by physics, biochemistry, and neurobiology. But if one makes the claim that consciousness is nothing but such-and-such physical phenomenon, it is impossible to cash out this claim in any robust way. The science is quantifiable and formalizable, but our concepts of mind, consciousness, and free will remain stubbornly intuitive and have not been satisfyingly captured in any formalism — whether any such formalization is satisfying could only be determined by intuition, and therefore eludes formal capture. To "prove" determinism, then, would be as incoherent as "proving" Church's Thesis in any robust sense.

There certainly are interesting philosophical arguments on both sides of Church’s Thesis — that is to say, both its denial and its affirmation — but these are arguments that appeal to our intuitions and, most crucially, our idea of ourselves is intuitive and informal. I should like to go further and to assert that the idea of the self must be intuitive and cannot be otherwise, but I am not fully confident that this is the case. Human nature can change, albeit slowly, along with the human condition, and we could, over time — and especially under the selective pressures of industrial-technological civilization — shape ourselves after the model of a formal conception of the self. (In fact, I think it very likely that this is happening.)

I cannot even say — I would not know where to begin — what would constitute a formal self-understanding of the self, much less any kind of understanding of a formal self. Well, maybe not. I have written elsewhere that the doctrine of the punctiform present (not very popular among philosophers these days, I might add) is a formal doctrine of time, and in so far as we identify internal time consciousness with the punctiform present we have a formal doctrine of the self.

While the above account is one to which I am sympathetic, this kind of formal concept — I mean the punctiform present as a formal conception of time — is very different from the kind of formality we find in physics, biochemistry, and neuroscience. We might assimilate it to some mathematical formalism, but this is an abstraction made concrete in subjective human experience, not in physical science. Perhaps this partly explains the fashionable anti-philosophy that I have written about.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .
