18 December 2013
What does it mean for a body of knowledge to be founded in fact? This is a central question in the philosophy of science: do the facts suggest hypotheses, or are hypotheses used to give meanings to facts? These questions are also posed in history and the philosophy of history. Is history a body of knowledge founded on facts? What else could it be? But do the facts of history suggest historical hypotheses, or do our historical hypotheses give meaning and value to historical facts, without which the bare facts would add up to nothing?
Is history a science? Can we analyze the body of historical knowledge in terms of facts and hypotheses? Is history subject to the same constraints and possibilities as science? An hypothesis is an opportunity — an opportunity to transform facts in the image of meaning; facts are limitations that constrain hypotheses. An hypothesis is an epistemic opportunity — an opportunity to make sense of the world — and therefore an hypothesis is also at the same time an epistemic risk — a risk of interpreting the world incorrectly and misunderstanding events.
The ancient question of whether history is an art or a science would seem to have been settled by the emergence of scientific historiography, which clearly is a science, but this does not answer the question of what history was before scientific historiography. One might reasonably maintain that scientific historiography was the implicit telos of all previous historiographical study, but this fails to acknowledge the role of historical narratives in shaping our multiple human identities — personal, cultural, ethnic, political, mythological.
If Big History should become the basis of some future axialization of industrial-technological civilization, then scientific historiography too will play a constitutive role in human identity, and while other and older identity narratives presently coexist and furnish different individuals with a different sense of their place in the world, we have already seen the beginnings of an identity shaped by science.
There is a sense in which the scientific historian today knows much more about the past than those who lived in the past and experienced that past as an immediate yet fragmentary present. One might infer the possibility of a total knowledge of the past through the cumulative knowledge of scientific historiography — a condition denied to those who actually lived in the past — although this “total” knowledge must fall short of the peculiar kind of knowledge derived from immediate personal experience, as contemplated in the thought experiment known as “Mary’s room.”
In the thought experiment known as Mary’s room, also called the knowledge argument, we imagine a condition of total knowledge and compare this with the peculiar kind of knowledge that is derived from experience, in contradistinction to the knowledge we come to through science. Here is the source of the Mary’s room thought experiment:
“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. [...] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?”
Frank Jackson, “Epiphenomenal Qualia” (1982)
Philosophers disagree on whether Mary learns anything upon leaving Mary’s room. As a thought experiment, it is intended not to give us a definitive answer about a circumstance that is never likely to occur in fact, but to sharpen our intuitions and refine our formulations. We can try to do the same with formulations of an ideal totality of knowledge derived from scientific historiography. There is a sense in which scientific historiography allows us to know much more about the past than those who lived in the past. To echo a question of Thomas Nagel, was there something that it was like to be in the past? Are there, or were there, historical qualia? Does the total knowledge of history afforded by scientific historiography fall short of capturing historical qualia?
In the Mary’s room thought experiment the agent in question is human and the experience in question is imposed colorblindness. Many people live with colorblindness without the condition greatly impacting their lives, so in this context it is plausible that Mary learns nothing upon the lifting of her imposed colorblindness, since the gap between these conditions is not as intuitively obvious as the gap between agents of a fundamentally different kind (e.g., distinct species) or between experiences of a fundamentally different kind, where it is not plausible that the lifting of an imposed limitation on experience would have no significant impact on one’s life.
We can sharpen the formulation of Mary’s room, and thus potentially sharpen our own intuitions, by taking a more intense experience than that of color vision. We can also alter the sense of this thought experiment by considering the question across distinct species or across the division between minds and machines. For example, if a machine learned everything that there is to know about eating would that machine know what it was like to eat? Would total knowledge after the manner of Mary’s knowledge of color suffice to exhaust knowledge of eating, even in the absence of an actual experience of eating? I doubt that many would be convinced that learning about eating without the experience of eating would be sufficient to exhaust what there is to know about eating. Thomas Nagel’s thought experiment in “What is it like to be a bat?” alluded to above poses the knowledge argument across species.
We can give this same thought experiment yet another twist if we reverse the roles of minds and machines, and ask of machine experience, should machine consciousness emerge, the questions we have asked of human experience (or bat experience). If a human being learned everything there is to know about AI and machine consciousness, would such a human being know what it is like to be a machine? Could knowledge of machines exhaust uniquely machine experience?
The kind of total scientific knowledge of the world implicit in scientific historiography is not unlike what Pierre Simon Laplace had in mind when he posited the possibility of predicting the entire state of the universe, past or future, on the basis of an exhaustive knowledge of the present. Laplace’s argument is also a classic statement of determinism:
“We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it — an intelligence sufficiently vast to submit these data to analysis — it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. The human mind offers, in the perfection which it has been able to give to astronomy, a feeble idea of this intelligence. Its discoveries in mechanics and geometry, added to that of universal gravity, have enabled it to comprehend in the same analytical expressions the past and future states of the system of the world. Applying the same method to some other objects of its knowledge, it has succeeded in referring to general laws observed phenomena and in foreseeing those which given circumstances ought to produce. All these efforts in the search for truth tend to lead it back continually to the vast intelligence which we have just mentioned, but from which it will always remain infinitely removed. This tendency, peculiar to the human race, is that which renders it superior to animals; and their progress in this respect distinguishes nations and ages and constitutes their true glory.”
Pierre Simon, Marquis de Laplace, A Philosophical Essay on Probabilities, with an introductory note by E. T. Bell, New York: Dover Publications, Inc., Chapter II
While such a Laplacean calculation of the universe would lie beyond the capability of any human being, it might someday lie within the capacity of another kind of intelligence. What Laplace here calls “an intelligence sufficiently vast to submit these data to analysis” suggests the possibility of a sufficiently advanced (i.e., sufficiently large and fast) computer that could make this calculation, thereby achieving a kind of computational omniscience.
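The Laplacean picture can be illustrated at toy scale with a deterministic, reversible update rule: given exact knowledge of the present state, every future and every past state follows by calculation. The rule and constants below are arbitrary illustrative choices, not a model of any physical system.

```python
# A toy "Laplacean" universe: a deterministic, reversible dynamics.
# Knowing the present state exactly, we can compute any future state
# and recover any past state. (Illustrative sketch only.)

M = 2**31 - 1          # size of the toy state space (a prime)
A, B = 48271, 11       # update coefficients; gcd(A, M) = 1, so the rule is invertible
A_INV = pow(A, -1, M)  # modular inverse of A, used to run time backwards (Python 3.8+)

def step_forward(x):
    """One tick of the toy universe's dynamics."""
    return (A * x + B) % M

def step_backward(x):
    """Exactly undo one tick: the dynamics are reversible."""
    return (A_INV * (x - B)) % M

present = 123456789
future = present
for _ in range(1000):           # predict 1000 ticks into the future
    future = step_forward(future)

recovered = future
for _ in range(1000):           # retrodict back to the present
    recovered = step_backward(recovered)

assert recovered == present     # past and future alike "present to its eyes"
```

The essential point is invertibility: for such a system nothing is uncertain, and prediction and retrodiction are the same kind of calculation run in opposite directions.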
Long before we reach the point of an “intelligence explosion” (the storied “technological singularity”), in which machines surpass the intelligence of human beings and each generation of machine builds a yet more intelligent successor, the computational power at our disposal will for all practical purposes exhaust the world, and we will thus have attained computational omniscience. We have already begun to converge upon this kind of total knowledge of the cosmos with the Bolshoi Cosmological Simulations and similar efforts on other supercomputers.
It is this kind of reasoning in regard to the future of cosmological simulations that has led to contemporary formulations of the “Simulation Hypothesis” — the hypothesis that we are, ourselves, at this moment, living in a computer simulation. According to the simulation argument, cosmological simulations become so elaborate and are refined to such a fine-grained level of detail that the simulation eventually populates itself with conscious agents, i.e., ourselves. Here, the map really does coincide with the territory, at least for us. The entity or entities conducting such a grand simulation, and presumably standing outside the whole simulation observing, can see the simulation for the simulation that it is. (The connection between cosmology and the simulation argument is nicely explained in the episode “Are We Real?” of the television series “What We Still Don’t Know” hosted by noted cosmologist Martin Rees.)
One way to formulate the idea of omniscience is to define omniscience as knowledge of the absolute infinite. The absolute infinite is an inconsistent multiplicity (in Cantorian terms). There is a certain reasonableness in this, as the logical principle of explosion, also known as ex falso quodlibet (namely, the principle that anything follows from a contradiction), means that an inconsistent multiplicity that incorporates contradictions is far richer than any consistent multiplicity. In so far as omniscience could be defined as knowledge of the absolute infinite, few would, I think, be willing to argue for the possibility of computational omniscience, so we will pursue this below from another angle, but I wanted to mention the idea of defining omniscience as knowledge of the absolute infinite because it strikes me as interesting. But no more of this for now.
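The principle of explosion itself can be stated and checked formally. The following is a minimal statement in Lean 4 (the theorem name is mine, chosen for illustration):

```lean
-- ex falso quodlibet: from a contradiction P ∧ ¬P, any proposition Q follows
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```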
The claim of computational omniscience must be qualified, since computational omniscience can exhaust only that portion of the world exhaustible by computational means; computational omniscience is the kind of omniscience that we encountered in the “Mary’s room” thought experiment, which might plausibly be thought to exhaust the world, or which might with equal plausibility be seen as falling far short of all that might be known of some body of knowledge.
Computational omniscience is distinct from omniscience simpliciter; while exhaustive in one respect, it fails to capture certain aspects of the world. Computational omniscience may be defined as the computation of all that is potentially computable, which leaves aside that which is not computable. The non-computable aspects of the world include, but are not limited to, non-computable functions, quantum indeterminacy, that which is non-quantifiable (for whatever reason), the qualitative dimension of conscious experience (i.e., qualia), and that which is inferred but not observable. These are significant exceptions. What is left over? What part of the world is computable? This is a philosophical question that we must ask once we understand that computability has limits and that these limits may be distinct from the limits of human intelligence. Just as conscious biological agents face intrinsic epistemic limits, so non-biological agents would also face intrinsic epistemic limits — in so far as a non-biological agent can be considered an epistemic agent — but these limitations on biological and non-biological agents are not necessarily the same.
The ultimate inadequacy of computational omniscience points to the possibility of limited omniscience — though one might well assert that omniscience that is limited is not really omniscience at all. The limited omniscience of a computer capable of computing the fate of the known universe may be compared to recent research on the bounded rationality of human minds — a notion due to Herbert Simon and central to the work of Daniel Kahneman. Artificial intelligence is likely to be a bounded intelligence that exemplifies bounded rationality, although its boundaries will not necessarily coincide precisely with the boundaries that define human bounded rationality.
The idea of limited omniscience has been explored in mathematics, particularly in regard to constructivism. Constructivist mathematicians have formulated principles of omniscience, and, wary both of unrestricted use of tertium non datur and of its complete interdiction in the manner of intuitionism, they have proposed the limited principle of omniscience as a specific way to skirt some of the problems implicit in the realism of unrestricted tertium non datur.
When we allow our mathematical thought to coincide with realities and infinities — an approach that we are assured is practical and empirical, and bound to yield only benefits — we find ourselves mired in paradoxes, and in the interest of freeing ourselves from this conceptual mire we are driven to a position like that of Einstein’s famous aphorism: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” We separate and compartmentalize factual realities and mathematical infinities because we find it difficult, in F. Scott Fitzgerald’s phrase, “to hold two opposing ideas in mind at the same time and still retain the ability to function.”
Indeed, it was Russell’s attempt to bring together Cantorian set theory with practical measures of the actual world that begat the definitive paradox of set theory that bears Russell’s name, the responses to which have in large measure shaped post-Cantorian mathematics. Russell gives the following account of the discovery of his eponymous paradox in his Autobiography:
“Cantor had a proof that there is no greatest number, and it seemed to me that the number of all the things in the world ought to be the greatest possible. Accordingly, I examined his proof with some minuteness, and endeavoured to apply it to the class of all the things there are. This led me to consider those classes which are not members of themselves, and to ask whether the class of such classes is or is not a member of itself. I found that either answer implies its contradictory.”
Bertrand Russell, The Autobiography of Bertrand Russell, Vol. I, 1872-1914, “Principia Mathematica”
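Russell’s diagonal move can be mimicked in code, with predicates loosely standing in for classes. This is an analogy for intuition, not a formalization of set theory:

```python
# Russell's class of all classes that are not members of themselves,
# mimicked with predicates: R holds of x just when x does not hold of itself.
R = lambda x: not x(x)

# Asking whether R is a "member of itself" forces R(R) = not R(R).
# Python manifests the impossibility as an endless regress: each attempt
# to evaluate R(R) requires first evaluating R(R).
try:
    R(R)
except RecursionError:
    print("R(R) has no consistent value: either answer implies its contradictory")
```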
None of the great problems of philosophical logic from this era — i.e., the fruitful period in which Russell and several colleagues created mathematical logic — were “solved”; rather, a consensus emerged among philosophers of logic, conventions were established, and, perhaps most importantly, Zermelo’s axiomatization of set theory became the preferred mathematical treatment of set theory, which allowed mathematicians to skirt the difficult issues in philosophical logic and to focus on the mathematics of set theory largely without logical distractions.
It is an irony of intellectual history that the next great revolution in mathematics to follow after set theory — which is, essentially, the mathematical theory of the infinite — was to be that of computer science, which constitutes the antithesis of set theory in so far as it is the strictest of strict finitisms. It would be fair to characterize the implicit theoretical position of computer science as a species of ultra-finitism, since computers cannot formulate even the most tepid potential infinite. All computing machines have an upper bound of calculation, and this is a physical instantiation of the theoretical position of ultra-finitism. This finitude follows from embodiment, which a computer shares with the world itself, and which therefore makes ultra-finite computing consistent with an ultra-finite world. In an ultra-finite world, it is possible that the finite may exhaust the finite and computational omniscience be realized.
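The upper bound of calculation is easy to exhibit concretely. The sketch below uses IEEE-754 double precision, whose largest finite value any conventional machine arithmetic must respect:

```python
import sys

# Every concrete machine computes over a finite repertoire of states.
# IEEE-754 double precision has a largest representable finite value;
# one doubling past it, and the number system abandons finitude.
largest = sys.float_info.max          # roughly 1.8 x 10^308
assert largest * 2 == float("inf")    # overflow: the bound is real

# Python's integers are unbounded in principle, but only in principle:
# any actual computation remains capped by finite memory and finite time,
# which is precisely the ultra-finitist point.
```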
The universe defined by the Big Bang and all that followed from the Big Bang is a finite universe, and may in virtue of its finitude admit of exhaustive calculation, though this finite universe of observable cosmology may be set in an infinite context. Indeed, even the finite universe may not be as rigorously finite as we suppose, given that the limitations of our observations are not necessarily the limits of the real, but rather are defined by the limit of the speed of light. Leonard Susskind has rightly observed that what we observe of the universe is like being inside a room, the walls of which are the distant regions of the universe receding from us at superluminal velocity at the point at which they disappear from our view.
Recently in The Size of the World I quoted this passage from Leonard Susskind:
“In every direction that we look, galaxies are passing the point at which they are moving away from us faster than light can travel. Each of us is surrounded by a cosmic horizon — a sphere where things are receding with the speed of light — and no signal can reach us from beyond that horizon. When a star passes the point of no return, it is gone forever. Far out, at about fifteen billion light years, our cosmic horizon is swallowing galaxies, stars, and probably even life. It is as if we all live in our own private inside-out black hole.”
Leonard Susskind, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, New York, Boston, and London: Little, Brown and Company, 2008, pp. 437-438
This observation has not yet been sufficiently appreciated (as I previously noted in The Size of the World). What lies beyond Susskind’s cosmic horizon is unobservable, just as anything that disappears beyond the event horizon of a black hole has become unobservable. We might term such empirical realities just beyond our grasp empirical unobservables. Empirical unobservables include (but are presumably not limited to — our “out” clause) all that which lies beyond the event horizon of Susskind’s inside-out black hole, that which lies beneath the event horizon of a black hole as conventionally conceived, and that which lies outside the lightcone defined by our present. There may be other empirical unobservables that follow from the structure of relativistic space. There are, moreover, many empirically inaccessible points of view, such as the interiors of stars, which cannot be observed for contingent reasons distinct from the impossibility of observing certain structures of the world hidden from us by the nature of spacetime structure.
What if the greater part of the universe passes into the oblivion of the empirical unobservables? This is a question posed by a paper that appeared in 2007, The Return of a Static Universe and the End of Cosmology, which garnered some attention because of its quasi-apocalyptic claim of the “end of cosmology” (which sounds a lot like Heidegger’s proclamation of the “end of philosophy,” or any number of other proclamations of the “end of x”). This paper was eventually published in Scientific American as The End of Cosmology? An accelerating universe wipes out traces of its own origins, by Lawrence M. Krauss and Robert J. Scherrer.
In calling the “end of cosmology” a “quasi-apocalyptic” claim I don’t mean to criticize or ridicule the paper or its argument, which is of the greatest interest. As the subtitle of the Scientific American article indicates, it appears to be the case that an accelerating universe wipes out traces of its own origins. If a quasi-apocalyptic claim can be scientifically justified, it is legitimate and deserves our intellectual respect. Indeed, the study of existential risk could be considered a scientific study of apocalyptic claims, and I regard this as an undertaking of the first importance. We need to think seriously about existential risks in order to mitigate them rationally to the extent possible.
In my posts on the prediction and retrodiction walls (The Retrodiction Wall and Addendum on the Retrodiction Wall) I introduced the idea of effective history, which is that span of time which lies between the retrodiction wall in the past and the prediction wall in the future. One might similarly define effective cosmology as consisting of that region or those regions of space within the practical limits of observational cosmology, and excluding those regions of space that cannot be observed — not merely what is hidden from us by contingent circumstances, but that which we are incapable of observing because of the very structure of the universe and our place (ontologically speaking) within it.
There are limits to what we can know that are intrinsic to what we might call the human condition, except that this formulation is anthropocentric. The epistemic limits represented by effective history and effective cosmology are limitations that would hold for any sentient, conscious organism emergent from natural history, i.e., would hold for any peer species. Some of these limitations are intrinsic to our biology and to the kind of mind that is emergent from biological organisms. Some of these limitations are intrinsic to the world in which we find ourselves, and to the vantage point within the cosmos from which we view our world. Ultimately, these limitations are one and the same, as the kind of biological beings that we are is a function of the kind of cosmos in which we have emerged, and which has served as the context of our natural history.
Within the domains of effective history and effective cosmology, we are limited further still by the non-quantifiable aspects of the world noted above. Setting aside non-quantifiable aspects of the world, what I have elsewhere called intrinsically arithmetical realities are a paradigm case of what remains computable once we have separated out the non-computable exceptions. (Beyond the domains of effective history and effective cosmology, hence beyond the domain of computational omniscience, there lies the infinite context of our finite world, about which we will say no more at present.) Intrinsically arithmetical realities are intrinsically amenable to quantitative methods and are potentially exhaustible by computational omniscience.
Some have argued that the whole of the universe is intrinsically arithmetical in the sense of being essentially mathematical, as in the “Mathematical Universe Hypothesis” of Max Tegmark. Tegmark writes:
“[The Mathematical Universe Hypothesis] explains the utility of mathematics for describing the physical world as a natural consequence of the fact that the latter is a mathematical structure, and we are simply uncovering this bit by bit.”
The Mathematical Universe by Max Tegmark
Tegmark also explicitly formulates two companion principles:
External Reality Hypothesis (ERH): There exists an external physical reality completely independent of us humans.
Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure.
I find these formulations to be philosophically naïve in the extreme, but as a contemporary example of a perennial tradition of philosophical thought Tegmark is worth citing. Tegmark is seeking an explicit answer to Wigner’s famous question about the “unreasonable effectiveness of mathematics.” It is to be expected that some responses to Wigner will take the form that Tegmark represents, but even if our universe is a mathematical structure, we do not yet know how much of that mathematical structure is computable and how much of that mathematical structure is not computable.
In my Centauri Dreams post on SETI, METI, and Existential Risk I mentioned that I found myself unable to identify with either the proponents of unregulated METI or those who argue for the regulation of METI efforts, since I disagreed with key postulates on both sides of the argument. METI advocates typically hold that interstellar flight is impossible, and that METI therefore poses no risk. Advocates of METI regulation typically hold that unintentional EM spectrum leakage is not detectable at interstellar distances, and that METI therefore poses a risk we do not face at present. Since I hold that interstellar flight is possible, and that unintentional EM spectrum radiation is (or will be) detectable, I can’t comfortably align myself with either party in the discussion.
I find myself similarly caught on the horns of a dilemma when it comes to computability, the cosmos, and determinism. Computer scientists and singularitarian enthusiasts of exponentially increasing computer power, ultimately culminating in an intelligence explosion, seem content to assume that the universe is not only computable, presenting no fundamental barriers to computation, but foresee a day when matter itself is transformed into computronium and the whole universe becomes a grand computer. Criticism of such enthusiasm often takes the form of denying the possibility of AI, or denying the possibility of machine consciousness, or denying that this or that is technically possible, and so on. It seems clear to me that only a portion of the world will ever be computable, but that this portion is considerable, and that a great many technological developments will fundamentally change our relationship to the world. But no matter how much either human beings or machines are transformed by the continuing development of industrial-technological civilization, non-computable functions will remain non-computable. Thus I cannot count myself either as a singularitarian or a Luddite.
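The claim that non-computable functions remain non-computable rests on arguments like Turing’s diagonalization against a halting decider, which can be sketched as follows (naive_oracle is a deliberately wrong stand-in, since no correct decider can exist):

```python
# Turing's diagonal argument, sketched: no program `halts` can correctly
# decide halting for every program, because we can always construct a
# program that does the opposite of whatever `halts` predicts about it.

def defeat(halts):
    """Given any candidate halting-decider, build its counterexample."""
    def g():
        if halts(g):
            while True:      # oracle says "g halts," so loop forever
                pass
        # oracle says "g loops," so halt immediately
    return g

def naive_oracle(program):
    return False             # a (wrong) decider that predicts "never halts"

g = defeat(naive_oracle)
g()                          # g halts, refuting the oracle's own prediction
print("the oracle predicted g loops forever; g just halted")
```

Had the oracle instead answered True, g would loop forever, again contradicting the prediction; the oracle is wrong on its diagonal case either way.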
How are we to understand the limitations to computational omniscience imposed by the limits of computation? The transcomputational problem, rather than laying bare human limitations, points to the way in which minds are not subject to computational limits. Minds as minds do not function computationally, so the evolution of mind (which drives the evolution of civilization) embodies different bounds and different limits than the Bekenstein bound and Bremermann’s limit, as well as different possibilities and different opportunities. The evolutionary possibilities of the mind are radically distinct from the evolutionary possibilities of bodies subject to computational limits, even though minds are dependent upon the bodies in which they are embodied.
Bremermann’s limit is 10^93, which is somewhat arbitrary, but whether we draw the line here or elsewhere doesn’t really matter for the principle at stake. Embodied computing must run into intrinsic limits, e.g., from relativity — a computer that exceeded Bremermann’s limit by too much would be subject to relativistic effects that would mean that gains in size would reach a point of diminishing returns. Recent brain research has suggested that the human brain is already close to the biological limit for effective signal transmission within and between the various parts of the brain, so that a larger brain would not necessarily be smarter or faster or more efficient. Indeed, it has been pointed out that elephant and whale brains are larger than human brains, although the encephalization quotient is much higher in human beings despite the difference in absolute brain size.
The possible states of organic bodies easily exceed 10^93. The Wikipedia entry on the transcomputational problem says:
“The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state) the processing of the retina as a whole requires processing of more than 10^300,000 bits of information. This is far beyond Bremermann’s limit.”
This is just the eye alone. The body has far more nerve-ending inputs than those of the eye, and an essentially limitless number of outputs. So exhausting the possible computational states of even a relatively simple organism easily surpasses Bremermann’s limit and is therefore transcomputational. Some very simple organisms might not be transcomputational, given certain quantifiable parameters, but I think most complex life, and certainly things as complex as mammals, are radically transcomputational. Therefore the mind (whatever it is) is embodied in a transcomputational body, whose possible states no computer could exhaustively calculate. The brain itself is radically transcomputational, with its 100 billion neurons (each of which can take at minimum two distinct states, and possibly more).
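The arithmetic behind this claim is easy to check directly, assuming the figures quoted above (a million cells, two states each, a transcomputational threshold of 10^93):

```python
import math

RETINA_CELLS = 1_000_000       # ~a million light-sensitive cells (per the quote)
BREMERMANN_EXPONENT = 93       # the transcomputational threshold of ~10^93 discussed above

# With two states per cell, the retina has 2**1_000_000 possible joint states.
# Working in log10 keeps the number manageable:
digits = RETINA_CELLS * math.log10(2)   # roughly 301,030
print(f"2^1,000,000 is about 10^{digits:,.0f}")

# The retina's state space dwarfs the transcomputational threshold:
assert digits > 300_000 > BREMERMANN_EXPONENT
```

The gap is not one of degree but of scale: the exponent itself (about 301,030) exceeds Bremermann’s exponent (93) by more than three thousandfold.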
Yet even machine embodiments can be computationally intractable (in the same way that organic bodies are computationally intractable), exceeding the possibility of exhaustively calculating every possible material state of the mechanism (on a molecular or atomic level). Thus the emergence of machine consciousness would also supervene upon a transcomputational embodiment. It is, at present, impossible to say whether a machine embodiment of consciousness would be a limitation upon that consciousness (because the embodiment is likely to be less radically transcomputational than the brain) or a facilitation of consciousness (because machines can be arbitrarily scaled up in a way that organic bodies cannot be).
Since the mind stands outside the possibilities of embodied computation, if machine consciousness emerges, machine embodiments will be as non-transparent to machine minds as organic embodiment is non-transparent to organic minds, but the machine minds, non-transparent to their embodiment as they are, will have access to energy sources far beyond any resources an organic body could provide. Such machine consciousness would not be bound by brute force calculation or linear models (as organic minds are not so bound), but would have far greater resources at its command for the development of its consciousness.
Since the body that today embodies mind already far exceeds Bremermann’s limit, and no machine as machine is likely to exceed this limit, machine consciousness emergent from computationally tractable bodies may, rather than being super-intelligent in ways that biologically derived minds can never be, on the contrary be a pale shadow of an organic mind in an essentially transcomputational body. This gives a whole new twist to the much-discussed idea of the mind’s embodiment.
Computation is not the be-all and end-all of mind; it is, in fact, only peripheral to mind as mind. If we had to rely upon calculation to make it through our day, we wouldn’t be able to get out of bed in the morning; most of the world is simply too complex to calculate. But we have a “work around” — consciousness. Marginalized as the “hard problem” in the philosophy of mind, or simply neglected in scientific studies, consciousness enables us to cut the Gordian Knot of transcomputability and to act in a complex world that far exceeds our ability to calculate.
Neither is consciousness the be-all and end-all of mind, although the rise of computer science and the increasing role of computers in our lives have led many to conclude that computation is primary and that it is consciousness that is peripheral. And, to be sure, in some contexts, consciousness is peripheral. In many of the same contexts of our EEA in which calculation is impossible due to complexity, consciousness is also irrelevant because we respond by an instinct that is deeper than and other than consciousness. In such cases, the mechanism of instinct takes over, but this is a biologically specific mechanism, evolved to serve the purpose of differential survival and reproduction; it would be difficult to re-purpose a biologically specific mechanism for any kind of abstract computing task, and not particularly helpful either.
Consciousness is not the be-all and end-all not only because instinct largely circumvents it, but also because machines have a “work around” for consciousness just as consciousness is a “work around” for the limits of computability; mechanism is a “work around” for the inefficiencies of consciousness. Machine mechanisms can perform precisely those tasks that so tax organic minds as to be virtually unsolvable, in a way that is perfectly parallel to the conscious mind’s ability to perform tasks that machines cannot yet even approach — not because machines can’t do the calculations, but because machines don’t possess the “work around” ability of consciousness.
It is when computers have the “work around” capacity that conscious beings have that they will be in a position to effect an intelligence explosion. That is to say, machine consciousness is crucial to AI that is able to perform in the way that AI is expected to perform, though AI researchers tend to be dismissive of consciousness. If the proof of the pudding is in the eating, well, then it is consciousness that allows us to “chunk the proofs” (i.e., to divide the proof into individually manageable pieces) and get to the eating all the more efficiently.
. . . . .
. . . . .
. . . . .
25 September 2013
Hegel is not remembered as the clearest of philosophical writers, and certainly not the shortest, but among his massive, literally encyclopedic volumes Hegel also left us one very short gem of an essay, “Who Thinks Abstractly?” that communicates one of the most interesting ideas from Hegel’s Phenomenology of Mind. The idea is simple but counter-intuitive: we assume that knowledgeable individuals employ more abstractions, while the common run of men content themselves with simple, concrete ideas and statements. Hegel makes the point that the simplest ideas and terms that tend to be used by the least knowledgeable among us also tend to be the most abstract, and that as a person gains knowledge of some aspect of the world the abstraction of terms like “tree” or “chair” or “cat” takes on concrete immediacy, previous generalities are replaced by details and specificity, and one’s perspective becomes less abstract. (I wrote about this previously in Spots Upon the Sun.)
We can go beyond Hegel himself by asking a perfectly Hegelian question: who thinks abstractly about history? The equally obvious Hegelian response would be that the historian speaks the most concretely about history, and it must be those who are least knowledgeable about history who speak and think the most abstractly about history.
“…it is difficult to imagine that any of the sciences could treat time as a mere abstraction. Yet, for a great number of those who, for their own purposes, chop it up into arbitrary homogenous segments, time is nothing more than a measurement. In contrast, historical time is a concrete and living reality with an irreversible onward rush… this real time is, in essence, a continuum. It is also perpetual change. The great problems of historical inquiry derive from the antithesis of these two attributes. There is one problem especially, which raises the very raison d’être of our studies. Let us assume two consecutive periods taken out of the uninterrupted sequence of the ages. To what extent does the connection which the flow of time sets between them predominate, or fail to predominate, over the differences born out of the same flow?”
Marc Bloch, The Historian’s Craft, translated by Peter Putnam, New York: Vintage, 1953, Chapter I, sec. 3, “Historical Time,” pp. 27-29
The abstraction of historical thought implicit in Hegel and explicit in Marc Bloch is, I think, more of a problem than we commonly realize. Once we look at the problem through Hegelian spectacles, it becomes obvious that most of us think abstractly about history without realizing how abstract our historical thought is. We talk in general terms about history and historical events because we lack the knowledge to speak in detail about exactly what happened.
Why should it be any kind of problem at all that we think abstractly about history? People say that the past is dead, and that it is better to let sleeping dogs lie. Why not forget about history and get on with the business of the present? All of this sounds superficially reasonable, but it is dangerously misleading.
Abstract thinking about history creates the conditions under which the events of contemporary history — that is to say, current events — are conceived abstractly despite our manifold opportunities for concrete and immediate experience of the present. This is precisely Hegel’s point in “Who Thinks Abstractly?” when he invites the reader to consider the humanity of the condemned man who is easily dismissed as a murderer, a criminal, or a miscreant. But we think in such abstract terms not only of local events, but also, if not especially, of distant events and large events that we cannot experience personally, so that massacres and famines and atrocities are mere massacres, mere famines, and mere atrocities because they are never truly real for us.
There is an important exception to all this abstraction, and it is the exception that shapes us: one always experiences the events of one’s own life with concrete immediacy, and it is the concreteness of personal experience contrasted to the abstractness of everything else not immediately experienced that is behind much (if not all) egocentrism and solipsism.
Thus while it is entirely possible to view the sorrows and reversals of others as abstractions, it is almost impossible to view one’s own sorrows and reversals in life as abstractions, and as a result of the contrast between our own vividly experienced pain and the abstract idea of pain in the life of another we have a very different idea of all that takes place in the world outside our experience as compared to the small slice of life we experience personally. This observation has been made in another context by Elaine Scarry, who in The Body in Pain: The Making and Unmaking of the World rightly observed that one’s own pain is a paradigm of certain knowledge, while the pain of another is a paradigm of doubt.
Well, this is exactly why we need to make the effort to see the big picture, because the small picture of one’s own life distorts the world so severely. But given our bias in perception, and the unavoidable point of view that our own embodied experience gives to us, is this even possible? Hegel tried to arrive at the big picture by seeing history whole. In my post The Epistemic Overview Effect I called this the “overview effect in time” (without referencing Hegel).
Another way to rise above one’s anthropic and individualist bias is the overview effect itself: seeing the planet whole. Frank White, who literally wrote the book on the overview effect, The Overview Effect: Space Exploration and Human Evolution, commented on my post in which I discussed the overview effect in time and suggested that I look up his other book, The Ice Chronicles, which discusses the overview effect in time.
I have since obtained a copy of this book, and here are some representative passages that touch on the overview effect in relation to planetary science and especially glaciology:
“In the past thirty-five years, we have grown increasingly fascinated with our home planet, the Earth. What once was ‘the world’ has been revealed to us as a small planet, a finite sphere floating in a vast, perhaps infinite, universe. This new spatial consciousness emerged with the initial trips into Low Earth Orbit…, and to the moon. After the Apollo lunar missions, humans began to understand that the Earth is an interconnected unity, where all things are related to one another, and that what happens on one part of the planet affects the whole system. We also saw that the Earth is a kind of oasis, a place hospitable to life in a cosmos that may not support living systems, as we know them, anywhere else. This is the experience that has come to be called ‘The Overview Effect’.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 15
“The view of the whole Earth serves as a natural symbol for the environmental movement. It leaves us unable to ignore the reality that we are living on a finite ‘planet,’ and not a limitless ‘world.’ That planet is, in the words of another astronaut, a lifeboat in a hostile space, and all living things are riding in it together. This realization formed the essential foundation of an emerging environmental awareness. The renewed attention on the Earth that grew out of these early space flights also contributed to an intensified interest in both weather and climate.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 20
“Making the right choices transcends the short-term perspectives produced by human political and economic considerations; the long-term habitability of our home planet is at stake. In the end, we return to the insights brought to us by our astronauts and cosmonauts as they took humanity’s first steps in the universe: We live in a small, beautiful oasis floating through a vast and mysterious cosmos. We are the stewards of this ‘good Earth,’ and it is up to us to learn how to take good care of her.”
Paul Andrew Mayewski and Frank White, The Ice Chronicles: The Quest to Understand Global Climate Change, University Press of New England: Hanover and London, 2002, p. 214
It is interesting to note in this connection that glaciology yielded one of the earliest forms of scientific dating techniques, which is varve chronology, originating in Sweden in the nineteenth century. Varve chronology dates sedimentary layers by the annual layers of alternating coarse and fine sediments from glacial runoff — making it something like dendrochronology, except for ice instead of trees.
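The counting logic behind varve chronology can be sketched in a few lines. This is a toy illustration with wholly hypothetical numbers, not data from any actual sediment core:

```python
# Toy sketch of varve counting (all numbers hypothetical).
# A varve is one annual couplet of coarse summer sediment and fine
# winter sediment; counting couplets down from a layer of known age
# assigns a calendar year to each, much as tree rings do.

TOP_YEAR = 1950  # assumed deposition year of the uppermost varve

# One entry per couplet, top of core first: (coarse mm, fine mm).
core = [(4.1, 1.2), (3.8, 1.0), (5.2, 1.5), (2.9, 0.8)]

# Each couplet below the top is one year older than the one above it.
chronology = [(TOP_YEAR - i, coarse + fine)
              for i, (coarse, fine) in enumerate(core)]

for year, thickness_mm in chronology:
    print(year, round(thickness_mm, 1))
```

As with dendrochronology, the real difficulty lies not in the counting but in cross-dating overlapping sequences from different sites, which this sketch leaves aside.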
Scientific historiography can give us a taste of the overview effect, though considerable effort is required to acquire the knowledge, and it is not likely to have the visceral impact of seeing the overview effect with your own eyes. Even an idealistic philosophy like that of Hegel, as profoundly different as this is from the empiricism of scientific historiography, can give a taste of the overview effect by making the effort to see history whole and therefore to see ourselves within history, as a part of an ongoing process. Probably the scientists of classical antiquity would have been delighted by the overview effect, if only they had had the opportunity to experience it. Certainly they had an inkling of it when they proved that the Earth is spherical.
There are many paths to the overview effect; we need to widen these paths even as we blaze new trails, so that the understanding of the planet as a finite and vulnerable whole is not merely an abstract item of knowledge, but also an immediately experienced reality.
. . . . .
. . . . .
. . . . .
25 March 2013
In my last post, The Problem with Diachronic Extrapolation, I attempted to show how diachronic extrapolation, while the most familiar form of futurism, is often misleading because it fails to adequately account for synchronic interactions as a diachronic strategic trend develops. In other posts concerned with unintended consequences I have emphasized that, in the long term, unintended consequences often outweigh intended consequences. Unintended consequences are the result of synchronic interactions that were not foreseen, that were no part of diachronic agency, and in those cases in which unintended consequences swamp intended consequences the synchronic interactions have proved more decisive in shaping the future than diachronic causality.
In my post on The Problem with Diachronic Extrapolation I made several assertions that clearly imply the limitation of inferences from the present to the future, which also implies the limitation of inferences from the present to the past. This brings up issues that go far beyond futurism.
In that post I wrote:
“…diachrony over significant periods of time cannot be pursued in isolation, since any diachronic extrapolation will interact with changed conditions over time, and this interaction will eventually come to constitute the consequences as much as the original trend diachronically extrapolated.”
“…the most frequent form of failed futurism is to take a trend in the present and to project it into the future, but any futurism worthy of the name must understand events in both their synchronic and diachronic context; isolation from succession in time is just as invidious as isolation from interaction across time…”
The reader may have noticed the resemblance of this species of failed futurism to uniformitarianism: instead of taking a strategic trend acting at present and extrapolating it into the future, uniformitarianism takes a physical force acting in the present and extrapolates it into the future (or, as is more likely the case in geology, into the past). This idea of uniformitarianism is usually expressed as, “the present is key to the past,” and we might similarly express the parallel form of futurism as being, “the present is key to the future.” These two claims — the present is the key to the past and the present is the key to the future — are logically equivalent since, as I pointed out previously, every present is the future of some past, and the past of some future.
Since these interpretations of uniformitarianism involve uniformity across past and future, these formulations closely resemble formulations of induction also stated in terms of past and future, as when the logical problem of induction is formulated, “Will the future be like the past?” It is at this point that the philosophy of time, the philosophy of history, the philosophy of science, and futurism all coincide, because it concerns a problem that all have in common.
Stephen Jay Gould noticed this similarity of uniformitarianism and induction in his first published paper, “Is uniformitarianism necessary?” Gould, of course, became famous for his critique of uniformitarianism, and for his alternative to it, punctuated equilibrium (for which he shares the credit with Niles Eldredge). In this early paper, Gould distinguished between substantive uniformitarianism and methodological uniformitarianism. He tried to show that the former is simply false, and that the latter, methodological uniformitarianism, is now subsumed under the scientificity of geology and paleontology. Here is how Gould put it:
“…we see that methodological uniformitarianism amounts to an affirmation of induction and simplicity. But since these principles belong to the modern definition of empirical science in general, uniformitarianism is subsumed in the simple statement: ‘geology is a science’. By specifically invoking methodological uniformitarianism, we do little more than affirm that induction is procedurally valid in geology.”
Stephen Jay Gould, “Is uniformitarianism necessary?” American Journal of Science, Vol. 263, March 1965, p. 227
That is to say, the earth sciences use the scientific method, which Gould characterizes in terms of inductive logic and the principle of parsimony (I would argue that Gould is also assuming methodological naturalism) — therefore everything that is worth saving in uniformitarianism is already secured by the scientific status of geology, and therefore uniformitarianism is dispensable. Having once served an important function in science, uniformitarianism has now, Gould contends, become an obstacle to progress.
As I noted above, Gould didn’t merely assert that uniformitarianism was no longer necessary, but devoted his career to arguing for an alternative, punctuated equilibrium, which asserts that long periods of stasis are interrupted by catastrophic discontinuities. While much has been written about uniformitarianism vs. punctuated equilibrium, I see this as the thin end of the wedge for considering all kinds of alternatives to strict uniformitarianism, and to this end I think we would do well to explore all possible patterns of development, whether uniform (slow, gradual, incremental), punctuated (sudden, catastrophic, discontinuous), or otherwise.
Of course, we could easily produce more sophisticated formulations of uniformitarianism that would avoid the subsequent problems that have been raised, but this is the path that leads to Ptolemaic epicycles and attempts to “save the appearances,” whereas what we want is a rich mixture of theoretical innovation from which we can try many different models and select for further development those that are most true to the world.
Since the philosophy of time, the philosophy of history, the philosophy of science, and futurism all coincide at the point represented by the problem of the relationship of parts of time to other parts of time (and the idea of temporal parts is itself philosophically contested), all of these disciplines stand to learn something of value from exploring alternatives to uniformitarianism. In so far as futurism is dominated by nomothetic diachrony, and constitutes a kind of historical uniformitarianism, very different forms of futurism might emerge from a careful study of the alternatives to uniformitarianism, or merely from a recognition that, as Gould put it, uniformitarianism is no longer necessary and something of an anachronism. If there is anything of which futurists ought to beware, being an anachronism must be close to the top of the list.
. . . . .
. . . . .
. . . . .
22 February 2013
Much of what I write here, whether commenting on current affairs or delving into the depths of prehistory, could be classed under the general rubric of philosophy of history. One of my early posts to this forum was Of What Use is Philosophy of History in Our Time? (An echo of the title of Hans Meyerhoff’s widely available anthology Philosophy of History in Our Time.) It could be argued that my subsequent posts have been attempts to answer this question (that is to say, to answer the question what is the use of philosophy of history in our time), to demonstrate the usefulness of bringing a philosophical perspective to history, contemporary and otherwise. The reader is left to judge whether this attempt has been a success (partial or otherwise) or a failure (partial or otherwise).
In several recent posts — as, for example in The Science of Time, Addendum on Big History as the Science of Time, and Human Agency and the Exaptation of Selection, inter alia — I have been writing a lot about the philosophy of history from the perspective of big history, which is a contemporary historiographical school that comes to history from the perspective of the big picture and primarily proceeds according to scientific naturalism. This latter condition makes of big history a particular species of naturalism.
In many posts to this forum I have emphasized my own naturalistic perspective both in philosophy generally speaking as well as more specifically in the philosophy of history. For example, in posts such as Natural History and Human History, The Continuity of Civilization and Natural History, and An Existentialist Philosophy of History, I have emphasized the continuity of human history and natural history, especially making the attempt to place civilization in a natural historical context.
This emphasis on big history and naturalism has meant that I have spent very little time writing about alternatives to naturalistic historical thought — with a certain exception, which the reader may well not immediately recognize, so I will point it out explicitly. In several posts — The Ethos of Formal Thought, Foucault’s Formalism, Cartesian Formalism, and Formal Strategy and Philosophical Logic: Work in Progress among them — I have discussed the possibility of formal thought in relation to historical understanding, i.e., topics not usually discussed from a formal perspective (which is usually confined to logic, mathematics, and some branches of science). Formalism represents a certain kind of countervailing intellectual influence to naturalism, and it has probably served roughly that function in my thought.
I have previously mentioned Darren Staloff’s lectures on the philosophy of history, The Search for a Meaningful Past: Philosophies, Theories and Interpretations of Human History. One of the motifs running through Staloff’s lectures is a contrast between what he calls naturalism and idealism. He sums up this motif in the final lecture, in which he adopts the perspectives of naturalism and idealism in turn, trying to give the listener a sense of the claims of each tradition. I found Staloff’s exposition of idealism less persuasive than his exposition of naturalism, and so I found the motif of a contrast between naturalism and idealism a bit strained, since it seemed to me that idealism really couldn’t carry its own weight in the way that it might have been able to in the past.
Recently I’ve encountered an approach to the philosophy of history that could be called “idealist” (at least in a certain sense), and this is much more persuasive to me than Staloff’s analytical representatives of the idealist tradition, like R. G. Collingwood. I have found this idealist perspective in the work of Ludwig Landgrebe, who was one of Husserl’s research assistants.
The casual reader of this blog might well have picked up on the amount of contemporary continental philosophy that I have read, but is unlikely to have realized the extent to which Edmund Husserl and phenomenology have been an influence on my thought. Nevertheless, that influence has been profound, to the point that many of Husserl’s expositors and commentators have also influenced my thinking. Recently I have been reading some essays by Ludwig Landgrebe, and this has started to give me another perspective on the philosophy of history.
Landgrebe wrote at least two papers on the philosophy of history, as well as one chapter of his book, Major Problems in Contemporary European Philosophy, from Dilthey to Heidegger. No doubt there is more material, but this is what I have found translated into English. (Landgrebe wrote an entire book on the phenomenological philosophy of history, Phänomenologie und Geschichte, but this has not been translated into English.) The two papers are “Phenomenology as Transcendental Theory of History” (which can be found in the collection of essays Husserl: Expositions and Appraisals, edited by Elliston and McCormick, University of Notre Dame Press, 1977. pp. 101-113) and “A Meditation on Husserl’s Statement: ‘History is the grand fact of absolute Being’” (The Southwestern Journal of Philosophy, Vol. 5, Issue 3, Fall 1974, pp. 111-125).
It is well known that Husserl’s last work, The Crisis of European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy, assembled posthumously from his papers, is the work in which Husserl placed phenomenology in historical context (for all practical purposes, for the first time), and considered the emergence of Western scientific thought in historical context. As such, this has been the point of departure of much historically-oriented phenomenological research, and the Crisis (as it has come to be known) and its supplementary texts were clearly influential for Landgrebe.
Landgrebe, however, as Husserl’s research assistant, was more than conversant with Husserl’s logical thought also. Husserl’s Experience and Judgment: Investigations in a Genealogy of Logic was a text assembled by Landgrebe from Husserl’s notes. Landgrebe consulted with Husserl throughout this project, and the original texts are all due to Husserl, but the structure of the book is entirely Landgrebe’s doing. Landgrebe brings the kind of rigor one learns in studying logic to his very compact essays on the philosophy of history. In this way, Landgrebe’s formulations have a formal character that makes them very congenial to me. Landgrebe’s approach is essentially that of a formal phenomenological theory of history, and this perspective allows me to assimilate Landgrebe’s insights both to idealistic historiography as well as my long-standing interest in formal thought.
If I were now to revise my speculative syllabus If I Lectured on the Philosophy of History (lecture 13 of which I had already assigned to phenomenology), I would definitely showcase Landgrebe’s philosophy of history as the most sophisticated phenomenological contribution to the philosophy of history.
. . . . .
. . . . .
. . . . .
27 January 2013
My title today, Human Agency and the Exaptation of Selection, is perhaps not a very good title, but if anyone out there has read a representative selection of my posts they will be aware that all of these topics — human agency, exaptation, and natural selection — are matters to which I have returned time and again, and I feel that I am beginning to see my way clear to a point at which I can systematically tie together these themes into something more comprehensive than occasional remarks and comments of the sort that are the usual fare of blog posts.
All macro-historical revolutions to date have simply happened to us; they were not planned or chosen or made to happen, they just happened. And before the emergence of human agency in history, all the great transitions of natural history — i.e., the natural equivalent of a macro-historical revolution — simply happened without design, purpose, or direction.
Human efforts (including individual choices) in constituting historical realities have, to date, been like the myriad accidents of natural history that together and cumulatively constitute natural history. Even though human consciousness gives meaning and value to these individual decisions, and at times we participate in collective meanings and values, none of this has yet risen to the level of consciously constituting an epoch of history on the basis of human meanings and values. We have given meaning and value to circumstances that we have (accidentally) brought about, but have not brought about a civilization or a way of life in response to a determination to realize particular meanings and values. This is the social equivalent of Schopenhauer’s assertion that, while we are free to do what we want, we are not free to want what we want.
To shape the future of history, to plan for the kind of civilization to come, and possibly even to create a kind of civilization consciously intended and brought into being, would be historically unprecedented on a scale beyond the unprecedented events of human history (such as I recently wrote about in Invariant Civilizational Properties in Futurist Scenarios, i.e., how it would be unprecedented for an invariant of civilization to be overturned), because the trend of human history being shaped by non-human forces is far older than human history, and far older than our species.
Naturalism and its Others
It is at this point that the naturalistically inclined philosopher of history must obviously and unavoidably part company with those who retain theological conceptions of the world and its development. The idea of the world, up until the emergence of human intelligence from human consciousness, being utterly unplanned, undirected, and undesigned is a rigorously (and indeed rigidly) naturalistic conception that excludes even the most distant and unconcerned creator of deism.
Even the religiously and theologically inclined who make no attempt to defy what science tells us about the world must retain some minimal sense of purpose and direction — perhaps a quasi-Aristotelian final cause — since without this there remains nothing upon which to pin one’s beliefs that is not strictly a part of nature — no transcendent eschatology or soteriology.
It should be obvious from my other posts that I am writing from a rigorously naturalistic perspective, but sometimes one must be explicit about these things so as not to leave any wiggle room, so that one’s naturalistic formulations will either be interpreted naturalistically or rejected tout court because they are naturalistic. What I have written above about unprecedented historical developments simply makes no sense if one deviates from a strict naturalism, and that is why I make it explicit here.
The Threshold of Agency
The imposition of human will upon unthinking and uncomprehending nature began in the most rudimentary ways — the chipping of stone for tools and the gathering of sufficient sustenance such that this might last beyond the next meal. At this level of planning and provision for the future, the human mind is no different from other mammalian minds, since we know that other mammals make rudimentary tools and store food for the future.
To define the point at which human planning and provision for the future exceed this common mammalian standard, and thereby also exceed the possibility of being entirely the result of instinct refined by natural selection, genetically encoded in our biology (and the ultimate limit of evolutionary psychology), involves a sorites paradox (i.e., the paradox of the heap). While we need not define a particular point that human planning exceeds the mammalian norm, we can content ourselves with a span of time (viz. between the emergence of biologically modern homo sapiens and the advent of the historical period strictly speaking, i.e., a span of time encompassing human prehistory). In accordance with what I have called the Truncation Principle, we can in fact recognize an historical discontinuity, even if that discontinuity comes about gradually.
Over some period of time, then, human planning and provision exceeded the mammalian norm and became something historically unprecedented. We tend to magnify this transition, calling ourselves the “rational animal” and associating our reason with that which is uniquely human. One of the great themes of our time is that of human beings asserting their control over the planet, assuming de facto right over the disposition of the biosphere. In fact, we don’t even control our own history, much less the history of the planet. We affect our history and the natural history of our planet, but we do not control them.
We rose to the level of micro-historical efficacy with the first rudimentary steps of tool making and food storage. We rose to the level of meso-historical efficacy in constituting human societies. These societies began as emergent accidents of human behavior, but I think that we can assert that, over time, we have consciously constituted at least a few limited examples of communities intentionally constituted to certain ends. We rose to the level of exo-historical efficacy in constituting the largest institutions and political entities that have dominated human history. Many of these institutions and political entities have also been accidents of history, but, again, I think that we can say that there are at least some explicit examples of the purposeful constitution of human institutions and political entities.
In other words, we have passed at least three thresholds of agency defined in terms of ecological temporality. For human agency to rise to the level of macro-historical efficacy we would need to rise to the level of shaping entire eras of civilization and history. We aren’t there yet. As with the natural historical emergence of human communities and later larger institutions, which began with historical accidents and were only later rationalized, human macro-history remains at the level of our accidental participation. Millions upon millions of conscious human actions were required to create the industrial revolution, but no one consciously sought to create the industrial revolution; although it was, in a sense, made by us, in a more important sense it simply happened to us.
The Problem of Progress
In several posts — Civilization and the Technium, Biology Recapitulates Cosmology, and Progress, Stagnation, and Retrogression among them — I have mentioned Kevin Kelly’s explicit arguments for progress in his book What Technology Wants. I have mentioned this because, in terms of our current intellectual climate, he is an outlier, although among techno-philosophers he may represent something closer to a consensus. Among contemporary academic philosophers and historians, almost no one argues for progress — to do so is considered an unforgivable form of naïveté.
I mention this again here because the above treatment of human agency in terms of ecological temporality might provide a quantitative way to talk about human progress and the progress of human civilization that is not tied to the development of some particular technology. Any time anyone asserts that there has been progress because we now have airplanes and computers whereas once we did not, someone else responds by pointing to the moral horrors of the twentieth century, such as genocide, to demonstrate that technological progress cannot be conflated with moral progress. Moral progress requires an entirely separate argument, as does aesthetic progress. (So too, presumably religious, ideological, or eschatological progress, but I will not attempt to address any of these at present.)
The expanding scope of human agency through levels of ecological temporality can be interpreted as a kind of progress independent of any technological development. In so far as human agency is centrally implicated in human morality, the progress of human agency could even be interpreted as a form of moral progress. Now, this is an admittedly deceptive way to formulate it, because I do not here mean “moral” in the narrow sense of “ethical” but rather “moral” in the way we would use the term in a phrase like, “the moral lives of human beings.” Another way to formulate this would be to call it human progress, but this is probably no improvement at all. I mean progress in the form of asserting human agency over the peculiarly human aspects of our lives — emotions, relationships, interactions, evaluations, creations, and so forth.
A Darwinian conception of history
A Darwinian conception of history and of civilization is simply a conception of history and civilization fully in accord with Darwin’s thoroughgoing naturalism, and especially the role of selection in the constitution of historical entities (like human history and human civilization). We can understand Darwinian conceptions of history and civilization as aspects of a Darwinian cosmology. The above formulations of the ecological temporal thresholds of human agency allow us to do this in an interesting way.
When human agency crosses a threshold from being subject to accidents, including its own cumulative accidents, to asserting control over the whole process of agency and its consequences — i.e., what it brings about — what is essentially happening is that human agency is taking over for natural selection; selection, or some part of selection, is transferred from nature to humanity. In other words, the expansion of human agency is the exaptation of selection. Selection that began as natural selection, taken over by the expanding agency of human beings, becomes human selection. This is exaptation not of organic structures, but of behavioral structures, i.e., exaptation on the order of the will.
To assert that the expansion of human agency is the exaptation of selection is to formulate a Darwinian conception of history and of civilization that does not need to declare that progress is impossible to account for in a selective paradigm, and that is also not obligated to argue that progress is inherent in the very nature of things, which it is not.
One can understand the problematic idea of “progress” (which we may someday be able to take out of scare quotes) as the increasing human ability to impose human direction, purpose, and design upon history.
. . . . .
. . . . .
. . . . .
3 November 2012
How do we orient ourselves within historiography? This may sound like an odd question; I will try to make it sound like a sensible question, and a question with relevance extending far beyond the bounds of historiography narrowly construed.
One way to orient oneself within historiography is to accept and elaborate upon a familiar schema of historical periodization. There are many from which to choose. For example, if one divides Western history into ancient, medieval and modern periods, and then goes on to describe the character of medieval civilization, this constitutes a kind of orientation within historiography. Others working on the medieval period will recognize your approach based on a received conception of periodization and will critique the effort accordingly.
While I often write about problematic issues in historical periodization, I am going to consider a very different orientation within historiography today, and this might be considered to be a methodological orientation, based on how one assesses and organizes the objects of historical knowledge.
A familiar distinction within historiography is that between the synchronic and the diachronic. I have written about this distinction in Synchronic and Diachronic Approaches to Civilization and Synchronic and Diachronic Geopolitical Theories. “Synchrony” and “diachrony” sound like forbidding technical terms, but the concepts they attempt to capture are not at all difficult. Synchrony is the present construed broadly enough to admit of short term historical interaction, while diachrony typically takes a narrower view but a longer span of time. Sometimes this is expressed by saying that synchrony is across time while diachrony is through time.
Another distinction often made is that between the nomothetic and the ideographic. Again, these are intimidating technical terms, but the ideas are simple. Nomothetic (which comes from the Greek “nomos” for “law” or “norm”) approaches are concerned with law-like transitions in time: cause and effect. For example, you intentionally touch a stove not knowing that it is hot, you burn your finger, you withdraw your hand and give a shout of pain. Ideographic approaches do not quite constitute the negation of cause and effect, but they focus on all that is merely contingent, accidental, and unpredictable in life. For example, while looking at some distraction out of the corner of your eye, you trip, and in seeking to catch your fall you touch a hot stove and burn your finger.
When we put together these two historiographical distinctions — synchronic and diachronic, nomothetic and ideographic — we get four possible permutations of historiographical methodology, as follows:
● nomothetic synchrony
Law-like interaction of all elements within a broadly-defined present
● ideographic synchrony
Contingent interactions of all elements within a broadly-defined present
● nomothetic diachrony
Law-like succession of related events through historical time (especially “deep time”)
● ideographic diachrony
Contingent succession of related events through historical time
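The four permutations above are simply the cross product of the two distinctions. As an aside for the programmatically minded, the schema can be generated mechanically; this is a minimal sketch of my own, not anything from the historiographical literature:

```python
# The two historiographical distinctions treated as independent axes.
modes = ["nomothetic", "ideographic"]   # law-like vs. contingent
spans = ["synchrony", "diachrony"]      # broadly-defined present vs. through time

# Crossing the axes yields the four methodological permutations,
# in the order the essay lists them.
methodologies = [f"{mode} {span}" for span in spans for mode in modes]
```

The point of the exercise is only that the schema is exhaustive with respect to its two axes: nothing falls outside the four cells, though (as noted below) further axes could be added.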
This schematic representation of historiographical methodologies is in no wise intended to be exhaustive; I’m sure if I continued to think about this, all kinds of conditions, qualifications, and additions would occur to me. For example, one obvious way to give this much more subtlety and sophistication would be to define each of the above methodological orientations for each division of what I have called ecological temporality, i.e., define each method for each level of time, from the micro-temporality of lived experience to the meta-temporality of the unfolding of ideas in history. I’m not going to attempt to do this at present; I just wanted to give a sense of the simplified schematism I am employing here, which I hope has some relevance despite its simplicity.
All of this sounds very abstract, but if just the right intuitive illustrations of each concept can be found, the concepts will gain in concreteness and depth, and their usefulness will be immediately understood. I can’t claim that I have yet assembled the perfect intuitive illustrations for all four of these methodologies, but I will give you what I have at present, and as I continue to think about this I will (hopefully) add some telling examples.
Nomothetic synchrony, as a method of highlighting the law-like interaction of all elements within a broadly-defined present, is perhaps the most difficult to intuitively illustrate. What “the present” includes is ambiguous, but I have said that the present is “broadly-defined,” so you will understand that the present is not here the punctiform present but something more like “current events.” Current events are continually feeding back on themselves by being repeated in the media and iterated throughout numerous cultural channels. Not all of this feedback, and not all of these iterations, are law-like, but some are. For example, procedural rationality — laws, rules, and regulations intended to bring order and system to the ordinary business of life — constitutes a highly complex set of law-like interactions in the present. In natural history, in contradistinction to human history, ecology is, in a sense, an instance of nomothetic synchrony, as is that genre of writing and study once called “nature studies,” which focuses on life cycles and predictable patterns within a defined and limited ecosystem, habitat, or niche. Anything, then, that we can describe in ecological terms can also be described in terms of nomothetic synchrony, and since I have taken the trouble to define metaphysical ecology, this category is potentially highly comprehensive. For example, if we call sociology the ecology of society, or we call cosmology galactic ecology, these disciplines could both be treated in terms of nomothetic synchrony.
Ideographic synchrony as constituted by all contingent interactions within a broadly-defined present might be summed up as William James famously summarized sensory perception for an infant: “The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing, confusion.” Ideographic synchrony is a blooming, buzzing confusion. Anarchic processes like financial markets and warfare might be good illustrations of ideographic synchrony. Of course, markets are supposed to behave according to procedural rationality, and wars are supposed to be fought according to a strategy — but we have all heard of the “fog of war” and of battlefield “friction” (both concepts due to Clausewitz), as we have all heard that no plan survives contact with the enemy. Similarly, no trading strategy survives exposure to the market.
Nomothetic diachrony, the law-like succession of related events through historical time, is the paradigmatic form of historical thought, but more often than not an elusive ideal. Many “laws of history” have been proposed, but none have been widely accepted. The only law of history that has survived is not from history, but from biology: natural selection. Evolution, while often apparently random and pervasively contingent, is a perfect illustration of law-like transitions through deep time. The “big history” movement is also a paradigm case of nomothetic diachrony, with the central theoretical narrative being that of increasing complexity.
Ideographic diachrony, the contingent succession of related events through historical time, can be illustrated in several imaginative ways. The biography of an individual primarily consists of a tight focus on a contingent sequence of events (events in the life of one individual) through a period of time not limited to the broadly-defined present. Many writers like to dwell on the role of the merely contingent and even the spectacularly accidental in history, as with Pascal’s several remarks about how if Cleopatra’s nose had had another shape, history would be different — a particular theme that has since been taken up by others (as in Daniel J. Boorstin’s book, Cleopatra’s Nose: Essays on the Unexpected). There is also the famous rhyme about how “for want of a nail a kingdom fell,” which also focuses on the disproportionate historical influence of accidental contingencies. The “butterfly effect” is another illustration.
These four concepts — nomothetic synchrony, ideographic synchrony, nomothetic diachrony, and ideographic diachrony — provide a kind of methodological orientation in historiography. But it is more than merely methodological, since particular methods imply particular metaphysical orientations as well. Someone who holds the cataclysmic conception of history — based upon a denial of human agency — is likely to pursue an ideographic methodology rather than a nomothetic methodology. However, the four conceptions of history that I have defined don’t neatly map onto the four methodologies defined above, so I can’t just connect these two quadripartite schemas straight across, showing that each conception of history has an associated methodology.
It’s more complicated than that. It usually is with history.
. . . . .
. . . . .
. . . . .
29 October 2012
Parochialism, ironically, knows no bounds. Our habit of blinkering ourselves — what visionary poet William Blake called “mind-forged manacles” — is nearly universal. Sometimes even the most sophisticated minds miss the simple things that are staring them in the face. Usually, I think this is a function of the absence of a theoretical context that would make it possible to understand the simple truth staring us in the face.
I have elsewhere written that one of the things that makes Marx a truly visionary thinker is that he saw the industrial revolution for what it was — a revolution — even while many who lived through this profound series of events were unaware that they were living through a revolution. So even if one’s theoretical context is almost completely wrong, or seriously flawed, the mere fact of having the more comprehensive perspective bequeathed by a theoretical understanding of contemporary events can be enough to make it possible for one to see the forest for the trees.
Darwin wrote somewhere (I can’t recall where as I write this, but will add the reference later when I run across it) that from his conversations with biologists prior to publishing The Origin of Species he knew how few were willing to think in terms of the mutability of species, but once he had made his theory public it was rapidly adopted as a research program by biologists, and Darwin suggested that countless facts familiar to biologists but hitherto not systematically incorporated into theory suddenly found a framework in which they could be expressed. Obviously, these are my words rather than Darwin’s, and when I can find the actual quote I will include it here, but I think I have remembered the gist of the passage to which I refer.
It would be comical, if it were not so pathetic, that one of the first responses to Darwin’s systematic exposition of evolution was for people to look around for “transitional” evolutionary forms, and, strange to say, they didn’t find any. This failure to find transitional forms was interpreted as a problem for evolution, and expeditions were mounted in order to search for the so-called “missing link.”
The idea that the present consists entirely of life forms having attained a completed and perfected form, and that all previous natural history culminates in these finished forms of the present, therefore placing all transitional forms in the past, is a relic of teleological and equilibrium thinking. Once we dispense with the unnecessary and mistaken idea that the present is the aim of the past and exemplifies a kind of equilibrium in the history of life that can henceforth be iterated to infinity, it becomes immediately obvious that every life form is a transitional form, including ourselves.
A few radical thinkers understood this. Nietzsche, for example, understood this all-too-clearly, and wrote that, “Man is a rope stretched between the beasts and the Superman — a rope over an abyss. A dangerous crossing, a dangerous wayfaring, a dangerous looking-back, a dangerous trembling and halting. What is great in man is that he is a bridge and not a goal.” But assertions as bold as that of Nietzsche were rare. Darwin himself didn’t even mention human evolution in The Origin of Species (though he later came back to human origins in The Descent of Man): Darwin first offered a modest formulation of a radical theory.
So what has all this about Marx and Darwin to do with the great filter, mentioned in the title of this post? I have written many posts about the Fermi paradox recently without ever mentioning the great filter, which is an important part of the way that the Fermi paradox is formulated today. If we ask why, if the universe is supposedly teeming with alien life, and possibly also with alien civilizations, we haven’t met any of them, we have to draw the conclusion that, among all the contingencies that must hold in order for an industrial-technological civilization to arise within our cosmos, at least one of these contingencies has tripped up all previous advanced civilizations, or else they would be here already (and we would probably be their slaves).
The contingency that has prevented any other advanced civilization in the cosmos from beating us to the punch is called the great filter. Many who write on the Fermi paradox, then, ask whether the great filter is in our past or in our future. If it is in our past, we have good reason to hope that our civilization can be a going concern. If it is in our future, we have a very real reason to be concerned, since if no other advanced civilization has made it through the great filter in their development, it would seem unlikely that we would prove the exception to that rule. So a neat way to divide the optimists and the pessimists in regard to the future of human civilization is whether someone places the great filter in the past (optimists) or in the future (pessimists).
Human beings are the only species (on the only biosphere known to us) known to have created industrial-technological civilization. This is our special claim to intelligence. But before us there were numerous precursor species, and many hominid species that have since gone extinct. Many of these hominids (who cannot all be called human “ancestors” since many of them were dead ends on the evolutionary tree) were tool users, and it is for this reason that I noted in Civilization and the Technium that the technium is older than civilization (and more widely distributed than civilization). But now we are the only remaining hominid species on the planet. So in the past, we can already see a filter that has narrowed down the human experience to a single sentient and intelligent species.
Writers on the technological singularity and on the post-human and even post-biological future have speculated on a wide variety of possible scenarios in which post-human beings, industrial-technological civilization, and the technium will expand throughout the cosmos. If these events come to pass, the narrowing of the human experience to a single biological species will eventually be followed by a great blossoming of sentient and intelligent agents who may not be precisely human in the narrow sense, but in a wider sense will all be our descendants and our progeny. In this eventuality, the narrow bottleneck of humanity will expand exponentially from its present condition.
Looking at the present human condition from the perspective of multiple predecessor species and multiple future species, we see that the history of sentient and intelligent life on earth has narrowed in the present to a single hominid species. The natural history of intelligence on the Earth has all its eggs in one basket. Our existence as the sole sentient and intelligent species means that we are the great filter.
If we survive ourselves, we will have a right to be optimistic about the future of intelligent life in the universe — but not until then. Not until we have been superseded, not until the human era has ended, ought we to be optimistic.
Man is a narrow strand stretched between pre-human diversity and post-human diversity.
. . . . .
. . . . .
. . . . .
9 October 2012
The “technium” is a term coined by Kevin Kelly in his book What Technology Wants. The author writes that he dislikes inventing words, but felt he needed to coin a term in the context of his exposition of technology; I, on the contrary, don’t mind in the least inventing words. I invent words all the time. When we formulate a new concept we ought to give it a new name, because we are not only expanding our linguistic vocabulary, we are also extending our conceptual vocabulary. So I will without hesitation take up the term “technium” and attempt to employ it as the author intended, though I will extend the concept even further by applying some of my own terminology to the idea.
In What Technology Wants the technium is defined as follows:
“I dislike inventing new words that no one else uses, but in this case all known alternatives fail to convey the required scope. So I’ve somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the technium. The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. It includes intangibles like software, law, and philosophical concepts. And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections. For the rest of this book I will use the term technium where others might use technology as a plural, and to mean a whole system (as in “technology accelerates”). I reserve the term technology to mean a specific technology, such as radar or plastic polymers.”
Some time ago, in some earlier posts here, I started using the term “social technology” to indicate those artifacts of human invention that are not particular pieces of hardware. In making that distinction I did not think to further subdivide and extrapolate all possible kinds of technology, nor to unify them all together into one over-arching term (at least, I don’t remember having the idea). This is what, as far as I understand it, the technium means: the most comprehensive conception of technology, including social technologies and electromechanical technologies and biological technologies and so forth.
Although we usually don’t think of it like this, technology is older than civilization. Lord Broers led off his 2005 Reith Lectures with an account of the “Grimes Graves” flint mining site, which virtually constituted an entire Neolithic industrial complex. While Grimes Graves is contemporaneous with agriculture, and therefore with a broad conception of agricultural civilization, there were probably other such industries dating to the Paleolithic that are lost to us now.
With the emergence of human cognitive modernity sometime about fifty to sixty thousand years ago, human beings began making tools in a big way. Of course, earlier hominids before homo sapiens made tools also, although their toolkits were pretty rudimentary and showed little or no development over hundreds of thousands of years. Still, it should be observed that tools and technology are not only older than civilization, they are even older than human beings, in so far as we understand human beings narrowly as homo sapiens only (though it would be just as legitimate to extend the honorific “human being” to all hominids). What this means is that the technium is older than civilization.
If we take the technium as an historical phenomenon and study it separately from the history of human beings or the history of civilization, we see that it is legitimate to identify the technium as an independent object of inquiry since it has a life of its own. At some points in history the technium has coincided fully with civilization; at other points in time, the technium has not precisely coincided with civilization. As I have just noted above, the technium preceded the advent of civilization, and therefore in its earliest stages did not coincide with civilization.
At the present moment in history, with our technological artifacts spread across the solar system and crowding the orbit of the earth, the technium again, in extending beyond the strict range of human civilization, does not precisely correspond with the extent of civilization. The possibility of a solarnet (this term is due to Heath Rezabek, and the idea is given an exposition in my Cyberspace and Outer Space) that would constitute an internet for a human civilization throughout our native solar system, would be an expansion of the technium throughout our solar system, and it is likely that this will precede human spacesteading (or, at least, will be at the leading edge of human spacesteading) so that the technium will have a greater spatial extent than civilization for some time.
If, at some future time, human beings were to build and launch Bracewell-von Neumann probes — self-replicating robotic probes sent to other solar systems, at which point the self-replicating probes employ the resources of the other solar system to build more Bracewell-von Neumann probes which are then sent on to other solar systems in turn — when, in the fullness of time, these probes had spread through the entire Milky Way galaxy (which would take less than four million years), the technium would then include the entire Milky Way, even if we couldn’t properly say that human civilization covered the same extent.
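The “less than four million years” figure can be sanity-checked with back-of-envelope arithmetic. The probe speed, hop distance, and replication delay below are illustrative assumptions of my own, not figures from any particular study; the point is only that plausible values land in the same order of magnitude:

```python
# Rough crossing time for a self-replicating probe wavefront spanning the galaxy.
# All parameter values are illustrative assumptions.
GALAXY_DIAMETER_LY = 100_000   # approximate diameter of the Milky Way, light-years
PROBE_SPEED_C = 0.1            # assumed cruise speed as a fraction of lightspeed
HOP_LY = 10                    # assumed distance between successive host systems
REPLICATION_YEARS = 100        # assumed pause to build the next probe generation

hops = GALAXY_DIAMETER_LY / HOP_LY              # 10,000 replication stops
travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C   # 1,000,000 years in transit
total_years = travel_years + hops * REPLICATION_YEARS
# total_years = 2,000,000: within the post's "less than four million years"
```

Slower probes or longer replication pauses push the total up quickly, which is why published estimates for a full galactic traversal range from a few million to tens of millions of years.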
It is an interesting feature of a lot of futurism that focuses on technology — and here I am thinking of Kevin Kelly’s book here under consideration as well as the extensive contemporary discussion of the technological singularity — that such accounts tend to remain primarily terrestrially-focused, while it is another party of futurists who focus on scenarios in which human space travel plays a significant role in the future. Both visions are inadequate, because both technological advances and space travel that projects civilization beyond the Earth will play significant roles in the future, and in fact the two will not be distinguishable. As I have noted above, the technium already extends well beyond the Earth to the other planets of our solar system, and, if we count the Voyager probes now in deep space, beyond the solar system.
One way in which we see technologically-based futurism focusing on terrestrial scenarios is the terminology and concepts employed. While the term isn’t used much today, there is the idea of a “technosphere,” the technological analogue of those spheres recognized by the earth sciences, such as the geosphere, the hydrosphere, the biosphere, and the lithosphere. These are essentially geocentric or Ptolemaic conceptions, which remain eminently valid in regard to Earth-specific earth sciences, but which are misleading when applied to technology, which has already slipped the surly bonds of earth.
More contemporary conceptions — which, of course, have a history of their own — would be that of a planetary civilization or, on a larger scale, the idea of a matrioshka brain, which latter could be understood as part of a human scenario of the future or part of a singularity scenario.
Michio Kaku has many times referenced the idea of a planetary civilization, and he often does so citing Kardashev’s classifications of civilization types based on energy uses. Here is Kaku’s exposition of what he calls a Type I civilization:
Type I civilizations: those that harvest planetary power, utilizing all the sunlight that strikes their planet. They can, perhaps, harness the power of volcanoes, manipulate the weather, control earthquakes, and build cities on the ocean. All planetary power is within their control.
Michio Kaku, Physics of the Impossible, Chapter 8, “Extraterrestrials and UFOs”
Of course, anyone is free to define types of civilization however they like, and Kaku has been consistent in this characterization of civilization across his own works, but it does not have much of a relationship to the schema of Type I, II, and III civilizations as originally laid out by Kardashev. Kardashev was quite explicit in his original paper, “Transmission of Information by Extraterrestrial Civilizations” (1964), that a Type I civilization was at a “technological level close to the level presently attained on the earth.” The earth’s energy use has increased significantly since Kardashev wrote this, so according to Kardashev’s original idea, we are today firmly within the territory of a Type I civilization. But Kardashev’s conception is not what Kaku has in mind as a planetary civilization:
“As I’ve discussed in my previous books, our own civilization qualifies a Type 0 civilization (i.e., we use dead plants, oil and coal, to fuel our machines). We utilize only a tiny fraction of the sun’s energy that falls on our planet. But already we can see the beginnings of a Type I civilization emerging on the Earth. The Internet is the beginning of a Type I telephone system connecting the entire planet. The beginning of a Type I economy can be seen in the rise of the European Union, which in turn was created to compete with NAFTA.”
Michio Kaku, Physics of the Impossible, loc. cit.
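The gap between Kardashev’s original benchmark and Kaku’s usage can be made concrete with Carl Sagan’s continuous interpolation of the Kardashev scale, which is not in either author’s passage quoted here but is standard in this literature; the present-day figure for terrestrial energy use below is an approximation:

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Carl Sagan's continuous interpolation: K = (log10(P) - 6) / 10, P in watts."""
    return (math.log10(power_watts) - 6) / 10

# Sagan's rescaled ladder puts Type I at 1e16 W, far above Kardashev's
# original Type I benchmark of roughly 4e12 W (1964-era terrestrial usage).
earth_today = kardashev_rating(2e13)   # ~2e13 W is a commonly cited modern figure
# earth_today comes out near 0.7: past Kardashev's original benchmark,
# but still "Type 0" on the rescaled ladder Kaku is using.
```

So both claims are right on their own terms: we have exceeded Kardashev’s 1964 benchmark, while remaining a fraction of a Type I civilization on the rescaled ladder.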
In his Physics of the Future, Kaku devotes Chapter 8, “Future of Humanity,” to the idea of a planetary civilization, in which he elaborates in more detail on the above themes:
The culmination of all these upheavals is the formation of a planetary civilization, what physicists call a Type I civilization. This transition is perhaps the greatest transition in history, marking a sharp departure from all civilizations of the past. Every headline that dominates the news reflects, in some way, the birth pangs of this planetary civilization. Commerce, trade, culture, language, entertainment, leisure activities, and even war are all being revolutionized by the emergence of this planetary civilization. Calculating the energy output of the planet, we can estimate that we will attain Type I status within 100 years. Unless we succumb to the forces of chaos and folly, the transition to a planetary civilization is inevitable, the end product of the enormous, inexorable forces of history and technology beyond anyone’s control.
Michio Kaku, Physics of the Future, p. 11
And to put it in a more explicitly moral (and bifurcated, i.e., Manichean) context:
There are two competing trends in the world today: one is to create a planetary civilization that is tolerant, scientific, and prosperous, but the other glorifies anarchy and ignorance that could rip the fabric of our society. We still have the same sectarian, fundamentalist, irrational passions of our ancestors, but the difference is that now we have nuclear, chemical, and biological weapons.
Michio Kaku, Physics of the Future, p. 16
For Kaku, the telos of civilization’s immediate future is the achievement of a planetary technium. The roots of this idea go back at least to the Greek architect and city planner Constantinos Doxiadis, who was quite famous in the middle of the twentieth century, authored many books, formulated a theory of urbanism that I personally find more interesting than anything written today (although he called his theory “ekistics” which is not an attractive name), and drew up the plans for Islamabad. Doxiadis forecast an entire hierarchy of settlements (which he called ekistic units), from the individual to the ecumenopolis, the world-city.
Here is how Doxiadis defined ecumenopolis in his treatise on urbanism:
Ecumenopolis: the coming city that will, together with the corresponding open land which is indispensable for Man, cover the entire Earth as a continuous system forming a universal settlement. Term coined by the author and first used in the October 1961 issue of Ekistics.
Constantinos A. Doxiadis, Ekistics: An Introduction to the Science of Human Settlements, New York: Oxford University Press, 1968, p. 516 (Doxiadis, like me, had no compunctions about inventing his own terminology)
In What Technology Wants Kelly explicitly invoked ecumenopolis as both unsettling and possibly inevitable:
The technium is a global force beyond human control that appears to have no boundaries. Popular wisdom perceives no counterforce to prevent technology from usurping all available surfaces of the planet, creating an extreme ecumenopolis — planet-sized city — like the fictional Trantor in Isaac Asimov’s sci-fi stories or the planet Coruscant in Lucas’s Star Wars. Pragmatic ecologists would argue that long before an ecumenopolis could form, the technium would outstrip the capacity of Earth’s natural systems and thus would either stall or collapse. The cornucopians, who believe the technium capable of infinite substitutions, see no hurdle to endless growth of civilization’s imprint and welcome the ecumenopolis. Either prospect is unsettling.
Kevin Kelly, What Technology Wants, First published in 2010 by Viking Penguin, p. 197
Now, I am not saying that the scenarios of Kevin Kelly and Michio Kaku exclude the human future in space, but space does not seem to be a particular interest of either author, so it does not receive systematic development or exposition in their work. So I would like to place the technium in a Copernican context, i.e., in the context of a Copernican civilization — although it should be obvious from what I wrote above that a Copernican technium will not always coincide with a Copernican civilization.
Some of this will be familiar to those who have read my other posts on Copernican civilization and astrobiology. In A Copernican Conception of Civilization (later refined in my formulations in Eo-, Eso-, Exo-, Astro-, based on Joshua Lederberg’s concepts of eobiology, esobiology, and exobiology) I formulated the following definitions of civilization:
● Eocivilization: the origins of civilization, wherever and whenever it occurs, terrestrial or otherwise
● Esocivilization: our terrestrial civilization
● Exocivilization: extraterrestrial civilization exclusive of terrestrial civilization
● Astrocivilization: the totality of civilization in the universe, terrestrial and extraterrestrial civilization taken together in their cosmological context
Now it should be obvious how we can further adapt these same definitions to the technium:
● Eotechnium: the origins of the technium, wherever and whenever it occurs, terrestrial or otherwise
● Esotechnium: our terrestrial technium
● Exotechnium: any extraterrestrial technium exclusive of the terrestrial technium
● Astrotechnium: the totality of technology in the universe, our terrestrial and any extraterrestrial technium taken together in their cosmological context
The esotechnium corresponds to what has been called the technosphere, mentioned above. I have pointed out that the concept of the technosphere (like other -spheres, such as the hydrosphere and the sociosphere) is essentially Ptolemaic in conception, and that to make the transition to fully Copernican conceptions of science and the world we need to transcend our Ptolemaic ideas and begin to employ Copernican ideas. Thus to recognize that the technosphere corresponds to the esotechnium constitutes conceptual progress, because on this basis we can immediately posit the exotechnium, and beyond both the esotechnium and the exotechnium we can posit the astrotechnium.
A strict interpretation of the technosphere or esotechnium would be limited to the surface of the Earth, so that all the technology flying around in low Earth orbit, which is so closely tied in with planetary technological systems, would constitute an exotechnium. If we define the boundary of the Earth as the Kármán line, 100 km above sea level, this would include within the technosphere or esotechnium all of the highest-flying aircraft and weather balloons, but would exclude all of the lowest-orbiting satellites. Even if we were to include near-Earth orbit, so saturated with satellites, as part of the esotechnium, there would still be our technological artifacts on the moon, Mars, Venus, and orbiting around distant bodies of the solar system. Farthest out of all are the spacecraft Voyager 1 and Voyager 2, with Voyager 1 having already crossed the heliopause and passed into interstellar space.
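The strict Kármán-line criterion above can be sketched as a trivial classifier. This is only an illustration of the boundary rule, not anything from the texts cited; the artifact names and altitudes below are my own illustrative assumptions.

```python
# Minimal sketch: classify a technological artifact as part of the
# esotechnium (at or below the Karman line) or the exotechnium
# (above it), following the strict boundary criterion discussed above.

KARMAN_LINE_KM = 100  # conventional boundary: 100 km above sea level

def classify(altitude_km: float) -> str:
    """Assign an artifact to the esotechnium or exotechnium by altitude."""
    return "esotechnium" if altitude_km <= KARMAN_LINE_KM else "exotechnium"

# Illustrative artifacts with approximate altitudes (assumptions)
artifacts = {
    "weather balloon": 35,
    "high-altitude aircraft": 20,
    "low-orbiting satellite": 400,
    "lunar lander": 384_400,
}

for name, altitude in artifacts.items():
    print(f"{name}: {classify(altitude)}")
```

On this rule everything up to and including the Kármán line falls within the esotechnium, and everything beyond it, however closely coupled to planetary systems, falls within the exotechnium.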
One question that Kelly left unanswered in his exposition of the technium is whether or not it is to be understood as human-specific, i.e., as the totality of technology generated and employed by human beings. In the nearer-term future there may be a question of distinguishing between human-produced technology and machine-produced technology; in the longer-term future there may be a question of distinguishing between human-generated technology and exocivilization-produced technology. In so far as the idea of the technological singularity involves the ability of machines to augment their own technology, the distinction between human industrial-technological civilization and the post-human technological singularity is precisely that between human-generated technology and machine-generated technology.
There is a perfect parallel between the Terrestrial Eocivilization Thesis and what is implied in the above, a Terrestrial Eotechnium Thesis, which would constitute the claim that all technology begins on the Earth and expands into the universe from this single point of origin.
At this point we might want to distinguish between an endogenous technium, having its origins on the Earth, and any exogenous technium, having its origins in an alien civilization. Another way to formulate this would be to identify any alien technium as a xenotechnium, but I haven’t thought about this systematically yet, so I will leave any attempted exposition for a later time.
. . . . .