Sunday


Wilhelm Dilthey (19 November 1833 to 1 October 1911)

Every so often a term from philosophy — and by “philosophy” in this context I mean the kind of philosophy that is generally not read by the wider public, and which is therefore sometimes called “technical” or “professional” — finds its way into the wild, as it were, and begins to appear in non-philosophical contexts. This happened with Thomas Kuhn’s use of “paradigm shift” and with Derrida’s use of “deconstruction.” To a lesser extent, it is also true of “phenomenology” since Husserl’s use of the term. Another philosophical term that has come into general currency is “lived experience.” (There are also variations on the theme of “lived experience,” such as “felt experience,” which I found in Barry Mazur’s 2008 paper “Mathematical Platonism and its Opposites,” in which the author refers to, “…the passionate felt experience that makes it so wonderful to think mathematics.”) Recently I saw “lived experience” used in the title of a non-philosophical book, Nubia in the New Kingdom: Lived Experience, Pharaonic Control and Indigenous Traditions, edited by N. Spencer, A. Stevens, and M. Binder. A description of the book on the publisher’s website says that the approach of the volume provides, “…a more nuanced understanding of what it was like to live in colonial Kush during the later second millennium BC.”

This, I think, is the takeaway of “lived experience” for non-philosophers — that of “what it was like to live” in some particular social or historical context. One could easily imagine “what it was really like to live” becoming a slogan on a par with Leopold von Ranke’s, “to show what actually happened” (“wie es eigentlich gewesen”). Both could be taken as historiographical principles, and indeed the two might be taken to imply each other: arguably, one can’t know what it was like to live without knowing what actually happened, and, again arguably, one can’t show what actually happened without knowing what it was like to live. Actually, I think that the two are distinguishable, but here I only want to make the point that the two ideas are closely related.

I believe, though I cannot say for sure, that the philosophical use of “lived experience” originates in the work of Wilhelm Dilthey. If Dilthey did not originate the philosophical use of “lived experience,” he did write extensively about it earlier than most other philosophers who took up the term. (If anyone knows otherwise, please set me straight.) Since I am planning on making use of the idea of lived experience, I have been reading Dilthey recently, especially his Selected Works, Volume III: The Formation of the Historical World in the Human Sciences (which corresponds to the German language Gesammelte Schriften, Volume 7: Der Aufbau der geschichtlichen Welt in den Geisteswissenschaften), which has a lot of material on lived experience.

Dilthey is not an easy author to read. I have heard it said many times that Husserl is a difficult author, but I find translations of Husserl to be much easier going than translations of Dilthey. Dilthey and Husserl knew each other, read each other’s works, and they corresponded. Dilthey’s exposition of lived experience contains numerous references to Husserl’s Logical Investigations (Husserl’s systematic works on phenomenology mostly appeared after Dilthey passed away, so it was only the Logical Investigations to which Dilthey had access). Most interestingly to me, Husserl wrote a semi-polemical article, “Philosophy as Rigorous Science,” in which Husserl discussed Dilthey in the section “Historicism and Weltanschauung Philosophy.” Dilthey did not agree with the characterization of his work by Husserl. It was Husserl’s article that was the occasion of their correspondence (translated in Husserl: Shorter Works), and it is a lesson in the unity of German philosophy to read this exchange of letters. In their correspondence, Dilthey and Husserl were easily able to find common ground in a language rooted in 19th century German idealist philosophy.

While the apparent ground of their common outlook was expressed in the peculiar idiom of German philosophy, both were also reacting against that tradition. Both Dilthey and Husserl were centrally concerned with the experience of time. Husserl’s manuscripts on time consciousness run to hundreds of pages (cf. On the Phenomenology of the Consciousness of Internal Time (1893–1917)). Of Husserl’s efforts Dilthey wrote, “A true Plato, who first of all fixes in concept the things that become and flow, then puts beside the concept of the fixed a concept of flowing.” (cited by Quentin Lauer in The Triumph of Subjectivity from Dilthey, Gesammelte Schriften, Vol. V, p. cxii) Dilthey’s own exposition of time consciousness can be found in Vol. III of the selected works in English, Drafts for a Critique of Historical Reason, section 2, “Reflexive Awareness, Reality: Time” (pp. 214-218), where it is integral to his exposition of lived experience.

Of time and lived experience Dilthey wrote:

“Temporality is contained in life as its first categorical determination and the one that is fundamental for all others… Thus the lived experience of time determines the content of our lives in all directions.”

Wilhelm Dilthey, Selected Works, Volume III: The Formation of the Historical World in the Human Sciences, Princeton and Oxford: Princeton University Press, 2002, pp. 214-215.

I suspect that Husserl would have agreed with this, as for Husserl time consciousness was the foundation of the constituting consciousness. Dilthey also writes:

“That which forms a unity of presence in the flow of time because it has a unitary meaning is the smallest unit definable as a lived experience.” And, “A lived experience is a temporal sequence in which every state is a flux before it can become a distinct object.” And, “The course of life consists of parts, of lived experiences that are inwardly connected with each other. Each lived experience relates to a self of which it is a part.”

Op. cit., pp. 216-217

Here I have plucked out a few representative quotes by Dilthey on lived experience; this may give a flavor of his exposition, but I certainly don’t maintain that this is a fair way of coming to grips with Dilthey’s conception of lived experience. The only way to do that is by the lived experience of reading the text through and deriving from it a unitary meaning. I will not attempt to do that in the present context, as I only wanted here to give the reader an impression of Dilthey’s writing on lived experience.

Dilthey, as I noted, is not an easy author. Both Dilthey’s and Husserl’s discussions of time consciousness and lived experience are opaque at best. I keep at Dilthey despite the difficulty because I want to understand his exposition of lived experience. However, as I keep at it I cannot help but think that part of the difficulty of the discussion is the absence of a scientific understanding of consciousness. As I have mentioned many times, we simply have no idea, at the present stage of the development of our scientific knowledge, what consciousness is. Trying to give a detailed description of time consciousness and lived experience without any scientific foundation is almost crippling. I believe that the effort is worthwhile, but it is as instructive in how it fails as in how it occasionally succeeds.

In this frame of mind I recalled a passage from Foucault’s The Birth of the Clinic:

“Towards the middle of the eighteenth century, Pomme treated and cured a hysteric by making her take ‘baths, ten or twelve hours a day, for ten whole months.’ At the end of this treatment for the desiccation of the nervous system and the heat that sustained it, Pomme saw ‘membranous tissues like pieces of damp parchment … peel away with some slight discomfort, and these were passed daily with the urine; the right ureter also peeled away and came out whole in the same way.’ The same thing occurred with the intestines, which at another stage, ‘peeled off their internal tunics, which we saw emerge from the rectum. The oesophagus, the arterial trachea, and the tongue also peeled in due course; and the patient had rejected different pieces either by vomiting or by expectoration’.”

“…Pomme, lacking any perceptual base, speaks to us in the language of fantasy. But by what fundamental experience can we establish such an obvious difference below the level of our certainties, in that region from which they emerge? How can we be sure that an eighteenth-century doctor did not see what he saw, but that it needed several decades before the fantastic figures were dissipated to reveal, in the space they vacated, the shapes of things as they really are?”

Michel Foucault, The Birth of the Clinic, New York: Vintage, 1975, pp. ix-x; Foucault cites Pomme, Traité des affections vaporeuses des deux sexes (4th edn., Lyons, 1769, vol. I, pp. 60-5)

Because of the theory-ladenness of perception, when the theory is absent or unclear, perception has little to go on and it is confused and unclear. We cannot describe with precision unless we can conceptualize with precision. The eventual development of an adequate science of consciousness — which may ultimately involve a revision to the nature of science itself — will issue in concepts of sufficient precision that they can be the basis of precise observations, and precise observations can further contribute to the precisification of the concepts — a virtuous circle of expanding knowledge.

I would not insist upon the theory-ladenness of perception to the point of excluding the possibility of any knowledge without an adequate theory to guide perception. In this spirit I have already acknowledged that there is some value in Dilthey’s attempt to clarify the idea of lived experience. If theory and observation are mutually implicated, and eventually can accelerate in a virtuous circle of mutual clarification, then the first, tentative ideas and observations on lived experience can be understood analogously to the stone tools used by our earliest ancestors. These stone tools are rough and rudimentary by present standards of precision machine tools, but we had to start somewhere. So too with our conceptual tools: we have to start somewhere.

Dilthey’s approach to lived experience is one such starting point, and from this point of departure we can revise, amend, and extend Dilthey’s conception until it becomes a more useful tool for us. One way to do this is by way of what has been called the knowledge argument, also known as the Mary’s room thought experiment. I have earlier discussed the knowledge argument in Colonia del Sacramento and the Knowledge Argument and Computational Omniscience.

Here is the locus classicus of the thought experiment:

“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red,’ ‘blue,’ and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue.’ […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?”

Frank Jackson, “Epiphenomenal Qualia” (1982)

The historical parallel of the Mary’s room argument would be to ask, if Mary had exhaustively studied life in colonial Kush during the later second millennium BC, and then Mary was enabled to actually go back and live in colonial Kush during the later second millennium BC, would Mary learn anything by the latter method that she did not already know from the first method? If we answer that Mary learns nothing from living in Kush that she did not already know by exhaustively studying Kush, then we can assert the equivalence of what it was like to live and what actually happened. If, on the other hand, we answer that Mary does indeed learn something from living in Kush that she did not learn by exhaustively studying Kush, then we ought to deny the equivalence of what it was like to live and what actually happened.

While this exact thought experiment cannot be performed, there is a more mundane parallel that anyone can test: exhaustively educate yourself about somewhere you have never visited, and then go to see the place for yourself. Do you learn anything when you visit that you did not know from your prior exhaustive study? In other words, does the lived experience of the place add to the knowledge you had gained without lived experience?

While Dilthey does not use the term “ineffable,” many of his formulations of lived experience point to its ineffability and our inability to capture lived experience in any conceptual framework (as is implied by his criticism of Husserl, quoted above). If what one learns from what it was like to live is ineffable, then we could assert that, even when our conceptual framework is as adequate as we can make it, it is still inadequate and leaves out something of what it was like to live, i.e., it leaves out the component of lived experience.

But, as I said, Dilthey himself does not use the term “ineffable” in this context, and he may have avoided it for the best scientific reasons. Our inability to formulate the distinctiveness of lived experience in contradistinction to that which can be learned apart from lived experience may be simply due to the inadequacy of our conceptual framework. When we have improved our conceptual framework, we may possess the concepts necessary to render that which now appears ineffable as something that can be accounted for in our conceptual framework. We must admit in all honesty, however, that we aren’t there yet in relation to lived experience. This is not a reason to avoid the concept of lived experience, but, on the contrary, it is a reason to work all the more diligently at clarifying the concept of lived experience. Employing simple distinctions like that between what it was like to live and what actually happened is one way to test the boundaries of the concept and so to better understand its relationships to other related concepts.

. . . . .

Pierre Pomme (1735 to 1812)

. . . . .

Grand Strategy Annex

. . . . .

Thursday


From “Big Bang Discovery Opens Doors to the ‘Multiverse’”

The observable/observed distinction

We can make a distinction between observable universes that are, in fact, observed, and observable universes that, while observable in principle, are not actually observed in fact. Thus, the set of all observable universes may be larger than the set of all universes actually observed, just as the set of all habitable planets is almost certainly larger than the set of all planets that are actually inhabited.

There are many parallels between the observable/observed and habitable/inhabited distinctions because each is, at bottom, a modal distinction between potentiality and actuality. For a universe to be observable is for it to be potentially an object of perception, and for a universe to be observed is for it to be actually an object of perception. If “observation” is taken to include not only perception (which might be unknowing and unreflective, i.e., not self-aware) but also conception, we can revise these formulations so that some universe is potentially or actually both an object of perception and an object of thought.
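The shared logical shape of the two distinctions can be made concrete in a few lines of Python; the universe and planet labels below are, of course, purely hypothetical placeholders, and the sketch assumes nothing beyond the subset relation between actuality and potentiality:

```python
# A minimal sketch of the modal distinction: what is actually observed
# (or inhabited) forms a subset of what is observable (or habitable).
# The labels U1..U4 and P1..P4 are hypothetical placeholders.

observable_universes = {"U1", "U2", "U3", "U4"}  # observable in principle
observed_universes = {"U1"}                      # actually observed

habitable_planets = {"P1", "P2", "P3", "P4"}     # right conditions for life
inhabited_planets = {"P1"}                       # actually host life

# Actuality implies potentiality, but not conversely:
assert observed_universes <= observable_universes
assert inhabited_planets <= habitable_planets

# The potential may outrun the actual: some universes are observable but
# unobserved, just as some planets are habitable but uninhabited.
assert observable_universes - observed_universes == {"U2", "U3", "U4"}
assert habitable_planets - inhabited_planets == {"P2", "P3", "P4"}
```

The subset relation captures the asymmetry at issue: every observed universe is observable, but nothing guarantees the converse.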

But the observable/observed and habitable/inhabited distinctions are even more closely related than both being particular cases of potentiality vs. actuality; an observable universe is a habitable universe, and an observed universe is an inhabited universe. The universe (or a universe), then, is a generalization of a planet, so that in studying the habitable/inhabited distinction where it concerns planets, we are studying the question of observable/observed universes in miniature.

In the case of habitability (i.e., the habitable/inhabited distinction), we know the confusion that this routinely causes. With the increasing number of announcements of exoplanet discoveries, there have been an increasing number of confused accounts which imply that a planet of the right size found within a habitable zone is not just potentially habitable (arguably this formulation is redundant, and it should be sufficient to say “habitable”), but that it is, or must be, inhabited. Exoplanet scientists and astrobiologists are not guilty of this conflation, but accounts of their work in the legacy media make this conflation with regularity.

Perhaps because we see our near neighbors Venus and Mars, both smallish rocky planets like Earth, and both more-or-less in the habitable zone, we can easily understand that a planet that has the right conditions for life does not necessarily host life: these planets are habitable but not inhabited. We can bring the habitable/inhabited distinction home and understand it in human terms, but the observable/observed distinction, especially when applied to the universe entire, is likely to elude us. Moreover, the idea of an empty universe, that is to say, an entire universe without intelligent observers (observers who can both perceive the world and form a conception of what they perceive), is likely to strike many as a bit bizarre, if not absurd.

The Anthropic Cosmological Principle

Sometimes the idea that an empty universe is absurd is made explicit, or nearly so. John Wheeler is credited with saying, “A universe without an observer is not a universe at all.” In fact, Wheeler didn’t write these exact words, but the idea is pervasively present in his exposition of the anthropic cosmological principle. To give a sense of this, here is a comment on the weak anthropic principle (WAP) from Barrow and Tipler’s classic work (with a foreword provided by John Wheeler):

“According to WAP, it is possible to contemplate the existence of many possible universes, each possessing different defining parameters and properties. Observers like ourselves obviously can exist only in that subset containing universes consistent with the evolution of carbon-based life.”

The Anthropic Cosmological Principle, John D. Barrow and Frank J. Tipler, Oxford: Oxford University Press, 1986, p. 19

Three interpretations are given of the strong anthropic principle:

(A) There exists one possible Universe ‘designed’ with the goal of generating and sustaining ‘observers’.

(B) Observers are necessary to bring the Universe into being.

(C) An ensemble of other different universes is necessary for the existence of our Universe.

Ibid., p. 22

As these ideas are given an extensive exposition in the text, I will not attempt to flesh them out, but I quote them here only for purposes of exhibition. It would be a considerably involved enterprise to give an exposition of the various formulations of the weak, strong, participatory, and final anthropic principles propounded by Barrow, Tipler, and Wheeler, and then to present them in comparison and contrast with what I have written here about empty universes, but I am not going to attempt that here. Some of these ideas are consistent with a range of universes, some of them empty, and some are not.

Empty, unobserved universes and scientific realism

There can be two senses of “observable universe” only if one is willing to countenance the possibility of empty, unobserved universes, which suggests a strongly realist position; this interpretation takes to the limit of extrapolation the idea that something exists whether or not we see it (or anyone sees it). If we assume that the back side of the head of the person we are talking to continues to exist even when we do not see it (and if there is no one else looking at it), then we are assuming some degree of realism.

In the case of the person, it could be argued that the person in question is always viscerally conscious of their bodily integrity, and on this basis the back side of their head continues to be perceived, and hence continues to exist without the posit of realism. However, this argument cannot be made with inanimate objects without positing panpsychism. We assume that the back sides of houses, the insides of closets, and the contents of empty rooms continue to exist even when we are not looking at them. I can see no reason this intuitive realism should not be scaled up to entire universes that exist without being observed. This is, at least, consistent with scientific realism, even if it is not entailed by scientific realism.

The Principle of Plenitude

The distinction I am making here between observable universes and observed universes immediately puts us in mind of the principle of plenitude (on which I previously wrote in Cosmology is the Principle of Plenitude Teaching by Example and Parsimony and Plenitude in Cosmology). The most obvious interpretation of the principle of plenitude in this context is that a universe that was habitable would eventually realize the potential of this habitability and would become inhabited. Perhaps this is why some advocates of the strong anthropic principle say that a universe that does not produce observers is a “failed” universe (not the kind of claim I would ever make, but one can understand something of this by saying that such a universe has failed to realize its potential). If we acknowledge the possibility of “failed” universes in this sense, then we would have empty, uninhabited universes, only we would attach a (negative) valuation to them (and presumably we would attach a positive valuation to successful universes that realize their potential and produce observers).

There is, however, another way to interpret the principle of plenitude in this context: to argue that the principle of plenitude entails the realization of every possible kind of universe, and that the existence of an empty universe without observers is a potential that will eventually be realized, if it has not already been realized. Moreover, for every kind of universe that is observed by an observer evolved within it, there is another possible kind of universe in which the potential for such an observer goes unrealized. Thus if there are a plurality of observed universes, then this interpretation of the principle of plenitude suggests that there will be a plurality of observable but unobserved universes.

The Principle of Parsimony

The principle of plenitude as applied to worlds or to universes would imply densely inhabited worlds and intensively observed universes — what Frank Drake and Dava Sobel called, “an infinitely populated universe.” The principle of parsimony (often invoked as a counter to the principle of plenitude) as applied to worlds or the universe would limit us almost in a constructivistic sense to the world we inhabit — there is at least one observable universe that is, in fact, observed — though before or after the existence of this one known instance of an observer the universe would be empty and unobserved.

The intersection of the principle of plenitude and the principle of parsimony would yield at least one such-and-such (plenitude) and at most one such-and-such (parsimony), that is to say, this intersection would yield uniqueness, one and only one such-and-such — but whether this uniqueness should apply to each and every universe, or whether the universe itself ought to be considered unique, is another question.
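The logical point can be stated compactly: plenitude contributes an existence claim, parsimony a uniqueness claim, and their conjunction is the standard definition of “exactly one” (writing P(x) for “x is an observed universe”):

```latex
\underbrace{\exists x\, P(x)}_{\text{plenitude: at least one}}
\;\wedge\;
\underbrace{\forall x\, \forall y\, \bigl( P(x) \wedge P(y) \rightarrow x = y \bigr)}_{\text{parsimony: at most one}}
\;\Longleftrightarrow\;
\underbrace{\exists!\, x\, P(x)}_{\text{exactly one}}
```

Whether the quantifiers range over universes within an ensemble, or over the ensemble of universes itself, is precisely the open question noted above.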

A final reflection

It seems to me that the idea of an uninhabited planet, one that is unobserved because it is uninhabited, has become a familiar and even a conventional idea of contemporary cosmology and astrobiology — it is, I think, widely assumed that we will eventually find other life in the universe, sprung from other origin-of-life events, but that intelligent life, and thus an observer that knows itself to be observing, is likely to be quite rare. This consensus view — if it is a consensus — encounters problems when it is extrapolated from habitable/inhabited planets to habitable/inhabited universes. Why this idea appears to transcend science (in the narrow sense) when extrapolated to the whole of the universe I am not yet prepared to say, but I will continue to think about this.

I began this post with the intention to make a simple and straightforward distinction between observable universes and observed universes (my first draft was only three paragraphs), but as I worked on this I got myself entangled in a number of difficult questions that ended up entailing all-too-brief discussions of difficult ideas like the principle of plenitude and the principle of parsimony. This is admittedly unsatisfying, and I know that I have not done these ideas justice, but at some point I have to bring this to a close.

. . . . .

Sunday


Looking down on Earth from above may not only make us reevaluate our relationship to the planet, but may also help us to understand the planet better.

Science is a way to better understand the world, but science itself is not always easy to understand, and we often find that, after clarifying some problem through science, we must then clarify the science so that the science makes sense to us. Some call this science communication; I call it the pursuit of intuitive tractability.

While it is not part of science proper to seek intuitively tractable formulations, it is part of human nature to seek intuitively tractable formulations, as we are more satisfied with science formulated in intuitively tractable forms than with science that is not intuitively tractable. For example, there is, as yet, no intuitively tractable formulation of quantum theory, and this may be why Einstein famously wrote in a letter to Max Born that, “Quantum Mechanics is very impressive. But an inner voice tells me that it is not yet the real thing.”

When the concept of zero was introduced into mathematics, it was thought to be an advanced and difficult idea, but we now teach a number system starting with zero to children in primary school. In a similar way, the Hindu-Arabic system of numbers has displaced almost every other system of numbers because it is what I would paradoxically call an intuitive formalism, i.e., it is a formalization of the number concept that is both adequate to mathematics and closely follows our intuitive conception of number. Mathematics is easier with Hindu-Arabic numerals than other numbering systems because this numbering system is intuitively tractable. There are other formalisms for number that are equally valid and equally correct, but not as intuitively tractable.

The pursuit of intuitive tractability has also been evident in geometry, especially in the axiomatic exposition of geometry that begins with postulates accepted ab initio as self-evident, and which has been the model of rigorous mathematics ever since Euclid. Euclid’s fifth postulate, the famous parallel postulate, is difficult to understand and was a theoretical problem for geometry until its independence from the other postulates was proved; but whatever the postulate’s logical status, Euclid’s opaque exposition did not help. Here is Euclid’s parallel postulate from the Elements:

“If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.”

Almost two thousand years later, in 1795, John Playfair formulated what we now call “Playfair’s axiom,” which tells us everything that Euclid’s postulate sought to communicate, but in a far more intuitively tractable form: “In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.” Once this more intuitively tractable formulation of the parallel postulate was available, Euclid’s formulation was largely abandoned. There is, then, a process of cognitive selection, whereby the most intuitively tractable formulations are preserved and the less intuitively tractable formulations are abandoned.

Those concepts that are the most intuitively tractable are those concepts that are familiar to us all and which are seamlessly integrated into ordinary thought and language. I have called such concepts “folk concepts.” Folk concepts that have persisted from their origins in our earliest evolutionary psychology up into the present have been subjected to the cognitive equivalent of natural selection, so that we can reasonably speak of folk concepts as having been refined and elaborated by the experience of many generations.

In a series of posts — Folk Astrobiology, Folk Concepts of Scientific Civilization, and Folk Concepts and Scientific Progress — I have considered the nature of “folk” concepts as they have been frequently invoked, and it is natural to ask, in the light of such an inquiry, whether there is a “folk Weltanschauung” that is constituted by a cluster of folk concepts that naturally hang together, and which inform the pre-scientific (or non-scientific) way of thinking about the world.

Arguably, the idea of a folk Weltanschauung is already familiar by a number of different terms that philosophers have employed to identify the concept (or something like the concept) — naïve realism or common sense realism, for example. What Husserl called “natürliche Einstellung” and which Boyce Gibson translated as “natural standpoint” and Fred Kersten translated as “natural attitude” could be said to approximate a folk Weltanschauung. Here is how Husserl describes the natürliche Einstellung:

“I am conscious of a world endlessly spread out in space, endlessly becoming and having endlessly become in time. I am conscious of it: that signifies, above all, that intuitively I find it immediately, that I experience it. By my seeing, touching, hearing, and so forth, and in the different modes of sensuous perception, corporeal physical things with some spatial distribution or other are simply there for me, ‘on hand’ in the literal or the figurative sense, whether or not I am particularly heedful of them and busied with them in my considering, thinking, feeling, or willing.”

Edmund Husserl, Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: First Book: General Introduction to a Pure Phenomenology, translated by Fred Kersten, section 27

Husserl characterizes the natural attitude as a “thesis” — a thesis consisting of a series of posits of the unproblematic existence of ordinary objects — that can be suspended, set aside, as it were, by the phenomenological procedure of “bracketing.” These posits could be identified with folk concepts, making the thesis of the natural standpoint into a folk Weltanschauung, but I think this interpretation is a bit forced and not exactly what Husserl had in mind.

Perhaps closer to what I am getting at than the Husserlian natural attitude is what Wilfrid Sellars has called the manifest image of man-in-the-world, or simply the manifest image. Sellars’ thought is no easier to get a handle on than Husserl’s thought, so that one never quite knows if one has gotten it right, and one can easily imagine being lectured by a specialist on the inadequacies of one’s interpretation. Nevertheless, I think that Sellars’ manifest image is closer to what I am trying to get at than Husserl’s natürliche Einstellung. Closer, but still not the same.

Sellars develops the idea of the manifest image in contrast to the scientific image, a distinction given its fullest exposition in his essay Philosophy and the Scientific Image of Man. After initially characterizing the philosophical quest such that, “[i]t is… the ‘eye on the whole’ which distinguishes the philosophical enterprise,” and distinguishing several different senses in which philosophy could be said to be a synoptic effort at understanding the world as a whole, Sellars introduces terms for contrasting two distinct ways of seeing the world whole:

“…the philosopher is confronted not by one complex many dimensional picture, the unity of which, such as it is, he must come to appreciate; but by two pictures of essentially the same order of complexity, each of which purports to be a complete picture of man-in-the-world, and which, after separate scrutiny, he must fuse into one vision. Let me refer to these two perspectives, respectively, as the manifest and the scientific images of man-in-the-world.”

Wilfrid Sellars, Philosophy and the Scientific Image of Man, section 1

Sellars’ distinction between the manifest image and the scientific image has been quite influential. A special issue of the journal Humana Mente, Between Two Images: The Manifest and Scientific Conceptions of the Human Being, 50 Years On, focused on the two images. Bas C. van Fraassen in particular has written a lot about Sellars, devoting an entire book to one of the two images, The Scientific Image, and has also written several relevant papers, such as “On the Radical Incompleteness of the Manifest Image” (Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1976, Volume Two: Symposia and Invited Papers, pp. 335–343). All of this material is well worth reading.

Sellars is at pains to point out that his distinction between manifest image and scientific image is not intended to be a distinction between pre-scientific and scientific worldviews (“…what I mean by the manifest image is a refinement or sophistication of what might be called the ‘original’ image…”), though it is clear from this exposition that the manifest image, however refined and up-to-date, has its origins in a pre-scientific conception of the world. (“It is, first, the framework in terms of which man came to be aware of himself as man-in-the-world.”) The essence of this distinction between the manifest image and the scientific image is that the manifest image is correlational while the scientific image is postulational. What this means is that the manifest image “explains” the world (in so far as it could be said to explain the world at all) by correlations among observables, while the scientific image explains the world by positing unobservables that connect observables “under the surface” of things, as it were (involving, “…the postulation of imperceptible entities”). Sellars also maintains that the manifest image cannot postulate in this way, and therefore cannot be improved or refined by science, although it can improve on itself by its own correlational methods.

I do not yet understand Sellars well enough to say why he insists that the manifest image cannot incorporate insights from the scientific image, and this is a key point of divergence between Sellars’ manifest image and what I above called a folk Weltanschauung. If a folk Weltanschauung consists of a cluster of tightly-coupled folk concepts (and perhaps a wide penumbra of associated but loosely-coupled folk concepts), then the generation of refined scientific concepts can slowly, one-by-one, replace folk concepts, so that the folk Weltanschauung gradually evolves into a more scientific Weltanschauung, even if it is not entirely transformed under the influence of scientific concepts. Science, too, consists of a cluster of tightly-coupled concepts, and these two distinct clusters of concepts — the folk and the scientific — might well resist mixing for a time, but the human mind cannot keep such matters rigorously separate, and it is inevitable that each will bleed over into the other. Sometimes this “bleeding over” is intentional, as when science reaches for metaphors or non-scientific language as a way to make its findings understood to a wider audience. This is part of the pursuit of intuitively tractable formulations, but it can also go very wrong, as when scientists adopt theological language in an attempt at a popular exposition that will not be rejected out-of-hand by the Great Unwashed.

Despite my differences with Sellars, I am going to adopt his terminology of the manifest image and the scientific image here, and I will hope that I don’t make too much of a mess of it. I will have more to say on this use of Sellars’ concepts below (especially in relation to the postulational character of the scientific image). In the meantime, I want to use Sellars’ concepts in an exposition of intuitive tractability. Sellars uses the metaphor of “stereoscopic vision” as the proper way to understand how we must bring together the manifest image and the scientific image as a single way of understanding the world (“…the most appropriate analogy is stereoscopic vision, where two differing perspectives on a landscape are fused into one coherent experience”). I think, on the contrary, that intuitively tractable formulations of scientific concepts can make the manifest image and the scientific image coincide, so that they are one and the same, and not two distinct images fused together. A slightly weaker formulation of this is to assert that intuitively tractable formulations allow us to integrate the manifest image and the scientific image.

Now I want to illustrate this by reference to the overview effect, that is to say, the cognitive effect of seeing our planet whole — preferably from orbit, but, if not from orbit, in photographs and film that make the point as unmistakably as though one were there, in orbit, seeing it with one’s own eyes.

Before the overview effect became available, we looked at our planet with the same eyes we have now, but even after it is proved to us that the planet is (roughly) a sphere, hanging suspended in space, it is difficult to believe this. All manner of scientific proofs of the world as a spherical planet can be adduced, but the science lacks intuitive tractability and we have a difficult time bringing together our scientific concepts and our folk concepts of the world — or, if you will, we have difficulty reconciling the manifest image and the scientific image. The two remain distinct. Until we achieve the overview effect, there is an apparent contradiction between what we experience of the world and our scientific knowledge of the world. Our senses tell us that the world is flat and solid and unmoving; scientific knowledge tells us that the world is round and moving and hanging in space.

Once we attain the overview effect, this changes, and the contradiction is revealed as merely apparent. The overview effect shows how the manifest image and the scientific image coincide. The things we know about ordinary objects, which shape the manifest image, now apply to Earth, which is seen as an object rather than as an environment surrounding us, with a horizon that we can never reach and which therefore feels endless to us. Seen from orbit, this explains itself intuitively, and an explicit explanation now appears superfluous (as is ideally the case with an axiom — it is seen to be true as soon as it is understood). The overview effect makes the scientific knowledge of our planet as a planet intuitively tractable, transforming scientific truths into visceral truths. One might say that the overview effect is the lived experience of the scientific truth of our homeworld. In this particular case, we have replaced a folk concept with a scientific concept, and the scientific concept is correct even as intuition is satisfied.

The use of the overview effect to illustrate the manifest and scientific images, and their possible coincidence in a single experience, is especially interesting in light of Sellars’ insistence that the scientific image is distinctive because it is postulational, and more particularly that it postulates unobservables as a way to explain observables. When, in a scientific context, someone speaks of unobservables or “imperceptible entities,” the assumption is that we are talking about entities that are too small to see with the naked eye. The germ theory of disease and the atomic theory of matter both exemplify this idea of unobservables being unobservable because they are smaller than the resolution of unaided human vision. We can only observe these unobservables with instruments, and even then the experience is mediated by complex instruments and an even more complex conceptual framework, so that no one ever speaks of the “lived experience” of particle physics or microbiology.

In contrast to this, the Earth is unobservable to the human eye not because it is too small, but because it is too large. When shown scientific demonstrations that the world is round, we must posit an unobservable planet, and then identify this unobservable entity with the actual ground under our feet. This is difficult to do, intuitively speaking. We see the world at all times, but we do not see it as a planet. We do not see enough of the world at any one moment to see it as a planet. Enter the overview effect. Seeing the Earth whole from space reveals the entity that is planet Earth, and if one has the good fortune to lift off from Earth, experiencing the process of departing from its surface and then seeing that same surface from space, a previously unobservable postulate becomes a concretely experienced entity.

We are in the same position now vis-à-vis our place within the Milky Way galaxy, and our place within the larger universe, as we were once in relation to the spherical Earth. Our accumulated scientific knowledge tells us where we are in the universe, and where we are in the Milky Way. We can even see a portion of the Milky Way when we look up into the night sky, but we cannot stand back and see the whole from a distance, taking in the Milky Way and pointing out the position of our solar system within one of the spiral arms of our galaxy. We know it, but we have not yet experienced it viscerally. We have to posit the Milky Way galaxy as a whole, the Virgo supercluster, and the filaments of galaxies that stretch through the cosmos, because they are too large for us to observe at present. They are partially observed, in the way we might say that an atom is partially observed when we look at a piece of ordinary material composed of atoms.

Our postulational scientific image of the universe in which we live is redeemed for intuition by experiences that put us in a position to view these entities with our own eyes, and so to see them in an intuitively tractable manner. Perhaps one of the reasons that quantum theory remains intuitively intractable is that the unobservables that it posits are so small that we have no hope of ever seeing them, even with an electron microscope.

Ultimately, the intuitively tractable formulation of formerly difficult, if not opaque, scientific ideas is a function of the conceptual framework that we employ, and this is at bottom a philosophical concern. Sellars suggests that the manifest and scientific conceptual frameworks might be harmonized in stereoscopic vision, but he doesn’t hold out any hope that the manifest image can be integrated with the scientific image. I think that the example of the overview effect demonstrates that there are at least some cases in which the manifest image and the scientific image can be shown to coincide, and that these two ways of grasping the world are therefore not entirely alien to each other. Cosmology may be the point of contact at which the two images coincide and through which the two images can communicate.

The pursuit of intuitive tractability is, I submit, a central concern of scientific civilization. If there is ever to be a fully scientific civilization, in which scientific ways of knowing and scientific approaches to problems and their solutions are the pervasively held view, this scientific civilization will come about because we have been successful in our pursuit of intuitive tractability, and are able to make advanced scientific concepts as familiar as the idea of zero is now familiar to us. Since the question of a conceptual framework in which rigorous science and intuitively tractable concepts can be brought together is not a scientific question, but a philosophical question, the contemporary contempt for philosophy in the special sciences is inimical to the effective pursuit of intuitive tractability. The fate of scientific civilization lies with philosophy.

. . . . .

Overview Effects

The Epistemic Overview Effect

The Overview Effect as Perspective Taking

Hegel and the Overview Effect

The Overview Effect in Formal Thought

Brief Addendum on the Overview Effect in Formal Thought

A Further Addendum on the Overview Effect in Formal Thought, in the Way of Providing a Measure of Disambiguation in Regard to the Role of Temporality

Our Knowledge of the Internal World

Personal Experience and Empirical Knowledge

The Overview Effect over the longue durée

Cognitive Astrobiology and the Overview Effect

The Scientific Imperative of Human Spaceflight

Planetary Endemism and the Overview Effect

The Overview Effect and Intuitive Tractability

Homeworld Effects

The Homeworld Effect and the Hunter-Gatherer Weltanschauung

The Martian Standpoint

Addendum on the Martian Standpoint

Hunter-Gatherers in Outer Space

What will it be like to be a Martian?

. . . . .

Grand Strategy Annex

. . . . .

Saturday


The periodic table, color-coded by the source of the element in the solar system. By Jennifer Johnson.

Many years ago, reading a source I cannot now recall (and for which I searched unsuccessfully when I started writing this post), I came upon a passage that has stayed with me. The author was arguing that no sciences were consistent except those that had been reduced to mere catalogs of facts, like geography and anatomy. I can’t recall the larger context in which this argument appeared, but I remembered the observation that sciences might only become fully consistent when they have matured into exhaustive but static and uninteresting catalogs of facts, the implication being that the field of research itself had been utterly exhausted. This idea presents in miniature a developmental conception of the sciences, but I think that it is a developmental conception that is incomplete.

Thinking of this idea of an exhausted field of research, I am reminded of a discussion in Conversations on Mind, Matter, and Mathematics by Jean-Pierre Changeux and Alain Connes, in which mathematician Alain Connes distinguished between fully explored and as yet unexplored parts of mathematics:

“…the list of finite fields is relatively easy to grasp, and it’s a simple matter to prove that the list is complete. It is part of an almost completely explored mathematical reality, where few problems remain. Cultural and social circumstances clearly serve to indicate which directions need to be pursued on the fringe of current research — the conquest of the North Pole, to return again to my comparison, surely obeyed the same type of cultural and social motivations, at least for a certain time. But once exploration is finished, these cultural and social phenomena fade away, and all that’s left is a perfectly stable corpus, perfectly fitted to mathematical reality…”

Jean-Pierre Changeux and Alain Connes, Conversations on Mind, Matter, and Mathematics, Princeton: Princeton University Press, 1995, pp. 33-34
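Connes’ example can be checked mechanically, which is part of what makes this corner of mathematics “completely explored”: a finite field of order q exists just in case q is a prime power, and it is unique up to isomorphism. A minimal sketch (illustrative only, not drawn from the Changeux-Connes text):

```python
def prime_power(q):
    """Return True if q = p**n for some prime p and n >= 1."""
    if q < 2:
        return False
    p = 2
    while p * p <= q:
        if q % p == 0:
            # Strip all factors of p; q is a prime power iff nothing remains.
            while q % p == 0:
                q //= p
            return q == 1
        p += 1
    return True  # q itself is prime

# The complete list of finite field orders up to 32:
orders = [q for q in range(2, 33) if prime_power(q)]
print(orders)
```

The point of the sketch is Connes’ point: the list is short, easily generated, and provably complete, so nothing is left to explore.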

To illustrate a developmental conception of mathematics and the formal sciences would introduce additional complexities that follow from the not-yet-fully-understood relationship between the formal sciences and the empirical sciences, so I am going to focus on developmental conceptions of the empirical sciences, but I hope to return to the formal sciences in this connection.

The idea of the development of science as a two-stage process, with discovery followed by a consistent and exhaustive catalog, implies both that most sciences (and, if we decompose the individual special sciences into subdivisions, parts of most or all sciences) remain in the discovery phase, and that once the discovery phase has passed and we are in possession of an exhaustive and complete catalog of the facts discovered by a science, there is nothing more to be done in a given science. However, I can think of several historical examples in which a science seemed to be converging on a complete catalog, but this development was disrupted (one might say) by conceptual change within the field that forced the reorganization of the materials in a new way. My examples will not be perfect, and some additional scientific discovery always seems to have been involved, but I think that these examples will be at least suggestive.

Prior to the great discoveries of cosmology in the early twentieth century, after which astronomy became indissolubly connected to astrophysics, astronomy seemed to be converging slowly upon an exhaustive catalog of all stars, with the limitation on the research being simply the resolving power of the telescopes employed to view the stars. One could imagine a counterfactual world in which technological innovations in instrumentation supplied nothing more than new telescopes able to resolve more stars, and that the task of astronomy was merely to supply an exhaustive catalog of stars, listing their position in the sky, intrinsic brightness, and a few other facts about the points of light in the sky. But the cataloging of stars itself contributed to the revolution that would follow, particularly when the period-luminosity relationship in Cepheid variable stars was discovered by Henrietta Swan Leavitt (discovered in 1908 and published in 1912). The period-luminosity relationship provided a “standard candle” for astronomy, and this standard candle began the process of constructing the cosmological distance ladder, which in turn made it possible to identify Cepheid variables in the Andromeda galaxy and thus to prove that the Andromeda galaxy was two million light years away and not contained within the Milky Way.
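The logic of the standard candle can be made concrete in a few lines. The period-luminosity relation converts a Cepheid’s pulsation period into an absolute magnitude, and the distance modulus then converts the gap between absolute and apparent magnitude into a distance. The coefficients below are an assumption on my part, round numbers close to one modern calibration rather than Leavitt’s own figures:

```python
import math

def cepheid_distance_parsecs(period_days, apparent_mag, a=-2.43, b=-4.05):
    """Estimate distance from a Cepheid's period and apparent magnitude.

    a and b are assumed period-luminosity coefficients, of the rough
    form M = a*(log10 P - 1) + b; they are illustrative, not canonical.
    """
    # Period-luminosity relation: period fixes the absolute magnitude M.
    absolute_mag = a * (math.log10(period_days) - 1.0) + b
    # Distance modulus m - M = 5*log10(d) - 5, solved for d in parsecs.
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A ten-day Cepheid appearing at magnitude 20 comes out to roughly
# 6.5e5 parsecs, i.e., on the order of two million light years.
print(f"{cepheid_distance_parsecs(10, 20):,.0f} parsecs")
```

With these assumed numbers, a faint ten-day Cepheid lands at about the distance of the Andromeda galaxy, which is the shape of the argument that placed Andromeda outside the Milky Way.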

Once astronomy became scientifically coupled to astrophysics, and the resources of physics (both relativistic and quantum) could be brought to bear upon understanding stars, a whole new cosmos opened up. Stars, galaxies, and the universe entire were transformed from something static that might be exhaustively cataloged, to a dynamic and changing reality with a natural history as well as a future. Astronomy went from being something that we might call a Platonic science, or even a Linnaean science, to being an historical science, like geology (after Hutton and Lyell), biology (after Darwin and Wallace), and paleontology. This coupling of the study of the stars with the study of the matter that makes up the stars has since moved in both directions, with physics driving cosmology and cosmology driving physics. One result of this interaction between astronomy and physics is the illustration above (by Jennifer Johnson) of the periodic table of elements, which prominently exhibits the origins of the elements in cosmological processes. The periodic table once seemed, like a catalog of stars, to be something static to be memorized, and divorced from natural history. This conceptualization of matter in terms of its origins puts the periodic table in a dramatically different light.

As the cosmos was once conceived in Platonic terms as fixed and eternal, to be delineated in a Linnaean science of taxonomical classification, so too the Earth was conceived in Platonic terms as fixed and eternal, to be similarly delineated in a Linnaean science of classification. The first major disruption of this conception came with geology after Hutton and Lyell, followed by plate tectonics and geomorphology in the twentieth century. Now this process has been pushed further by the idea of mineral evolution. I have been listening through for the second time to Robert Hazen’s lectures The Origin and Evolution of Earth: From the Big Bang to the Future of Human Existence, the exposition of which closely follows the content of his book, The Story of Earth: The First 4.5 Billion Years, from Stardust to Living Planet, in which Hazen wrote:

“The ancient discipline of mineralogy, though absolutely central to everything we know about Earth and its storied past, has been curiously static and detached from the conceptual vagaries of time. For more than two hundred years, measurements of chemical composition, density, hardness, optical properties, and crystal structure have been the meat and potatoes of the mineralogist’s livelihood. Visit any natural history museum, and you’ll see what I mean: gorgeous crystal specimens arrayed in case after glass-fronted case, with labels showing name, chemical formula, crystal system, and locality. These most treasured fragments of Earth are rich in historical context, but you will likely search in vain for any clue as to their birth ages or subsequent geological transformations. The old way all but divorces minerals from their compelling life stories.”

Robert M. Hazen, The Story of Earth: The First 4.5 Billion Years, from Stardust to Living Planet, Viking Penguin, 2012, Introduction

This illustrates, from the perspective of mineralogy, much of what I said above in relation to star charts and catalogs: mineralogy was once about cataloging minerals, and this would have been a finite undertaking once all minerals had been isolated, identified, and cataloged. Now, however, we can understand mineralogy in the context of cosmological history, and this is as revolutionary for our understanding of Earth as the periodic table understood in terms of cosmological history is for our understanding of matter. It could be argued, in addition, that compiling the “particle zoo” of contemporary particle physics is also a task of cataloging the entities studied by physics, but the cataloging of particles has been attended throughout with a theory of how these particles are generated and how they fit into the larger cosmological story — what Aristotle would have called their coming to be and passing away.

The best contemporary example of a science still in its initial phases of discovery and cataloging is the relatively recent confirmation of exoplanets. On my Tumblr blog I recently posted On the Likely Existence of “Random” Planetary Systems, which tried to place our current Golden Age of Exoplanet Discovery in the context of a developing science. We find the planetary systems that we do in fact find partly as a consequence of observation selection effects, and it belongs to the later stages of the development of a science to attempt to correct for the observation selection effects built into the original methods of discovery employed. The planetary science emerging from exoplanet discoveries, however, like contemporary particle physics, is attended by theories of planet formation that take cosmological history into account. But the discovery phase, in terms of exoplanets, is still underway and still very new, and we have a lot to learn. Moreover, once we learn more about the possibilities of planets in our universe, we will hopefully also learn about the varied possibilities of planetary biospheres, and given the continual interaction between biosphere, lithosphere, atmosphere, and hydrosphere, which is a central motif of Hazen’s mineral evolution, we will be able to place planets and their biospheres into a larger cosmological context (perhaps even reconstructing biosphere evolution). But first we must discover them, and then we must catalog them.
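The role of observation selection effects can be shown with a toy model. Suppose planet sizes and orbital distances were distributed with no structure at all, but our survey could only “detect” large planets on close orbits; the detected catalog would then be systematically biased toward hot Jupiters even though the underlying population favors nothing. All the numbers below are made up for illustration:

```python
import random

random.seed(42)

# Toy "true" population: radius in Earth radii, orbital distance in AU,
# both uniform, i.e., a deliberately featureless distribution.
population = [(random.uniform(0.5, 15.0), random.uniform(0.05, 10.0))
              for _ in range(100_000)]

# A crude stand-in for an early transit/radial-velocity survey:
# only big planets close to their star clear the (invented) thresholds.
detected = [(r, a) for (r, a) in population if r > 6.0 and a < 1.0]

def mean_radius(sample):
    return sum(r for r, _ in sample) / len(sample)

print(f"true mean radius:     {mean_radius(population):.1f} Earth radii")
print(f"detected mean radius: {mean_radius(detected):.1f} Earth radii")
```

The detected sample has a markedly larger mean radius than the population it came from, which is the sense in which correcting for the method of discovery belongs to a later stage of the science.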

These observations, I think, have consequences not only for our understanding of the universe in which we find ourselves, but also for our understanding of science. Perhaps, instead of a two-stage process of discovery and taxonomy, science involves a three-stage process of discovery, taxonomy, and natural history, in which last stage the objects and facts cataloged by the special sciences earlier in their development can take their place within cosmological history. If this is the case, then big history is the master category not only of history, but also of science, as big history is the ultimate framework for all knowledge that bears the lowly stamp of its origins. This conception of the task of science, once beyond the initial stages of discovery and classification, to integrate that which was discovered and classified into the framework of big history, suggests a concrete method by which to “cash out” in a meaningful way Wilfrid Sellars’ contention that, “…the specialist must have a sense of how not only his subject matter, but also the methods and principles of his thinking about it, fit into the intellectual landscape.” (cf. Philosophy and the Scientific Image of Man) Big history is the intellectual landscape in which the sciences are located.

A developmental conception of science that recognizes stages in the development of science beyond classification, taxonomy, and an exhaustive catalog (which is, in effect, the tombstone of what was a living and growing science) has consequences for the practice of science. Discovery may well be the paradigmatic form of scientific activity, but it is not the only form of scientific activity. The painstakingly detailed and disciplined work of cataloging stars or minerals is the kind of challenge that attracts a certain kind of mind with a particular interest, and the kind of individual who is attracted to this task of systematically cataloging entities and facts is distinct from the kind of individual who might be most attracted by scientific discovery, and also distinct from the kind of individual who might be attracted to fitting the discoveries of a special science into the overall story of the universe and its natural history. There may need to be a division of labor within the sciences, and this may entail an educational difference. Dividing sciences by discipline (and, now, by university departments), which involves inter-generational conflicts among sciences and the paradigm shifts that sometimes emerge as a result of these conflicts, may ultimately make less sense than dividing sciences according to their stage of development. Perhaps universities, instead of having departments of chemistry, geology, and botany, should have departments of discovery, taxonomy, and epistemic integration.

Speaking from personal experience, I know that (long ago) when I was in school, I absolutely hated the cataloging approach to the sciences, and I was bored to tears by memorizing facts about minerals or stars. But the developmental science of evolution so intrigued me that I read extensively about evolution and anthropology outside and well beyond the school curriculum. If mineral evolution and the Earth sciences in their contemporary form had been known then, I might have had more of an interest in them.

What are the sciences developing into, or what are the sciences becoming? What is the end and aim of science? I previously touched on this question, a bit obliquely, in What is, or what ought to be, the relationship between science and society? though this line of inquiry is more like a thought experiment. It may be too early in the history of the sciences to say what they are becoming or what they will become. Perhaps an emergent complexity will arise out of knowledge itself, something that I first suggested in Scientific Historiography: Past, Present, and Future, in which I wrote in the final paragraph:

We cannot simply assume an unproblematic diachronic extrapolation of scientific knowledge — or, for that matter, historical knowledge — especially as big history places such great emphasis upon emergent complexity. The linear extrapolation of science eventually may trigger a qualitative change in knowledge. In other words, what will be the emergent form of scientific knowledge (the ninth threshold, perhaps?) and how will it shape our conception of scientific historiography as embodied in big history, not to mention the consequences for civilization itself? We may yet see a scientific historiography as different from big history as big history is different from Augustine’s City of God.

It is only a lack of imagination that would limit science to the three stages of development I have outlined above. There may be developments in science beyond those we can currently understand. Perhaps the qualitative emergent from the quantitative expansion of scientific knowledge will be a change in science itself — possibly a fourth stage in the development of science — that will open up to scientific knowledge aspects of experience and regions of nature currently inaccessible to science.

. . . . .

Grand Strategy Annex

. . . . .

Monday


Janus, the Roman god of beginnings and portals, had two faces, one looking into the past and another looking into the future.

In my recent Manifesto for the Study of Civilization I employed the phrase history in an extended sense. Here is a bit more context:

“One form that the transcendence of an exclusively historical study of civilization can take is that of extrapolating historical modes of thought so that these modes of thought apply to the future as well as to the past (and this could be called history in an extended sense).”

In several posts I have developed what I call concepts in an extended sense, as in Geocentrism in an Extended Sense and “biocentrism in an extended sense” in Addendum on the Technocentric Thesis and “ecology in an extended sense” in Intelligent Invasive Species.

In Developmental Temporality I wrote:

“With the advent of civilization in the most extended sense of that term, comprising organized settled agricultural societies and their urban centers, planning for the future becomes systematic.”

And in Reduction, Emergence, Supervenience I wrote:

“Philosophy today, then, is centered on the extended conceptions of ‘experience’ and ‘observation’ that science has opened up to us, and these extended senses of experience and observation go considerably beyond ordinary experience, and the prima facie intellectual intuitions available to beings like ourselves, whose minds evolved in a context in which perceptions mattered enormously while the constituents and overall structure of the cosmos mattered not at all.”

In these attempts to extrapolate, expand, and extend concepts beyond their ordinary usage — the result of which might also be called overview concepts — each traditional concept must be treated individually, as there is a limit that is demarcated by the intrinsic meaning of the concept, and these limits are different in each case. With history, the extrapolation of the concept is obvious: history has taken the past as its remit, but history in an extended sense would apply to the totality of time. This is already being done in Big History.

When I attended the second IBHA conference in 2014 I was witness to a memorable exchange that I described in 2014 IBHA Conference Day 2:

“During the question and answer session, a fellow who had spoken up in previous sessions with questions stood up and said that there were (at least) two conceptual confusions pervasive throughout discussions at this conference: 1) that something could come from nothing (presumably a reference to how the big bang is framed, though this could have been intended more generally as a critique of emergentism) and, 2) that history can say anything about the future. The same individual (whose name I did not get) said that no one had given an adequate definition of history, and then noted that the original Greek term for history meant ‘inquiry.’ Given this Grecian (or even, if you like, Herodotean) origin for the idea of history as an inquiry, I immediately asked myself, ‘If one can conduct an inquiry into the past, why cannot one also conduct an inquiry into the future?’ No doubt these inquiries will be distinct because one concerns the past and the other the future, but cannot they be taken up in the same spirit?”

There was a note of frustration in the voice of the speaker who objected to any account of the future as a part of history, and while I could appreciate the source of that frustration, it reminded me of every traditionalist protest against the growth of scientific knowledge made possible by novel methods not sanctioned by tradition. In this connection I think of Isaiah Berlin’s critique of scientific historiography, which I previously discussed in Big History and Scientific Historiography.

Berlin argued that the historical method is intrinsically distinct from the scientific method, so that there can be no such thing as scientific historiography, i.e., that the intrinsic limitations of the concept of history restrict history from being scientific in the way that the natural sciences are scientific. While Berlin’s objection to scientific historiography is not stated in terms of restricting the expansion of historical modes of thought, his appeal to a nature of history intrinsically irreconcilable with science and the scientific method parallels an appeal to the nature of history as being intrinsically about the past (and thus intrinsically not about the future), from which it would follow that there can be no such thing as a history that includes within it the study of the future in addition to the study of the past.

Here is a passage in which Berlin characterizes distinctively historical modes of thought, contrasting them to scientific modes of thought:

“Historians cannot ply their trade without a considerable capacity for thinking in general terms; but they need, in addition, peculiar attributes of their own: a capacity for integration, for perceiving qualitative similarities and differences, a sense of the unique fashion in which various factors combine in the particular concrete situation, which must at once be neither so unlike any other situation as to constitute a total break with the continuous flow of human experience, nor yet so stylised and uniform as to be the obvious creature of theory and not of flesh and blood. The capacities needed are rather those of association than of dissociation, of perceiving the relation of parts to wholes, of particular sounds or colours to the many possible tunes or pictures into which they might enter, of the links that connect individuals viewed and savoured as individuals, and not primarily as instances of types or laws.”

Isaiah Berlin, “The Concept of Scientific History,” in Concepts and Categories, p. 140

Every cognitive capacity that Berlin here credits to the historian can be equally well exercised in relation to the future as to the past (I should point out that, as far as I know, Berlin did not take up the problem of the relation of the historian to the future). Indeed, one of the weaknesses of futurism has been that futurists have not immersed themselves in these distinctively historical modes of thought; our conception of the future could greatly benefit from a capacity for integration and perceiving the relation of parts to wholes. I don’t think Berlin would ever have imagined his critique of scientific historiography as advice for futurists, but it could be profitably employed in developing history in an extended sense.

It is common for historians to invoke distinctively historical modes of thought, and I believe that this is a valid concern. Indeed, I would go farther yet. Human modes of thought are primarily temporal, and non-temporal modes of thought come very late in our history as a species in comparison to the effortless way we learn to think of time in subtle and sophisticated ways. For example, when one learns a language, one finds that one spends an inordinate amount of time attempting to master past, present, and future tenses — the tenses of our mother tongue are so fixed in our minds that any other schema strikes us as counterintuitive (and, interestingly, even those who attain fluency in another language or languages usually revert to their mother tongue for counting). But in order to communicate effectively we must master the logic of time as expressed in linguistic tenses. Human beings are inveterate planners, preparers, and schemers; our present is pervasively animated by a concern for the future. We are so taken up with our plans for the future that it is considered something of a “gift” to be able to “live in the moment.”

Many of Berlin’s examples of distinctively historical thought position the historian as attempting to explain historical change. The emphasis on describing change in history results in an indirect deemphasis of continuity, though continuity is arguably the overwhelming experience of time and history. It would be almost impossible for us to delineate all of the things that we know will happen tomorrow, and which we do not even bother to think of as predictions because they fall so near to certainty on the epistemic continuum of historical knowledge. All of the laws of science that have been discovered up to the present day will continue to be in effect tomorrow, and all of the events and processes that make up the world will continue to be governed by these laws of nature tomorrow. We could exhaust ourselves describing the nomological certainties of the morrow, and still not have exhausted the predictions we might have made. Thus it is that we know that the sun will rise tomorrow, and we can explain how and why the sun will rise tomorrow. If you are an anchorite living in a cave, the sun will not rise for you, but you can nevertheless be confident that Earth will continue to orbit the sun while rotating, and that this process will result in the appearance of the sun rising for everyone else not so confined.

But our sciences that describe the laws of nature that govern the world are incomplete, and they are in particular incomplete when it comes to history. I have noted elsewhere that there is (as yet) no science of time, and it is interesting to speculate that the absence of a science of time may be related to a parallel absence of a truly scientific historiography or a science of civilization. Because we have no science of time, we have no formal concepts of time — or, rather, we have no concepts of time recognized to be formal concepts. I have argued elsewhere that the idea of the punctiform present is a formal concept of time, i.e., interpreted as a formal concept it can be employed in a formal theory of time which can illuminate actual time as an ideal, simplified model. But as soon as you try to interpret the idea of the punctiform present as an empirical concept you run into difficulties. Would it be possible to measure a dimensionless instant? The punctiform present is like a pendulum with a weightless string, frictionless fulcrum, and no air drag. No such pendulum exists in actual fact, but the ideal pendulum remains a useful fiction for us. Similarly, the punctiform present is a useful fiction for a formal science of time.
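To make the pendulum analogy explicit (the formula is standard elementary mechanics, added here only as an illustration): the ideal pendulum obeys a simple exact law that no real pendulum obeys.

```latex
% Period of the ideal pendulum (weightless string, frictionless
% fulcrum, no air drag), valid in the small-angle approximation:
\[
T = 2\pi \sqrt{\frac{L}{g}}
\]
```

No physical pendulum satisfies the idealization, yet real pendulums approximate this period ever more closely as amplitude, friction, and string mass shrink, just as measured durations approximate the punctiform present as the interval of measurement shrinks toward zero.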

A truly (perhaps exhaustively) scientific historiography would not only employ the methods of the special sciences in the exposition of history, but would also incorporate a science of time that would allow us to be as definite about history to come as we can now be definite about our predictions for the natural world as governed by laws of nature. It is not difficult to imagine what Berlin would have thought of such an idea. Here is another quote from Berlin’s essay on scientific historiography:

“…the attempt to construct a discipline which would stand to concrete history as pure to applied, no matter how successful the human sciences may grow to be — even if, as all but obscurantists must hope, they discover genuine, empirically confirmed, laws of individual and collective behaviour — seems an attempt to square the circle.”

Isaiah Berlin, “The Concept of Scientific History,” in Concepts and Categories, p. 142

What Berlin here condemns as an attempt to square the circle is precisely my ideal in history, and it is what I called formal historiography in Rational Reconstructions of Time. A formulation of history in an extended sense would be a step toward a formal historiography.

While on one level I am interested in history as an intellectual discipline in its own right — history for history’s sake — and therefore I am interested in formal historiography as a sui generis discipline, I also have an ulterior motive in the pursuit of a formal historiography that can develop history in an extended sense. Such a formal historiography will be one tool in the interdisciplinary toolkit of future scientists of civilization, who must study civilization both in terms of its past and its future.

. . . . .

Isaiah Berlin (1909–1997)


. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

project astrolabe logo smaller

. . . . .

A Fine-Grained Overview

5 December 2016

Monday


overview-and-detail (table: the four permutations yielded by juxtaposing detail/overview with constructive/non-constructive)

Constructive and Non-Constructive Perspectives

Whenever I discuss methodology, I eventually come around to discussing the difference between constructive and non-constructive methods, as this is a fundamental distinction in reasoning, though often unappreciated, and especially neglected in informal thought (which is almost all human thought). After posting Ex Post Facto Eight Year Anniversary I realized that the distinction that I made in that post between detail (granularity) and overview (comprehensivity) can also be illuminated by the distinction between the constructive and the non-constructive.
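The distinction between the constructive and the non-constructive can be illustrated by the textbook example of a non-constructive existence proof (a standard illustration, not an example drawn from the post mentioned above): classical practice happily asserts existence on the basis of the law of the excluded middle, the very step the constructivist rejects.

```latex
% Claim: there exist irrational numbers $a$, $b$ such that $a^b$ is rational.
% Non-constructive proof, via the law of the excluded middle:
% either $\sqrt{2}^{\sqrt{2}}$ is rational or it is not.
\[
\text{If } \sqrt{2}^{\sqrt{2}} \in \mathbb{Q}: \quad a = b = \sqrt{2}.
\]
\[
\text{If } \sqrt{2}^{\sqrt{2}} \notin \mathbb{Q}: \quad
a = \sqrt{2}^{\sqrt{2}},\ b = \sqrt{2},\ \text{so } a^b
= \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{\,2} = 2 \in \mathbb{Q}.
\]
```

The proof guarantees that one of the two pairs works without deciding which; a constructivist withholds assent to the existence claim until the case is decided, while non-constructive practice accepts it as it stands.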

These two pairs of concepts can be juxtaposed in order to show the four permutations they yield. I have done the same thing with the dual dichotomies of nomothetic/idiographic and synchronic/diachronic (in Axes of Historiography) and with weak panspermia/strong panspermia and theological/technological (in Is astrobiology discrediting the possibility of directed panspermia?). The table above gives the permutations for the juxtaposition of detail/overview and constructive/non-constructive.

In that previous post I identified my theoretical ideal as a fine-grained overview, combining digging deeply into details while also cultivating an awareness of the big picture in which the details occur. Can a fine-grained overview be attained more readily through constructive or non-constructive methods?

In P or Not-P I quoted this from Alain Connes:

“Constructivism may be compared to mountain climbers who proudly scale a peak with their bare hands, and formalists to climbers who permit themselves the luxury of hiring a helicopter to fly over the summit.”

Changeux and Connes, Conversations on Mind, Matter, and Mathematics, Princeton, 1995, p. 42

This image makes of constructivism the fine-grained, detail-oriented approach, while non-constructive methods are like the overview from on high, as though looking down from a helicopter. But it isn’t quite that simple. If we take this idea of constructivists as mountain climbers, we may extend the image with this thought from Wittgenstein:

“With my full philosophical rucksack I can climb only slowly up the mountain of mathematics.”

Ludwig Wittgenstein, Culture and Value, p. 4

And so it is with constructivists: their climbing is slow because they labor under the weight of a philosophical burden. They have an overarching vision of what logic and mathematics ought to be, and generally are not satisfied with these disciplines as they are. Thus constructivism has an overview as well — a prescriptive overview — though this overview is not always kept in mind. As Jean Largeault wrote, “The grand design has given way to technical work.” (in the original: “Les grands desseins ont cédé la place au travail technique.” L’intuitionisme, p. 118) By this Largeault meant that the formalization of intuitionistic logic had deprived intuitionism (one species of constructivism) of its overarching philosophical vision, its grand design:

“Even those who do not believe in the omnipotence of logic and who defend the rights of intuition have acceded to this movement in order to justify themselves in the eyes of their opponents. As a result we find them setting out, somewhat paradoxically, the ‘formal rules of intuitionist logic’ and establishing an ‘intuitionistic formalism’.”

…and in the original…

“Ceux-là mêmes qui ne croient pas à la toute-puissance de la logique et qui défendent les droits de l’intuition, ont dû, eux aussi, céder au mouvement pour pouvoir se justifier aux yeux de leurs adversaires, et l’on a vu ainsi, chose passablement paradoxale, énoncer les ‘règles formelles de la logique intuitioniste’ et se constituer un ‘formalisme intuitioniste’.”

Robert Blanché, L’axiomatique, § 17

But intuitionists and constructivists return time and again to a grand design, so that the big picture is always there, though often it remains implicit. At the very least, both the granular and the comprehensive conceptions of constructivism are methodologically familiar, as we see on the left side of the table above: granular constructivism, with its typical concern for the “right” methods (which can be divorced from any overview), and, below that, the philosophical ideas that inspired the constructivist deviation from classical eclecticism, from Kant through Hilbert and Brouwer to the constructivists of our time, such as Errett Bishop.

These two faces of methodology are not as familiar in the case of non-constructivism. In so far as non-constructivism is classical eclecticism (a phrase I have taken from the late Torkel Franzén), a methodological “anything goes,” this is the granular conception of non-constructivism, consisting of formal methods without any unifying philosophical conception. This much is familiar. Less familiar is the possibility of a non-constructive overview made systematic by some unifying conception. The idea of a non-constructive overview is familiar enough, and appears in the Connes quote above, but this idea has had little philosophical content.

There is, however, the possibility of giving non-constructive formal methodology an overarching philosophical vision, and this follows readily enough from familiar forms of non-constructive thought. Cantor’s theory of transfinite numbers, and the proof techniques that Cantor formulated (and which remain notorious among constructivists), are a rare example of non-constructive thought pushed to its limits and beyond. Applied to a non-constructive overview, the transfinite perspective suggests that a systematically non-constructive methodology would insistently seek a total context for any idea, by always contextualizing any idea in a more comprehensive setting, and pursuing that contextualization to infinity. Thus any attempt to think a finite thought forces us to grapple with the infinite.
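Cantor’s diagonal procedure is the canonical instance of the proof techniques just mentioned (sketched here in its standard form): it takes any proposed enumeration and manufactures an element beyond it.

```latex
% Cantor's diagonal argument: no enumeration $s_1, s_2, s_3, \ldots$
% of infinite binary sequences can contain every such sequence.
% Define the diagonal sequence $d$ by flipping the $n$th digit of $s_n$:
\[
d_n = 1 - (s_n)_n \qquad (n = 1, 2, 3, \ldots)
\]
% Then $d$ differs from each $s_n$ at position $n$, so $d$ escapes
% the enumeration.
```

Every proposed totality of sequences is thereby embedded in a strictly more comprehensive one, which is the pattern of perpetual recontextualization described above.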

A fine-grained overview might be formulated by way of a systematically non-constructive methodology — not the classical eclecticism that accidentally embraces non-constructive methods alongside constructive methods — one that digs deeply into details by non-constructive means while also furnishing a sweeping, comprehensive philosophical vision of what formal methods can be, a vision not driven to systematically limit formal methods (as is the case with constructivism).

Would the details that would be brought out by a systematically non-constructive method be the same fine-grained details that constructivism brings out when it insists upon finitistic proof procedures? Might there be different kinds of detail to be revealed by distinct methods of granularity in formal thought? These are elusive thoughts that I have not yet pinned down, so examples and answers will have to wait until I have achieved Cartesian clarity and distinctness about non-constructive methods. I beg the reader’s indulgence for my inadequate formulations here. Even as I write, ideas appear briefly and then disappear before I can record them, so this post is different from what I imagined as I sat down to write it.

Here again I can appeal to Wittgenstein:

“This book is written for such men as are in sympathy with its spirit. This spirit is different from the one which informs the vast stream of European and American civilization in which all of us stand. The spirit expresses itself in an onwards movement, building ever larger and more complicated structures; the other in striving after clarity and perspicuity in no matter what structure. The first tries to grasp the world by way of its periphery — in its variety; the second at its center — in its essence. And so the first adds one construction to another, moving on and up, as it were, from one stage to the next, while the other remains where it is and what it tries to grasp is always the same.”

Ludwig Wittgenstein, Philosophical Remarks, Foreword

These two movements of thought are not mutually exclusive; it is possible to build larger structures while always trying to grasp an elusive essence. It could be argued that anything built on uncertain foundations will come to naught, so that we must grasp the essence first, before we can proceed to construction. But as important as it is to attempt to grasp an elusive essence, if we insist on grasping it before we build, we risk the intellectual equivalent of the waiting gambit.

. . . . .

Constructivism and Non-constructivism

P or Not-P

What is the Relationship between Constructive and Non-Constructive Mathematics?

A Pop Culture Exposition of Constructivism

Intuitively Clear Slippery Concepts

Kantian Non-Constructivism

Constructivism without Constructivism

The Vacuous Identity Principle

Permutations of Infinitistic Methods

Methodological Differences

Constructivist Watersheds

Constructive Moments within Non-Constructive Thought

Gödel between Constructivism and Non-Constructivism

The Natural History of Constructivism

Cosmology: Constructive and Non-Constructive

Saying, Showing, Constructing

Arthur C. Clarke’s tertium non datur

A Non-Constructive World

. . . . .

Wittgenstein wrote, “With my full philosophical rucksack I can climb only slowly up the mountain of mathematics.”


. . . . .

Grand Strategy Annex


. . . . .

Saturday


eight-ball

Last month, November 2016, marked the eight-year anniversary of this blog. My first post, Opening Reflection, was dated 05 November 2008. Since then I have continued to post, although less frequently of late. I have become much less interested in tossing off a post about current events, and more interested in more comprehensive and detailed analyses, though blog posts are rarely associated with comprehensivity or detail. But that’s how I roll.

It is interesting that we have two distinct and even antithetical metaphors to identify non-trivial modes of thought. I am thinking of “dig deep” or “drill down” on the one hand, and, on the other hand, “overview” or “big picture.” The two metaphors are not identical, but each implies a particular approach to non-triviality: the former implies immersion in a fine-grained account of a thing, while the latter implies taking a thing in its widest signification.

Ideally, one would like to be both detailed and comprehensive at the same time — formulating an account of anything that is, at once, both fine-grained and which takes the object of one’s thought in its widest signification. In most cases, this is not possible. Or, rather, we find this kind of scholarship only in the most massive works, like Gibbon’s Decline and Fall of the Roman Empire, or Mario Bunge’s Treatise on Basic Philosophy. Over the past hundred years or so, scholarship has been going in exactly the opposite direction. Scholars focus on a particular area of thought, and then produce papers, each one of which focuses even more narrowly on one carefully defined and delimited topic within a particular area of thought. There is, thus, a great deal of very detailed scholarship, and less comprehensive scholarship.

Previously in Is it possible to specialize in the big picture? I considered whether it is even possible to have a scholarly discipline that focuses on the big picture. This question is posed in light of the implied dichotomy above: comprehensivity usually comes at the cost of detail, and detail usually comes at the cost of comprehensivity.

Another formulation of this dichotomy that brings out other aspects of the dilemma would be to ask if it is possible to be rigorous about the big picture, or whether it is possible to give a detailed account of the big picture — a fine-grained overview, as it were. I guess this is one way to formulate my ideal: a fine-grained overview — thinking rigorously about the big picture.

While there is some satisfaction in being able to give a concise formulation of my intellectual ideal — a fine-grained overview — I cannot yet say if this is possible, or if the ambition is chimerical. And if the ambition for a fine-grained overview is chimerical, is it chimerical because finite and flawed human beings cannot rise to this level of cognitive achievement, or is it chimerical because it is an ontological impossibility?

While an overview may necessarily lack the detail of a close and careful account of anything, so that the two — overview and detail — are opposite ends of a continuum, implying the ontological impossibility of their union, I do know, on the other hand, that clear and rigorous thinking is always possible, even if it lacks detail. Clarity and rigor — or, if one prefers the canonical Cartesian formulation, clear and distinct ideas — are a function of disciplined thinking, and one can think in a disciplined way about a comprehensive overview. If one allows that a fine-grained overview can be finely grained in virtue of the fine-grained conceptual infrastructure that one employs in the exposition of that overview, then, certainly, comprehensive detail is possible in this respect (even if in no other).

I could, then, re-state my ambition as formulated in my opening reflection such that, “my intention in this forum to view geopolitics through the prism of ideas,” now becomes my intention to formulate a fine-grained overview of geopolitics through the prism of ideas. But, obviously, I now seldom post on geopolitics, and am out to bag bigger game. This is, I think, implicit in the remit of a comprehensive overview of geopolitics. F. H. Bradley famously said, “Short of the Absolute God cannot stop, and, having reached that goal, He is lost, and religion with Him.” We might similarly say, short of big history geopolitics cannot stop, and, having reached that goal, it is lost, and political economy with it.


. . . . .

Grand Strategy Annex


. . . . .

Planet of Zombies

21 August 2016

Sunday


planet of zombies 2

The Fate of Mind in the Age of Turing

We are living today in the Age of Turing. Alan Turing was responsible for the theoretical work underlying contemporary computer science, but Turing’s work went far beyond the formal theory of the computer. Like Darwin’s, Turing’s thought ran ahead of the science he founded, and he openly speculated on the consequences of the future development of the computers that his theory made possible.

In his seminal paper “Computing Machinery and Intelligence” (the paper in which he introduced the “Turing Test,” which he called the “imitation game”) Turing began with the question, “Can machines think?” and went on to assert:

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

A. M. Turing, “Computing Machinery and Intelligence,” Mind, 59 (1950), pp. 433–460

Turing’s prediction hasn’t yet come to pass, but Turing was absolutely correct that one can speak of machines thinking without being contradicted. Indeed, Turing was more right than he could have guessed, as his idea that computers should be judged upon their performance — and even compared in the same way to human performance — rather than on a vague idea of thinking or consciousness, has become so commonplace that, if one maintains the contrary in public, one can expect to be contradicted.

Turing was, in respect to mind and consciousness, part of a larger intellectual movement that called into question “folk concepts,” which came to seem unacceptably vague and far too unwieldy in the light of the explanatory power of scientific concepts, the latter often constructed without reference to folk concepts, which came to be viewed as dispensable. Consciousness has been relegated to the status of a concept of “folk psychology” with no scientific basis.

While I am in sympathy with the need for rigorous scientific concepts, the eliminative approach to mind and consciousness has not resulted in greater explanatory power for scientific theories, but rather has reinforced an “explanatory gap” (a term introduced by Joseph Levine) that has resulted in a growing disconnect between the most rigorous sciences of human and animal behavior on the one hand, and on the other hand what we know to be true of our own experience, but which we cannot formulate or express in scientific terms. This is a problem. The perpetuation of this disconnect will only deepen our misunderstanding of ourselves and will continue to weaken the ability of science to explain anything that touches upon human experience. Moreover, this is not merely a human matter. We misunderstand the biosphere entire if we attempt to understand it while excluding the role of consciousness. More on this below.

Science has been misled in the study of consciousness by an analogy with the study of life. Life was once believed to be inexplicable in terms of pure science, and so there was a dispute between “mechanism” and “vitalism,” with the vitalists believing that there was some supernatural or other principle superadded to inanimate matter, and that possession of this distinctively vital element unaccountable in scientific terms distinguished the animate from the inanimate. Physics and chemistry alone could explain inanimate matter, but something more was needed, according to vitalism, to explain life. But with the progress of biology, vitalism was not so much refuted as made irrelevant. We now have a good grasp of biochemistry, and while a distinction is made between inorganic chemistry and biochemistry, it is all understood to be chemistry, and no vital spark is invoked to explain the chemistry distinctive of life.

Similarly, consciousness has been believed to be a “divine spark” within a human being that distinguishes a distinctively human perspective on the world, but consciousness “explained” in this way comes with considerable theological baggage, as explicitly theological terms like “soul” and “spirit” are typically used interchangeably with “consciousness” and “mind.” From a scientific perspective, this leaves much to be desired, and we could do much better. I agree with this. Turing’s imitation game seems to present us with an operational definition of consciousness that allows us to investigate mind and consciousness without reference to the theological baggage. There is much to be gained by Turing’s approach, but the problem is that we have here no equivalent of chemistry — no underlying physical theory that could account for consciousness in the way that life is accounted for by biochemistry.

Part of the problem, and the problem that most interests me at present, is the anthropocentrism of both traditional theological formulations and contemporary scientific formulations. If we understand human consciousness not as an exception that definitively separates us from the rest of life on the planet, not as a naturalistic stand-in for a “divine spark” that would differentiate human beings from the “lower” animals, but as a distinctive development of consciousness already emergent in other forms preceding human beings, then we understand that human consciousness is continuous with other forms of consciousness in nature, and that, as conscious beings, we are part of something greater than ourselves, which is a biosphere in which consciousness is commonplace, like vision or flight.

There are naturalistic alternatives to an anthropocentric conception of consciousness, alternatives that place consciousness in the natural world, and which also have the virtue of avoiding the obvious problems of eliminativist or reductivist accounts of consciousness. I will consider the views of Antonio Damasio and John Searle. I do not fully agree with either of these authors, but I am in sympathy with these approaches, which seem to me to offer the possibility of further development, as fully scientific as Turing’s approach, but without the denial of consciousness as a distinctive constituent of the world.

Antonio R. Damasio in The Feeling of What Happens distinguished between core consciousness and extended consciousness. Core consciousness, he wrote:

“…provides the organism with a sense of self about one moment — now — and about one place — here. The scope of core consciousness is the here and now. Core consciousness does not illuminate the future, and the only past it vaguely lets us glimpse is that which occurred in the instant just before. There is no elsewhere, there is no before, there is no after.”

Antonio R. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness, San Diego, New York, and London: Harcourt, Inc., 1999, p. 16

…and…

“…core consciousness is a simple, biological phenomenon; it has one single level of organization; it is stable across the lifetime of the organism; it is not exclusively human; and it is not dependent on conventional memory, working memory, reasoning, or language.”

Loc. cit.

The simplicity of core consciousness gives it a generality across organisms, and across the life span of a given organism; at any one time, it is always more or less the same. Extended consciousness, on the other hand, is both more complex and less robust, dependent upon an underlying core consciousness, but constructing from core consciousness what Damasio calls the “autobiographical self” in contradistinction to the ephemeral “core self” of core consciousness. Extended consciousness, Damasio says:

“…provides the organism with an elaborate sense of self — an identity and a person, you or me, no less — and places that person at a point in individual historical time, richly aware of the lived past and of the anticipated future, and keenly cognizant of the world beside it.”

Loc. cit.

…and…

“…extended consciousness is a complex biological phenomenon; it has several levels of organization; and it evolves across the lifetime of the organism. Although I believe extended consciousness is also present in some nonhumans, at simple levels, it only attains its highest reaches in humans. It depends on conventional memory and working memory. When it attains its human peak, it is also enhanced by language.”

Loc. cit.

…but…

“…extended consciousness is not an independent variety of consciousness: on the contrary, it is built on the foundation of core consciousness.”

Op. cit., p. 17

One might add to this formulation by noting that, as extended consciousness is built on core consciousness, core consciousness is, in turn, built on the foundation of biological processes. I would probably describe consciousness in a somewhat different way, and would make different distinctions, but I find Damasio’s approach helpful, as he makes no attempt to explain away consciousness or to reduce it to something that it is not. Damasio seeks to describe and to explain consciousness as consciousness, and, moreover, sees consciousness as part of the natural world that is to be found embodied in many beings in addition to human beings, which latter constitutes, “…extended consciousness at its zenith.”

Damasio’s formulation of both core consciousness and extended consciousness as biological phenomena might be compared to what John Searle calls “biological naturalism.” What Searle, a philosopher, and Damasio, a neuroscientist, have in common is an interest in a naturalistic account of mind which is not eliminativist or reductivist. To this end, both emphasize the biological nature of consciousness. Searle has conveniently summarized his biological naturalism in six theses, as follows:

1. Consciousness consists of inner, qualitative, subjective states and processes. It has therefore a first-person ontology.

2. Because it has a first-person ontology, consciousness cannot be reduced to third-person phenomena in the way that is typical of other natural phenomena such as heat, liquidity, or solidity.

3. Consciousness is, above all, a biological phenomenon. Conscious processes are biological processes.

4. Conscious processes are caused by lower-level neuronal processes in the brain.

5. Consciousness consists of higher-level processes realized in the structure of the brain.

6. There is, as far as we know, no reason in principle why we could not build an artificial brain that also causes and realizes consciousness.

John R. Searle, Mind, Language and Society: Philosophy in the Real World, New York: Basic Books, 1999, p. 53

Searle’s formulations — again, as with Damasio, I would probably formulate these ideas a bit differently, but, on the whole, I am sympathetic to Searle’s approach — are a reaction against a reaction, i.e., against a reactionary theory of mind, which is the materialist theory of mind formulated in contradistinction to Cartesian dualism. Searle devotes a considerable portion of several books to the problems with this latter philosophy. I think the most important lesson to take away from Searle’s critique is not the technical dispute, but the thematic motives that underlie this philosophy of mind:

“How is it that so many philosophers and cognitive scientists can say so many things that, to me at least, seem obviously false? Extreme views in philosophy are almost never unintelligent; there are generally very deep and powerful reasons why they are held. I believe one of the unstated assumptions behind the current batch of views is that they represent the only scientifically acceptable alternatives to the antiscientism that went with traditional dualism, the belief in the immortality of the soul, spiritualism, and so on. Acceptance of the current views is motivated not so much by an independent conviction of their truth as by a terror of what are apparently the only alternatives.”

John R. Searle, The Rediscovery of the Mind, Cambridge and London: The MIT Press, 1992, Chap. 1

The biologism of both Damasio and Searle makes it possible not only to approach human consciousness scientifically, but also to place consciousness in nature — the alternatives being denying human consciousness or approaching it non-scientifically, and denying consciousness a place in nature. These alternatives have come to have a colorful representation in contemporary philosophy in the discussion of “philosophical zombies.” Philosophical zombies are beings like ourselves, but without consciousness. The question, then, is whether we can distinguish philosophical zombies from human beings in possession of consciousness. I hope that the reader will have noticed that, in the discussion of philosophical zombies, we encounter another anthropocentric formulation. (I previously touched on some of the issues related to philosophical zombies in The Limitations of Human Consciousness, A Note on Soulless Zombies, and The Prodigal Philosopher Returns.)

The anthropocentrism of philosophical zombies can be amended by addressing philosophical zombies in a more comprehensive context, in which not only human beings have consciousness, but consciousness is common in the biosphere. Then the question becomes not, “can we distinguish between philosophical zombies and conscious human beings?” but “can we distinguish between a biosphere in which consciousness plays a constitutive role and a biosphere in which consciousness is entirely absent?” This is potentially a very rich question, and I could unfold it over several volumes, rather than the several paragraphs that follow, which should be understood as only the barest sketch of the problem.

As I see it, reconstructing biosphere evolution should include the reconstruction, to the extent possible, of the evolution of consciousness as a component of the biosphere — when did it emerge? When did the structures upon which it supervenes emerge? How did consciousness evolve and adapt to changing selection pressures? How did consciousness radiate, and what forms has it taken? These questions are obviously entailed by biological naturalism. Presumably consciousness evolved gradually from earlier antecedents that were not consciousness. Damasio writes, “natural low-level attention precedes consciousness,” and, “consciousness and wakefulness, as well as consciousness and low-level attention, can be separated.” Again, I would formulate this a bit differently, but, in principle, states of a central nervous system prior to the emergence of consciousness would precede even rudimentary core consciousness. If these states of a central nervous system prior to consciousness include wakefulness and low-level attention, this would constitute a particular seriation of the evolution of consciousness.

Damasio calls human consciousness, “consciousness at its zenith,” and a naturalistic conception of consciousness recognizes this by placing this zenith of human consciousness at the far end of the continuum of consciousness, but still on a continuum that we share with other beings with which we share the biosphere. A human being is not only a being among beings, but also one biological being among other biological beings. Given Searle’s biological naturalism, our common biology — especially the common biology of our central nervous systems and brains — points to our being a conscious being among other conscious beings. This seems to be borne out in our ordinary experience, as we usually understand our experience. We interact with other conscious beings on the level of consciousness, but the quality of consciousness may differ among beings. Interacting with other beings on the level of awareness means that our relationships with other conscious beings are marked by mutual awareness: not only are we aware of the other, but the other is also aware of us.

Above and beyond mere consciousness is sentient consciousness, i.e., consciousness with an emotional element superadded. We interact with other sentient beings on the level of sentience, that is to say, on the level of feeling. Our relationships with other mammals, especially those we have made part of our civilization, like dogs and horses, are intimate, personal relationships, not mediated by intelligence, but mostly mediated by the emotional lives we share with our fellow mammals, endowed, like us, with a limbic system. We intuitively understand the interactions and group dynamics of other social species, because we are ourselves a social species. Even when the institutions of, for example, gorilla society or chimpanzee society are radically different from the institutions of human society, we can recognize that these are societies, and we can sometimes recognize the different rules that govern these societies.

Even when human beings are absent from interactions in the biosphere, there are still interactions on the level of consciousness and sentience. When a bobcat chases a hare, both interact on the level of two core consciousnesses, and also, as mammals, they interact on a sentient level. The hare has that level of fear and panic possible for core consciousness, and the bobcat, no doubt, experiences the core consciousness equivalent of satisfaction if it catches the hare, and frustration if the hare escapes. Or when a herd of wild horses panics and stampedes, their common sentient response to some environmental stimulation provides the basis of their interaction as a herd species.

All of this can be denied, and we can study nature as though consciousness were no part of it. While I have assimilated the denial of consciousness in nature to anthropocentrism, many more assimilate the attribution of consciousness to other species to a form of anthropocentrism. Clearly, we need to better define anthropocentrism, where and how it misleads us, and where and how it better helps us to understand our fellow beings with which we share the biosphere. The position that identifies consciousness as peculiarly human and denies it to the rest of the biosphere is, in effect, asserting that a biosphere of zombies is indistinguishable from a biosphere of conscious beings; I can understand how this grows out of a legitimate concern to avoid anthropocentric extrapolations, but I can also recognize the violation of the Copernican principle in this position. The view that recognizes consciousness throughout the macroscopic biosphere can also be interpreted as consistent with avoiding anthropocentrism, but is also consonant with Copernicanism broadly construed.

To adopt an eliminativist or reductionist account of consciousness, i.e., to deny the reality of consciousness, is not only to deny consciousness to human beings (a denial that would be thoroughly anthropocentric), it is to deny consciousness to the whole of nature, to deny all consciousness of all kinds throughout nature. It is to assert that consciousness has no place in nature, and that a planet of zombies is indistinguishable from a planet of conscious agents. Without consciousness, the world entire would be a planet of zombies.

To deny consciousness is to deny that there are any other species, or any other biospheres, in the universe in which consciousness plays a role. If we deny consciousness we also deny consciousness elsewhere in the universe, unless we insist that terrestrial life is the exception, and that, again, would be a non-Copernican position to take. To deny consciousness is to deny that consciousness will ever inhere in some non-biological substrate, i.e., it is to deny that machines will ever become conscious, because there is no such thing as consciousness. To deny consciousness is to constitute in place of the biosphere we have, in which conscious interaction plays a prominent role in the lifeways of megafauna, a planet of zombies in which all of these apparent interactions are mere appearance, and the reality is non-conscious beings interacting mechanically and only mechanically. I am not presenting this as a moral horror that we should avoid because it offends us, but as naturalistically — indeed, biologically — false. Our world is not a planet of zombies.

. . . . .

Grand Strategy Annex

. . . . .

Wednesday


Saturn with astronaut

Our first view of Earth was from its surface; every other planet human beings eventually visit will be first perceived by a human being at a great distance, then from orbit, and last of all from its surface. We will descend from orbit to visit a new world, rather than, as on Earth, emerging from the surface of that world and, only later, much later, seeing it from orbit, and then as a pale blue dot, from a great distance.

With our homeworld, the effect of looking up from the surface of our planet precedes the overview effect; with every other world, the overview effect precedes the surface standpoint. We might call this the homeworld effect, which is a consequence of what I now call planetary endemism (and which, when I was first exploring the concept, I called planetary constraint). We initiated this process when human beings visited the moon and, for the first time in human history, descended to a new world, never before visited by human beings. With this first tentative experience of spacefaring, humanity knows one world from its surface (Earth) and one world from above (the moon). Every subsequent planetary visit will increase the relative proportion of the overview effect in contradistinction to the homeworld effect.

In the fullness of time, our normative assumptions about originating on a planet and leaving it by ascending into orbit will be displaced by a “new normal” of approaching worlds from a great distance, worlds perhaps first perceived as a pale blue dot, and then only later descending to familiarize ourselves with surface features. If we endure for a period of time sufficient for further human evolution under the selection pressure of spacefaring civilization, this new normal will eventually replace the instincts formed in the environment of evolutionary adaptedness (EEA) when humanity as a species branched off from other primates. The EEA of our successor species will be spacefaring civilization and the many worlds to which we travel, and this experience will shape our minds as well, producing an evolutionary psychology adapted not to survival on the surface of a planet, but to survival on any planet whatever, or no planet at all.

The Copernican principle is the first hint we have of the mind of a species adapted to spacefaring. It is characteristic of Copernicanism to call the perspective born of planetary endemism, the homeworld effect, into question. We have learned that the Copernican principle continually unfolds, always offering more comprehensive perspectives that place humanity and our world in a context that subsumes our previous perspective. Similarly, the overview effect will unfold over the development of a spacefaring civilization that takes human beings progressively farther into space, providing ever more distant overviews of our world, until that world becomes lost among countless other worlds.

In my Centauri Dreams post The Scientific Imperative of Human Spaceflight, I discussed the possibility of further overview effects resulting from attaining ever more distant perspectives on our cosmic home — thus attaining an ever more rigorous Copernican perspective. For example, although it is far beyond contemporary technology, it is possible to imagine we might someday have the ability to go so far outside the Milky Way that we could see our own galaxy in overview, and point out the location of the sun in the Orion Spur of the Milky Way.

There is, however, another sense in which additional overview effects may manifest themselves in human experience, and this would be due less to greater technical abilities that would allow for further first-person human perspectives on our homeworld and on our universe, and more to cumulative human experience in space as a spacefaring civilization. With accumulated experience comes “know how,” expertise, practical skill, and intuitive mastery — perhaps what might be thought of as the physical equivalent of acculturation.

We achieve this physical acculturation to the world through our bodies, and we express it through a steadily improving facility in accomplishing practical tasks. One such practical task is the ability to estimate sizes, distances, and movements of other bodies in relation to our own body. An astronaut floating in space in orbit around a planet or a moon (i.e., on a spacewalk) would naturally (i.e., intuitively) compare himself as a body floating in space with the planet or moon, also a body floating in space. Frank White has pointed out to me that, in interviews with astronauts, the astronauts themselves have noted the difference between being inside a spacecraft and being outside on a spacewalk, when one is essentially a satellite of Earth, on a par with other satellites.

The human body is an imperfectly uniform, imperfectly “standard” standard ruler that we use to judge the comparative sizes of the objects around us. Despite its imperfection as a measuring instrument, the human body has the advantage of being more intimately familiar to us than any other measuring device, which makes it possible to achieve a visceral understanding of quantities measured in comparison to our own body. At first, perceptions of comparative sizes of bodies in space would be highly inaccurate and subject to optical illusions and cognitive biases, but with time and accumulated experience an astronaut would develop a more-or-less accurate “feel” for the size of the planetary body about which he is orbiting. With accumulated experience one would gain an ability to judge distance in space by eye, estimate how rapidly one was orbiting the celestial body in question, and perhaps even familiarize oneself with minute differences in microgravity environments, perceptible only on an intuitive level below the threshold of explicit consciousness — like the reflexes one acquires in learning how to ride a bicycle.

This idea came to me recently as I was reading a NASA article about Saturn, Saturn the Mighty, and I was struck by the opening sentences:

“It is easy to forget just how large Saturn is, at around 10 times the diameter of Earth. And with a diameter of about 72,400 miles (116,500 kilometers), the planet simply dwarfs its retinue of moons.”

How large is Saturn? We can approach the question scientifically and familiarize ourselves with the facts of the matter, expressed quantitatively, and we learn that Saturn has an equatorial radius of 60,268 ± 4 km (or 9.4492 Earths), a polar radius of 54,364 ± 10 km (or 8.5521 Earths), a flattening of 0.09796 ± 0.00018, a surface area of 4.27 × 10¹⁰ km² (or 83.703 Earths), a volume of 8.2713 × 10¹⁴ km³ (or 763.59 Earths), and a mass of 5.6836 × 10²⁶ kg (or 95.159 Earths) — all figures that I have taken from the Wikipedia entry on Saturn. We could follow up on this scientific knowledge by refining our measurements and by going more deeply into planetary science, and this gives us a certain kind of knowledge of how large Saturn is.
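As a sanity check, the “Earth equivalents” in those figures can be recovered from the raw quantities by simple division. A minimal sketch in Python, in which the Earth reference values (equatorial radius, volume, mass) are standard figures I have supplied for the comparison, not taken from the quoted Wikipedia entry:

```python
# Earth reference values: equatorial radius in km, volume in km**3, mass in kg
# (standard figures supplied for the comparison, not from the quoted entry).
R_EARTH_EQ_KM = 6378.1
V_EARTH_KM3 = 1.08321e12
M_EARTH_KG = 5.9722e24

# Saturn figures as quoted above from Wikipedia (central values only).
saturn_equatorial_radius_km = 60268
saturn_polar_radius_km = 54364
saturn_volume_km3 = 8.2713e14
saturn_mass_kg = 5.6836e26

# "Earth equivalents" are just ratios against the Earth reference values.
radius_in_earths = saturn_equatorial_radius_km / R_EARTH_EQ_KM  # ~9.45
volume_in_earths = saturn_volume_km3 / V_EARTH_KM3              # ~764
mass_in_earths = saturn_mass_kg / M_EARTH_KG                    # ~95

# Flattening: the fractional shortfall of the polar radius from the equatorial.
flattening = 1.0 - saturn_polar_radius_km / saturn_equatorial_radius_km  # ~0.098
```

The ratios land on the quoted Earth equivalents (9.449 radii, 763.6 volumes, roughly 95.2 masses, flattening 0.098), which is all the quantitative mode of “knowing” how large Saturn is amounts to.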

Notice that the figures I have taken from Wikipedia for the size of Saturn note Earth equivalents where relevant: this points to another way of “knowing” how large Saturn is: by way of comparative concepts, in contradistinction to quantitative concepts. When I read the sentence quoted above about Saturn, I instantly imagined an astronaut floating above Saturn who had also floated above the Earth, feeling on a visceral level the enormous size of the planet below. In the same way, an astronaut floating above the moon or Mars would feel the smallness of both in comparison to Earth. This is significant because the comparative judgement is exactly what a photograph does not communicate. A picture of the Earth as a “blue marble” may be presented to us in the same size format as a picture of Mars or Saturn, but the immediate experience of seeing these planets from orbit would be perceived very differently by an orbiting astronaut, because the human body always has itself to compare to its ambient environment.

This kind of experience could only come about once a spacefaring civilization had developed to the point that individuals could acquire diverse experiences of sufficient duration to build up a background knowledge distinct from the initial “Aha!” moment of first experiencing a new perspective, so one might think of the example I have given above as a “long term” overview effect, in contradistinction to the immediate impact of the overview effect for those who see Earth from orbit for the first time.

The overview effect over the longue durée, then, will continually transform our perceptions both by progressively greater overviews resulting from greater distances, and by cumulative experience as a spacefaring species that becomes accustomed to viewing worlds from an overview, and immediately grasps the salient features of worlds seen first from without and from above. In transforming our perceptions, our minds will also be transformed, and new forms of consciousness will become possible. This alone ought to be reason enough to justify human spaceflight.

The possibility of new forms of consciousness unprecedented in the history of terrestrial life poses an interesting question: suppose a species — for the sake of simplicity, let us say that this species is us, i.e., humanity — achieves forms of consciousness through the overview effect cultivated in the way I have described here, and that these forms of consciousness are unattainable prior to the broad and deep experience of the overview effect that would characterize a spacefaring civilization. Suppose also, for the sake of the argument, that the species that attains these forms of consciousness is sufficiently biologically continuous that there has been no speciation in the biological sense. There would be a gulf between earlier and later iterations of the same species, but could we call this gulf speciation? Another way to pose this question is to ask whether there can be cognitive speciation. Can a species at least partly defined in terms of its cognitive functions be said to speciate on a cognitive level, even when no strictly biological speciation has taken place?

I will not attempt to answer this question at present — I consider the question entirely open — but I would like to suggest that the idea of cognitive speciation, i.e., a form of speciation unique to conscious beings, is deserving of further inquiry, and should be of special interest to the field of cognitive astrobiology.

. . . . .

The Overview Effect

The Epistemic Overview Effect

Hegel and the Overview Effect

The Overview Effect and Perspective Taking

The Overview Effect in Formal Thought

Our Knowledge of the Internal World

The Human Overview

Personal Experience and Empirical Knowledge

Cognitive Astrobiology and the Overview Effect

The Scientific Imperative of Human Spaceflight

Brief Addendum on the Overview Effect in Formal Thought

A Further Addendum on the Overview Effect in Formal Thought, in the Way of Providing a Measure of Disambiguation in Regard to the Role of Temporality

The Overview Effect over the longue durée

Civilizations of Planetary Endemism

. . . . .

Grand Strategy Annex

. . . . .

Tuesday


William of Ockham, one of the greatest philosophers of the late Middle Ages, is remembered today primarily for his formulation of the principle of parsimony, also called Ockham’s razor.

A medieval logician in the twenty-first century

In the discussion surrounding the unusual light curve of the star KIC 8462852, Ockham’s razor has been mentioned numerous times. I have written a couple of posts interpreting the light curve of KIC 8462852 in light of Ockham’s razor: KIC 8462852 and Parsimony and Plenitude in Cosmology.

What is Ockham’s razor exactly? Well, that is a matter of philosophical dispute (and I offer my own more precise definition below), but even if it is difficult to say what Ockham’s razor is exactly, we can say something about what it was originally. Philotheus Boehner, a noted Ockham scholar, wrote of Ockham’s razor:

“It is quite often stated by Ockham in the form: ‘Plurality is not to be posited without necessity’ (Pluralitas non est ponenda sine necessitate), and also, though seldom: ‘What can be explained by the assumption of fewer things is vainly explained by the assumption of more things’ (Frustra fit per plura quod potest fieri per pauciora). The form usually given, ‘Entities must not be multiplied without necessity’ (Entia non sunt multiplicanda sine necessitate), does not seem to have been used by Ockham.”

William of Ockham, Philosophical Writings: A Selection, translated, with an Introduction, by Philotheus Boehner, O.F.M., Indianapolis and New York: The Library of Liberal Arts, THE BOBBS-MERRILL COMPANY, INC., 1964, Introduction, p. xxi

Most references to (and even most uses of) Ockham’s razor are informal and not very precise. In Maybe It’s Time To Stop Snickering About Aliens, which I linked to in KIC 8462852 Update, Adam Frank wrote of Ockham’s razor in relation to KIC 8462852:

“…aliens are always the last hypothesis you should consider. Occam’s razor tells scientists to always go for the simplest explanation for a new phenomenon. But even as we keep Mr. Occam’s razor in mind, there is something fundamentally new happening right now that all of us, including scientists, must begin considering… the exoplanet revolution means we’re developing capacities to stare deep into the light produced by hundreds of thousands of boring, ordinary stars. And these are exactly the kind of stars where life might form on orbiting planets… So we are already going to be looking at a lot of stars to hunt for planets. And when we find those planets, we are going to look at them for basic signs that life has formed. But all that effort means we will also be looking in exactly the right places to stumble on evidence of not just life but intelligent, technology-deploying life.”

Here the idea of Ockham’s razor is present, but little more than the idea. Rather than merely invoking the idea of Ockham’s razor, and merely assuming what constitutes simplicity and parsimony, if we are going to profitably employ the idea today, we need to develop it more fully in the context of contemporary scientific knowledge. In KIC 8462852 I wrote:

“One can see an emerging adaptation of Ockham’s razor, such that explanations of astrophysical phenomena are first explained by known processes of nature before they are attributed to intelligence. Intelligence, too, is a process of nature, but it seems to be rare, so one ought to exercise particular caution in employing intelligence as an explanation.”

In a recent post, Parsimony and Emergent Complexity, I went a bit further and suggested that Ockham’s razor can be formulated with greater precision in terms of emergent complexity, such that no phenomenon should be explained in terms of a level of emergent complexity higher than that necessary to explain the phenomenon.

De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres) is the seminal work on the heliocentric theory of the Renaissance astronomer Nicolaus Copernicus (1473–1543). The book, first printed in 1543 in Nuremberg, Holy Roman Empire, offered an alternative model of the universe to Ptolemy’s geocentric system, which had been widely accepted since ancient times. (Wikipedia)

De revolutionibus orbium coelestium and its textual history

Like Darwin many centuries later, Copernicus hesitated to publish his big book to explain his big idea, i.e., heliocentrism. Both men, Darwin and Copernicus, understood the impact that their ideas would have, though both probably underestimated the eventual influence of these ideas; both were to transform the world and leave as a legacy entire cosmologies. The particular details of the Copernican system are less significant than the Copernican idea, i.e., the Copernican cosmology, which, like Ockham’s razor, has gone on to a long career of continuing influence.

Darwin eventually published in his lifetime, prompted by the “Ternate essay” that Wallace sent him, but Copernicus put off publishing until the end of his life. It is said that Copernicus was shown a copy of the first edition of De revolutionibus on his deathbed (though this is probably apocryphal). Copernicus, of course, lived much closer to the medieval world than did Darwin — one could well argue that Toruń and Frombork in the fifteenth and sixteenth centuries were the medieval world — so we can readily understand Copernicus’ hesitation to publish. Darwin published in a world already transformed by industrialization, already wrenched by unprecedented social change; Copernicus eventually published in a world that, while on the brink of profound change, had not appreciably changed in a thousand years.

Copernicus’ hesitation meant that he did not directly supervise the publication of his manuscript, that he was not able to correct or revise subsequent editions (Darwin revised On the Origin of Species repeatedly for six distinct editions in his lifetime, not including translations), and that he was not able to respond to the reception of his book. All of these conditions were to prove significant in the reception and propagation of the Copernican heliocentric cosmology.

Copernicus, after long hesitation, was stimulated to pursue the publication of De revolutionibus by his contact with Georg Joachim Rheticus, who traveled to Frombork for the purpose of meeting Copernicus. Rheticus, who had great respect for Copernicus’ achievement, came from the hotbed of Renaissance and Protestant scholarship that was Wittenberg. He took Copernicus’ manuscript to Nuremberg to be published by a noted scientific publisher of the day, but Rheticus did not stay to oversee the entire publication of the work. This job fell to Andreas Osiander, a Protestant theologian who sought to water down the potential impact of De revolutionibus by adding a preface suggesting that Copernicus’ theory should be accepted in the spirit of an hypothesis employed for the convenience of calculation. Osiander did not sign this preface, and many readers of the book, when it eventually came out, thought that this preface was the authentic Copernican interpretation of the text.

Osiander’s preface, and Osiander’s intentions in writing the preface (and changing the title of the book) continue to be debated to the present day. This debate cannot be cleanly separated from the tumult surrounding the Protestant Reformation. Luther and the Lutherans were critical of Copernicus — they had staked the legitimacy of their movement on Biblical literalism — but one would have thought that Protestantism would have been friendly to the work of Ockham, given Ockham’s conflict with the Papacy, Ockham’s fideism, and his implicit position as a critic of Thomism. (I had intended to read up on the Protestant interpretation of Ockham prior to writing this post, but I haven’t yet gotten to this.) The parsimony of Copernicus’ formulation of cosmology, then, was a mixed message to the early scientific revolution in the context of the Protestant Reformation.

Both Rheticus and Copernicus’ friend Tiedemann Giese were indignant over the unsigned and unauthorized preface by Osiander. Rheticus, by some accounts, was furious, and felt that the book and Copernicus had been betrayed. He pursued legal action against the printer, but it is not clear that it was the printer who was at fault for the preface. While Rheticus suspected Osiander as the author of the preface, this was not confirmed until some time later, when Rheticus had moved on to other matters, so Osiander was never pursued legally over the preface.

Nicolaus Copernicus (1473–1543) — Mikołaj Kopernik in Polish, and Nikolaus Kopernikus in German

Copernicus’ Ockham

The most common reason adduced for preferring Copernican cosmology to Ptolemaic cosmology is not that one is true and the other is false (though this certainly is a reason to prefer Copernicus) but rather that the Copernican cosmology is the simpler and more straightforward explanation for the observed movements of the stars and the planets. The Ptolemaic system can predict the movements of stars, planets, and the moon (within margins of error acceptable in its time), but it does so by way of a much more complex and cumbersome method than that of Copernicus. Copernicus was radical in the disestablishment of traditional cosmological thought, but once beyond that first radical step of displacing the Earth from the center of the universe (a process we continue to iterate today), the solar system fell into place according to a marvelously simple plan that anyone could understand once it was explained: the sun at the center, and all the planets revolving around it. From the perspective of our rotating and orbiting Earth, the other planets also orbiting the sun appear to reverse in their course, but this is a mere artifact of our position as observers. Once Copernicus can convince the reader that, despite the apparent solidity of the Earth, it is in fact moving through space, everything else falls into place.
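The apparent reversal of course can be reproduced with almost no machinery. A minimal sketch, assuming circular, coplanar orbits for Earth and Mars (the radii and periods are rounded textbook values, my assumptions rather than anything in Copernicus): track the geocentric longitude of Mars and watch its direction of motion flip near opposition.

```python
import math

# Circular, coplanar model of Earth and Mars orbiting the sun.
# Radii in AU and periods in years are rounded textbook values.
A_EARTH, T_EARTH = 1.000, 1.000
A_MARS, T_MARS = 1.524, 1.881

def heliocentric_position(a, period, t):
    """Position at time t (in years); both planets start at angle zero."""
    theta = 2.0 * math.pi * t / period
    return a * math.cos(theta), a * math.sin(theta)

def apparent_longitude(t):
    """Ecliptic longitude of Mars as seen from the moving Earth."""
    xe, ye = heliocentric_position(A_EARTH, T_EARTH, t)
    xm, ym = heliocentric_position(A_MARS, T_MARS, t)
    return math.atan2(ym - ye, xm - xe)

# Near t = 0 the two planets are aligned (Mars at opposition); the faster
# Earth overtakes Mars, so Mars's apparent longitude decreases, i.e., Mars
# seems to move backward against the stars.
retrograde_near_opposition = apparent_longitude(0.02) < apparent_longitude(0.0)

# Away from opposition the apparent motion is prograde (longitude increasing).
prograde_elsewhere = apparent_longitude(0.52) > apparent_longitude(0.50)
```

Nothing about Mars itself changes in the model; the retrograde loop falls out purely from the observer's own motion, which is the Copernican point.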

One of the reasons that theoretical parsimony and elegance played such a significant role in the reception of Copernicus — and even the theologians who rejected his cosmology employed his calculations to clarify the calendar, so powerful was Copernicus’ work — was that the evidence given for the Copernican system was indirect. Even today, only a handful of the entire human population has ever left the planet Earth and looked down on it from above — seeing Earth from the perspective of the overview effect — and so acquired direct evidence of the Earth in space. No one, no single human being, has hovered above the solar system entire and looked down upon it, and so obtained the most direct evidence of the Copernican theory — this is an overview effect that we have not yet attained. (NB: in The Scientific Imperative of Human Spaceflight I suggested the possibility of a hierarchy of overview effects as one moved further out from Earth.)

The knowledge that we have of our solar system, and indeed of the universe entire, is derived from observations and deduction from observations. Moreover, seeing the truth of Copernican heliocentrism would not only require an overview in space, but an overview in time, i.e., one would need to hover over our solar system for hundreds of years to see all the planets revolving around the common center of the sun, and one would have to, all the while, remain focused on observing the solar system in order to be able to have “seen” the entire process — a feat beyond the limitations of the human lifetime, not to mention human consciousness.

Copernicus himself did not mention the principle of parsimony or Ockham’s razor, and certainly did not mention William of Ockham, though Ockham was widely read in Copernicus’ time. The principle of parsimony is implicit, even pervasive, in Copernicus, as it is in all good science. We don’t want to account for the universe with Rube Goldberg-like contraptions as our explanations.

In a much later era of scientific thought — in the scientific thought of our own time — Stephen Jay Gould wrote an essay titled “Is uniformitarianism necessary?” in which he argued for the view that uniformitarianism in geology had simply come to mean that geology follows the scientific method. Similarly, one might well argue that parsimony is no more necessary than uniformitarianism, and that what content of parsimony remains is simply coextensive with the scientific method. To practice science is to reason in accordance with Ockham’s razor, but we need not explicitly invoke or apply Ockham’s razor, because its prescriptions are assimilated into the scientific method. And indeed this idea fits in quite well with the casual references to Ockham’s razor such as that I quoted above. Most scientists do not need to think long and hard about parsimony, because parsimonious formulations are already a feature of the scientific method. If you follow the scientific method, you will practice parsimony as a matter of course.

Copernicus’ Ockham, then, was the Ockham already absorbed into nascent scientific thought. Perhaps it would be better to say that parsimony is implicit in the scientific method, and Copernicus, in implicitly following a scientific method that had not yet, in his time, been made explicit, was following the internal logic of the scientific method and its parsimonious demands for simplicity.

Andreas Osiander (19 December 1498 – 17 October 1552) was a German Lutheran theologian who oversaw the publication of Copernicus’ De revolutionibus and added an unsigned preface that many attributed to Copernicus.

Osiander’s Ockham

Osiander was bitterly criticized in his own time for his unauthorized preface to Copernicus, though many immediately recognized it as a gambit to allow for the reception of Copernicus’ work to involve the least amount of controversy. As I noted above, the Protestant Reformation was in full swing, and the events that would lead up to the Thirty Years’ War were beginning to unfold. Europe was a powder keg, and many felt that it was the better part of valor not to touch a match to any issue that might explode. All the while, others were doing everything in their power to provoke a conflict that would settle matters once and for all.

Osiander not only added the unsigned and unauthorized preface, but also changed the title of the whole work from De revolutionibus to De revolutionibus orbium coelestium, adding a reference to the heavenly spheres that was not in Copernicus. This, too, can be understood as a concession to the intellectually conservative establishment — or it can be seen as a capitulation. But it was the preface, and what the preface claimed as the proper way to understand the work, that was the nub of the complaint against Osiander.

Here is a long extract of Osiander’s unsigned and unauthorized preface to De revolutionibus, not quite the whole thing, but most of it:

“…it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth.
But neither of them will understand or state anything certain, unless it has been divinely revealed to him.”

Nicholas Copernicus, On the Revolutions, translation and commentary by Edward Rosen, The Johns Hopkins University Press, Baltimore and London

If we eliminate the final qualification, “unless it has been divinely revealed to him,” Osiander’s preface is a straightforward argument for instrumentalism. Osiander recommends Copernicus’ work because it gives the right results; we can stop there, and need not make any metaphysical claims on behalf of the theory. This ought to sound very familiar to the modern reader, because this kind of instrumentalism has been common in positivist thought, and especially so since the advent of quantum theory. Quantum theory is the most thoroughly confirmed theory in the history of science, confirmed to a degree of precision almost beyond comprehension. And yet quantum theory still lacks an intuitive correlate. Thus we use quantum theory because it gives us the right results, but many scientists hesitate to give any metaphysical interpretation to the theory.

Copernicus was a staunch scientific realist, as were those most convinced of his theory, like Rheticus. He did not propose his cosmology as a mere system of calculation, but insisted that his theory was the true theory describing the motions of the planets around the sun. It follows from this uncompromising scientific realism that other theories are not merely less precise in calculating the movements of the planets, but false. Scientific realism accords with common sense realism when it comes to the idea that there is a correct account of the world, and other accounts that deviate from the correct account are false. But we all know that scientific theories are underdetermined by the evidence. To formulate a law is to go beyond the finite evidence and to predict an infinitude of possible future states of the phenomenon in question.

Scientific realism, then, is an ontologically robust position, and this ontological robustness is a function of the underdetermination of the theory by the evidence. Osiander argues of Copernicus’ theory that, “if they provide a calculus consistent with the observations, that alone is enough.” So Osiander is not willing to go beyond the evidence and posit the truth of an underdetermined theory. Moreover, Osiander was willing to maintain empirically equivalent theories, “since different hypotheses are sometimes offered for one and the same motion.” Given empirically equivalent theories that can both “provide a calculus consistent with the observations,” why would one theory be favored over another? Osiander states that the astronomer will prefer the simplest explanation (hence explaining Copernicus’ position) while the philosopher will seek a semblance of truth. Neither, however, can know what this truth is without divine revelation.

Osiander’s Ockham is the convenience of the astronomer to seek the simplest explanation for his calculations; the astronomer is justified in employing the simplest explanation of the most precise method available to calculate and predict the course of the heavens, but he cannot know the truth of his theory unless that truth is guaranteed by some outside and transcendent evidence not available through science — a deus ex machina for the mind.

Copernicus stands at the beginning of the scientific revolution, and he stands virtually alone.

The origins of the scientific revolution in Copernicus

Copernicus’ Ockham was ontological parsimony; Osiander’s Ockham was methodological parsimony. Are we forced to choose between the two, or can we find a balance between ontological and methodological parsimony? These are still living questions in the philosophy of science today, and there is a sense in which it is astonishing that they appeared so early in the scientific revolution.

As noted above, the world of Copernicus was essentially a medieval world. Toruń and Frombork were far from the medieval centers of learning in Paris and Oxford, and about as far from the renaissance centers of learning in Florence and Nuremberg. Nevertheless, the new cosmology that emerged from the scientific revolution, and which is still our cosmology today, continuously revised and improved, can be traced to the Baltic coast of Poland in the late fifteenth and early sixteenth century. The controversy over how to interpret the findings of science can be traced to the same root.

The conventions of the scientific method were established in the work of Copernicus, Galileo, and Newton, the seminal thinkers of the scientific revolution. Like their cosmologies, the scientific method has been continuously revised and improved. That Copernicus grasped in essence as much of the scientific method as he did, working in near isolation far from the intellectual centers of western civilization, demonstrates both the power of Copernicus’ mind and the power of the scientific method itself. As implied above, once grasped, the scientific method has an internal logic of its own that directs the development of scientific thought.

The scientific method — methodological naturalism — exists in an uneasy partnership with scientific realism — ontological naturalism. We can see that this tension was present right from the very beginning of the scientific revolution, before the scientific method was ever formulated, and the tension continues down to the present day. Contemporary analytical philosophers discuss the questions of scientific realism in highly technical terms, but it is still the same debate that began with Copernicus, Rheticus, and Osiander. Perhaps we can count the tension between methodological naturalism and ontological naturalism as one of the fundamental tensions of scientific civilization.

. . . . .

Updates and Addenda

This post began as a single sentence in one of my notebooks, and continued to grow as I worked on it. As soon as I posted it I realized that the discussions of scientific realism, instrumentalism, and methodological naturalism in relation to parsimony could be much better. With additional historical and philosophical discussion, this post might well be transformed into an entire book. So for the questioning reader, yes, I understand the inadequacy of what I have written above, and that I have not done justice to my topic.

Shortly after posting the above, Paul Carr pointed out to me that the joint ESA-NASA Ulysses deep-space mission sent a spacecraft to study the poles of the sun, meaning that we have already sent a spacecraft out of the plane of the solar system, one that could have “looked down” on our star and its planetary system, although the mission was not designed for this and had no cameras on board. If we did position a camera “above” our solar system, it could take pictures of our heliocentric solar system. This, however, would still be indirect evidence — more direct than deductions from observations, but not as direct as seeing with one’s own eyes. It would be like the famous picture of the “blue marble” Earth, which is an overview experience for those of us who have not been into orbit or to the moon, but which is not quite the same as going into orbit or to the moon.

Paul Carr also drew my attention to Astronomy Cast Episode 390: Occam’s Razor and the Problem with Probabilities, with Fraser Cain and Pamela Gay, which discusses Ockham’s razor in relation to positing aliens as a scientific explanation.

. . . . .
