23 November 2012
What is the Church-Turing Thesis? The Church-Turing Thesis (also called Church’s Thesis, Church’s Conjecture, or the Church-Turing Conjecture, among other names) is an idea from theoretical computer science that emerged from research in the foundations of logic and mathematics, and that ultimately bears upon what can be computed, and thus, by extension, what a computer can do (and what a computer cannot do).
Note: For clarity’s sake, I ought to point out that Church’s Thesis and Church’s Theorem are distinct. Church’s Theorem is an established theorem of mathematical logic, proved by Alonzo Church in 1936, that there is no decision procedure for logic (i.e., there is no method for determining whether an arbitrary formula in first-order logic is a theorem). But the two – Church’s Theorem and Church’s Thesis – are related: both follow from the exploration of the possibilities and limitations of formal systems and the attempt to define these in a rigorous way.
Even to state Church’s Thesis is controversial. There are many formulations, and many of these alternative formulations come straight from Church and Turing themselves, who framed the idea differently in different contexts. Also, when you hear computer science types discuss the Church-Turing thesis you might think that it is something like an engineering problem, but it is essentially a philosophical idea. What the Church-Turing thesis is not is as important as what it is: it is not a theorem of mathematical logic, it is not a law of nature, and it is not a limit of engineering. We could say that it is a principle, because the word “principle” is ambiguous and thus covers the various formulations of the thesis.
There is an article on the Church-Turing Thesis at the Stanford Encyclopedia of Philosophy, one at Wikipedia (of course), and even a website dedicated to a critique of the Stanford article, Alan Turing in the Stanford Encyclopedia of Philosophy. All of these are valuable resources on the Church-Turing Thesis, and well worth reading to gain some orientation.
One way to formulate Church’s Thesis is that all effectively computable functions are general recursive. Both “effectively computable functions” and “general recursive” are technical terms, but there is an important difference between these technical terms: “effectively computable” is an intuitive conception, whereas “general recursive” is a formal conception. Thus one way to understand Church’s Thesis is that it asserts the identity of a formal idea and an informal idea.
One of the reasons that there are many alternative formulations of the Church-Turing thesis is that there are several formally equivalent formulations of recursiveness: recursive functions, Turing computable functions, Post computable functions, representable functions, lambda-definable functions, and Markov normal algorithms among them. All of these are formal conceptions that can be rigorously defined. For the other term that constitutes the identity that is Church’s thesis, there are also several alternative formulations of effectively computable functions, and these include other intuitive notions like that of an algorithm or a procedure that can be implemented mechanically.
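To give the flavor of what a formal conception of computability looks like, here is a minimal sketch (in Python, purely for illustration; the function names are my own) of definition by recursion, the kernel of the general recursive functions: addition and multiplication built up from nothing but zero and the successor operation.

```python
# A sketch of definition by (primitive) recursion. Everything is built
# from the successor operation alone, following the recursion equations
# written in the docstrings.

def succ(n):
    """The successor function S(n) = n + 1."""
    return n + 1

def add(x, y):
    """Addition defined by recursion:
       add(x, 0)    = x
       add(x, S(y)) = S(add(x, y))"""
    if y == 0:
        return x
    return succ(add(x, y - 1))

def mult(x, y):
    """Multiplication defined by recursion on addition:
       mult(x, 0)    = 0
       mult(x, S(y)) = add(mult(x, y), x)"""
    if y == 0:
        return 0
    return add(mult(x, y - 1), x)

print(add(2, 3))   # 5
print(mult(3, 4))  # 12
```

Church’s Thesis asserts that every function we would intuitively call effectively computable can be generated in some such rigorously specifiable fashion; the thesis itself, identifying an intuitive notion with a formal one, is precisely what no such definition could ever prove.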
These may seem like recondite matters with little or no relationship to ordinary human experience, but I am surprised how often I find the same theoretical conflict played out in the most ordinary and familiar contexts. The dialectic of the formal and the informal (i.e., the intuitive) is much more central to human experience than is generally recognized. For example, the conflict between intuitively apprehended free will and apparently scientifically unimpeachable determinism is a conflict between an intuitive and a formal conception that both seem to characterize human life. Compatibilist accounts of determinism and free will may be considered the “Church’s thesis” of human action, asserting the identity of the two.
It should be understood here that when I discuss intuition in this context I am talking about the kind of mathematical intuition I discussed in Adventures in Geometrical Intuition, although the idea of mathematical intuition can be understood as perhaps the narrowest formulation of that intuition that is the polar concept standing in opposition to formalism. Kant made a useful distinction between sensory intuition and intellectual intuition that helps to clarify what is intended here, since the very idea of intuition in the Kantian sense has become lost in recent thought. Once we think of intuition as something given to us in the same way that sensory intuition is given to us, only without the mediation of the senses, we come closer to the operative idea of intuition as it is employed in mathematics.
Mathematical thought, and formal accounts of experience generally speaking, of course, seek to capture our intuitions, but this formal capture of the intuitive is itself an intuitive and essentially creative process even when it culminates in the formulation of a formal system that is essentially inaccessible to intuition (at least in parts of that formal system). What this means is that intuition can know itself, and know itself to be an intuitive grasp of some truth, but formality can only know itself as formality and cannot cross over the intuitive-formal divide in order to grasp the intuitive even when it captures intuition in an intuitively satisfying way. We cannot even understand the idea of an intuitively satisfying formalization without an intuitive grasp of all the relevant elements. As Spinoza said that the true is the criterion both of itself and of the false, we can say that the intuitive is the criterion both of itself and the formal. (And given that, today, truth is primarily understood formally, this is a significant claim to make.)
The above observation can be formulated as a general principle such that the intuitive can grasp all of the intuitive and a portion of the formal, whereas the formal can grasp only itself. I will refer to this as the principle of the asymmetry of intuition. We can see this principle operative both in the Church-Turing Thesis and in popular accounts of Gödel’s theorem. We are all familiar with popular and intuitive accounts of Gödel’s theorem (since the formal accounts are so difficult), and it is not unusual to make claims for the limitative theorems that go far beyond what they formally demonstrate.
All of this holds also for the attempt to translate traditional philosophical concepts into scientific terms — the most obvious example being free will, supposedly accounted for by physics, biochemistry, and neurobiology. But if one makes the claim that consciousness is nothing but such-and-such physical phenomenon, it is impossible to cash out this claim in any robust way. The science is quantifiable and formalizable, but our concepts of mind, consciousness, and free will remain stubbornly intuitive and have not been satisfyingly captured in any formalism — the determination of any such satisfying formalization could only be determined by intuition and therefore eludes any formal capture. To “prove” determinism, then, would be as incoherent as “proving” Church’s Thesis in any robust sense.
There certainly are interesting philosophical arguments on both sides of Church’s Thesis — that is to say, both its denial and its affirmation — but these are arguments that appeal to our intuitions and, most crucially, our idea of ourselves is intuitive and informal. I should like to go further and to assert that the idea of the self must be intuitive and cannot be otherwise, but I am not fully confident that this is the case. Human nature can change, albeit slowly, along with the human condition, and we could, over time — and especially under the selective pressures of industrial-technological civilization — shape ourselves after the model of a formal conception of the self. (In fact, I think it very likely that this is happening.)
I cannot even say — I would not know where to begin — what would constitute a formal self-understanding of the self, much less any kind of understanding of a formal self. Well, maybe not. I have written elsewhere that the doctrine of the punctiform present (not very popular among philosophers these days, I might add) is a formal doctrine of time, and in so far as we identify internal time consciousness with the punctiform present we have a formal doctrine of the self.
While the above account is one to which I am sympathetic, this kind of formal concept — I mean the punctiform present as a formal conception of time — is very different from the kind of formality we find in physics, biochemistry, and neuroscience. We might assimilate it to some mathematical formalism, but this is an abstraction made concrete in subjective human experience, not in physical science. Perhaps this partly explains the fashionable anti-philosophy that I have written about.
. . . . .
. . . . .
. . . . .
14 October 2012
A message to the foundations of mathematics (FOM) listserv by Frank Waaldijk alerted me to the fact that today, 14 October 2012, is the one hundredth anniversary of Brouwer’s inaugural address at the University of Amsterdam, “Intuitionism and Formalism.” (I have discussed Frank Waaldijk earlier in P or Not-P and What is the Relationship Between Constructive and Non-Constructive Mathematics?)
I have called this post “One Hundred Years of Intuitionism and Formalism” but I should have called it “One Hundred Years of Intuitionism” since, of the three active contenders as theories for the foundations of mathematics a hundred years ago, only intuitionism is still with us in anything like its original form. The other contenders — formalism and logicism — are still with us, but in forms so different that they no longer resemble any kind of programmatic approach to the foundations of mathematics. In fact, it could be said that logicism was gradually transformed into technical foundational research, primarily logical in character, without any particular programmatic content, while formalism, following in a line of descent from Hilbert, has also been incrementally transformed into mainstream foundational research, but primarily mathematical in character, and also without any particular programmatic or even philosophical content.
The very idea of “foundations” has come to be questioned in the past hundred years — though, as I commented a few days ago in The Genealogy of the Technium, the early philosophical foundationalist programs continue to influence my own thinking — and we have seen that intuitionism has been able to make the transition from a foundationalist-inspired doctrine to a doctrine that might be called mathematical “best practices.” In contemporary philosophy of mathematics, one of the most influential schools of thought for the past couple of decades or more has been to focus not on theories of mathematics, but rather on mathematical practices. Sometimes this is called “neo-empiricism.”
Intuitionism, I think, has benefited from the shift from the theoretical to the practical in the philosophy of mathematics, since intuitionism was always about making a distinction between the acceptable and the unacceptable in logical principles, mathematical reasoning, proof procedures, and all those activities that are part of the mathematician’s daily bread and butter. This shift has also made it possible for intuitionism to distance itself from its foundationalist roots at a time when foundationalism is on the ropes.
Brouwer is due some honor for his prescience in formulating intuitionism a hundred years ago — and intuitionism came almost fully formed out of the mind of Brouwer as syllogistic logic came almost fully formed out of the mind of Aristotle — so I would like to celebrate Brouwer on this, the one hundredth anniversary of his inaugural address at the University of Amsterdam, in which he formulated so many of the central principles of intuitionism.
Brouwer was prescient in another sense as well. He ended his inaugural address with a quote from Poincaré that is well known in the foundationalist community, since it has been quoted in many works since:
“Les hommes ne s’entendent pas, parce qu’ils ne parlent pas la même langue et qu’il y a des langues qui ne s’apprennent pas.”
This might be (very imperfectly) translated into English as follows:
“Men do not understand each other because they do not speak the same language and there are languages that cannot be learned.”
What Poincaré called men not understanding each other Kuhn would later and more famously call incommensurability. And while we have always known that men do not understand each other, it had been widely believed before Brouwer that at least mathematicians understood each other because they spoke the same universal language of mathematics. Brouwer said that his exposition revealed, “the fundamental issue, which divides the mathematical world.” A hundred years later the mathematical world is still divided.
For those who have not studied the foundations and philosophy of mathematics, it may come as a surprise that the past century, which has been so productive of research in advanced mathematics — arguably going beyond all the cumulative research in mathematics up to that time — has also been a century of conflict during which the ideas of mathematics as true, certain, and necessary — ideas that had been central to a core Platonic tradition of Western thought — have all been questioned and largely abandoned. It has been a raucous century for mathematics, but also a fruitful one. A clever mathematician with a good literary imagination could write a mathematical analogue of Mandeville’s Fable of the Bees in which it is precisely the polyglot disorder of the hive that made it thrive.
That core Platonic tradition of Western thought is now, even as I write these lines, dissipating just as the illusions of the philosopher, freed from the cave of shadows, dissipate in the light of the sun above.
Brouwer, like every revolutionary (and we recall that it was Weyl, who was sympathetic to Brouwer, who characterized Brouwer’s work as a revolution in mathematics), wanted to do away with an old, corrupt tradition and to replace it with something new and pure and edifying. But in the affairs of men, a revolution is rarely complete, and it is, far more often, the occasion of schism than conversion.
Many were converted by Brouwer; many are still being converted today. As I wrote above, intuitionism remains a force to be reckoned with in contemporary mathematical thought in a way that logicism and formalism cannot claim to be such a force. But the conversions and subsequent defections left a substantial portion of the mathematical community unconverted and faithful to the old ways. The tension and the conflict between the old ways and the new ways have been a source of creative inspiration.
Precisely that moment in history when the very nature of mathematics was called into question became the same moment in history when mathematics joined technology in exponential growth.
Mars is the true muse of men.
. . . . .
. . . . .
. . . . .
. . . . .
10 October 2012
Addendum on Civilization and the Technium
in regard to human, animal, and alien technology
One of the virtues of taking the trouble to formulate one’s ideas in an explicit form is that, once so stated, all kinds of assumptions one was making become obvious, as well as all kinds of problems that one didn’t see when the idea was just floating around in one’s consciousness, as a kind of intellectual jeu d’esprit, as it were.
Bertrand Russell wrote about this, or, at least, about a closely related experience, in one of his well-known early essays, in which he discussed the importance not only of making our formulations explicit, but of doing so by way of putting some distance between our thoughts and the kind of facile self-evidence that can distract us from the real business at hand:
“It is not easy for the lay mind to realise the importance of symbolism in discussing the foundations of mathematics, and the explanation may perhaps seem strangely paradoxical. The fact is that symbolism is useful because it makes things difficult. (This is not true of the advanced parts of mathematics, but only of the beginnings.) What we wish to know is, what can be deduced from what. Now, in the beginnings, everything is self-evident; and it is very hard to see whether one self-evident proposition follows from another or not. Obviousness is always the enemy to correctness. Hence we invent some new and difficult symbolism, in which nothing seems obvious. Then we set up certain rules for operating on the symbols, and the whole thing becomes mechanical. In this way we find out what must be taken as premiss and what can be demonstrated or defined. For instance, the whole of Arithmetic and Algebra has been shown to require three indefinable notions and five indemonstrable propositions. But without a symbolism it would have been very hard to find this out. It is so obvious that two and two are four, that we can hardly make ourselves sufficiently skeptical to doubt whether it can be proved. And the same holds in other cases where self-evident things are to be proved.”
Bertrand Russell, Mysticism and Logic, “Mathematics and the Metaphysicians”
Russell’s foundationalist program in the philosophy of mathematics closely followed the method that he outlined so lucidly in the passage above. Principia Mathematica makes the earliest stages of mathematics notoriously difficult, but does so in service to the foundationalist ideal of revealing hidden presuppositions and incorporating them into the theory in an explicit form.
Another way that Russell sought to overcome self-evidence is through the systematic pursuit of the highest degree of generality, which drives us to formulate concepts that are alien to common sense:
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
These are two philosophical principles — the explication of ultimate simples (foundations) and the pursuit of generality — that I have very much taken to heart and attempted to put into practice in my own philosophical work. Russell’s foundationalist method shows us what can be deduced from what, and gives to these deductions the most widely applicable results. To these philosophical imperatives of Russell I have myself added another, parallel to his pursuit of generality, and that is the simultaneous pursuit of formality: it is (or ought to be) a principle in all theoretical reasoning to formalize to the utmost…
Russell also observed the imperative of formalization, though he himself did not systematically distinguish between generalization and formalization, and it is a tough problem; I’ve been working on it for about twenty years and haven’t yet arrived at definitive formulations. As far as provisional formulations go, generalization gives us the highly comprehensive conceptions like astrobiology and civilization and the technium that allow us to unify a vast body of knowledge that must be studied by inter-disciplinary means, while formalization gives us the distinctions we must carefully observe within our concepts, so that generalization does not simply give us the night in which all cows are black (to borrow a phrase that Hegel used to ridicule Schelling’s conception of the Absolute).
Foundationalism as a philosophical movement is very much out of fashion now, although the foundations of mathematics, pursued eo ipso, remains an active and highly technical branch of logico-mathematical research, and today looks a lot different from what it was when it was first formulated as a philosophical research program a hundred years ago by Frege, Peano, Russell, Whitehead, Wittgenstein, and others. Nevertheless, I continue to derive much philosophical clarification from the early philosophical stages of foundationalism, especially in regard to theories that have not (yet) been reduced to formal systems, as is the case with theories of history or theories of civilization.
I am still a long way from reducing my ideas about history or civilization to first principles, much less to symbolism, but I feel like I am making progress, and the discovery of assumptions and problems is a sure sign of progress; in this sense, my post on Civilization and the Technium marked a stage of progress in my thinking, because of the inadequacy of my formulations that it revealed.
In my Civilization and the Technium I compared the extent of civilization — a familiar idea that has not yet received anything like an adequate definition — with the extent of the technium — a recent and hence unfamiliar idea for which there is an explicit formulation, but since it is new its full scope remains untested and untried, and therefore it presents problems that the idea of civilization does not present. I formulated concepts of the technium parallel to formulations of astrobiology and astrocivilization, as follows:
● Eotechnium: the origins of the technium, wherever and whenever it occurs, terrestrial or otherwise
● Esotechnium: our terrestrial technium
● Exotechnium: any extraterrestrial technium, exclusive of the terrestrial technium
● Astrotechnium: the totality of technology in the universe, our terrestrial and any extraterrestrial technium taken together in their cosmological context
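The extensional relations among these concepts can be made explicit in a toy set-theoretic sketch (Python, purely for illustration; the example instances are hypothetical placeholders of my own, and the eotechnium, being a matter of origins rather than extension, is deliberately left out):

```python
# A toy set-theoretic sketch of the technium taxonomy. Only the
# relations among the concepts are being illustrated; the members
# of each set are hypothetical placeholders.

esotechnium = {"agriculture", "printing", "computers"}   # our terrestrial technium
exotechnium = {"hypothetical_alien_artifact"}            # any extraterrestrial technium

# The astrotechnium is the totality of technology in the universe:
# the terrestrial and any extraterrestrial technium taken together.
astrotechnium = esotechnium | exotechnium

# By definition the esotechnium and exotechnium are disjoint,
# and together they exhaust the astrotechnium.
assert esotechnium.isdisjoint(exotechnium)
assert astrotechnium == esotechnium | exotechnium
```

The eotechnium does not fit into this extensional picture at all, which is itself a hint of the further articulation the concepts require: it names a source, not a region, and the discussion below turns exactly on the question of how many such sources there are.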
I realize now that when I did this I was making slightly different assumptions for civilization and the technium. The intuitive basis of this was that I assumed, in regard to the technium, that the technium I was describing was all due to human activity (a clear case of anthropic bias), so that the distinction between the esotechnium and the exotechnium was the distinction between terrestrial human technology and extraterrestrial human technology.
When, on the other hand, I formulated the parallel concepts for civilization, I assumed that esocivilization was terrestrial human civilization and that exocivilization would be alien civilizations not derived from the human eocivilization source.
Another way to put this is that I assumed the validity of the terrestrial eotechnium thesis even while I also assumed that the terrestrial eocivilization thesis did not hold. Is that too much technical terminology? In other words, I assumed the uniqueness of the human technium but I did not assume the uniqueness of human industrial-technological civilization.
This points to a further articulation (and therefore a further formalization) of the concepts employed: one must keep the conception of eocivilization (the origins of civilization) clearly in mind, and distinguish between terrestrial civilization that expands into extraterrestrial space and therefore becomes exocivilization from its eocivilization source on the one hand, and on the other hand a xeno-eocivilization source that constitutes exocivilization by virtue of its xenomorphic origins. If one is going to distinguish between esocivilization and exocivilization, one must identify the eocivilization source, or all is for naught.
All of this holds, mutatis mutandis, for the eotechnium, esotechnium, exotechnium, and astrotechnium, although I ought to point out that my formulations in Civilization and the Technium, repeated above, were accurate because they were formulated in Russellian generality. It was in my following exposition that I failed to observe all the requisite distinctions. But there’s more. I’ve since realized that further distinctions can be made.
As I thought about the possibility of a xenotechnium, i.e., a technium produced by a sentient alien species, I realized that there is a xenotechnium right here on Earth (a terrestrial xenotechnium, or non-hominid technium), in the form of tool use and other forms of technology by non-human species. We are all familiar with famous examples like the chimpanzees who will strip the leaves off a branch and then use the branch to extract termites from a termite mound. Yesterday I alluded to the fact that otters use rocks to break open shells. There are many other examples. Apart from tool use, beaver dams and the nests of birds, while not constructed with tools, certainly represent a kind of technology.
If we take all instances of animal technology together they constitute a terrestrial non-human technium. If we take all instances of technology known to us, human and non-human together, we have a still more comprehensive conception of the technium that is more general than the concept of the human-specific technium and therefore less subject to anthropic bias (a concept due to Nick Bostrom, who also formulated the concept of existential risk). This latter, more comprehensive conception of the technium would seem to be favored by Russell’s imperative of generalization to the utmost, although we must continue to make the finer distinctions within the concept for the formalization of the conception of the technium to keep pace with its generalization.
There is a systematic relationship between terrestrial biology and the terrestrial technium, both hominid and non-hominid. Eobiology facilitates the emergence of a terrestrial eotechnium, of which all instances of technology, hominid and non-hominid alike, can be considered expressions. This is already explicit in Kevin Kelly’s book, What Technology Wants, as one of his arguments is that the emergence and growth of the technium is continuous with the emergence and growth of biological organization and complexity. He cites John Maynard Smith and Eörs Szathmáry as defining the following thresholds of biological organization (p. 46):
One replicating molecule → Interacting population of replicating molecules
Replicating molecules → Replicating molecules strung into chromosome
Chromosome of RNA enzymes → DNA proteins
Cell without nucleus → Cell with nucleus
Asexual reproduction (cloning) → Sexual recombination
Single-cell organism → Multicell organism
Solitary individual → Colonies and superorganisms
Primate societies → Language-based societies
He then suggests the following sequence of thresholds within the growth of the technium (p. 47):
Primate communication → Language
Oral lore → Writing/mathematical notation
Scripts → Printing
Book knowledge → Scientific method
Artisan production → Mass production
Industrial culture → Ubiquitous global communication
And then he connects the two sequences:
The trajectory of increasing order in the technium follows the same path that it does in life. Within both life and the technium, the thickening of interconnections at one level weaves the new level of organization above it. And it’s important to note that the major transitions in the technium begin at the level where the major transitions in biology left off: Primate societies give rise to language. The invention of language marks the last major transformation in the natural world and also the first transformation in the manufactured world. Words, ideas, and concepts are the most complex things social animals (like us) make, and also the simplest foundation for any type of technology. (p. 48)
Thus the genealogy of the technium is continuous with the genealogy of life.
Considering this in relation to the possibility of a xenotechnium, one would expect the same to be the case: I would expect a systematic relationship to hold between xenobiology and a xenotechnium, such that an alien eobiology would facilitate the emergence of an alien eotechnium. And, extending this naturalistic line of thought, which assumes similar patterns of development to hold for peer industrial-technological civilizations, I would further assume that a xenotechnium would not always coincide with the xenocivilization with which it is associated. If there is a “first contact” between terrestrial civilization and a xenocivilization, it is likely to be a contact between the expanding terrestrial technium (which is, technically, no longer terrestrial precisely because it is expanding extraterrestrially) and an expanding xenotechnium.
There remains much conceptual work to be done here, as the reader will have realized. I’ll continue to work on these formulations, keeping in mind the imperatives of generality and formality, and perhaps someday converging on a foundationalist account of biology, civilization, and the technium that is at once both fully comprehensive and fully articulated.
. . . . .
. . . . .
. . . . .
19 May 2012
We can make a distinction among distinctions between ad hoc and principled distinctions. The former category — ad hoc distinctions — may ultimately prove to be based on a principle, but that principle is unknown as long as the distinction remains an ad hoc distinction. This suggests a further distinction among distinctions between ad hoc distinctions that really are ad hoc, and which are based on no principle, and ad hoc distinctions that are really principled distinctions but the principle in question is not yet known, or not yet formulated, at the time the distinction is made. So there you have a principled distinction between distinctions.
A perfect evocation of ad hoc distinctions is to be found in the opening paragraph of the Preface to Foucault’s The Order of Things:
This book first arose out of a passage in Borges, out of the laughter that shattered, as I read the passage, all the familiar landmarks of my thought — our thought, the thought that bears the stamp of our age and our geography — breaking up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things, and continuing long afterwards to disturb and threaten with collapse our age-old distinction between the Same and the Other. This passage quotes a ‘certain Chinese encyclopedia’ in which it is written that ‘animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’. In the wonderment of this taxonomy, the thing we apprehend in one great leap, the thing that, by means of the fable, is demonstrated as the exotic charm of another system of thought, is the limitation of our own, the stark impossibility of thinking that.
Such distinctions are comic, though Foucault recognizes that our laughter is uneasy: even as we immediately recognize the ad hoc character of these distinctions, we realize that the principled distinctions we routinely employ may not be so principled as we supposed.
Foucault continues this theme for several pages, and then gives another formulation — perhaps, given his interest in mental illness, an illustration that is closer to reality than Borges’ Chinese dictionary:
“It appears that certain aphasiacs, when shown various differently coloured skeins of wool on a table top, are consistently unable to arrange them into any coherent pattern; as though that simple rectangle were unable to serve in their case as a homogeneous and neutral space in which things could be placed so as to display at the same time the continuous order of their identities or differences as well as the semantic field of their denomination. Within this simple space in which things are normally arranged and given names, the aphasiac will create a multiplicity of tiny, fragmented regions in which nameless resemblances agglutinate things into unconnected islets; in one corner, they will place the lightest-coloured skeins, in another the red ones, somewhere else those that are softest in texture, in yet another place the longest, or those that have a tinge of purple or those that have been wound up into a ball. But no sooner have they been adumbrated than all these groupings dissolve again, for the field of identity that sustains them, however limited it may be, is still too wide not to be unstable; and so the sick mind continues to infinity, creating groups then dispersing them again, heaping up diverse similarities, destroying those that seem clearest, splitting up things that are identical, superimposing different criteria, frenziedly beginning all over again, becoming more and more disturbed, and teetering finally on the brink of anxiety.”
Foucault here writes that, “the sick mind continues to infinity,” in other words, the process does not terminate in a definite state-of-affairs. This implies that the healthy mind does not continue to infinity: rational thought must make concessions to human finitude. While I find the use of the concept of the pathological in this context questionable, and I have to wonder if Foucault was unwittingly drawn into the continental anti-Cantorian tradition (Brouwerian intuitionism and the like, though I will leave this aside for now), there is some value to the idea that a scientific process (such as classification) must terminate in a finite state-of-affairs, even if only tentatively. I will try to show, moreover, that there is an implicit principle in this attitude, and that it is in fact a principle that I have discussed previously.
The quantification of continuous data requires certain compromises. Two of these compromises are finite precision errors (also called rounding errors) and finite dimension errors (also called truncation errors). Rounding errors should be fairly obvious: finite parameters cannot abide infinite decimal expansions, and so we set a limit of six decimal places, or twenty, or more — but we must set a limit. The difference between an actual figure and the limited decimal expansion of the same figure is called a finite precision error. Finite dimension errors result from the need to introduce arbitrary gradations into a continuum. Using the real number system, any continuum can be faithfully represented, but this representation would require infinite decimal expansions, so we see that there is a deep consonance between finite precision errors and finite dimension errors. Thus, for example, we measure temperature by degrees, and the arbitrariness of this measure is driven home to us by the different scales we can use for this measurement. If we could specify temperature using real numbers (including transcendental numbers) we would not have to compromise. But engineering and computers and even human minds need to break things up into manageable finite quantities, so we speak of 3 degrees C, or even 3.14 degrees C, but we don’t try to work with pi degrees C. Thus the increments of temperature, or of any other measurement, involve both finite precision errors and finite dimension errors.
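Both compromises can be made concrete in a few lines of code. The sketch below (the function names are my own, for illustration only) takes an "exact" temperature of pi degrees, rounds it to a fixed number of decimal places (a finite precision error), and snaps it onto a grid of whole-degree increments (a finite dimension error):

```python
from decimal import Decimal, ROUND_HALF_UP
import math

def round_to_places(x: float, places: int) -> Decimal:
    """Finite precision: cut an infinite decimal expansion down to a fixed number of places."""
    quantum = Decimal(1).scaleb(-places)  # e.g. Decimal('0.000001') for places=6
    return Decimal(x).quantize(quantum, rounding=ROUND_HALF_UP)

def discretize(x: float, step: float) -> float:
    """Finite dimension: snap a continuous value onto a grid of fixed increments."""
    return round(x / step) * step

true_temp = math.pi  # an "exact" temperature of pi degrees C

six_places = round_to_places(true_temp, 6)        # 3.141593
precision_error = float(six_places) - true_temp   # tiny, but never zero

whole_degrees = discretize(true_temp, 1.0)        # 3.0 degrees C
dimension_error = whole_degrees - true_temp       # the cost of measuring by degrees
```

Neither error can be eliminated, only traded off: more decimal places or finer increments shrink the errors but never abolish them, which is the point of the passage above.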
In so far as quantification is necessary to the scientific method, finite dimension errors are necessary to the scientific method. In several posts (e.g., Axioms and Postulates in Strategy) I have cited Carnap’s tripartite distinction among scientific concepts, the three being classificatory, comparative, and quantitative concepts. Carnap characterizes the emergence of quantitative scientific concepts as the most sophisticated form of scientific thought, but in reviewing Carnap’s scientific concepts in the light of finite precision errors and finite dimension errors, it is immediately obvious that classificatory concepts and comparative concepts do not necessarily involve finite precision errors and finite dimension errors. It is only with the introduction of quantitative concepts that science becomes sufficiently precise that its precision forces compromises upon us. However, I should point out that classificatory concepts routinely force us to accept finite dimension errors, although they do not involve finite precision errors. The example given by Foucault, quoted above, illustrates the inherent tension in classificatory concepts.
We accept finite precision errors and finite dimension errors as the price of doing science, and indeed as the price of engaging in rational thought. As Foucault implied in the above quote, the healthy and sane mind must draw lines and define limits and call a halt to things. Sometimes these limits are close to being arbitrary. We retain the ambition of “carving nature at the joints,” but we accept that we can’t always locate the joint but at times must cleave the carcass of nature regardless.
For this willingness to draw lines and establish limits and to call a halt to proceedings I will give the name The Truncation Principle, since it is in virtue of cutting off some portion of the world and treating it as though it were a unified whole that we are able to reason about the world.
As I mentioned above, I have discussed this problem previously, and in my discussion I noted that I wanted to give an exposition of a principle and a fallacy, but that I did not have a name for it yet, so I called it An Unnamed Principle and an Unnamed Fallacy. Now I have a name for it, and I will use this name, i.e., the truncation principle, from now on.
Note: I was tempted to call this principle the “baby retention principle” or even the “hang on to your baby principle” since it is all about the commonsense notion of not throwing out the baby with the bathwater.
In An Unnamed Principle and an Unnamed Fallacy I initially formulated the principle as follows:
The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.
What I most want to highlight is that when someone points out there are gray areas that seem to elude classification by any clear cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.
A distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
I hope to be able to elaborate on this formulation as I continue to think about the truncation principle and its applications in philosophical, formal, and scientific thought.
Usually when we hear “truncation” we immediately think of the geometrical exercise of regularly cutting away parts of the regular (Platonic) solids, yielding truncated polyhedra and converging on rectified polyhedra. This is truncation in space. Truncation in time, on the other hand, is what is more commonly known as historical periodization. How exactly one historical period is to be cut off from another is always problematic, not least due to the complexity of history and the sheer number of outliers that seem to falsify any attempt at periodization. And yet, we need to break history up into comprehensible chunks. When we do so, we engage in temporal truncation.
All the problems of philosophical logic that present themselves to the subtle and perceptive mind when contemplating a spatial truncation, as, for example, in defining the Pacific Ocean — where exactly does it end in relation to the Indian Ocean? — occur in spades in making a temporal truncation. Yet if rational inquiry is to begin (and here we do not even raise the question of where rational inquiry ends) we must make such truncations, and our initial truncations are crude and mostly ad hoc concessions to human finitude. Thus I introduce the truncation principle as an explicit justification of truncations as we employ them throughout reasoning.
And, as if we hadn’t already laid up enough principles and distinctions for today, here is a principle of principles of distinctions: every principled distinction implies a fallacy that takes the form of neglecting this distinction. With an ad hoc distinction there is no question of fallacy, because there is no principle to violate. Where there is a principle involved, however, the violation of the principle constitutes a fallacy.
Contrariwise, every fallacy implies a principled distinction that ought to have been made. If we observe the appropriate principled distinctions, we avoid fallacies, and if we avoid fallacies we appropriately distinguish that which ought to be distinguished.
. . . . .
. . . . .
. . . . .
9 April 2012
Geopolitics and Geostrategy
as formal sciences
In a couple of posts — Formal Strategy and Philosophical Logic: Work in Progress and Axioms and Postulates of Strategy — I have explicitly discussed the possibility of a formal approach to strategy. This has been a consistent theme of my writing over the past three years, even when it is not made explicit. The posts that I wrote on theoretical geopolitics can also be considered an effort in the direction of formal strategy.
There is a sense in which formal thought is antithetical to the tradition of geopolitics, which latter seeks to immerse itself in the empirical facts of how history gets made, in contradistinction to the formalist’s desire to define, categorize, and clarify the concepts employed in analysis. Yet in so far as geopolitics takes the actual topographical structure of the land as its point of analytical departure, this physical structure becomes the form upon which the geopolitician constructs the logic of his or her analysis. Geopolitical thought is formal in so far as the forms to which it conforms itself are physical, topographical forms.
Most geopoliticians, however, have no inkling of the formal dimension of their analyses, and so this formal dimension remains implicit. I have commented elsewhere that one of the most common fallacies is the conflation of the formal and the informal. In Cartesian Formalism I wrote:
One of the biggest and yet one of the least recognized blunders in philosophy (and certainly not only in philosophy) is to conflate the formal and the informal, whether we are concerned with formal and informal objects, formal and informal methods, or formal and informal ideas, etc. (I recently treated this topic on my other blog in relation to the conflation of formal and informal strategy.)
Geopolitics, geostrategy, and in fact many of the so-called “soft” sciences that do not involve extensive mathematization are among the worst offenders when it comes to the conflation of the formal and the informal, often because the practitioners of the “soft” sciences do not themselves understand the implicit principles of form to which they appeal in their theories. Instead of theoretical formalisms we get informal narratives, many of which are compelling in terms of their human interest, but are lacking when it comes to analytical clarity. These narratives are primarily derived from historical studies within the discipline, so that when this method is followed in geopolitics we get a more-or-less quantified account of topographical forms that shape action and agency, with an overlay of narrative history to string together the meaning of names, dates, and places.
There is a sense in which geography and history cannot be separated, but there is another sense in which the two are separated. Because the ecological temporality of human agency is primarily operational at the levels of micro-temporality and meso-temporality, this agency is often exercised without reference to the historical scales of the exo-temporality of larger social institutions (like societies and civilizations) and the macro-historical scales of geology and geomorphology. That is to say, human beings usually act without reference to plate tectonics, the uplift of mountains, or seafloor spreading, except when these events act over micro- and meso-time scales as in the case of earthquakes and tsunamis generated by geological events that otherwise act so slowly that we never notice them in the course of a lifetime — or even in the course of the life of a civilization.
The greatest temporal disconnect occurs between the smallest scales (micro-temporality) and the largest scales (macro-temporality), while there is less disconnect across immediately adjacent divisions of ecological temporality. I can employ a distinction that I recently made in a discussion of Descartes, that between strong distinctions and weak distinctions (cf. Of Distinctions Weak and Strong). Immediately adjacent divisions of ecological temporality are weakly distinct, while those not immediately adjacent are strongly distinct.
We have traditionally recognized the abstraction of macroscopic history that does not descend into details, but it has not been customary to recognize the abstractness of microscopic history, immersed in details, that does not also place these events in relation to a macroscopic context. In order to attain to a comprehensive perspective that can place these more limited perspectives into a coherent context, it is important to understand the limitations of our conventional conceptions of history (such as the failure to understand the abstract character of micro-history) — and, for that matter, the limitations of our conventional conceptions of geography. One of these limitations is the abstractness of either geography or history taken in isolation.
The degree of abstractness of an inquiry can be quantified by the ecological scope of that inquiry; any one division of ecological temporality (or any one division of metaphysical ecology) taken in isolation from other divisions is abstract. It is only when the whole of ecology is taken together that a truly concrete theory is possible. To take into account the whole of ecological temporality in a study of history is a highly concrete undertaking which is nevertheless informed by the abstract theories that constitute each individual level of ecological temporality.
Geopolitics, despite its focus on the empirical conditions of history, is a highly abstract inquiry precisely because of its nearly-exclusive focus on one kind of structure as determinative in history. As I have argued elsewhere, and repeatedly, abstract theories are valuable and have their place. Given the complexity of a concrete theory that seeks to comprehend the movements of human history around the globe, an abstract theory is a necessary condition of any understanding. Nevertheless, we cannot rest content with an abstract theory based exclusively in the material conditions of history, which is the perspective of geopolitics (and, incidentally, the perspective of Marxism).
Geopolitics focuses on the seemingly obvious influences on history following from the material conditions of geography, but the “obvious” can be misleading, and it is often just as important to see what is not obvious as to explicitly take into account what is obvious. Bertrand Russell once observed, in a passage both witty and wise, that:
“It is not easy for the lay mind to realise the importance of symbolism in discussing the foundations of mathematics, and the explanation may perhaps seem strangely paradoxical. The fact is that symbolism is useful because it makes things difficult. (This is not true of the advanced parts of mathematics, but only of the beginnings.) What we wish to know is, what can be deduced from what. Now, in the beginnings, everything is self-evident; and it is very hard to see whether one self-evident proposition follows from another or not. Obviousness is always the enemy to correctness. Hence we invent some new and difficult symbolism, in which nothing seems obvious. Then we set up certain rules for operating on the symbols, and the whole thing becomes mechanical. In this way we find out what must be taken as premiss and what can be demonstrated or defined. For instance, the whole of Arithmetic and Algebra has been shown to require three indefinable notions and five indemonstrable propositions. But without a symbolism it would have been very hard to find this out. It is so obvious that two and two are four, that we can hardly make ourselves sufficiently sceptical to doubt whether it can be proved. And the same holds in other cases where self-evident things are to be proved.”
Bertrand Russell, Mysticism and Logic, “Mathematics and the Metaphysicians”
Russell here expresses himself in terms of symbolism, but I think it would be better to formulate this in terms of formalism. When Russell writes that, “we invent some new and difficult symbolism, in which nothing seems obvious,” the new and difficult symbolism he mentions is more than mere symbolism; it is a formal theory. Russell’s point, then, is that if we formalize a body of knowledge heretofore consisting of intuitively “obvious” truths, certain relationships between truths become obvious that were not obvious prior to formalization. Another way to formulate this is to say that formalization constitutes a shift in our intuition, so that truths once intuitively obvious become inobvious, while inobvious truths become intuitive. Thus formalization is the making intuitive of previously unintuitive (or even counter-intuitive) truths.
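Russell's example — that "two and two are four" can be proved once arithmetic is made mechanical — can be illustrated with a toy sketch. This is emphatically not Russell's own system (the machinery of Principia Mathematica is far richer); it is a minimal Peano-style arithmetic of my own devising, in which numbers are built from zero and successor alone and addition is defined by two recursive rules, so that 2 + 2 = 4 becomes a mechanical derivation rather than an appeal to intuition:

```python
# Numbers as pure symbols: zero is the empty tuple, successor wraps in a tuple.
ZERO = ()

def succ(n):
    """The successor operation S(n)."""
    return (n,)

def add(m, n):
    """Peano's recursive definition of addition:
         m + 0    = m
         m + S(n) = S(m + n)
    Evaluation is pure rule-following; nothing is "obvious" to the machine."""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

two = succ(succ(ZERO))
four = succ(succ(succ(succ(ZERO))))

# "Two and two are four" reduced to symbol manipulation:
assert add(two, two) == four
```

The rules, once set up, operate blindly — exactly Russell's point that "we set up certain rules for operating on the symbols, and the whole thing becomes mechanical."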
Russell devoted a substantial portion of his career to formalizing heretofore informal bodies of knowledge, and therefore had considerable experience with the process of formalization. Since Russell practiced formalization without often explaining exactly what he was doing (the passage quoted above is a rare exception), we must look to the example of his formal thought as a model, since Russell himself offered no systematic account of the formalization of any given body of knowledge. (Russell and Whitehead’s Principia Mathematica is a tour de force comprising the order of justification of its propositions, while remaining silent about the order of discovery.)
A formal theory of time would have the same advantages for time as the theoretical virtues that Russell identified in the formalization of mathematics. In fact, Russell himself formulated a formal theory of time, in his paper “On Order in Time,” which is, in Russell’s characteristic way, reductionist and over-simplified. Since I aim to formulate a theory of time that is explicitly and consciously non-reductionist, I will make no use of Russell’s formal theory of time, though it is interesting at least to note Russell’s effort. The theory of ecological temporality that I have been formulating here is a fragment of a full formal theory of time, and as such it can offer certain insights into time that are lost in a reductionist account (as in Russell) or hidden in an informal account (as in geography and history).
As noted above, a formalized theory brings about a shift in our intuition, so that the formerly intuitive becomes unintuitive while the formerly unintuitive becomes intuitive. A shift in our intuitions about time (and history) means that a formal theory of time makes intuitive temporal relationships less obvious, while making temporal relationships that are hidden by the “buzzing, blooming world” more obvious, and therefore more amenable to analysis — perhaps for the first time.
Ecological temporality gives us a framework in which we can demonstrate the interconnectedness of strongly distinct temporalities, since the panarchy that holds between levels of an ecological system is the presumption that each level of an ecosystem impacts every other level of an ecosystem. Given the distinction between strong distinctions and weak distinctions, it would seem that adjacent ecological levels are weakly distinct and therefore have a greater impact on each other, while non-adjacent ecological levels are strongly distinct and therefore have less of an impact on each other. In an ecological theory of time, all of these principles hold in parallel, so that, for example, micro-temporality is only weakly distinct from meso-temporality, while being strongly distinct from exo-temporality. As a consequence, a disturbance in micro-temporality has a greater impact upon meso-temporality than upon exo-temporality (and vice versa), but less of an impact does not mean no impact at all.
Another virtue of formal theories, in addition to the shift in intuition that Russell identified, is that they force us to be explicit about our assumptions and presuppositions. The implicit theory of time held by a geostrategist matters, because that geostrategist will interpret history in terms of the categories of his or her theory of time. But most geostrategists never bother to make their theory of time explicit, so that we do not know what assumptions they are making about the structure of time, and hence also about the structure of history.
Sometimes these assumptions become so obvious that they cannot be ignored. This is especially the case with supernaturalistic and soteriological conceptions of metaphysical history that ultimately touch on everything else that an individual believes. This very obviousness makes it possible to easily identify eschatological and theological bias; what is much more insidious is the subtle assumption that is difficult to discern and can only be elucidated with great effort.
If one comes to one’s analytical work presupposing that every moment of time possesses absolute novelty, one will likely make very different judgments than if one comes to the same work presupposing that there is nothing new under the sun. Temporal novelty means historical novelty: anything can happen; whereas, on the contrary, the essential identity of temporality over historical scales — identity for all practical purposes — means historical repetition: very little can happen.
. . . . .
Note: Anglo-American political science implicitly takes geopolitics as its point of departure, but, as I have attempted to demonstrate in several posts, this tradition of mainstream geopolitics can be contrasted to a nascent movement of biopolitics. However, biopolitics too could be formulated in the manner of a theoretical biopolitics, and a theoretical biopolitics would be at risk of being as abstract as geopolitics and in need of supplementation by a more comprehensive ecological perspective.
. . . . .
. . . . .
. . . . .
3 April 2012
One of the important ideas from Piaget’s influential conception of cognitive development is that of perspective taking. The ability to coordinate the perspectives of multiple observers of one and the same state of affairs is a cognitive skill that develops with time and practice, and the mastery of perspective taking coincides with cognitive maturity.
From a philosophical standpoint, the problem of perspective taking is closely related to the problem of appearance and reality, since one and the same state of affairs not only appears from different perspectives for different observers, it also appears from different perspectives for one and the same observer at different times. In other words, appearance changes — and presumably reality does not. It is interesting to note that developmental psychologists following Piaget’s lead have in fact conducted tests with children in order to understand at what stage of development they can consistently distinguish between appearance and reality.
Just as perspective taking is a cognitive accomplishment — requiring time, training, and natural development — and not something that happens suddenly and all at once, so too the cognitive maturity of which perspective taking is an accomplishment does not occur all at once. Both maturity and perspective taking continue to develop as the individual develops — and I take it that this development continues beyond childhood proper.
While I find Piaget’s work quite congenial, the developmental psychology of Erik Erikson strikes me as greatly oversimplified, with its predictable crises at each stage of life and its implicit assumption that if you aren’t undergoing the particular crisis that strikes most people at a given period of life, then there is something wrong with you. That being said, what I find of great value in Erikson’s work is his insistence that development continues throughout the human lifespan, and does not come to a halt after a particular accomplishment of cognitive maturity is achieved.
Piagetian cognitive development in terms of perspective taking can easily be extended throughout the human lifespan (and beyond) by the observation that there are always new perspectives to take. As civilization develops and grows, becoming ever more comprehensive as it does so, the human beings who constitute this civilization are forced to formulate always more comprehensive conceptions in order to take the measure of the world being progressively revealed to us. Each new idea that takes the measure of the world at a greater order of magnitude presents the possibility of a new perspective on the world, and therefore the possibility of a new achievement in terms of perspective taking.
The perspectives we attain constitute a hierarchy that begins with the first accomplishment of the self-aware mind, which is egocentric thought. Many developmental psychologists have described the egocentric thought patterns of young children, though the word “egocentric” is now widely avoided because of its moralizing connotations. I, however, will retain the term “egocentric,” because it helps to place this stage within a hierarchy of perspective taking.
The egocentric point of departure for human cognition does not necessarily disappear even when it is theoretically surpassed, because we know egocentric thinking so well from the nearly universal phenomenon of human selfishness, which is where the moralizing connotation of “egocentric” no doubt has its origin. An individual may become capable of coordinating multiple perspectives and still value the world exclusively from the perspective of self-interest.
In any case, the purely egocentric thought of early childhood confines the egocentric thinker to a tightly constrained circle defined by one’s personal perspective. While this is a personal perspective, it is also an impersonal perspective in so far as all individuals share this perspective. It is what Francis Bacon called the “idols of the cave,” since every human being, “has a cave or den of his own, which refracts and discolours the light of nature.” This has been well described in a passage from F. H. Bradley made famous by T. S. Eliot, because the latter quoted it in a footnote to The Waste Land:
My external sensations are no less private to myself than are my thoughts or my feelings. In either case my experience falls within my own circle, a circle closed on the outside; and, with all its elements alike, every sphere is opaque to the others which surround it… In brief, regarded as an existence which appears in a soul, the whole world for each is peculiar and private to that soul.
F. H. Bradley, Appearance and Reality, p. 346, quoted by T. S. Eliot in footnote 48 to The Waste Land, “What the Thunder Said”
I quote this passage here because, like my retention of the term “egocentric,” it can help us to see perspectives in perspective, and it helps us to do so because we can think of expanding and progressively more comprehensive perspectives as concentric circles. The egocentric perspective is located precisely at the center, and the circle described by F. H. Bradley is the circle within which the egocentric perspective prevails.
The next most comprehensive perspective taking beyond the transcendence of the egocentric perspective is the transcendence of the ethnocentric perspective. The ethnocentric perspective corresponds to what Bacon called the “idols of the marketplace,” such that this perspective is, “formed by the intercourse and association of men with each other.” The ethnocentric perspective can also be identified with the sociosphere, which I recently discussed in Eo-, Exo-, Astro- as an essentially geocentric conception which, in a Copernican context, should be overcome.
Beyond ethnocentrism and its corresponding sociosphere there is ideocentrism, which Bacon called the “idols of the theater,” and which we can identify with the noösphere. Bacon well described the ideocentric perspective in terms of philosophical systems: “all the received systems are but so many stage-plays, representing worlds of their own creation after an unreal and scenic fashion.” Trans-ethnic communities of ideology and belief, like the world’s major religions and political ideologies, represent the ideocentric perspective.
The transcendence of the ideocentric perspective by way of more comprehensive perspective taking brings us to the anthropocentric perspective, which can be identified with the anthroposphere (still a geocentric and pre-Copernican conception, as with the other -spheres mentioned above). The anthropocentric perspective corresponds to Bacon’s “idols of the tribe,” which Bacon described thus:
“The Idols of the Tribe have their foundation in human nature itself, and in the tribe or race of men. For it is a false assertion that the sense of man is the measure of things. On the contrary, all perceptions as well of the sense as of the mind are according to the measure of the individual and not according to the measure of the universe. And the human understanding is like a false mirror, which, receiving rays irregularly, distorts and discolours the nature of things by mingling its own nature with it.”
Bacon was limited by the cosmology of his time, so that he could not readily identify further idols beyond the anthropocentric idols of the (human) tribe, just as we are limited by the cosmology of our time. Yet because we today have a more comprehensive perspective than Bacon, we can identify a few more stages of more comprehensive perspective taking. Beyond the anthropocentric perspective there is the geocentric perspective, the heliocentric perspective, and even what we could call the galacticentric perspective — as when early twentieth century cosmologists argued over whether the Milky Way was the only galaxy and constituted an “island universe.” Now we know that there are other galaxies, and we can be said to have transcended the galacticentric perspective.
As I wrote above, as human knowledge has expanded and become more comprehensive, ever more comprehensive perspective taking has come about in order to grasp the concepts employed in expanding human knowledge. There is every reason to believe that this process will be iterated indefinitely into the future, which means that perspective taking also will be indefinitely iterated into the future. (I attempted to make a similar and related point in Gödel’s Lesson for Geopolitics.) Therefore, further levels of cognitive maturity wait for us in the distant future as accomplishments that we cannot yet attain at this time.
This last observation allows me to cite one more relevant developmental psychologist, namely Lev Vygotsky, whose cognitive mediation theory of human development makes use of the concept of a Zone of proximal development (ZPD). Human development, according to Vygotsky, takes place within a proximal zone, and not at any discrete point or stage. Within the ZPD, certain accomplishments of cognitive maturity are possible. In the lower ZPD there is the actual zone of development, while in the upper ZPD there lies the potential zone of development, which can be attained through cognitive mediation by the proper prompting of an already accomplished mentor. Beyond the upper ZPD, even if there are tasks yet to be accomplished, they cannot be accomplished within this particular ZPD.
With the development of human knowledge, we’re on our own. There is no cognitive mediator to help us over the hard parts and assist us in the more comprehensive perspective taking that will mark a new stage of cognitive maturity and possibly also a new zone of proximal development in which new accomplishments will be possible. But this has always been true in the past, and yet we have managed to make these breakthroughs to more comprehensive perspectives of cognitive maturity.
I hope that the reader sees that this is both hopeful and sad. Hopeful, because this way of looking at human knowledge suggests indefinite progress. Sad, because we will not be around to see the accomplishments of cognitive maturity that lie beyond our present zone of proximal development.
. . . . .
. . . . .
. . . . .
25 March 2012
In what style should we think? It sounds like an odd question. I will attempt to make it sound like a reasonable one.
It would, of course, be preferable (or maybe I should say, “more natural”) to ask, “In what manner should we think?” or simply, “How should we think?” But I have formulated my question as I have in order to refer to Heinrich Hübsch’s essay, “In what style should we build?” (In welchem Style sollen wir bauen? 1828)
Building and thinking are both human activities, and thus both can be assimilated to the formulation of Weyl that I quoted in The Clausewitzean Conception of Civilization:
“The ultimate foundations and the ultimate meaning of mathematics remain an open problem; we do not know in what direction it will find its solution, nor even whether a final objective answer can be expected at all. ‘Mathematizing’ may well be a creative activity of man, like music, the products of which not only in form but also in substance are conditioned by the decisions of history and therefore defy complete objective rationalization.”
Hermann Weyl, Philosophy of Mathematics and Natural Science, Appendix A, “The Structure of Mathematics”
What Weyl here refers to as “mathematizing” can be generalized to human cognition generally speaking, and, if we like, we can generalize all the way to a comprehensive Cartesian conception of thought:
By the word ‘thought’, I mean everything which happens in us while we are conscious, in so far as there is consciousness of it in us. So in this context, thinking includes sensing as well as understanding, willing, and imagining. If I say, ‘I see therefore I am,’ or ‘I walk therefore I am,’ and mean by that the seeing or walking which is performed by the body, the conclusion is not absolutely certain. After all, when I am asleep I can often think I am seeing or walking, but without opening my eyes or moving, — and perhaps even without my having any body at all. On the other hand, the conclusion is obviously certain if I mean the sensing itself, or the consciousness that I am seeing or walking, since the conclusion then refers to the mind. And it is only the mind which senses, or thinks about its seeing or walking.
Descartes, Principles of Philosophy, section 9
Do thinking and building have anything in common beyond both being human activities? Is there not something essentially constructive in both activities? (This question is surprisingly apt, because we need to understand what constructive thinking is, but I will return to that later.) Did not Kant refer to the “architectonic” of pure reason, and has it not become commonplace among contemporary cognitive scientists and philosophers of mind to speak of our “cognitive architecture”?
Just taking the term “constructive” in its naïve and intuitive signification, we know that thought is not always constructive. Indeed, it is often said that thought, and especially philosophical thought, must be analytical and critical. Critical thought is not always or invariably destructive, and most of us know the difference between constructive criticism and destructive criticism. Still, thought can be quite destructive. William of Ockham, for example, is often credited with bringing down the Scholastic philosophical synthesis that reached its apogee in Aquinas.
Similar observations can be made about the building trades. While we usually do not include demolition crews among the construction trades, there is a sense in which demolition and construction are both phases in the building process. Combat engineers must be equally trained in the building and demolition of bridges, for example, which demonstrates both the constructive and the destructive aspects of construction engineering.
Just as we have a choice not only in what to build, but in what style to build, so too we have a choice, not only in what we think, but also in how we think. As a matter of historical fact, I think you will find that the thinking of most individuals is not much more than a reaction, or a reflex. People think in the way that comes naturally to them, and they do not realize that they are thinking in a certain style unless they pause to think about their thinking. Well, this would be one way to characterize philosophy: thinking about thinking.
The unthinking way in which most of us think has the consequence of fostering what may be called cognitive monoculture. Individuals rarely step outside the parameters of thought with which they are comfortable, and so they allow their thoughts to follow in the ruts and the grooves left by their ancestors, much as architects, for many generations, reiterated classical building styles for lack of imagination of anything different.
It is probably very nearly impossible that I should write about building and thinking without citing Heidegger, so here is my nearly obligatory Heidegger citation, which, despite my general dislike of Heideggerian thought, suits my purposes quite perfectly:
“We come to know what it means to think when we ourselves try to think. If the attempt is to be successful, we must be ready to learn thinking.”
Martin Heidegger, What is called thinking? Lecture I
I agree with this: a serious attempt at thinking entails that we come to know what it means to think, and moreover we must be ready to learn thinking, and not merely take it for granted. But I find that I do not agree with the very next paragraph in Heidegger:
“As soon as we allow ourselves to become involved in such learning, we have admitted that we are not yet capable of thinking.”
Martin Heidegger, What is called thinking? Lecture I
In fact, we are capable of thinking, though the problem is that we do not really know whether we are thinking well or thinking poorly. When we think about thinking, when we reflect on what we are doing when we are thinking, we will discover that we have been thinking in a particular style, even if we were not aware that we were doing so — much like Molière’s Monsieur Jourdain, who did not know that he had been speaking prose his entire life.
If we pay attention to our thinking, and think critically about our thinking, we stumble across a number of distinctions that we realize can be used to classify the style of thought in which we have been engaged: formal or informal, constructive or non-constructive, abstract or concrete, objective or subjective, theoretical or practical, a priori or a posteriori, empirical or rational. These distinctions define styles of thought, and it is only in reflection that we realize that one or another of these terms has applied to our thought, and thus that we have been thinking in this particular style.
Ideally one would be aware of how one was thinking, and be able to shift gears in the middle of thinking and adopt a different mode of thought as the need or desire arose. The value of knowing how one has been thinking, and realizing the unconscious distinctions one has been making, is that one is now in a position to provide counter-examples to one’s own thought, and one is therefore no longer strictly reliant upon the objections of others who think otherwise than ourselves.
The cognitive monoculture that we uncritically accept before we learn to reflect on our own thinking is more often than not borrowed from the world, and not the product of our own initiative. Are we living, intellectually, so to speak, in a structure built by others? If so, ought we to question or to accept that structure?
This is a theme to which Merleau-Ponty often returned:
“…it is by borrowing from the world structure that the universe of truth and of thought is constructed for us. When we want to express strongly the consciousness we have of a truth, we find nothing better than to invoke a topos noetos that would be common to minds or to men, as the sensible world is common to the sensible bodies. And this is not only an analogy: it is the same world that contains our bodies and our minds, provided that we understand by world not only the sum of things that fall or could fall under our eyes, but also the locus of their compossibility, the invariable style they observe, which connects our perspectives, permits transition from one to the other, and — whether in describing a detail of the landscape or in coming to agreement about an invisible truth — makes us feel we are two witnesses capable of hovering over the same true object, or at least of exchanging our situations relative to it, as we can exchange our standpoints in the visible world in the strict sense.”
Maurice Merleau-Ponty, The Visible and the Invisible
I trust Merleau-Ponty with this idea, but, to put it bluntly, there are many that I would not trust with this idea, since the idea that our cognitive architecture is borrowed from the world that we inhabit can be employed as a strategy to dilute and perhaps even to deny the individual. One could make the case on this basis that we are owned by the past, and certainly there are those who believe that inter-generational moral duties flow in only one direction, from the present to the past, but merely to formulate it in these terms suggests the possibility of inter-generational moral duties that flow from the past to the present.
Certainly by being born into the world we are born into a linguistic and intellectual context at the same time as we are born into an existential context, and this fact has profound consequences. As in the passage from Marx that I have quoted many times:
“Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.”
Karl Marx, The Eighteenth Brumaire of Louis Bonaparte, first paragraph
Marx gives us a particular perspective on this idea, but we can turn it around and by reformulating Marx attain to a different perspective on the same idea. Marx takes the making of history to be a unidirectional process, but it goes both ways, men make history and history makes men:
“Men begin under circumstances existing already, given and transmitted from the past, and make their own history as they please from what they select of the past. The past has no reality but that which men give to it.”
The circumstances transmitted to us from the past are not arbitrary; these circumstances are the sum total of the efforts of previous generations to re-make the world during their lives according to their vision. We live with the consequences of this vision. Moreover, the circumstances we create are in turn transmitted to the future; this is our legacy, and future generations will do with it as they will.
The architect, too, begins with circumstances existing already, given and transmitted from the past. For Hübsch this is the problem. Hübsch begins his brief treatise with a ringing assertion that architectural thought is dominated by an archaic paradigm:
“Painting and sculpture have long since abandoned the lifeless imitation of antiquity. Architecture has yet to come of age and continues to imitate the antique style. Although nearly everyone recognizes the inadequacy of that style in meeting today’s needs and is dissatisfied with the buildings recently erected in it, almost all architects still adhere to it.”
Heinrich Hübsch, In what style should we build? 1828
In the twenty-first century this is no longer true. Building has been substantially liberated from classical forms. In fact, since Hübsch’s time, a new classicism — international modern — rose, dominated for a short time, and now has been displaced by a bewildering plethora of styles, from an ornately decorative post-modernism to outlandish structures that would have been impossible without contemporary materials technology. There are, to be sure, architectural conventions that remain to be challenged, and in the sphere of urban planning these conventions can be quite rigid because they become embodied in legal codes.
For our time, the most forceful way to understand Hübsch’s question would be, “In what style should we build our cities?” Another way in which Hübsch’s question retains its poignant appeal is in the form that I suggested above: in what style should we think?
Are we intellectually owned by the past? Is there a moral obligation for us to think in the style of our grandfathers? A semi-humorous definition attributed to Benjamin Disraeli has it that, “A realist is a man who insists on making the same mistakes his grandfather did.” Are we obliged to be realists?
Here we see the clear connection between building and thinking. Just as we might think like our grandfathers, so too we might build like our grandfathers. This latter was the concern of Hübsch. That is to say, we can inhabit (and restore, and reconstruct) the intellectual constructions of our forefathers just as readily as the material constructions of our forefathers.
It would be entirely possible for us today to construct classical cities on the Greco-Roman model; it is even possible to imagine a traditional Roman house with hot and cold running water, electric kitchen appliances, and wired for WiFi. That is to say, we could have our modern conveniences and still continue to build as the past built. We could choose to literally inhabit the structure of the past, as civilization did in fact choose to do for almost a thousand years when classical cities were built to essentially the same plan throughout the ancient world. (See my remarks on this in The Iterative Conception of Civilization.)
We can take the Middle Ages to be for thinking what the modernized Roman house would be for living: the role of intellectual authority in medieval thinking was unprecedented and unparalleled. If experience contradicted authority, so much the worse for experience. If a classical text stated that something was the case, and the world seemed at variance with the text, the world was assumed to be in error. As classical antiquity lived with the same buildings for a thousand years, so the Middle Ages lived with the same thoughts for a thousand years. There is no reason that we could not take medieval scholarship, as we might update a Roman house, and add a few modern conveniences — like names for chemical elements, etc. — and have this perfectly serviceable intellectual context as our own.
Thus the two previous macro-historical stages of Western civilization prior to modernism — namely, classicism and medievalism — represent, respectively, the attempt to build in the style of the past and the attempt to think in the style of the past. It has been the rude character of modernism to focus on the future and to be dismissive of the past. While this attitude can be nihilistic, we can now clearly see how it came about: the other alternatives were tried and found wanting.
. . . . .
. . . . .
. . . . .