Thursday — Thanksgiving Day

Studies in Formal Thought:

Albert Einstein (14 March 1879 – 18 April 1955)

Einstein’s Philosophy of Mathematics

For some time I have had it on my mind to return to a post I wrote about a line from Einstein’s writing, Unpacking an Einstein Aphorism. The “aphorism” in question is this sentence:

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”

…which, in the original German, was…

“Insofern sich die Sätze der Mathematik auf die Wirklichkeit beziehen, sind sie nicht sicher, und insofern sie sicher sind, beziehen sie sich nicht auf die Wirklichkeit.”

Although this sentence has been so widely quoted out of context that it has achieved the de facto status of an aphorism, I was wrong to call it one. The sentence, and the idea it expresses, is integral to the essay in which it appears, and should not be treated in isolation from that context. In mitigation I can plead that a full philosophical commentary on Einstein’s essay would run to the length of a volume, or several volumes; still, this post will be something of a mea culpa, an effort to correct the incorrect impression I previously gave that Einstein formulated this idea as an aphorism.

The first few paragraphs of Einstein’s lecture, which include the passage quoted above, constitute a preamble on the philosophy of mathematics. Einstein wrote this sententious survey of his philosophy of mathematics in order to give the listener (or reader) enough of a methodological background to follow his reasoning as he approached the central idea he wanted to get across: Einstein’s lecture was an exercise in the cultivation of geometrical intuition. Unless one has some familiarity with formal thought — usually mathematics or logic — one is not likely to have an appreciation of the tension between intuition and formalization in formal thought, nor of how mathematicians use the term “intuition.” In ordinary language, “intuition” usually means arriving at a conclusion on some matter too subtle to be made fully explicit. For mathematicians, in contrast, intuition is a faculty of the mind that is analogous to perception. Indeed, Kant made this distinction, implying its underlying parallelism, by using the terms “sensible intuition” and “intellectual intuition” (which can also be called “outer” and “inner” intuition).

Intuition as employed in this formal sense has been, through most of the history of formal thought, understood sub specie aeternitatis, i.e., possessing many of the properties once reserved for divinity: eternity, immutability, impassibility, and so on. In the twentieth century this began to change, and the formal conception of intuition came to be more understood in naturalistic terms as a faculty of the human mind, and, as such, subject to change. Here is a passage from Gödel that I have quoted many times (e.g., in Transcendental Humors), in which Gödel delineates a dynamic and changing conception of intuition:

“Turing… gives an argument which is supposed to show that mental procedures cannot go beyond mechanical procedures. However, this argument is inconclusive. What Turing disregards completely is the fact that mind, in its use, is not static, but is constantly developing, i.e., that we understand abstract terms more and more precisely as we go on using them, and that more and more abstract terms enter the sphere of our understanding. There may exist systematic methods of actualizing this development, which could form part of the procedure. Therefore, although at each stage the number and precision of the abstract terms at our disposal may be finite, both (and, therefore, also Turing’s number of distinguishable states of mind) may converge toward infinity in the course of the application of the procedure.”

“Some remarks on the undecidability results” (Italics in original) in Gödel, Kurt, Collected Works, Volume II, Publications 1938-1974, New York and Oxford: Oxford University Press, 1990, p. 306.

If geometrical intuition (or mathematical intuition more generally) is subject to change, it is also subject to improvement (or degradation). A logician like Gödel, acutely aware of the cognitive mechanisms by which he has come to grasp logic and mathematics, might devote himself to consciously developing intuitions, always refining and improving his conceptual framework, and straining toward developing new intuitions that would allow for the extension of mathematical rigor to regions of knowledge previously given over to Chaos and Old Night. Einstein did not make this as explicit as did Gödel, but he clearly had the same idea, and Einstein’s lecture was an attempt to demonstrate to his audience the cultivation of geometrical intuitions consistent with the cosmology of general relativity.

Einstein’s revolutionary work in physics represented at the time a new level of sophistication in the mathematical representation of physical phenomena. Mathematicized physics began with Galileo, and might be said to coincide with the advent of the scientific revolution; the mathematization of physics reached a level of mature sophistication with Newton, who invented the calculus in order to be able to express his thought in mathematical form. The Newtonian paradigm in physics was elaborated as the “classical physics” of which Einstein and Infeld, like mathematical parallels of Edward Gibbon, recorded the decline and fall.

Between Einstein and Newton a philosophical revolution in mathematics occurred. The philosophy of mathematics formulated by Kant is taken by many philosophers to express the conception of mathematics to be found in Newton; I do not agree with this judgment, as much for historiographical reasons as for philosophical reasons. But perhaps if we scrape away the Kantian idealism and subjectivism there might well be a core of Newtonian philosophy of mathematics in Kant, or, if you prefer, a core of Kantian philosophy of mathematics intimated in Newton. For present purposes, this is neither here nor there.

The revolution that occurred between Newton and Einstein was the change from that which preceded it to hypothetico-deductivism. So what was it that preceded the hypothetico-deductive conception of formal systems in mathematics? I call this earlier, pre-hypothetico-deductive form of mathematics categorico-deductive, because the principles or axioms now asserted hypothetically were once asserted categorically, in the belief that the truths of formal thought, i.e., of logic and mathematics, were eternal, immutable, unchanging truths, recognized by the mind’s eye as incontrovertible, indubitable, necessary truths as soon as they were glimpsed. It was often said (and is sometimes still said today) that to understand an axiom is ipso facto to see that it must be true; this is the categorico-deductive perspective.

In mathematics as it is pursued today, as an exercise in hypothetico-deductive reasoning, axioms are posited not because they are held to be necessarily true, or self-evidently true, or undeniably true; axioms need not be true at all. Axioms are posited because they are an economical point of origin for the fruitful derivation of consequences. This revolution in mathematical rigor transformed the landscape of mathematical thought so completely that Bertrand Russell, writing in the early twentieth century, could say, “…mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.” Here formal, logical truth is entirely insulated from empirical, intuitive truth. It is at least arguable that the new formalisms made possible by the hypothetico-deductive method are at least partially responsible for Einstein’s innovations in physics. (I have earlier touched on Einstein’s conception of formalism in A Century of General Relativity and Constructive Moments within Non-Constructive Thought.)

If you are familiar with Einstein’s lecture, and especially with the opening summary of Einstein’s philosophy of mathematics, you will immediately recognize that Einstein formulates his position around the distinction between the categorico-deductive (which Einstein calls the “older interpretation” of axiomatics) and the hypothetico-deductive (which Einstein calls the “modern interpretation” of axiomatics). Drawing upon this distinction, Einstein gives us a somewhat subtler and more flexible formulation of the fundamental disconnect between the formal and the material than that which Russell paradoxically thrusts in our face. By formulating his distinction in terms of “as far as,” Einstein implies that there is a continuum of the dissociation between what Einstein called the “logical-formal” and “objective or intuitive content.”

Einstein then goes on to assert that a purely logical-formal account of mathematics, joined together with the totality of physical laws, allows us to say something “about the behavior of real things.” The logical-formal alone can say nothing about the real world; in its isolated formal purity it is perfectly rigorous and certain, but also impotent. This marvelous structure must be supplemented with laws of nature, empirically discovered, empirically defined, empirically applied, and empirically tested, in order to further our knowledge of the world. Here we see Einstein making use of the hypothetico-deductive method and supplementing it with contemporary physical theory; significantly, in order to establish a relationship between the formalisms of general relativity and the actual world, he didn’t try to turn back the clock by returning to categorico-deductivism, but took up hypothetico-deductivism and ran with it.

But all of this is mere prologue. The question that Einstein wants to discuss in his lecture is the spatial extension of the universe, which Einstein distills to two alternatives:

1. The universe is spatially infinite. This is possible only if in the universe the average spatial density of matter, concentrated in the stars, vanishes, i.e., if the ratio of the total mass of the stars to the volume of the space through which they are scattered indefinitely approaches zero as greater and greater volumes are considered.

2. The universe is spatially finite. This must be so, if there exists an average density of the ponderable matter in the universe that is different from zero. The smaller that average density, the greater is the volume of the universe.

Albert Einstein, Geometry and Experience, Lecture before the Prussian Academy of Sciences, January 27, 1921. The last part appeared first in a reprint by Springer, Berlin, 1921
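Einstein’s second alternative, in which a smaller average density entails a greater volume, can be made concrete with a rough numerical sketch. The radius–density relation R = c/√(4πGρ) of Einstein’s 1917 static closed universe is assumed here (it is not stated in the lecture itself), and the density values are purely illustrative:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def static_universe_volume(density):
    """Volume of the closed (3-sphere) universe in Einstein's static
    model, whose radius of curvature is R = c / sqrt(4*pi*G*rho)."""
    radius = c / math.sqrt(4 * math.pi * G * density)
    return 2 * math.pi**2 * radius**3   # volume of a 3-sphere

# Halving the density increases the volume by a factor of 2**1.5
# (V scales as rho**-1.5), illustrating Einstein's remark that the
# smaller the average density, the greater the volume of the universe.
v1 = static_universe_volume(1e-26)    # illustrative density, kg/m^3
v2 = static_universe_volume(0.5e-26)
print(v2 / v1)  # 2**1.5 ≈ 2.828
```

The inverse scaling, not the particular numbers, is the point: density is something astronomy can estimate, which is exactly what makes the finite/infinite question empirically tractable.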

It is interesting to note, especially in light of the Kantian distinction noted above between sensible and intellectual intuition, that one of Kant’s four antinomies of pure reason was whether the universe is finite or infinite in extent. Einstein has taken this Kantian antinomy of pure reason and cast it in a light in which it is no longer exclusively the province of pure reason, and so may be answered by the methods of science. To this end, Einstein presents the distinction between a finite universe and an infinite universe in the context of the density of matter — “the ratio of the total mass of the stars to the volume of the space through which they are scattered” — which is a question that may be determined by science, whereas the purely abstract terms of the Kantian antinomy allowed for no scientific contribution to the solution of the question. For Kant, pure reason could gain no traction on this antinomy; Einstein gains traction by making the question less pure, moving toward more engagement with reality and therefore less certainty.

It was the shift from categorico-deductivism to hypothetico-deductivism, followed by Einstein’s “completion” of geometry by the superaddition of empirical laws, that allows Einstein to adopt a methodology that is both rigorous and scientifically fruitful. (“We will call this completed geometry ‘practical geometry,’ and shall distinguish it in what follows from ‘purely axiomatic geometry’.”) Where the simplicity of Euclidean geometry allows for the straightforward application of empirical laws to “practically-rigid bodies,” the Euclidean solution is preferred as the simplest; where this fails, other geometries may be employed to resolve the apparent contradiction between mathematics and empirical laws. Ultimately the latter is found to be the case — “the laws of disposition of rigid bodies do not correspond to the rules of Euclidean geometry on account of the Lorentz contraction” — and so it is Riemannian geometry rather than Euclidean geometry that is the mathematical setting of general relativity.

Einstein’s use of Riemannian geometry is significant. The philosophical shift from categorico-deductivism to hypothetico-deductivism could be reasonably attributed to (or, at least, held to follow from) the nineteenth century discovery of non-Euclidean geometries, and this discovery is an interesting and complex story in itself. Gauss (sometimes called the “Prince of Mathematicians”) discovered non-Euclidean geometry, but published none of it in his lifetime. It was independently discovered by the Hungarian János Bolyai (the son of a colleague of Gauss) and the Russian Nikolai Ivanovich Lobachevsky. Both Bolyai and Lobachevsky arrived at non-Euclidean geometry by adopting the axioms of Euclid but altering the axiom of parallels. The axiom of parallels had long been a sore spot in mathematics; generation after generation of mathematicians had sought to prove it from the other axioms, to no avail. Bolyai and Lobachevsky found that they could replace the axiom of parallels with another axiom and derive perfectly consistent but strange and unfamiliar geometrical theorems. This was the beginning of the disconnect between the logical-formal and objective or intuitive content.

Riemann also independently arrived at non-Euclidean geometry, but by a different route than that taken by Bolyai and Lobachevsky. Whereas the latter employed the axiomatic method — hence its immediate relevance to the shift from the categorico-deductive to the hypothetico-deductive — Riemann employed a metrical method. That is to say, Riemann’s method involved measurements of line segments in space defined by the distance between two points. In Euclidean space, the distance between two points is given by a formula derived from the Pythagorean theorem, d = √((x₂ − x₁)² + (y₂ − y₁)²), so that in a non-Euclidean space the distance between two points may be given by some different formula, i.e., by a different metric.
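Riemann’s metrical point of view can be illustrated with a small sketch (my own illustration, not anything from Einstein’s lecture or Riemann’s paper): the same question, the distance between two points, answered by two different metrics, the flat Euclidean distance and the great-circle distance on a sphere.

```python
import math

def euclidean_distance(p, q):
    """Flat-space metric: d = sqrt((x2 - x1)**2 + (y2 - y1)**2)."""
    return math.sqrt((q[0] - p[0])**2 + (q[1] - p[1])**2)

def great_circle_distance(p, q, radius=1.0):
    """A non-Euclidean metric: the shortest path between two points
    (given as latitude/longitude in radians) on a sphere, by the
    spherical law of cosines."""
    lat1, lon1 = p
    lat2, lon2 = q
    central_angle = math.acos(
        math.sin(lat1) * math.sin(lat2)
        + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)
    )
    return radius * central_angle

print(euclidean_distance((0, 0), (3, 4)))                      # 5.0
print(great_circle_distance((0.0, 0.0), (0.0, math.pi / 2)))   # pi/2 ≈ 1.5708
```

The two metrics agree for points close together but diverge at large scales, which is precisely the kind of divergence that careful measurement can detect.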

Whereas the approach of Bolyai and Lobachevsky could be characterized as variations on a theme of axiomatics, Riemann’s approach could be characterized as variations on a theme of analytical geometry. The applicability to general relativity becomes clear when we reflect how, already in antiquity, Eratosthenes was able to determine that the Earth is a sphere by taking measurements on the surface of the Earth. By the same token, although we are embedded in the spacetime continuum, if we take careful measurements we can determine the curvature of space, and perhaps also the overall geometry of the universe.
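Eratosthenes’ reasoning is simple enough to reproduce. Using the traditionally reported figures (a noon shadow angle of about 7.2 degrees at Alexandria while the Sun stood directly overhead at Syene, and a distance of roughly 5000 stadia between the two cities), the arithmetic runs:

```python
# The shadow angle at Alexandria equals the arc of the Earth's surface
# between Alexandria and Syene, so the arc's share of a full circle
# gives the whole circumference. Figures are the classically reported
# values; the length of the stadion itself is uncertain.
shadow_angle_deg = 7.2
arc_length_stadia = 5000

# 7.2 degrees is 1/50 of 360 degrees, so the circumference is 50 arcs.
circumference = arc_length_stadia * 360 / shadow_angle_deg
print(circumference)  # 250000.0 stadia
```

The same logic underwrites measuring the curvature of space from within it: surface measurements alone, with no view from outside, suffice to reveal the global geometry.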

From a philosophical standpoint, it is interesting to ask if there is an essential relationship between the method of a non-Euclidean geometry and the geometrical intuitions engaged by these methods. Both Bolyai and Lobachevsky arrived at hyperbolic non-Euclidean geometry (an infinitude of parallel lines) whereas Riemann arrived at elliptic non-Euclidean geometry (no parallel lines). I will not attempt to analyze this question here, though I find it interesting and potentially relevant. The non-Euclidean structure of Einstein’s general relativity is more-or-less a three dimensional extrapolation of the elliptic two dimensional surface of a sphere. Our minds cannot conceive this (at least, my mind can’t conceive of it, but there may be mathematicians who, having spent their lives thinking in these terms, are able to visualize three dimensional spatial curvature), but we can formally work with the mathematics, and if the measurements we take of the universe match the mathematics of Riemannian elliptical space, then space is curved in a way such that most human beings cannot form any geometrical intuition of it.

Einstein’s lecture culminates in an attempt to gently herd his listeners toward achieving such an impossible geometrical intuition. After a short discussion of the apparent distribution of mass in the universe (in accord with Einstein’s formulation of the dichotomy between an infinite or a finite universe), Einstein suggests that these considerations point to a finite universe, and then explicitly asks in his lecture, “Can we visualize a three-dimensional universe which is finite, yet unbounded?” Einstein offers a visualization of this by showing how an infinite Euclidean plane can be mapped onto a finite surface of a sphere, and then suggesting an extrapolation from this mapping of an infinite two dimensional Euclidean space to a finite but unbounded two dimensional elliptic space as a mapping from an infinite three dimensional Euclidean space to a finite but unbounded three dimensional elliptic space. Einstein explicitly acknowledges that, “…this is the place where the reader’s imagination boggles.”
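The mapping Einstein gestures at can be sketched with the inverse stereographic projection, one standard way (though not necessarily Einstein’s own construction) of carrying the infinite plane onto the finite surface of a sphere:

```python
import math

def plane_to_sphere(x, y):
    """Inverse stereographic projection: maps the infinite plane
    one-to-one onto the unit sphere minus its north pole. Distant
    points in the plane crowd ever closer to the pole, so the whole
    infinite plane fits onto a finite, unbounded surface."""
    r2 = x * x + y * y
    return (2 * x / (1 + r2), 2 * y / (1 + r2), (r2 - 1) / (1 + r2))

for point in [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (1000.0, 0.0)]:
    px, py, pz = plane_to_sphere(*point)
    # every image point lies exactly on the unit sphere
    assert abs(px**2 + py**2 + pz**2 - 1.0) < 1e-9
    print(point, '->', (round(px, 4), round(py, 4), round(pz, 4)))
```

As the input point recedes toward infinity, its image approaches (0, 0, 1) without ever reaching it; this is the two-dimensional half of the extrapolation, and Einstein asks us to repeat it one dimension up, which is where, as he concedes, the imagination boggles.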

Given my own limitations when it comes to geometrical intuition, it is no surprise that I cannot achieve any facility in the use of Einstein’s intuitive method, though I have tried to perform it as a thought experiment many times. I have no doubt that Einstein was able to do this, and much more besides, and that it was, at least in part, his mastery of sophisticated forms of geometrical intuition that had much to do with his seminal discoveries in physics and cosmology. Einstein concluded his lecture by saying, “My only aim today has been to show that the human faculty of visualization is by no means bound to capitulate to non-Euclidean geometry.”

Above I said it would be an interesting question to pursue whether there is an essential relationship between formalisms and the intuitions engaged by them. This problem returns to us in a particularly compelling way when we think of Einstein’s effort in this lecture to guide his readers toward conceiving of the universe as finite and unbounded. When Einstein gave this lecture in 1921 he maintained a static conception of the universe. About the same time the Russian mathematician Alexander Friedmann was formulating solutions to Einstein’s field equations that described expanding and contracting universes, of which Einstein himself did not approve. It is not until Einstein’s meeting with Georges Lemaître in 1927 that we know anything of Einstein’s engagement with Lemaître’s cosmology, which would become the big bang hypothesis. (Cf. the interesting sketch of their relationship, Einstein and Lemaître: two friends, two cosmologies… by Dominique Lambert.)

Ten years after Einstein delivered his “Geometry and Experience” lecture he was hesitantly beginning to accept the expansion of the universe, though he still had reservations about the initial singularity in Lemaître’s cosmology. Nevertheless, Einstein’s long-standing defense of the counter-intuitive idea (which he attempted to make intuitively palatable) of a finite and unbounded universe would seem to have prepared his mind for Lemaître’s cosmology, as Einstein’s finite and unbounded universe is a natural fit with the big bang hypothesis: if the universe began from an initial singularity at a finite point of time in the past, then the universe derived from the initial singularity would still be finite at any finite time after the initial singularity. Just as we find ourselves on the surface of the Earth (i.e., our planetary endemism), which is a finite and unbounded surface, so we seem to find ourselves within a finite and unbounded universe. Simply connected surfaces of these kinds possess a topological parsimony and hence would presumably be favored as an explanation for the structure of the world in which we find ourselves. Whether our formalisms (i.e., those formalisms accessible to the human mind, i.e., intuitively tractable formalisms) are conducive to this conception, however, is another question for another time.

. . . . .

An illustration from Einstein’s lecture Geometry and Experience

. . . . .

Studies in Formalism

1. The Ethos of Formal Thought

2. Epistemic Hubris

3. Parsimonious Formulations

4. Foucault’s Formalism

5. Cartesian Formalism

6. Doing Justice to Our Intuitions: A 10 Step Method

7. The Church-Turing Thesis and the Asymmetry of Intuition

8. Unpacking an Einstein Aphorism

9. The Overview Effect in Formal Thought

10. Einstein on Geometrical Intuition

. . . . .

Wittgenstein’s Tractatus Logico-Philosophicus was part of an early twentieth century efflorescence of formal thinking focused on logic and mathematics.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .


. . . . .



Alonzo Church and Alan Turing

What is the Church-Turing Thesis? The Church-Turing Thesis (also called Church’s Thesis, Church’s Conjecture, or the Church-Turing Conjecture, among other names) is an idea from theoretical computer science that emerged from research in the foundations of logic and mathematics. It ultimately bears upon what can be computed, and thus, by extension, upon what a computer can do (and what a computer cannot do).

Note: For clarity’s sake, I ought to point out that Church’s Thesis and Church’s Theorem are distinct. Church’s Theorem is an established theorem of mathematical logic, proved by Alonzo Church in 1936, that there is no decision procedure for logic (i.e., there is no method for determining whether an arbitrary formula in first order logic is a theorem). But the two – Church’s Theorem and Church’s Thesis – are related: both follow from the exploration of the possibilities and limitations of formal systems and the attempt to define these in a rigorous way.

Even to state Church’s Thesis is controversial. There are many formulations, and many of these alternative formulations come straight from Church and Turing themselves, who framed the idea differently in different contexts. Also, when you hear computer science types discuss the Church-Turing thesis you might think that it is something like an engineering problem, but it is essentially a philosophical idea. What the Church-Turing thesis is not is as important as what it is: it is not a theorem of mathematical logic, it is not a law of nature, and it is not a limit of engineering. We could say that it is a principle, because the word “principle” is elastic enough to cover the various formulations of the thesis.

There is an article on the Church-Turing Thesis at the Stanford Encyclopedia of Philosophy, one at Wikipedia (of course), and even a website dedicated to a critique of the Stanford article, Alan Turing in the Stanford Encyclopedia of Philosophy. All of these are valuable resources on the Church-Turing Thesis, and well worth reading to gain some orientation.

One way to formulate Church’s Thesis is that all effectively computable functions are general recursive. Both “effectively computable functions” and “general recursive” are technical terms, but there is an important difference between these technical terms: “effectively computable” is an intuitive conception, whereas “general recursive” is a formal conception. Thus one way to understand Church’s Thesis is that it asserts the identity of a formal idea and an informal idea.

One of the reasons that there are many alternative formulations of the Church-Turing thesis is that there are several formally equivalent formulations of recursiveness: recursive functions, Turing computable functions, Post computable functions, representable functions, lambda-definable functions, and Markov normal algorithms among them. All of these are formal conceptions that can be rigorously defined. For the other term of the identity that is Church’s Thesis, there are likewise several alternative formulations of effectively computable functions, and these include other intuitive notions like that of an algorithm or a procedure that can be implemented mechanically.
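The formal equivalence of these conceptions can be glimpsed in miniature. In the following sketch (an illustration, not a proof of equivalence), the same function, addition, is defined once by recursion and once in the style of the lambda calculus via Church numerals, and the two definitions agree on every input tested:

```python
# 1. Addition by recursion on the second argument:
#    add(m, 0) = m;  add(m, n + 1) = add(m, n) + 1
def add_recursive(m, n):
    if n == 0:
        return m
    return add_recursive(m, n - 1) + 1

# 2. Addition with Church numerals, where the number n is represented
#    as the higher-order function that applies f to x exactly n times.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def church_add(a, b):
    # apply f "a times" on top of applying f "b times"
    return lambda f: lambda x: a(f)(b(f)(x))

def unchurch(c):
    # recover an ordinary integer by counting applications of successor
    return c(lambda k: k + 1)(0)

for m in range(5):
    for n in range(5):
        assert add_recursive(m, n) == unchurch(church_add(church(m), church(n)))
print("definitions agree")
```

Each definition belongs to a different formal picture of computation, yet both pick out the same function; Church’s Thesis extends this observed convergence into an identity claim about the intuitive notion of effective computability itself.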

These may seem like recondite matters with little or no relationship to ordinary human experience, but I am surprised how often I find the same theoretical conflict played out in the most ordinary and familiar contexts. The dialectic of the formal and the informal (i.e., the intuitive) is much more central to human experience than is generally recognized. For example, the conflict between intuitively apprehended free will and apparently scientifically unimpeachable determinism is a conflict between an intuitive and a formal conception that both seem to characterize human life. Compatibilist accounts of determinism and free will may be considered the “Church’s thesis” of human action, asserting the identity of the two.

It should be understood here that when I discuss intuition in this context I am talking about the kind of mathematical intuition I discussed in Adventures in Geometrical Intuition, although the idea of mathematical intuition can be understood as perhaps the narrowest formulation of that intuition that is the polar concept standing in opposition to formalism. Kant made a useful distinction between sensory intuition and intellectual intuition that helps to clarify what is intended here, since the very idea of intuition in the Kantian sense has become lost in recent thought. Once we think of intuition as something given to us in the same way that sensory intuition is given to us, only without the mediation of the senses, we come closer to the operative idea of intuition as it is employed in mathematics.

Mathematical thought, and formal accounts of experience generally speaking, of course, seek to capture our intuitions, but this formal capture of the intuitive is itself an intuitive and essentially creative process even when it culminates in the formulation of a formal system that is essentially inaccessible to intuition (at least in parts of that formal system). What this means is that intuition can know itself, and know itself to be an intuitive grasp of some truth, but formality can only know itself as formality and cannot cross over the intuitive-formal divide in order to grasp the intuitive even when it captures intuition in an intuitively satisfying way. We cannot even understand the idea of an intuitively satisfying formalization without an intuitive grasp of all the relevant elements. As Spinoza said that the true is the criterion both of itself and of the false, we can say that the intuitive is the criterion both of itself and the formal. (And given that, today, truth is primarily understood formally, this is a significant claim to make.)

The above observation can be formulated as a general principle such that the intuitive can grasp all of the intuitive and a portion of the formal, whereas the formal can grasp only itself. I will refer to this as the principle of the asymmetry of intuition. We can see this principle operative both in the Church-Turing Thesis and in popular accounts of Gödel’s theorem. We are all familiar with popular and intuitive accounts of Gödel’s theorem (since the formal accounts are so difficult), and it is not unusual for such accounts to make claims for the limitative theorems that go far beyond what they formally demonstrate.

All of this holds also for the attempt to translate traditional philosophical concepts into scientific terms — the most obvious example being free will, supposedly accounted for by physics, biochemistry, and neurobiology. But if one makes the claim that consciousness is nothing but such-and-such physical phenomenon, it is impossible to cash out this claim in any robust way. The science is quantifiable and formalizable, but our concepts of mind, consciousness, and free will remain stubbornly intuitive and have not been satisfyingly captured in any formalism; moreover, whether any such formalization is satisfying could only be determined by intuition, and therefore eludes formal capture. To “prove” determinism, then, would be as incoherent as “proving” Church’s Thesis in any robust sense.

There certainly are interesting philosophical arguments on both sides of Church’s Thesis — that is to say, both its denial and its affirmation — but these are arguments that appeal to our intuitions and, most crucially, our idea of ourselves is intuitive and informal. I should like to go further and to assert that the idea of the self must be intuitive and cannot be otherwise, but I am not fully confident that this is the case. Human nature can change, albeit slowly, along with the human condition, and we could, over time — and especially under the selective pressures of industrial-technological civilization — shape ourselves after the model of a formal conception of the self. (In fact, I think it very likely that this is happening.)

I cannot even say — I would not know where to begin — what would constitute a formal self-understanding of the self, much less any kind of understanding of a formal self. Well, maybe not. I have written elsewhere that the doctrine of the punctiform present (not very popular among philosophers these days, I might add) is a formal doctrine of time, and in so far as we identify internal time consciousness with the punctiform present we have a formal doctrine of the self.

While the above account is one to which I am sympathetic, this kind of formal concept — I mean the punctiform present as a formal conception of time — is very different from the kind of formality we find in physics, biochemistry, and neuroscience. We might assimilate it to some mathematical formalism, but this is an abstraction made concrete in subjective human experience, not in physical science. Perhaps this partly explains the fashionable anti-philosophy that I have written about.

. . . . .


. . . . .


. . . . .


L. E. J. Brouwer: philosopher of mathematics, mystic, and pessimistic social theorist

A message to the foundations of mathematics (FOM) listserv by Frank Waaldijk alerted me to the fact that today, 14 October 2012, is the one hundredth anniversary of Brouwer’s inaugural address at the University of Amsterdam, “Intuitionism and Formalism.” (I have discussed Frank Waaldijk earlier in P or Not-P and What is the Relationship Between Constructive and Non-Constructive Mathematics?)

I have called this post “One Hundred Years of Intuitionism and Formalism” but I should have called it “One Hundred Years of Intuitionism” since, of the three active contenders as theories for the foundations of mathematics a hundred years ago, only intuitionism is still with us in anything like its original form. The other contenders — formalism and logicism — are still with us, but in forms so different that they no longer resemble any kind of programmatic approach to the foundations of mathematics. In fact, it could be said that logicism was gradually transformed into technical foundational research, primarily logical in character, without any particular programmatic content, while formalism, following in a line of descent from Hilbert, has also been incrementally transformed into mainstream foundational research, but primarily mathematical in character, and also without any particular programmatic or even philosophical content.

The very idea of “foundations” has come to be questioned in the past hundred years — though, as I commented a few days ago in The Genealogy of the Technium, the early philosophical foundationalist programs continue to influence my own thinking — and we have seen that intuitionism has been able to make the transition from a foundationalist-inspired doctrine to a doctrine that might be called mathematical “best practices.” In contemporary philosophy of mathematics, one of the most influential schools of thought for the past couple of decades or more has been to focus not on theories of mathematics, but rather on mathematical practices. Sometimes this is called “neo-empiricism.”

Intuitionism, I think, has benefited from the shift from the theoretical to the practical in the philosophy of mathematics, since intuitionism was always about making a distinction between the acceptable and the unacceptable in logical principles, mathematical reasoning, proof procedures, and all those activities that are part of the mathematician’s daily bread and butter. This shift has also made it possible for intuitionism to distance itself from its foundationalist roots at a time when foundationalism is on the ropes.

Brouwer is due some honor for his prescience in formulating intuitionism a hundred years ago — and intuitionism came almost fully formed out of the mind of Brouwer, much as syllogistic logic came almost fully formed out of the mind of Aristotle — so I would like to celebrate Brouwer on this, the one hundredth anniversary of his inaugural address at the University of Amsterdam, in which he formulated so many of the central principles of intuitionism.

Brouwer was prescient in another sense as well. He ended his inaugural address with a quote from Poincaré that is well known in the foundationalist community, having been quoted in many works since:

“Les hommes ne s’entendent pas, parce qu’ils ne parlent pas la même langue et qu’il y a des langues qui ne s’apprennent pas.”

This might be (very imperfectly) translated into English as follows:

“Men do not understand each other because they do not speak the same language and there are languages that cannot be learned.”

What Poincaré called men not understanding each other, Kuhn would later and more famously call incommensurability. And while we have always known that men do not understand each other, it had been widely believed before Brouwer that at least mathematicians understood each other because they spoke the same universal language of mathematics. Brouwer said that his exposition revealed, “the fundamental issue, which divides the mathematical world.” A hundred years later the mathematical world is still divided.

For those who have not studied the foundations and philosophy of mathematics, it may come as a surprise that the past century, which has been so productive of research in advanced mathematics — arguably going beyond all the cumulative research in mathematics up to that time — has also been a century of conflict during which the ideas of mathematics as true, certain, and necessary — ideas that had been central to a core Platonic tradition of Western thought — have all been questioned and largely abandoned. It has been a raucous century for mathematics, but also a fruitful one. A clever mathematician with a good literary imagination could write a mathematical analogue of Mandeville’s Fable of the Bees in which it is precisely the polyglot disorder of the hive that made it thrive.

That core Platonic tradition of Western thought is now, even as I write these lines, dissipating just as the illusions of the philosopher, freed from the cave of shadows, dissipate in the light of the sun above.

Brouwer, like every revolutionary (and we recall that it was Weyl, who was sympathetic to Brouwer, who characterized Brouwer’s work as a revolution in mathematics), wanted to do away with an old, corrupt tradition and to replace it with something new and pure and edifying. But in the affairs of men, a revolution is rarely complete, and it is, far more often, the occasion of schism than conversion.

Many were converted by Brouwer; many are still being converted today. As I wrote above, intuitionism remains a force to be reckoned with in contemporary mathematical thought in a way that logicism and formalism cannot claim to be such a force. But the conversions and subsequent defections left a substantial portion of the mathematical community unconverted and faithful to the old ways. The tension and the conflict between the old ways and the new ways has been a source of creative inspiration.

Precisely that moment in history when the very nature of mathematics was called into question became the same moment in history when mathematics joined technology in exponential growth.

Mars is the true muse of men.

. . . . .

Mars, God of War and Muse of Men.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .


Euclid provided the model of formal thought with his axiomatization of geometry, but Euclid also, if perhaps unwittingly, provided the model of intuitive mathematical thought by his appeals to geometrical intuition.

Over the past few days I’ve posted several strictly theoretical pieces that have touched on geometrical intuition and what I have elsewhere called thinking against the grain: A Question for Philosophically Inclined Mathematicians, Fractals and the Banach-Tarski Paradox, and A visceral feeling for epsilon zero.

Benoît Mandelbrot rehabilitated geometrical intuition.

Not long previously, in my post commemorating the passing of Benoît Mandelbrot, I discussed the rehabilitation of geometrical intuition in the wake of Mandelbrot’s work. The late nineteenth and early twentieth century work in the foundations of mathematics largely made the progress that it did by consciously forswearing geometrical intuition and seeking instead logically rigorous foundations that made no appeal to our ability to visualize or conceive particular spatial relationships. Mandelbrot said that, “The eye had been banished out of science. The eye had been excommunicated.” He was right, but the logically motivated foundationalists were right also: we are misled by geometrical intuition at least as often as we are led rightly by it.

Kurt Gödel was part of the tradition of logically rigorous foundationalism, but he did not reject geometrical intuition on that account.

Geometrical intuition, while it suffered during a period of relative neglect, was never entirely banished, never excommunicated to the extent of being beyond rehabilitation. Even Gödel, who formulated his paradoxical theorems by employing the formal machinery of arithmetization, and who was therefore deeply indebted to the implicit critique of geometrical intuition, wrote: “I only wanted to show that an innate Euclidean geometrical intuition which refers to reality and is a priori valid is logically possible and compatible with the existence of non-Euclidean geometry and with relativity theory.” (Collected Papers, Vol. III, p. 255) This is, of course, to damn geometrical intuition by way of faint praise, but being damned by faint praise is not the same as being condemned (or excommunicated). Geometrical intuition was down, but not out.

As Gödel observed, even non-Euclidean geometries are compatible with Euclidean geometrical intuition. When non-Euclidean geometries were first formulated by Bolyai, Lobachevski, and Riemann (I suppose I should mention Gauss too), they were interpreted as a death-blow to geometrical intuition, but it became apparent as these discoveries were integrated into the body of mathematical knowledge that what the non-Euclidean geometries had done was not to falsify geometrical intuition by way of counter-example, but to extend geometrical intuition through further (and unexpected) examples. The development of mathematics here exhibits not Aristotelian logic but Hegelian dialectical logic: Euclidean geometry was the thesis, non-Euclidean geometry was the antithesis, and contemporary geometry, incorporating all of these discoveries, is the synthesis.

Bertrand Russell was a major player in extending the arithmetization of analysis by pursuing the logicization of arithmetic.

Bertrand Russell, who was central in the philosophical struggle to find rigorous logical formulations for mathematical theories that had previously rested on geometrical intuition, wrote: “A logical theory may be tested by its capacity for dealing with puzzles, and it is a wholesome plan, in thinking about logic, to stock the mind with as many puzzles as possible, since these serve much the same purpose as is served by experiments in physical science.” (from the famous “On Denoting” paper) Though Russell thought of this as a test of logical theories, it is also a wholesome plan to stock the mind with counter-intuitive geometrical examples. Non-Euclidean geometry greatly contributed to the expansion and extrapolation of geometrical intuition by providing novel examples toward which intuition can expand.

In the interest of offering exercises and examples for geometrical intuition, in Fractals and the Banach-Tarski Paradox I suggested the construction of a fractal by raising a cube on each side of a cube. I realized that if, instead of raising a cube, we sink a cube inside, it would make for an interesting pattern. With a cube of side length 3, six cubes indented into this cube, each of side length 1, would meet the other interior cubes along a single line.

If we continue this iteration, the smaller cubes inside (in the same proportion) would continue to meet along a single line. Iterated to infinity, I suspect that this would look interesting. I’m sure it’s already been done, but I don’t know the literature well enough to cite its previous incarnations.

The two-dimensional version of this fractal looks like a square version of the well-known Sierpinski triangle (that is, much like the Sierpinski carpet), and the pattern of fractal division is quite similar.
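For readers who want to experiment, the pattern of fractal division can be sketched computationally. What follows is a minimal sketch of the standard Sierpinski carpet rule (remove the middle ninth at every scale), offered as a point of comparison rather than an exact rendering of the indented-square construction above; the function names are my own.

```python
def filled(x, y):
    """True if cell (x, y) survives the Sierpinski carpet rule:
    a cell is removed if, at any base-3 scale, both of its
    coordinate digits are 1 (i.e. it lies in a middle ninth)."""
    while x > 0 or y > 0:
        if x % 3 == 1 and y % 3 == 1:
            return False
        x, y = x // 3, y // 3
    return True

def carpet(depth):
    """Render the depth-th iteration as rows of '#' and ' '."""
    n = 3 ** depth
    return ["".join("#" if filled(x, y) else " " for x in range(n))
            for y in range(n)]

for row in carpet(2):
    print(row)
```

Each iteration multiplies the number of filled cells by 8, so the depth-n grid retains 8^n of its 9^n cells, and the retained area shrinks toward zero even as the pattern of division grows ever more intricate.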

One particularly interesting counter-intuitive curiosity is the ability to construct a figure of infinite length starting from a finite area. If we take a finite square, cut it in half, and put the halves end-to-end, then cut one of the halves in half again and again put the pieces end-to-end, and iterate this process to infinity (as with a fractal construction, though this is not a fractal), we take the original finite area and stretch it out to an infinite length.

With a little cleverness we can make this infinite line constructed from a finite area extend infinitely in both directions by cutting up the square and distributing it differently. Notice that, with these constructions, the area remains exactly the same, unlike Banach-Tarski constructions in which additional space is “extracted” from a mathematical continuum (which could be of any dimension).
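The bookkeeping for this construction is easy to check. Here is a minimal sketch, assuming the simplest version of the cutting scheme: a unit square whose thinnest strip is halved at each step, with every strip laid end-to-end so that each contributes one unit of length. The function name is my own.

```python
from fractions import Fraction

def strips(n):
    """Heights of the strips after n halving cuts of a unit square:
    1/2, 1/4, ..., 1/2^n, 1/2^n. Each strip keeps its unit width,
    which becomes its length when the strips are laid end-to-end."""
    heights = [Fraction(1)]
    for _ in range(n):
        thinnest = heights.pop()
        heights += [thinnest / 2, thinnest / 2]
    return heights

heights = strips(10)
print("total length:", len(heights))   # one unit of length per strip
print("total area:", sum(heights))     # width 1 times height, summed
```

The total area is exactly 1 at every stage, while the end-to-end length after n cuts is n + 1 and so grows without bound, which is the point of the construction.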

Thinking of the two constructions above, it occurred to me that we might construct an interesting fractal from the second infinite line of finite area. This is unusual, because fractals usually aren’t constructed by rearranging areas in quite this way, but it is doable. We could take the middle third of each segment, cut it into three pieces, and assemble a “U” shaped construction in the middle of the segment. This process can be iterated with every segment, and the result would be a line that is infinite twice over: it would be infinite in extent, and it would be infinite between any two arbitrary points. This constitutes another sense in which we might construct an infinite fractal.
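The growth rate of this “U” replacement can be checked with a little arithmetic. If each segment becomes five segments of one third its length (which is how I read the replacement rule, and which matches the rule of the quadratic Koch curve), then the length multiplies by 5/3 at every iteration. A minimal sketch, with a function name of my own choosing:

```python
from fractions import Fraction

def u_iteration(n):
    """Segment count and total length after n iterations of the 'U'
    replacement, where one segment of length L becomes 5 segments
    of length L/3 (starting from a single segment of length 1)."""
    return 5 ** n, Fraction(5, 3) ** n

for n in range(4):
    segments, length = u_iteration(n)
    print(n, segments, length)
```

Since 5/3 > 1, the total length diverges; and because the same replacement occurs at every scale, the arc length between any two points of the limit curve is likewise infinite, which is the sense in which the line is infinite twice over.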

. . . . .

Fractals and Geometrical Intuition

1. Benoît Mandelbrot, R.I.P.

2. A Question for Philosophically Inclined Mathematicians

3. Fractals and the Banach-Tarski Paradox

4. A visceral feeling for epsilon zero

5. Adventures in Geometrical Intuition

6. A Note on Fractals and Banach-Tarski Extraction

7. Geometrical Intuition and Epistemic Space

. . . . .


