Saturday



An introduction to brinkmanship

The emerging consensus in the financial press is that Greece must default on its debt obligations. The question is no longer, “Will Greece default?” but rather, “When will Greece default?” After a pause, another question comes up: “How exactly will Greece default?” Will a Greek default mean Greece leaves the Eurozone (or, rather, the EMU, the European Monetary Union), or will Greece default and a way be found to keep the country in the EMU? The more these questions are followed by further questions, the more obvious it becomes that those asking them are seeking justifications and rationalizations to retain Greece within the Eurozone even as it defaults.

During the last episode of Greek default brinkmanship it became increasingly obvious that the powers that be would find a way to avoid Greek default and exit from the EMU (known by the ugly coinage “Grexit”). How do we know this? There was no significant shorting of the Euro in currency markets. Greek bonds took a hit, but they didn’t collapse. In the final analysis, no one really believed that anything dire would happen. Financial markets remained calm. Now that we are once again approaching the brink, and the drumbeat in the financial press is that Greece must default this time, again financial markets are mostly calm. The Euro is not plunging in value (the Euro is lower in value, but not at historic lows), and Greek bonds recently rallied on the assumption that the sidelining of Yanis Varoufakis would make negotiations easier. It seems, once again, that the conventional wisdom is that the worst will be avoided. In other words, a way will be found for Greece to default on its debt and to remain within the EMU so as to create the fewest waves in the markets.

There are at least two interesting things to notice about this process. The first is how far an institution (or institutions) can be pushed in a desired direction in order to obtain a desired result. The Eurozone is today a rather different entity than it was when the Eurozone treaties were drafted in the late 1990s and the Eurozone was only imagined. Today the Eurozone is at a crossroads, but as important as the crossroads is the long road behind it — a road of repeated and flagrant violations of the Maastricht criteria that were to govern the Eurozone, in which no nation-state has been held to account for its violations. In this context, the further violations required to keep Greece in the EMU after a default do not seem particularly outrageous today, though they would have seemed outrageous to those who drafted the Maastricht criteria.

The “convergence” that didn’t happen

Here a little history is in order, and not the history that you are likely to get from those tying themselves in knots to try to find ways not to put the Eurozone asunder. The conditions for accession to the EMU (also known as “convergence criteria”) are known as the “Maastricht criteria” (cf. Who can join and when?):

Price stability, to show inflation is controlled;

Soundness and sustainability of public finances, through limits on government borrowing and national debt to avoid excessive deficit;

Exchange-rate stability, through participation in the Exchange Rate Mechanism (ERM II) for at least two years without strong deviations from the ERM II central rate;

Long-term interest rates, to assess the durability of the convergence achieved by fulfilling the other criteria.

Of course, these are statements of general principle and not quantifiable economic measures, but the Eurozone also has stipulated quantifiable economic measures, and there is a lot of fine print involved in these stipulations.

It is now known and generally acknowledged that Greece did not meet the convergence criteria when it was admitted into the EMU. It doesn’t take much research to find the documentation on this, but you do have to have a memory that goes back more than ten years. Also cf. The politics of the Maastricht convergence criteria by Paul De Grauwe.

Plausible deniability for the Eurozone

To understand why Greece failed to meet the accession criteria but was admitted anyway, one must enter into the mindset of those laying the groundwork for the EMU. The Eurozone’s monetary union was viewed as a shoo-in for success, and getting in on the ground floor was seen as something of a coup for a marginal economy like Greece, which had hitched its wagon to a star. The people of Greece had only to sit back and watch their economy soar into the stratosphere, pulled along by the German and French economies. By allowing Greece into the EMU with a wink and a nod, the EU has plausible deniability when it comes to Greek entry into the Eurozone — their papers were in order, if falsified — but no one at the time really believed that Greece met the Maastricht criteria.

In all fairness, while the Eurozone did not enforce its own accession conditions for the entrance of Greece into the EMU, other nation-states within the Eurozone have repeatedly and routinely failed to meet Eurozone convergence criteria, and they have not been held to account. No consequences follow from running too large a budget deficit or allowing inflation to get out of hand. The individual economies within the Eurozone appear to enjoy complete impunity in regard to the convergence criteria. This is how the Eurozone has arrived at its present position, which is that of trying to find excuses to allow Greece to default while remaining within the institutional structure of the Eurozone and the EMU.

Cognitive bias as a guide to political economy

To return to the two things that I said above deserve notice in the present situation: the second is that, however far an institution (or institutions) can be pushed, there eventually comes a breaking point — the straw that breaks the camel’s back, as it were — and the real brinkmanship going on is not over whether Greece will default or whether Greece will leave the Eurozone, but over whether the Eurozone will push its institutions to the breaking point. I want to pause over this ancient problem of brinkmanship and breaking points, because recent scholarship can shed light on it in an unexpected way.

A good portion of Daniel Kahneman’s book about cognitive biases, Thinking, Fast and Slow (especially Part I, section 9, “Answering an Easier Question”), is devoted to cognitive biases in which we substitute for a difficult question an easier question that we know how to answer and to which we can give a definitive answer. I don’t think we can stress strongly enough how important (and how under-appreciated) this insight is in relation to economics and politics. All you have to do is read the reasoning of traders in volatile commodities, and review their elaborate justifications for investments that miss the point of the biggest questions, in order to see how profoundly this affects our world today. Because it is relatively easy to talk about quantitative measures of the economy, and what these have predicted in the past, but very difficult to say exactly when public discontent is rising to the point that an unprecedented disruption (or a revolution) is about to occur, it is not surprising that economists and politicians alike prefer to answer the easy question, and sometimes they even convince themselves that the easy question is the only question.

The theology of the insurance adjustor

Not to worry. Insurance companies are ready for such unprecedented events. I have often reflected on the theology of the insurance adjustor who must adjudicate between events anticipated by the language of a policy and those events not anticipated or predicted, and so come under the all-embracing umbrella of “Acts of God.” Wikipedia says that, “An act of God is a legal term for events outside human control, such as sudden natural disasters, for which no one can be held responsible.” This term of art from the insurance industry can paper over a multitude of sins and cognitive biases: deal with the easy problem you’ve substituted for the difficult problem, and then when the difficult problem asserts itself, call it an “Act of God” (or the political equivalent thereof).

If we are honest, we must admit that we do not know what will become of the Eurozone and the EMU. Trying to predict the future of an enterprise so large and so complex is like trying to predict the weather: we can say pretty well what will happen tomorrow, within certain parameters, but the farther we go into the future the more our models for predicting the future diverge, until at some point different models are making inconsistent if not antithetical predictions. This is the essence of a chaotic system, and financial markets and political communities are chaotic systems.
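To make the point about diverging predictions concrete, here is a minimal numerical sketch (my illustration, not part of the original argument) using the logistic map, a standard toy model of a chaotic system: two all but indistinguishable starting points are driven apart until the two “forecasts” bear no relation to one another.

```python
# Minimal sketch of sensitive dependence on initial conditions (illustrative only),
# using the logistic map x -> r*x*(1-x) as a stand-in for any chaotic system.
r = 4.0                          # parameter value in the chaotic regime
x_a, x_b = 0.300000, 0.300001    # two almost indistinguishable starting points

for step in range(1, 41):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        # the gap grows from about 1e-6 to order 1 within a few dozen steps
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.6f}")
```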

A political and not an economic union

The Eurozone is not fundamentally economic, but political. It is a political project masquerading as an economic project, and while diplomacy often requires masquerades, when the music stops and the ball comes to an end, the masks must come off. Because the Eurozone is a political project, the glosses on its presumed political meaning are legion. I have read accounts in reputable media claiming that it was the intention of the Eurozone that, once economic unification had started, member states would lurch from crisis to crisis, and these crises would force member states to surrender political sovereignty, thus slowly transforming the Eurozone into a political union — perhaps the political union it should have been from its inception. I wouldn’t go quite this far, but such an account at least understands that only political union would make possible the wealth transfers within the Eurozone that would make the EMU workable in the longer term.

Since there is no clear idea of what the Eurozone stands for, one cannot convict the Eurozone of hypocrisy or contradiction. And there is no question that the Eurozone can find some way for Greece to default and to remain within the Eurozone, but any such arrangement will have to accept that Greece will in no sense be an equal member of the Eurozone and the EMU. What, then, will Greece be?

What will become of Greece?

Quite some time ago I noted the possibility of “Euroization,” that is to say, the adoption of the Euro as a currency by a nation-state (or other political entity) not part of the EU, much less the EMU. There is precedent for this in dollarization — the use of the US dollar outside US territories. The Ecuadorian economy dollarized, and the Argentinian economy is partially dollarized, with real estate purchases traditionally transacted in US dollars and many of its financial instruments denominated in dollars.

If Greece defaults but remains within the EMU, it will become a de facto “Euroized” economy that employs the Euro as its currency, but which has little real participation in the European economy. The Greek economy is not large enough, even in its presumed implosion, to seriously threaten the economies of the other EMU nation-states. If Greece defaults and exits the EMU, both Greece and the remaining nation-states of the EMU will pass through a painful adjustment, but Greece would probably be better off than it would be languishing in perpetual twilight as the Euroized poor cousin of the EMU.

Some consequences of a Greek exit from the EMU are quite easy to guess. Tourism has been a major component of the Greek economy for some decades, and it is likely that most of the upmarket hotels patronized by foreign visitors will price their rooms in dollars or Euros, and in so doing a major sector of the Greek economy will take in hard, convertible foreign currencies. This alone will keep a substantial portion of the Greek economy in operation, even if no one wants to think of their country as nothing but a tourist destination. This is not at all unusual. Many hotels I have stayed at in South America price their rooms in dollars, and some will only take dollars. I especially noticed this in Argentina when I was there in 2010. Even as the Argentine economy stumbles under mismanagement, those who have a hotel that attracts foreign guests capable of paying in hard convertible currencies can do quite well in an economy desperate for dollars. The difference is that while the Greek economy can subsist, after a fashion, on tourism, agriculture was always the strength of the Argentinian economy, where tourism does not represent a substantial contribution to the overall economy.

. . . . .


Monday


Studies in Formalism:

The Synoptic Perspective in Formal Thought


In my previous two posts on the overview effect — The Epistemic Overview Effect and The Overview Effect as Perspective Taking — I discussed how we can take insights gained from the “overview effect” — what astronauts and cosmonauts have experienced as a result of seeing our planet whole — and apply them to other areas of human experience and knowledge. Here I would like to try to apply these insights to formal thought.

The overview effect is, above all, a visceral experience, something that the individual feels as much as they experience, and you may wonder how I could possibly find a connection between a visceral experience and formal thinking. Part of the problem here is simply the impression that formal thought is distant from human concerns, that it is cold, impersonal, unfeeling, and, in a sense, inhuman. Yet for logicians and mathematicians (and now, increasingly, also for computer scientists) formal thought is a passionate, living, and intimate engagement with the world. Truly enough, this is not an engagement with the concrete artifacts of the world, which are all essentially accidents due to historical contingency, but rather an engagement with the principles implicit in all things. Aristotle, ironically, formalized the idea of formal thought being bereft of human feeling when he asserted that mathematics has no ethos. I don’t agree, and I have discussed this Aristotelian perspective in The Ethos of Formal Thought.

And yet. Although Aristotle, as the father of logic, had more to do with the origins of formal thought than any other human being who has ever lived, the Aristotelian denial of an ethos to formal thought does not do justice to our intuitive and even visceral engagement with formal ideas. To get a sense of this visceral and intuitive engagement with the formal, let us consider G. H. Hardy.

Late in his career, the great mathematician G. H. Hardy struggled to characterize what he called mathematically significant ideas, which is to say, what makes an idea significant in formal thought. Hardy insisted that “real” mathematics, which he distinguished from “trivial” mathematics, and which presumably engages with mathematically significant ideas, involves:

“…a very high degree of unexpectedness, combined with inevitability and economy.”

G. H. Hardy, A Mathematician’s Apology, section 15

Hardy’s appeal to parsimony is unsurprising, yet the striking contrast of the unexpected and the inevitable is almost paradoxical. One is not surprised to hear an exposition of mathematics in deterministic terms, which is what inevitability is, but if mathematics is the working out of rigid formal rules of procedure (i.e., a mechanistic procedure), how could any part of it be unexpected? And yet it is. Moreover, as Hardy suggested, “deep” mathematical ideas (which we will explore below) are unexpected even when they appear inevitable and economical.

It would not be going too far to suggest that Hardy was trying his best to characterize mathematical beauty, or elegance, which is something that is uppermost in the mind of the pure mathematician. Well, uppermost at least in the minds of some pure mathematicians; Gödel, who was as pure a formal thinker as ever lived, said that “…after all, what interests the mathematician, in addition to drawing consequences from these assumptions, is what can be carried out” (Collected Works Volume III, Unpublished essays and lectures, Oxford, 1995, p. 377), which is an essentially pragmatic point of view, in which formal elegance would seem to play little part. Mathematical elegance has never been given a satisfactory formulation, and it is an irony of intellectual history that the most formal of disciplines relies crucially on an informal intuition of formal elegance. Beauty, it is often said, is in the mind of the beholder. Is this true also for mathematical beauty? Yes and no.

If a mathematically significant idea is inevitable, we should be able to anticipate it; if unexpected, it ought to elude all inevitability, since the inevitable ought to be predictable. One way to try to capture the ineffable sense of mathematical elegance is through paradox — here, the paradox of the inevitable and the unexpected — in a way not unlike the attempt to seek enlightenment through the contemplation of Zen koans. But Hardy was no mystic, so he persisted in his attempted explication of mathematically significant ideas in terms of discursive thought:

“There are two things at any rate which seem essential, a certain generality and a certain depth; but neither quality is easy to define at all precisely.”

G. H. Hardy, A Mathematician’s Apology, section 15

Although Hardy repeatedly expressed his dissatisfaction with his formulations of generality and depth, he nevertheless persisted in his attempts to clarify them. Of generality Hardy wrote:

“The idea should be one which is a constituent in many mathematical constructs, which is used in the proof of theorems of many different kinds. The theorem should be one which, even if stated originally (like Pythagoras’s theorem) in a quite special form, is capable of considerable extension and is typical of a whole class of theorems of its kind. The relations revealed by the proof should be such as to connect many different mathematical ideas.” (section 15)

And of mathematical depth Hardy hazarded:

“It seems that mathematical ideas are arranged somehow in strata, the ideas in each stratum being linked by a complex of relations both among themselves and with those above and below. The lower the stratum, the deeper (and in general more difficult) the idea.” (section 17)

This would account for the special difficulty of foundational ideas, of which the most renowned example would be the idea of sets, though there are other candidates to be found in other foundational efforts, as in category theory or reverse mathematics.

Hardy’s metaphor of mathematical depth suggests foundations, or a foundational approach to mathematical ideas (an approach which reached its zenith in the early twentieth century in the tripartite struggle over the foundations of mathematics, but is a tradition which has since fallen into disfavor). Depth, however, suggests the antithesis of a synoptic overview, although both the foundational perspective and the overview perspective seek overarching unification, one from the bottom up, the other from the top down. These perspectives — bottom up and top down — are significant, as I have used these motifs elsewhere as an intuitive shorthand for constructive and non-constructive perspectives respectively.

Few mathematicians in Hardy’s time had a principled commitment to constructive methods, and most employed non-constructive methods with little hesitation. Intuitionism was only then getting its start, and the full flowering of constructivistic schools of thought would come later. It could be argued that there is a “constructive” sense to Zermelo’s axiomatization of set theory, but this is of the variety that Gödel called “strictly nominalistic constructivism.” Here is Gödel’s attempt to draw a distinction between nominalistic constructivism and the sense of constructivism that has since overtaken the nominalistic conception:

“…the term ‘constructivistic’ in this paper is used for a strictly nominalistic kind of constructivism, such as that embodied in Russell’s ‘no class theory.’ Its meaning, therefore, is very different from that used in current discussions on the foundations of mathematics, i.e., from both ‘intuitionistically admissible’ and ‘constructive’ in the sense of the Hilbert School. Both these schools base their constructions on a mathematical intuition whose avoidance is exactly one of the principal aims of Russell’s constructivism… What, in Russell’s own opinion, can be obtained by his constructivism (which might better be called fictionalism) is the system of finite orders of the ramified hierarchy without the axiom of infinity for individuals…”

Kurt Gödel, Kurt Gödel: Collected Works: Volume II: Publications 1938-1974, Oxford et al.: Oxford University Press, 1990, “Russell’s Mathematical Logic (1944),” footnote, Author’s addition of 1964, expanded in 1972, p. 119

This profound ambiguity in the meaning of “constructivism” is a conceptual opportunity — there is more that lurks in this idea of formal construction than is apparent prima facie. That what Gödel calls a “strictly nominalistic kind of constructivism” coincides with what we would today call non-constructive thought demonstrates the very different conceptions of what it has meant to mathematicians (and other formal thinkers) to “construct” an object.

Kant, who is often called a proto-constructivist (though I have identified non-constructive elements in Kant’s thought in Kantian Non-Constructivism), does not invoke construction when he discusses formal entities, but instead formulates his thoughts in terms of exhibition. I think that this is an important difference (indeed, I have a long unfinished manuscript devoted to this). What Kant called “exhibition” later philosophers of mathematics came to call “surveyability” (“Übersichtlichkeit”). This latter term is especially due to Wittgenstein; Wittgenstein also uses “perspicuous” (“Übersehbar”). Notice that in both of the terms Wittgenstein employs for surveyability — Übersichtlichkeit and Übersehbar — we have “Über,” usually (or often, at least) translated as “over.” Sometimes “Über” is translated as “super,” as when Nietzsche’s Übermensch is translated as “superman” (although the term has also been translated as “over-man,” inter alia).

There is a difference between Kantian exhibition and Wittgensteinian surveyability — I don’t mean to conflate the two, or to suggest that Wittgenstein was simply following Kant, which he was not — but for the moment I want to focus on what they have in common, and what they have in common is the attempt to see matters whole, i.e., to take in the object of one’s thought in a single glance. In the actual practice of seeing matters whole it is a bit more complicated, especially since in English we commonly use “see” to mean “understand,” and there are a whole range of visual metaphors for understanding.

The range of possible meanings of “seeing” accounts for a great many of the different formulations of constructivism, which may distinguish between what is actually constructible in fact, that which it is feasible to construct (this use of “feasible” reminds me a bit of “not too large” in set theories based on the “limitation of size” principle, which is a purely conventional limitation), and that which can be constructed in theory, even if not constructible in fact, or not feasible to construct. What is “surveyable” depends on our conception of what we can see — what might be called the modalities of seeing, or the modalities of surveyability.

There is an interesting paper on surveyability by Edwin Coleman, “The surveyability of long proofs,” (available in Foundations of Science, 14, 1-2, 2009) which I recommend to the reader. I’m not going to discuss the central themes of Coleman’s paper (this would take me too far afield), but I will quote a passage:

“…the problem is with memory: ‘our undertaking’ will only be knowledge if all of it is present before the mind’s eye together, which any reliance on memory prevents. It is certainly true that many long proofs don’t satisfy Descartes-surveyability — nobody can sweep through the calculations in the four color theorem in the requisite way. Nor can anyone do it with either of the proofs of the Enormous Theorem or Fermat’s Last Theorem. In fact most proofs in real mathematics fail this test. If real proofs require this Cartesian gaze, then long proofs are not real proofs.”

Edwin Coleman, “The surveyability of long proofs,” in Foundations of Science, 14 (1-2), 2009

For Coleman, the received conception of surveyability is deceptive, but what I wanted to get across by quoting his paper was the connection to the Cartesian tradition, and to the role of memory in seeing matters whole.

The embodied facts of seeing, when seeing is understood as the biophysical process of perception, were a concern to Bertrand Russell in the construction of a mathematical logic adequate to the deduction of mathematics. In the Introduction to Principia Mathematica Russell wrote:

“The terseness of the symbolism enables a whole proposition to be represented to the eyesight as one whole, or at most in two or three parts divided where the natural breaks, represented in the symbolism, occur. This is a humble property, but is in fact very important in connection with the advantages enumerated under the heading.”

Bertrand Russell and Alfred North Whitehead, Principia Mathematica, Volume I, second edition, Cambridge: Cambridge University Press, 1963, p. 2

…and Russell elaborated…

“The adaptation of the rules of the symbolism to the processes of deduction aids the intuition in regions too abstract for the imagination readily to present to the mind the true relation between the ideas employed. For various collocations of symbols become familiar as representing important collocations of ideas; and in turn the possible relations — according to the rules of the symbolism — between these collocations of symbols become familiar, and these further collocations represent still more complicated relations between the abstract ideas. And thus the mind is finally led to construct trains of reasoning in regions of thought in which the imagination would be entirely unable to sustain itself without symbolic help.”

Loc. cit.

Thinking is difficult, and symbolization allows us to — mechanically — extend thinking into regions where thinking alone, without symbolic aid, would not be capable of penetrating. But that doesn’t mean symbolic thinking is easy. Elsewhere Russell develops another rationalization for symbolization:

“The fact is that symbolism is useful because it makes things difficult. (This is not true of the advanced parts of mathematics, but only of the beginnings.) What we wish to know is, what can be deduced from what. Now, in the beginnings, everything is self-evident; and it is very hard to see whether one self-evident proposition follows from another or not. Obviousness is always the enemy to correctness. Hence we invent some new and difficult symbolism, in which nothing seems obvious. Then we set up certain rules for operating on the symbols, and the whole thing becomes mechanical. In this way we find out what must be taken as premiss and what can be demonstrated or defined.”

Bertrand Russell, Mysticism and Logic, “Mathematics and the Metaphysicians”
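Russell’s observation that, once rules for operating on symbols are fixed, “the whole thing becomes mechanical” can be illustrated with a toy example (mine, not Russell’s): a brute-force truth-table check decides, by nothing more than symbol-crunching, whether a propositional formula is a tautology.

```python
# A toy illustration (not Russell's own symbolism) of reasoning made mechanical:
# brute-force truth tables decide whether a propositional formula is a tautology.
from itertools import product

def is_tautology(formula, variables):
    """formula maps a dict of truth values to True/False."""
    return all(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

# ((p -> q) and p) -> q, i.e. modus ponens, verified purely mechanically
modus_ponens = lambda v: not ((not v["p"] or v["q"]) and v["p"]) or v["q"]
print(is_tautology(modus_ponens, ["p", "q"]))                      # True

# p -> q on its own is not a tautology
print(is_tautology(lambda v: not v["p"] or v["q"], ["p", "q"]))    # False
```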

Russell formulated the difficulty of thinking even more strongly in a later passage:

“There is a good deal of importance to philosophy in the theory of symbolism, a good deal more than at one time I thought. I think the importance is almost entirely negative, i.e., the importance lies in the fact that unless you are fairly self conscious about symbols, unless you are fairly aware of the relation of the symbol to what it symbolizes, you will find yourself attributing to the thing properties which only belong to the symbol. That, of course, is especially likely in very abstract studies such as philosophical logic, because the subject-matter that you are supposed to be thinking of is so exceedingly difficult and elusive that any person who has ever tried to think about it knows you do not think about it except perhaps once in six months for half a minute. The rest of the time you think about the symbols, because they are tangible, but the thing you are supposed to be thinking about is fearfully difficult and one does not often manage to think about it. The really good philosopher is the one who does once in six months think about it for a minute. Bad philosophers never do.”

Bertrand Russell, Logic and Knowledge: Essays 1901-1950, 1956, “The Philosophy of Logical Atomism,” I. “Facts and Propositions,” p. 185

Alfred North Whitehead, coauthor of Principia Mathematica, made a similar point more colorfully than Russell, in a passage I recently quoted in The Algorithmization of the World:

“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle: they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”

Alfred North Whitehead, An Introduction to Mathematics, London: WILLIAMS & NORGATE, Chap. V, pp. 45-46

This quote from Whitehead follows a lesser known passage from the same work:

“…by the aid of symbolism, we can make transitions in reasoning almost mechanically by the eye, which otherwise would call into play the higher faculties of the brain.”

Alfred North Whitehead, An Introduction to Mathematics, London: WILLIAMS & NORGATE, Chap. V, p. 45

In other words, the brain is saved effort by mechanizing as much reason as can be mechanized. Of course, not everyone is capable of these kinds of mechanical deductions made possible by mathematical logic, which is especially difficult.

Recent scholarship has only served to underscore the difficulty of thinking, and the steps we must take to facilitate our thinking. Daniel Kahneman in particular has focused on the physiological effort involved in thinking. In Thinking, Fast and Slow, he distinguishes between two cognitive systems, which he calls System 1 and System 2: respectively, the faculty of the mind that responds immediately, on an intuitive or instinctual level, and the faculty of the mind that proceeds more methodically, according to rules:

Why call them System 1 and System 2 rather than the more descriptive “automatic system” and “effortful system”? The reason is simple: “Automatic system” takes longer to say than “System 1” and therefore takes more space in your working memory. This matters, because anything that occupies your working memory reduces your ability to think. You should treat “System 1” and “System 2” as nicknames, like Bob and Joe, identifying characters that you will get to know over the course of this book. The fictitious systems make it easier for me to think about judgment and choice, and will make it easier for you to understand what I say.

Daniel Kahneman, Thinking, Fast and Slow, New York: Farrar, Straus, and Giroux, Part I, Chap. 1

While such considerations do not appear to have explicitly concerned Russell, his concern for economy of thought implicitly embraced this idea. One’s ability to think must be facilitated in any way possible, including the shortening of names — in purely formal thought, symbolization dispenses with names altogether and contents itself with symbols only, usually introduced as letters.

Kahneman’s book, by the way, is a wonderful review of cognitive biases that cites many of the obvious but often unnoticed ways in which thought requires effort. For example, if you are walking along with someone and you ask them in mid-stride to solve a difficult mathematical problem — or, for that matter, any problem that taxes working memory — your companion is likely to come to a stop when focusing mental effort on the work of solving the problem. Probably everyone has had experiences like this, but Kahneman develops the consequences systematically, with very interesting results (creating what is now known as behavioral economics in the process).

Formal thought is among the most difficult forms of cognition ever pursued by human beings. How can we facilitate our ability to think within a framework of thought that taxes us so profoundly? It is the overview provided by the non-constructive perspective that makes it possible to take a “big picture” view of formal knowledge and formal thought, which is usually understood to be a matter entirely immersed in theoretical details and the minutiae of deduction and derivation. We must take an “Über” perspective in order to see formal thought whole. We have become accustomed to thinking of “surveyability” in constructivist terms, but it is just as valid in non-constructivist terms.

In P or not-P (as well as in subsequent posts concerned with constructivism, What is the relationship between constructive and non-constructive mathematics?, Intuitively Clear Slippery Concepts, and Kantian Non-constructivism) I surveyed constructivist and non-constructivist views of tertium non datur — the central logical principle at issue in the conflict between constructivism and non-constructivism — and suggested that constructivists and non-constructivists need each other, as each represents a distinct point of view on formal thought.
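To make the stakes of tertium non datur concrete, here is a standard textbook illustration (not drawn from the posts cited above) of a non-constructive existence proof that turns on the excluded middle:

```latex
% A standard illustration of tertium non datur at work in a
% non-constructive existence proof (textbook example, added for reference).
\textbf{Claim.} There exist irrational numbers $a, b$ such that $a^{b}$ is rational.

\textbf{Proof.} Let $x = \sqrt{2}^{\sqrt{2}}$. By tertium non datur, $x$ is either
rational or irrational. If $x$ is rational, take $a = b = \sqrt{2}$. If $x$ is
irrational, take $a = x$ and $b = \sqrt{2}$; then
\[
  a^{b} = \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}}
        = \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}}
        = \sqrt{2}^{\,2} = 2 .
\]
Either way the required pair exists, yet the proof never tells us which case
obtains -- exactly the feature to which the constructivist objects. \qed
```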

In P or not-P, cited above, I quoted French mathematician Alain Connes:

“Constructivism may be compared to mountain climbers who proudly scale a peak with their bare hands, and formalists to climbers who permit themselves the luxury of hiring a helicopter to fly over the summit …the uncountable axiom of choice gives an aerial view of mathematical reality — inevitably, therefore, a simplified view.”

Conversations on Mind, Matter, and Mathematics, Changeux and Connes, Princeton, 1995, pp. 42-43

In several posts I have taken up this theme of Alain Connes and have spoken of the non-constructive perspective (which Connes calls “formalist”) as being top-down and the constructive perspective as being bottom-up. In particular, in The Epistemic Overview Effect I argued that in addition to the possibility of a spatial overview (the world entire seen from space) and a temporal overview (history seen entire, after the manner of Big History), there is an epistemic overview, that is to say, an overview of knowledge, perhaps even the totality of knowledge.

If we think of those mathematical equations that have become famous enough to be known outside mathematics and physics — as well as some that should be more widely known, but are not, like the generalized continuum hypothesis and the expression of epsilon zero — they all have not only the succinctness that Russell noted in the quotes above in regard to symbolism, but also many of the qualities that G. H. Hardy ascribed to what he called mathematically significant ideas.
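For reference, the two formulas mentioned parenthetically above can be stated quite compactly — an instance of the very succinctness at issue:

```latex
% Compact statements of the two formulas mentioned above (added for reference).
% Generalized continuum hypothesis (GCH):
\[
  2^{\aleph_{\alpha}} = \aleph_{\alpha + 1} \quad \text{for every ordinal } \alpha .
\]
% Epsilon zero, the least fixed point of ordinal exponentiation with base omega:
\[
  \varepsilon_{0} \;=\; \min\{\varepsilon : \omega^{\varepsilon} = \varepsilon\}
                  \;=\; \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\} .
\]
```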

It is primarily non-constructive modes of thought that give us a formal overview and which make it possible for us to engage with mathematically significant ideas, and, more generally, with formally significant ideas.

. . . . .

Note added Monday 26 October 2015: I have written more about the above in Brief Addendum on the Overview Effect in Formal Thought.

. . . . .

Formal thought begins with Greek mathematics and Aristotle’s logic.

. . . . .

Studies in Formalism

1. The Ethos of Formal Thought

2. Epistemic Hubris

3. Parsimonious Formulations

4. Foucault’s Formalism

5. Cartesian Formalism

6. Doing Justice to Our Intuitions: A 10 Step Method

7. The Church-Turing Thesis and the Asymmetry of Intuition

8. Unpacking an Einstein Aphorism

9. The Overview Effect in Formal Thought

10. Einstein on Geometrical intuition

11. Methodological and Ontological Parsimony (in preparation)

12. The Spirit of Formalism (in preparation)

. . . . .

Wittgenstein’s Tractatus Logico-Philosophicus was part of an early twentieth century efflorescence of formal thinking focused on logic and mathematics.

. . . . .
