30 January 2013
F. H. Bradley, in his classic treatise Appearance and Reality: A Metaphysical Essay, made this oft-quoted comment:
“If you identify the Absolute with God, that is not the God of religion. If again you separate them, God becomes a finite factor in the Whole. And the effort of religion is to put an end to, and break down, this relation — a relation which, none the less, it essentially presupposes. Hence, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him. It is this difficulty which appears in the problem of the religious self-consciousness.”
I think many commentators have taken this passage as emblematic of what they believe to be Bradley’s religious sentimentalism, and in fact the yearning for religious belief (no longer possible for rational men) that characterized much of the school of thought that we now call “British Idealism.”
This is not my interpretation. I’ve read enough Bradley to know that he was no sentimentalist, and while his philosophy diverges radically from contemporary philosophy, he was committed to a philosophical, and not a religious, point of view.
Bradley was an elder contemporary of Bertrand Russell, who characterized Bradley as the grand old man of British idealism. This is from Russell’s Our Knowledge of the External World:
“The nature of the philosophy embodied in the classical tradition may be made clearer by taking a particular exponent as an illustration. For this purpose, let us consider for a moment the doctrines of Mr Bradley, who is probably the most distinguished living representative of this school. Mr Bradley’s Appearance and Reality is a book consisting of two parts, the first called Appearance, the second Reality. The first part examines and condemns almost all that makes up our everyday world: things and qualities, relations, space and time, change, causation, activity, the self. All these, though in some sense facts which qualify reality, are not real as they appear. What is real is one single, indivisible, timeless whole, called the Absolute, which is in some sense spiritual, but does not consist of souls, or of thought and will as we know them. And all this is established by abstract logical reasoning professing to find self-contradictions in the categories condemned as mere appearance, and to leave no tenable alternative to the kind of Absolute which is finally affirmed to be real.”
Bertrand Russell, Our Knowledge of the External World, Chapter I, “Current Tendencies”
Although Russell rejected what he called the classical tradition, and distinguished himself in contributing to the origins of a new philosophical school that would come (in time) to be called analytical philosophy, the influence of figures like F. H. Bradley and J. M. E. McTaggart (whom Russell knew personally) can still be found in Russell’s philosophy.
In fact, the above quote from F. H. Bradley — especially the portion most quoted, “short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him” — is a perfect illustration of a principle found in Russell, and something on which I have quoted Russell many times, as it has been a significant influence on my own thinking.
I have come to refer to this principle as Russell’s generalization imperative. Russell didn’t call it this (the terminology is mine), and he didn’t in fact give any name at all to the principle, but he implicitly employs this principle throughout his philosophical method. Here is how Russell himself formulated the imperative (which I last quoted in The Genealogy of the Technium):
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
One of the distinctive features that Russell identifies as constitutive of the classical tradition, and in fact one of the few explicit commonalities between the classical tradition and Russell’s own thought, was the denial of time. The British idealists denied the reality of time outright, in the best Platonic tradition; Russell did not deny the reality of time, but he was explicit about not taking time too seriously.
Despite Russell’s hostility to mysticism as expressed in his famous essay “Mysticism and Logic,” when it comes to the mystic’s denial of time, Russell softens a bit and shows his sympathy for this particular aspect of mysticism:
“Past and future must be acknowledged to be as real as the present, and a certain emancipation from slavery to time is essential to philosophic thought. The importance of time is rather practical than theoretical, rather in relation to our desires than in relation to truth. A truer image of the world, I think, is obtained by picturing things as entering into the stream of time from an eternal world outside, than from a view which regards time as the devouring tyrant of all that is. Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
“…impartiality of contemplation is, in the intellectual sphere, that very same virtue of disinterestedness which, in the sphere of action, appears as justice and unselfishness. Whoever wishes to see the world truly, to rise in thought above the tyranny of practical desires, must learn to overcome the difference of attitude towards past and future, and to survey the whole stream of time in one comprehensive vision.”
Bertrand Russell, Mysticism and Logic, and Other Essays, Chapter I, “Mysticism and Logic”
While Russell and the classical tradition in philosophy both perpetuated the devalorization of time, this attitude is slowly disappearing from philosophy, and contemporary philosophers are more and more treating time as another reality to be given philosophical exposition rather than denying its reality. I regard this as a salutary development and a riposte to all who claim that philosophy makes no advances. Contemporary philosophy of time is quite sophisticated, and embodies a much more honest attitude to the world than the denial of time. (For those looking at philosophy from the outside, the denial of the reality of time simply sounds like a perverse waste of time, but I won’t go into that here.)
In any case, we can bring Russell’s generalization imperative to time and history even if Russell himself did not do so. That is to say, we ought to generalize to the utmost in our conception of time, and if we do so, we come to a principle parallel to Bradley’s that I think both Russell and Bradley would have endorsed: short of the Absolute, time cannot rest, and, having reached that goal, time is lost and history with it.
I don’t agree with this, though it would be one logical extrapolation of Russell’s generalization imperative as applied to time; this suggests to me that there is more than one way to generalize about time. One way would be the kind of generalization that I formulated above, presumably consistent with Russell’s and Bradley’s devalorization of time. Time generalized in this way becomes a whole, a totality, that ceases to possess the distinctive properties of time as we experience it.
The other way to generalize time is, I think, in accord with the spirit of Big History: here Russell’s generalization imperative takes the form of embedding all times within larger, more comprehensive times, until we reach the time of the entire universe (or beyond). The science of time, as it is emerging today, demands that we seek the most comprehensive temporal perspective, placing human action in evolutionary context, placing evolution in biological context, placing biology in geomorphological context, placing terrestrial geomorphology into a planetary context, and placing this planetary perspective into a cosmological context. This, too, is a kind of generalization, and a generalization that fully feels the force of the imperative that to stop at any particular “level” of time (which I have elsewhere called ecological temporality) is arbitrary.
On my other blog I’ve written several posts related directly or obliquely to Big History as I try to define my own approach to this emerging school of historiography: The Place of Bilateral Symmetry in the History of Life, The Archaeology of Cosmology, and The Stars Down to Earth.
The more we pursue the rapidly growing body of knowledge revealed by scientific historiography, the more we find that we are part of the larger universe; our connections to the world expand as we pursue them outward in pursuit of Russell’s generalization imperative. I think it was Hans Blumenberg, in his enormous book The Genesis of the Copernican World, who remarked on the significance of the fact that we can stand with our feet on the earth and look up at the stars. As I remarked in The Archaeology of Cosmology, we now find that by digging into the earth we can reveal past events of cosmological history. As a celestial counterpart to this digging in the earth (almost as though concretely embodying the contrast to which Blumenberg referred), we know that by looking up at the stars, we are also looking back in time, because the light that comes to us reaches us ages after it was produced. Thus is astronomy a kind of luminous archaeology.
In Geometrical Intuition and Epistemic Space I wrote, “…we have no science of time. We have science-like measurements of time, and time as a concept in scientific theories, but no scientific theory of time as such.” Scientists have tried to think scientifically about time, but a science of time eludes us just as a science of consciousness eludes us. Here a philosophical perspective remains necessary because there are so many open questions and no clear indication of how these questions are to be answered in a clearly scientific spirit.
Therefore I think it is too early to say exactly what Big History is, because we aren’t logically or intellectually prepared to say exactly what the Russellian generalization imperative yields when applied to time and history. I think that we are approaching a point at which we can clarify our concepts of time and history, but we aren’t quite there yet, and a lot of conceptual work is necessary before we can produce a definitive formulation of time and history that will make of Big History the science that it aspires to be.
. . . . .
. . . . .
. . . . .
23 November 2012
What is the Church-Turing Thesis? The Church-Turing Thesis is an idea from theoretical computer science, emerging from research in the foundations of logic and mathematics, that ultimately bears upon what can be computed, and thus, by extension, upon what a computer can and cannot do. It is also called Church’s Thesis, Church’s Conjecture, and the Church-Turing Conjecture, among other names.
Note: For clarity’s sake, I ought to point out that Church’s Thesis and Church’s Theorem are distinct. Church’s Theorem is an established theorem of mathematical logic, proved by Alonzo Church in 1936, that there is no decision procedure for logic (i.e., there is no method for determining whether an arbitrary formula in first order logic is a theorem). But the two – Church’s theorem and Church’s thesis – are related: both follow from the exploration of the possibilities and limitations of formal systems and the attempt to define these in a rigorous way.
Even to state Church’s Thesis is controversial. There are many formulations, and many of these alternative formulations come straight from Church and Turing themselves, who framed the idea differently in different contexts. Also, when you hear computer science types discuss the Church-Turing thesis you might think that it is something like an engineering problem, but it is essentially a philosophical idea. What the Church-Turing thesis is not is as important as what it is: it is not a theorem of mathematical logic, it is not a law of nature, and it is not a limit of engineering. We could say that it is a principle, because the word “principle” is ambiguous and thus covers the various formulations of the thesis.
There is an article on the Church-Turing Thesis at the Stanford Encyclopedia of Philosophy, one at Wikipedia (of course), and even a website dedicated to a critique of the Stanford article, Alan Turing in the Stanford Encyclopedia of Philosophy. All of these are valuable resources on the Church-Turing Thesis, and well worth reading to gain some orientation.
One way to formulate Church’s Thesis is that all effectively computable functions are general recursive. Both “effectively computable functions” and “general recursive” are technical terms, but there is an important difference between these technical terms: “effectively computable” is an intuitive conception, whereas “general recursive” is a formal conception. Thus one way to understand Church’s Thesis is that it asserts the identity of a formal idea and an informal idea.
One of the reasons that there are many alternative formulations of the Church-Turing thesis is that there are several formally equivalent formulations of recursiveness: recursive functions, Turing computable functions, Post computable functions, representable functions, lambda-definable functions, and Markov normal algorithms among them. All of these are formal conceptions that can be rigorously defined. For the other term that constitutes the identity that is Church’s thesis, there are also several alternative formulations of effectively computable functions, and these include other intuitive notions like that of an algorithm or a procedure that can be implemented mechanically.
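The equivalence of these formalisms can be put in miniature. The following sketch is my own illustration, not drawn from Church or Turing: it defines the same effectively computable function twice, once by recursion equations and once by an explicit step-by-step loop in the spirit of machine computation, and checks that the two agree on every input we can test.

```python
# Two formally distinct routes to the same effectively computable function.
# Illustrative sketch only; the function names are my own.

def factorial_recursive(n: int) -> int:
    """Factorial defined by recursion equations: 0! = 1, (n+1)! = (n+1) * n!."""
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_machine(n: int) -> int:
    """The same function computed by an explicit step-by-step process:
    a loop updating a single register, in the spirit of machine computation."""
    acc = 1
    for i in range(1, n + 1):
        acc *= i
    return acc

# The two formalizations agree on every input we check.
assert all(factorial_recursive(k) == factorial_machine(k) for k in range(10))
```

No finite run of checks proves the two definitions identical, of course; the conviction that every such formalization picks out the same class of functions, and that this class coincides with the intuitively computable, is precisely what Church’s Thesis asserts and what no test can demonstrate.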
These may seem like recondite matters with little or no relationship to ordinary human experience, but I am surprised how often I find the same theoretical conflict played out in the most ordinary and familiar contexts. The dialectic of the formal and the informal (i.e., the intuitive) is much more central to human experience than is generally recognized. For example, the conflict between intuitively apprehended free will and apparently scientifically unimpeachable determinism is a conflict between an intuitive and a formal conception that both seem to characterize human life. Compatibilist accounts of determinism and free will may be considered the “Church’s thesis” of human action, asserting the identity of the two.
It should be understood here that when I discuss intuition in this context I am talking about the kind of mathematical intuition I discussed in Adventures in Geometrical Intuition, although mathematical intuition can be understood as perhaps the narrowest formulation of the intuition that stands as the polar concept in opposition to formalism. Kant made a useful distinction between sensory intuition and intellectual intuition that helps to clarify what is intended here, since the very idea of intuition in the Kantian sense has become lost in recent thought. Once we think of intuition as something given to us in the same way that sensory intuition is given to us, only without the mediation of the senses, we come closer to the operative idea of intuition as it is employed in mathematics.
Mathematical thought, and formal accounts of experience generally speaking, of course, seek to capture our intuitions, but this formal capture of the intuitive is itself an intuitive and essentially creative process even when it culminates in the formulation of a formal system that is essentially inaccessible to intuition (at least in parts of that formal system). What this means is that intuition can know itself, and know itself to be an intuitive grasp of some truth, but formality can only know itself as formality and cannot cross over the intuitive-formal divide in order to grasp the intuitive even when it captures intuition in an intuitively satisfying way. We cannot even understand the idea of an intuitively satisfying formalization without an intuitive grasp of all the relevant elements. Just as Spinoza said that the true is the criterion both of itself and of the false, we can say that the intuitive is the criterion both of itself and of the formal. (And given that, today, truth is primarily understood formally, this is a significant claim to make.)
The above observation can be formulated as a general principle such that the intuitive can grasp all of the intuitive and a portion of the formal, whereas the formal can grasp only itself. I will refer to this as the principle of the asymmetry of intuition. We can see this principle operative both in the Church-Turing Thesis and in popular accounts of Gödel’s theorem. We are all familiar with popular and intuitive accounts of Gödel’s theorem (since the formal accounts are so difficult), and it is not unusual to make claims for the limitative theorems that go far beyond what they formally demonstrate.
All of this holds also for the attempt to translate traditional philosophical concepts into scientific terms — the most obvious example being free will, supposedly accounted for by physics, biochemistry, and neurobiology. But if one makes the claim that consciousness is nothing but such-and-such physical phenomenon, it is impossible to cash out this claim in any robust way. The science is quantifiable and formalizable, but our concepts of mind, consciousness, and free will remain stubbornly intuitive and have not been satisfyingly captured in any formalism — the adequacy of any such formalization could only be determined by intuition and therefore eludes any formal capture. To “prove” determinism, then, would be as incoherent as “proving” Church’s Thesis in any robust sense.
There certainly are interesting philosophical arguments on both sides of Church’s Thesis — that is to say, both its denial and its affirmation — but these are arguments that appeal to our intuitions and, most crucially, our idea of ourselves is intuitive and informal. I should like to go further and to assert that the idea of the self must be intuitive and cannot be otherwise, but I am not fully confident that this is the case. Human nature can change, albeit slowly, along with the human condition, and we could, over time — and especially under the selective pressures of industrial-technological civilization — shape ourselves after the model of a formal conception of the self. (In fact, I think it very likely that this is happening.)
I cannot even say — I would not know where to begin — what would constitute a formal self-understanding of the self, much less any kind of understanding of a formal self. Well, maybe that is not quite true. I have written elsewhere that the doctrine of the punctiform present (not very popular among philosophers these days, I might add) is a formal doctrine of time, and in so far as we identify internal time consciousness with the punctiform present we have a formal doctrine of the self.
While the above account is one to which I am sympathetic, this kind of formal concept — I mean the punctiform present as a formal conception of time — is very different from the kind of formality we find in physics, biochemistry, and neuroscience. We might assimilate it to some mathematical formalism, but this is an abstraction made concrete in subjective human experience, not in physical science. Perhaps this partly explains the fashionable anti-philosophy that I have written about.
. . . . .
. . . . .
. . . . .
23 July 2012
Even if you know what to look for, it is quite difficult to pick out the Urnes stave church from across the fjord at Solvorn, where a small ferry departs each hour on the hour to take tourists and a few cars and bicycles across Sognefjord over to the Urnes side (also spelled “Ornes”). Once across, you walk up the hill to the top of the village, and there sits the Urnes stave church among trees and the cultivated hillsides, just as it has been sitting for more than 800 years. This is the second time I have been to Urnes, and I was unable to see the stave church from across the fjord; perhaps if I had had binoculars I would have seen it, but it melds into the landscape from which it came.
Looking back to Solvorn from the top of the hill at Urnes, standing next to this ancient wooden structure, little changed from when it was built — Urnes is thought to be the oldest of the surviving stave churches, with timbers dating from 1129-1130 (thanks to dendrochronology) — it is very easy to imagine the villagers of Solvorn getting into the wooden boats, rowing across the fjord, and walking up the hill to attend services in their ancient church. We often hear the phrase “time stands still” — at Urnes, you can stand still along with time for a few moments. Here, history has been paused.
In so saying that history is paused at Urnes I am reminded of a passage from Rembrandt and Spinoza by Leo Balet, which I quoted previously in Capturing the Moment:
“In those of his portraits where the portrayed is not acting, but just resting, pausing, we get the feeling that the resting continues, that it is a resting with duration, a resting, thus, in time; in those pictures we are closer to life than in the portraits where just the breaking off of the action makes us so vividly aware that his whole action was make-believe.”
Leo Balet, Rembrandt and Spinoza, p. 184
Balet here frames his thesis in terms of portraiture, but the same might be said of a photograph or a sculpture — or even of a place that changes but little over the years. Urnes is such a place, and, in fact, there are many such places in Norway. Yesterday in A Wittgensteinian Pilgrimage I noted how Wittgenstein’s correspondents in Skjolden often closed their letters with, “All is as before here” (“Her er det som før”). In Skjolden, too, time is paused.
Similarly, the busyness of the world appears to us as mere make-believe when seen from the perennial perspective of unchanging continuity in time. Our hurried and harassed lives seem mindless and perhaps a bit comical when compared to forms of life that endure — or, to put it otherwise, compared to modes of life that enjoy historical viability.
I have elsewhere defined historical viability as the ability of an existent to endure in existence by changing as the world changes; now I realize that the world changes in different ways at different times and places, so that historical viability is a local phenomenon that is subject to conditions closely similar to natural selection — existents are selected for historical viability not by being “better” or “higher” or “superior” or “perfect,” but by being the most suited to their environment. In the present context, “environment” should be understood as the temporal or historical environment of a historical existent — with this in mind, a more subtle form of the principle of historical viability begins to emerge.
. . . . .
. . . . .
. . . . .
. . . . .
3 July 2012
Each time the Eurozone puts together another bailout package, the markets follow with a brief (sometimes very brief) rally, which collapses pretty much as soon as reality reasserts itself and it becomes obvious that most of the measures constitute creative ways of kicking the can down the road, while those measures that are more than kicking the can down the road are probably too ambitious to be practical policies in the midst of a financial crisis.
Simply from a practical point of view, it is difficult to imagine how anyone can believe that a more comprehensive fiscal and political union can be brought about in the midst of the crisis, however well intentioned as a way of saving the Eurozone, since the original (and much more limited) Eurozone was negotiated, planned, and implemented over a period of many years, not over a period of a few days as inter-bank loan rates are climbing by the hour. Apart from this practical problem, there are several issues of principle at stake in the Eurozone crisis and the attempts to rescue the European Monetary Union.
Mario Monti was quoted in a Reuters article, Monti says EU hinges on summit talks outcome: report, defending stronger financial and political ties within the Eurozone as a way to save the Euro:
“Europeans know where they’re going… the markets are convinced that having given birth to the euro, the will to make it indissoluble and irrevocable is there and will be strengthened by other steps towards integration.”
Can the Euro be made “indissoluble and irrevocable”? Can anything be made indissoluble and irrevocable? I think not, and this is a matter of principle to which I attach great importance.
I have several times quoted Edward Gibbon on the impossibility of present legislators binding the acts of future legislators:
“In earthly affairs, it is not easy to conceive how an assembly of legislators can bind their successors invested with powers equal to their own.”
Edward Gibbon, History of the Decline and Fall of the Roman Empire, Vol. VI, Chapter LXVI, “Union Of The Greek And Latin Churches.–Part III.”
Since I have quoted this several times (in The Imperative of Regime Survival, The Institution of Language, and The Chilean Model, e.g.), implicitly maintaining that it states an important principle, I am now going to give this principle a name: Gibbon’s Principle of Inalienable Autonomy for Political Entities, or, more briefly, Gibbon’s Principle.
As I have tried to make explicit, Gibbon’s Principle holds for political entities, but I have also quoted a passage from Sartre that presents essentially the same idea for individuals rather than for political entities:
“I cannot count upon men whom I do not know, I cannot base my confidence upon human goodness or upon man’s interest in the good of society, seeing that man is free and that there is no human nature which I can take as foundational. I do not know where the Russian revolution will lead. I can admire it and take it as an example in so far as it is evident, today, that the proletariat plays a part in Russia which it has attained in no other nation. But I cannot affirm that this will necessarily lead to the triumph of the proletariat: I must confine myself to what I can see. Nor can I be sure that comrades-in-arms will take up my work after my death and carry it to the maximum perfection, seeing that those men are free agents and will freely decide, tomorrow, what man is then to be. Tomorrow, after my death, some men may decide to establish Fascism, and the others may be so cowardly or so slack as to let them do so. If so, Fascism will then be the truth of man, and so much the worse for us. In reality, things will be such as men have decided they shall be. Does that mean that I should abandon myself to quietism? No. First I ought to commit myself and then act my commitment, according to the time-honoured formula that “one need not hope in order to undertake one’s work.” Nor does this mean that I should not belong to a party, but only that I should be without illusion and that I should do what I can. For instance, if I ask myself ‘Will the social ideal as such, ever become a reality?’ I cannot tell, I only know that whatever may be in my power to make it so, I shall do; beyond that, I can count upon nothing.”
Jean-Paul Sartre, “Existentialism is a Humanism” (lecture from 1946, translated by Philip Mairet)
This I will now also name with a principle: Sartre’s Principle of Inalienable Autonomy for Individuals, or, more briefly, Sartre’s Principle.
If that weren’t already enough principles for today, I’m going to formulate another principle, and although this one is my own, I’m not going to name it after myself after the fashion of the names I’ve given to Gibbon’s Principle and Sartre’s Principle. This additional principle is The Principle of the Political Primacy of the Individual (admittedly awkward — I will try to think of a better name for this): political autonomy is predicated upon individual autonomy. In other words, Gibbon’s Principle carries the force that it does because of Sartre’s Principle, and this makes Sartre’s Principle the more fundamental.
At present I am not going to argue for The Principle of the Political Primacy of the Individual; I will simply assume that Gibbon’s Principle supervenes upon Sartre’s Principle. But I want to make clear that I understand that there are those who would reject this principle, and that there are arguments on both sides of the question. There is no established literature on this principle so far as I know, as I am not aware that anyone has previously formulated it in an explicit form, but I can easily imagine arguments taken from classic sources that bear on both sides of the principle (i.e., its affirmation or its denial).
Because, as Sartre said, “men are free agents and will freely decide,” the Euro cannot be made “indissoluble and irrevocable,” and the attempt to make it seem so is pure folly. For in order to maintain this appearance, we must be dishonest with ourselves; we must make claims and assertions that we know to be false. This cannot be a robust foundation for any political effort. If, tomorrow, a deeper economic and political union of the Eurozone becomes the truth of Europe, this does not mean that the day after tomorrow it will remain the truth of Europe.
And this brings us to yet another principle, and this principle is a negative formulation of a principle that I have formulated in the past, the principle of historical viability. According to the principle of historical viability, an existent must change as the world changes or it will be eliminated from history. This means that entities that remain in existence must be so malleable that they can change in their essence, for if they fail to change, they experience adverse selection.
A negative formulation of the principle of historical viability might be called the principle of historical calamity: any existent so constituted that it cannot change is doomed to extinction, and sooner rather than later. In other words, any effort that is made to make the Euro “indissoluble and irrevocable” not only will fail to make the Euro indissoluble and irrevocable, but will in fact make the Euro all the more vulnerable to historical forces that would destroy it.
When I previously discussed Gibbon’s Principle and Sartre’s Principle (before I had named these principles as such) in The Imperative of Regime Survival, I cited an effort in Cuba to incorporate Castro’s vision of Cuba’s socio-economic system into the constitution as a permanent feature of the government of Cuba that would presumably hold until the end of time. This would be laughable were it not the source of so much human suffering and misery.
Well, the Europeans aren’t imposing any misery on themselves on the level of that which has been imposed upon the Cuban people by their elites, but the folly in each class of elites is essentially the same: the belief that those in power today, at the present moment, are in a privileged position to dictate the only correct institutional model for all time and eternity. In other words, the End of History has arrived.
Why not make the Euro an open, flexible, and malleable institution that can respond to political, social, economic, and demographic changes? Sir Karl Popper famously wrote about The Open Society and its Enemies — ought not an open society to have open institutions? And would not open institutions be those that are formulated with an eye toward the continuous evolution in the light of further and future experience?
To deny Gibbon’s Principle and Sartre’s Principle is to count oneself among the enemies of open societies and open institutions.
. . . . .
. . . . .
. . . . .
3 June 2012
Science often (though not always or exclusively) involves a quantitative approach to phenomena. As the phenomena of the world are often (though not always or exclusively) continuous, the continuum of phenomena must be broken up into discrete chunks of experience, however imperfect the division. If we are to quantify knowledge, we must have distinctions, and distinctions must be interpolated at some particular point in a continuum.
The truncation principle is the principled justification of this practice, and the truncation fallacy is the claim that distinctions in the name of quantification are illegitimate. The claim of the illegitimacy of a given distinction is usually based on an ideal standard of distinctions having to be based on a sharply-bounded concept that marks an exhaustive division that admits of no exceptions. This is an unreasonable standard for human experience or its systematization in scientific knowledge.
One of my motivations (though not my only motivation) for formulating the truncation principle was the obvious application to historical periodization. Historians have always been forced to confront the truncation fallacy, though I am not aware that there has previously been any name for the conceptual problems involved in historical periodization, even though they have been ever-present in the background of historical thought.
Here is an implicit exposition of the problems of the truncation principle by Marc Bloch, one of the most eminent members of the Annales school of historians (which also included Fernand Braudel, of whom I have written on many occasions), and who was killed by the Gestapo while working for the French resistance during the Second World War:
“…it is difficult to imagine that any of the sciences could treat time as a mere abstraction. Yet, for a great number of those who, for their own purposes, chop it up into arbitrary homogenous segments, time is nothing more than a measurement. In contrast, historical time is a concrete and living reality with an irreversible onward rush… this real time is, in essence, a continuum. It is also perpetual change. The great problems of historical inquiry derive from the antithesis of these two attributes. There is one problem especially, which raises the very raison d’être of our studies. Let us assume two consecutive periods taken out of the uninterrupted sequence of the ages. To what extent does the connection which the flow of time sets between them predominate, or fail to predominate, over the differences born out of the same flow?”
Marc Bloch, The Historian’s Craft, translated by Peter Putnam, New York: Vintage, 1953, Chapter I, sec. 3, “Historical Time,” pp. 27-29
Bloch, then, sees time itself, the structure of time, as the source both of historical continuity and historical discontinuity. For Bloch the historian, time is the truncation principle, as for some metaphysicians space (or time, for that matter) simply is the principle of individuation.
The truncation principle and the principle of individuation are closely related. What makes an individual an individual? It is individuated when it is cut off from the rest of the world and designated as an individual. I haven’t thought this through yet, so I will reserve further remarks until I’ve made an effort to review the history of the principium individuationis.
The “two attributes” of continuity and change are both functions of time; both the connection and the differences between any two “arbitrary homogenous segments” are due to the action of time, according to Bloch.
The truncation principle, however, has a wider application than time. To express the truncation principle in terms of time invites a formulation (or an example) in terms of space, and there is an excellent example ready to hand: that of the color spectrum of visible light. There is a convention of dividing the color spectrum into red, orange, yellow, green, blue, indigo, and violet. But this is not the only convention. Because the word “indigo” is becoming almost archaic, one now sees the color spectrum decomposed into red, orange, yellow, green, blue, and purple.
Both decompositions of the color spectrum, and any others that might be proposed, constitute something like “arbitrary homogenous segments.” The decomposition of the color spectrum is justified by the truncation principle, but the principle does not privilege any one decomposition over any other. All distinctions are equal, and if any one distinction is taken to be more equal than others, it is only because this distinction has the sanction of tradition.
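The arbitrariness of any such segmentation can be made concrete in a few lines of code. The following Python sketch bins a visible wavelength (in nanometres) under the two conventions just mentioned; the boundary values are illustrative approximations of my own, since the conventions themselves fix no exact cut points:

```python
# Two conventional decompositions of the visible spectrum.
# The boundary wavelengths (nm) are illustrative approximations only:
# neither convention dictates exact cut points.
SEVEN_COLORS = [(380, "violet"), (450, "indigo"), (485, "blue"),
                (500, "green"), (565, "yellow"), (590, "orange"),
                (625, "red")]
SIX_COLORS = [(380, "purple"), (450, "blue"),
              (500, "green"), (565, "yellow"), (590, "orange"),
              (625, "red")]

def color_name(wavelength_nm, convention):
    """Truncate the continuum: assign a discrete name to a wavelength."""
    name = None  # wavelengths below the visible range get no name
    for lower_bound, label in convention:
        if wavelength_nm >= lower_bound:
            name = label  # keep the last band whose lower bound we pass
    return name

# The same physical wavelength falls under different names depending
# on which arbitrary segmentation we adopt.
print(color_name(460, SEVEN_COLORS))  # indigo
print(color_name(460, SIX_COLORS))    # blue
```

Neither answer is more correct than the other; the truncation principle licenses both segmentations without privileging either.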
. . . . .
. . . . .
. . . . .
19 May 2012
We can make a distinction among distinctions between ad hoc and principled distinctions. The former category — ad hoc distinctions — may ultimately prove to be based on a principle, but that principle is unknown as long as the distinction remains an ad hoc distinction. This suggests a further distinction among distinctions between ad hoc distinctions that really are ad hoc, and which are based on no principle, and ad hoc distinctions that are really principled distinctions but the principle in question is not yet known, or not yet formulated, at the time the distinction is made. So there you have a principled distinction between distinctions.
A perfect evocation of ad hoc distinctions is to be found in the opening paragraph of the Preface to Foucault’s The Order of Things:
“This book first arose out of a passage in Borges, out of the laughter that shattered, as I read the passage, all the familiar landmarks of my thought — our thought, the thought that bears the stamp of our age and our geography — breaking up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things, and continuing long afterwards to disturb and threaten with collapse our age-old distinction between the Same and the Other. This passage quotes a ‘certain Chinese encyclopedia’ in which it is written that ‘animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’. In the wonderment of this taxonomy, the thing we apprehend in one great leap, the thing that, by means of the fable, is demonstrated as the exotic charm of another system of thought, is the limitation of our own, the stark impossibility of thinking that.”
Such distinctions are comic, though Foucault recognizes that our laughter is uneasy: even as we immediately recognize the ad hoc character of these distinctions, we realize that the principled distinctions we routinely employ may not be so principled as we supposed.
Foucault continues this theme for several pages, and then gives another formulation — perhaps, given his interest in mental illness, an illustration that is closer to reality than Borges’ Chinese encyclopedia:
“It appears that certain aphasiacs, when shown various differently coloured skeins of wool on a table top, are consistently unable to arrange them into any coherent pattern; as though that simple rectangle were unable to serve in their case as a homogeneous and neutral space in which things could be placed so as to display at the same time the continuous order of their identities or differences as well as the semantic field of their denomination. Within this simple space in which things are normally arranged and given names, the aphasiac will create a multiplicity of tiny, fragmented regions in which nameless resemblances agglutinate things into unconnected islets; in one corner, they will place the lightest-coloured skeins, in another the red ones, somewhere else those that are softest in texture, in yet another place the longest, or those that have a tinge of purple or those that have been wound up into a ball. But no sooner have they been adumbrated than all these groupings dissolve again, for the field of identity that sustains them, however limited it may be, is still too wide not to be unstable; and so the sick mind continues to infinity, creating groups then dispersing them again, heaping up diverse similarities, destroying those that seem clearest, splitting up things that are identical, superimposing different criteria, frenziedly beginning all over again, becoming more and more disturbed, and teetering finally on the brink of anxiety.”
Foucault here writes that, “the sick mind continues to infinity,” in other words, the process does not terminate in a definite state-of-affairs. This implies that the healthy mind does not continue to infinity: rational thought must make concessions to human finitude. While I find the use of the concept of the pathological in this context questionable, and I have to wonder if Foucault was unwittingly drawn into the continental anti-Cantorian tradition (Brouwerian intuitionism and the like, though I will leave this aside for now), there is some value to the idea that a scientific process (such as classification) must terminate in a finite state-of-affairs, even if only tentatively. I will try to show, moreover, that there is an implicit principle in this attitude, and that it is in fact a principle that I have discussed previously.
The quantification of continuous data requires certain compromises. Two of these compromises include finite precision errors (also called rounding errors) and finite dimension errors (also called truncation). Rounding errors should be pretty obvious: finite parameters cannot abide infinite decimal expansions, and so we set a limit of six decimal places, or twenty, or more — but we must set a limit. The difference between actual figures and limited decimal expansions of the same figure is called a finite precision error. Finite dimension errors result from the need to arbitrarily introduce gradations into a continuum. Using the real number system, any continuum can be faithfully represented, but this representation would require infinite decimal expansions, so we see that there is a deep consonance between finite precision errors and finite dimension errors. Thus, for example, we measure temperature by degrees, and the arbitrariness of this measure is driven home to us by the different scales we can use for this measurement. And if we could specify temperature using real numbers (including transcendental numbers) we would not have to compromise. But engineering and computers and even human minds need to break things up into manageable finite quantities, so we speak of 3 degrees C, or even 3.14 degrees C, but we don’t try to work with pi degrees C. Thus the increments of temperature, or of any other measurement, involve both finite precision errors and finite dimension errors.
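The two kinds of error are easy to exhibit in code. Here is a minimal Python sketch (the figures are illustrative, not drawn from any actual measurement): rounding pi to six places produces a finite precision error, while forcing a continuous temperature reading onto a grid of whole degrees produces a finite dimension error:

```python
import math

# Finite precision error (rounding error): an infinite decimal
# expansion must be cut off at some fixed number of places.
pi_rounded = round(math.pi, 6)              # limit of six decimal places
precision_error = abs(math.pi - pi_rounded)
assert precision_error < 5e-7               # bounded by half the last retained place

# Finite dimension error (truncation): a continuous quantity is
# forced onto an arbitrary grid of gradations, here whole degrees C.
temperature = 3.7251                        # hypothetical continuous reading
quantized = round(temperature)              # we speak of "4 degrees C"
dimension_error = abs(temperature - quantized)
assert dimension_error <= 0.5               # bounded by half the grid spacing

print(precision_error, dimension_error)
```

In both cases the error is bounded but unavoidable: choosing a finer grid or more decimal places shrinks the error without ever eliminating it.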
In so far as quantification is necessary to the scientific method, finite dimension errors are necessary to the scientific method. In several posts (e.g., Axioms and Postulates in Strategy) I have cited Carnap’s tripartite distinction among scientific concepts, the three being classificatory, comparative, and quantitative concepts. Carnap characterizes the emergence of quantitative scientific concepts as the most sophisticated form of scientific thought, but in reviewing Carnap’s scientific concepts in the light of finite precision errors and finite dimension errors, it is immediately obvious that classificatory concepts and comparative concepts do not necessarily involve finite precision errors and finite dimension errors. It is only with the introduction of quantitative concepts that science becomes sufficiently precise that its precision forces compromises upon us. However, I should point out that classificatory concepts routinely force us to accept finite dimension errors, although they do not involve finite precision errors. The example given by Foucault, quoted above, illustrates the inherent tension in classificatory concepts.
We accept finite precision errors and finite dimension errors as the price of doing science, and indeed as the price of engaging in rational thought. As Foucault implied in the above quote, the healthy and sane mind must draw lines and define limits and call a halt to things. Sometimes these limits are close to being arbitrary. We retain the ambition of “carving nature at the joints,” but we accept that we can’t always locate the joint but at times must cleave the carcass of nature regardless.
For this willingness to draw lines and establish limits and to call a halt to proceedings I will give the name The Truncation Principle, since it is in virtue of cutting off some portion of the world and treating it as though it were a unified whole that we are able to reason about the world.
As I mentioned above, I have discussed this problem previously, and in my discussion I noted that I wanted to give an exposition of a principle and a fallacy, but that I did not have a name for it yet, so I called it An Unnamed Principle and an Unnamed Fallacy. Now I have a name for it, and I will use this name, i.e., the truncation principle, from now on.
Note: I was tempted to call this principle the “baby retention principle” or even the “hang on to your baby principle” since it is all about the commonsense notion of not throwing out the baby with the bathwater.
In An Unnamed Principle and an Unnamed Fallacy I initially formulated the principle as follows:
The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.
What I most want to highlight is that when someone points out there are gray areas that seem to elude classification by any clear cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.
A distinction that admits of problematic cases not clearly falling on one side of the distinction or the other, may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
I hope to be able to elaborate on this formulation as I continue to think about the truncation principle and its applications in philosophical, formal, and scientific thought.
Usually when we hear “truncation” we immediately think of the geometrical exercise of regularly cutting away parts of the regular (Platonic) solids, yielding truncated polyhedra and converging on rectified polyhedra. This is truncation in space. Truncation in time, on the other hand, is what is more commonly known as historical periodization. How exactly one historical period is to be cut off from another is always problematic, not least due to the complexity of history and the sheer number of outliers that seem to falsify any attempt at periodization. And yet, we need to break history up into comprehensible chunks. When we do so, we engage in temporal truncation.
All the problems of philosophical logic that present themselves to the subtle and perceptive mind when contemplating a spatial truncation, as, for example, in defining the Pacific Ocean — where exactly does it end in relation to the Indian Ocean? — occur in spades in making a temporal truncation. Yet if rational inquiry is to begin (and here we do not even raise the question of where rational inquiry ends) we must make such truncations, and our initial truncations are crude and mostly ad hoc concessions to human finitude. Thus I introduce the truncation principle as an explicit justification of truncations as we employ them throughout reasoning.
And, as if we hadn’t already laid up enough principles and distinctions for today, here is a principle of principles of distinctions: every principled distinction implies a fallacy that takes the form of neglecting this distinction. With an ad hoc distinction there is no question of fallacy, because there is no principle to violate. Where there is a principle involved, however, the violation of the principle constitutes a fallacy.
Contrariwise, every fallacy implies a principled distinction that ought to have been made. If we observe the appropriate principled distinctions, we avoid fallacies, and if we avoid fallacies we appropriately distinguish that which ought to be distinguished.
. . . . .
. . . . .
. . . . .
29 December 2011
Yesterday in A Review of Iranian Capabilities I mentioned the current foreign policy debate over the idea of a preventative war against Iran and recounted some of Iran’s known capabilities.
Reflecting on these attempts to make a case for or against preventative war with Iran, I was led back in my thoughts to a post I wrote last summer about what I called The Possible War. In this post I tried to emphasize that ex post facto criticisms of conduct in war — like criticisms of the Allies’ strategic bombing of Germany during the Second World War — presume a parity of capability and opportunity that almost never obtains in fact. Military powers do not engage in ideal wars that meet certain standards; they fight the war that they are able to fight, and this is the possible war.
Moving beyond a description of the possible war, the idea can be formulated as a principle, the principle of possible wars, and the principle is this: in any given conflict, each party to the conflict will fight the war that it is possible for that party to fight. In other words, no party to a conflict is going to fight a war that it is impossible for it to fight. In other words again, no party to a conflict is going to fight a losing war on the basis of peer-to-peer engagement if there is a non-peer strategy that will win the war. This sort of thing makes good poetry, as in The Charge of the Light Brigade, but in so far as it ensures failure in a campaign, it exerts a strong negative selection over military powers that pursue such policies.
The military resources of a given political entity (whether state or non-state entity) will always seek to maximize its advantage by employing the most effective means available against its adversary’s most vulnerable target. This is what makes war brutal and ugly, this is why it has been said since ancient times that inter arma enim silent leges.
There is a sense in which this principle of possible wars is simply an extension of the classic twin principles of mass and economy of forces. Each party to a conflict concentrates as much force as it can at a point it believes the adversary to be most vulnerable, and the enemy is simultaneously trying to do the same thing. If we think of concentration as concentration of effort, rather than mere numbers of battalions, and we think of vulnerability as any way in which an enemy can be defeated, and not merely a point on the line that is insufficiently defended, then we have the principle of possible war.
War is not always and inevitably brutal and ugly, and the principle of possible wars helps us to understand why this is the case. Previously in Civilization and War as Social Technologies I discussed how in particular historical circumstances warfare can become highly ritualized and stylized. There I cited the non-Western examples of Samurai sword fighting, retained in Japan long after the rest of the world was fighting with guns, and the Aztec Flower Battle, which combined religious rituals of sacrifice with the honor and prestige requirements of combat. However, there are Western precedents for ritualized combat as well, as when, in the ancient world, each party to a conflict would choose an individual champion and the issue was decided by single combat.
Another example of semi-ritualized forms of combat in Western history might include early modern Condottieri wars in the Italian peninsula. Before the large-scale armies of the French and the Spanish crossed the Alps to pillage and plunder Italy, the peninsula was dominated by wealthy city-states who hired mercenary armies under Condottieri captains to wage war against each other. With two mercenary armies facing each other on the battlefield, there was a strong incentive to minimize casualties, and there are some remarkable stories from the era of nearly bloodless battles.
Another example would be the maneuver warfare of small, professional European armies during the Enlightenment, who sometimes managed to fight limited wars with a minimal impact on non-combatants. This may well have been a cultural response to the horrific slaughter of the Thirty Years War.
In these latter two examples, limited wars were the possible war because a sufficient number of social conventions and normative presuppositions were shared by all parties to the conflict, who were willing to abide by the results of the contest even when a more ruthless approach might have secured a Pyrrhic victory. Under these socio-political conditions, limited wars were possible wars because all parties recognized that it was in their enlightened self-interest not to escalate wars beyond a certain threshold. Such social conventions touching even upon the conduct of war can only be effective in a suitably homogenous cultural region.
After the escalating total wars leading up to the middle of the twentieth century, limited wars emerged again out of fear of crossing the nuclear threshold. Parties to the conflicts were willing to abide by the issue of these limited wars because the alternative was mutually assured destruction. Also, all parties to proxy wars knew they would have another chance at achieving their goals in another theater when the proxy war would shift to another region of the world. Thus limited wars became possible wars because the alternative was unthinkable.
. . . . .
. . . . .
. . . . .
27 December 2011
Yesterday in The Philosophy of Fear I quoted Descartes from his Discourse on Method, from the section in which he introduces an implicit distinction between the theoretical principles he will use to guide his philosophical activities and the practical moral principles that he will employ in his life while he is going about his theoretical activity. Here is his exposition of his four theoretical principles:
● The first was never to accept anything for true which I did not clearly know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgement than what was presented to my mind so clearly and distinctly as to exclude all ground of doubt.
● The second, to divide each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution.
● The third, to conduct my thoughts in such order that, by commencing with objects the simplest and easiest to know, I might ascend by little and little, and, as it were, step by step, to the knowledge of the more complex; assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence.
● And the last, in every case to make enumerations so complete, and reviews so general, that I might be assured that nothing was omitted.
Anyone who knows Descartes’ works will recognize that he has here stated, much more simply and compactly, the principles that he was working on in his unfinished manuscript Rules for the Direction of the Mind. Here, by way of contrast, is a highly condensed version of Descartes’ practical and provisional moral principles:
● The first was to obey the laws and customs of my country, adhering firmly to the faith in which, by the grace of God, I had been educated from my childhood and regulating my conduct in every other matter according to the most moderate opinions, and the farthest removed from extremes, which should happen to be adopted in practice with general consent of the most judicious of those among whom I might be living.
● My second maxim was to be as firm and resolute in my actions as I was able, and not to adhere less steadfastly to the most doubtful opinions, when once adopted, than if they had been highly certain; imitating in this the example of travelers who, when they have lost their way in a forest, ought not to wander from side to side, far less remain in one place, but proceed constantly towards the same side in as straight a line as possible, without changing their direction for slight reasons, although perhaps it might be chance alone which at first determined the selection; for in this way, if they do not exactly reach the point they desire, they will come at least in the end to some place that will probably be preferable to the middle of a forest.
● My third maxim was to endeavor always to conquer myself rather than fortune, and change my desires rather than the order of the world, and in general, accustom myself to the persuasion that, except our own thoughts, there is nothing absolutely in our power; so that when we have done our best in things external to us, all wherein we fail of success is to be held, as regards us, absolutely impossible: and this single principle seemed to me sufficient to prevent me from desiring for the future anything which I could not obtain, and thus render me contented…
Descartes wrote a lot of extremely long run-on sentences, so that one must cut radically in order to quote him (except for his theoretical principles, above, which I have quoted entire), but I have tried to include enough above to give a genuine flavor of how he expressed himself. Although Descartes did not himself make this distinction between theoretical and practical principles explicit, the distinction is embodied in his two sets of explicitly stated principles, and he does provide a justification for it:
“…as it is not enough, before commencing to rebuild the house in which we live, that it be pulled down, and materials and builders provided, or that we engage in the work ourselves, according to a plan which we have beforehand carefully drawn out, but as it is likewise necessary that we be furnished with some other house in which we may live commodiously during the operations, so that I might not remain irresolute in my actions, while my reason compelled me to suspend my judgement, and that I might not be prevented from living thenceforward in the greatest possible felicity, I formed a provisory code of morals, composed of three or four maxims, with which I am desirous to make you acquainted.”
After I quoted this in The Philosophy of Fear I realized that it constitutes a perfect antithesis to the conception of the rational reconstruction of knowledge embodied in the image of Neurath’s ship, which I have quoted several times.
Rational reconstruction was an idea that fascinated early twentieth century philosophers, especially the logical positivists, whose philosophical tradition would eventually mature and transform itself into mainstream analytical philosophy. It was logical positivism that gave us an enduring image of rational reconstruction, as related by Otto Neurath:
“There is no way of taking conclusively established pure protocol sentences as the starting point of the sciences. No tabula rasa exists. We are like sailors who must rebuild their ship on the open sea, never able to dismantle it in dry-dock and to reconstruct it there out of the best materials. Only the metaphysical elements can be allowed to vanish without trace.”
Quine then used this image in his Word and Object:
“We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.”
These two epistemic paradigms — what I will call Descartes’ house and Neurath’s ship — represent antithetical conceptions of the epistemological enterprise. Neurath’s ship is usually presented as an anti-foundationalist parable, which would suggest that Descartes’ house is a foundationalist parable. There are certain problems with this initial characterization. The logical positivists who invoked Neurath’s ship with approval were often foundationalists in the philosophy of mathematics while being anti-foundational in other areas.
There is a sense in which it is fair to call Descartes’ house a foundationalist parable: Descartes is suggesting a radical approach to the foundations of knowledge — utterly tearing down our knowledge in order to construct entirely anew on the same ground — and he attempted to put this into practice in his own philosophical work. He doubted everything that he could until he arrived at the fact that he could not doubt his own existence, and then on the basis of the certainty of his own existence he attempted to reconstruct the entire edifice of knowledge. The result was not radical but actually rather conventional; the method, however, certainly was radical. It was also total.
Whether or not Neurath’s ship is anti-foundational, it is certainly incrementalist. If we were to attempt to rebuild a ship while at sea, we would need to proceed bit by bit, and very carefully. Nothing radical would be attempted, for to attempt anything radical would be to sink the ship. There is a sense in which we could identify this effort as essentially constructivist in spirit, though not exclusively constructivist: constructivism is certainly not the only motivation for Neurath’s ship, and many who invoked it employed non-constructive modes of reasoning.
Are Descartes’ house and Neurath’s ship mutually exclusive? Not necessarily. We do remodel houses while living in them, although when we do we need to keep some basic functions available during our residency. And we can demolish certain parts of a ship at sea; as long as the hull remains intact, we can engage in a radical reconstruction (as opposed to a rational reconstruction) of the masts and the rigging.
One ought not to push an image too far, for fear of verging on the ludicrous, but it can be observed that, while living in a house, we can tear down half of it to the ground and rebuild that half from scratch while living in the other half, and then repeat this process in the half we have been living in. In fact, I know people who have done this. There will, of course, be certain compromises that will have to be made in wedding the two halves together, so that the seam between the two has the incrementalist character of Neurath’s ship, while each half has the radical and total character of Descartes’ house.
It is difficult to imagine a parallel for the above scenario when it comes to Neurath’s ship. The hull of the ship can only be rebuilt incrementally, although almost everything else can be radically reconstructed. And it may well be that some parts of epistemology must be approached incrementally while other parts of epistemology may be radically reconstructed almost with impunity. This seems like an eminently reasonable conclusion. But it is no conclusion — at least not yet — because there is more to say.
What underlies the image of Descartes’ house and Neurath’s ship is in each case a distinct metaphor, and that metaphor is for Descartes the earth, the solid ground upon which we stand, while for Neurath it is the sea, to which we must go down in ships, and where we cannot stand but must swim or be carried. So, we have two epistemic metaphors — of what are they metaphors? Existence? Being? Human experience? Knowledge? If the house or the ship is knowledge, then the ground or the sea must be that upon which knowledge rests (or floats). This once again suggests a foundationalist approach, but points to very different foundations: a house stands on dirt and stones; a ship floats on water.
Does knowledge ultimately rest upon the things themselves — the world, existence, or being, as you prefer — or upon human experience of the world? Or is not knowledge a consequence of the tension between human experience and the world, so that both the world and human experience are necessary to knowledge?
Intuitively, and without initially putting much thought into this (although I will continue to think about this because it is an interesting idea), I would suggest that the metaphor of the earth implies that knowledge ultimately is founded on the things themselves, while the metaphor of the sea implies that knowledge ultimately is founded on the ever-changing tides of human experience.
Therefore, if knowledge requires both the world and human experience, either the metaphor of Descartes’ house or Neurath’s ship alone, in isolation from the other, is inadequate. We need something more, or something different, to illustrate our relation to knowledge and how it changes.
. . . . .
. . . . .
. . . . .
. . . . .
3 August 2010
Aristotle claimed that mathematics has no ethos (Metaphysics, Book III, Chap. 2, 996a). Aristotle, of course, was more interested in the empirical sciences than his master Plato, whose Academy presumed and demanded familiarity with geometry — and we must understand that for the ancients, long before the emergence of analytical geometry in the work of Descartes (allowing us to formulate geometry algebraically, hence arithmetically), geometry was always axiomatic thought, rigorously conceived in terms of demonstration. For the Greeks, this was the model and exemplar of all rigorous thought, and for Aristotle this was a mode of thought that lacked an ethos.
In this, I think, Aristotle was wrong, and I think that Plato would have agreed on this point. But the intuition behind Aristotle’s denial of a mathematical ethos is, I think, a common one. And indeed it has even become a rhetorical trope to appeal to rigorous mathematics as an objective standard free from axiological accretions.
Our human, all-too-human faculties conspire to confuse us, to addle our wits, when we begin talking about morality, so that the purity and rigor of mathematical and logical thought seem to be called into question if we acknowledge that there is an ethos of formal thought. We easily confuse ourselves with religious, mystical, and ethical ideas, and since the great monument of mathematical thought has been mostly free of this particular species of confusion, to deny an ethos of formal thought can be understood as a strategy to protect and defend the honor of mathematics and logic by preserving it from the morass that envelops most human attempts to think clearly, however heroically undertaken.
Kant famously stated in the Critique of Pure Reason that, “I have found it necessary to deny knowledge in order to make room for faith.” I should rather limit faith to make room for rigorous reasoning. Indeed, I would squeeze out faith altogether, and find myself among the most rigorous of the intuitionists, one of whom has said: “The aim of this program is to banish faith from the foundations of mathematics, faith being defined as any violation of the law of sufficient reason (for sentences). This law is defined as the identification (by definition) of truth with the result of a (present or feasible) proof…”
Though here again, with intuitionism (and various species of constructivism generally), we have rigor, denial, asceticism — intuitionistic logic is no joyful wisdom. (An ethos of formal thought need not be an inspiring and edifying ethos.) It is logic with a frown, disapproving, censorious — a bitter medicine justified only because it offers hope of curing the disease of contradiction, contracted when mathematics was shown to be reducible to set theory, and the latter shown to be infected with paradox (as if the infinite hubris of set theory were not alone enough for its condemnation). Is the intuitionist’s hope justified? In so far as it is hope — i.e., hope and not proof, the expectation that things will go better for the intuitionistic program than for logicism — it is not justified.
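The characteristic denial of intuitionistic logic can be stated precisely: the law of excluded middle, p ∨ ¬p, is not assumed, and yet its double negation remains provable without any classical axiom. A minimal sketch of this classic result in Lean 4 (the theorem name `dnem` is my own label):

```lean
-- Double negation of excluded middle, provable constructively:
-- no appeal to Classical.em or any other classical principle.
theorem dnem (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```

Classically, ¬¬q yields q, and the two negations vanish; it is exactly this last step that the intuitionist declines, which is why excluded middle itself remains unprovable for him while its double negation is innocent.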
Dummett has said that intuitionistic logic and mathematics are to wear their justification on their face:
“From an intuitionistic standpoint, mathematics, when correctly carried on, would not need any justification from without, a buttress from the side or a foundation from below: it would wear its own justification on its face.”
Dummett, Michael, Elements of Intuitionism, Oxford University Press, 1977, p. 2
The hope that contradiction will not arise from intuitionistic methods clearly is no such evident justification. As a matter of fact, empirically and historically verifiable, we know that intuitionism has resulted in no contradictions, but this could change tomorrow. Intuitionism stands in need of a consistency proof even more than formalism. There is, in its approach, a faith invested in the assumption that infinite totalities caused the paradoxes, and once we have disallowed reference to them all will go well. This is a perfectly reasonable assumption, but one, in so far as it is an article of faith, which is at variance with the aims and methods of intuitionism.
And what is a feasible proof, which our ultra-intuitionist would allow? Have we not with “feasible proof” abandoned proof altogether in favor of probability? Again, we will allow them their inconsistencies and meet them on their own ground. But we shall note that the critics of the logicist paradigm fix their gaze only upon consistency, and in so doing reveal again their stingy, miserly conception of the whole enterprise.
“The Ultra-Intuitionistic Criticism and the Antitraditional program for the foundations of Mathematics” by A. S. Yessenin-Volpin (who was arguing for intellectual freedom in the Soviet Union at the same time that he was arguing for a censorious conception of reason), in Intuitionism and Proof Theory, quoted briefly above, is worth quoting more fully:
The aim of this program is to banish faith from the foundations of mathematics, faith being defined as any violation of the law of sufficient reason (for sentences). This law is defined as the identification (by definition) of truth with the result of a (present or feasible) proof, in spite of the traditional incompleteness theorem, which deals only with a very narrow kinds [sic] of proofs (which I call ‘formal proofs’). I define proof as any fair way of making a sentence incontestable. Of course this explication is related to ethics — the notion fair means ‘free from any coercion or fraud’ — and to the theory of disputes, indicating the cases in which a sentence is to be considered as incontestable. Of course the methods of traditional mathematical logic are not sufficient for this program: and I have to enlarge the domain of means explicitly studied in logic. I shall work in a domain wherein are to be found only special notions of proof satisfying the mentioned explication. In this domain I shall allow as a means of proof only the strict following of definitions and other rules or principles of using signs.
Intuitionism and proof theory: Proceedings of the summer conference at Buffalo, N.Y., 1968, p. 3
What is coercion or fraud in argumentation? We find something of an illustration of this in Gregory Vlastos’ portrait of Socrates: “Plato’s Socrates is not persuasive at all. He wins every argument, but never manages to win over an opponent. He has to fight every inch of the way for any assent he gets, and gets it, so to speak, at the point of a dagger.” (The Philosophy of Socrates, Ed. by Gregory Vlastos, page 2)
What appeal to logic does not invoke logical compulsion? Is logical compulsion unique to non-constructive mathematical thought? Is there not an element of logical compulsion present also in constructivism? Might it not indeed be the more coercive form of compulsion that is recognized alike by constructivists and non-constructivists?
The breadth of the conception outlined by Yessenin-Volpin is impressive, but the essay goes on to stipulate the harshest measures of finitude and constructivism. One can imagine these Goldwaterite logicians proclaiming: “Extremism in the defense of intuition is no vice, and moderation in the pursuit of constructivist rigor is no virtue.” Brouwer, the spiritual father of intuitionism, even appeals to the Law-and-Order mentality, saying that a criminal who has not been caught is still a criminal. Logic and mathematics, it seems, must be brought into line. They verge on criminality, deviancy, perversion.
The same righteous, narrow, anathematizing attitude is at work among the defenders of what is sometimes called the “first-order thesis” in logic. Quine sees a similar deviancy in modal logic (into which intuitionistic logic can be translated, via the Gödel translation into S4), which he says was “conceived in sin” — the sin of confusing use and mention. These accusations do little to help us understand logic. We would do well to adopt Foucault’s attitude on these matters: “leave it to our bureaucrats and our police to see that our papers are in order. At least spare us their morality when we write.” (The Archaeology of Knowledge, p. 17)
The philosophical legacy of intuitionism has been profound yet mixed; its influence has been deeply ambiguous. (Far from the intuitive certainty, immediacy, clarity, and evident justification that it would like to propagate.) There is in intuitionism much that is in harmony with contemporary philosophy of mathematics: its emphasis on practices, its demand for finite constructivity, its anti-philosophical tenor, its opposition to platonism. The Father of Intuitionism, Brouwer, was, like many philosophers, anti-philosophical even while propounding a philosophy. No doubt his quasi-Kantianism put his conscience at rest in the Kantian tradition of decrying metaphysics while practicing it, and his mysticism gave to mathematics a comforting halo (one that softens and obscures the hard edges of intuitionist rigor in proof theory), such as some have found in the excesses of platonism.
In any case, few followers of Brouwer followed him in his Kantianism and mysticism. The constructivist tradition which grew from intuitionism has proved to be philosophically rich, begetting a variety of constructive techniques and as many justifications for them. Even if few mathematicians actually do intuitionistic mathematics, controversies over the significance of constructivism have a great deal of currency in philosophy. And Dummett is explicit about the place of philosophy in intuitionistic logic and mathematics.
Intuitionism and constructivism command our respect in the same way that Euclidean geometry commanded the respect of the ancients: we might not demand that all reasoning conform to this model, but it is valuable to know that rigorous standards can be formulated, as an ideal to which we might aspire if nothing else. And an ideal of reason is itself an ethos of reason, a norm to which formal thought aspires, and which it hopes to approximate even if it cannot always live up to the most exacting standard that it can recognize for itself.
. . . . .
. . . . .
. . . . .
12 January 2010
In earlier posts to this forum I have discussed the dissatisfaction that comes from introducing an idea before one has the right name for it. An appropriate name will immediately communicate the intuitive content of the idea to the reader, as when I wrote about the civilization of the hand in contradistinction to the civilization of the mind, after having already sketched the idea in a previous post.
Again I find myself in the position of wanting to write about something for which I don’t yet have the perfect intuitive name, and I have even had to name this post “an unnamed principle and an unnamed fallacy” because I can’t even think of a mediocre name for the principle and its related fallacy.
In yesterday’s Defunct Ideas I argued that new ideas are always emerging in history (though they aren’t always being lost), and it isn’t too difficult to come up with a new idea if one has the knack for it. But most new ideas are pretty run-of-the-mill. One can always build on past ideas and add another brick to the growing structure of human knowledge.
That being said, it is only occasionally, in the midst of a lot of ideas of the middling sort, that one comes up with a really good idea. It is rarer still that one comes up with a truly fundamental idea. Formulating a logical fallacy that has not been noticed to date, despite at least twenty-five hundred years of cataloging fallacies, would constitute a somewhat fundamental idea. As this is unlikely in the present context, the principle and the associated fallacy below have probably already been noticed and named by others long ago. If not, they should have been.
The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.
This unnamed principle is not the same as the principle of bivalence or the law of the excluded middle (tertium non datur), though any clear distinction depends, to a certain extent, upon them. This unnamed principle is also not to be confused with a simple denial of clear-cut distinctions. What I most want to highlight is that when someone points out that there are gray areas that seem to elude classification by any clear-cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.
Again: a distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
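The principle can be made concrete with a small illustrative analogy (my own example, not drawn from any formal treatment): model a vague distinction as a partial classifier that abstains inside a gray zone but still decides clearly elsewhere. The predicate, threshold, and tolerance below are all hypothetical choices for the sake of the sketch.

```python
from typing import Optional

def taller_than_average(height_cm: float, average: float = 170.0,
                        tolerance: float = 2.0) -> Optional[bool]:
    """Decide a vague distinction, abstaining inside the gray zone."""
    if height_cm > average + tolerance:
        return True       # clearly on one side of the distinction
    if height_cm < average - tolerance:
        return False      # clearly on the other side
    return None           # problematic case: the distinction does not decide

print(taller_than_average(190.0))  # True: a clear case
print(taller_than_average(171.0))  # None: the gray area
print(taller_than_average(150.0))  # False: a clear case
```

The existence of the `None` cases does nothing to undermine the verdicts on the clear cases, which is precisely the point: gray areas impugn a distinction only at its margins, not across its whole range of application.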
. . . . .
. . . . .
. . . . .
. . . . .