30 January 2013
F. H. Bradley, in his classic treatise Appearance and Reality: A Metaphysical Essay, made this oft-quoted comment:
“If you identify the Absolute with God, that is not the God of religion. If again you separate them, God becomes a finite factor in the Whole. And the effort of religion is to put an end to, and break down, this relation — a relation which, none the less, it essentially presupposes. Hence, short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him. It is this difficulty which appears in the problem of the religious self-consciousness.”
I think many commentators have taken this passage as emblematic of what they believe to be Bradley’s religious sentimentalism, and in fact the yearning for religious belief (no longer possible for rational men) that characterized much of the school of thought that we now call “British Idealism.”
This is not my interpretation. I’ve read enough Bradley to know that he was no sentimentalist, and while his philosophy diverges radically from contemporary philosophy, he was committed to a philosophical, and not a religious, point of view.
Bradley was an elder contemporary of Bertrand Russell, who characterized Bradley as the grand old man of British idealism. This is from Russell’s Our Knowledge of the External World:
“The nature of the philosophy embodied in the classical tradition may be made clearer by taking a particular exponent as an illustration. For this purpose, let us consider for a moment the doctrines of Mr Bradley, who is probably the most distinguished living representative of this school. Mr Bradley’s Appearance and Reality is a book consisting of two parts, the first called Appearance, the second Reality. The first part examines and condemns almost all that makes up our everyday world: things and qualities, relations, space and time, change, causation, activity, the self. All these, though in some sense facts which qualify reality, are not real as they appear. What is real is one single, indivisible, timeless whole, called the Absolute, which is in some sense spiritual, but does not consist of souls, or of thought and will as we know them. And all this is established by abstract logical reasoning professing to find self-contradictions in the categories condemned as mere appearance, and to leave no tenable alternative to the kind of Absolute which is finally affirmed to be real.”
Bertrand Russell, Our Knowledge of the External World, Chapter I, “Current Tendencies”
Although Russell rejected what he called the classical tradition, and distinguished himself in contributing to the origins of a new philosophical school that would come (in time) to be called analytical philosophy, the influence of figures like F. H. Bradley and J. M. E. McTaggart (whom Russell knew personally) can still be found in Russell’s philosophy.
In fact, the above quote from F. H. Bradley — especially the portion most quoted, “short of the Absolute, God cannot rest, and, having reached that goal, he is lost and religion with him” — is a perfect illustration of a principle found in Russell, and something on which I have quoted Russell many times, as it has been a significant influence on my own thinking.
I have come to refer to this principle as Russell’s generalization imperative. Russell didn’t call it this (the terminology is mine), and he didn’t in fact give any name at all to the principle, but he implicitly employs this principle throughout his philosophical method. Here is how Russell himself formulated the imperative (which I last quoted in The Genealogy of the Technium):
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
One of the distinctive features that Russell identifies as constitutive of the classical tradition, and in fact one of the few explicit commonalities between the classical tradition and Russell’s own thought, was the denial of time. The British idealists denied the reality of time outright, in the best Platonic tradition; Russell did not deny the reality of time, but he was explicit about not taking time too seriously.
Despite Russell’s hostility to mysticism as expressed in his famous essay “Mysticism and Logic,” when it comes to the mystic’s denial of time, Russell softens a bit and shows his sympathy for this particular aspect of mysticism:
“Past and future must be acknowledged to be as real as the present, and a certain emancipation from slavery to time is essential to philosophic thought. The importance of time is rather practical than theoretical, rather in relation to our desires than in relation to truth. A truer image of the world, I think, is obtained by picturing things as entering into the stream of time from an eternal world outside, than from a view which regards time as the devouring tyrant of all that is. Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
“…impartiality of contemplation is, in the intellectual sphere, that very same virtue of disinterestedness which, in the sphere of action, appears as justice and unselfishness. Whoever wishes to see the world truly, to rise in thought above the tyranny of practical desires, must learn to overcome the difference of attitude towards past and future, and to survey the whole stream of time in one comprehensive vision.”
Bertrand Russell, Mysticism and Logic, and Other Essays, Chapter I, “Mysticism and Logic”
While Russell and the classical tradition in philosophy both perpetuated the devalorization of time, this attitude is slowly disappearing from philosophy, and contemporary philosophers are more and more treating time as another reality to be given philosophical exposition rather than denying its reality. I regard this as a salutary development and a riposte to all who claim that philosophy makes no advances. Contemporary philosophy of time is quite sophisticated, and embodies a much more honest attitude to the world than the denial of time. (For those looking at philosophy from the outside, the denial of the reality of time simply sounds like a perverse waste of time, but I won’t go into that here.)
In any case, we can bring Russell’s generalization imperative to time and history even if Russell himself did not do so. That is to say, we ought to generalize to the utmost in our conception of time, and if we do so, we come to a
principle parallel to Bradley’s that I think both Russell and Bradley would have endorsed: short of the Absolute, time cannot rest, and, having reached that goal, time is lost and history with it.
I don’t agree with this, but since it would be one logical extrapolation of Russell’s generalization imperative as applied to time, it suggests to me that there is more than one way to generalize about time. One way would be the kind of generalization that I formulated above, presumably consistent with Russell’s and Bradley’s devalorization of time. Time generalized in this way becomes a whole, a totality, that ceases to possess the distinctive properties of time as we experience it.
The other way to generalize time is, I think, in accord with the spirit of Big History: here Russell’s generalization imperative takes the form of embedding all times within larger, more comprehensive times, until we reach the time of the entire universe (or beyond). The science of time, as it is emerging today, demands that we seek the most comprehensive temporal perspective, placing human action in evolutionary context, placing evolution in biological context, placing biology in geomorphological context, placing terrestrial geomorphology in a planetary context, and placing this planetary perspective in a cosmological context. This, too, is a kind of generalization, and a generalization that fully embodies the imperative, since to stop at any particular “level” of time (which I have elsewhere called ecological temporality) would be arbitrary.
On my other blog I’ve written several posts related directly or obliquely to Big History as I try to define my own approach to this emerging school of historiography: The Place of Bilateral Symmetry in the History of Life, The Archaeology of Cosmology, and The Stars Down to Earth.
The more we pursue the rapidly growing body of knowledge revealed by scientific historiography, the more we find that we are part of the larger universe; our connections to the world expand as we follow them outward in pursuit of Russell’s generalization imperative. I think it was Hans Blumenberg, in his enormous book The Genesis of the Copernican World, who remarked on the significance of the fact that we can stand with our feet on the earth and look up at the stars. As I remarked in The Archaeology of Cosmology, we now find that by digging into the earth we can reveal past events of cosmological history. As a celestial counterpart to this digging in the earth (almost as though concretely embodying the contrast to which Blumenberg referred), we know that by looking up at the stars we are also looking back in time, because the light comes to us ages after it has been produced. Thus is astronomy a kind of luminous archaeology.
In Geometrical Intuition and Epistemic Space I wrote, “…we have no science of time. We have science-like measurements of time, and time as a concept in scientific theories, but no scientific theory of time as such.” Scientists have tried to think scientifically about time, but a science of time eludes us, much as a science of consciousness eludes us. Here a philosophical perspective remains necessary, because there are so many open questions and no clear indication of how they are to be answered in a clearly scientific spirit.
Therefore I think it is too early to say exactly what Big History is, because we aren’t logically or intellectually prepared to say exactly what the Russellian generalization imperative yields when applied to time and history. I think that we are approaching a point at which we can clarify our concepts of time and history, but we aren’t quite there yet, and a lot of conceptual work is necessary before we can produce a definitive formulation of time and history that will make of Big History the science it aspires to be.
. . . . .
7 December 2012
Learning to Love the Wisdom
of Industrial-Technological Civilization
A confession of enthusiasm
Allow me to give free rein to my enthusiasm and to proclaim that there has never been a more exciting time in human history to be a philosopher than today. It is ironic that, at the same time, philosophers are probably held in lower esteem today than in any other period of human history. I have recently come to the opinion that it is intrinsic to the structure of industrial-technological civilization to devalue philosophy, but I have discussed the contemporary neglect of philosophy in several posts — Fashionable Anti-Philosophy, Further Fashionable Anti-Philosophy, and Beyond Anti-Philosophy among them — so that is not what I am going to write about today.
Today, on the contrary, I want to write about the great prospects that are now opening up to philosophy, despite its neglect in popular culture and its abuse by the enthusiasts of a positivistically-conceived science. And these prospects are not one but many. In some previous posts about object-oriented philosophy (also called object-oriented ontology, or OOO) I mentioned how exciting it was to be alive at a time when a new philosophical school was coming into being, especially at a time when academic philosophy seems to have stalled and relinquished any engagement with the world or any robust relationship to the ordinary lives of ordinary human beings. (As bitterly as the existentialists were denounced in their day, they did engage quite directly with contemporary events and contemporary life. Sartre made a fool of himself by meeting with Che Guevara and by mouthing Maoist claptrap in his later years, but he reached far more people than most philosophers of his generation, and like fellow existentialist Camus, did so through a variety of prose works, plays, and novels.) Now I see that we live in an age of the emergence of not one but of many different philosophical schools, and this is interesting indeed.
Philosophical periodization: schools of thought
Anyone who discusses so-called “schools” in philosophy is likely to run into immediate resistance, usually from those who have been characterized as belonging to a dubiously-conceived school. As soon as Sartre gave an explicit definition of existentialism as being based on the principle that existence precedes essence, Heidegger and Jaspers explicitly and emphatically denied that they were “existentialists.” And if we think of the hundreds of years of philosophical research and the hundreds of philosophers who can be lumped under the label of “scholasticism,” the identification of a school of “scholastic” philosophers would seem to be without any content whatsoever.
Nevertheless, some of these labels remain accurate even when and where they are rejected. While Heidegger and Jaspers rejected the principle that existence precedes essence, there is no question that all three of these great existentialist thinkers were preoccupied with the problematic human condition in the modern world. Similarly, the ordinary language philosophers had their disagreements, but they were unified by a method of the analysis of ordinary language.
The school of techno-philosophy
With this caveat in mind about identifying a philosophical “school” that will almost certainly be rejected by its practitioners, I am going to identify what I will call techno-philosophy. In regard to techno-philosophy I will identify no common goals, aspirations, beliefs, principles, ideas, or ideals that belong to the practitioners of techno-philosophy, but only the common object of philosophical analysis. Techno-philosophy offers an initial exploration of novel ideas and novel facts of life in industrial society, and especially the ideas and facts of life related to technology that rapidly change within a single lifetime.
What makes the school of techno-philosophy interesting is not the special rigor or creativity of the philosophical thought in question — contemporary Anglo-American academic analytical philosophy is far more rigorous, and contemporary continental philosophy is far more imaginative — but rather the objects taken up by techno-philosophy. What are the objects of techno-philosophy? These objects are the novel productions of industrial-technological civilization, which appear and succeed each other in breathless rapidity. The fact of technological change, or even, if one would be so bold, rapid technological progress, is unprecedented. As an unprecedented aspect of life in industrial-technological civilization, rapid technological progress is an appropriate object for philosophical reflection.
The original position of technical society
The artifacts of technological progress have been produced in almost complete blindness as regards their philosophical significance and consequences. What techno-philosophy represents is the first attempt to make philosophical sense of the artifacts of technology taken collectively, on the whole, and with an eye to their extrapolation across space and through time. In fact, the very idea of technology taken whole may be understood as a conceptual innovation of techno-philosophy, and this very idea has been called the technium by Kevin Kelly. (I wrote about the idea of the technium in Civilization and the Technium and The Genealogy of the Technium.)
Thus we can count Kevin Kelly among the techno-philosophers, and even Ray Kurzweil — though Kurzweil does not seem to be interested in philosophy per se, he has pushed the limits of thinking about machine intelligence to the point that he is on the verge of philosophical questions. Thinkers in the newly emerging tradition of the technological singularity and transhumanism belong to techno-philosophy. Academic philosopher David Chalmers, known for his contributions to the philosophy of mind (and especially for formulating the “hard problem” of consciousness, the chasm between consciousness and attempted physicalistic accounts of mind), was invited to the last singularity conference and tried his hand at an essay in techno-philosophy.
Bostrom and Ćirković as techno-philosophers
The work of Nick Bostrom also represents techno-philosophy, as Professor Bostrom has engaged with a number of contemporary ideas such as superintelligence, the Fermi paradox, extraterrestrial life, transhumanism, posthumanism, the simulation hypothesis (a contemporary reformulation of the Cartesian evil spirit), and existential risk (a contemporary reformulation and secularization of apocalypticism, but with a focus on mitigating apocalyptic scenarios).
Serbian astronomer and physicist Milan M. Ćirković has also dealt with many of the same questions in an admirably daring way (he has co-edited the volume Global Catastrophic Risks with Bostrom). What typifies the work of Bostrom and Ćirković — which definitely constitutes the best work in contemporary techno-philosophy — is their willingness to bring traditional philosophical sensibility to the analysis of contemporary ideas, and especially ideas enabled and facilitated by contemporary technology such as computing and space science.
The branches of industrial-technological philosophy
Industrial-technological civilization is created by practical men who eschew philosophy if they happen to be aware of it, and those with a bent for abstract or theoretical thought, and who desire a robust engagement with the world, turn to science or mathematics, where abstract and theoretical ideas can have a direct and nearly immediate impact upon the development of industrial society. Techno-philosophy picks up where these indispensable men of industrial-technological civilization leave off.
Once we understand the relationship between techno-philosophy and industrial-technological civilization (and its disruptions), and knowing the cycle of science, technology and engineering that drives such a civilization, we can posit a philosophical analysis of each stage in the escalating spiral of industrial-technological civilization, with a philosophy of the science of this civilization, a philosophy of the technology of this civilization, and a philosophy of the engineering of this civilization. Techno-philosophy, then, is the philosophy of the technology of industrial-technological civilization.
Philosophy in a time of model drift
In parallel to the emerging school of techno-philosophy, there is a quasi-philosophical field of popular expositions of science by those actively working on the frontiers of the sciences that have been most profoundly transformed by recent developments, and which are still in the process of transformation. This is the philosophy of the science of industrial-technological civilization, and it is distinct from traditional philosophy of science. The rapid developments in cosmology and physics in particular have led to model drift in these fields, and those scientists who are working on these concepts feel the need to give these abstract and theoretical conceptions a connection to ordinary human experience.
Here I have in mind the books of Brian Greene, such as his exposition of string theory, The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, as well as criticisms of string theory such as Peter Woit’s Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Some of these books are more widely ranging and therefore more philosophical, such as David Deutsch’s The Fabric of Reality: The Science of Parallel Universes — and Its Implications, while some appeal to a traditional conception of “natural philosophy” as in David Grinspoon’s Lonely Planets: The Natural Philosophy of Alien Life. While these works do not constitute “techno-philosophy” as I have characterized it above, they stand in a similar relationship to the civilization the thought of which they represent.
The prospects for techno-philosophy
As techno-philosophy grows in scope, rigor, depth, and methodological sophistication, it promises to give to industrial-technological civilization something this civilization never wanted and never desired, but of which it is desperately in need: Depth. Gravitas. Intellectual seriousness. Disciplined reflection on the human condition. In a word: wisdom.
If there is anything the world needs today, it is wisdom.
. . . . .
23 November 2012
What is the Church-Turing Thesis?

The Church-Turing Thesis (also called Church’s Thesis, Church’s Conjecture, and the Church-Turing Conjecture, among other names) is an idea from theoretical computer science that emerged from research in the foundations of logic and mathematics. It ultimately bears upon what can be computed, and thus, by extension, upon what a computer can do (and what a computer cannot do).
Note: For clarity’s sake, I ought to point out that Church’s Thesis and Church’s Theorem are distinct. Church’s Theorem is an established theorem of mathematical logic, proved by Alonzo Church in 1936: there is no decision procedure for logic (i.e., there is no method for determining whether an arbitrary formula in first-order logic is a theorem). But the two – Church’s theorem and Church’s thesis – are related: both follow from the exploration of the possibilities and limitations of formal systems and the attempt to define these in a rigorous way.
Even to state Church’s Thesis is controversial. There are many formulations, and many of these alternative formulations come straight from Church and Turing themselves, who framed the idea differently in different contexts. Also, when you hear computer science types discuss the Church-Turing thesis you might think that it is something like an engineering problem, but it is essentially a philosophical idea. What the Church-Turing thesis is not is as important as what it is: it is not a theorem of mathematical logic, it is not a law of nature, and it is not a limit of engineering. We could say that it is a principle, because the word “principle” is ambiguous and thus covers the various formulations of the thesis.
There is an article on the Church-Turing Thesis at the Stanford Encyclopedia of Philosophy, one at Wikipedia (of course), and even a website dedicated to a critique of the Stanford article, Alan Turing in the Stanford Encyclopedia of Philosophy. All of these are valuable resources on the Church-Turing Thesis, and well worth reading to gain some orientation.
One way to formulate Church’s Thesis is that all effectively computable functions are general recursive. Both “effectively computable functions” and “general recursive” are technical terms, but there is an important difference between these technical terms: “effectively computable” is an intuitive conception, whereas “general recursive” is a formal conception. Thus one way to understand Church’s Thesis is that it asserts the identity of a formal idea and an informal idea.
One of the reasons that there are many alternative formulations of the Church-Turing thesis is that there are several formally equivalent formulations of recursiveness: recursive functions, Turing computable functions, Post computable functions, representable functions, lambda-definable functions, and Markov normal algorithms among them. All of these are formal conceptions that can be rigorously defined. For the other term that constitutes the identity that is Church’s thesis, there are also several alternative formulations of effectively computable functions, and these include other intuitive notions like that of an algorithm or a procedure that can be implemented mechanically.
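The flavor of these formal equivalences can be made concrete with a minimal sketch (mine, not part of the original foundational literature): the same intuitively computable function, addition on the natural numbers, written once as an ordinary recursive definition and once via Church numerals in lambda-calculus style. The helper names (`church_add`, `to_int`, `from_int`) are illustrative inventions.

```python
# Two formally equivalent notions of computability, illustrated on addition.

# 1. Addition as a recursive function:
#    add(m, 0) = m;  add(m, n+1) = add(m, n) + 1
def add(m, n):
    return m if n == 0 else add(m, n - 1) + 1

# 2. Addition via Church numerals, rendered in Python's lambda syntax.
#    A Church numeral n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
church_add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def from_int(k):
    """Build the Church numeral for a Python integer k."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def to_int(n):
    """Convert a Church numeral back to a Python integer."""
    return n(lambda k: k + 1)(0)

# Both formalisms compute the same intuitive function:
assert add(3, 4) == 7
assert to_int(church_add(from_int(3))(from_int(4))) == 7
```

The point of the sketch is not the arithmetic but the identity it exhibits: two syntactically alien formalisms provably pick out the same function, which is exactly the situation that lends Church’s Thesis its plausibility.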
These may seem like recondite matters with little or no relationship to ordinary human experience, but I am surprised how often I find the same theoretical conflict played out in the most ordinary and familiar contexts. The dialectic of the formal and the informal (i.e., the intuitive) is much more central to human experience than is generally recognized. For example, the conflict between intuitively apprehended free will and apparently scientifically unimpeachable determinism is a conflict between an intuitive and a formal conception that both seem to characterize human life. Compatibilist accounts of determinism and free will may be considered the “Church’s thesis” of human action, asserting the identity of the two.
It should be understood here that when I discuss intuition in this context I am talking about the kind of mathematical intuition I discussed in Adventures in Geometrical Intuition, although the idea of mathematical intuition can be understood as perhaps the narrowest formulation of that intuition that is the polar concept standing in opposition to formalism. Kant made a useful distinction between sensory intuition and intellectual intuition that helps to clarify what is intended here, since the very idea of intuition in the Kantian sense has become lost in recent thought. Once we think of intuition as something given to us in the same way that sensory intuition is given to us, only without the mediation of the senses, we come closer to the operative idea of intuition as it is employed in mathematics.
Mathematical thought, and formal accounts of experience generally speaking, of course, seek to capture our intuitions, but this formal capture of the intuitive is itself an intuitive and essentially creative process even when it culminates in the formulation of a formal system that is essentially inaccessible to intuition (at least in parts of that formal system). What this means is that intuition can know itself, and know itself to be an intuitive grasp of some truth, but formality can only know itself as formality and cannot cross over the intuitive-formal divide in order to grasp the intuitive even when it captures intuition in an intuitively satisfying way. We cannot even understand the idea of an intuitively satisfying formalization without an intuitive grasp of all the relevant elements. Just as Spinoza said that the true is the criterion both of itself and of the false, so we can say that the intuitive is the criterion both of itself and of the formal. (And given that, today, truth is primarily understood formally, this is a significant claim to make.)
The above observation can be formulated as a general principle such that the intuitive can grasp all of the intuitive and a portion of the formal, whereas the formal can grasp only itself. I will refer to this as the principle of the asymmetry of intuition. We can see this principle operative both in the Church-Turing Thesis and in popular accounts of Gödel’s theorem. We are all familiar with popular and intuitive accounts of Gödel’s theorem (since the formal accounts are so difficult), and it is not unusual to see claims made for the limitative theorems that go far beyond what they formally demonstrate.
All of this holds also for the attempt to translate traditional philosophical concepts into scientific terms — the most obvious example being free will, supposedly accounted for by physics, biochemistry, and neurobiology. But if one makes the claim that consciousness is nothing but such-and-such physical phenomenon, it is impossible to cash out this claim in any robust way. The science is quantifiable and formalizable, but our concepts of mind, consciousness, and free will remain stubbornly intuitive and have not been satisfyingly captured in any formalism — whether any proposed formalization is satisfying could only be determined by intuition, and so the question eludes any formal capture. To “prove” determinism, then, would be as incoherent as “proving” Church’s Thesis in any robust sense.
There certainly are interesting philosophical arguments on both sides of Church’s Thesis — that is to say, both its denial and its affirmation — but these are arguments that appeal to our intuitions and, most crucially, our idea of ourselves is intuitive and informal. I should like to go further and to assert that the idea of the self must be intuitive and cannot be otherwise, but I am not fully confident that this is the case. Human nature can change, albeit slowly, along with the human condition, and we could, over time — and especially under the selective pressures of industrial-technological civilization — shape ourselves after the model of a formal conception of the self. (In fact, I think it very likely that this is happening.)
I cannot even say — I would not know where to begin — what would constitute a formal self-understanding of the self, much less any kind of understanding of a formal self. Well, maybe not. I have written elsewhere that the doctrine of the punctiform present (not very popular among philosophers these days, I might add) is a formal doctrine of time, and in so far as we identify internal time consciousness with the punctiform present we have a formal doctrine of the self.
While the above account is one to which I am sympathetic, this kind of formal concept — I mean the punctiform present as a formal conception of time — is very different from the kind of formality we find in physics, biochemistry, and neuroscience. We might assimilate it to some mathematical formalism, but this is an abstraction made concrete in subjective human experience, not in physical science. Perhaps this partly explains the fashionable anti-philosophy that I have written about.
. . . . .
14 October 2012
One Hundred Years of Intuitionism and Formalism

A message to the foundations of mathematics (FOM) listserv by Frank Waaldijk alerted me to the fact that today, 14 October 2012, is the one hundredth anniversary of Brouwer’s inaugural address at the University of Amsterdam, “Intuitionism and Formalism.” (I have discussed Frank Waaldijk earlier in P or Not-P and What is the Relationship Between Constructive and Non-Constructive Mathematics?)
I have called this post “One Hundred Years of Intuitionism and Formalism” but I should have called it “One Hundred Years of Intuitionism” since, of the three active contenders as theories for the foundations of mathematics a hundred years ago, only intuitionism is still with us in anything like its original form. The other contenders — formalism and logicism — are still with us, but in forms so different that they no longer resemble any kind of programmatic approach to the foundations of mathematics. In fact, it could be said that logicism was gradually transformed into technical foundational research, primarily logical in character, without any particular programmatic content, while formalism, following in a line of descent from Hilbert, has also been incrementally transformed into mainstream foundational research, but primarily mathematical in character, and also without any particular programmatic or even philosophical content.
The very idea of “foundations” has come to be questioned in the past hundred years — though, as I commented a few days ago in The Genealogy of the Technium, the early philosophical foundationalist programs continue to influence my own thinking — and we have seen that intuitionism has been able to make the transition from a foundationalist-inspired doctrine to a doctrine that might be called mathematical “best practices.” In contemporary philosophy of mathematics, one of the most influential schools of thought for the past couple of decades or more has been to focus not on theories of mathematics, but rather on mathematical practices. Sometimes this is called “neo-empiricism.”
Intuitionism, I think, has benefited from the shift from the theoretical to the practical in the philosophy of mathematics, since intuitionism was always about making a distinction between the acceptable and the unacceptable in logical principles, mathematical reasoning, proof procedures, and all those activities that are part of the mathematician’s daily bread and butter. This shift has also made it possible for intuitionism to distance itself from its foundationalist roots at a time when foundationalism is on the ropes.
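The kind of distinction intuitionism draws between acceptable and unacceptable logical principles can be illustrated with a textbook example (standard in the literature, not drawn from this post): the non-constructive proof that there exist irrational numbers $a$ and $b$ such that $a^b$ is rational.

```latex
% A classical proof resting on the law of the excluded middle:
% either $\sqrt{2}^{\sqrt{2}}$ is rational or it is not.
\begin{itemize}
  \item If $\sqrt{2}^{\sqrt{2}}$ is rational, take $a = b = \sqrt{2}$.
  \item If $\sqrt{2}^{\sqrt{2}}$ is irrational, take
        $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$; then
        \[
          a^b \;=\; \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}
              \;=\; \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}}
              \;=\; \sqrt{2}^{\,2} \;=\; 2 .
        \]
\end{itemize}
% The proof establishes that suitable $a$ and $b$ exist without telling us
% which case actually holds.
```

An intuitionist accepts the existence claim only when a specific witness is produced; the classical mathematician is content with the disjunction. It is exactly this sort of inference that Brouwer’s program disallows.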
Brouwer is due some honor for his prescience in formulating intuitionism a hundred years ago — and intuitionism came almost fully formed out of the mind of Brouwer, much as syllogistic logic came almost fully formed out of the mind of Aristotle — so I would like to celebrate Brouwer on this, the one hundredth anniversary of his inaugural address at the University of Amsterdam, in which he formulated so many of the central principles of intuitionism.
Brouwer was prescient in another sense as well. He ended his inaugural address with a quote from Poincaré that is well known in the foundationalist community, since it has been quoted in many works since:
“Les hommes ne s’entendent pas, parce qu’ils ne parlent pas la même langue et qu’il y a des langues qui ne s’apprennent pas.”
This might be (very imperfectly) translated into English as follows:
“Men do not understand each other because they do not speak the same language and there are languages that cannot be learned.”
What Poincaré called men not understanding each other Kuhn would later and more famously call incommensurability. And while we have always known that men do not understand each other, it had been widely believed before Brouwer that at least mathematicians understood each other because they spoke the same universal language of mathematics. Brouwer said that his exposition revealed, “the fundamental issue, which divides the mathematical world.” A hundred years later the mathematical world is still divided.
For those who have not studied the foundations and philosophy of mathematics, it may come as a surprise that the past century, which has been so productive of research in advanced mathematics — arguably going beyond all the cumulative research in mathematics up to that time — has also been a century of conflict during which the ideas of mathematics as true, certain, and necessary — ideas that had been central to a core Platonic tradition of Western thought — have all been questioned and largely abandoned. It has been a raucous century for mathematics, but also a fruitful one. A clever mathematician with a good literary imagination could write a mathematical analogue of Mandeville’s Fable of the Bees in which it is precisely the polyglot disorder of the hive that made it thrive.
That core Platonic tradition of Western thought is now, even as I write these lines, dissipating just as the illusions of the philosopher, freed from the cave of shadows, dissipate in the light of the sun above.
Brouwer, like every revolutionary (and we recall that it was Weyl, who was sympathetic to Brouwer, who characterized Brouwer’s work as a revolution in mathematics), wanted to do away with an old, corrupt tradition and to replace it with something new and pure and edifying. But in the affairs of men, a revolution is rarely complete, and it is, far more often, the occasion of schism than conversion.
Many were converted by Brouwer; many are still being converted today. As I wrote above, intuitionism remains a force to be reckoned with in contemporary mathematical thought in a way that logicism and formalism are not. But the conversions and subsequent defections left a substantial portion of the mathematical community unconverted and faithful to the old ways. The tension and the conflict between the old ways and the new ways have been a source of creative inspiration.
Precisely that moment in history when the very nature of mathematics was called into question became the same moment in history when mathematics joined technology in exponential growth.
Mars is the true muse of men.
. . . . .
. . . . .
. . . . .
. . . . .
10 October 2012
Addendum on Civilization and the Technium
in regard to human, animal, and alien technology
One of the virtues of taking the trouble to formulate one’s ideas in an explicit form is that, once so stated, all kinds of assumptions one was making become obvious, as well as all kinds of problems that one didn’t see when the idea was just floating around in one’s consciousness, as a kind of intellectual jeu d’esprit, as it were.
Bertrand Russell wrote about this, or at least about a closely related experience, in one of his well-known early essays, in which he discussed the importance not only of making our formulations explicit, but of doing so by way of putting some distance between our thoughts and the kind of facile self-evidence that can distract us from the real business at hand:
“It is not easy for the lay mind to realise the importance of symbolism in discussing the foundations of mathematics, and the explanation may perhaps seem strangely paradoxical. The fact is that symbolism is useful because it makes things difficult. (This is not true of the advanced parts of mathematics, but only of the beginnings.) What we wish to know is, what can be deduced from what. Now, in the beginnings, everything is self-evident; and it is very hard to see whether one self-evident proposition follows from another or not. Obviousness is always the enemy to correctness. Hence we invent some new and difficult symbolism, in which nothing seems obvious. Then we set up certain rules for operating on the symbols, and the whole thing becomes mechanical. In this way we find out what must be taken as premiss and what can be demonstrated or defined. For instance, the whole of Arithmetic and Algebra has been shown to require three indefinable notions and five indemonstrable propositions. But without a symbolism it would have been very hard to find this out. It is so obvious that two and two are four, that we can hardly make ourselves sufficiently skeptical to doubt whether it can be proved. And the same holds in other cases where self-evident things are to be proved.”
Bertrand Russell, Mysticism and Logic, “Mathematics and the Metaphysicians”
Russell’s foundationalist program in the philosophy of mathematics closely followed the method that he outlined so lucidly in the passage above. Principia Mathematica makes the earliest stages of mathematics notoriously difficult, but does so in service to the foundationalist ideal of revealing hidden presuppositions and incorporating them into the theory in an explicit form.
Another way that Russell sought to overcome self-evidence is through the systematic pursuit of the highest degree of generality, which drives us to formulate concepts that are alien to common sense:
“It is a principle, in all formal reasoning, to generalize to the utmost, since we thereby secure that a given process of deduction shall have more widely applicable results…”
Bertrand Russell, An Introduction to Mathematical Philosophy, Chapter XVIII, “Mathematics and Logic”
These are two philosophical principles — the explication of ultimate simples (foundations) and the pursuit of generality — that I have very much taken to heart and attempted to put into practice in my own philosophical work. Russell’s foundationalist method shows us what can be deduced from what, and gives to these deductions the most widely applicable results. To these philosophical imperatives of Russell I have myself added another, parallel to his pursuit of generality, and that is the simultaneous pursuit of formality: it is (or ought to be) a principle in all theoretical reasoning to formalize to the utmost…
Russell also observed the imperative of formalization, though he himself did not systematically distinguish between generalization and formalization, and it is a tough problem; I’ve been working on it for about twenty years and haven’t yet arrived at definitive formulations. As far as provisional formulations go, generalization gives us the highly comprehensive conceptions like astrobiology and civilization and the technium that allow us to unify a vast body of knowledge that must be studied by inter-disciplinary means, while formalization gives us the distinctions we must carefully observe within our concepts, so that generalization does not simply give us the night in which all cows are black (to borrow a phrase that Hegel used to ridicule Schelling’s conception of the Absolute).
Foundationalism as a philosophical movement is very much out of fashion now, although the foundations of mathematics, pursued eo ipso, remains an active and highly technical branch of logico-mathematical research, and today looks a lot different from what it was when it was first formulated as a philosophical research program a hundred years ago by Frege, Peano, Russell, Whitehead, Wittgenstein, and others. Nevertheless, I continue to derive much philosophical clarification from the early philosophical stages of foundationalism, especially in regard to theories that have not (yet) been reduced to formal systems, as is the case with theories of history or theories of civilization.
I am still a long way from reducing my ideas about history or civilization to first principles, much less to symbolism, but I feel like I am making progress, and the discovery of assumptions and problems is a sure sign of progress; in this sense, my post on Civilization and the Technium marked a stage of progress in my thinking, because of the inadequacy of my formulations that it revealed.
In my Civilization and the Technium I compared the extent of civilization — a familiar idea that has not yet received anything like an adequate definition — with the extent of the technium — a recent and hence unfamiliar idea for which there is an explicit formulation, but one whose full scope remains untested and untried, and which therefore presents problems that the idea of civilization does not. I formulated concepts of the technium parallel to my formulations of astrobiology and astrocivilization, as follows:
● Eotechnium: the origins of the technium, wherever and whenever it occurs, terrestrial or otherwise
● Esotechnium: our terrestrial technium
● Exotechnium: any extraterrestrial technium exclusive of the terrestrial technium
● Astrotechnium: the totality of technology in the universe, our terrestrial and any extraterrestrial technium taken together in their cosmological context
I realize now that when I did this I was making slightly different assumptions for civilization and the technium. The intuitive basis of this was that I assumed, in regard to the technium, that the technium I was describing was all due to human activity (a clear case of anthropic bias), so that the distinction between the esotechnium and the exotechnium was the distinction between terrestrial human technology and extraterrestrial human technology.
When, on the other hand, I formulated the parallel concepts for civilization, I assumed that esocivilization was terrestrial human civilization and that exocivilization would be alien civilizations not derived from the human eocivilization source.
Another way to put this is that I assumed the validity of the terrestrial eotechnium thesis even while I also assumed that the terrestrial eocivilization thesis did not hold. Is that too much technical terminology? In other words, I assumed the uniqueness of the human technium but I did not assume the uniqueness of human industrial-technological civilization.
This points to a further articulation (and therefore a further formalization) of the concepts employed: one must keep the conception of eocivilization (the origins of civilization) clearly in mind, and distinguish between, on the one hand, terrestrial civilization that expands into extraterrestrial space and therefore becomes exocivilization relative to its eocivilization source, and, on the other hand, a xeno-eocivilization source that constitutes exocivilization by virtue of its xenomorphic origins. If one is going to distinguish between esocivilization and exocivilization, one must identify the eocivilization source, or all is for naught.
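The distinction just drawn can be made mechanically explicit. The following sketch is purely illustrative (the function name and category labels are my own hypothetical choices, not the author's formalism); it shows that classifying a civilization as eso- or exo- requires two independent inputs, its eocivilization source and its present location, and that there are consequently two distinct ways of being exocivilization:

```python
def classify_civilization(origin: str, location: str) -> str:
    """Classify a civilization relative to a terrestrial observer.

    origin   -- its eocivilization source: "terrestrial" or "xenomorphic"
    location -- where it is now found: "terrestrial" or "extraterrestrial"
    """
    if origin == "terrestrial":
        if location == "terrestrial":
            return "esocivilization"
        # A terrestrial eocivilization expanding into extraterrestrial
        # space becomes exocivilization relative to its source.
        return "exocivilization (expanded terrestrial eocivilization)"
    # A xeno-eocivilization source constitutes exocivilization by virtue
    # of its xenomorphic origins, wherever it is encountered.
    return "exocivilization (xeno-eocivilization source)"
```

The point of the sketch is that omitting the `origin` parameter — failing to identify the eocivilization source — makes the eso/exo distinction undecidable, which is exactly the "all is for naught" worry above.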
All of this holds, mutatis mutandis, for the eotechnium, esotechnium, exotechnium, and astrotechnium, although I ought to point out that my formulations in Civilization and the Technium, repeated above, were accurate because they were formulated in Russellian generality. It was in my following exposition that I failed to observe all the requisite distinctions. But there’s more. I’ve since realized that further distinctions can be made.
As I thought about the possibility of a xenotechnium, i.e., a technium produced by a sentient alien species, I realized that there is a xenotechnium right here on Earth (a terrestrial xenotechnium, or non-hominid technium), in the form of tool use and other forms of technology by non-human species. We are all familiar with famous examples like the chimpanzees who will strip the leaves off a branch and then use the branch to extract termites from a termite mound. Yesterday I alluded to the fact that otters use rocks to break open shells. There are many other examples. Apart from tool use, beaver dams and the nests of birds, while not constructed with tools, certainly represent a kind of technology.
If we take all instances of animal technology together, they constitute a terrestrial non-human technium. If we take all instances of technology known to us, human and non-human together, we have a still more comprehensive conception of the technium that is more general than the concept of the human-specific technium and therefore less subject to anthropic bias (a concept due to Nick Bostrom, who also formulated the concept of existential risk). This latter, more comprehensive conception of the technium would seem to be favored by Russell’s imperative of generalization to the utmost, although we must continue to make the finer distinctions within the concept for the formalization of the conception of the technium to keep pace with its generalization.
There is a systematic relationship between terrestrial biology and the terrestrial technium, both hominid and non-hominid. Eobiology facilitates the emergence of a terrestrial eotechnium, of which all instances of technology, hominid and non-hominid alike, can be considered expressions. This is already explicit in Kevin Kelly’s book, What Technology Wants, as one of his arguments is that the emergence and growth of the technium is continuous with the emergence and growth of biological organization and complexity. He cites John Maynard Smith and Eörs Szathmáry as defining the following thresholds of biological organization (p. 46):
One replicating molecule → Interacting population of replicating molecules
Replicating molecules → Replicating molecules strung into chromosome
Chromosome of RNA enzymes → DNA proteins
Cell without nucleus → Cell with nucleus
Asexual reproduction (cloning) → Sexual recombination
Single-cell organism → Multicell organism
Solitary individual → Colonies and superorganisms
Primate societies → Language-based societies
He then suggests the following sequence of thresholds within the growth of the technium (p. 47):
Primate communication → Language
Oral lore → Writing/mathematical notation
Scripts → Printing
Book knowledge → Scientific method
Artisan production → Mass production
Industrial culture → Ubiquitous global communication
And then he connects the two sequences:
The trajectory of increasing order in the technium follows the same path that it does in life. Within both life and the technium, the thickening of interconnections at one level weaves the new level of organization above it. And it’s important to note that the major transitions in the technium begin at the level where the major transitions in biology left off: Primate societies give rise to language. The invention of language marks the last major transformation in the natural world and also the first transformation in the manufactured world. Words, ideas, and concepts are the most complex things social animals (like us) make, and also the simplest foundation for any type of technology. (p. 48)
Thus the genealogy of the technium is continuous with the genealogy of life.
Considering this in relation to the possibility of a xenotechnium, one would expect the same to be the case: I would expect a systematic relationship to hold between xenobiology and a xenotechnium, such that an alien eobiology would facilitate the emergence of an alien eotechnium. Extending this naturalistic line of thought, which assumes that similar patterns of development hold for peer industrial-technological civilizations, I would further assume that a xenotechnium would not always coincide with the xenocivilization with which it is associated. If there is a “first contact” between terrestrial civilization and a xenocivilization, it is likely to be, rather, a contact between the expanding terrestrial technium (which is, technically, no longer terrestrial precisely because it is expanding extraterrestrially) and an expanding xenotechnium.
There remains much conceptual work to be done here, as the reader will have realized. I’ll continue to work on these formulations, keeping in mind the imperatives of generality and formality, and perhaps someday converging on a foundationalist account of biology, civilization, and the technium that is at once both fully comprehensive and fully articulated.
. . . . .
. . . . .
. . . . .
19 May 2012
We can make a distinction among distinctions between ad hoc and principled distinctions. The former category — ad hoc distinctions — may ultimately prove to be based on a principle, but that principle is unknown as long as the distinction remains an ad hoc distinction. This suggests a further distinction among distinctions between ad hoc distinctions that really are ad hoc, and which are based on no principle, and ad hoc distinctions that are really principled distinctions but the principle in question is not yet known, or not yet formulated, at the time the distinction is made. So there you have a principled distinction between distinctions.
A perfect evocation of ad hoc distinctions is to be found in the opening paragraph of the Preface to Foucault’s The Order of Things:
This book first arose out of a passage in Borges, out of the laughter that shattered, as I read the passage, all the familiar landmarks of my thought — our thought, the thought that bears the stamp of our age and our geography — breaking up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things, and continuing long afterwards to disturb and threaten with collapse our age-old distinction between the Same and the Other. This passage quotes a ‘certain Chinese encyclopedia’ in which it is written that ‘animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’. In the wonderment of this taxonomy, the thing we apprehend in one great leap, the thing that, by means of the fable, is demonstrated as the exotic charm of another system of thought, is the limitation of our own, the stark impossibility of thinking that.
Such distinctions are comic, though Foucault recognizes that our laughter is uneasy: even as we immediately recognize the ad hoc character of these distinctions, we realize that the principled distinctions we routinely employ may not be so principled as we supposed.
Foucault continues this theme for several pages, and then gives another formulation — perhaps, given his interest in mental illness, an illustration that is closer to reality than Borges’ Chinese encyclopedia:
“It appears that certain aphasiacs, when shown various differently coloured skeins of wool on a table top, are consistently unable to arrange them into any coherent pattern; as though that simple rectangle were unable to serve in their case as a homogeneous and neutral space in which things could be placed so as to display at the same time the continuous order of their identities or differences as well as the semantic field of their denomination. Within this simple space in which things are normally arranged and given names, the aphasiac will create a multiplicity of tiny, fragmented regions in which nameless resemblances agglutinate things into unconnected islets; in one corner, they will place the lightest-coloured skeins, in another the red ones, somewhere else those that are softest in texture, in yet another place the longest, or those that have a tinge of purple or those that have been wound up into a ball. But no sooner have they been adumbrated than all these groupings dissolve again, for the field of identity that sustains them, however limited it may be, is still too wide not to be unstable; and so the sick mind continues to infinity, creating groups then dispersing them again, heaping up diverse similarities, destroying those that seem clearest, splitting up things that are identical, superimposing different criteria, frenziedly beginning all over again, becoming more and more disturbed, and teetering finally on the brink of anxiety.”
Foucault here writes that, “the sick mind continues to infinity,” in other words, the process does not terminate in a definite state-of-affairs. This implies that the healthy mind does not continue to infinity: rational thought must make concessions to human finitude. While I find the use of the concept of the pathological in this context questionable, and I have to wonder if Foucault was unwittingly drawn into the continental anti-Cantorian tradition (Brouwerian intuitionism and the like, though I will leave this aside for now), there is some value to the idea that a scientific process (such as classification) must terminate in a finite state-of-affairs, even if only tentatively. I will try to show, moreover, that there is an implicit principle in this attitude, and that it is in fact a principle that I have discussed previously.
The quantification of continuous data requires certain compromises. Two of these compromises are finite precision errors (also called rounding errors) and finite dimension errors (also called truncation errors). Rounding errors should be pretty obvious: finite parameters cannot abide infinite decimal expansions, and so we set a limit of six decimal places, or twenty, or more — but we must set a limit. The difference between an actual figure and a limited decimal expansion of the same figure is called a finite precision error. Finite dimension errors result from the need to arbitrarily introduce gradations into a continuum. Using the real number system, any continuum can be faithfully represented, but this representation would require infinite decimal expansions, so we see that there is a deep consonance between finite precision errors and finite dimension errors. Thus, for example, we measure temperature by degrees, and the arbitrariness of this measure is driven home to us by the different scales we can use for this measurement. And if we could specify temperature using real numbers (including transcendental numbers) we would not have to compromise. But engineering and computers and even human minds need to break things up into manageable finite quantities, so we speak of 3 degrees C, or even 3.14 degrees C, but we don’t try to work with pi degrees C. Thus the increments of temperature, or of any other measurement, involve both finite precision errors and finite dimension errors.
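Both compromises can be exhibited in a few lines of ordinary computation. The sketch below is merely illustrative (the half-degree increment is an arbitrary choice of gradation): the first part shows a finite precision error, the gap between pi and its six-place decimal expansion; the second shows a finite dimension error, the forcing of a continuous temperature onto a discrete scale:

```python
import math

# Finite precision error: pi has an infinite decimal expansion, but any
# stored figure must set a limit -- here, six decimal places.
pi_six_places = round(math.pi, 6)
precision_error = abs(math.pi - pi_six_places)
assert precision_error > 0  # small, but irreducibly nonzero

# Finite dimension error: gradations arbitrarily introduced into a
# continuum -- here, temperature reported only in half-degree increments.
def to_half_degree(temp_c: float) -> float:
    """Snap a continuous temperature to the nearest 0.5 degree C."""
    return round(temp_c * 2) / 2

# "pi degrees C" cannot survive the discretization:
assert to_half_degree(math.pi) == 3.0
```

Note that the two errors compound: `to_half_degree` operates on a float that already carries a finite precision error, which is the "deep consonance" between the two kinds of compromise noted above.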
In so far as quantification is necessary to the scientific method, finite dimension errors are necessary to the scientific method. In several posts (e.g., Axioms and Postulates in Strategy) I have cited Carnap’s tripartite distinction among scientific concepts, the three being classificatory, comparative, and quantitative concepts. Carnap characterizes the emergence of quantitative scientific concepts as the most sophisticated form of scientific thought, but in reviewing Carnap’s scientific concepts in the light of finite precision errors and finite dimension errors, it is immediately obvious that classificatory concepts and comparative concepts do not necessarily involve finite precision errors and finite dimension errors. It is only with the introduction of quantitative concepts that science becomes sufficiently precise that its precision forces compromises upon us. However, I should point out that classificatory concepts routinely force us to accept finite dimension errors, although they do not involve finite precision errors. The example given by Foucault, quoted above, illustrates the inherent tension in classificatory concepts.
We accept finite precision errors and finite dimension errors as the price of doing science, and indeed as the price of engaging in rational thought. As Foucault implied in the above quote, the healthy and sane mind must draw lines and define limits and call a halt to things. Sometimes these limits are close to being arbitrary. We retain the ambition of “carving nature at the joints,” but we accept that we can’t always locate the joint but at times must cleave the carcass of nature regardless.
For this willingness to draw lines and establish limits and to call a halt to proceedings I will give the name The Truncation Principle, since it is in virtue of cutting off some portion of the world and treating it as though it were a unified whole that we are able to reason about the world.
As I mentioned above, I have discussed this problem previously, and in my discussion I noted that I wanted to give an exposition of a principle and a fallacy, but that I did not have a name for it yet, so I called it An Unnamed Principle and an Unnamed Fallacy. Now I have a name for it, and I will use this name, i.e., the truncation principle, from now on.
Note: I was tempted to call this principle the “baby retention principle” or even the “hang on to your baby principle” since it is all about the commonsense notion of not throwing out the baby with the bathwater.
In An Unnamed Principle and an Unnamed Fallacy I initially formulated the principle as follows:
The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.
What I most want to highlight is that when someone points out there are gray areas that seem to elude classification by any clear cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.
A distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
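The logical shape of this point can be made concrete with a deliberately crude example (the distinction and its thresholds below are hypothetical, chosen only for illustration): a classifier that explicitly declines to decide its gray area, yet decides the clear cases without difficulty:

```python
from typing import Optional

def is_tall(height_cm: float) -> Optional[bool]:
    """A distinction with an acknowledged gray area.

    Returns True or False for clear cases, None where the
    distinction is genuinely problematic.
    """
    if height_cm >= 185:
        return True
    if height_cm <= 165:
        return False
    return None  # the gray area: the distinction does not decide here

# The fallacy identified above would be to argue from the existence of
# the None zone to the conclusion that no verdict of is_tall is valid.
```

The existence of the undecided middle range does not impugn the verdicts at the extremes; rejecting the whole distinction because of its gray area is precisely the unnamed fallacy described in the passage above.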
I hope to be able to elaborate on this formulation as I continue to think about the truncation principle and its applications in philosophical, formal, and scientific thought.
Usually when we hear “truncation” we immediately think of the geometrical exercise of regularly cutting away parts of the regular (Platonic) solids, yielding truncated polyhedra and converging on rectified polyhedra. This is truncation in space. Truncation in time, on the other hand, is what is more commonly known as historical periodization. How exactly one historical period is to be cut off from another is always problematic, not least due to the complexity of history and the sheer number of outliers that seem to falsify any attempt at periodization. And yet, we need to break history up into comprehensible chunks. When we do so, we engage in temporal truncation.
All the problems of philosophical logic that present themselves to the subtle and perceptive mind when contemplating a spatial truncation, as, for example, in defining the Pacific Ocean — where exactly does it end in relation to the Indian Ocean? — occur in spades in making a temporal truncation. Yet if rational inquiry is to begin (and here we do not even raise the question of where rational inquiry ends) we must make such truncations, and our initial truncations are crude and mostly ad hoc concessions to human finitude. Thus I introduce the truncation principle as an explicit justification of truncations as we employ them throughout reasoning.
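Temporal truncation as described above amounts to binning a continuous timeline into named chunks. The toy sketch below makes that explicit; the boundary years are conventional illustrations (commonly cited dates, not claims this post makes about history), and the boundary cases are exactly where the truncation shows itself to be an ad hoc concession:

```python
# Periodization as temporal truncation: cut the timeline at chosen
# boundary years and name the resulting chunks.
PERIOD_BOUNDARIES = [
    (476, "ancient"),        # conventionally, the fall of the western Roman empire
    (1453, "medieval"),      # conventionally, the fall of Constantinople
    (1789, "early modern"),  # conventionally, the French Revolution
]

def period_of(year: int) -> str:
    """Assign a year to a period by the first boundary it falls under."""
    for boundary, name in PERIOD_BOUNDARIES:
        if year <= boundary:
            return name
    return "modern"
```

A year such as 1453 itself, or any outlier that straddles a cut, is the temporal analogue of asking where the Pacific Ocean ends and the Indian Ocean begins; the truncation principle licenses making the cut anyway so that inquiry can begin.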
And, as if we hadn’t already laid up enough principles and distinctions for today, here is a principle of principles of distinctions: every principled distinction implies a fallacy that takes the form of neglecting this distinction. With an ad hoc distinction there is no question of fallacy, because there is no principle to violate. Where there is a principle involved, however, the violation of the principle constitutes a fallacy.
Contrariwise, every fallacy implies a principled distinction that ought to have been made. If we observe the appropriate principled distinctions, we avoid fallacies, and if we avoid fallacies we appropriately distinguish that which ought to be distinguished.
. . . . .
. . . . .
. . . . .