Saturday


Truncated sphere: can we appeal to any principle in our truncations?

We can make a distinction among distinctions between ad hoc and principled distinctions. The former category — ad hoc distinctions — may ultimately prove to be based on a principle, but that principle is unknown as long as the distinction remains an ad hoc distinction. This suggests a further distinction among distinctions between ad hoc distinctions that really are ad hoc, and which are based on no principle, and ad hoc distinctions that are really principled distinctions but the principle in question is not yet known, or not yet formulated, at the time the distinction is made. So there you have a principled distinction between distinctions.

A perfect evocation of ad hoc distinctions is to be found in the opening paragraph of the Preface to Foucault’s The Order of Things:

This book first arose out of a passage in Borges, out of the laughter that shattered, as I read the passage, all the familiar landmarks of my thought — our thought, the thought that bears the stamp of our age and our geography — breaking up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things, and continuing long afterwards to disturb and threaten with collapse our age-old distinction between the Same and the Other. This passage quotes a ‘certain Chinese encyclopedia’ in which it is written that ‘animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’. In the wonderment of this taxonomy, the thing we apprehend in one great leap, the thing that, by means of the fable, is demonstrated as the exotic charm of another system of thought, is the limitation of our own, the stark impossibility of thinking that.

Such distinctions are comic, though Foucault recognizes that our laughter is uneasy: even as we immediately recognize the ad hoc character of these distinctions, we realize that the principled distinctions we routinely employ may not be so principled as we supposed.

Foucault continues this theme for several pages, and then gives another formulation — perhaps, given his interest in mental illness, an illustration that is closer to reality than Borges’ Chinese encyclopedia:

“It appears that certain aphasiacs, when shown various differently coloured skeins of wool on a table top, are consistently unable to arrange them into any coherent pattern; as though that simple rectangle were unable to serve in their case as a homogeneous and neutral space in which things could be placed so as to display at the same time the continuous order of their identities or differences as well as the semantic field of their denomination. Within this simple space in which things are normally arranged and given names, the aphasiac will create a multiplicity of tiny, fragmented regions in which nameless resemblances agglutinate things into unconnected islets; in one corner, they will place the lightest-coloured skeins, in another the red ones, somewhere else those that are softest in texture, in yet another place the longest, or those that have a tinge of purple or those that have been wound up into a ball. But no sooner have they been adumbrated than all these groupings dissolve again, for the field of identity that sustains them, however limited it may be, is still too wide not to be unstable; and so the sick mind continues to infinity, creating groups then dispersing them again, heaping up diverse similarities, destroying those that seem clearest, splitting up things that are identical, superimposing different criteria, frenziedly beginning all over again, becoming more and more disturbed, and teetering finally on the brink of anxiety.”

Foucault here writes that “the sick mind continues to infinity”; in other words, the process does not terminate in a definite state of affairs. This implies that the healthy mind does not continue to infinity: rational thought must make concessions to human finitude. While I find the use of the concept of the pathological in this context questionable, and I have to wonder if Foucault was unwittingly drawn into the continental anti-Cantorian tradition (Brouwerian intuitionism and the like, though I will leave this aside for now), there is some value to the idea that a scientific process (such as classification) must terminate in a finite state of affairs, even if only tentatively. I will try to show, moreover, that there is an implicit principle in this attitude, and that it is in fact a principle that I have discussed previously.

The quantification of continuous data requires certain compromises. Two of these compromises are finite precision errors (also called rounding errors) and finite dimension errors (also called truncation). Rounding errors should be pretty obvious: finite parameters cannot abide infinite decimal expansions, and so we set a limit of six decimal places, or twenty, or more — but we must set a limit. The difference between an actual figure and the limited decimal expansion of the same figure is called a finite precision error. Finite dimension errors result from the need to arbitrarily introduce gradations into a continuum. Using the real number system, any continuum can be faithfully represented, but this representation would require infinite decimal expansions, so we see that there is a deep consonance between finite precision errors and finite dimension errors. Thus, for example, we measure temperature by degrees, and the arbitrariness of this measure is driven home to us by the different scales we can use for this measurement. And if we could specify temperature using real numbers (including transcendental numbers) we would not have to compromise. But engineering and computers and even human minds need to break things up into manageable finite quantities, so we speak of 3 degrees C, or even 3.14 degrees C, but we don’t try to work with pi degrees C. Thus the increments of temperature, or of any other measurement, involve both finite precision errors and finite dimension errors.
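These two compromises can be made concrete in a few lines of Python. The six-decimal cutoff and the hundredth-of-a-degree increment below are illustrative choices of mine, not anything mandated by the argument:

```python
import math

# Finite precision error: the gap between an actual figure and its
# limited decimal expansion (here, pi cut off at six decimal places).
pi_limited = round(math.pi, 6)
precision_error = math.pi - pi_limited   # small, but never zero

# Finite dimension error: a continuous magnitude forced onto arbitrary
# gradations (here, temperature snapped to hundredths of a degree C).
def quantize(temperature_c: float, increment: float = 0.01) -> float:
    """Snap a continuous reading to the nearest allowed gradation."""
    return round(temperature_c / increment) * increment

reading = quantize(math.pi)   # we record roughly 3.14 C, never pi degrees C
```

Both errors are accepted in advance: the program could not run at all, and the measurement could not be recorded at all, without them.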

In so far as quantification is necessary to the scientific method, finite dimension errors are necessary to the scientific method. In several posts (e.g., Axioms and Postulates in Strategy) I have cited Carnap’s tripartite distinction among scientific concepts, the three being classificatory, comparative, and quantitative concepts. Carnap characterizes the emergence of quantitative scientific concepts as the most sophisticated form of scientific thought, but in reviewing Carnap’s scientific concepts in the light of finite precision errors and finite dimension errors, it is immediately obvious that classificatory concepts and comparative concepts do not necessarily involve finite precision errors and finite dimension errors. It is only with the introduction of quantitative concepts that science becomes sufficiently precise that its precision forces compromises upon us. However, I should point out that classificatory concepts routinely force us to accept finite dimension errors, although they do not involve finite precision errors. The example given by Foucault, quoted above, illustrates the inherent tension in classificatory concepts.
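Carnap’s three grades of concept can be sketched as three kinds of operation. This is a sketch of mine with made-up thresholds, not Carnap’s own formalism, but it shows where each compromise enters:

```python
# Classificatory: does x fall under a class at all? A yes/no cut
# through a continuum: a finite dimension error, but no rounding.
def is_warm(temp_c: float) -> bool:
    return temp_c > 20.0   # the 20-degree threshold is arbitrary

# Comparative: is x more F than y? A bare ordering; no units needed,
# and neither kind of error is forced on us yet.
def warmer_than(a: float, b: float) -> bool:
    return a > b

# Quantitative: by how much? Only here does finite precision bite,
# since the answer must be cut off at some number of decimal places.
def temperature_difference(a: float, b: float) -> float:
    return round(a - b, 2)
```

The classificatory function already truncates a continuum into two regions; only the quantitative function must additionally round.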

We accept finite precision errors and finite dimension errors as the price of doing science, and indeed as the price of engaging in rational thought. As Foucault implied in the above quote, the healthy and sane mind must draw lines and define limits and call a halt to things. Sometimes these limits are close to being arbitrary. We retain the ambition of “carving nature at the joints,” but we accept that we can’t always locate the joint and must at times cleave the carcass of nature regardless.

For this willingness to draw lines and establish limits and to call a halt to proceedings I will give the name The Truncation Principle, since it is in virtue of cutting off some portion of the world and treating it as though it were a unified whole that we are able to reason about the world.

As I mentioned above, I have discussed this problem previously, and in my discussion I noted that I wanted to give an exposition of a principle and a fallacy, but that I did not have a name for it yet, so I called it An Unnamed Principle and an Unnamed Fallacy. Now I have a name for it, and I will use this name, i.e., the truncation principle, from now on.

Note: I was tempted to call this principle the “baby retention principle” or even the “hang on to your baby principle” since it is all about the commonsense notion of not throwing out the baby with the bathwater.

In An Unnamed Principle and an Unnamed Fallacy I initially formulated the principle as follows:

The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.

What I most want to highlight is that when someone points out there are gray areas that seem to elude classification by any clear-cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.

A distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.

I hope to be able to elaborate on this formulation as I continue to think about the truncation principle and its applications in philosophical, formal, and scientific thought.

Usually when we hear “truncation” we immediately think of the geometrical exercise of regularly cutting away parts of the regular (Platonic) solids, yielding truncated polyhedra and converging on rectified polyhedra. This is truncation in space. Truncation in time, on the other hand, is what is more commonly known as historical periodization. How exactly one historical period is to be cut off from another is always problematic, not least due to the complexity of history and the sheer number of outliers that seem to falsify any attempt at periodization. And yet, we need to break history up into comprehensible chunks. When we do so, we engage in temporal truncation.

All the problems of philosophical logic that present themselves to the subtle and perceptive mind when contemplating a spatial truncation, as, for example, in defining the Pacific Ocean — where exactly does it end in relation to the Indian Ocean? — occur in spades in making a temporal truncation. Yet if rational inquiry is to begin (and here we do not even raise the question of where rational inquiry ends) we must make such truncations, and our initial truncations are crude and mostly ad hoc concessions to human finitude. Thus I introduce the truncation principle as an explicit justification of truncations as we employ them throughout reasoning.
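Temporal truncation can be given a toy expression in code. The cut-off years below (476, 1453, 1789) are conventional textbook boundaries chosen purely for illustration; the point is precisely that any such boundaries are crude, mostly ad hoc concessions, not facts of history:

```python
# A hypothetical periodization of Western history: continuous time
# truncated into named chunks by arbitrary boundary years.
BOUNDARIES = [
    (476, "Ancient"),        # conventional fall of the Western Empire
    (1453, "Medieval"),      # conventional fall of Constantinople
    (1789, "Early Modern"),  # conventional start of the French Revolution
]

def period_of(year: int) -> str:
    """Assign a year to a period: a temporal truncation."""
    for boundary, name in BOUNDARIES:
        if year < boundary:
            return name
    return "Modern"
```

Every outlier that straddles a boundary is a problematic case in the sense of the truncation principle; the many unproblematic cases are why the schema remains usable anyway.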

And, as if we hadn’t already laid up enough principles and distinctions for today, here is a principle of principles of distinctions: every principled distinction implies a fallacy that takes the form of neglecting this distinction. With an ad hoc distinction there is no question of fallacy, because there is no principle to violate. Where there is a principle involved, however, the violation of the principle constitutes a fallacy.

Contrariwise, every fallacy implies a principled distinction that ought to have been made. If we observe the appropriate principled distinctions, we avoid fallacies, and if we avoid fallacies we appropriately distinguish that which ought to be distinguished.

. . . . .

Grand Strategy Annex

Tuesday


Henri-Louis Bergson, 18 October 1859 to 04 January 1941, philosopher of Dionysian time and duration.

In the early twentieth century Henri Bergson was a name to conjure with. He was an intellectual celebrity not unlike, say, Foucault before his death: both men could pack a hall with excited Parisians eager to hear the intellectual developments of the most advanced mind of France. Bergson was a man of sharp, angular features, a large bulbous forehead, and deeply set eyes, the overall effect of which reminds one not a little of Count Orlok played by Max Schreck in Murnau’s Nosferatu.

Nosferatu was the bête noire of all that belongs to the light of day, and he himself belongs to Chaos and Old Night.

As a philosopher who made crowds swoon, he inevitably attracted the enmity of other philosophers, Bertrand Russell especially, for whom Bergson was his bête noire. I can imagine that Russell might have chuckled at the idea of Bergson as Count Orlok. But for Russell the chuckle would have been mixed with a sense of disquiet, because Bergson represented to him much that Murnau’s symphony of horror represented to its audience: the irruption of the irrational within an ordered world, the rejection of reason in favor of Dionysian indulgence, the mind subordinated to natural forces in their most horrific appearance (not unlike Pentheus in The Bacchae). For Russell, Bergson represented the forces of Chaos and Old Night let loose upon the world (in an intellectualized form), just as an early cinema-goer might have seen the story of Nosferatu as Chaos and Old Night let loose upon the world (in a dramatically cinematic form).

The Apollonian young Bertrand Russell rarely passed up an opportunity to criticize Bergson.

Bergson is no longer a name with which to conjure, but when he is remembered, one of the themes for which he is remembered is that of the spatialization of time. For Bergson, intellectual activity cannot reconcile itself to time as it is actually experienced, so that it must create surrogates for time, and it does so, according to Bergson, by assimilating time to space. The mind creates images and representations of time that employ the constructions of geometry. So it is that we represent the continuity of historical time by a line cut by dates. This manner of representing history is so common we never think twice about it.

A spatialized and schematized time line of events in the life of Henry VIII.

Are we forced to choose between Russell and Bergson? Both have valid points to make. While I am sympathetic to Russell’s rationalism, I think that Bergson had a point in his critique of spatialization, but Bergson did not go far enough with this idea. Not only has there been a spatialization of time, there has also been a temporalization of space. We see this in the contemporary world in the prevalence of what I call transient spaces: spaces designed to be passed through, but not spaces in which to abide. Airports, laundromats, bus stations, and sidewalks are all transient spaces. The social consequences of industrialization that have forced us to abide by the regime of the calendar and the time clock, by the very fact of quantifying time into discrete regions and apportioning them according to a schedule, also force us to wait. The waiting room ought to be recognized as one of the central symbols of our age; the waiting room is par excellence the temporalization of space.

The modeling of real world phenomena by quantifiable means — be these phenomena spatial or temporal — involves at least two known unknowns: finite precision errors and finite dimension errors. The former (finite precision errors) occur when decimal expansions are arbitrarily cut off at, say, six decimal places or eight decimal places or whatever the model demands. Our finite computing systems cannot calculate real numbers with infinite decimal expansions, so the expansions must be terminated at some point, or we cannot even begin our attempt at modeling. The latter (finite dimension errors) occur when a continuum of possibilities must be broken down into discrete units. A rainbow is a continuous gradation of color, but for the sake of our conceptual schematism we break it down into red, orange, yellow, green, blue, indigo, and violet (some recitations leave out the indigo). The familiar rainbow color schematism involves finite dimension errors.
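The rainbow schematism can be written down directly. The wavelength boundaries below are rough conventions of mine (published figures vary), which is itself the point: the bands are finite dimension errors imposed on a continuum:

```python
# The continuous visible spectrum (wavelength in nanometres) broken
# into seven named bands. Boundary values are approximate conventions.
BANDS = [
    (450, "violet"),
    (475, "indigo"),   # the band some recitations leave out
    (495, "blue"),
    (570, "green"),
    (590, "yellow"),
    (620, "orange"),
]

def rainbow_color(wavelength_nm: float) -> str:
    """Collapse a point on a continuum into one of seven discrete names."""
    for upper_bound, name in BANDS:
        if wavelength_nm < upper_bound:
            return name
    return "red"
```

Two wavelengths a fraction of a nanometre apart may fall under different names, and infinitely many distinct wavelengths collapse into each name: the discreteness is ours, not the spectrum’s.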

Some familiar artefacts of life lend themselves to geometrical exposition, so that their spatialization contributes to our understanding of them.

Ordinary experience is overwhelmingly continuous and only occasionally quantized. The application of flow charts that map the epistemic spaces of our lives to matters of experience involves countless finite dimension errors. We accept these errors as the price of systematically extrapolating our knowledge, but we ought always to employ such extrapolations with caution. Just as the map is not the territory, so too the extrapolation is not the knowledge itself. Every flow chart and every algorithm embodies and reifies finite dimension errors. Since it is in their nature to do so, we do not think of these as errors, but should we confuse the map of life with the territory of life we would be compromised.

Instructions: an algorithm for the safe use of a chain saw. Can all aspects of life be similarly and as successfully schematized?

Conceptual schematisms are routines of the mind, and we all know how easily we slip into routines. A routine, whether a habit of body or of mind, is an algorithm for life, a finite decision procedure by which those individuals who would otherwise be without purpose determine their course of action and thus manage to fill the vacant hours of the clock. It is a recipe for life, to be sure, but it is not a recipe for anything other than mediocrity in life, and perhaps a guarantee of it.

A recipe is an algorithm for the production of food stuffs, a finite sequence of instructions intended to secure a predictable result.

The mapping of time as an epistemic space, as in a flow chart, is not without consequences. A distinctive feature of algorithms is their finitude. The mapping of life’s paths with discrete, finite alternatives limits options to a few pre-determined alternatives. Any individual of ordinary critical capacity, capable of thinking for themselves, will quickly reject any such attempt to limit their options in life, but many among us are unable to see beyond the roles embodied in society. Sartre called this the spirit of seriousness. Finite dimension errors represent the spirit of seriousness necessary to the practice of science.

There are a number of humorous Twitter algorithms floating around the internet at present, but behind the humor is the implied tension of reducing a human activity to a rule.

If, as I have argued elsewhere, freedom is a form of infinity, subordinating our lives to a finite, algorithmic regime not only results in an inauthentic life, it robs us of the freedom that makes us human. Without our freedom, we become automatons. And there seems to be an intuitive understanding of the danger that industrialized society poses in terms of regimenting life to the point of transforming life into a hollow, mechanistic exercise. We discussed this at some length in Fear of the Future.

. . . . .

Grand Strategy Annex