The Finality Fallacy

11 January 2014

Saturday


fallacy taxonomy

One of my pet peeves is when a matter is treated as though settled and final when there is in fact no finality at all in a given formulation or in the circumstances that the formulation seeks to capture. I am going to call this attribution of finality to matters remaining unsettled the “finality fallacy” (it would be more accurate to call this the “false finality fallacy,” but this is too long, and alliterative to boot). This is an informal rather than a formal fallacy, so an individual might be impeccable in their logic while still committing the finality fallacy. Another way to understand informal fallacies is that they concern the premisses of reasoning rather than the reasoning itself (another term for this is material fallacy), and it is one of the premisses of any finality fallacy that a given matter is closed to discussion and nothing more need be said.

Like many fallacies, the finality fallacy is related to recognized fallacies, although it is difficult to classify exactly. One could compare the finality fallacy to ignoratio elenchi or begging the question or the moralistic fallacy — all of these are present in some degree or other in the finality fallacy in its various manifestations.

Preparing for a cosmic journey — but one trip settles nothing.

Allow me to begin with an example from popular culture. Although it has been several years since I have seen the film Contact (based on the novel by Carl Sagan and loosely inspired by the life of Jill Tarter, famous for her work in SETI), I can remember how irritated I was by the ending, which treated the celestial journey made by the main character as a unique, one-off effort, despite the fact that an enormous apparatus was built to make the journey possible. If I had made the film I would have finished with a waiting line of people queued up to use the machine next, to make it clear that nothing is finished by the fact of a disputed first journey.

It is routine for films to end with a false sense of finality, as filmmakers assume that the audience requires resolution, or “closure.” We hear a lot about closure, but it is rare to see a clear definition of what exactly constitutes closure. Perhaps it is the desire for closure, generally speaking, that is the primary motivation for the finality fallacy. When the psychological need for closure leaks over into an intellectual need for closure, we find rationalizations of a false finality; perhaps it would be better to call the finality fallacy a cognitive bias rather than a fallacy, or perhaps this is the point at which material fallacy overlaps with cognitive bias.

Closure sign

The cultivation of a false finality is also prevalent among contemporary Marxists, especially those who focus on Marx’s economic doctrines rather than his wider social and philosophical critique. Marx’s economics was already antiquated by the time he published Das Kapital, but because of Marx’s influence, and because of the ongoing revolutionary tradition that rightly claims Marx as a founding father, Marx’s dictates on the labor theory of value are taken as final by Marxists, who must now, more than 150 years later, pretend as though no advances had been made in economics in the meantime. Strangely, this attitude is also taken for granted among ideological foes of Darwin, who, again more than 150 years later, continue to raise the same objections as though nothing had happened in biology since before 1859. This carefully studied ignorance takes a particular development in intellectual history and treats it as final, as the last word, as definitive, as Gospel.

Wherever there is a dogma, there is a finality fallacy waiting to be committed when the adherents of the dogma in question treat that dogma as final and must thereafter perpetuate the ruse that nothing essential changes after the dogma establishes the last word. In Islam, we have the notorious historical development of ‘closing the gate of ijtihad’ — ijtihad being individual rational inquiry. This is now a contested idea — i.e., whether there ever was a closing of the gate of ijtihad — as there seems to have been no official proclamation or definitive text, but there can be no question that the idea of closing the gate of ijtihad played an important role in Islamic civilization in discouraging inquiry and independent thought (what we would today call a “chilling” effect, much like the condemnations of 1277 in Western history).

Bertrand Russell evinced a certain irritation when his eponymous paradox was dismissed or treated as final before any satisfactory treatment had been formulated.

Bertrand Russell evinced an obvious irritation and impatience with the response to his paradox, which reveals an attitude not unlike my impatience with false finality:

“Poincaré, who disliked mathematical logic and had accused it of being sterile, exclaimed with glee, ‘it is no longer sterile, it begets contradiction’. This was all very well, but it did nothing towards the solution of the problem. Some other mathematicians, who disapproved of Georg Cantor, adopted the March Hare’s solution: ‘I’m tired of this. Let’s change the subject.’ This, also, appeared to me inadequate.”

Bertrand Russell, My Philosophical Development, Chapter VII, “Principia Mathematica: Philosophical Aspects”

Russell knew that nothing was settled by dismissing mathematical logic, as Poincaré did, or by simply changing the subject, as others were content to do. But some were satisfied with these evasions; Russell would have none of it, and persisted until he had satisfied himself with a solution (his theory of types). Most mathematicians rejected Russell’s solution, and it was Zermelo’s axiomatization that ultimately became the consensus framework for employing set theory without running into the contradiction Russell had discovered.
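Since the paradox itself is doing the work here, it is worth having it on the page. In modern set-builder notation (my gloss, not Russell’s own wording in his letter to Frege), unrestricted comprehension licenses the definition on the left, which immediately yields the contradiction on the right:

\[
R \;=\; \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \bigl( R \in R \;\longleftrightarrow\; R \notin R \bigr)
\]

No assignment of truth values satisfies the resulting biconditional. The theory of types blocks the definition by ruling the formula “x ∈ x” ill-formed, whereas Zermelo’s separation axiom permits only \(\{\, x \in A \mid x \notin x \,\}\) for a set \(A\) that is already given, from which one can prove merely that this subset is not a member of \(A\), and no contradiction follows.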

Now I will turn to the contemporary example that prompted this post — as I said above, the finality fallacy is a pet peeve, but this particular instance was the trigger for this particular post — in the work of contemporary philosopher John Gray.

John Gray is not a philosopher who writes intentionally inflammatory pieces in order to grab headlines. He regularly has short essays on the BBC (I have commented on his A Point Of View: Leaving Gormenghast; his A Point of View: Two cheers for human rights also appeared on the BBC). He has written a sober book on Mill’s On Liberty, Mill on Liberty: A Defence, and a study of the thought of Isaiah Berlin (Isaiah Berlin) — in no sense radical topics for a contemporary philosopher.

Among Gray’s many books is The Immortalization Commission: Science and the Strange Quest to Cheat Death, in which we find the following:

“Echoing the Russian rocket scientist Konstantin Tsiolkovsky, there are some who think humans should escape the planet they have gutted by migrating into outer space. Happily, there is no prospect of the human animal extending its destructive career in this way. The cost of sending a single human being to another planet is prohibitive, and planets in the solar system are more inhospitable than the desolated Earth from which humans would be escaping.”

John Gray, The Immortalization Commission: Science and the Strange Quest to Cheat Death, New York: Farrar, Straus and Giroux, 2011, p. 212

There is a lot going on in this brief passage, and I would like to try to gloss some of its implicit content. Gray is here counting on his reader nodding along with him, since human beings have indeed had a destructive career on Earth, and I can easily imagine someone who agrees with this also agreeing that it would be undesirable for this destructive career to be extended beyond the Earth. Gray also throws in a sense of gross irresponsibility by speaking of human beings having “gutted” the planet, and presumably moving on to “gut” the next one, with the clear implication that this would be worse than arresting the destructive career of human beings on their homeworld. Then Gray moves on to the expense of space travel at the present moment and the inhospitableness of other planets in our solar system. He treats the present expense of space travel, and the assumption that human beings must live on the surface of a planet, as though they were final; more importantly, he does so in a moral context that is intended to give the impression that any attempt to go beyond the Earth is unspeakable folly and morally disastrous.

Eunapius and Philostratus

This may sound like a stretch, but I am reminded of a passage from Eunapius (b. 347 A.D.), where Eunapius described the kind of atmosphere that made the condemnation of Socrates possible in Athens:

“…no one of all the Athenians, even though they were a democracy, would have ventured on that accusation and indictment of one whom all the Athenians regarded as a walking image of wisdom, had it not been that in the drunkenness, insanity, and license of the Dionysia and the night festival, when light laughter and careless and dangerous emotions are discovered among men, Aristophanes first introduced ridicule into their corrupted minds, and by setting dances upon the stage won over the audience to his views…”

Philostratus and Eunapius, Lives of the Sophists, Cambridge and London: Harvard, 1921, p. 381

Though a sober philosopher in his own right, Gray here trades upon the light laughter and careless and dangerous emotions when he engages in the ridicule of a human future beyond the Earth, which he implies is not only unlikely (if not impossible) but also morally wrong. But to demonstrate his intellectual sobriety he next turns serious and has this to say on the next page:

“The pursuit of immortality through science is only incidentally a project aiming to defeat death. At bottom it is an attempt to escape contingency and mystery. Contingency means humans will always be subject to fate and chance, mystery that they will always be surrounded by the unknowable. For many this state of affairs is intolerable, even unthinkable. Using advancing knowledge, they insist, the human animal can transcend the human condition.”

John Gray, The Immortalization Commission: Science and the Strange Quest to Cheat Death, New York: Farrar, Straus and Giroux, 2011, p. 213

Gray’s certainty and confidence of expression here mask the sheer absurdity of his claims; the expansion of a scientific civilization will by no means prejudice our relationship to contingency and mystery. On the contrary, it is a scientific understanding of the world that reminds us of contingency on levels that far exceed human capacity. The universe itself is a contingency, and all that it holds is contingency; science reminds us of this at every turn, and for the same reason, no matter how distantly human civilization travels beyond Earth, scientific mystery will be there to remind us of all that we still do not know.

But this is not my topic today (though it makes me angry to read it, and that is why I have quoted it). It is the previously quoted passage from Gray that truly bothers me because of its pose of finality in his pithy remarks about the human future beyond Earth. Gray is utterly dismissive of such prospects, and it is ultimately the dismissiveness that is the problem, not the view he holds.

I don’t mean to single out John Gray as especially guilty in this respect, though as a philosopher he is more guilty than others because he ought to know better, just as Russell knew better when it came to his paradox. In fact, there is some similarity here, because both mathematicians and philosophers were dismissive either of Russell’s paradox or of the formal methods that led to the paradox. We should not be dismissive. We need to confront these problems on their merits, and not turn them into a joke or an excuse to condemn human folly. We recall that a great many dismissed Cantor’s work as folly — after all, how can human beings know the infinite? — and Russell’s extension of Cantor’s work drew similar judgments. This, again, is closely connected to what we are talking about here, because the idea that human beings, finite as they are, can never know anything of the infinite (i.e., human beings cannot escape their legacy of intellectual finitude) is closely related to the idea that human beings can never escape their biological legacy of finitude, which is the topic of Gray’s book.

Hermann Weyl said that we live in an open world.

Anyone who views the world as an ongoing process of natural history, as I do, must see it as an open world. That the world is open, that it is neither closed nor final, neither finished nor complete, means that unprecedented events occur and that there always remains the possibility of evolution, by which we transcend a previous form of being and attain to a new form of being. The world’s openness is an idea that Hermann Weyl took as the title of three lectures from 1932, which end on this note:

“We reject the thesis of the categorical finiteness of man, both in the atheistic form of obdurate finiteness which is so alluringly represented today in Germany by the Freiburg philosopher Heidegger, and in the theistic, specifically Lutheran-Protestant form, where it serves as a background for the violent drama of contrition, revelation, and grace. On the contrary, mind is freedom within the limitations of existence; it is open toward the infinite. Indeed, God as the completed infinite cannot and will not be comprehended by it; neither can God penetrate into man by revelation, nor man penetrate to him by mystical perception. The completed infinite we can only represent in symbols. From this relationship every creative act of man receives its deep consecration and dignity. But only in mathematics and physics, as far as I can see, has symbolical-theoretical construction acquired sufficient solidity to be convincing for everyone whose mind is open to these sciences.”

Hermann Weyl, Mind and Nature: Selected Writings on Philosophy, Mathematics, and Physics, edited by Peter Pesic, Princeton University Press, 2009, Chapter 4, “The Open World: Three Lectures on the Metaphysical Implications of Science,” 1932

Weyl gave his own peculiar theological and constructivist spin to the conception of an open world — Weyl, in fact, represents one of those mathematicians “who disapproved of Georg Cantor” of whom Bertrand Russell spoke in the passage quoted above — but in the main I am in agreement with Weyl, and Weyl and Russell could have agreed on the openness of the world. To commit the finality fallacy is to presume some aspect of the world closed, and if the world is indeed open, it is a fallacy to represent it as being closed.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

The Moral Horror Fallacy

Saturday


It is difficult to find an authentic expression of horror, due to its close resemblance to both fear and disgust, but one readily recognizes horror when one sees it.

In several posts I have referred to moral horror and the power of moral horror to shape our lives and even to shape our history and our civilization (cf., e.g., Cosmic Hubris or Cosmic Humility?, Addendum on the Avoidance of Moral Horror, and Against Natural History, Right and Left). Being horrified on a uniquely moral level is a sui generis experience that cannot be reduced to any other experience, or any other kind of experience. Thus the experience of moral horror must not be denied (which would constitute an instance of failing to do justice to our intuitions), but at the same time it cannot be uncritically accepted as definitive of the moral life of humanity.

Our moral intuitions tell us what is right and wrong, but they do not tell us what is or is not (i.e., what exists or what does not exist). This is the upshot of the is-ought distinction, which, like moral horror, must not be taken as an absolute principle, even if it is a rough and ready guide in our thinking. It is perfectly consistent, if discomfiting, to acknowledge the moral horrors of the world as horrors without denying that they exist. Sometimes the claim is made that the world itself is a moral horror. Joseph Campbell attributes this view to Schopenhauer, for whom the world is something that never should have been.

The horrors of the world are a central theme of mythology, but they are also to be found in science. There is a famous quote from Darwin that illustrates the acknowledgement of moral horror:

“There seems to me too much misery in the world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice.”

Letter from Charles Darwin to Asa Gray, 22 May 1860

This quote from Darwin underlines another point repeatedly made by Joseph Campbell: that different individuals and different societies draw different lessons from the same world. For some, the sufferings of the world constitute an affirmation of divinity, while for Darwin and others, the sufferings of the world constitute a denial of divinity. That being said, it is not the point I would like to make today.

Far more common than the acceptance of the world’s moral horrors as they are is the denial of moral horrors, and especially the denial that moral horrors will occur in the future. On one level, a pragmatic level, we like to believe that we have learned our lessons from the horrors of our past, and that we will not repeat them precisely because we have perpetrated horrors in the past and came to realize that they were horrors.

To insist that moral horrors can’t happen because it would offend our sensibilities to acknowledge such a moral horror is a fallacy. Specifically, the moral horror fallacy is a special case of the argumentum ad baculum (argument to the cudgel or appeal to the stick), which is in turn a special case of the argumentum ad consequentiam (appeal to consequences).

Here is one way to formulate the fallacy:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, such-and-such will not take place.

For “such-and-such” you can substitute “transhumanism” or “nuclear war” or “human extinction” and so on. The inference is fallacious only when the shift is made from is to ought or from ought to is. If we confine our inference exclusively either to what is or to what ought to be, we do not have a fallacy. For example:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, we must not allow such-and-such to take place.

…is not fallacious. It is, rather, a moral imperative. If you do not want a moral horror to occur, then you must not allow it to occur. This is what Kant called a hypothetical imperative. This is a formulation entirely in terms of what ought to be. We can also formulate this in terms of what is:

Such-and-such constitutes a moral horror,
Moral horrors do not occur,
Therefore, such-and-such does not occur.

This is a valid inference, but it is not sound. That is to say, the problem is not a formal fallacy but a material fallacy. Moral horrors do, in fact, occur, so the premise stating that moral horrors do not occur is a false premise, and the conclusion inherits no support from the argument. (Anyone who denies that moral horrors do, in fact, take place is, in effect, arguing that this inference is sound.)
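To make the contrast explicit, the three patterns above can be compressed into a rough deontic shorthand of my own (it is not part of the original formulation), writing H(s) for “s is a moral horror” and O p for “it ought to be the case that p”:

\[
\begin{aligned}
H(s),\ \forall q\,\bigl(H(q) \rightarrow O\,\neg q\bigr) \;&\not\vdash\; \neg s
  && \text{(the moral horror fallacy: from ought to is)} \\
H(s),\ \forall q\,\bigl(H(q) \rightarrow O\,\neg q\bigr) \;&\vdash\; O\,\neg s
  && \text{(the imperative form: ought throughout)} \\
H(s),\ \forall q\,\bigl(H(q) \rightarrow \neg q\bigr) \;&\vdash\; \neg s
  && \text{(valid but unsound: the universal premise is false)}
\end{aligned}
\]

Only the first line crosses the is/ought divide; the second stays entirely on the side of ought, and the third stays entirely on the side of is, which is why its defect is material rather than formal.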

Moral horrors can and do happen. They have been visited upon us numerous times. After the Holocaust everyone said “never again,” yet subsequent history has not spared us further genocides. Nor will it spare us further genocides and atrocities in the future. We cannot infer from our desire to be spared further genocides and atrocities that they will not come to pass.

More interesting than the fact that moral horrors continue to be perpetrated by the enlightened and technologically advanced human societies of the twenty-first century is the fact that the moral life of humanity evolves, and the moral horrors of the future, to which we look forward with fear and trembling, sometimes cease to be moral horrors by the time they are upon us.

Malthus famously argued that, because human population growth outstrips the production of food (Malthus was particularly concerned with human beings, but he held this to be a universal law affecting all life), humanity must end in misery or vice. By “misery” Malthus understood mass starvation — which I am sure most of us today would agree is misery — and by “vice” Malthus meant birth control. In other words, Malthus viewed birth control as a moral horror comparable to mass starvation. This is not a view that is widely held today.

A great many unprecedented events have occurred since Malthus wrote his Essay on the Principle of Population. The industrialization of agriculture not only provided the world with plenty of food for an unprecedented increase in human population, it did so while farming was reduced to a marginal sector of the economy. And in the meantime birth control has become commonplace — we speak of it today as an aspect of “reproductive rights” — and few regard it as a moral horror. However, in the midst of this moral change and abundance, starvation continues to be a problem, and perhaps even more of a moral horror because there is plenty of food in the world today. Where people are starving, it is only a matter of distribution, and this is primarily a matter of politics.

I think that in the coming decades and centuries there will be many developments that we today regard as moral horrors, but when we experience them they will not be quite as horrific as we thought. Take, for instance, transhumanism. Francis Fukuyama wrote a short essay in Foreign Policy magazine, Transhumanism, in which he identified transhumanism as the world’s most dangerous idea. While Fukuyama does not commit the moral horror fallacy in any explicit way, it is clear that he sees transhumanism as a moral horror. In fact, many do. But in the fullness of time, when our minds will have changed as much as our bodies, if not more, transhumanism is not likely to appear so horrific.

On the other hand, as I noted above, we will continue to experience moral horrors of unprecedented kinds, and probably also on an unprecedented scope and scale. With the human population at seven billion and climbing, our civilization may well experience wars and diseases and famines that kill billions even while civilization itself continues despite such depredations.

We should, then, be prepared for moral horrors — for some that are truly horrific, and others that turn out to be less than horrific once they are upon us. What we should not try to do is to infer from our desires and preferences in the present what must be or what will be. And the good news in all of this is that we have the power to change future events, to make the moral horrors that occur less horrific than they might have been, and to prepare ourselves intellectually to accept change that might have, once upon a time, been considered a moral horror.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

Fallacies: Past, Present, Future

Tuesday


Not long ago in The Prescriptive Fallacy I mentioned the obvious symmetry of the naturalistic fallacy (inferring “ought” from “is”) and the moralistic fallacy (inferring “is” from “ought”) and then went on to formulate several additional fallacies, as follows:

The Prescriptive Fallacy — the invalid inference from ought to will be
The Progressivist Fallacy — the invalid inference from will be to ought
The Golden Age Fallacy — the invalid inference from ought to was
The Primitivist Fallacy — the invalid inference from was to ought

The first two are concerned with the relationship between the future and what ought to be, while the second two are concerned with the relationship between the past and what ought to be.
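The four fallacies just listed, together with the naturalistic and moralistic fallacies from which they were generalized, can be summarized in a simple shorthand of my own (it is not used in the original posts), writing O p for “it ought to be that p,” F p for “it will be that p,” and P p for “it was that p”:

\[
\begin{aligned}
\text{Naturalistic:} &\quad p \;\not\vdash\; O\,p
  &\qquad \text{Moralistic:} &\quad O\,p \;\not\vdash\; p \\
\text{Prescriptive:} &\quad O\,p \;\not\vdash\; F\,p
  &\qquad \text{Progressivist:} &\quad F\,p \;\not\vdash\; O\,p \\
\text{Golden Age:} &\quad O\,p \;\not\vdash\; P\,p
  &\qquad \text{Primitivist:} &\quad P\,p \;\not\vdash\; O\,p
\end{aligned}
\]

Each of the composite fallacies named below simply fuses the two halves of one row into a single two-way conflation.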

While we can clearly make the fine distinctions that I drew in The Prescriptive Fallacy, when we consider these attitudes in detail we often find attitudes to the future mixed together so that there is no clear distinction between believing the future to be good because it is what will be, and believing the future will be what it will be because that is good. Similar attitudes are found in respect to both the past and the present.

Recognizing the common nexus of the prescriptive fallacy and the progressivist fallacy gives us a new fallacy, which I will call the Futurist Fallacy.

Recognizing the common nexus of the Golden Age fallacy and the Primitivist fallacy gives us a new fallacy that I will call the Nostalgic Fallacy.

Recognizing the common nexus of the naturalistic fallacy and the moralistic fallacy (when we literally take the “is” in these formulations in a temporal sense, so that it uniquely picks out the present in contradistinction to the past or the future) gives us a new fallacy that I will call the Presentist Fallacy.

Hegel is now notorious for having said “the real is the rational and the rational is the real.”

These complex fallacies, which result from projecting our wishes into the past or future while believing that the past or future simultaneously prescribes a norm, may in turn be compared to the famous Hegelian formulation — from the point of view of contemporary philosophers, one of the most “notorious” things Hegel wrote, and frequently used as a philosophical cautionary tale today — that the real is the rational and the rational is the real.

Volumes of commentary have been written on Hegel’s impenetrable aphorism, and there are many interpretations. The best interpretation I have heard comes from understanding the “real” as the genuine, in which case, once we make a distinction between genuine instances of a given thing and bogus instances of a given thing, we are saying something significant when we say that the genuine is the rational and the rational is the genuine. The bogus, in contrast, is not convertible with the rational.

However we interpret Hegel, it was part of his metaphysics that there is a mutual implication between reality and reason. Hegel obviously didn’t see this as a fallacy, and I can just as well imagine someone asserting that the convertibility of the future and the desirable, or of the past and the desirable, is no fallacy at all, but rather a philosophical thesis or an ideological position that can be defended.

It remains to be noted that our formulations here and in The Prescriptive Fallacy assume without further elaboration the legitimacy of the is/ought distinction. The is/ought distinction is widely recognized in contemporary thought, but we could just as well deny it and make a principle of the mutual implication of is and ought, as Hegel made a principle of the mutual implication of the real and the rational.

Quine’s influence on twentieth century Anglo-American philosophical thought was not least due to his argument against the synthetic/analytic distinction, which was, before Quine, almost as well established as the is/ought distinction. A few well chosen examples can usually call into question even the most seemingly reliable distinction. Quine’s quasi-scientism had the effect of strengthening the is/ought distinction, but it came at the cost of questioning the venerable synthetic/analytic distinction. One could just as well do away with the is/ought distinction, though this would likely come at the cost of some other venerable principle. It becomes, at bottom, a question of principle.

. . . . .

Fallacies and Cognitive Biases

An unnamed principle and an unnamed fallacy

The Truncation Principle

An Illustration of the Truncation Principle

The Gangster’s Fallacy

The Prescriptive Fallacy

The fallacy of state-like expectations

The Moral Horror Fallacy

The Finality Fallacy

Fallacies: Past, Present, Future

Confirmation Bias and Evolutionary Psychology

Metaphysical Fallacies

Metaphysical Biases

Pernicious Metaphysics

Metaphysical Fallacies Again

An Inquiry into Cognitive Bias by way of Memoir

The Appeal to Embargoed Evidence

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

. . . . .

The Prescriptive Fallacy

19 February 2011

Saturday


In my last post, Scientific Challenges to Over-Socialization, I summarized the naturalistic fallacy and the moralistic fallacy as follows:

While it was the turn-of-the-previous-century academic philosopher G. E. Moore who formulated what he called the naturalistic fallacy, it is only recently that the opposite number of the naturalistic fallacy has been formulated, and this is the moralistic fallacy. We can understand the naturalistic fallacy and the moralistic fallacy in terms of the is/ought distinction. The naturalistic fallacy makes an illegitimate inference from is to ought; the moralistic fallacy makes an equally illegitimate inference from ought to is. That is to say, naturalistic thought is vulnerable to concluding that what is, is right, while moralistic thought is vulnerable to concluding that what is right, is. Science taken up in an ideological or moralistic spirit, then, in contradistinction to a naturalistic spirit, is vulnerable to reading its aspirations and ideals into the world.

Since I wrote that (a few hours ago) I realized that there are a number of fallacies closely related to the naturalistic fallacy and the moralistic fallacy, also derived from the is/ought distinction, but moving beyond the present tense of the “is” to other temporalities — those of the future and of the past.

Fervent belief in eschatological hopes is often a consequence of committing the Prescriptive Fallacy.

The most obvious and prevalent fallacy of the kind I have in mind I will call the Prescriptive Fallacy. The Prescriptive Fallacy is the invalid inference from ought to will be. The same fallacy also appears in its logically equivalent form of an invalid inference from ought not to will not be. The Prescriptive Fallacy is, in short, the fallacy of attempting to prescribe what the future must be on the basis of what it ought to be. This is a hopeful fallacy, and we find hundreds of illustrations of it in ordinary experience when we encounter wishful thinking and eschatological hopes among those we meet (I say “among those we meet” because we certainly aren’t foolish enough to make this invalid inference).

The eschatological vision of the Technological Singularity proclaimed by Ray Kurzweil is a particularly obvious instantiation of the Prescriptive Fallacy in the contemporary world.

Let me provide an example of the Prescriptive Fallacy. When I was reading Kaczynski’s Industrial Society and its Future for my last post, I found that a section of this manifesto had been quoted in Kurzweil’s book The Age of Spiritual Machines. It was in this context that Bill Joy came across this passage from Kaczynski’s manifesto, and this was at least part of the motivation for Joy’s influential essay “Why the Future Doesn’t Need Us.”

Bill Joy was moved by Kurzweil's quotation from Kaczynski to write his influential dystopian essay on the dispensability of human beings in the future.

Kurzweil (whose Singularitarian vision has made it to the cover of Time magazine this week), in the earlier iteration of his book, quoted sections 172 through 174 from Kaczynski’s manifesto, and after quoting this dystopian passage on the subordination of human beings to the machines they have created — a perennial human anxiety, it would seem — Kurzweil goes on to comment as follows:

“Although [Kaczynski] makes a compelling case for the dangers and damages that have accompanied industrialization, his proposed vision is neither compelling nor feasible. After all, there is too little nature left to return to, and there are too many human beings. For better or worse, we’re stuck with technology.”

Ray Kurzweil, The Age of Spiritual Machines, Viking Press, 1999, p. 182

Kurzweil excels at inane happy-talk, and this is certainly a perfect example of it. He seems to imagine that a Kaczynski-esque renunciation of technology would be a peaceful process in which we would voluntarily quit our cities and move out into the countryside, roughly retaining both our population numbers and our quality of life. Once we realized that there are too many of us to do so, presumably we would meekly return to our cities and our technological way of life. From the Khmer Rouge’s attempt to enact just such a social vision in the 1970s, committing along the way one of the worst genocides in human history in pursuit of their goal of an ideal agrarian communism, we know that such a process would be attended by death and destruction, as has historically been the case with revolutions.

The Killing Fields of the Khmer Rouge were a consequence of their attempt to put into practice their utopian vision of agrarian communism. This vision came at a high cost, and any future attempts at turning back the clock can be predicted to be similarly catastrophic.

I myself treated this theme, although coming from an economic perspective, in my book Political Economy of Globalization, section 30:

The absolute numbers of contemporary populations are important in this connection because if an economic system fails and population numbers are sufficiently low, people can abandon their formal economy in favor of subsistence through proto-economic activity. However, once a certain population threshold has been passed, there simply isn’t room for the population to scatter from urban concentrations to resume a pastoral existence on the land. When there are more people than subsistence methods can support (even if the same number of persons can be comfortably supported by industrialized methods when the latter are fully functional), competition for scarce subsistence resources would lead to instability and violence. But after violence and starvation had reduced the absolute numbers, the survivors could return to subsistence once all the bodies had been buried. Needless to say, this approach to economic self-sufficiency is not one envied among either nations or peoples.

I don’t think Kaczynski was at all deluded about this process in the way that Kurzweil seems to be. In fact, Kaczynski’s willingness to turn to militancy and violence suggests a tolerance for violence in the spirit of “the end justifies the means.” For Kurzweil, the ought of a happy, comfortable future for everyone so completely triumphs over any other possibility, especially those possibilities that involve misery and suffering, that the ought he has in mind must inevitably come to pass, because the alternative is, for him, literally unthinkable. This is why I say that Kurzweil commits the Prescriptive Fallacy of invalidly inferring will be from ought.

Kaczynski's turn to militancy and violence represents a frank admission of the costs associated with utopian visionary schemes. In this respect, Kaczynski is a more honest and radical thinker than Kurzweil.

The mirror image of the Prescriptive Fallacy is the invalid inference from will be to ought. This I will call the Progressivist Fallacy. This is the fallacy committed by every enthusiastic futurist who has seen, at least in part, the changes that the future will bring, and who affirms that these changes are good because they are the future and because they will come what may. To commit the Progressivist Fallacy is to assert that change is good because it is change and because change is inevitable. Rational, discerning individuals know that not all change is for the better, but the Progressivist inference is based on a starry-eyed enthusiasm and not a rational judgment. I’m sure that if I read Kurzweil in more detail, or other contemporary futurists, I could find a great many illustrations of this fallacy, but for the moment I have no examples to cite.

The Golden Age Fallacy is based on an invalid inference from “ought” to “was.”

Yet another temporal-moral fallacy is what I will call the Golden Age Fallacy. This is parallel to the above Prescriptive and Progressivist fallacies, except projected into the past instead of the future. The Golden Age Fallacy is the invalid inference from ought to was. This is the fallacy committed by political conservatives and the nostalgic, who conclude that, since the past was better than the present (as we live in an age of decadence and decline), all good things and all things that ought to be were in fact instantiated in the past.

While the most obvious examples of the Golden Age Fallacy are found in our own times among those who imagine a lost, idyllic past, the Golden Age Fallacy represents a perennial human frame of mind. Prior to the advent of modernity in all its hurried insistence (and with its tendency to commit the Progressivist Fallacy), it was quite common in the past to believe in a perfect Golden Age before civilization. We find this in the Hellenistic rationalism of Plato, and we find it in the Zionistic prophecy of the Old Testament story of the Garden of Eden. Even today those who read Hermetic texts and believe that they are acquiring lost, ancient wisdom are re-enacting the pre-modern presumption that truth lies at the source of being, and not in the later manifestations of being. This is one form that the Golden Age Fallacy takes.

Finally, the mirror image of the Golden Age Fallacy is the invalid inference from was to ought. This I will call the Primitivist Fallacy, though it is quite closely related to the Golden Age Fallacy (the two are even more closely related than the Progressivist Fallacy and the Prescriptive Fallacy). Kaczynski, as referenced above, commits the Primitivist Fallacy; it is common among anarchists and back-to-the-land types. Discontent with the present causes many to look back for concrete examples of “better” or “happier” institutions (or the lack of institutions, in absolute primitivism), and so the inference comes to be made that what was, was good, and from this follows the imperative, among those who commit the Primitivist Fallacy, to attempt to re-instantiate the institutions of the past in the present. The Taliban and others who wish to return to the times of the Prophet and the community he founded in Medina commit the Primitivist Fallacy, as do those Muslims who look to a re-establishment of the Caliphate. Again, rational people know that older is not necessarily better, but those taken in by the fallacy are no longer able to reason with any degree of reliability.

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .

An unnamed principle and an unnamed fallacy

Tuesday


In earlier posts to this forum I have discussed the dissatisfaction that comes from introducing an idea before one has the right name for it. An appropriate name will immediately communicate the intuitive content of the idea to the reader, as when I wrote about the civilization of the hand in contradistinction to the civilization of the mind, after having already sketched the idea in a previous post.

Again I find myself in the position of wanting to write about something for which I don’t yet have the perfect intuitive name, and I have even had to name this post “an unnamed principle and an unnamed fallacy” because I can’t even think of a mediocre name for the principle and its related fallacy.

In yesterday’s Defunct Ideas I argued that new ideas are always emerging in history (though they aren’t always being lost), and it isn’t too difficult to come up with a new idea if one has the knack for it. But most new ideas are pretty run-of-the-mill. One can always build on past ideas and add another brick to the growing structure of human knowledge.

That being said, it is only occasionally, in the midst of a lot of ideas of the middling sort, that one comes up with a really good idea. It is even more rare that one comes up with a truly fundamental idea. Formulating a logical fallacy that has not been noticed to date, despite at least twenty-five hundred years of cataloging fallacies, would constitute a somewhat fundamental idea. As this is unlikely in the present context, the principle and the associated fallacy below have probably already been noticed and named by others long ago. If not, they should have been.

The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.

This unnamed principle is not the same as the principle of bivalence or the law of the excluded middle (tertium non datur), though any clear distinction depends, to a certain extent, upon them. This unnamed principle is also not to be confused with a simple denial of clear-cut distinctions. What I most want to highlight is that when someone points out that there are gray areas that seem to elude classification by any clear-cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of cases) for which the distinction isn’t in the least problematic.

Again: a distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
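The principle and its correlative fallacy can also be put in a bare first-order sketch (the notation is mine, offered only as a summary): write Clear_D(x) to mean that the distinction D cleanly decides the case x. The fallacy is then the blocked inference

\[
\exists x\, \neg \mathrm{Clear}_D(x) \;\not\vdash\; \forall x\, \neg \mathrm{Clear}_D(x)
\]

The existence of borderline cases is compatible with \(\exists y\, \mathrm{Clear}_D(y)\), and even with the clear cases forming the great majority; only the universal claim on the right would license discarding the distinction, and that claim does not follow from the existential claim on the left.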

. . . . .

signature

. . . . .

Grand Strategy Annex

. . . . .
