Saturday


It is difficult to find an authentic expression of horror, due to its close resemblance to both fear and disgust, but one readily recognizes horror when one sees it.

In several posts I have referred to moral horror and the power of moral horror to shape our lives and even to shape our history and our civilization (cf., e.g., Cosmic Hubris or Cosmic Humility?, Addendum on the Avoidance of Moral Horror, and Against Natural History, Right and Left). Being horrified on a uniquely moral level is a sui generis experience that cannot be reduced to any other experience, or any other kind of experience. Thus the experience of moral horror must not be denied (which would constitute an instance of failing to do justice to our intuitions), but at the same time it cannot be uncritically accepted as definitive of the moral life of humanity.

Our moral intuitions tell us what is right and wrong, but they do not tell us what is or is not (i.e., what exists or what does not exist). This is the upshot of the is-ought distinction, which, like moral horror, must not be taken as an absolute principle, even if it is a rough and ready guide in our thinking. It is perfectly consistent, if discomfiting, to acknowledge that the moral horrors of the world are horrific without denying that they exist. Sometimes the claim is made that the world itself is a moral horror. Joseph Campbell attributes this view to Schopenhauer, saying that, according to Schopenhauer, the world is something that never should have been.

The horror of the world, a central theme of mythology, is also to be found in science. There is a famous quote from Darwin that illustrates the acknowledgement of moral horror:

“There seems to me too much misery in the world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice.”

Letter from Charles Darwin to Asa Gray, 22 May 1860

This quote from Darwin underlines another point repeatedly made by Joseph Campbell: that different individuals and different societies draw different lessons from the same world. For some, the sufferings of the world constitute an affirmation of divinity, while for Darwin and others, the sufferings of the world constitute a denial of divinity. That, however, is not the point I would like to make today.

Far more common than the acceptance of the world’s moral horrors as they are is the denial of moral horrors, and especially the denial that moral horrors will occur in the future. On one level, a pragmatic level, we like to believe that we have learned our lessons from the horrors of our past, and that we will not repeat them precisely because we have perpetrated horrors in the past and have come to realize that they were horrors.

To insist that moral horrors can’t happen because it would offend our sensibilities to acknowledge such a moral horror is a fallacy. Specifically, the moral horror fallacy is a special case of the argumentum ad baculum (argument to the cudgel or appeal to the stick), which is in turn a special case of the argumentum ad consequentiam (appeal to consequences).

Here is one way to formulate the fallacy:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, such-and-such will not take place.

For “such-and-such” you can substitute “transhumanism” or “nuclear war” or “human extinction” and so on. The inference is fallacious only when the shift is made from is to ought or from ought to is. If we confine our inference exclusively either to what is or what ought to be, we do not have a fallacy. For example:

Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, we must not allow such-and-such to take place.

…is not fallacious. It is, rather, a moral imperative. If you do not want a moral horror to occur, then you must not allow it to occur. This is what Kant called a hypothetical imperative, and it is formulated entirely in terms of what ought to be. We can also formulate this in terms of what is:

Such-and-such constitutes a moral horror,
Moral horrors do not occur,
Therefore, such-and-such does not occur.

This is a valid inference, but it is unsound. That is to say, it involves not a formal fallacy but a material fallacy. Moral horrors do, in fact, occur, so the premise stating that moral horrors do not occur is false, and the conclusion drawn from it is left without support. (If one denies that moral horrors do, in fact, take place, then one is arguing for the soundness of this argument.)
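
The contrast among these three forms can be made explicit in a rough deontic shorthand (my own gloss, offered only as a sketch, not as a standard formalization): let S stand for “such-and-such takes place,” M for “a moral horror takes place,” and O(p) for “it ought to be the case that p,” so that the first premise becomes S → M.

S → M, O(¬M), therefore ¬S: the moral horror fallacy, which is invalid, since the premises concern what ought to be while the conclusion concerns what is or will be.
S → M, O(¬M), therefore O(¬S): the moral imperative, in which no is/ought boundary is crossed; the inference goes through if the first premise is read as holding of necessity (such-and-such is by its nature a moral horror), so that the obligation to prevent M transfers to S.
S → M, ¬M, therefore ¬S: valid by modus tollens, but unsound, since the premise ¬M (“moral horrors do not take place”) is false.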

Moral horrors can and do happen. They have been visited upon us numerous times. After the Holocaust everyone said “never again,” yet subsequent history has not spared us further genocides. Nor will it spare us further genocides and atrocities in the future. We cannot infer from our desire to be spared further genocides and atrocities that they will not come to pass.

More interesting than the fact that moral horrors continue to be perpetrated by the enlightened and technologically advanced human societies of the twenty-first century is the fact that the moral life of humanity evolves, and the moral horrors of the future, to which we look forward with fear and trembling, sometimes cease to be moral horrors by the time they are upon us.

Malthus famously argued that, because human population growth outstrips the production of food (Malthus was particularly concerned with human beings, but he held this to be a universal law affecting all life), humanity must end in misery or vice. By “misery” Malthus understood mass starvation — which I am sure most of us today would agree is misery — and by “vice” Malthus meant birth control. In other words, Malthus viewed birth control as a moral horror comparable to mass starvation. This is not a view that is widely held today.

A great many unprecedented events have occurred since Malthus wrote his Essay on the Principle of Population. The industrialization of agriculture not only provided the world with plenty of food for an unprecedented increase in human population, it did so while farming was reduced to a marginal sector of the economy. And in the meantime birth control has become commonplace — we speak of it today as an aspect of “reproductive rights” — and few regard it as a moral horror. However, in the midst of this moral change and abundance, starvation continues to be a problem, and perhaps even more of a moral horror because there is plenty of food in the world today. Where people are starving, it is only a matter of distribution, and this is primarily a matter of politics.

I think that in the coming decades and centuries there will be many developments that we today regard as moral horrors, but when we experience them they will not be quite as horrific as we thought. Take, for instance, transhumanism. Francis Fukuyama wrote a short essay in Foreign Policy magazine, Transhumanism, in which he identified transhumanism as the world’s most dangerous idea. While Fukuyama does not commit the moral horror fallacy in any explicit way, it is clear that he sees transhumanism as a moral horror. In fact, many do. But in the fullness of time, when our minds will have changed as much as our bodies, if not more, transhumanism is not likely to appear so horrific.

On the other hand, as I noted above, we will continue to experience moral horrors of unprecedented kinds, and probably also on an unprecedented scope and scale. With the human population at seven billion and climbing, our civilization may well experience wars and diseases and famines that kill billions even while civilization itself continues despite such depredations.

We should, then, be prepared for moral horrors — for some that are truly horrific, and others that turn out to be less than horrific once they are upon us. What we should not try to do is to infer from our desires and preferences in the present what must be or what will be. And the good news in all of this is that we have the power to change future events, to make the moral horrors that occur less horrific than they might have been, and to prepare ourselves intellectually to accept change that might have, once upon a time, been considered a moral horror.

. . . . .

Tuesday


Not long ago in The Prescriptive Fallacy I mentioned the obvious symmetry of the naturalistic fallacy (inferring “ought” from “is”) and the moralistic fallacy (inferring “is” from “ought”) and then went on to formulate several additional fallacies, as follows:

The Prescriptive Fallacy — the invalid inference from ought to will be
The Progressivist Fallacy — the invalid inference from will be to ought
The Golden Age Fallacy — the invalid inference from ought to was
The Primitivist Fallacy — the invalid inference from was to ought

The first two are concerned with the relationship between the future and what ought to be, while the second two are concerned with the relationship between the past and what ought to be.

While we can clearly make the fine distinctions that I drew in The Prescriptive Fallacy, when we consider these attitudes in detail we often find attitudes to the future mixed together so that there is no clear distinction between believing the future to be good because it is what will be, and believing the future will be what it will be because that is good. Similar attitudes are found in respect to both the past and the present.

Recognizing the common nexus of the prescriptive fallacy and the progressivist fallacy gives us a new fallacy, which I will call the Futurist Fallacy.

Recognizing the common nexus of the Golden Age fallacy and the Primitivist fallacy gives us a new fallacy that I will call the Nostalgic Fallacy.

Recognizing the common nexus of the naturalistic fallacy and the moralistic fallacy (when we literally take the “is” in these formulations in a temporal sense, so that it uniquely picks out the present in contradistinction to the past or the future) gives us a new fallacy that I will call the Presentist Fallacy.
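
Setting the whole family side by side in a schematic shorthand (my own gloss, offered only as a summary and not as a formal analysis), where O(p) means “it ought to be that p” and Was(p), Is(p), and Will(p) mean that p was, is, or will be the case:

The Prescriptive Fallacy: from O(p), infer Will(p).
The Progressivist Fallacy: from Will(p), infer O(p).
The Golden Age Fallacy: from O(p), infer Was(p).
The Primitivist Fallacy: from Was(p), infer O(p).
The Futurist Fallacy: treat O(p) and Will(p) as mutually implicating, so that the first two inferences run together.
The Nostalgic Fallacy: treat O(p) and Was(p) as mutually implicating.
The Presentist Fallacy: treat O(p) and Is(p) as mutually implicating, which is the naturalistic and moralistic fallacies run together.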

Hegel is now notorious for having said “the real is the rational and the rational is the real.”

These complex fallacies, which result from projecting our wishes into the past or future and believing that the past or future simultaneously prescribes a norm, may be compared to the famous Hegelian formulation — from the point of view of contemporary philosophers, one of the most “notorious” things Hegel wrote, and one frequently used as a philosophical cautionary tale today — that the real is the rational and the rational is the real.

Volumes of commentary have been written on Hegel’s impenetrable aphorism, and there are many interpretations. The best interpretation I have heard comes from understanding the “real” as the genuine, in which case, once we make a distinction between genuine instances of a given thing and bogus instances of a given thing, we are saying something significant when we say that the genuine is the rational and the rational is the genuine. The bogus, in contrast, is not convertible with the rational.

However we interpret Hegel, it was part of his metaphysics that there is a mutual implication between reality and reason. Hegel obviously didn’t see this as a fallacy, and I can just as well imagine someone asserting that the convertibility of the future and the desirable, or of the past and the desirable, is no fallacy at all, but rather a philosophical thesis or an ideological position that can be defended.

It remains to be noted that our formulations here and in The Prescriptive Fallacy assume without further elaboration the legitimacy of the is/ought distinction. The is/ought distinction is widely recognized in contemporary thought, but we could just as well deny it and make a principle of the mutual implication of is and ought, as Hegel made a principle of the mutual implication of the real and the rational.

Quine’s influence on twentieth-century Anglo-American philosophical thought was not least due to his argument against the synthetic/analytic distinction, which was, before Quine, almost as well established as the is/ought distinction. A few well-chosen examples can usually call into question even the most seemingly reliable distinction. Quine’s quasi-scientism had the effect of strengthening the is/ought distinction, but it came at the cost of questioning the venerable synthetic/analytic distinction. One could just as well do away with the is/ought distinction, though this would likely come at the cost of some other venerable principle. It becomes, at bottom, a question of principle.

. . . . .

Fallacies and Cognitive Biases

An unnamed principle and an unnamed fallacy

The Truncation Principle

An Illustration of the Truncation Principle

The Gangster’s Fallacy

The Prescriptive Fallacy

The fallacy of state-like expectations

The Moral Horror Fallacy

The Finality Fallacy

Fallacies: Past, Present, Future

Confirmation Bias and Evolutionary Psychology

Metaphysical Fallacies

Metaphysical Biases

Pernicious Metaphysics

Metaphysical Fallacies Again

An Inquiry into Cognitive Bias by way of Memoir

The Appeal to Embargoed Evidence

. . . . .

The Prescriptive Fallacy

19 February 2011

Saturday


In my last post, Scientific Challenges to Over-Socialization, I summarized the naturalistic fallacy and the moralistic fallacy as follows:

While it was the turn-of-the-previous-century academic philosopher G. E. Moore who formulated what he called the naturalistic fallacy, it is only recently that the opposite number of the naturalistic fallacy has been formulated, and this is the moralistic fallacy. We can understand the naturalistic fallacy and the moralistic fallacy in terms of the is/ought distinction. The naturalistic fallacy makes an illegitimate inference from is to ought; the moralistic fallacy makes an equally illegitimate inference from ought to is. That is to say, naturalistic thought is vulnerable to concluding that what is, is right, while moralistic thought is vulnerable to concluding that what is right, is. Science taken up in an ideological or moralistic spirit, then, in contradistinction to a naturalistic spirit, is vulnerable to reading its aspirations and ideals into the world.

Since I wrote that (a few hours ago) I realized that there are a number of fallacies closely related to the naturalistic fallacy and the moralistic fallacy, also derived from the is/ought distinction, but moving beyond the present tense of the “is” to other temporalities — those of the future and of the past.

Fervent belief in eschatological hopes is often a consequence of committing the Prescriptive Fallacy.

The most obvious and prevalent fallacy of the kind I have in mind I will call the Prescriptive Fallacy. The Prescriptive Fallacy is the invalid inference from ought to will be. The same fallacy also appears in the corresponding negative form, as an invalid inference from ought not to will not be. The Prescriptive Fallacy is, in short, the fallacy of attempting to prescribe what the future must be on the basis of what it ought to be. This is a hopeful fallacy, and we find hundreds of illustrations of it in ordinary experience when we encounter wishful thinking and eschatological hopes among those we meet (I say “among those we meet” because we certainly aren’t foolish enough to make this invalid inference ourselves).

The eschatological vision of the Technological Singularity proclaimed by Ray Kurzweil is a particularly obvious instantiation of the Prescriptive Fallacy in the contemporary world.

Let me provide an example of the Prescriptive Fallacy. When I was reading Kaczynski’s Industrial Society and its Future for my last post, I found that a section of this manifesto had been quoted in Kurzweil’s book The Age of Spiritual Machines. It was in this context that Bill Joy came across this passage from Kaczynski’s manifesto, and this was at least part of the motivation for Joy’s influential essay “Why the Future Doesn’t Need Us.”

Bill Joy was moved by Kurzweil's quotation from Kaczynski to write his influential dystopian essay on the dispensability of human beings in the future.

Kurzweil (whose Singularitarian vision has made it to the cover of Time magazine this week), in the earlier iteration of his book, quoted sections 172 through 174 of Kaczynski’s manifesto, and after quoting this dystopian passage on the subordination of human beings to the machines they have created — a perennial human anxiety, it would seem — he goes on to comment as follows:

“Although [Kaczynski] makes a compelling case for the dangers and damages that have accompanied industrialization, his proposed vision is neither compelling nor feasible. After all, there is too little nature left to return to, and there are too many human beings. For better or worse, we’re stuck with technology.”

Ray Kurzweil, The Age of Spiritual Machines, Viking Press, 1999, p. 182

Kurzweil excels at inane happy-talk, and this is certainly a perfect example of it. He seems to imagine that a Kaczynski-esque renunciation of technology would be a peaceful process in which we would voluntarily quit our cities and move out into the countryside, roughly retaining both our population numbers and our quality of life. Once we realized that there are too many of us to do so, presumably we would meekly return to our cities and our technological way of life. From the Khmer Rouge’s attempt to enact just such a social vision in the 1970s, committing one of the worst genocides in human history on the way to their goal of an ideal agrarian communism, we know that such a process would be attended by death and destruction, as has historically been the case with revolutions.

The Killing Fields of the Khmer Rouge were a consequence of their attempt to put into practice their utopian vision of agrarian communism. This vision came at a high cost, and any future attempts at turning back the clock can be predicted to be similarly catastrophic.

I myself treated this theme, although coming from an economic perspective, in my book Political Economy of Globalization, section 30:

The absolute numbers of contemporary populations are important in this connection because if an economic system fails and population numbers are sufficiently low, people can abandon their formal economy in favor of subsistence through proto-economic activity. However, once a certain population threshold has been passed, there simply isn’t room for the population to scatter from urban concentrations to resume a pastoral existence on the land. When there are more people than subsistence methods can support (even if the same number of persons can be comfortably supported by industrialized methods when the latter are fully functional), competition for scarce subsistence resources would lead to instability and violence. But after violence and starvation had reduced the absolute numbers, the survivors could return to subsistence once all the bodies had been buried. Needless to say, this approach to economic self-sufficiency is not one envied among either nations or peoples.

I don’t think Kaczynski was at all deluded about this process in the way that Kurzweil seems to be. In fact, Kaczynski’s willingness to turn to militancy and violence suggests a tolerance for violence in the spirit of “the end justifies the means.” For Kurzweil, the ought of a happy, comfortable future for everyone so completely triumphs over any other possibility, especially those possibilities that involve misery and suffering, that the ought he has in mind must inevitably come to pass, because the alternative is, for him, literally unthinkable. This is why I say that Kurzweil commits the Prescriptive Fallacy of invalidly inferring will be from ought.

Kaczynski's turn to militancy and violence represents a frank admission of the costs associated with utopian visionary schemes. In this respect, Kaczynski is a more honest and radical thinker than Kurzweil.

The mirror image of the Prescriptive Fallacy is the invalid inference from will be to ought. This I will call the Progressivist Fallacy. It is the fallacy committed by every enthusiastic futurist who has seen, at least in part, the changes that the future will bring, and who affirms that these changes are good because they are the future and because they will arrive come what may. To commit the Progressivist Fallacy is to assert that change is good because it is change and because change is inevitable. Rational, discerning individuals know that not all change is for the better, but the Progressivist inference is based on a starry-eyed enthusiasm and not on a rational judgment. I’m sure that if I read Kurzweil in more detail, or other contemporary futurists, I could find a great many illustrations of this fallacy, but for the moment I have no examples to cite.

The Golden Age Fallacy is based on an invalid inference from “ought” to “was.”

Yet another temporal-moral fallacy is what I will call the Golden Age Fallacy. This is parallel to the Prescriptive and Progressivist fallacies above, except projected into the past instead of the future. The Golden Age Fallacy is the invalid inference from ought to was. This is the fallacy committed by political conservatives and the nostalgic, who conclude that, since the past was better than the present and we live in an age of decadence and decline, all good things and all things that ought to be were in fact instantiated in the past.

While the most obvious examples of the Golden Age Fallacy are found in our own times among those who imagine a lost, idyllic past, the Golden Age Fallacy represents a perennial human frame of mind. Prior to the advent of modernity in all its hurried insistence (and with its tendency to commit the Progressivist Fallacy), it was quite common to believe in a perfect Golden Age before civilization. We find this in the Hellenic rationalism of Plato, and we find it in the Old Testament story of the Garden of Eden. Even today those who read Hermetic texts and believe that they are acquiring lost, ancient wisdom are re-enacting the pre-modern presumption that truth lies at the source of being, and not in the later manifestations of being. This is one form that the Golden Age Fallacy takes.

Finally, the mirror image of the Golden Age Fallacy is the invalid inference from was to ought. This I will call the Primitivist Fallacy, though it is quite closely related to the Golden Age Fallacy (the two are even more closely related than the Progressivist Fallacy and the Prescriptive Fallacy). Kaczynski, as referenced above, commits the Primitivist Fallacy; it is common among anarchists and back-to-the-land types. Discontent with the present causes many to look back for concrete examples of “better” or “happier” institutions (or the lack of institutions, in absolute primitivism), and so the inference comes to be made that what was, was good, and from this follows the imperative, among those who commit the Primitivist Fallacy, to attempt to re-instantiate the institutions of the past in the present. The Taliban, and others who wish to return to the time of the Prophet and the community he founded in Medina, commit the Primitivist Fallacy, as do those Muslims who look to a re-establishment of the Caliphate. Again, rational people know that older is not necessarily better, but those taken in by the fallacy are no longer able to reason with any degree of reliability.

. . . . .

Tuesday


In earlier posts to this forum I have discussed the dissatisfaction that comes from introducing an idea before one has the right name for it. An appropriate name will immediately communicate the intuitive content of the idea to the reader, as when I wrote about the civilization of the hand in contradistinction to the civilization of the mind, after having already sketched the idea in a previous post.

Again I find myself in the position of wanting to write about something for which I don’t yet have the perfect intuitive name, and I have even had to name this post “an unnamed principle and an unnamed fallacy” because I can’t even think of a mediocre name for the principle and its related fallacy.

In yesterday’s Defunct Ideas I argued that new ideas are always emerging in history (even as old ideas become defunct and are lost), and it isn’t too difficult to come up with a new idea if one has the knack for it. But most new ideas are pretty run-of-the-mill. One can always build on past ideas and add another brick to the growing structure of human knowledge.

That being said, it is only occasionally, in the midst of a lot of ideas of the middling sort, that one comes up with a really good idea. It is even rarer that one comes up with a truly fundamental idea. Formulating a logical fallacy that has not been noticed to date, despite at least twenty-five hundred years of cataloging fallacies, would constitute a somewhat fundamental idea. As this is unlikely in the present context, the principle and the associated fallacy below have probably already been noticed and named by others long ago. If not, they should have been.

The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.

This unnamed principle is not the same as the principle of bivalence or the law of the excluded middle (tertium non datur), though any clear distinction depends, to a certain extent, upon them. This unnamed principle is also not to be confused with a simple denial of clear-cut distinctions. What I most want to highlight is that when someone points out that there are gray areas that seem to elude classification by any clear-cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of the cases) for which the distinction isn’t in the least problematic.

Again: a distinction that admits of problematic cases, cases not clearly falling on one side of the distinction or the other, may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
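
In rough quantifier notation (a sketch of my own, offered only to make the scope of the claim explicit), let D be a distinction over some domain and let Borderline-D(x) mean that x is a case D does not clearly decide:

The principle: for a given distinction D, ∃x Borderline-D(x) and ∃y ¬Borderline-D(y) can both be true at once.
The fallacy: inferring from ∃x Borderline-D(x) that D decides nothing, as though the existence of gray areas entailed ∀x Borderline-D(x).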

. . . . .
