Transcanidism

6 May 2011

Friday


Back in 2004 Foreign Policy magazine invited a number of writers to pen short pieces on ideas that were destined for the dustbin of history. Among these contributions, Francis Fukuyama of “end of history” fame wrote a page about transhumanism. Now, not many people know what transhumanism is, so it is hard to view it as a threat, say, on a level with the Soviets during the Cold War, but that was the target that Fukuyama chose to dispose of. For me, this was a laugh-out-loud moment in the history of ideas, because Fukuyama essentially argued that transhumanism can’t or won’t happen because it poses nearly insuperable moral dilemmas for us. This would be a bit like arguing before the Second World War that the Holocaust couldn’t happen because of the moral implications of such a crime. Well, sheer horror never stopped human beings from doing anything. Or, rather, if it has been a barrier to some, it certainly has not been a barrier to all.

To give you some flavor as to exactly what transhumanism is, and to do so from a sympathetic source, I found a Transhumanist Declaration at the Humanity+ blog, which I reproduce below in its entirety:

1. Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.

2. We believe that humanity’s potential is still mostly unrealized. There are possible scenarios that lead to wonderful and exceedingly worthwhile enhanced human conditions.

3. We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress.

4. Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.

5. Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.

6. Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.

7. We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.

8. We favour allowing individuals wide personal choice over how they enable their lives. This includes use of techniques that may be developed to assist memory, concentration, and mental energy; life extension therapies; reproductive choice technologies; cryonics procedures; and many other possible human modification and enhancement technologies.

To this, the response of Francis Fukuyama is as follows:

“…we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence. This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism. But modifying that essence is the core of the transhumanist project. If we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess when compared to those left behind? If some move ahead, can anyone afford not to follow? These questions are troubling enough within rich, developed societies. Add in the implications for citizens of the world’s poorest countries — for whom biotechnology’s marvels likely will be out of reach — and the threat to the idea of equality becomes even more menacing.”

Sure, it’s menacing, and change is frightening. No argument there. But asking the questions that Fukuyama asks — and they are certainly legitimate and interesting questions — is not going to spare us the moral nightmare (if not moral horror) of actually having to find a way to go on living despite menacing developments. And moral horror changes over time. When Malthus said that humanity would have to choose between misery and vice, the vice that horrified him, and which was perhaps no less of a horror to contemplate than mass starvation, was birth control. Now it is Malthus himself who is viewed with horror, not the birth control that inspired Malthus with horror. Only crackpots today attach any social stigma to birth control, and the world goes on its way.

Firstly, I should say — Profess? Declare? Proclaim? — that I don’t in the slightest identify myself as a transhumanist. Like the technological singularitarians, to whom they are closely related, the transhumanists have some interesting ideas and a lot of predictions, but at the present moment transhumanism is as crackpot-ish as moral opposition to birth control. That doesn’t mean that it will always remain so, but only that it is not one of the world’s prominent evils (or even one of the world’s challenges) at the moment. We have much more to worry about when it comes to atrocities and genocide.

Why is transhumanism marginal at the present moment? Here we can return to Fukuyama, for the brief rant he penned against the transhumanists contains a salient and very true observation:

“…we have drawn a red line around the human being and said that it is sacrosanct.”

We have indeed done so. This is what philosophers call a “convention,” which in this context is not a bunch of beer-swilling salesmen staying together at a Holiday Inn, but a decision to adopt a certain standard, much like the metric system or English weights and measures, or indeed to adopt a particular way of thinking about the world. In my Variations on the Theme of Life I said the following about this particular convention:

“We have elaborately constructed conventional distinctions, embodied in law and social practices, that separate man from every other living thing, and so thorough is this contrived divide that even if no qualitative distinction in fact intervened between man and other living things, the distinction would remain absolute in virtue of the established conventions. But the system is imperfect, and breaks down upon close inspection, for just as all cultures construct the distinction between man and everything else that is not man, they construct it differently, and these different constructions cannot be honestly harmonized. Some animal species are deified, some are demonized, some are commodified, some are marginalized, and some are fetishized. The ideal unity of mankind, then, must be based either on dishonesty and dissimulation, or upon some as yet unsuspected human quality that can distinguish man without reference to cultural relativity.” (section 514)

There is another name for this convention, and that is speciesism. The idea that humanity belongs within a charmed circle is an ontological conception, but the convention to act as though this ontological principle were true (whether or not it is true) is the practical consequence of speciesism. As most people do not think abstractly about principles like this, the convention is likely to have a stronger hold on the mind than the principle, which, when stated as a principle in its explicit form, is likely to sound a bit odd and unfamiliar. But leave that aside for the moment.

It is the very speciesism that stands in the way of the technological development of human potential, keeping us within Fukuyama’s red line, isolated and insulated from the rest of life, that will ultimately facilitate the technological development of non-human species. And the perfection of these technologies of biological augmentation and modification in other species will foster an increasing temptation to apply this technology to human beings, despite whatever obstacles are raised, be they moral, legal, practical, or other. Even if that application is initially consummated in secrecy, we can be certain that the temptation will not be resisted forever.

I realized this today when I was thinking about the now widely publicized presence of a dog with the commando team tasked with the raid on Osama Bin Laden’s hideaway. This detail attracted a lot of attention, and Foreign Policy magazine presented the photo essay War Dog, which rapidly became the most viewed story on its website.

It is well known that even the most alert soldier on duty is not nearly as aware as a guard dog on duty, and when it comes to specialized tasks like sniffing out explosives or persons, dogs are superior to even the most advanced technology. Dogs are now trained and valued in the armed forces as never before, and it would be an obvious development to augment the capacities of guard dogs. A dog with better eyesight or a better nose would be a great asset, and a competitive advantage over non-augmented dogs. Most importantly, the barriers to doing so simply don’t exist, or don’t exist in the same way. We don’t surround dogs with the same red line that we draw around human beings, even if we should.

In short, we will see transcanidism before we see transhumanism, and the former will, in the fullness of time, be the slippery slope that leads to the latter. And, yes, I know that the slippery slope is a logical fallacy; it is also a psychological truth, and what we are really discussing here is the psychology of the red line. That red line changes over time, and it changes in response to changed conditions. The red line that Malthus drew around population control still exists for us today, but it exists in a very different way, and it is drawn in a different place and between different alternatives.

There will be red lines in transcanidism too, but not enough, and not sufficiently robust, to prevent the process from starting down the slippery slope. For example, an obvious extension of improving canine senses would be to improve a dog’s mind. I am certain that most people would be deeply uncomfortable with this. There will be laws passed. There will be attempts to enforce a red line. In the long term, however, that line will be crossed. And once we begin to augment the intelligence of dogs and other war animals (or perhaps once we begin to engineer specialized war animals), they might conceivably catch up with us, or, as in the vision of the technological singularity, exponentially surpass us.

The reader should be fully aware that I am fully aware that what I am writing here would be received as anathema by many and as horrific by some. It has become the custom to discuss certain technological developments that touch directly upon human life in the rhetoric of high moral indignation. This is not helpful. In fact, I take it to be counter-productive.

. . . . .
