20 September 2013
What happens when an individual achieves a new level of perspective taking? Is one perspective displaced by another? Is a new perspective added to existing perspectives?
Firstly, what do I mean by “perspective taking” in this context? In a couple of posts I’ve discussed perspective taking as a theme of developmental psychology — The Hierarchy of Perspective Taking and The Overview Effect as Perspective Taking — but I haven’t tried to rigorously define it. The short version is that perspective taking is putting yourself in the position of the other and seeing the world from the other’s point of view (or perspective, hence perspective taking). This is, as I have previously remarked, one of the most basic thought experiments in moral philosophy.
Kant implicitly appeals to perspective taking in his categorical imperative, in which he asserts that one ought to act as though the principle that guides one’s actions should be made a universal law. In other words, you ask yourself, “What would the consequences be if everyone acted as I am acting?” This, in turn, supposes that one can place oneself into the position of the other and imagine how the principle of your action would be interpreted and put into practice by others.
It could also be argued that Kant’s categorical imperative is implicit in the regime of popular sovereignty, which has its origins in Kant’s time but which only in the twentieth century became the universally acclaimed normative standard for political entities. Everyone knows that their individual vote does not count, because if you subtracted your vote from the total in any election, it would not affect the outcome. Nevertheless, if everyone came to this conclusion and acted upon it, no one would vote and democracy would collapse.
To return to perspective taking, there are much more sophisticated formulations than the off-the-cuff version I have given above. There is an interesting paper that discusses the conception of perspective taking — THE DEVELOPMENT OF SOCIO-MORAL MEANING MAKING: DOMAINS, CATEGORIES, AND PERSPECTIVE-TAKING — the authors of which, Monika Keller and Wolfgang Edelstein, converge on this definition:
“…perspective-taking is taken to represent the formal structure of coordination of the perspectives of self and other as they relate to the different categories of people’s naive theories of action. The differentiation and coordination of the categories of action and the self-reflexive structure of this process are basic to those processes of development and socialization in which children come to reconstruct the meaning of social interaction in terms of both what is the case and what ought to be the case in terms of morally responsible action. In order to achieve the task of establishing consent and mutually acceptable lines of action in situations of conflicting claims and expectations, a person has to take into account the intersubjective aspects of the situation that represent the generalizable features, as well as the subjective aspects that represent the viewpoints of the persons involved in the situation. In its fully developed form, this complex process of regulation and interaction calls for the existence and operation of complex socio-moral knowledge structures and a concept of self as a morally responsible agent. The ability to differentiate and coordinate the perspectives of self and other thus is a necessary condition both in the development of socio-moral meaning making and in the actual process of solving situations of conflicting claims.”
Monika Keller and Wolfgang Edelstein, “The Development of Socio-Moral Meaning Making: Domains, Categories, and Perspective-Taking,” in W. M. Kurtines & J. L. Gewirtz (Eds.), Handbook of Moral Behavior and Development: Vol. 2. Research (pp. 89-114). Hillsdale, NJ: Erlbaum, 1991.
I recommend the entire paper, which discusses, among other matters, the attempts by others to formulate an adequate explication of perspective taking. But I have an ulterior motive for this discussion of perspective taking. The real reason I have engaged in this inquiry about perspective taking is because of my recent posts about the overview effect — The Epistemic Overview Effect and The Overview Effect as Perspective Taking — in which I treat the overview effect as a visceral trigger for perspective taking on a global (and even trans-planetary) scale.
Thinking about the overview effect as perspective taking, I considered the possibility that taking a new global or even trans-planetary perspective might involve either dispensing with a former perspective in order to replace it with a novel perspective (which I will call the displacement model), or adding a new perspective to already existing perspectives (which I will call the augmentation model). (And here I want to cite Siggi Becker and Mark Lambertz, who commented on my earlier overview effect post on Facebook, and spurred me into thinking about what it means for one to achieve a new perspective on the world.)
For cognitive scientists and sociologists, perspective taking is cumulative, especially in the case of moral development. There is an entire literature devoted to Robert L. Selman’s five stages of perspective taking (which are very much influenced by Piaget) and Lawrence Kohlberg’s six stages of moral development (three levels — pre-conventional, conventional, and post-conventional — each divided into two stages).
There are, however, definite limits to this Piagetian cognitive basis for the development of the moral life of the individual. Without some degree of empathy for the other, the whole cognitive approach to moral development falls apart, because one might systematically pursue the development of one’s perspective taking ability only in order to exploit and manipulate others more successfully, with a more effective cunning than that afforded by a purely egocentric approach to interaction with others. Thus we arrive at the Schopenhauerian conception that compassion is the basis of morality.
Max Scheler systematically critiqued this classic Schopenhauerian position in his book The Nature of Sympathy. Scheler concluded that compassion alone is insufficient for morality, thus undermining Schopenhauer’s position; yet while compassion may not be a sufficient condition for morality, it may still be a necessary condition. Perhaps it is compassion and perspective taking together that make morality possible. These philosophical issues have also been taken up in the spirit of social science by Carol Gilligan’s work on the ethics of care. I only touch on these issues here in passing, since any serious consideration of these works and their authors would require substantial exposition.
Perspective taking is central to Lawrence Kohlberg’s theory of moral development, and what Kohlberg calls “disequilibrium” (which serves as a spur to moral development) might also be called “disorientation,” or, more specifically, “moral disorientation.” And it is disorienting when one achieves a new perspective, especially when one does so suddenly, as in the case of the visceral trigger provided by the overview effect. Plato describes such a disorienting experience beautifully in his famous allegory of the cave — the philosopher is twice disoriented, initially as he ascends from the cave of shadows up into the real world, and again when he descends into the cave of shadows in order to attempt to enlighten those still chained below. A powerful experience leaves one feeling disoriented, and much of this disorientation is due to the collapse of a familiar system of thought that gave one a sense of one’s place in the world, and its replacement by a new system of thought that is not yet familiar.
If we focus too much on the cumulative and continuous aspects of perspective taking, on the assumption that each level of development must build upon the immediately previous level, we may lose sight of the disruptive nature of perspective taking — and moral development is not a primrose path. As individuals confront moral dilemmas they are forced to consider difficult questions and sometimes to give hard answers to them. This is central to the moral growth of the person, and it is often quite uncomfortable, attended by anxiety and inner conflict. One often feels that one must fight one’s way through a problem in order to surmount it. This is very much like Wittgenstein’s description of throwing away the ladder once one has climbed up it.
If the overview effect constitutes a new level of perspective taking, and if perspective taking is central to moral development, then the perspective taking of the overview effect constitutes a stage in human moral development — and it constitutes that stage of moral development that coincides with civilization’s expansion beyond terrestrial boundaries.
. . . . .
. . . . .
. . . . .
18 September 2013
In my previous post on The Epistemic Overview Effect I now realize that I failed to make an obvious connection with some earlier threads of my thought. Specifically, I failed to see or to develop the connection between the overview effect and what some developmental psychologists call “perspective taking.”
In The Hierarchy of Perspective Taking I discussed the developmental psychology of Jean Piaget, Erik Erikson, and Lev Vygotsky. In that post I attempted to show how perspective taking transcends the life of the individual and applies as well to entire civilizations — a distinction that might be called that between ontogenetic perspective taking and phylogenetic perspective taking. There I wrote:
“Piagetian cognitive development in terms of perspective taking can easily be extended throughout the human lifespan (and beyond) by the observation that there are always new perspectives to take. As civilization develops and grows, becoming ever more comprehensive as it does so, the human beings who constitute this civilization are forced to formulate always more comprehensive conceptions in order to take the measure of the world being progressively revealed to us. Each new idea that takes the measure of the world at a greater order of magnitude presents the possibility of a new perspective on the world, and therefore the possibility of a new achievement in terms of perspective taking.”
Re-reading this passage in light of the overview effect — the view of the earth entire experienced by astronauts and cosmonauts, as well as the change in perspective that a few of these observers have had as a result of seeing the earth whole with their own eyes — I would now add to my exposition of a hierarchy of perspective taking that the expansion and extension of civilization not only produces new ideas and conceptions, but also new experiences. Technology makes it possible to experience aspects of the world directly that were impossible to experience prior to the advent of industrial-technological civilization.
The overview effect is a paradigmatic case of technologically-facilitated experience. While I could say that those who have so far been fortunate enough to experience the overview effect are “forced” by that experience to formulate new conceptions of the world (as I used this idiom of being “forced” previously), it would be better to say, as I put it more recently in The Epistemic Overview Effect, that the experience is a trigger that inspires an effort to formulate a conception of the world adequate to the experience.
While the overview effect itself is likely a powerful experience, merely the idea that others are experiencing an overview can itself be a powerful experience. This involves the most fundamental of all ethical thought experiments: the attempt to place ourselves in the position of the other, and so to experience the otherness of the other and the otherness of ourselves. When we believe that we have understood the other’s point of view, it is not unusual to say, “I can see your perspective.”
Perspective taking in the form of taking the perspective of the other is a key achievement in the development of an ethical perspective of the individual life. Some never achieve this level of insight, and some come to an adequate appreciation of the perspective of the other only late in life.
In the Swedish film My Life as a Dog there is a beautiful evocation of such ethical perspective taking in the life of a young boy, by way of the theme of the Russian space dog Laika, which recurs as a motif to which the young protagonist returns time and again as an example of perspective. Here are some of the voiceovers from the protagonist’s narration:
“And what about Laika, the space dog? They put her in the Sputnik and sent her into space. They attached wires to her heart and brain to see how she felt. I don’t think she felt too good. She spun around up there for five months until her doggy bag was empty. She starved to death. It’s important to have something like that to compare things to.”
“It’s strange how I can’t stop thinking about Laika. People shouldn’t think so much. ‘Time heals all wounds,’ Mrs. Arvidsson says. Mrs. Arvidsson says some wise things. You have to try to forget.”
“…I’ve been kinda lucky. I mean, compared to others. You have to compare, so you can get a little distance from things. Like Laika. She really must have seen things in perspective.”
Laika did indeed see things in perspective, and may well have experienced the overview effect before any human being. The young boy in My Life as a Dog understands this, intuiting Laika’s perspective, and is able to better judge his own station in life by comparing his situation to that of Laika.
As long as our industrial-technological civilization continues in its development (i.e., as long as it does not succumb to the existential risks of flawed realization or permanent stagnation), we individuals contextualized within this civilization can continue our own development, a development that will be facilitated by the technologies this civilization produces, which will give us new experiences, and these new experiences will afford us new perspectives on the world.
Recently there have been many news stories about Voyager-1 being the first human artifact to leave the solar system (cf. Voyager probe ‘leaves Solar System’ by Jonathan Amos, Science correspondent, BBC News). Meditations upon the achievement of Voyager-1 have taken the form of perspective taking on our solar system entire. We are inspired to contemplate our perspective on the world by imaginatively taking the point of view of Voyager-1. Some day a human being will travel as far as or farther than Voyager-1, and will look back and see our sun at a distance, as we once looked back and saw the earth for the first time at a distance.
Our technologically-facilitated perspective taking will not end there. There are grander views yet to contemplate, and grander conceptions of nature that will follow from a direct, visceral experience of these grander views. As wonderful as the Earth must appear from space, and as transformative as seeing this must be, further in the future there will be the possibility of flying far enough beyond the Milky Way that we will be able to turn around and look back at our home galaxy. Knowing it to be our home (and by that time having come to a kind of astronautical familiarity with the Earth, our solar system, and the Orion Spur of the Milky Way), we will be moved by the sight of our entire galaxy seen whole, in one glance of the eye, hanging suspended and seemingly motionless against the blackness of space unrelieved by stars — for the only companions to our galaxy from this extra-galactic point of view will be other galaxies, and this astonishing perspective may well spur us toward a yet more comprehensive, therefore more adequate, conception of the universe.
. . . . .
. . . . .
. . . . .
. . . . .
14 September 2013
The Overview Effect
The “overview effect” is so named for the view of the earth entire — an “overview” of the earth — enjoyed by astronauts and cosmonauts, as well as the change in perspective that a few of these privileged observers have had as a result of seeing the earth whole with their own eyes.
One of these astronauts, Edgar Mitchell, who flew on the Apollo 14 mission in 1971 and was the sixth human being to walk on the moon, has been instrumental in bringing attention to the overview effect, and has written a book about his experiences as an astronaut and how they affected his perception and perspective, The Way of the Explorer: An Apollo Astronaut’s Journey Through the Material and Mystical Worlds. A short film has been made about the overview effect, and an institution, The Overview Institute, has been established to study and to promote it.
Here is an extract from the declaration of The Overview Institute:
For more than four decades, astronauts from many cultures and backgrounds have been telling us that, from the perspective of Earth orbit and the Moon, they have gained such a vision. There is even a common term for this experience: “The Overview Effect”, a phrase coined in the book of the same name by space philosopher and writer Frank White. It refers to the experience of seeing firsthand the reality of the Earth in space, which is immediately understood to be a tiny, fragile ball of life, hanging in the void, shielded and nourished by a paper-thin atmosphere. From space, the astronauts tell us, national boundaries vanish, the conflicts that divide us become less important and the need to create a planetary society with the united will to protect this “pale blue dot” becomes both obvious and imperative. Even more so, many of them tell us that from the Overview perspective, all of this seems imminently achievable, if only more people could have the experience!
We have a hint of the overview effect when we see pictures of the Earth as a “blue marble” and as a “pale blue dot”; those who have had the opportunity to see the Earth as a blue marble with their own eyes have presumably been affected by this vision to a greater extent than we can understand from seeing the photographs. Here is another description of the overview effect:
When people leave the surface of the Earth and travel into Low Earth Orbit, to a space station, or the moon, they see the planet differently. My colleague at the Overview Institute, David Beaver, likes to emphasize that they not only see the Earth from space but also in space. He has also been a strong proponent that we describe what then happens as a change in world view.
Deep Space: The Philosophy of the Overview Effect, Frank White
In the same essay White then quotes himself from his book, The Overview Effect: Space Exploration and Human Evolution, on the same theme:
“Mental processes and views of life cannot be separated from physical location. Our ‘world view’ as a conceptual framework depends quite literally on our view of the world from a physical place in the universe.”
Frank White has sought to give a systematic exposition of the overview effect in his book, The Overview Effect: Space Exploration and Human Evolution, which seeks to develop a philosophy of space travel derived from the personal experience of space by space travelers.
The Spatial Overview
There is no question in my mind that sometimes you have to see things for yourself. I have invoked this argument numerous times in writing about travel — no amount of eloquent description or stunning photographs can substitute for the experience of seeing a place for yourself with your own eyes. This is largely a matter of context: being in a place, experiencing a place as a presence, requires one’s own presence, and one’s own presence can be realized only as the result of a journey. A journey contextualizes an experience within the experiences required to reach the object of the journey. The very fact that one must travel in order to reach a destination alters the experience of the destination itself.
To be present in a landscape means that all of one’s senses are engaged: one not only sees, but one sees with the whole of one’s peripheral vision, and when one turns one’s body in order to take in more of the landscape, one not only sees more of the landscape, but one feels one’s body turn; one smells the air; one hears the distinctive reverberations of the most casual sounds — all of the things that remind us that this is not an illusion but possesses all the chance qualities that mark a real, concrete experience.
I have remarked in other posts that one of the distinctive trends in contemporary philosophy of mind is that of emphasizing the embodiedness of the mind, and in this context the embodied mind is a mind that is inseparable from its sensory apparatus and its sensory apparatus is inseparable from the world with which it is engaged. When our eyes hurt as we look at the sun we are reminded by this visceral experience of sight — one might say overwhelming sight — that we experience the world in virtue of a sensory apparatus that is made of essentially the same materials as the world — that there is an ontological reciprocity of eye that sees and sun that shines, and it is only because the two share the same world and are made of the same materials that they stand in a relation of cause and effect to each other. We are part of the world, of the world, and in the world.
Presumably, then, to be present in space and to feel oneself kinaesthetically in space — most obviously, the feeling of a micro-gravity environment once off the surface of the earth — is part of the experience of the overview effect, as is the dramatic journey into orbit, which must remind the viewer of the difficulty of attaining the perspective of seeing the world whole. This is the overview effect in space.
The Temporal Overview
There is also the possibility of an overview effect in time. For the same reason that we might insist that some experiences must be had for oneself, and that one must be present spatially in a spatial landscape in order to appreciate that landscape for what it is, we might also insist that a person who has lived a long life and who has experienced many things has a certain kind of understanding of the temporal landscape of life, and it is only through a conscious knowledge of the experience of time and history that we can attain an overview of time.
The movement in contemporary historiography called Big History (which I have written about several times, e.g., in The Science of Time and Addendum on Big History as the Science of Time) is an attempt to achieve an overview experience of time and history.
I have observed elsewhere that we find ourselves swimming in the ocean of history, but this very immersion in history often prevents us from seeing history whole — which is an interesting contrast to the spatial overview experience, in which contextualization in a particular space is necessary to its appreciation and understanding. But contextualization in a particular time — which we would otherwise call parochialism — tends to limit our historical perspective, and we must actively make an effort to free ourselves from our temporal and historical contextualization in order to see time and history whole.
It is the effort to free ourselves from temporal parochialism, and from the particularities and peculiarities of our own time, that gives us a perspective on history that is not tied to any one history but embraces the whole of time as the context of many different histories. This is the overview effect in time.
The Epistemic Overview
I would like to suggest that there is also an epistemic overview effect. It is not enough to be told about knowledge in the way that newspaper and magazine articles might tell a popular audience about a new scientific discovery, or in the way that textbooks tell students about the wider world. While in some cases this may be sufficient, and we must rely upon the reports of others because we cannot construct the whole of knowledge on our own, in many cases knowledge must be gained firsthand in order for its proper significance to be appreciated.
Elsewhere (in P or not-P) I have illustrated the distinction between a constructive and a non-constructive point of view being something like the difference between climbing up a mountain, clambering over every rock until one achieves the summit (constructive) versus taking a helicopter and being set down on the summit from above (non-constructive). (I have taken this example over from French mathematician Alain Connes.) With this image in mind, being blasted off into space and seeing the mountain from orbit is a paradigmatically non-constructive experience, and it is difficult to imagine how it could be made a constructive experience.
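To see the distinction in its native mathematical habitat, consider the stock example from logic — a standard textbook case, not one that appears in the posts cited above — of a non-constructive proof: the proof that there exist irrational numbers a and b such that a^b is rational.

\[
\text{If } \sqrt{2}^{\sqrt{2}} \text{ is rational, take } a = b = \sqrt{2}; \text{ otherwise take } a = \sqrt{2}^{\sqrt{2}},\ b = \sqrt{2}, \text{ since } \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2.
\]

The proof sets us down on the summit — such a pair certainly exists — without our ever learning which of the two candidate pairs does the work; we see the conclusion from above without climbing through the cases.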
Well, there are ways to make the experience constructive. Once space technology becomes widely distributed and accessible, if a person were to build their own SSTO from off-the-shelf parts and then pilot themselves into orbit, that would be something like a constructive experience of the overview effect. And if we go on to create a vibrant and vigorous spacefaring civilization, making it into orbit will be only the first of many steps, so that a constructive experience of space travel will be to “climb” one’s way from the surface of the earth through the solar system and beyond, touching every transitional point in between. It has been said that a journey of a thousand miles begins with a single step — this is very much a constructivist perspective. And it holds true that a journey of a million miles or a billion miles begins with a single step, and that first step of a cosmic voyage is the step that takes us beyond the surface of the earth.
Despite the importance and value of the constructivist perspective, it has its limitations, just as the oft-derided non-constructive point of view has its particular virtues and its significance. Non-constructive methods can reveal to us knowledge that is disruptive because it is forced upon us suddenly, in one fell swoop. Such an experience is memorable; it leaves an impression, and quite possibly it leaves much more of an impression than a painstakingly gradual revelation of exactly the same perspective.
This is the antithesis of the often-cited example of a frog placed in a pot of water which doesn’t jump out as the water is slowly brought to a boil. The frog in this scenario is a victim of constructivist gradualism; if the frog had had a non-constructive perspective on the hot water in which he was being boiled to death, he might have jumped out and saved himself. And perhaps this is exactly what we need as human beings: a non-constructive (and therefore disruptive) perspective on the familiar life that has crept over us day-by-day, step-by-step, and bit-by-bit.
An epistemic overview of knowledge can give us a disruptive conception of the totality of knowledge that is not unlike the disruptive experience of the overview effect in space, which allows us to see the earth whole, and the disruptive experience of time that allows us to see history whole. Moreover, I would argue that the epistemic overview is the ultimate category — the summum genus — that must contextualize the overview effect in space and in time. However, it is important to point out that the immediate visceral experience of the overview effect may be the trigger that is required for an individual to begin to seek the epistemic overview that will give meaning to his experiences.
. . . . .
. . . . .
. . . . .
1 September 2013
Sometimes when I am asked my favorite book I reply that it is Nietzsche’s Genealogy of Morals, which is the most systematic of his books on ethics and which gives his most detailed exposition of ressentiment. I reread the third essay in the book today — “What is the meaning of ascetic ideals?” — keeping in mind as I did so what I wrote about freedom the day before yesterday in Theory and Practice of Freedom.
To give a flavor of Nietzsche’s argument I want to cite a couple of passages from the book that I take to be particularly crucial. Firstly, here is the passage in which Nietzsche introduces the idea of ressentiment becoming creative and creating its own values:
“The beginning of the slaves’ revolt in morality occurs when ressentiment itself turns creative and gives birth to values: the ressentiment of those beings who, denied the proper response of action, compensate for it only with imaginary revenge. Whereas all noble morality grows out of a triumphant saying ‘yes’ to itself, slave morality says ‘no’ on principle to everything that is ‘outside’, ‘other’, ‘non-self ’: and this ‘no’ is its creative deed.”
Nietzsche, Friedrich, On the Genealogy of Morality, edited by Keith Ansell-Pearson, translated by Carol Diethe, Cambridge University Press, 1994/2007, p. 20.
Near the end of the book, Nietzsche reiterates one of his central themes, that man would rather will nothing than not will:
“It is absolutely impossible for us to conceal what was actually expressed by that whole willing that derives its direction from the ascetic ideal: this hatred of the human, and even more of the animalistic, even more of the material, this horror of the senses, of reason itself, this fear of happiness and beauty, this longing to get away from appearance, transience, growth, death, wishing, longing itself — all that means, let us dare to grasp it, a will to nothingness, an aversion to life, a rebellion against the most fundamental prerequisites of life, but it is and remains a will! …And, to conclude by saying what I said at the beginning: man still prefers to will nothingness, than not will…”
Nietzsche, Friedrich, On the Genealogy of Morality, edited by Keith Ansell-Pearson, translated by Carol Diethe, Cambridge University Press, 1994/2007, p. 120.
One of the themes that occurs throughout Nietzsche’s works is the critique of nihilism — Nietzsche finds nihilism in much that others fail to recognize as such, while Nietzsche himself has been accused of nihilism because of his iconoclasm. The immediately preceding passage strikes me as one of Nietzsche’s most powerful formulations of unexpected and unrecognized nihilism: willing nothing.
I think Nietzsche primarily had institutional religion in mind, especially those institutionalized religions that put a priestly caste in power (whether directly or indirectly), but there are plenty of examples of thoroughly secular ressentiment developing to the point of creating its own values, and I think one of the principal forms this secular ressentiment takes is the denial, repudiation, or rejection of freedom. The denial of freedom is a particularly pure form of the nihilistic will saying “No!” to life, since life, in the living of it, is all about freedom — we realize our freedom in the dizziness that is dread, and make our choices in fear and trembling. Many people quite literally become physically ill when faced with a momentous choice — so great a role does the idea of freedom play in our thoughts that our thoughts are manifested physically.
The denial of freedom takes many forms. For example, it often takes the form of determinism, and determinism itself can take many forms. On my other blog I wrote about determinism from the point of view of the denial of freedom as a philosophical problem — something I wanted to do to counter the prevalent attitude that asks why so many people believe in their own free will. This approach seems to me incredibly perverse; the more reasonable question is to ask why so many people believe they do not have free will. Now, Nietzsche himself was a determinist, so he likely would not be sympathetic to what I’m saying here, but that does not stop us from applying Nietzsche’s own ideas to Nietzsche himself (something Max Scheler also did in his book on Ressentiment).
Probably the most common form that the denial of freedom takes is a rationalization of a failure to take advantage of one’s freedoms. This is a much more subtle denial of freedom than determinism, and in fact assumes the reality of free will. If the palpable reality of freedom, and the potential upsets to the ordinary business of life that it presents, were not all-too-real, there would be no need to formulate elaborate rationales for not taking advantage of one’s freedom and opting for a life of conformity and servile acquiescence to authority.
Understanding that freedom is honored more in the breach than the observance was a well-trodden path in twentieth century thought. Although Freud had deterministic sympathies, his theory of reason as the mere rationalization of what the unconscious was going to do anyway incorporates both determinist and free-willist assumptions. The denial of freedom is a central theme in Sartre’s work (the spirit of seriousness and the idea of bad faith are both important forms of the denial of freedom), and through Freud and Sartre the influence of this theme on twentieth century thought and literature was profound. I have previously cited the role of Gooper Pollitt in Tennessee Williams’ Cat on a Hot Tin Roof as a paradigm of inauthenticity (in Existential Due Diligence).
All one need do is look around at the world we’ve made, with all its laws and statutes, its codes and regulations, its institutions and rules, its traditions and customs — it would be entirely possible to pass an entire lifetime in this context without realizing, much less exercising, one’s freedom. And these are only passive discouragements. When it comes to active discouragements to freedom, every nay-sayer, every pessimist, every wagging finger, every shaming tactic, every snide and cynical comment is an attempt to dissuade us from enjoying our freedom and entering into the same self-chosen misery of all those who have systematically extirpated all traces of freedom from their own lives.
Everyone who has given up freedom in their own life understandably resents seeing the exercise of freedom in the lives of others, and when this resentment turns creative it gives birth to every imaginable form of slander of freedom and of praise of servility — whether to a cause or to a movement or to an individual or to an institution — not to mention endless rationalizations of why the refusal of freedom isn’t really a refusal of freedom. Don’t believe it. Don’t believe any of it. Don’t buy into it. There is nothing in this world that is worth surrendering your freedom for — no matter how highly it is praised or how enthusiastically it is celebrated — for this praise and this celebration of unfreedom is nothing but the creative response of ressentiment directed against freedom.
. . . . .
. . . . .
. . . . .
30 August 2013
It has been observed that, in Western countries at least, the idea of freedom is honored more in the breach than the observance. Individuals who make full use of their freedom are likely to be thought eccentric, and most social institutions both impose and expect a degree of conformity that makes a mockery of the idea of freedom. In other words, there is a bifurcation between the theory and the practice of freedom, in which freedom is celebrated as a wonderful thing in theory but is frowned upon in practice.
Every compromise of our freedom, no matter how slight, every expectation that we will go along to get along, every instance of tolerated implicit coercion that channels lives in particular directions, all of the traditions and customs that we “honor” in the misguided spirit of filial piety — these impinge upon our freedom, and it is this incremental encroachment upon our freedom, the ever-so-gradual paring away of live options and possibilities, that develops into a world-view that prizes conformity over independence and authority over autonomy.
There is an important sense in which travel is the practice of the theory that is freedom. A fundamental part of freedom is freedom of movement, which is why, when we punish individuals, we incarcerate them and restrict their freedom of movement. To be deprived of one’s freedom of movement is to be deprived of one’s liberty. To make the most of one’s freedom of movement is to put into practice the idea of freedom, to live freedom and not merely to honor or respect it.
Some are deprived of their liberty by force, others by fraud, but the vast majority are deprived of their liberty by barriers that exist only because we allow them to exist. It is not quite accurate to say that most individuals live in a prison of their own making; it is worse: most live in a prison built by others, and accept it for what it is without questioning the walls, the boundaries, the perimeters, the accepted parameters of life.
There are degrees of freedom, as I observed a few days ago. One can cultivate additional degrees of freedom, but one can also leave freedom uncultivated, and in so doing implicitly and incrementally relinquish freedoms until the world narrows into something that cannot be called freedom in any sense of the term that is not a betrayal of its meaning.
When I get a taste of freedom by way of exercising my freedom of movement, it is a heady and intoxicating experience, and I want more. I think many freedoms are like this: shimmering just out of reach most of the time, but, when possessed, embraced, indulged, and exhausted, just the taste of them leaves us wanting more. The appetites of the body are readily satisfied and, once satiated, leave us unperturbed for a time. The appetites of the mind — for freedom, for meaning, for value — are much more difficult to satisfy, but once we see our way clear to grasping them, we do not tire of them, and our appetite for them only expands with time.
. . . . .
. . . . .
. . . . .
. . . . .
29 June 2013
In several posts I have referred to moral horror and the power of moral horror to shape our lives and even to shape our history and our civilization (cf., e.g., Cosmic Hubris or Cosmic Humility?, Addendum on the Avoidance of Moral Horror, and Against Natural History, Right and Left). Being horrified on a uniquely moral level is a sui generis experience that cannot be reduced to any other experience, or any other kind of experience. Thus the experience of moral horror must not be denied (which would constitute an instance of failing to do justice to our intuitions), but at the same time it cannot be uncritically accepted as definitive of the moral life of humanity.
Our moral intuitions tell us what is right and wrong, but they do not tell us what is or is not (i.e., what exists or what does not exist). This is the upshot of the is-ought distinction, which, like moral horror, must not be taken as an absolute principle, even if it is a rough and ready guide in our thinking. It is perfectly consistent, if discomfiting, to explicitly acknowledge the moral horrors of the world, and not to deny that they exist even while acknowledging that they are horrific. Sometimes the claim is made that the world itself is a moral horror. Joseph Campbell attributes this view to Schopenhauer, saying that according to Schopenhauer the world is something that never should have been.
Apart from being a central theme of mythology, the moral horror of the world is also to be found in science. There is a famous quote from Darwin that illustrates the acknowledgement of moral horror:
“There seems to me too much misery in the world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice.”
Letter from Charles Darwin to Asa Gray, 22 May 1860
This quote from Darwin underlines another point repeatedly made by Joseph Campbell: that different individuals and different societies draw different lessons from the same world. For some, the sufferings of the world constitute an affirmation of divinity, while for Darwin and others, the sufferings of the world constitute a denial of divinity. That, however, is not the point I would like to make today.
Far more common than the acceptance of the world’s moral horrors as they are is the denial of moral horrors, and especially the denial that moral horrors will occur in the future. On one level, a pragmatic level, we like to believe that we have learned our lessons from the horrors of our past, and that we will not repeat them precisely because we have perpetrated horrors in the past and came to realize that they were horrors.
To insist that moral horrors can’t happen because it would offend our sensibilities to acknowledge such a moral horror is a fallacy. Specifically, the moral horror fallacy is a special case of the argumentum ad baculum (argument to the cudgel or appeal to the stick), which is in turn a special case of the argumentum ad consequentiam (appeal to consequences).
Here is one way to formulate the fallacy:
Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, such-and-such will not take place.
For “such-and-such” you can substitute “transhumanism” or “nuclear war” or “human extinction” and so on. The inference is fallacious only when the shift is made from is to ought or from ought to is. If we confine our inference exclusively to what is, or exclusively to what ought to be, we do not have a fallacy. For example:
Such-and-such constitutes a moral horror,
It would be unconscionable for a moral horror to take place,
Therefore, we must not allow such-and-such to take place.
…is not fallacious. It is, rather, a moral imperative. If you do not want a moral horror to occur, then you must not allow it to occur. This is what Kant called a hypothetical imperative. This is a formulation entirely in terms of what ought to be. We can also formulate this in terms of what is:
Such-and-such constitutes a moral horror,
Moral horrors do not occur,
Therefore, such-and-such does not occur.
This is a valid inference, although it is unsound. That is to say, it involves no formal fallacy but a material fallacy: moral horrors do, in fact, occur, so the premise stating that moral horrors do not occur is a false premise, and the conclusion drawn from this false premise is a false conclusion. (Only by denying that moral horrors do, in fact, take place could one argue for the soundness of this inference.)
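The contrast among the three forms can be set out in a minimal logical sketch (the notation is my own shorthand, not anything drawn from the literature cited here). Let H(x) stand for “x is a moral horror,” U(x) for “x ought not to take place,” O(x) for “x takes place,” and s for such-and-such:

\[
\begin{aligned}
\textit{Fallacious (ought to is):}\quad & H(s),\ \forall x\,(H(x) \rightarrow U(x)) \not\vdash \neg O(s) \\
\textit{Moral imperative (ought throughout):}\quad & H(s),\ \forall x\,(H(x) \rightarrow U(x)) \vdash U(s) \\
\textit{Valid but unsound (is throughout):}\quad & H(s),\ \forall x\,(H(x) \rightarrow \neg O(x)) \vdash \neg O(s)
\end{aligned}
\]

The first pattern extracts a descriptive conclusion from normative premises; the third is formally impeccable but rests on the false premise that moral horrors never occur.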
Moral horrors can and do happen; they have been visited upon us numerous times. After the Holocaust everyone said “never again,” yet subsequent history has not spared us further genocides. Nor will it spare us further genocides and atrocities in the future. We cannot infer from our desire to be spared further genocides and atrocities that they will not come to pass.
More interesting than the fact that moral horrors continue to be perpetrated by the enlightened and technologically advanced human societies of the twenty-first century is the fact that the moral life of humanity evolves, and it often is the case that the moral horrors of the future, to which we look forward with fear and trembling, sometimes cease to be moral horrors by the time they are upon us.
Malthus famously argued that, because human population growth outstrips the production of food, humanity must end in misery or vice (Malthus was particularly concerned with human beings, but he held this to be a universal law affecting all life). By “misery” Malthus understood mass starvation — which I am sure most of us today would agree is misery — and by “vice” Malthus meant birth control. In other words, Malthus viewed birth control as a moral horror comparable to mass starvation. This is not a view that is widely held today.
A great many unprecedented events have occurred since Malthus wrote his Essay on the Principle of Population. The industrialization of agriculture not only provided the world with plenty of food for an unprecedented increase in human population, it did so while farming was reduced to a marginal sector of the economy. And in the meantime birth control has become commonplace — we speak of it today as an aspect of “reproductive rights” — and few regard it as a moral horror. However, in the midst of this moral change and abundance, starvation continues to be a problem, and perhaps even more of a moral horror because there is plenty of food in the world today. Where people are starving, it is only a matter of distribution, and this is primarily a matter of politics.
I think that in the coming decades and centuries there will be many developments that we today regard as moral horrors, but when we experience them they will not be quite as horrific as we thought. Take, for instance, transhumanism. Francis Fukuyama wrote a short essay in Foreign Policy magazine, Transhumanism, in which he identified transhumanism as the world’s most dangerous idea. While Fukuyama does not commit the moral horror fallacy in any explicit way, it is clear that he sees transhumanism as a moral horror. In fact, many do. But in the fullness of time, when our minds will have changed as much as our bodies, if not more, transhumanism is not likely to appear so horrific.
On the other hand, as I noted above, we will continue to experience moral horrors of unprecedented kinds, and probably also on an unprecedented scope and scale. With the human population at seven billion and climbing, our civilization may well experience wars and diseases and famines that kill billions even while civilization itself continues despite such depredations.
We should, then, be prepared for moral horrors — for some that are truly horrific, and others that turn out to be less than horrific once they are upon us. What we should not try to do is to infer from our desires and preferences in the present what must be or what will be. And the good news in all of this is that we have the power to change future events, to make the moral horrors that occur less horrific than they might have been, and to prepare ourselves intellectually to accept change that might have, once upon a time, been considered a moral horror.
. . . . .
. . . . .
. . . . .
13 May 2013
The least interesting views on almost any philosophical question will inevitably (inevitably, at least, in our age of industrial-technological civilization driven by scientific innovation) be those of some eminent scientist who delivers himself of a philosophical position without bothering to inform himself on the current state of research on the philosophical question in question, and usually, at the same time, decrying the aridity of philosophical discussion. (While this is not true of all scientific opinion on matters philosophical, it is mostly true.) So as not to make such a sweeping charge without naming names, I will here name Francis Crick as a perfect embodiment of this, and to this end I will attempt to describe what I will call “Crick’s Deepity.”
“Crick’s Deepity” sounds like the name of some unusual topographical feature that would be pointed out on local maps for the amusement of travelers, so I will have to explain what I mean by this. What is “Crick’s deepity”?
The “Crick” of the title is none other than Francis Crick, famous for sharing with James Watson the credit for discovering the structure of DNA. It will take a little longer to explain what a “deepity” is. I’ve gotten the term from Daniel Dennett, who has introduced the idea in several talks (available on YouTube), and since learning about it from a video of one of those talks I have found the term in the Urban Dictionary, so it has a certain currency. A deepity is a misleading statement that seems to be profound but is not: construed in one sense, it is simply false; construed in another sense, it is true, but trivially true.
The most commonly adduced deepities are those that depend upon the ambiguity of quotation marks, so they work much better when delivered as part of a lecture rather than when written down. Dennett uses this example — Love is just a word. If we are careful with our quotation marks, this becomes either “‘love’ is just a word” (trivially true) or “love is just a word” (false).
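The ambiguity is easy to make concrete in code. Here is a minimal sketch in Python (the toy model and all names in it are mine, purely for illustration): operations on the string “love” yield truths about the word, while claims about love itself are claims about whatever the word names.

# Use-mention in miniature: the string "love" versus what it names.
word = "love"

print(len(word))      # 4 -- a fact about the word itself (mention)
print(word.upper())   # LOVE -- another operation on the sign

# Using the word is different: here it names a relation in a toy world.
affections = {("Abelard", "Heloise")}

def loves(a, b):
    """True if a loves b in our toy model of the world."""
    return (a, b) in affections

print(loves("Abelard", "Heloise"))  # True -- a fact about the world,
                                    # not a fact about the string "love"

Read as a claim about the first sort of fact, “love is just a word” is trivially true; read as a claim about the second sort, it is false — which is exactly the structure of a deepity.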
Twentieth century analytical philosophy expended much effort on clarifying the use of quotation marks, which are surprisingly important in mathematical logic and philosophical logic (Quine even formulated quasi-quotation in order to try to dispel the confusion surrounding the use-mention distinction). The use-mention distinction also became important once Tarski formulated his disquotational theory of truth, which employs the famous example, “‘Snow is white’ is true if and only if snow is white.” The interested reader can pursue on his own the relationship between deepities and disquotationalism; perhaps there is a paper or a dissertation in it.
In one of his lectures that mention deepities, Dennett elaborates: “A deepity is a proposition that seems to be profound because it is actually logically ill-formed.” Dennett follows his deepity, “Love is just a word,” with the assertion that, in its non-trivial sense, “whatever love is, it isn’t a word.” The logical structure of this assertion is, “Whatever x is, it isn’t an F” (or, better, “there is an x, and x is not F”). What Dennett is saying here is that it is a category mistake to assert, in this case, that “x is an F” (that “love is a word”).
Whether or not a category mistake is a logical error is perhaps open to question, while use-mention errors seem to be clearly logical errors. There is, however, a long history of treating theories of categories as part of philosophical logic, so that a category error (like conflating mind with matter, or with material processes) is a logical error. Clearly, however, Dennett is treating his examples of deepities as logically ill-formed as a result of being category errors. “Whatever love is, it isn’t a word,” he says, and he says that because it would be a category error to ascribe the property of “being a word” to love, except when love is invoked as a word. (If we liked, we could limit deepities to use/mention confusions only, and in fact the entry for “deepity” in the Urban Dictionary implies as much, but while Dennett himself used a use/mention confusion to illustrate the idea of a deepity, I don’t think that it was his intention to limit deepities to use/mention confusions only, as in his expositions of the idea he defines a deepity in terms of its being logically ill-formed.)
Now, that being said, and, I trust, being understood, we pass along to further deepities. Once we pass beyond obvious and easily identifiable confusions, fallacies, and paradoxes, the identification of deepities becomes controversial rather than merely an amusing exercise. It would be easy to identify theological deepities that Dennett’s audience would likely reject — religion is a soft target, and easy to ridicule — but it is more interesting to go after hard targets. I want to introduce the particular deepity that one finds in Crick’s book The Astonishing Hypothesis:
“The Astonishing Hypothesis is that ‘You,’ your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: ‘You are nothing but a pack of neurons.’ This hypothesis is so alien to the ideas of most people alive today that it can truly be called astonishing.”
Francis Crick, The Astonishing Hypothesis: The Scientific Search for the Soul, New York: Touchstone, 1994, p. 3
No one should be astonished by this hypothesis; reductionism is as old as human thought. The key passage here is “no more than,” although in similar passages by other authors one finds the expression, “nothing but,” as in, “x is nothing but y.” This is the paradigmatic form of reductionism.
Some of my readers might be a bit slack-jawed (perhaps even, might I say, astonished) to see me call this paradigmatic instance of scientific reductionism a “deepity.” Taking up Dennett’s term “deepity” and applying it to the sort of scientistic approach to which Dennett would likely be sympathetic is clearly a case of employing the term in a manner unintended by Dennett, perhaps even constituting a use that Dennett himself would deny was valid, if he knew of it. Indeed, Dennett is quite clear about his own reductionist view of mind, and about the similarity of his own views to those of Crick.
Dennett, however, is pretty honest as a philosopher, and he freely acknowledges the possibility that he might be wrong (a position that C. S. Peirce called “fallibilism”). For example, Dennett wrote, “What about my own reductios of the views of others? Have they been any fairer? Here are a few to consider. You decide.” In the following paragraph of the same book, Intuition Pumps And Other Tools for Thinking, Dennett described what he considers to be the over-simplification of Crick’s views on consciousness:
“You would think that Sir John Eccles, the Catholic dualist, and Francis Crick, the atheist materialist, would have very little in common, aside from their Nobel prizes. But at least for a while their respective views of consciousness shared a dubious oversimplification. Many nonscientists don’t appreciate how wonderful oversimplifications can be in science; they cut through the hideous complexity with a working model that is almost right, postponing the messy details until later. Arguably the best use of ‘over’-simplification in the history of science was the end run by Crick and James Watson to find the structure of DNA while Linus Pauling and others were trudging along trying to make sense of the details. Crick was all for trying the bold stroke just in case it solved the problem in one fell swoop, but of course that doesn’t always work.”
Daniel C. Dennett, Intuition Pumps And Other Tools for Thinking, 2. “By Parody of Reasoning”: Using Reductio ad Absurdum
Dennett then described Crick’s reductionist hypothesis (I’m leaving a lot out here; the reader is referred to the full account in Dennett’s book):
“…then [Crick] proposed a strikingly simple hypothesis: the conscious experience of red, for instance, was activity in the relevant red-sensitive neurons of that retinal area.”
Dennett, Op. cit.
Dennett followed this with counter-arguments that he himself offered (suggesting that Dennett is not himself quite the reductionist that he paints himself as being in popular lectures), but said of Crick that, “He later refined his thinking on this score, but still, he and neuroscientist Christof Koch, in their quest for what they called the NCC (the neural correlates of consciousness), never quite abandoned their allegiance to this idea.” Indeed, not only did Crick not abandon the idea, he went on to write an entire book about it.
It would be a mistake to take Crick’s reductionism in regard to consciousness in isolation, because it occupies a privileged place in a privileged scientific narrative. Vilayanur S. Ramachandran placed Crick and Watson’s discovery of the structure of DNA in the venerable context of repeated conceptual revolutions since the scientific revolution itself:
The history of ideas in the last few centuries has been punctuated by major upheavals in thought that have turned our worldview upside down and created what Thomas Kuhn called “scientific revolutions.” The first of these was the Copernican revolution, that, far from being the centre of the Universe, the Earth is a mere speck of dust revolving around the Sun. Second came Darwin’s insight that we humans do not represent the pinnacle of creation, we are merely hairless neotenous apes that happen to be slightly cleverer than our cousins. Third, the Freudian revolution, the view that our behaviour is governed largely by a cauldron of unconscious motives and desires. Fourth — Crick and Watson’s elucidation of DNA structure and the genetic code, banishing vitalism forever from science. And now, thanks once again partly to Crick, we are poised for the greatest revolution of all — understanding consciousness — understanding the very mechanism that made those earlier revolutions possible! As Crick often reminded us, it’s a sobering thought that all our motives, emotions, desires, cherished values, and ambitions — even what each of us regards as his very own ‘self’ — are merely the activity of a hundred billion tiny wisps of jelly in the brain. He referred to this as the “astonishing hypothesis,” the title of his last book (echoed by Jim Watson’s quip “There are only molecules, everything else is sociology”).
Vilayanur S. Ramachandran, Perception, 2004, volume 33, pages 1151-1154
The narrative of the materialist reduction of mind to brain, or to brain function, fits nicely into the overarching scientific narrative of conceptual revolutions that are a rebuke to human pride. That the rebuke to human pride remains such a central theme in the ascetic practice of science merely shows the continuity of science with its medieval scholastic antecedents, in which the punishment of human pride was no less a central doctrine. Indeed, what we might call the Copernican imperative of contemporary science has become the dominant narrative of science to the point that few other narratives are taken seriously. (It is also wrong, or at the very least misleading, but that is a topic for another, future, post.) Thus the Copernican imperative is a lot like the (repeatedly disputed) idea of progress in industrial-technological civilization: no matter how hard we try to find another paradigm to organize our understanding, we keep coming back to it. (For example, I have mentioned Kevin Kelly’s explicit arguments for progress in several posts, as in Progress, Stagnation, and Retrogression.)
Placing Crick’s thought in the context of the narrative that furnishes much of its meaning suggests further contexts for Crick’s thought — the ultimate intellectual context that inspired Crick, as well as alternative contexts that place a very different meaning and value on Crick’s reductionism. Surprisingly, the ultimate context of Crick’s views turns out to be the most simple-minded theologically-tinged science imaginable, which at once makes Dennett’s above-quoted observation about Crick’s and Eccles’ common ground pregnant with meaning.
Crick’s contempt for philosophical approaches to the problem of consciousness is so thick it practically drips off the page, and furnishes a perfect example of what I have called fashionable anti-philosophy. Despite this contempt for philosophy, Crick jumps directly into the use of theological language by repeatedly invoking the idea of a human “soul” — indeed, his book is subtitled, “the scientific search for the soul.” This is an important clue. Crick rejects philosophy, but he embraces theology. In other words, Crick’s position is theological, and this theological frame of mind is at least in part responsible for his dismissive attitude to philosophy.
Many contemporary philosophers (not to mention contemporary scientists) tie themselves into knots trying to avoid saying that thought and ideas and the mind are distinct from material bodies and physical processes, not because they can’t tell the difference between the two (like G. E. Moore’s famous dream in which he couldn’t distinguish propositions from tables), but because to acknowledge the difference between thoughts and things seems to commit one to a philosophical trajectory that cannot ultimately avoid converging on Cartesian dualism — and if there is any consensus in contemporary philosophy, it is the rejection of Cartesian dualism.
How are thoughts different from things, in so far as we understand “things” in this context to be corporeal bodies? The examples are so numerous and so obvious that it scarcely seems worth the trouble to cite a few of them, but since many people — Crick and Dennett among them — give straight-faced accounts of reductionism, I guess it is necessary. So, think of a joke. Or have someone tell you a joke. If the joke is really funny, you will be amused; maybe you will even laugh. But if you had an exhaustive delineation of brain structure and brain processes that correspond with the joke, nowhere in the brain structure or processes would you find anything funny or amusing. If you are a brain scientist you might find these brain structures and processes to be fascinating, but unless you’re a bit eccentric you are not likely to find them to be funny.
Similar considerations hold for tragedy: watch or read a great tragedy, and then see if you can find anything tragic in the brain structures and processes that correspond with viewing or reading a tragedy. If you are honest, you will find nothing tragic about brain structures and processes. Again, take two ideas, one of which is logically entailed by the other — or, if you like, take a syllogism and make it easy on yourself: Socrates is a man, All men are mortal, Therefore Socrates is mortal. Find the brain structures and processes that correspond to these three propositions, and see if there is any relationship of logical entailment between the brain structures and processes. But how in the world could a brain structure or process be logically entailed by another brain structure or process? This is simply not the kind of property that brain processes and structures possess.
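To see exactly what relation is being denied here, the syllogism can be put in standard first-order notation (my symbolism; nothing in the argument turns on it):

\[
\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)),\quad \mathrm{Man}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})
\]

The turnstile ⊢ expresses entailment: the conclusion follows from the premises in virtue of their logical form alone. A description of neurons firing, however exhaustive, has no logical form in this sense, and so cannot stand on either side of the turnstile.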
Being funny or being tragic or being logically entailed by another proposition are properties that ideas might have but they are not the kind of properties that physical structures or processes possess. Physical structures have properties like length, breadth, and depth, while physical processes might have properties like temporal duration, chemical composition, or electrical charge (brain processes might have all three properties). It would be senseless, on the other hand, to speak of the length, breadth, depth, chemical composition or electrical charge of an idea. It is nonsense to say that, “The concept ‘horse’ is three inches wide.” Not true or false — just meaningless. It is equally nonsense to say that, “The pelvis is tragic.”
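A rough analogy from programming may help here. In a statically typed language, asking for the width of a concept is not answered falsely; it is rejected as ill-formed before any question of truth or falsity can arise. A minimal sketch in Python (the types and the checker discipline are my illustration, not anything drawn from Crick or Dennett):

```python
# A minimal sketch: ill-formed predications behave like type errors,
# not like false statements. The types here are hypothetical.
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    width_inches: float

@dataclass
class Concept:
    name: str

def width(obj: PhysicalObject) -> float:
    """Width is a property that only physical objects can have."""
    return obj.width_inches

table = PhysicalObject(width_inches=36.0)
horse_concept = Concept(name="horse")

print(width(table))  # well-formed, and happens to be true of the table
# print(width(horse_concept))  # a static checker such as mypy rejects
# this line outright: the question is meaningless, not merely false.
```

The checker never evaluates the offending expression and finds it false; it refuses to let the question be asked at all.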
To conflate thoughts and things is a category mistake, and in so far as category mistakes are violations of philosophical logic, expressions that formulate category mistakes are logically ill-formed. When logically ill-formed propositions seem profound — the sort of thing which, if true, would be earth-shattering — but in fact are merely false, then you have what Dennett calls a “deepity.” Thus Crick’s deepity is his identification of “your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will” with “the behavior of a vast assembly of nerve cells and their associated molecules.” If this were true, it would be earth-shattering, but in fact it is a logically ill-formed expression that is a deepity. Whatever your joys, sorrows, and memories are, they certainly are not the behavior of nerve cells. That much should be uncontroversial, so let us call a spade a spade, and a deepity a deepity.
. . . . .
. . . . .
. . . . .
. . . . .
13 February 2013
In The Accidental World I said that individuals possess axiological uniqueness in virtue of ontological uniqueness — the very contingency of the world, the historical accidents of which we are the consequences, furnishes us with the concrete expressions of our individuality: faces, bodies, boundaries, borders — all that is ours.
It may have appeared mildly ironic to some that I should begin my trip to Japan with a meditation on individuality. Japan is, after all, known in the west as the source of the proverb that the stake that sticks up gets hammered down (出る杭は打たれる。 Deru kui wa utareru).
While Japan is stereotypically a land of stultifying conformity, Ruth Benedict’s classic study of Japan, The Chrysanthemum and the Sword, the result of wartime research commissioned by the U.S. Office of War Information, presents the reader with a sequence of dramatic contradictions of the Japanese character:
“The Japanese are, to the highest degree, both aggressive and unaggressive, both militaristic and aesthetic, both insolent and polite, rigid and adaptable, submissive and resentful of being pushed around, loyal and treacherous, brave and timid, conservative and hospitable to new ways. They are terribly concerned about what other people will think of their behavior, and they are also overcome by guilt when other people know nothing of their misstep. Their soldiers are disciplined to the hilt but are also insubordinate.”
Ruth Benedict, The Chrysanthemum and the Sword, Chapter I, “Assignment: Japan”
In other words, the Japanese are human, all-too-human. There was never a more persuasive argument for universal human nature than a detailed study of the life of a people that reveals their inner nature to be as conflicted as the inner nature of any other people.
In the spirit of Benedict’s contradictory character traits, one would expect that the Japanese are at once pervasively conformist and profoundly individualistic. The same might be said of Europeans, Africans, Latins, Americans, and so on.
What is distinctive about a people and a culture — that which is distinctively theirs and not ours — is the way in which the conflicted components of human nature are manifested in social institutions. And social institutions can vary quite significantly. Every society must find a way to keep the better part of its people fed, clothed, washed, housed, and occupied, but within these rather generous parameters there are ample opportunities for social experiments not duplicated elsewhere in the world.
Human beings, being all derived from a single speciation event, have a unity that cultural institutions do not possess. Social institutions, far more than individuals, embody the historical accidents that vary from place to place and time to time.
What we find when we travel are human beings, the same as human beings any other place on the planet, but whose lives have been shaped by the geographical and historical accidents that remain localized — unlike ourselves. We individuals do not remain localized. Like our prehistoric ancestors, we can start walking, and if we walk long enough and far enough (and maybe canoe for a while as well) we will find ourselves in another world shaped by other forces of geography and history than those familiar to us.
It is an accident that any of us happens to live where we live, just as it is an accident where we happen to be born. It is partially an accident, and partially a matter of choice, where we happen to travel. If we start walking, we first find ourselves at our neighbor’s, and then at our neighbor’s neighbor’s, and so on. Their lives are as accidental as are our lives. That they are the closest Other (and therefore representative of the narcissism of small differences) is as much an accident as where we happen to be born ourselves.
If we spend a little more time planning our expeditions, not merely setting out to walk away from our accidental home, but seeking a place in the world that agrees with our temperament, tastes, or preferences — that, too, is an accident, for while human nature (if there is any) may be traced to a single speciation event, individual temperament is an accident of history, and the places in the world that happen to offer aesthetic, intellectual, pragmatic, or other satisfaction to the individual mind do so as a matter of chance.
De gustibus non est disputandum: there is no disputing matters of taste.
. . . . .
. . . . .
. . . . .
27 October 2012
What is a definitive formulation?
Recently on my other blog I discussed the philosophical pursuit of definitive formulations. What is a definitive formulation? The reader will, I am sure, immediately see that giving a concise and accurate idea of what constitutes a definitive formulation would itself require a definitive formulation of a definitive formulation.
I don’t yet have a definitive formulation of what constitutes a definitive formulation. I could simply say that it is a formulation of a concept that could serve as a definition, but this wouldn’t be very helpful. Here is how I characterized it in my other post:
“…a handful of short, clear, concise, and intuitively accessible sentences…”
“…to put this in clear and simple terms, if I have a definitive formulation, that means if you stopped me on the street and asked me to explain myself while standing on one foot, I could do it. Lacking definitive formulations, the attempted explanation would go on a little too long to be comfortable (or safely balanced) on one foot.”
Lacking a definitive formulation of an idea that is central to our thought means that we can only say what Augustine said of time in his Confessions:
What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not: yet I say boldly that I know, that if nothing passed away, time past were not; and if nothing were coming, a time to come were not; and if nothing were, time present were not. (11.14.17)
quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio. fidenter tamen dico scire me quod, si nihil praeteriret, non esset praeteritum tempus, et si nihil adveniret, non esset futurum tempus, et si nihil esset, non esset praesens tempus.
In some cases, I think that we can move beyond this Augustinian limit to definition, and it is when we hit upon a definitive formulation that we are able to do this.
It seems appropriate that I should give a concrete example of something that I would identify as a definitive formulation, and since I have recently hit upon a formulation that I rather like, I will try to use this to show what a definitive formulation is.
What is temperament?
I have written several posts about temperament, including Temperamental Diversity, A Third Temperament, Intellectual Personalities, and Temperament and Civilization. I don’t think that philosophy, science, or socio-political thought has yet done justice to the role that temperament plays in the world.
But what is temperament? The seventh of ten definitions in the Oxford English Dictionary (the one of the ten that comes closest to the sense of “temperament” as I have been using the word) defines temperament as follows:
“Constitution or habit of mind, esp. as depending upon or connected with physical constitution; natural disposition”
The sixth of the OED definitions defines temperament in terms of the four humours recognized in medieval medical theory and practice:
“In mediæval physiology: The combination of the four cardinal humours (see humour n. 2b) of the body, by the relative proportion of which the physical and mental constitution were held to be determined; known spec. as animal temperament; also, The bodily habit attributed to this, as sanguine temperament, choleric temperament, phlegmatic temperament, or melancholic temperament (see the adjs.).”
In traditional philosophical parlance, a dictionary definition gives us a nominal definition, but as philosophers what we really want is a real definition. While the philosophical distinction between nominal and real definitions is ancient and widely familiar, and therefore probably ought to remain untouched, I think it is more intuitive to call these two kinds of definition formal definition and metaphysical definition. A formal definition situates the meaning of a term within a formal system, perhaps within the system of language, whereas a metaphysical definition situates the meaning of a term within the structure of the world. So I guess what I am saying here is that one function of a definitive formulation is to give a metaphysical definition — but to be able to do so without requiring the exposition of an entire metaphysical system. You can imagine why this might be difficult.
So, what would I offer as a definitive formulation of temperament, that (hopefully) goes beyond the formal (i.e., nominal) definition in the OED? I define temperament as follows:
Temperament is the intellectual expression of individual variability.
I hope that the reader doesn’t find this too anti-climactic. I’ll try to explain why I find this to be a fruitful formulation.
The charm of an idea
A definitive formulation, as I understand it, has an aphoristic quality: it is brief, concise, sententious, and pregnant with meaning. It also has a certain indefinable “appeal” that, like most forms of appealingness, is compelling to some even while it leaves others cold.
Wittgenstein formulated this appeal by calling it the “charm” that some proofs in mathematics and the foundations of mathematics possess. The later Wittgenstein was concerned to criticize the whole Cantorian conception of set theory and transfinite numbers, and much of Wittgenstein’s later philosophy of mathematics has this purpose implicitly at the center of its exposition. (In connection with this, I have previously mentioned Brouwer’s influence on Wittgenstein in Saying, Showing, Constructing, and more recently wrote more about Brouwer in One Hundred Years of Intuitionism and Formalism.)
Here’s what Wittgenstein said about mathematical “charm” in his lectures of 1939:
“The proof has a certain charm if you like that kind of thing; but that is irrelevant. The fact that it has this charm is a very minor point and is not the reason why those calculations were made. — That is colossally important. The calculations have their use not in charm but in their practical consequences.”
“It is quite different if the main role or sole interest is this charm — if the whole interest is showing that a line does cut when it doesn’t, which sets the whole mind in a whirl, and gives the pleasant feeling of paradox. If you can show that there are numbers bigger than the infinite, your head whirls. This may be the chief reason this was invented.”
Ludwig Wittgenstein, Wittgenstein’s Lectures on the Foundations of Mathematics, Cambridge, 1939, edited by Cora Diamond, University of Chicago Press, 1989, p. 16
With this in mind, I am well aware that the “charm” that I find in my definitive formulation of temperament may well be lost on others. The fact that an idea that has a certain charm for one person has none for another is itself a function of temperament. Individuals of different temperaments will find an intellectual charm in different formulations.
Part of the charm that a formulation has (or fails to have) is the connections that it forges to familiar theories. A definitive formulation, among its other functions, contextualizes a less familiar or less precise concept in an established theory or theories, enabling a systematic exploration and exposition of the idea in relation to familiar and therefore more thoroughly explored theories. Well-known theories provide clear parameters for an idea, which, when formerly known only in a vague and imprecise form, had no clear parameters.
In formulating temperament as the intellectual expression of individual variability I am contextualizing human temperament in evolutionary theory, and thereby suggesting an interpretation of temperament based in and drawing upon evolutionary psychology. Thus evolutionary theory provides the parameters for temperament understood as the intellectual expression of individual variability.
Individual variability is one of the drivers of natural selection. When distinct individuals have distinct properties, a selection event may favor (select for) some properties while disfavoring (select against) other properties. Usually we think of the properties of an organism as being structural features of an organism: one finch has a longer beak than another, or one ape is better at walking on two legs than another. These differences might disappear into the dustbin of natural history if no selection event comes along that favors one or the other. But if a selection event does occur, and it favors some structural attribute of an organism that varies among individuals, the favored individuals will go on to experience differential survival and reproduction.
While we usually think of selection in structural terms, a selection event can also select for behaviors. Organisms can adapt to their environment through behaviors just as certainly as (and much more rapidly than) through structural changes in their bodies. Behavioral adaptation is no less significant in natural history than structural adaptation.
At very least with the emergence of human beings, and probably also with other species, both hominid precursors of Homo sapiens and other large-brained mammals, mind emerged in natural history. With the emergence of mind, there emerged also a novel basis of selection. Some minds are constituted in one way, while other minds are constituted in other ways. In other words, the same individual variability we find in bodies and behaviors is also to be found in minds.
If a selection event occurs that should happen to favor (or disfavor) any one kind of mind over any other kind of mind, those possessing the favored minds will enjoy differential survival and reproduction. With individual variability of minds represented in a sentient population — individual temperaments that lead individuals to think in different ways, and value things in different ways, and deliberate over alternatives in different ways — there is the continual possibility of natural selection.
The more varieties of mind there are, the greater the number of alternatives amongst which a selection event can select, and the greater the likelihood that some one temperament is better fitted than others to survive the particular conditions that obtain.
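A toy simulation makes the structure of this argument concrete. What follows is a minimal sketch of my own; the trait values, population size, and threshold are hypothetical placeholders, not claims about any actual population:

```python
# A minimal sketch: a selection event acting on individual variability.
# All numbers are hypothetical placeholders for illustration.
import random

def selection_event(population, threshold):
    """Keep only individuals whose trait value exceeds the threshold
    imposed by this particular selection event."""
    return [trait for trait in population if trait > threshold]

random.seed(0)

# Individual variability: each value stands in for some heritable
# trait -- a beak, a behavior, or a cast of mind (a temperament).
varied = [random.gauss(0.5, 0.15) for _ in range(1000)]
uniform = [0.5] * 1000  # a population with no variability at all

survivors_varied = selection_event(varied, threshold=0.6)
survivors_uniform = selection_event(uniform, threshold=0.6)

print(len(survivors_varied), "of", len(varied), "varied individuals survive")
print(len(survivors_uniform), "of", len(uniform), "uniform individuals survive")
# With variability, the event selects among alternatives and the favored
# trait enjoys differential survival; without variability, the same event
# takes all or none, and nothing is selected for.
```

The two populations face the same event; only the varied one gives selection anything to act upon, and the logic is indifferent to whether the varying trait is structural, behavioral, or mental.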
Thus to formulate temperament as the intellectual expression of individual variability is to place mind within natural history.
To place mind within nature is a metaphysical formulation.
. . . . .
. . . . .
. . . . .