10 September 2012
When writing about civilization I have started using the term “industrial-technological civilization” as I believe this captures more accurately the sense of what is unique about contemporary civilization. In Modernism without Industrialism: Europe 1500-1800 I argued that there is a sense in which this early modern variety of civilization was an abortive civilization, the development of which was cut short by the sudden and unprecedented emergence of industrial-technological civilization. I also discussed this recently in Temporal Structures of Civilization.
What I am suggesting is that the industrial revolution inaugurated a novel form of civilization that overtook modernism and essentially replaced it through the overwhelming rapidity and totality of industrial-technological development. And while the industrial revolution began in England, it was in nineteenth century Germany that industrial-technological civilization proper got its start, because it was in Germany that the essential elements that drive industrial-technological civilization came together for the first time in a mutually-reinforcing feedback loop.
The essential elements of industrial-technological civilization are science, technology, and engineering. Science seeks to understand nature on its own terms, for its own sake. Technology is that portion of scientific research that can be developed specifically for the realization of practical ends. Engineering is the actual industrial implementation of a technology. I realize that I am introducing conventional definitions here, and others have established other conventions for these terms, but I think that this much should be pretty clear. If you'd like to parse the journey from science to industry differently, you'll still arrive at more or less the same mutually-reinforcing feedback loop.
The important thing to understand about the forces that drive industrial-technological civilization is that this cycle is not only self-reinforcing but also that each stage is selective. Science produces knowledge, but technology only selects that knowledge from the scientific enterprise that can be developed for practical uses; of the many technologies that are developed, engineering selects those that are most robust, reproducible, and effective to create an industrial infrastructure that supplies the mass consumer society of industrial-technological civilization. The process does not stop here. The achievements of technology and engineering are in turn selected by science in order to produce novel and more advanced forms of scientific instrumentation, with which science can produce further knowledge, thus initiating another generation of science followed by technology followed by engineering.
Because of this unique self-perpetuating cycle of industrial-technological civilization, continuous scientific, technological, and engineering development is the norm. It is very tempting to call this development “progress” but as soon as we mention “progress” it gets us into trouble. Progress is problematic because it is ambiguous; different people mean different things when they talk about progress. As soon as someone points out the relentless growth of industrial-technological civilization, someone else will point out some supposed depravity that has flourished along with industrial-technological civilization in order to disprove the idea that such civilization involves “progress.” The ambiguity here is the conflation of technological progress and moral progress.
It is often said that poets only hope to produce poetry as good as that of past poets, and few imagine that they will create something better than Homer, Dante, Chaucer, or Shakespeare. The standards of poetry and art were set high early in the history of civilization, so much so that contemporary poets and sculptors do not imagine progress to be possible. One can give voice to the authentic spirit of one's time, but one is not likely to do better than artists of the past did in their effort to give voice to a different civilization. Thus it would be difficult to argue for aesthetic progress as a feature of civilization, much less industrial-technological civilization, any more than one would be likely to attribute moral progress to it.
Contemporary thinkers are also very hesitant to use the term "progress" because of its abuse in the recent past. When a history is written so that the whole of previous history seems to point to some present state of perfection as the culmination of the whole of history, we call this Whiggish history, and everyone today is contemptuous of Whiggish history because we know that history is not an inevitable progression toward greater rationality, freedom, enlightenment, and happiness. Whiggish history is usually traced to Sir James Mackintosh's The History of England (1830–1832, 3 vols.), and this was thought to inaugurate a particular nineteenth century fondness for progressive history, so much so that one often hears the phrase, "the nineteenth century cult of progress."
Alternatively, the origins of Whiggish history can be attributed to the Marquis de Condorcet’s Outlines of an historical view of the progress of the human mind (1795), and especially its last section, “TENTH EPOCH. Future Progress of Mankind.”
Given the dubiousness of moral progress, the absence of aesthetic progress, and the bad reputation of history written to illustrate progress, historians have become predictably skittish about saying anything that even suggests progress. This has created an historiographical climate in which any progress is simply dismissed as impossible, but we know this is not true. Even while some dimensions of civilization may remain static, and some may become retrograde, there are some dimensions of civilization that have progressed, and we need to say so explicitly or we will misunderstand the central fact of life in industrial-technological civilization.
Thus I will assert as the Industrial-Technological Thesis that technological progress is intrinsic to industrial-technological civilization. (I could call this the “fundamental theorem of industrial-technological civilization” or, if I wanted to be even more tendentious, “the technological-industrial complex.”) I wish to be understood as making a rather strong claim in so asserting the industrial-technological thesis.
More particularly, I wish to be understood as asserting that industrial-technological civilization is uniquely characterized by the escalating feedback loop of science, technology, and engineering, and that if this cycle should fail or shudder to a halt, the result will not be a stagnant industrial-technological civilization, but a wholly distinct form of civilization. Given the scope and scale of contemporary industrial-technological civilization, which possesses massive internal momentum, even if the cycle that characterizes technological progress should begin to fail, the whole of industrial-technological civilization will continue in existence in its present form for quite some time to come. Transitions between distinct forms of civilization are usually glacially slow, and this would likely be the case with the end of industrial-technological civilization; the advent of industrial-technological civilization is the exception due to its rapidity, thus we must acknowledge at least the possibility of another rapid transition, even if it is unlikely.
Because of pervasive contemporary irony and skepticism, which is often perceived as being sufficient in itself to serve as the basis for the denial of the technological-industrial thesis, one expects to hear casual denials of progress. By asserting the technological-industrial thesis, and noting the pervasive nature of technological progress within it (and making no claims whatsoever regarding other forms of progress), I want to point out the casual and dismissive nature of most denials of technological progress. The point here is that if someone is going to assert that technological progress cannot continue, or will not continue, or plays no essential role in contemporary civilization, it is not enough merely to assert this claim; if one denies the industrial-technological thesis, one is obligated to maintain an alternative thesis and to argue the case for the absence of technological progress now or in the future. (We might choose to call this alternative thesis Ecclesiastes’ Thesis, because Ecclesiastes famously maintained that, “The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.”)
The industrial-technological thesis has significant consequences. Since civilizations ordinarily develop over a long time frame (i.e., la longue durée), and industrial-technological civilization is very young, we can likely expect that it will last for quite some time, and that means that escalating progress in science, technology, and engineering will continue apace. The wonders that futurists have predicted are still to come, if we will be patient. As I observed above, even if the feedback loop of technological progress is interrupted, the momentum of industrial-technological civilization is likely to continue for some time — perhaps even long enough for novel historical developments to emerge from the womb of a faltering industrial-technological civilization and overtake it in its decline with innovations beyond even the imagination of futurists.
. . . . .
. . . . .
. . . . .
7 September 2012
Some time ago in The Pleasures of Model Drift I discussed how contemporary cosmology is challenged by the accelerating expansion of the universe, and that there are no really good explanations of this yet in terms of the received cosmological models. The resulting state of cosmological theories, then, is called model drift. This is a Kuhnian term. Almost exactly a year ago, when it was reported that some neutrinos may have traveled faster than light, it looked like we might also have had to face model drift in particle physics. Since these results haven't been replicated, the standard model not only continues to stand, but has recently been fortified by the announcement of the discovery (sort of discovery) of the Higgs boson.
But theoretical physics isn't over yet. Some time ago in The limits of my language are the limits of my world I took up Wittgenstein's famous aphorism from the perspective of recent work in particle physics that had "bent" the rules of quantum theory. Further work by at least one member of the same scientific team at the Centre for Quantum Information and Quantum Control and Institute for Optical Sciences, Department of Physics, University of Toronto, Aephraim M. Steinberg, has continued this line of research, which has been reported by the same BBC Science and technology reporter, Jason Palmer, who wrote up the earlier results (cf. Quantum test pricks uncertainty). This story covers research reported in Physical Review Letters, "Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements."
The abstract of this most recent research reads as follows:
While there is a rigorously proven relationship about uncertainties intrinsic to any quantum system, often referred to as “Heisenberg’s uncertainty principle,” Heisenberg originally formulated his ideas in terms of a relationship between the precision of a measurement and the disturbance it must create. Although this latter relationship is not rigorously proven, it is commonly believed (and taught) as an aspect of the broader uncertainty principle. Here, we experimentally observe a violation of Heisenberg’s “measurement-disturbance relationship”, using weak measurements to characterize a quantum system before and after it interacts with a measurement apparatus. Our experiment implements a 2010 proposal of Lund and Wiseman to confirm a revised measurement-disturbance relationship derived by Ozawa in 2003. Its results have broad implications for the foundations of quantum mechanics and for practical issues in quantum measurement.
Experimentalists are chipping away at Heisenberg’s Uncertainty Principle. They aren’t presenting their research as something especially radical — one might even think of this recent work as an instantiation of radical theories, modest formulations — but this is theoretically and even philosophically quite important.
We recall that despite himself making crucial early contributions to quantum theory, Einstein eventually came to reject quantum theory, offering searching and subtle critiques of the theory. In his time Einstein was isolated among physicists for rejecting quantum theory at the very time of its greatest triumphs. Quantum theory has gone on to become one of the most verified theories — and verified to the most exacting standards — in the history of physics, notwithstanding Einstein's criticisms. Einstein primarily fell out with quantum theory over the notion of quantum entanglement, though Einstein, himself a staunch determinist, was also greatly troubled by Heisenberg's uncertainty principle. Many, perhaps including Einstein himself, conflated physical determinism with scientific realism, so that a denial of determinism came to be associated with a rejection of realism. Heisenberg's uncertainty principle is Exhibit "A" when it comes to the denial of determinism. So I think that if Einstein had lived to see this most recent work, he would have been both fascinated and intrigued by its implications for the uncertainty principle, and indeed its philosophical implications for physics.
Einstein was a uniquely philosophical physicist — the very antithesis of what recent physics has become, and which I have called Fashionable Anti-Philosophy (and which I elaborated in Further Fashionable Anti-Philosophy). From his earliest years, Einstein carefully studied philosophical works. He is said to have read Kant's three critiques in his early teens. And Einstein's rejection of quantum theory, which he modestly and humorously characterized as saying that something in his little finger told him that it couldn't be right, was a philosophical rejection of quantum theory.
The recent research into Heisenberg's uncertainty principle is not couched in philosophical terms, but it is philosophically significant. The very fact that this research is going on suggests that others, not only Einstein, have been dissatisfied with the uncertainty principle as it is usually interpreted, and that scientists have continued to think critically about it even as the uncertainty principle has been taught for decades as orthodox physics. This is a perfect example of what I have called Science Behind the Scenes.
The uncertainty of quantum theory, given formal expression in Heisenberg’s uncertainty principle, came to be interpreted not only epistemically, as placing limits on what we can know, but it was also interpreted ontologically, as placing limits on the constituents of the world. In so far as Heisenberg encouraged an ontological interpretation of the uncertainty principle, which I believe to be the case, he was advancing an underdetermined theory, i.e., an ontological interpretation of the uncertainty principle goes beyond — I think it goes far beyond — the epistemic uncertainty that we must posit in order to do quantum theory.
It seems to me that it is pretty easy to interpret the recent research cited above as questioning the ontological interpretation of the uncertainty principle while leaving an epistemic interpretation untouched. The limits of human knowledge are often poignantly brought home to us in our daily lives in a thousand ways, but we need not make the unnecessary leap from limitations on human knowledge to limitations on the world. On the other hand, we also need not make any connection between realism and determinism. It is entirely consistent (even if it seems odd to some of us) that there should be an objective world existing apart from human experience of and knowledge of that world, and that that objective world should not be deterministic. It may well be essentially random, and only stochastically describable, when a given radioactive substance decays, but the radioactive substance and the event of decay are as real as Einstein's little finger. If I could have a conversation with Einstein, I would try to convince him of precisely this.
Indeterminate realism is also an underdetermined theory, and it is to be expected that there are non-realistic theories that are empirically equivalent to indeterminate realism. It is for this reason that I believe there are other arguments, distinct from those above, that favor realism over anti-realism, or even realism over some of the more extravagant interpretations of quantum theory. But I won’t go into that now.
We aren’t about to return to classical theories and their assumptions of continuity such as we had prior to quantum theory, any more than we are about to give up relativistic physics and return to strictly Newtonian physics. That much is clear. Nevertheless, it is important to remember that we are not saddled with any one interpretation of relativity or quantum theory, and we are especially not limited to the philosophical theories formulated by those scientists who originally formulated these physical theories, even if the philosophical theories were part of the “original intent” (if you will forgive me) of the physical theory. Another way to put this is that we are not limited to the original model of a theory, hence model drift.
. . . . .
. . . . .
. . . . .
27 May 2012
In the painfully slow process of the formulation of a secular world view having started from civilizations that, throughout the world, have been permeated by religious significance — so much so that each of the world’s major religions roughly correspond to each of the world’s major civilizations — one of the walls against which we repeatedly crack our heads is that of the traditional sense of grandeur that is so perfectly embodied in the religious rituals of ecclesiastical civilization.
For many if not most human beings, this grandeur of ritual translates into intellectual grandeur, and, again, for many if not most, this equation of religious grandeur with human honor and dignity has meant that any deviation from the traditions of ecclesiastical civilization has been treated as a deviation from the intrinsic respect due to human beings as human beings. That is to say, many Westerners (and possibly also many elsewhere in the world) express indignation, outrage, and anger over a naturalistic account of human origins. The whole legacy of Copernicus is seen as invidious to human dignity.
Among those in the sciences and philosophy, it has become commonplace to attribute the strongly negative reaction to naturalism (especially as it touches upon human origins) to the re-contextualization of humanity's place in nature in view of a naturalistic cosmology. Anthropocentric cosmology is here treated as an expression of overweening human pride, and the need to re-conceptualize the cosmos in terms that make human beings and human concerns no longer central is seen not only as a necessary adjustment to scientific understanding but also as a stern lesson to human hubris.
In other words, the scientific demonstration of the peripheral position of humanity in a naturalistic cosmos is understood to be a moral good because it, “brings most men’s characters to a level with their fortunes” (to quote Thucydides). Science is a rough master, and by formulating scientific cosmology in these unforgiving terms I have made it sound harsh and unsympathetic. This was intentional, because this formulation comes closer to doing justice to the visceral intuitions of the indignant anthropocentric than the usual formulation in terms of a necessary correction to human pride.
Seen in this way, anthropocentric-ecclesiastical civilization and Copernican-scientific civilization are both related in an essential way to a conception of human pride. Both conceptions of humanity and of civilization have a fundamentally conflicted conception of pride. In ecclesiastical civilization, human pride in species-being (to employ a Marxist term) is magnified while individualistic pride is the sin of Satan and central to the fallen nature of the world. In Copernican civilization, human pride in human knowledge is magnified — and I note that human knowledge is often an individualistic undertaking, but see below for more on this — but pride in species-being is called into question.
In ecclesiastical civilization, pride in species-being is raised to the status of metaphysical pride and is postulated as the organizational principle of the world. But, of course, pride in species-being is identified with humility, and the whole of humanity is dismissed as sinners. In Copernican civilization, pride in knowledge — epistemic pride — is raised to the status of metaphysical pride and is postulated as the organizational principle of the world. But, of course, the epistemic pride of science is often identified with epistemic humility. As Socrates once said to Antisthenes, “I can see your pride through the holes in your cloak.”
Individualistic pride is closely connected to the heroic conception of civilization, and as civilization continues its relentless consolidation of social institutions integrated within a larger whole of human endeavor, the role (even the possibility) of individual heroic action is abridged. Individualistic pride in this context is even more closely connected with the heroic conception of science, which is (as I have pointed out elsewhere) already an antiquated notion.
When civilization was young and scientific research was the province of individuals, not institutions and their communities of researchers, almost all scientific discoveries were the result of heroic individual efforts. Science, like civilization, is now a collective enterprise, and just as the story of civilization was once told as the deeds of kings, so the story of science was once told as the deeds of discoverers. Such authentic efforts could still be found in the nineteenth century (in the person of Darwin) and even in the early twentieth century (in the person of Einstein). But it is rarely the case today, and will become rarer and possibly extinct in the future.
Pride in species-being (in contradistinction to individualistic pride) is something that I have not spent much time thinking about, but when I think about it now in the present context it seems to me that this represents a heroic conception of the career of humanity — a kind of collective heroism of a biological community striving to overcome adverse selection. Thus, if the world is magnified, how much greater is the glory of the species that triumphs over the deselective obstacles thrown up by the world? Religion magnifies the anthropocentrically-organized world in order to magnify the species-being that has been made the principle of the world; science magnifies the Copernican decentralized world in order to magnify the knower whose knowledge has been made the principle of the world.
As ecclesiastical civilization slowly, gradually, and incrementally gives way before Copernican civilization, novel ways will need to be found to supply the apparent human need for a heroic conception of the career of humanity as a whole. It will not be enough to insist upon the grandeur of the scientifically understood universe. We have seen that religion, science, and philosophy can all appeal to the grandeur of the world in making the case for a unification of the world around a particular principle. The Psalmist wrote, “The heavens declare the glory of God; and the firmament proclaims his handiwork.” Darwin wrote, “There is grandeur in this view of life.” Nietzsche wrote even as he was losing his mind, “Sing me a new song: the world is transfigured and all the heavens are filled with joy.”
Scientific knowledge is now a production of species-being, but I don’t think that science as an institution can bear the heavy burden of human hopes and dreams and expectations. Perhaps civilization, which is also collective and a production of species-being, could be channeled into a heroic conception of species-being that could serve an eschatological function. This seems like a real possibility to me, but it is not something that is yet a palpable reality.
If those who will someday formulate a future science of civilizations also see themselves as engineers of the human soul, i.e., that they conceive of the science of civilization not only descriptively but also prescriptively, they will want to not only formulate a doctrine of what civilization is, but also what civilization will be, can be, and ought to be. If civilization is to be a home for human hopes, then it must become something that is capable of sustaining and nurturing such hopes.
. . . . .
. . . . .
. . . . .
19 May 2012
We can make a distinction among distinctions between ad hoc and principled distinctions. The former category — ad hoc distinctions — may ultimately prove to be based on a principle, but that principle is unknown as long as the distinction remains an ad hoc distinction. This suggests a further distinction among distinctions between ad hoc distinctions that really are ad hoc, and which are based on no principle, and ad hoc distinctions that are really principled distinctions but the principle in question is not yet known, or not yet formulated, at the time the distinction is made. So there you have a principled distinction between distinctions.
A perfect evocation of ad hoc distinctions is to be found in the opening paragraph of the Preface to Foucault’s The Order of Things:
This book first arose out of a passage in Borges, out of the laughter that shattered, as I read the passage, all the familiar landmarks of my thought — our thought, the thought that bears the stamp of our age and our geography — breaking up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things, and continuing long afterwards to disturb and threaten with collapse our age-old distinction between the Same and the Other. This passage quotes a 'certain Chinese encyclopedia' in which it is written that 'animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies'. In the wonderment of this taxonomy, the thing we apprehend in one great leap, the thing that, by means of the fable, is demonstrated as the exotic charm of another system of thought, is the limitation of our own, the stark impossibility of thinking that.
Such distinctions are comic, though Foucault recognizes that our laughter is uneasy: even as we immediately recognize the ad hoc character of these distinctions, we realize that the principled distinctions we routinely employ may not be so principled as we supposed.
Foucault continues this theme for several pages, and then gives another formulation — perhaps, given his interest in mental illness, an illustration that is closer to reality than Borges' Chinese encyclopedia:
“It appears that certain aphasiacs, when shown various differently coloured skeins of wool on a table top, are consistently unable to arrange them into any coherent pattern; as though that simple rectangle were unable to serve in their case as a homogeneous and neutral space in which things could be placed so as to display at the same time the continuous order of their identities or differences as well as the semantic field of their denomination. Within this simple space in which things are normally arranged and given names, the aphasiac will create a multiplicity of tiny, fragmented regions in which nameless resemblances agglutinate things into unconnected islets; in one corner, they will place the lightest-coloured skeins, in another the red ones, somewhere else those that are softest in texture, in yet another place the longest, or those that have a tinge of purple or those that have been wound up into a ball. But no sooner have they been adumbrated than all these groupings dissolve again, for the field of identity that sustains them, however limited it may be, is still too wide not to be unstable; and so the sick mind continues to infinity, creating groups then dispersing them again, heaping up diverse similarities, destroying those that seem clearest, splitting up things that are identical, superimposing different criteria, frenziedly beginning all over again, becoming more and more disturbed, and teetering finally on the brink of anxiety.”
Foucault here writes that, “the sick mind continues to infinity,” in other words, the process does not terminate in a definite state-of-affairs. This implies that the healthy mind does not continue to infinity: rational thought must make concessions to human finitude. While I find the use of the concept of the pathological in this context questionable, and I have to wonder if Foucault was unwittingly drawn into the continental anti-Cantorian tradition (Brouwerian intuitionism and the like, though I will leave this aside for now), there is some value to the idea that a scientific process (such as classification) must terminate in a finite state-of-affairs, even if only tentatively. I will try to show, moreover, that there is an implicit principle in this attitude, and that it is in fact a principle that I have discussed previously.
The quantification of continuous data requires certain compromises. Two of these compromises are finite precision errors (also called rounding errors) and finite dimension errors (also called truncation errors). Rounding errors should be pretty obvious: finite parameters cannot abide infinite decimal expansions, and so we set a limit of six decimal places, or twenty, or more — but we must set a limit. The difference between the actual figure and the limited decimal expansion of the same figure is called a finite precision error. Finite dimension errors result from the need to arbitrarily introduce gradations into a continuum. Using the real number system, any continuum can be faithfully represented, but this representation would require infinite decimal expansions, so we see that there is a deep consonance between finite precision errors and finite dimension errors. Thus, for example, we measure temperature by degrees, and the arbitrariness of this measure is driven home to us by the different scales we can use for this measurement. And if we could specify temperature using real numbers (including transcendental numbers) we would not have to compromise. But engineering and computers and even human minds need to break things up into manageable finite quantities, so we speak of 3 degrees C, or even 3.14 degrees C, but we don't try to work with pi degrees C. Thus the increments of temperature, or of any other measurement, involve both finite precision errors and finite dimension errors.
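The two compromises described above can be made concrete in a few lines of code. This is only an illustrative sketch — the function names and the whole-degree quantization scheme are my own assumptions for the example, not standard terminology — using pi degrees C as the "true" continuous value:

```python
import math

def finite_precision(value, places):
    """Round a real value to a fixed number of decimal places,
    introducing a finite precision (rounding) error."""
    return round(value, places)

def finite_dimension(value, step):
    """Snap a continuous value onto a grid of discrete gradations
    (e.g., whole degrees), introducing a finite dimension
    (truncation) error."""
    return step * round(value / step)

# A temperature of pi degrees C cannot be recorded exactly:
true_temp = math.pi

# We write it down as 3.14 C (finite precision error, ~0.0016)...
recorded = finite_precision(true_temp, 2)
precision_error = abs(true_temp - recorded)

# ...or we read it off a thermometer graduated in whole degrees,
# getting 3 C (finite dimension error, ~0.14):
graduated = finite_dimension(true_temp, 1.0)
dimension_error = abs(true_temp - graduated)

print(recorded, graduated)  # 3.14 3.0
```

Either way a limit has been set, and the difference between the real value and the manageable finite value is the price of working with it at all.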
In so far as quantification is necessary to the scientific method, finite dimension errors are necessary to the scientific method. In several posts (e.g., Axioms and Postulates in Strategy) I have cited Carnap’s tripartite distinction among scientific concepts, the three being classificatory, comparative, and quantitative concepts. Carnap characterizes the emergence of quantitative scientific concepts as the most sophisticated form of scientific thought, but in reviewing Carnap’s scientific concepts in the light of finite precision errors and finite dimension errors, it is immediately obvious that classificatory concepts and comparative concepts do not necessarily involve finite precision errors and finite dimension errors. It is only with the introduction of quantitative concepts that science becomes sufficiently precise that its precision forces compromises upon us. However, I should point out that classificatory concepts routinely force us to accept finite dimension errors, although they do not involve finite precision errors. The example given by Foucault, quoted above, illustrates the inherent tension in classificatory concepts.
We accept finite precision errors and finite dimension errors as the price of doing science, and indeed as the price of engaging in rational thought. As Foucault implied in the above quote, the healthy and sane mind must draw lines and define limits and call a halt to things. Sometimes these limits are close to being arbitrary. We retain the ambition of “carving nature at the joints,” but we accept that we can’t always locate the joint but at times must cleave the carcass of nature regardless.
For this willingness to draw lines and establish limits and to call a halt to proceedings I will give the name The Truncation Principle, since it is in virtue of cutting off some portion of the world and treating it as though it were a unified whole that we are able to reason about the world.
As I mentioned above, I have discussed this problem previously, and in my discussion I noted that I wanted to give an exposition of a principle and a fallacy, but that I did not have a name for it yet, so I called it An Unnamed Principle and an Unnamed Fallacy. Now I have a name for it, and I will use this name, i.e., the truncation principle, from now on.
Note: I was tempted to call this principle the “baby retention principle” or even the “hang on to your baby principle” since it is all about the commonsense notion of not throwing out the baby with the bathwater.
In An Unnamed Principle and an Unnamed Fallacy I initially formulated the principle as follows:
The principle is simply this: for any distinction that is made, there will be cases in which the distinction is problematic, but there will also be cases when the distinction is not problematic. The correlative unnamed fallacy is the failure to recognize this principle.
What I most want to highlight is that when someone points out that there are gray areas that seem to elude classification by any clear-cut distinction, this is sometimes used as a skeptical argument intended to undercut the possibility of making any distinctions whatsoever. The point is that the existence of gray areas and problematic cases does not address the other cases (possibly even the majority of cases) for which the distinction isn’t in the least problematic.
A distinction that admits of problematic cases not clearly falling on one side of the distinction or the other may yet have other cases that are clearly decided by the distinction in question. This might seem too obvious to mention, but distinctions that admit of problematic instances are often impugned and rejected for this reason. Admitting of no exceptions whatsoever is an unrealistic standard for a distinction.
I hope to be able to elaborate on this formulation as I continue to think about the truncation principle and its applications in philosophical, formal, and scientific thought.
Usually when we hear “truncation” we immediately think of the geometrical exercise of regularly cutting away parts of the regular (Platonic) solids, yielding truncated polyhedra and converging on rectified polyhedra. This is truncation in space. Truncation in time, on the other hand, is what is more commonly known as historical periodization. How exactly one historical period is to be cut off from another is always problematic, not least due to the complexity of history and the sheer number of outliers that seem to falsify any attempt at periodization. And yet, we need to break history up into comprehensible chunks. When we do so, we engage in temporal truncation.
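Temporal truncation can be sketched as a binning procedure. In the toy example below (Python), the cut-off years and period names are illustrative inventions of mine, and they are precisely the kind of ad hoc boundaries under discussion: every borderline year is simply forced onto one side or the other.

```python
import bisect

# Illustrative cut-off years; the boundaries themselves are exactly
# the sort of ad hoc truncation discussed in the text.
BOUNDARIES = [476, 1453, 1789]
PERIODS = ["Antiquity", "Middle Ages", "Early Modern", "Modern"]

def period_of(year: int) -> str:
    # bisect_right forces every year, however borderline, into exactly
    # one period: the continuum of history cut into comprehensible chunks.
    return PERIODS[bisect.bisect_right(BOUNDARIES, year)]
```

That the function returns exactly one label for every year, including the contested boundary years, is the formal analogue of the historian’s decision to call a halt and draw the line somewhere.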
All the problems of philosophical logic that present themselves to the subtle and perceptive mind when contemplating a spatial truncation, as, for example, in defining the Pacific Ocean — where exactly does it end in relation to the Indian Ocean? — occur in spades in making a temporal truncation. Yet if rational inquiry is to begin (and here we do not even raise the question of where rational inquiry ends) we must make such truncations, and our initial truncations are crude and mostly ad hoc concessions to human finitude. Thus I introduce the truncation principle as an explicit justification of the truncations we employ throughout our reasoning.
And, as if we hadn’t already laid up enough principles and distinctions for today, here is a principle of principles of distinctions: every principled distinction implies a fallacy that takes the form of neglecting this distinction. With an ad hoc distinction there is no question of fallacy, because there is no principle to violate. Where there is a principle involved, however, the violation of the principle constitutes a fallacy.
Contrariwise, every fallacy implies a principled distinction that ought to have been made. If we observe the appropriate principled distinctions, we avoid fallacies, and if we avoid fallacies we appropriately distinguish that which ought to be distinguished.
. . . . .
. . . . .
. . . . .
4 May 2012
When a future science of civilizations begins to take shape, it will need to distinguish broad categories or families of civilizations, or, if you will, species of civilizations. In so far as civilizations are an outgrowth of biological species, they are an extension of biology, and it is appropriate to use the terminology of species to characterize civilizations.
Just a few days ago in A Copernican Conception of Civilization I distinguished between eocivilization (i.e., terrestrial civilizations), exocivilization (extraterrestrial civilizations), and astrocivilization (an integrated conception of eo- and exocivilization taken together). This is a first step in identifying species of civilizations.
Given that astrocivilization follows directly from (one could say, supervenes upon) astrobiology, it is particularly apt to extend the definition of astrobiology to astrocivilization, and so in A Copernican Conception of Civilization I paraphrased the NASA definition of astrobiology, mutatis mutandis, for civilization. Thus astrocivilization comprises…
…the study of the civilized universe. This field provides a scientific foundation for a multidisciplinary study of (1) the origin and distribution of civilization in the universe, (2) an understanding of the role of the structure of spacetime in civilizations, and (3) the study of the Earth’s civilizations in their terrestrial and cosmological context.
Some time ago in A First Image from the Herschel Telescope I made the suggestion that particular physical features of a galaxy might result in all civilizations arising within that galaxy sharing a certain feature or features based upon the features of the containing galaxy. This is a point worth developing at greater length.
Of the images of the M51 galaxy I wrote:
If there are civilizations in that galaxy, they must have marvelous constellations defined by these presumably enormous stars, and that one star at the top of the image seems to be brighter than any other in that galaxy. It would have a special place in the mythologies of the peoples of that galaxy. And the peoples of that galaxy, even if they do not know of each other, would nevertheless have something in common in virtue of their relation to this enormous star. We could, in this context, speak of a “family” of civilizations in this galaxy all influenced by the most prominent stellar feature of the galaxy of which they are a part.
We can generalize about and extrapolate from this idea of a family of civilizations defined by the prominent stellar features of the galaxy in which they are found. If a galaxy has sufficiently prominent physical features that can be witnessed by sentient beings, these features will have a place in the life of these sentient beings, and thus by extension a place in the civilizations of these sentient beings.
There is a sense in which it seems a little backward to start from the mythological commonalities of civilizations based upon their view of the cosmos, but it is only appropriate, because this is where cosmology began for human beings. If we remain true to the study of astrocivilization as including, “the search for evidence of the origins and early evolution of civilization on Earth,” the origins and early evolution of civilization on earth were at least in part derived from early observational cosmology. We began with myths of the stars, and it is to be expected that many if not most civilizations will begin with myths of the stars. Moreover, these myths will be at least in part a function of the locally observable cosmos.
The more expected progress of thought would be to start with how the physical features of a particular galaxy or group of galaxies would affect the physical chemistry of life within this galaxy or these galaxies, and how life so constituted would go on to constitute civilization. These are important perspectives that a future science of civilizations would also include.
Simply producing a taxonomy of civilizations based on mythological, physical, biological, sociological, and other factors would only be the first step of a scientific study of astrocivilization. As I have noted in Axioms and Postulates in Strategy, Carnap distinguished between classificatory, comparative, and quantitative scientific concepts. Carnap suggested that science begins with classificatory conceptions, i.e., with a taxonomy, but must in the interests of rigor and precision move on to the more sophisticated comparative and quantitative concepts of science. More recently, in From Scholasticism to Science, I suggested that these conceptual stages in the development of science may also demarcate historical stages in the development of human thought.
It will only be in the far future, when we have evidence of many different civilizations, that we will be able to formulate comparative concepts of civilization based on the actual study of astrocivilization, and it is only after we have graduated to comparative concepts in the science of astrocivilization that we will be able to formulate quantitative measures of civilization informed by the experience of many distinct civilizations.
At present, we know only the development of civilizations on the earth. This has not prevented several thinkers from drawing general conclusions about the nature of civilization, but it is not enough of a sample to say anything definitive about, “the origin, evolution, distribution, and future of civilization in the universe.” The civilizations of the earth represent a single species, or, at most, a single genus of civilization. We will need to study the independent origins and development of civilization in order to have a valid basis of comparison. We need to be able to see civilization as a part of cosmological evolution; until that time, we are limited to a quasi-Linnaean taxonomy of civilization, based on observable features in common; after we have a perspective of civilization as part of cosmological evolution, it will be possible to formulate a more Darwinian conception.
In the meantime, while we can understand theoretically the broad outlines of a study of astrocivilization, the actual content of such a science lies beyond our present zone of proximal development. And taking human knowledge in its largest possible context, we can see that our epistemic zone of proximal development supervenes on the maturity and extent of the civilization of which we are a part. This does not hold for more restricted forms of knowledge, but for forms of knowledge of which the study of astrocivilization is an example (i.e., human knowledge at its greatest extent) it becomes true. Not only individuals, but also whole societies and entire civilizations have zones of proximal development. A particular species of civilization facilitates a particular species of knowledge — but it also constrains other species of knowledge. This observation, too, would belong to an adequate conception of astrocivilization.
. . . . .
. . . . .
. . . . .
3 April 2012
One of the important ideas from Piaget’s influential conception of cognitive development is that of perspective taking. The ability to coordinate the perspectives of multiple observers of one and the same state of affairs is a cognitive skill that develops with time and practice, and the mastery of perspective taking coincides with cognitive maturity.
From a philosophical standpoint, the problem of perspective taking is closely related to the problem of appearance and reality, since one and the same state of affairs not only appears from different perspectives for different observers, it also appears from different perspectives for one and the same observer at different times. In other words, appearance changes — and presumably reality does not. It is interesting to note that developmental psychologists following Piaget’s lead have in fact conducted tests with children in order to understand at what stage of development they can consistently distinguish between appearance and reality.
Just as perspective taking is a cognitive accomplishment — requiring time, training, and natural development — and not something that happens suddenly and all at once, the cognitive maturity of which perspective taking is an accomplishment does not occur all at once. Both maturity and perspective taking continue to develop as the individual develops — and I take it that this development continues beyond childhood proper.
While I find Piaget’s work quite congenial, the developmental psychology of Erik Erikson strikes me as greatly oversimplified, with its predictable crises at each stage of life, and the implicit assumption that if you aren’t undergoing some particular crisis that strikes most people at a given period of life, then there is something wrong with you. That being said, what I find of great value in Erikson’s work is his insistence that development continues throughout the human lifespan, and does not come to a halt after a particular accomplishment of cognitive maturity is achieved.
Piagetian cognitive development in terms of perspective taking can easily be extended throughout the human lifespan (and beyond) by the observation that there are always new perspectives to take. As civilization develops and grows, becoming ever more comprehensive as it does so, the human beings who constitute this civilization are forced to formulate always more comprehensive conceptions in order to take the measure of the world being progressively revealed to us. Each new idea that takes the measure of the world at a greater order of magnitude presents the possibility of a new perspective on the world, and therefore the possibility of a new achievement in terms of perspective taking.
The perspectives we attain constitute a hierarchy that begins with the first accomplishment of the self-aware mind, which is egocentric thought. Many developmental psychologists have described the egocentric thought patterns of young children, though the word “egocentric” is now widely avoided because of its moralizing connotations. I, however, will retain the term “egocentric,” because it helps to place this stage within a hierarchy of perspective taking.
The egocentric point of departure for human cognition does not necessarily disappear even when it is theoretically surpassed, because we know egocentric thinking so well from the nearly universal phenomenon of human selfishness, which is where the moralizing connotation of “egocentric” no doubt has its origin. An individual may become capable of coordinating multiple perspectives and still value the world exclusively from the perspective of self-interest.
In any case, the purely egocentric thought of early childhood confines the egocentric thinker to a tightly constrained circle defined by one’s personal perspective. While this is a personal perspective, it is also an impersonal perspective in so far as all individuals share this perspective. It is what Francis Bacon called the “idols of the cave,” since every human being, “has a cave or den of his own, which refracts and discolours the light of nature.” This has been well described in a passage from F. H. Bradley made famous by T. S. Eliot, because the latter quoted it in a footnote to The Waste Land:
My external sensations are no less private to myself than are my thoughts or my feelings. In either case my experience falls within my own circle, a circle closed on the outside; and, with all its elements alike, every sphere is opaque to the others which surround it… In brief, regarded as an existence which appears in a soul, the whole world for each is peculiar and private to that soul.
F. H. Bradley, Appearance and Reality, p. 346, quoted by T. S. Eliot in footnote 48 to The Waste Land, “What the Thunder Said”
I quote this passage here because, like my retention of the term “egocentric,” it can help us to see perspectives in perspective, and it helps us to do so because we can think of expanding and progressively more comprehensive perspectives as concentric circles. The egocentric perspective is located precisely at the center, and the circle described by F. H. Bradley is the circle within which the egocentric perspective prevails.
The next most comprehensive perspective taking beyond the transcendence of the egocentric perspective is the transcendence of the ethnocentric perspective. The ethnocentric perspective corresponds to what Bacon called the “idols of the marketplace,” such that this perspective is, “formed by the intercourse and association of men with each other.” The ethnocentric perspective can also be identified with the sociosphere, which I recently discussed in Eo-, Exo-, Astro- as an essentially geocentric conception which, in a Copernican context, should be overcome.
Beyond ethnocentrism and its corresponding sociosphere there is ideocentrism, which Bacon called the “idols of the theater,” and which we can identify with the noösphere. Bacon well described the ideocentric perspective in terms of philosophical systems, such that, “all the received systems are but so many stage-plays, representing worlds of their own creation after an unreal and scenic fashion.” Trans-ethnic communities of ideology and belief, like the world’s major religions and political ideologies, represent the ideocentric perspective.
The transcendence of the ideocentric perspective by way of more comprehensive perspective taking brings us to the anthropocentric perspective, which can be identified with the anthroposphere (still a geocentric and pre-Copernican conception, as with the other -spheres mentioned above). The anthropocentric perspective corresponds to Bacon’s “idols of the tribe,” which Bacon described thus:
“The Idols of the Tribe have their foundation in human nature itself, and in the tribe or race of men. For it is a false assertion that the sense of man is the measure of things. On the contrary, all perceptions as well of the sense as of the mind are according to the measure of the individual and not according to the measure of the universe. And the human understanding is like a false mirror, which, receiving rays irregularly, distorts and discolours the nature of things by mingling its own nature with it.”
Bacon was limited by the cosmology of his time so that he could not readily identify further idols beyond the anthropocentric idols of the (human) tribe, just as we are limited by the cosmology of our time. Yet we do today have a more comprehensive perspective than Bacon, and we can identify a few more stages of more comprehensive perspective taking. Beyond the anthropocentric perspective there is the geocentric perspective, the heliocentric perspective, and even what we could call the galacticentric perspective — as when early twentieth century cosmologists argued over whether the Milky Way was the only galaxy and constituted an “island universe.” Now we know that there are other galaxies, and we can be said to have transcended the galacticentric perspective.
As I wrote above, as human knowledge has expanded and become more comprehensive, ever more comprehensive perspective taking has come about in order to grasp the concepts employed in expanding human knowledge. There is every reason to believe that this process will be iterated indefinitely into the future, which means that perspective taking also will be indefinitely iterated into the future. (I attempted to make a similar and related point in Gödel’s Lesson for Geopolitics.) Therefore, further levels of cognitive maturity wait for us in the distant future as accomplishments that we cannot yet attain at this time.
This last observation allows me to cite one more relevant developmental psychologist, namely Lev Vygotsky, whose cognitive mediation theory of human development makes use of the concept of a Zone of proximal development (ZPD). Human development, according to Vygotsky, takes place within a proximal zone, and not at any discrete point or stage. Within the ZPD, certain accomplishments of cognitive maturity are possible. In the lower ZPD there is the actual zone of development, while in the upper ZPD there lies the potential zone of development, which can be attained through cognitive mediation by the proper prompting of an already accomplished mentor. Beyond the upper ZPD, even if there are tasks yet to be accomplished, they cannot be accomplished within this particular ZPD.
With the development of human knowledge, we’re on our own. There is no cognitive mediator to help us over the hard parts and assist us in the more comprehensive perspective taking that will mark a new stage of cognitive maturity and possibly also a new zone of proximal development in which new accomplishments will be possible. But this has always been true in the past, and yet we have managed to make these breakthroughs to more comprehensive perspectives of cognitive maturity.
I hope that the reader sees that this is both hopeful and sad. Hopeful because this way of looking at human knowledge suggests indefinite progress. Sad because we will not be around to see the accomplishments of cognitive maturity that lie beyond our present zone of proximal development.
. . . . .
. . . . .
. . . . .
29 March 2012
Science has become central to industrial-technological civilization. I would define at least one of the properties that distinguishes industrial-technological civilization from agriculturalism or nomadism as the conscious application of science to technology, and the conscious application in turn of technology to industrial production. Prior to industrial-technological civilization there were science and technology and industry, but the three were not systematically interrelated and consciously pursued with an eye toward steadily increasing productivity.
The role of science within industrial-technological civilization has given science and scientists a special role in society. This role is not the glamorous role of film and music and athletic celebrities, and it is not the high-flying role of celebrity bankers and fund managers and executives, but it is nevertheless a powerful role. As Shelley said that poets were the unacknowledged legislators of the world, so we can say that scientists are the unacknowledged legislators of industrial-technological civilization. Foucault came close to saying this when he said that doctors are the strategists of life and death.
I have previously discussed the ideological role of science in the contemporary world in The Political Uses of Science. Perhaps the predominant ideological function of science today is the role of “big science” — enormous research projects backed by government, industry, and universities that employ the talents of hundreds if not thousands of scientists. When Kuhnian normal science has this kind of backing, it is difficult for marginal scientific enterprises to compete. Big science moves markets and moves societies not because it is explicitly ideological in character, but because it is effective in meeting practical needs (though these needs are socially defined by the society in which science functions as a part).
Despite the fact that progress in scientific research is driven by the falsification and revision of theories through the expedient of experimentation, the scientific community has been surprisingly successful in closing ranks behind the most successful scientific theories of our time and presenting a united front that does not really give an accurate impression of the profound differences that separate scientists. Often a scientist spends an entire career trying to get a hearing for his or her idea, and this effort is not always successful. There are very real and bitter differences between the advocates of distinct scientific theories. The scientist sacrifices a life to research in a way not unlike the soldier who sacrifices his life on the battlefield: each uses up a life for a cause.
I have some specific examples in mind when I say that scientists have been successful at closing ranks behind what Kuhn would have called “normal science.” I have written about big bang cosmology and quantum theory in this connection. In Conformal Cyclic Cosmology I noted at least one theory seeking empirical evidence for the world prior to the big bang, while in The limits of my language are the limits of my world I discussed some recent experiments that seem to give us more knowledge of the quantum world than traditional interpretations of quantum theory would seem to suggest is possible.
No one of a truly curious disposition could ever be satisfied with the big bang theory, except in so far as it is but one step — and an admittedly very large step — toward a larger natural history of the universe. Given that the entire observable universe may be the result of a single big bang, any account of the world beyond or before the universe defined by the big bang presents possibly insuperable difficulties for observational cosmology. But the mind does not stop with observational cosmology; the mind does not stop even when presented with obstacles that initially seem insuperable. Slowly and surely the mind seeks the gradual way up what Dawkins called Mount Improbable.
Despite the united front that supports fundamental scientific theories (the sorts of science that Quine would have placed near the center of the web of belief), we know from the examples of Penrose’s conformal cyclic cosmology and the recent experiments attempting to simultaneously measure the position and velocity of quantum particles that scientists are continuing to think beyond the customary interpretations of theories.
The often-repeated claims that space and time were created simultaneously in the big bang and that it is pointless to ask what came before the big bang (as earlier generations were assured that it was illegitimate to ask “Who made God?”), and the claims of the impossibility of simultaneous measurements of a quantum particle’s position and velocity, have not stopped the curious from probing beyond these barriers to knowledge. One must, of course, be careful, for there is a danger of being seen as a crackpot, so such inquiries are kept quiet until some kind of empirical evidence can be produced. But before the evidence can be sought, there needs to be an idea of what to look for, and an idea of what to look for comes from a theory. That theory, in turn, must exceed the established interpretations of science if it is to look for anything new.
We know what happens when scientists not only say that something is impossible or unknowable, but also accept that certain things are impossible or unknowable and actually cease to engage in inquiry, and make no attempt to think beyond the limits of accepted theories: we get a dark age. A recent book has spoken of the European middle ages as The Closing of the Western Mind. (In the Islamic world a very similar phenomenon was called “Taqlid” or, “the closing of the gates of Ijtihad“.) When scientists not only say that nothing more can be known, but actually act as though nothing more can be known, and cease to question normal science, this is when intellectual progress stops, and this has happened several times in human history (although I know that this is a controversial position to argue; cf. my The Phenomenon of Civilization Revisited).
It is precisely the fact that science continues to be consciously and systematically pursued in the modern era despite many claims that everything knowable was known that sets industrial-technological civilization apart from all previous iterations of civilization.
Science goes on behind the scenes.
. . . . .
. . . . .
. . . . .
19 March 2012
This post has been superseded by Eo-, Eso-, Exo-, Astro-, which both corrects and extends what I wrote below.
The Philosophical Significance of Astrobiology as a
Cosmological Extrapolation of Terrestrial Biology
In yesterday’s Commensurable Perspectives I finished with this observation:
Ecology is the master world-narrative that unifies the sub-narratives employed by individual species in virtue of their perceptual and cognitive architecture. Ultimately, astrobiology would constitute the universal narrative that would unify the ecological narratives of distinct worlds.
The naturalistic narrative has the power to unify even across species and across worlds. This power may not be particularly evident at present, but in the long term future of our species (if our species does in fact have a long term future) this power will prove to be crucial.
If indeed astrobiology is the universal narrative of life, that gives astrobiology a privileged position among the sciences. That is a tall order. But what is astrobiology? At one time I had heard both the terms “exobiology” and “astrobiology” and I was not quite clear about the exact difference between the two, or how each was defined. Thereby hangs a tale. The distinction between the two is in fact a very interesting story, and it is a story to which an entire book has been devoted, The Living Universe: NASA and the Development of Astrobiology, by Steven J. Dick and James E. Strick.
I urge the reader to get this book and peruse it for yourself for the detailed version of the emergence of astrobiology as a scientific discipline. I will give only the bare bones of that story here, which will be only enough to grasp the crucial concepts involved. And our interest is in the concepts, not the personalities.
Exobiology is the older term, introduced by Joshua Lederberg (first used in a public lecture in 1960), and contrasted by him to eobiology. Exobiology has some currency in the public mind, but I didn’t know about eobiology until I read about the history of the discipline. However, the contrast between the two terms is conceptually important. Exobiology is concerned with biology off the surface of the earth, while eobiology is biology on the surface of the earth. (cf. p. 29) In other words, all biological science prior to human spaceflight was eobiology, even if we didn’t know that it was eobiology. Another way to formulate this distinction is to say that eobiology is the biology of the terrestrial biosphere, while exobiology is the biology of everything else.
In the book The Living Universe: NASA and the Development of Astrobiology the authors give a lot of background on the internal politics and budgeting of NASA and how this affected the emergence of astrobiology. It is an interesting story, but I will not go into it here, as our interest at present is exclusively with the conceptual infrastructure of the discipline. Suffice it to say that in 1996 the first attempts were made to define astrobiology (cf. p. 202), and within a couple of years there was a virtual Astrobiology Institute.
“Astrobiology is the study of the origin, evolution, distribution, and future of life in the universe. This multidisciplinary field encompasses the search for habitable environments in our Solar System and habitable planets outside our Solar System, the search for evidence of prebiotic chemistry and life on Mars and other bodies in our Solar System, laboratory and field research into the origins and early evolution of life on Earth, and studies of the potential for life to adapt to challenges on Earth and in space.”
“The study of the living universe. This field provides a scientific foundation for a multidisciplinary study of (1) the origin and distribution of life in the universe, (2) an understanding of the role of gravity in living systems, and (3) the study of the Earth’s atmospheres and ecosystems.”
The important lesson to take away from this is that astrobiology is the more comprehensive concept, and that in fact we can consider astrobiology the union of eobiology and exobiology. This sounds simple enough (and it is), but it is important to understand the conceptual leap that has been taken here.
From the perspective of astrobiology, earth sciences are only fragments of far larger and more comprehensive sciences. Just as all biology was once eobiology, the same observation can be made in regard to the other earth sciences, and the same tripartite conceptual distinction can be brought to the other earth sciences. We can formulate eogeology and exogeology unified in astrogeology; we can formulate eohydrology and exohydrology unified in astrohydrology; we can formulate eovulcanology and exovulcanology unified in astrovulcanology; we can formulate eoclimatology and exoclimatology unified in astroclimatology. All of these are cosmological extrapolations of earth sciences. One suspects that, in the future, the prefixes will be dropped and we will return, for example, to climatology simpliciter; but while the conceptual revolution is underway it is important to retain the prefixes as a reminder that science is no longer defined by the boundaries of the earth.
I assert that this is a conceptual leap of the first importance because what we have with astrobiology is the formulation of the first truly Copernican science; astrobiology includes eobiology but it is not exhausted by eobiology; it is supplemented by exobiology. The earth, for obvious reasons, remains important to us, but it no longer dictates the center of our science. All mature sciences will eventually need to take this Copernican turn and dethrone the earth from the center of their concern.
We can take a further step beyond this conceptual formulation of Copernican sciences by observing that traditional earth sciences began as local enterprises, and it has only been in recent decades that truly global sciences have emerged. These global sciences have culminated in objects of scientific study that take the world entire as their object. Thus biology has converged upon study of the biosphere; hydrology has converged on study of the hydrosphere; glaciology has culminated in the study of the cryosphere. Copernican sciences based on the model of astrobiology can go one better than this, transcending earth-defined “-spheres” in favor of more comprehensive concepts.
When I spoke last year on “The Moral Imperative of Human Spaceflight” at the NASA/DARPA 100 Year Starship Study symposium it was my intention to spend some time on the emergence of Copernican sciences, but I didn’t have enough time to elaborate; I cut most of that material out and still was rushed. The point that I wanted to make there was that the concepts of the biosphere, lithosphere, geosphere, hydrosphere, cryosphere, atmosphere, anthrosphere, sociosphere, noösphere, and technosphere are essentially Ptolemaic concepts. (If the proceedings of the symposium are published, and if my paper is included, it will contain my first sketch of Copernican sciences as transcending these earth-defined “-spheres.”) The Copernican Revolution entails the formulation of Copernican concepts to supersede Ptolemaic concepts, and this work is as yet unfinished. In some spheres of human thought, it has scarcely begun.
One way to transcend our Ptolemaic concepts and to replace them with Copernican concepts, and thus to extend the ongoing shift to a truly Copernican perspective, is to substitute for the earth-defined “-spheres” a conception of the object of the sciences not dependent upon the earth, and this can be done by defining, respectively, biospace (in place of the biosphere), lithospace, geospace, hydrospace, cryospace, atmospace, anthrospace, sociospace, noöspace, and technospace. In so far as we can facilitate the emergence of Copernican sciences, we can contribute to the ongoing Copernican Revolution, which will someday culminate in a Copernican civilization (if we do not first destroy ourselves).
We can pass beyond the earth sciences and the natural sciences and similarly extend our conception of the social and political sciences. Although concepts from the social sciences are not usually expressed in geocentric terms — except for the above-mentioned anthrosphere, sociosphere, noösphere, and technosphere (which are not employed very often) — our social and political thought is usually even more tied to planetary prejudices than the concepts of the natural sciences. Thus we can extend our conception of politics by distinguishing between eopolitics and exopolitics, both of which are subsumed under astropolitics. Similarly, we can formulate eoeconomics and exoeconomics, subsumed by astroeconomics, eostrategy and exostrategy, subsumed by astrostrategy, and so forth.
As a final note, it is ironic that the breakthrough to a Copernican science should occur first with biology, because biology was among the last of the sciences to attain scientific status. Prior to Darwin, biological theories were, with but a few exceptions, essentially theological theories. Darwin put biology on a firm scientific footing and created the discipline in its modern form. Thus biology was among the last of the sciences to attain a modern scientific form, though it was the first to attain a Copernican form.
. . . . .
This post has been superseded by Eo-, Eso-, Exo-, Astro-.
. . . . .
17 March 2012
One of the greatest contributions to science in the twentieth century was Jane Goodall’s study of chimpanzees in the wild at Gombe, Tanzania. Although Goodall’s work represents a major advance in ethology, it did not come without criticism. Here is how Adrian G. Weiss described some of this criticism:
Jane received her Ph.D. from Cambridge University in 1965. She is one of only eight other people to earn a Ph.D. without a bachelor’s (Montgomery 1991). Her adviser, Robert Hinde, said her methods were not professional, and that she was doing her research wrong. Jane’s major mistake was naming her “subjects”. The animals should be given numbers. Jane also used descriptive, narrative writing in her observations and calculations. She anthropomorphized her animals. Her colleagues and classmates thought she was “doing all wrong”. Robert Hinde did approve her thesis, even though she returned with all of his corrections with the original names and anthropomorphizing.
Most innovative science breaks the established rules of its time. If the innovative science is accepted, it eventually becomes the basis of a new orthodoxy. Given time, that orthodoxy will be displaced as well, as more innovative work demonstrates new ways of acquiring knowledge. As the old orthodoxy passes out of fashion, it often falls into neglect or becomes the target of criticism as vicious as that once directed at new and innovative research.
I have to imagine that it was this latter phenomenon of formerly accepted scientific discourses falling out of favor and becoming the target of ridicule that inspired one of Foucault’s most famous quotes (which I have cited previously on numerous occasions): “A real science recognizes and accepts its own history without feeling attacked.” Here is the same quote with more context:
Each of my works is a part of my own biography. For one or another reason I had the occasion to feel and live those things. To take a simple example, I used to work in a psychiatric hospital in the 1950s. After having studied philosophy, I wanted to see what madness was: I had been mad enough to study reason; I was reasonable enough to study madness. I was free to move from the patients to the attendants, for I had no precise role. It was the time of the blooming of neurosurgery, the beginning of psychopharmacology, the reign of the traditional institution. At first I accepted things as necessary, but then after three months (I am slow-minded!), I asked, “What is the necessity of these things?” After three years I left the job and went to Sweden in great personal discomfort and started to write a history of these practices. Madness and Civilization was intended to be a first volume. I like to write first volumes, and I hate to write second ones. It was perceived as a psychiatricide, but it was a description from history. You know the difference between a real science and a pseudoscience? A real science recognizes and accepts its own history without feeling attacked. When you tell a psychiatrist his mental institution came from the lazar house, he becomes infuriated.
“Truth, Power, Self: An Interview with Michel Foucault,” 25 October 1982, in Martin, L. H., et al. (1988) Technologies of the Self: A Seminar with Michel Foucault, London: Tavistock, pp. 9–15
It remains true that many representatives of even the most sophisticated contemporary sciences react as though attacked when reminded of their discipline’s history. This is true not least because much of science has an unsavory history — at least by contemporary standards — and this gives us reason to believe that many of our efforts today will, in the fullness of time, be consigned to the unsavory inquiries of the past, carrying with them norms, evaluations, and assumptions that are no longer considered acceptable in polite society. This is, of course, deeply ironic (I could say hypocritical if I wanted to be tendentious), since the standard of acceptability in polite society is one of the most stultifying norms imaginable.
It has long been debated within academia whether history is a science, or an art, or perhaps even a sui generis literary genre with a peculiar respect for evidence. There is no consensus on this question, and I suspect it will continue to be debated so long as the Western intellectual tradition persists. History, at least, is a recognized discipline. I know of no recognized discipline of the study of civilizations, which in part is why I recently wrote The Future Science of Civilizations.
There is, at present, no science of civilization, though there are many scientists who have written about civilization. I don’t know if there are any university departments of “Civilization Studies,” but if there aren’t, there should be. We can at least say that there is an established literary genre, partly scientific, that is concerned with the problems of civilization (including figures as diverse as Toynbee and Jared Diamond). Even among philosophers, who have a great love of writing “the philosophy of x,” there are very few works on “the philosophy of civilization” — some, yes, but not many — and, I suspect, few if any departments devoted to the philosophy of civilization. This is a regrettable ellipsis.
When, in the future, we do have a science of civilization, and perhaps also a philosophy of civilization (or, at very least, a philosophy of the science of civilization), this science will have to come to terms with its past as every science has had to (or eventually will have to). The prehistory of the science of civilization is already fairly well established, and there are several known classics of the genre. Many of these classics of the study of civilization are as thoroughly unsavory by contemporary standards as one could possibly hope. The history of pronouncements on civilization is filled with short-sighted, baldly prejudiced, privileged, ethnocentric, and thoroughly anthropocentric formulations. For all that, they still may have something of value to offer.
A technological typology of human societies that is no longer in favor is the tripartite distinction between savagery, barbarism, and civilization. This belongs to the prehistory of the prehistory of civilization, since it establishes the natural history of civilization and its antecedents.
Edward Burnett Tylor proposed that human cultures developed through three basic stages consisting of savagery, barbarism, and civilization. The leading proponent of this savagery-barbarism-civilization scale came to be Lewis Henry Morgan, who gave a detailed exposition of it in his 1877 book Ancient Society (the entire book is conveniently available online for your reading pleasure). A quick sketch of the typology can be found at ANTHROPOLOGICAL THEORIES: Cross-Cultural Analysis.
One of the interesting features of Morgan’s elaboration of Tylor’s idea is his concern to define his stages in terms of technology. From the “lower status of savagery” with its initial use of fire, through a middle stage at which the bow and arrow is introduced, to the “upper status of savagery” which includes pottery, each stage of human development is marked by a definite technological achievement. Similarly with barbarism, which moves through the domestication of animals, irrigation, metal working, and a phonetic alphabet. This breakdown is, in its own way, more detailed than many contemporary decompositions of human social development, as well as being admirably tied to material culture and therefore amenable to confirmation and disconfirmation through archaeological research.
Today, of course, we are much too sophisticated to use terms like “savagery” or “barbarism.” These terms are now held in ill repute, as they are thought to suggest strongly negative evaluations. A friend of mine who studied anthropology told me that the word “primitive” is now referred to as “the P-word” within the discipline, so unacceptable has it become. To call a people (even an historical people now extinct) “savage” is similarly considered beyond the pale. We don’t call people “savage” or “primitive” any more. But the danger of these terminological obsessions is that we get hung up on the terms and no longer consider theories on their theoretical merits. Jane Goodall’s theoretical work was eventually accepted despite her use of proper names in ethology, and now it is not at all uncommon for researchers to name subjects that belong to other species.
Some theoreticians, moreover, have come to recognize that there are certain things that can be learned through sympathizing with one’s subject that simply cannot be learned in any other way (score one posthumously for Bergson’s conception of “intellectual sympathy”). Of course, science need not limit itself to a single paradigm of valid research. We can have a “big tent” of science with ample room for many methodologies, and hopefully also with plenty of room for disagreements.
It would be an interesting exercise to take a “dated” work like Lewis Henry Morgan’s book Ancient Society, leave the theoretical content intact, and change only the names. In fact, we could formalize Morgan’s gradations, using numbers instead of names just as Jane Goodall was urged to do. I suspect that Morgan’s work would be treated rather better in this form than it has been under its original terminology. We ought to ask ourselves why this is the case. Perhaps it is too much to hope for a “big tent” of science so capacious that it could hold Lewis Henry Morgan’s terminology alongside that of contemporary anthropology, but we have at least arrived at a big tent of science large enough to hold Jane Goodall’s proper names alongside tagged and numbered specimens.
. . . . .