Saturday



Last month, November 2016, marked the eighth anniversary of this blog. My first post, Opening Reflection, was dated 05 November 2008. Since then I have continued to post, although less frequently of late. I have become much less interested in tossing off a post about current events, and more interested in comprehensive and detailed analyses, though blog posts are rarely associated with comprehensivity or detail. But that’s how I roll.

It is interesting that we have two distinct and even antithetical metaphors to identify non-trivial modes of thought. I am thinking of “dig deep” or “drill down” on the one hand, and, on the other hand, “overview” or “big picture.” The two metaphors are not identical, but each implies a particular approach to non-triviality, with the former implying an immersion in a fine-grained account of anything, while the latter implies taking anything in its widest signification.

Ideally, one would like to be both detailed and comprehensive at the same time — formulating an account of anything that is, at once, both fine-grained and which takes the object of one’s thought in its widest signification. In most cases, this is not possible. Or, rather, we find this kind of scholarship only in the most massive works, like Gibbon’s Decline and Fall of the Roman Empire, or Mario Bunge’s Treatise on Basic Philosophy. Over the past hundred years or so, scholarship has been going in exactly the opposite direction. Scholars focus on a particular area of thought, and then produce papers, each one of which focuses even more narrowly on one carefully defined and delimited topic within a particular area of thought. There is, thus, a great deal of very detailed scholarship, and less comprehensive scholarship.

Previously in Is it possible to specialize in the big picture? I considered whether it is even possible to have a scholarly discipline that focuses on the big picture. This question is posed in light of the implied dichotomy above: comprehensivity usually comes at the cost of detail, and detail usually comes at the cost of comprehensivity.

Another formulation of this dichotomy that brings out other aspects of the dilemma would be to ask if it is possible to be rigorous about the big picture, or whether it is possible to give a detailed account of the big picture — a fine-grained overview, as it were. I guess this is one way to formulate my ideal: a fine-grained overview — thinking rigorously about the big picture.

While there is some satisfaction in being able to give a concise formulation of my intellectual ideal — a fine-grained overview — I cannot yet say if this is possible, or if the ambition is chimerical. And if the ambition for a fine-grained overview is chimerical, is it chimerical because finite and flawed human beings cannot rise to this level of cognitive achievement, or is it chimerical because it is an ontological impossibility?

While an overview may necessarily lack the detail of a close and careful account of anything, so that the two — overview and detail — are opposite ends of a continuum, implying the ontological impossibility of their union, I do know, on the other hand, that clear and rigorous thinking is always possible, even if it lacks detail. Clarity and rigor — or, if one prefers the canonical Cartesian formulation, clear and distinct ideas — is a function of disciplined thinking, and one can think in a disciplined way about a comprehensive overview. If one allows that a fine-grained overview can be finely grained in virtue of the fine-grained conceptual infrastructure that one employs in the exposition of that overview, then, certainly, comprehensive detail is possible in this respect (even if in no other).

I could, then, re-state my ambition as formulated in my opening reflection such that, “my intention in this forum to view geopolitics through the prism of ideas,” now becomes my intention to formulate a fine-grained overview of geopolitics through the prism of ideas. But, obviously, I now seldom post on geopolitics, and am out to bag bigger game. This is, I think, implicit in the remit of a comprehensive overview of geopolitics. F. H. Bradley famously said, “Short of the Absolute God cannot stop, and, having reached that goal, He is lost, and religion with Him.” We might similarly say, short of big history geopolitics cannot stop, and, having reached that goal, it is lost, and political economy with it.

. . . . .


Wednesday



In my previous post, The Study of Civilization as Rigorous Science, I drew upon examples from both Edmund Husserl and Bertrand Russell — the Godfathers, respectively, of contemporary continental and analytical philosophy — to illustrate some of the concerns involved in constituting a new science de novo, which is what a science of civilization must be.

In particular, I quoted Husserl to the effect that true science eschews “profundity” in favor of Cartesian clarity and distinctness. Since Husserl himself was none-too-clear a writer, his exposition of a distinction between profundity and clarity might not be especially clear. But another example occurred to me. There is a wonderful passage from Bertrand Russell in which he describes the experience of intellectual insight:

“Every one who has done any kind of creative work has experienced, in a greater or less degree, the state of mind in which, after long labour, truth, or beauty, appears, or seems to appear, in a sudden glory — it may be only about some small matter, or it may be about the universe. The experience is, at the moment, very convincing; doubt may come later, but at the time there is utter certainty. I think most of the best creative work, in art, in science, in literature, and in philosophy, has been the result of such a moment. Whether it comes to others as to me, I cannot say. For my part, I have found that, when I wish to write a book on some subject, I must first soak myself in detail, until all the separate parts of the subject matter are familiar; then, some day, if I am fortunate, I perceive the whole, with all its parts duly interrelated. After that, I only have to write down what I have seen. The nearest analogy is first walking all over a mountain in a mist, until every path and ridge and valley is separately familiar, and then, from a distance, seeing the mountain whole and clear in bright sunshine.”

Bertrand Russell, A History of Western Philosophy, Chapter XV, “The Theory of Ideas”

Russell returned on several occasions to this metaphor of seeing a mountain whole after having wandered in the fog of the foothills. For example:

“The time was one of intellectual intoxication. My sensations resembled those one has after climbing a mountain in a mist, when, on reaching the summit, the mist suddenly clears, and the country becomes visible for forty miles in every direction.”

Bertrand Russell, The Autobiography of Bertrand Russell: 1872-1914, Chapter 6, “Principia Mathematica”

…and again…

“Philosophical progress seems to me analogous to the gradually increasing clarity of outline of a mountain approached through mist, which is vaguely visible at first, but even at last remains in some degree indistinct. What I have never been able to accept is that the mist itself conveys valuable elements of truth. There are those who think that clarity, because it is difficult and rare, should be suspect. The rejection of this view has been the deepest impulse in all my philosophical work.”

Bertrand Russell, The Basic Writings of Bertrand Russell, Preface

Russell’s description of intellectual illumination, employing the metaphor of seeing a mountain whole, is an example of what I have called the epistemic overview effect — being able to place the parts of knowledge within a larger epistemic whole gives us a context for understanding that is not possible when we are confined to any parochial, local, or limited perspective.

If we employ Russell’s metaphor to illustrate Husserl’s distinction between the profound and the pellucid, we immediately see that an exposition confined to wandering in the foothills, enshrouded in mist and fog, has the character of profundity. But when the sun breaks through, the fog lifts, and the mist evaporates, we see clearly and distinctly that which we had before known only imperfectly, and at that point we are able to give an exposition in terms of Cartesian clarity and distinctness. Russell’s insistence that he never thought the mist contained any valuable elements of truth is of a piece with Husserl’s eschewal of profundity.

Just so, a science of civilization should surprise us with unexpected vistas when we see the phenomenon of civilization whole after having familiarized ourselves with each of its parts separately. When the moment of illumination comes, dispelling the mists of profundity, we realize that it is no loss at all to let go of the profundity that has, up to that time, been our only guide. The definitive formulation of a concept, a distinction, or a principle can suddenly cut through mists that we did not even realize were clouding our thoughts, revealing to us the perfect clarity that had eluded us. As the rejection of the mist was, for Russell, “the deepest impulse in all my philosophical work,” so too it is the deepest impulse in my attempt to understand civilization.

. . . . .


Monday



Contemporary scholarship is a hierarchy of specializations, though the hierarchy is not always obvious. A typical idiom employed today to describe specialization is that of “academic silos,” as though each academic specialization were tightly circumscribed by very high walls rarely breached. The idiom of “silos” points not to a hierarchy, but to a landscape of separate but equal and utterly isolated disciplines.

There are several taxonomies of the academic disciplines that arrange them hierarchically, as in the unity of science movement of twentieth-century logical empiricism, which sought to reduce all the sciences to physics. This isn’t what I have in mind when I say that contemporary scholarship is a hierarchy of specializations. I am, rather, recurring to an idea that appeared in the work of Alfred North Whitehead, and which was picked up by Buckminster Fuller (of geodesic dome fame).

We can think of Buckminster Fuller as a proto-techno-philosopher, and we know that techno-philosophy disdains the philosophical tradition and seeks to treat traditional philosophical problems de novo from the perspective of science and technology. In one of the rare instances of borrowing by techno-philosophy from traditional philosophy, Buckminster Fuller quoted Alfred North Whitehead, who was a bona fide philosopher.

In R. Buckminster Fuller’s Utopia or Oblivion: The Prospects for Humanity (Chapter 2, “The Music of the New Life”), Fuller identifies what he called “Whitehead’s dilemma,” following an observation made by Alfred North Whitehead about the accelerating pace of specialization in higher education. The dilemma is that the best and brightest students are channeled into specialized studies, and these studies become more specialized as they progress. But there remains a need for a coordinating function among specializations, even though all the best minds have already been channeled into specialist studies. That means that the dullest minds that remain are left with the task of the overall coordination of the specialist disciplines.

Whitehead formulated his dilemma in terms of academic specialization and governmental coordination of society, but there are “big picture” coordinating functions that have nothing to do with government. This is most especially evident in what I have called the epistemic overview effect, which is concerned with the “big picture” of knowledge. A comprehensive understanding of some specialist discipline, no less than an overall coordinating function, demands a grasp of the big picture. But the rise of specialization militates against comprehensive understanding in its widest aspect — where it is most needed.

The role of specialization in contemporary scholarship is ironic in historical perspective. It is ironic because, today, more students than ever before in history throng more institutions of higher learning than have ever before existed, and yet the traditional ideal of higher education was that of creating a well-rounded individual who had some degree of sophistication across a spectrum of scholarship. Specialization was once the function of the trades (something Whitehead also noted, cf. his Adventures of Ideas, Part One, Chap. 4, “Aspects of Freedom,” Section V; Whitehead’s distinction in this section between profession and craft is instructive). An individual either went on to further academic education in order to understand the wider relationship between the sciences and the humanities, or entered a trade school or an apprenticeship program and specialized in learning some skill or craft.

It would not be going too far to say that, if you want to understand the big picture, the last person you should talk to is a specialist. A specialist may simply refuse to talk about the big picture, or, if they do talk about the big picture, it will be through the lens of their specialty, which can be highly misleading as regards the big picture. Thus the big picture may be characterized as a body of knowledge in which there are no specialists and no experts. Can there be experts in comprehensive knowledge? Is it possible to specialize in the big picture? How would one go about specializing in the big picture, such that one’s neglect of detail and of the specializations of the special sciences would be a principled neglect, undertaken in order to focus on the details and patterns that emerge exclusively from an attempt to grasp the whole of the world, or the whole of the universe? This kind of specialization sounds counter-intuitive, but we must make the effort to formulate such a conception.

While prima facie counter-intuitive, we should immediately recognize that the idea of specializing in the big picture is nothing other than a particular application of the general principle of scientific abstraction. Science constructs abstract, simplified, idealized models of the world in order to understand processes and phenomena that, in the fullness of their presence, are far too complex to allow totality of knowledge. Recall that Wordsworth said we murder to dissect. The world in itself is intractable; the world of science is made tractable through abstraction; abstraction is the price that we pay for understanding. We must learn to pay that price willingly, if not cheerfully.

In asking if it is possible to specialize in the big picture, I am also in a sense asking if it is possible to think rigorously about the big picture, and thus we can also ask: Is it possible to think about the big picture with a clear scholarly conscience? Big picture thinking often invites careless and sloppy formulations, and this has brought big picture thinking into disrepute among those who wish to distance themselves from careless and sloppy thinking — which is to say, almost all contemporary philosophers, who take a special pride in the rigor of their formulations. And this is a rigor largely due to the kind of specialization that Whitehead identified.

There is a kind of implicit contrition in the contemporary philosophical passion for rigor and precision, since much traditional philosophy now seems painfully muddled and unclear, and this has been a stick that scientists have used to beat philosophers, and with which they have justified their fashionable anti-philosophy. But scientists, too, are guilty on this account. And whereas philosophers committed their sins against rigor in the past, scientists are committing their sins against rigor in the present. The pronouncements of scientists upon extra-scientific questions are an admirable attempt at comprehensive understanding, but they almost always take place in a context that ignores the history of the question addressed.

History, I think, is essential to the big picture. Indeed, I will go further and suggest that the emerging discipline of Big History offers the possibility of a discipline that can specialize in the big picture with the hope of rigorous formulations. We have need of such a discipline. At the 2014 IBHA conference, David Christian, in his keynote address (titled “Can I study everything, please?”), vividly traced the origins of his own interest in what would become big history to an experience of disappointment. He talked about going to school as a child with an initial sense of excitement that his big questions would be answered, only to find that his big questions were shunted aside.

How do you talk about the whole of time without inviting scholarly ridicule from those who have spent their entire careers seeking to accurately portray some small fragment of the whole? Is it possible to speak at this level of generality and still be “right” in any relevant sense? Big History seeks to be just such a discipline, and the big historians have done a remarkable job of integrating the results of the special sciences into a coherent whole. I have made the claim that big history need not reject any more specialized scholarship, but rather provides the overall framework within which all specialized studies can find a place. Big history is a “big tent” in which all scholarship can find a place.

Big History is now an established (albeit youthful) branch of historiography, but it could be more than this. Where Big History remains weak is in its theoretical formulations, and this is not a surprise. While Big Historians seek to portray philosophy and the humanities as part of the sweeping story of human civilization (itself a part of a larger cosmic history), they do not draw upon philosophy and the humanities in the same way that they draw upon the special sciences. There is, as yet, no philosophy of big history, and that means that there is, as yet, no systematic attempt to clarify and to extend the conceptions upon which Big History relies in its formulations. This remains to be done.

. . . . .


Sunday


Søren Aabye Kierkegaard, 05 May 1813 – 11 November 1855.

Kierkegaard’s Concluding Unscientific Postscript is an impassioned paean to subjectivity, which follows logically (if Kierkegaard will forgive me for saying so) from Kierkegaard’s focus on the individual. The individual experiences subjectivity, and, as far as we know, nothing else in the world experiences subjectivity, so that if the individual is the central ontological category of one’s thought, then the subjectivity that is unique to the individual will be uniquely central to one’s thought, as it is to Kierkegaard’s thought.

Another way to express Kierkegaard’s interest in the individual is to identify his thought as consistently ideographic, to the point of ignoring the nomothetic (on the ideographic and the nomothetic cf. Axes of Historiography). Kierkegaard’s account of the individual and his subjectivity falls within an overall ontology of individuals, and therefore within a continuum of contingency. Thus, in a sense, Kierkegaard represents a kind of object-oriented historiography (as a particular expression of an object-oriented ontology). From this point of view, one can easily see Kierkegaard’s resistance to Hegel’s lawlike, i.e., nomothetic, account of history, in which individuals are mere pawns at the mercy of the cunning of Reason.

At the present time, however, I will not discuss the implications of Kierkegaard’s implicit historiography, but rather his implicit futurism, though the two — historiography and futurism — are mirror images of each other, and I have elsewhere quoted Friedrich von Schlegel that, “The historian is a prophet facing backwards.” The same concern for the individual and his subjectivity is present in Kierkegaard’s implicit futurism as in his implicit historiography.

In Kierkegaard’s Concluding Unscientific Postscript, written under the pseudonym Johannes Climacus, we find the following way to distinguish the objective approach from the subjective approach:

The objective accent falls on WHAT is said, the subjective accent on HOW it is said.

Søren Kierkegaard, Concluding Unscientific Postscript, Translated from the Danish by David F. Swenson, completed after his death and provided with Introduction and Notes by Walter Lowrie, Princeton: Princeton University Press, 1968, p. 181

A few pages prior to this in the text, Kierkegaard tells us a story about the importance of the subjective accent upon how something is said:

The objective truth as such, is by no means adequate to determine that whoever utters it is sane; on the contrary, it may even betray the fact that he is mad, although what he says may be entirely true, and especially objectively true. I shall here permit myself to tell a story, which without any sort of adaptation on my part comes direct from an asylum. A patient in such an institution seeks to escape, and actually succeeds in effecting his purpose by leaping out of a window, and prepares to start on the road to freedom, when the thought strikes him (shall I say sanely enough or madly enough?): “When you come to town you will be recognized, and you will at once be brought back here again; hence you need to prepare yourself fully to convince everyone by the objective truth of what you say, that all is in order as far as your sanity is concerned.” As he walks along and thinks about this, he sees a ball lying on the ground, picks it up, and puts it into the tail pocket of his coat. Every step he takes the ball strikes him, politely speaking, on his hinder parts, and every time it thus strikes him he says: “Bang, the earth is round.” He comes to the city, and at once calls on one of his friends; he wants to convince him that he is not crazy, and therefore walks back and forth, saying continually: “Bang, the earth is round!” But is not the earth round? Does the asylum still crave yet another sacrifice for this opinion, as in the time when all men believed it to be flat as a pancake? Or is a man who hopes to prove that he is sane, by uttering a generally accepted and generally respected objective truth, insane? And yet it was clear to the physician that the patient was not yet cured; though it is not to be thought that the cure would consist in getting him to accept the opinion that the earth is flat. But all men are not physicians, and what the age demands seems to have a considerable influence upon the question of what madness is.

Søren Kierkegaard, Concluding Unscientific Postscript, Translated from the Danish by David F. Swenson, completed after his death and provided with Introduction and Notes by Walter Lowrie, Princeton: Princeton University Press, 1968, p. 174

These themes of individuality and subjectivity occur throughout Kierkegaard’s work, always expressed with humor and imagination — Kierkegaard’s writing itself is a testament to the individuality he so valued — as especially illustrated in the passage above. Kierkegaard engages in philosophy by telling a joke; would that more philosophy were written with similar panache.

From Kierkegaard we can learn that how the future is presented can mean the difference between a vision that inspires the individual and a vision that sounds like madness — and this is important. Implicit Kierkegaardian futurism forces us to see the importance of the individual in a schematic conception of the future that is often impersonal and without a role for the individual that the individual would care to assume. Worse yet, there are often aspects of futurism that seem to militate against the individual.

One of the great failings of the communist vision of the future — which inspired many in the twentieth century, and was a paradigm of European manifest destiny such as I described in The Idea and Destiny of Europe — was its open contempt for the individual, which is a feature of most collectivist thought. Not only is it true that, “Where there is no vision, the people perish,” but one might also say that without a personal vision, the people perish.

One of the ways in which futurism has been presented in a manner that almost seems contrived to deny and belittle the role of the individual is the example of the “twin paradox” in relativity theory. I have discussed this elsewhere (cf. Stepping Stones Across the Cosmos) because I find it so interesting. The twin paradox is used to explain one of the oddities of relativistic time dilation: a clock that is accelerated away and later returns will have recorded less elapsed time than a clock that remains stationary.

In the twin paradox, it is postulated that, of two twins on Earth, the two say their goodbyes and one remains on Earth while another travels a great distance (perhaps to another star) at relativistic velocities. When the traveling twin returns to Earth, he finds that his twin has aged beyond recognition and the two scarcely know each other. This already poignant story can be made all the more poignant by postulating an even longer journey in which an individual leaves Earth and returns to find everyone he knew long dead, and perhaps even the places, the cities, and the monuments once familiar to him now long vanished.
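To make the asymmetry of the story concrete, here is a minimal sketch of the arithmetic behind it, assuming the standard special-relativistic time-dilation factor and an idealized journey with an instantaneous turnaround; the cruising speed and durations are illustrative numbers of my own, not drawn from any particular telling of the paradox.

import math

def traveler_years(earth_years, speed_fraction_of_c):
    """Proper time experienced by the traveling twin, assuming a constant
    cruising speed and neglecting the acceleration phases."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)  # Lorentz factor
    return earth_years / gamma

# Illustrative round trip: 50 years pass on Earth while the traveler
# cruises at 0.8c; the traveler returns having aged only about 30 years.
print(traveler_years(50.0, 0.8))  # ≈ 30.0

Even at this comparatively modest relativistic speed, the returning twin has lost two decades of shared life, which is precisely the poignancy on which the parable trades.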

The twin paradox, as it is commonly told, is a story, and, moreover, is a parable of cosmic loneliness. We would probably question the sanity of any individual who undertook a journey of space exploration under these conditions, and rightly so. If we imagine this story set within a larger story, the only kind of character who would undertake such a journey would be the villain of the piece, or an outcast, like a crazed scientist maddened by his lack of human contact and obsessed exclusively with his work (a familiar character from fiction).

The twin paradox was formulated to relate the objective truth of our universe, but it sounds more like Kierkegaard’s story of a madman reciting an obvious truth: no one is fooled by the madman. As long as a human future in space is presented in such terms, it will sound like madness to most. What we need in order to present existential risk mitigation to the public are stories of space exploration that touch the heart in a way that anyone can understand. We need new stories of the far future and of the individual’s role in the future in order to bring home such matters in a way that makes the individual respond on a personal level.

A subjective experience is always presented in a personal context. This personal context is important to the individual. Indeed, we know this from many perspectives on human life, whether it be the call to heroic personal self-sacrifice for the good of the community that is found in collectivist thought, or the celebration of enlightened self-interest found in individualistic thought. Just as it is possible to paint either approach as a form of selfishness rooted in a personal context, it is possible to paint either as heroic for the same reason. In so far as a conception of history can be made real to the individual, and incorporates a personal context suggestive of subjective experiences, that conception of history will animate effective social action far more readily than even the most seductive vision of a sleek and streamlined future which nevertheless has no obvious place for the individual and his subjective experience.

The ultimate lesson here — and it is a profoundly paradoxical lesson, worthy of the perversity of human nature — is this: the individual life serves as the “big picture” context by which the individual’s isolated experiences derive their value.

When we think of “big picture” conceptions of history, humanity, and civilization, we typically think in impersonal terms. This is a mistake. The big picture can be equally formulated in personal or impersonal terms, and it is the vision that is formulated in personal terms that speaks to the individual. In so far as the individual accepts this personal vision of the big picture, the vision informs the individual’s subjective experiences.

The narratives of existential risk would do well to learn this lesson.

. . . . .


Futurism without predictions

12 December 2011

Monday


“From the relation of the planets among themselves and to the signs of the zodiac, future events and the course of whole lives were inferred, and the most weighty decisions were taken in consequence. In many cases the line of action thus adopted at the suggestion of the stars may not have been more immoral than that which would otherwise have been followed. But too often the decision must have been made at the cost of honour and conscience. It is profoundly instructive to observe how powerless culture and enlightenment were against this delusion; since the latter had its support in the ardent imagination of the people, in the passionate wish to penetrate and determine the future. Antiquity, too, was on the side of astrology.”

Jacob Burckhardt, The Civilization of the Renaissance in Italy, translated by S.G.C. Middlemore, 1878, Part Six, MORALITY AND RELIGION, “Influence of Ancient Superstition”


A few days ago Neil Houghton read my post The Third Law of Geopolitical Thought and made the following comment on Twitter:

Neil Houghton — I add prospective agency. RT @geopolicraticus The Third Law of Geopolitical Thought: human agency in time and history

I responded with a question, and a miniature dialogue developed (within the tightly constrained limits of Twitter):

Nick Nielsen — How would you define prospective agency? Is this agency understood in terms of possibility and potentiality?

Neil Houghton — Great question… in one word, foresight… in more a transdisciplinary practice between, across and beyond orders of time

Nick Nielsen — The whole problem is separating the wheat from the chaff: the wheat is the big picture; the chaff, trivial predictions.

Neil Houghton — Yes. seeing gradience is an aspect of the problem; the difference between the big picture and trivial prediction is one such gradience.

Nick Nielsen — Seeing the big picture in both space and time yields a different kind of foresight than the attempt to predict future events.

Neil Houghton — Foresight as gradience between freedom and destiny (for example) … please say more of your different kind of foresight.

This brief exchange points to something that I consider to be important, so I will attempt to give an account of the distinction I proposed between seeing the big picture and attempting to make predictions.

The most familiar form of futurism consists in making a series of predictions. Like any prognosticator of the future, regardless of methodology, the futurist is caught in a bind. The more specific his predictions, the more likely he is to be caught out. Even if the general drift of a prediction is correct, supplying a lot of detail means more ways of potentially being wrong. And the more vague a prediction, the less interesting it is likely to be.

Some futurists take pride in their detailed lists of predictions, and although detail is an opportunity to be wrong, it also provides a lot of fodder for utterly pointless debate. In The Law of Stalled Technologies I wrote the following about Ray Kurzweil’s specific predictions:

Kurzweil’s futurism makes for some fun reading. Unfortunately, it will not age well, and will become merely humorous over time (this is not to be confused with his very real technological achievements, which may well develop into robust and durable technologies). I have a copy of Kurzweil’s book that preceded The Singularity is Near, namely The Age of Spiritual Machines (published ten years ago in 1999), which is already becoming humorous. Part III, Chapter Nine of The Age of Spiritual Machines, contains his prophecies for 2009, and now it seems that the future is upon us, because it is the year AD 2009 as I write this. Kurzweil predicted that “People typically have at least a dozen computers on and around their bodies.” It is true that many people do carry multiple gadgets with microprocessors, and some of these are linked together via Bluetooth, so this prophecy does not come off too badly. He also notes that “Cables are disappearing” and this is undeniably true.

Kurzweil goes a little off the rails, however, when it comes to matters that touch directly on human consciousness and its expressions such as language. He predicted that, “The majority of text is created using continuous speech recognition”, and I think it is safe to say that this is not the case. I don’t want to parse all his predictions, but I need to be specific about a few particularly damning failures. Among the damning failures is the prediction that, “Translating Telephone technology … is commonly used for many language pairs.” Here we step over the line of the competence of technology and the limitations of even the most imaginative engineers. While machine translation is common today for text, everyone knows that it is a joke — quite literally so, as the results can be very funny though not terribly helpful.

Kurzweil gives a decade-by-decade running commentary of predictions. I once had somebody scold me about ridiculing Kurzweil’s predictions, because, I was told, the dates given were intended to indicate the initial dates of a ten year period, which gives him a ten year window to be right, thus kicking all his predictions another ten years down the road. This is the kind of ridiculous debate over pointless predictions that is an utter waste of breath. Predictions can be parsed like this until the end of time; this is precisely why people are always trying to show that Nostradamus predicted something. Add vagueness to ambiguity and you create the deconstructionists’ dream: anything can mean anything.

Just to unearth one more prediction, for 2019 Kurzweil predicted:

“Paper books or documents are rarely used and most learning is conducted through intelligent, simulated software-based teachers.”

Even if we give Kurzweil another ten years, I can guarantee that, if I am still alive in 2029, I will still have my personal library; it will probably be bigger than it is now, and I will consult it every day, as I do now. This does not, for me, constitute rarity of use. However, I will readily acknowledge that there is, already today, no need whatever to print textbooks, since knowledge is changing so rapidly and students usually don’t retain their textbooks after they have been used for a class. In situations such as this, it makes much more sense to make the material available on the internet. But even if we don’t bother with textbooks anymore, there will be a continuing role for books. At least, for me there will be a continuing role for books.

Whether you want to take pride in a list of specific predictions, having convinced yourself through a charitable hermeneutic that they have all come true, or whether you would rather it were all forgotten as a great embarrassment along with jetpacks, flying cars, and unisex jumpsuits, this model of futurism will always have a certain novelty value, so I will predict that “laundry list futurism” (like the poor) will always be with us.

There is, however, another kind of futurism, which we may not even want to call futurism, but which does incorporate a vision of the future. This other model of futurism is not about offering a laundry list of predictions, but rather about understanding the big picture, as I have said, both in space and time, i.e., geographically and historically. Here, “seeing the big picture” means having a theory of history that embraces the future as well as the past. This approach is about seeing patterns and understanding how the world works in general terms, and, from an understanding of those patterns and of how the world works, having a general idea of what the future will be like, just as one may have a general idea of what the past was like, even if one cannot jump into a time machine and march with Alexander the Great or listen to Peter Abelard debate.

The big picture in space and time — and the biggest picture is what I have called metaphysical ecology and metaphysical history — is a theory, which if it is to be coherent, consistent, and universally applicable, must be applicable both to the past and the future. Ultimately, such a theory would be a science of time, although we aren’t quite there yet. I hope that, before I die, I can make a substantial contribution in this direction, but I recognize that this is a distant goal.

In the meantime, familiar sciences are engaged in precisely this enterprise, though on a less comprehensive scale. Let me try to explain how this is the case.

When we work in the historical sciences, the scale of time is so great that we must settle for retrodiction, because this is what can be done within one human lifespan, or within the lifespan of a community of researchers engaged in a common research program; if we could afford to wait for thousands or millions or billions of years, we could make predictions about the future. When, on the contrary, we work in the natural sciences, as in physics, we must make predictions about the future, because we must create an elaborate apparatus to test our theories, and this apparatus did not exist in the past, so retrodiction is as closed to us as prediction is closed to the historical sciences. If we could go back in time with a superconducting supercollider, we could make retrodictions in physics, but at the present stage of technology this time travel would be more difficult than the experiment itself.

We accept the limitations of science that we are forced to accept, perhaps not gladly, but of necessity. What alternatives do we have? If we would have knowledge, we must have knowledge upon the conditions that the world will allow us knowledge, or refuse knowledge altogether. We are confident that our theories of physics apply equally well to the past, even if they cannot be tested in the past, and we are confident that our theories of paleontology would apply to the future if only we could wait long enough for the bones of the present to be fossilized.

In the fullness of time, if industrial-technological civilization continues in existence, the limits of science will be pushed back from the positions they presently occupy, but they will never be eliminated altogether. However, our strictly scientific knowledge can be extrapolated within a more comprehensive philosophical context, in which the resources of logical and linguistic analysis can be brought to bear upon the “problem” of history.

When I first began writing about what I began to call integral history, and which I now call metaphysical history, my aim at that time was to give an exposition of an extended conception of history that made use of the resources both of traditional humanistic narrative history and the emerging scientific historical disciplines, such as genomic resources which have taught us so much about the natural history of our species. I have subsequently continued to expand my expanded conception of history, and this is what I call metaphysical history, elaborated in the context of ecological temporality.

A further extension of the already extended conception of metaphysical history would be a conception of history that sees the big picture by seeing time whole, past, present, and future together as one structure that exhibits laws, regularities, patterns, and, of course, exceptions to all of the same.

This, then, was what I meant when I said that, “Seeing the big picture in both space and time yields a different kind of foresight than the attempt to predict future events.” The kind of foresight I have in mind is an understanding of historical events, both past and future, in a larger theoretical context. It is “foresight” only because it is, at the same time, hindsight. Both the past and the future are comprehended in an adequate theory of history.

I have no desire to produce a laundry list of predictions; I have no desire to say what I think the world will look like in 2019 or 2029 or 2039. I think that most of these predictions are irresponsible, though they may land a prophet on the front page of the National Enquirer. Not all such attempts at prediction, however, are irresponsible from my point of view. I have several times discussed George Friedman’s book The Next 100 Years, which strikes me as a responsible exercise in laundry list futurism. I have also discussed Michio Kaku’s book Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100.

Kaku’s book is particularly interesting to me in the present context, because Kaku has a very specific method for his futurism. He has interviewed scientists about the technologies that they are developing now, in the present, and which will become part of our lives in the foreseeable future. I realize now that Kaku’s methodology may be characterized as a constructive futurism: he immerses himself in the details of technology and extrapolates particular, incremental advances and applications. This is a bottom-up approach. What I am suggesting, on the contrary, is a profoundly non-constructive approach to the philosophy of history, a top-down understanding that looks for the largest structures of space and time and regards all details and particulars as fungible and incidental. That is my vision of a theory of history, and I think that such a theory would give a certain degree and kind of foresight into events in the future, but certainly not the same degree and kind of foresight that one might gain from the constructive methods of Kaku and Friedman.

. . . . .
