Sunday



A couple of recent articles in the Financial Times about the global debt market caught my attention. On Wednesday 16 April 2014 the FT ran “Risk seekers seize the day in South America: Investors are being drawn to bond issues in Argentina, Venezuela and Ecuador,” by Benedict Mander. Nation-states with very troubled financial histories (such as the three named) are not only able to sell their debt; the writer reports that Argentina “has admitted to receiving loan proposals from international investment banks after local media reported that it was negotiating a $1bn loan from Goldman Sachs.”

Mander’s article also mentions the return of the “carry trade,” that is to say, borrowing money where it is really cheap and loaning it out again where money comes dear. During those years when Japan was loaning money at an effectively zero rate, the carry trade was big business, as that zero percent money could be re-lent at 5 or 6 percent elsewhere. In my previous post, Rhine Capitalism, I discussed German bankers trying to present themselves as humbled and chastened by the past financial crisis, welcoming regulation as the price of stability. But that’s not the message we’re getting from the Financial Times, where we find a detailed record of the reconstruction of spectacularly risky financial schemes that will, in due course, have their day of reckoning.
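For readers unfamiliar with the mechanics, here is a minimal sketch of the carry trade arithmetic in Python; the principal and both rates are hypothetical stand-ins, not figures reported in the articles.

```python
# A rough sketch of the carry trade described above, with hypothetical
# figures (none of these numbers come from the FT articles).
principal = 100_000_000   # borrow $100m where money is cheap
borrow_rate = 0.005       # assumed near-zero funding rate
lend_rate = 0.055         # assumed rate where money comes dear

annual_spread = principal * (lend_rate - borrow_rate)
print(f"Gross annual carry: ${annual_spread:,.0f}")  # -> $5,000,000
```

The spread looks like free money, which is why the strategy tends to reappear whenever funding rates fall toward zero; the risk, of course, sits with the borrower on the high-rate side of the trade.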

On Thursday 17 April 2014 the FT ran “Investors get selective as frontier debt rush slows: Buyers of poor countries’ bonds are becoming more careful about risk vs reward,” by Elaine Moore, which discusses the “exotic debt” of nation-states such as Sri Lanka, Pakistan, Ghana, and Nigeria (also called “frontier markets”). Dollar-denominated bond issues in such unlikely places as sub-Saharan Africa find plenty of takers, though rates vary from country to country. The article states:

“Internal conflicts, political instability and poor credit records are all being factored in, but what economists say is really propelling the increasing differential in yields between borrowers is the knock-on effect of China’s economic evolution.”

It seems that China’s transition from an export-led growth model to a consumer-led growth model based on internal markets is re-configuring the global commodities markets, as producers of raw materials and feedstocks are hit by decreased demand while manufacturers of consumer goods stand to gain. I think that this influence on global markets is greatly overstated, as China’s hunger for materials for its industry will likely decrease gradually over time (a relatively predictable risk), while the kind of financial trainwreck that comes from disregarding political and economic instability can happen very suddenly, and this is a risk that is difficult to factor in because it is almost impossible to predict. So are economists assessing the risk they know, according to what Daniel Kahneman calls a “substitution heuristic” — answering a question that they can answer, because the question actually at issue is too difficult or intractable to calculation? I believe this to be the case.

In Monday’s Financial Times (which is already out in Europe, so I have read it over the internet but haven’t received my copy yet) there is another debt-related article, “Eurozone periphery nurses debt wounds” (by Robin Wigglesworth in London). This article mentions, “the high demand for peripheral eurozone debt in recent months,” which would seem to be a part of the above-mentioned trend of seeking out higher rates of return and accepting higher risks in order to get those higher rates.

These are perfect examples of what I recently wrote about in Why the Future Doesn’t Get Funded, namely, that there is an enormous amount of money looking for a place to be invested, and that nation-states are pretty much the only thing on the planet that can both soak up that kind of investment and remain sufficiently familiar to investors — the devil they know — that the investors don’t balk when offered high returns even in a risky debt market.

What is the lesson here? Is it simple investor greed that sees 8 percent and can’t resist? In the cases of Venezuela and Argentina, we have nation-states that are not only politically and economically unstable, but whose governments have spectacularly mismanaged their respective economies, along with a history of nationalizing private assets. This mismanagement is now being rewarded by the global financial community, not least because investors are so worried that they might miss an opportunity. But if events go south while your money is invested in Argentina, you may well find yourself expropriated of your wealth and excoriated by a populist regime (those holding out for payment on the last defaulted bonds have been called “vulture funds”). What kind of rationalization hamster runs its endless cycles in investors’ brains, convincing them that they can get a few years of eight percent on a billion dollars — which is nothing to sneeze at, being 80 million dollars a year — before the situation collapses, as it did for earlier investors?

There are all kinds of visionary projects that could be funded with this money — projects that would advance the prospects for all humanity — and perhaps at a rate of return not less than that offered by “exotic debt,” but the Siren Song of nation-state debt issues paying at 8 percent or better is too great a temptation to resist. So why do uncreditworthy nation-states get billions while business enterprises and private opportunities go begging? It is an interesting question.

It is a bit facile (even if it is also true) to point out that most nation-states fall into the category of “too big to fail,” and that the international community will bail them out time and time again, no matter the level of corruption or mismanagement. (We hear constant talk about the evils of “austerity,” and about the terrible things that the IMF and the World Bank are doing by lending these poor, long-suffering nation-states more money, but very little about the evils of the profligacy that necessitated the austerity.) This is a bit too facile because even small nation-states, the default of which would not be particularly ruinous, often receive similar treatment. What’s going on here?

There is more at work here than merely shoddy lending practices that are opening up entire classes of investors to risks that they do not understand. This is an artifact of the international nation-state system that prioritizes the impunity of nation-states, whether in regard to human rights, economics, or any other measure you might care to apply. Nation-states are not held to account, and because they are not held to account they have become reckless. For the institutional investor looking for a place to park a few billion dollars, even severely compromised nation-states may appear to be the only game in town. I won’t hold my breath for the day when one of these institutional investors will put their money into some more productive, less reckless investment instrument, but I won’t stop hoping either.

. . . . .


. . . . .


. . . . .

Grand Strategy Annex

. . . . .

Rhine Capitalism

18 April 2014

Friday



Thursday’s Financial Times included a special supplement on “Frankfurt as a Financial Centre,” and this supplement included the article “Deutsche Börse hopes that its philosophy has global appeal.” And what is the philosophy of Deutsche Börse AG? According to chief executive Reto Francioni, the philosophy of Deutsche Börse AG is “Rhine capitalism.” So what is Rhine capitalism?


Here is a quote from Reto Francioni from the Financial Times article that employs this interesting formulation:

“We share the same basic belief that the market economy also has to fulfill a social obligation, and that the ‘Rhine capitalism’ model of an economy buffered by corporations and focused on the long term, with strictly regulated markets — which are free for that very reason — is fundamentally superior to the Anglo-American capitalism model of deregulation.”

Further along in the same article we find the following:

[Deutsche Börse AG] hopes that its philosophy of a capitalism based on long-term careful planning will find a more receptive audience worldwide.

If you take a minute to read the mission statement and core values on the Deutsche Börse AG website you will find the usual corporate platitudes, though the following sentence underlines the quotes above from the Financial Times article:

We stand for integrity, transparency and the safety of capital markets. We support regulation that advances these qualities.

A New Year’s reception speech by Reto Francioni on the Eurex Group site repeats some of his thoughts on “Rhine capitalism” in a slightly different context. After stating his strong support for the European idea — saying that “there are no alternatives” to a united Europe — Francioni goes on to say:

…we share the same basic belief that the market economy also has to fulfill a social obligation and that the “Rhine capitalism” model of an economy buffered by corporations and focused on the long term, with strictly regulated markets — which are free for that very reason — is fundamentally superior to the Anglo-American capitalism model of deregulation.

This very interesting claim, however, was preceded in the speech by this…

I am a fan of good regulation. But I stress the word “good”, meaning professional. After all, we are involved in a global competition in regulation.

…and this…

The US remains a pioneer in many respects… They are ahead of us in re-regulation of capital markets and they made use of the crisis to rapidly create new and effective banks and stock exchange organisations which have been strengthened through mergers and disciplined through sanctions.

Francioni really sounds like he’s trying to have it both ways here: he acknowledges that the US is ahead of Europe in re-regulation but then also holds that “Rhine capitalism” is distinctive because it does not endorse the Anglo-American model of deregulation. So which is it? Is the US leading in re-regulation, or is it guilty of a reckless deregulation that stands in stark contrast to “Rhine capitalism”?

Francioni is talking like a politician when he talks about Rhine capitalism embracing regulation and being the stronger for it while saying that there is a global competition in regulation so that “good regulation” is called for. I doubt that you could find an Anglo-American banker who would have anything but praise for “good” regulation. For this statement to have any content at all it would need to explain the difference between good regulation and bad regulation, preferably citing actual examples of each.

Setting aside Francioni’s double-speak about regulation, what are we to understand by “Rhine capitalism” on the basis of his public pronouncements? We can include within “Rhine capitalism” at least the following:

1. the market economy has social obligations

2. corporations “buffer” the market economy

3. the market economy should be focused on the long term

4. the market economy should be strictly regulated

5. free markets are free in virtue of being regulated

6. regulation of the market economy should be professional

All of these are nice ideas, but they all beg the question. What are the social obligations of a market economy? Are they the obligation to increase the wealth of a society, or to attempt to impose an elusive “safety” and “stability” on markets? How do corporations “buffer” the market? Are corporations to have privileges over and against sole proprietors and partnerships in their role as market buffers? Or is this rather a veiled criticism of the role of private equity? What is the long term for Rhine capitalism? Are we talking about ten months, ten years, or ten centuries? I certainly don’t see in Europe (not to speak of the Rhineland) any more willingness to fund the future than I see in the US. What is a strict regulation, and how are we to distinguish between good and bad regulation? Between professional and unprofessional (amateurish?) regulation? How much regulation must there be before a market is free in virtue of its regulation?

Although I don’t expect that my questions will be answered, I don’t ask them merely rhetorically. I really would like to know exactly what “Rhine capitalism” is, though I think the key to understanding the idea is this: Rhine capitalism is not Anglo-American deregulation. In other words, whatever the British and Americans are doing, we aren’t doing, but we’re still capitalists.

I worry that European bankers, selling themselves in the wake of a devastating financial crisis to a suspicious public now focused on resentment of “the one percent,” are really selling a bill of goods when they define “Rhine capitalism” as a vague alternative whose one clear feature is that it is not Anglo-American deregulation. Francioni offers all kinds of reassuring ideas about a carefully planned, strictly regulated market that fulfills social obligations, but we are right to be suspicious of this in the same way that the working class is right to be suspicious of wealthy bankers. Bankers who claim to do good usually end up making a mess of things, and the bankers who benefit society the most are usually those that focus on making the most money.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .

Tuesday



Introduction

Why be concerned about the future? Will not the future take care of itself? After all, have we not gotten along just fine without being explicitly concerned with the future? The record of history is not an encouraging one, and suggests that we might do much better if only provisions were made for the future, and problems were addressed before they became unmanageable. But are provisions being made for the future? Mostly, no. And there is a surprisingly simple reason that provisions are rarely made for the future, and that is because the future does not get funded.

The present gets funded, because the present is here with us to plead its case and to tug at our heart strings directly. Unfortunately, the past is also often too much with us, and we find ourselves funding the past because it is familiar and comfortable, not realizing that this works against our interests more often than it serves our interests. But the future remains abstract and elusive, and it is all too easy to neglect what we must face tomorrow in light of present crises. But the future is coming, and it can be funded, if only we will choose to do so.


Money, money, everywhere…

The world today is awash in money. Despite the aftereffects of the subprime mortgage crisis, the Great Recession, and the near breakup of the European Union, there has never been so much capital in the world seeking advantageous investment, nor has capital ever been so concentrated as it is now. The statistics are readily available to anyone who cares to do the research: a relatively small number of individuals and institutions own and control the bulk of the world’s wealth. What are they doing with this money? Mostly, they are looking for a safe place to invest it, and it is not easy to find a place to securely stash so much money.

The global availability of money is parallel to the global availability of food: there is plenty of food in the world today, notwithstanding the population now at seven billion and rising, and the only reason that anyone goes without food is due to political (and economic) impediments to food distribution. Still, even in the twenty-first century, when there is food sufficient to feed everyone on the planet, many go hungry, and famines still occur. Similarly, despite the world being awash in capital seeking investment and returns, many worthy projects are underfunded, and many projects are never funded at all.


What gets funded?

What does get funded? Predictable, institutional projects usually get funded (investments of the kind we formerly called “as safe as houses”). Despite the fact of sovereign debt defaults, nation-states are still a relatively good credit risk, but above all they are large enough to be able to soak up the massive amounts of capital now looking for a place to go. Major industries are also sufficiently large and stable to attract significant investment. And a certain amount of capital finds itself invested as venture capital in smaller projects.

Venture capital is known to be the riskiest of investments, and the venture capitalist expects that most of his ventures will fail and yield no returns whatever. The reward comes from the exceptional and unusual venture that, against all odds and out of proportion to the capital invested in it, becomes an enormous success. This rare success is so profitable that it more than makes up for all the other losses, and it has made venture capital one of the most intensively capitalized industries in the world.
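To see how that arithmetic works out, here is a toy portfolio in Python; the fund size, the failure rate, and the size of the single successful exit are all invented for the sake of illustration.

```python
# A toy illustration of the venture portfolio logic described above.
# All numbers are hypothetical.
investments = [1.0] * 10           # ten ventures funded at $1m each
payouts = [0.0] * 9 + [30.0]       # nine total write-offs, one 30x exit

invested = sum(investments)
returned = sum(payouts)
multiple = returned / invested
print(f"Invested ${invested:.0f}m, returned ${returned:.0f}m ({multiple:.1f}x overall)")
```

On these made-up numbers the fund triples its money overall even though nine of ten ventures are written off entirely, which is the logic the text describes.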


Risk for risk’s sake?

With the risk already so high in any venture capital project, the venture capitalist does not court additional, unnecessary risks, so, from among the small projects that receive venture funding, it is not the riskiest ventures that get funded, but the least risky. That is to say, among the marginal investments available to capital, the investor tries to pick the ones that look as close to being a sure thing as anything can be, notwithstanding the fact that most of these ventures will fail and lose money. No one is seeking risk for risk’s sake; if risk is courted, it is only courted as a means to the end of a greater return on capital.

The venture capitalists have a formula. They invest a certain amount of money at what is seen to be a critical stage in the early development of a project, which is then set on a timetable of delivering its product to market and taking the company public at the earliest possible opportunity so that the venture capital investors can get their money out again in two to five years.

Given the already tenuous nature of the investments that attract venture capital, many ideas for investment are rejected on the most tenuous pretexts, rejected out of hand with scarcely any serious consideration, because they are thought to be impractical or too idealistic, or not likely to yield a return quickly enough to justify a venture capital infusion.


Entrepreneurs, investors, and the spectrum of temperament

Why do the funded projects get funded, while other projects do not get funded? The answer to this lies in the individual psychology of the successful investor. The few individuals who accumulate enough capital to become investors in new enterprises largely become wealthy because they had one good idea and they followed through with relentless focus. The focus is necessary to success, but it usually comes at the cost of wearing blinders.

Every human being has both impulses toward adventure and experimentation, and desires for stability and familiarity. From the impulse to adventure comes entrepreneurship, the questioning of received wisdom, a willingness to experiment and take risks (often including thrill-seeking activities), and a readiness to roll with the punches. From the desire for stability comes discipline, focus, diligence, and all of the familiar, stolid virtues of the industrious. With some individuals, the impulse to adventure predominates, while in others the desire for stability is the decisive influence on a life.

With entrepreneurs, the impulse to adventure outweighs the desire for stability, while for financiers the desire for stability outweighs the impulse to adventure. Thus entrepreneurs and the investors who fund them constitute complementary personality types. But neither exemplifies the extreme end of either spectrum. Adventurers and poets are the polar representatives of the imaginative end of the spectrum, while the hidebound traditionalist exemplifies the polar extreme of the stable end of the spectrum.

It is the rare individual who possesses both adventurous imagination and discipline in equal measures; this is genius. For most, either imagination or discipline predominates. Those with an active imagination but little discipline may entertain flights of fancy but are likely to accomplish little in the real world. Those in whom discipline predominates are likely to be unimaginative in their approach to life, but they are also likely to be steady, focused, and predictable in their behavior.

Most people who start out with a modest stake in life yearn for greater adventures than an annual return of six percent. Because of the impulse to adventure, they are likely to take risks that are not strictly financially justified. Such an individual may be rewarded with unique experiences, but would likely have been more financially successful if they could have overcome the desire in themselves for adventure and focused on a disciplined plan of investment coupled with delayed gratification. If you can overcome this desire for adventure, you can make yourself reasonably wealthy (at the very least, comfortable) without too much effort. Despite the paeans we hear endlessly celebrating novelty and innovation, in fact discipline is far more important than creativity or innovation.
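As a rough illustration of what that disciplined, unexciting plan amounts to, the following sketch compounds a hypothetical stake at the six percent mentioned above; the starting amount is arbitrary, and only the compounding formula is doing any work here.

```python
# The unglamorous arithmetic of discipline: a modest, hypothetical stake
# compounding at six percent a year, left alone for decades.
stake = 50_000
rate = 0.06

for years in (10, 20, 30, 40):
    value = stake * (1 + rate) ** years
    print(f"After {years} years: ${value:,.0f}")
```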

The bottom line is that the people who have a stranglehold on the world’s capital are not intellectually adventuresome or imaginative; on the contrary, their financial success is a selective result of their lack of imagination.


A lesson from institutional largesse

The lesson of the MacArthur fellowships is worth citing in this connection. When the MacArthur Foundation fellowships were established, the radical premise was to give money away to individuals who would then be freed to do whatever work they desired. When the initial fellowships were awarded, some in the press and some nursing sour grapes ridiculed the fellowships as “genius grants,” implying that the foundation was being a little too loose and free in its largesse. Apparently the criticism hit home, as in successive rounds of naming MacArthur fellows the grants became more and more conservative, and critics mostly ceased to call them “genius grants” while sniggering behind their hands.

Charitable foundations, like businesses, function in an essentially conservative, if not reactionary, social milieu, in which anything new is immediately suspect and the tried and true is favored. No one wants to court controversy; no one wants to be mentioned in the media for the wrong reason or in an unflattering context, so that anyone who can stir up a controversy, even where none exists, can hold this risk averse milieu hostage to their ridicule or even to their snide laughter.

Who serves on charitable boards? The same kind of unimaginative individuals who serve on corporate boards, and who make their fortunes through the kind of highly disciplined yet largely unimaginative and highly tedious investment strategies favored by those who tend toward the stable end of the spectrum of temperament.

Handing out “genius grants” proved to be too adventuresome and socially risky, and left those in charge of the grants open to criticism. A reaction followed, and conventionality came to dominate over imagination; institutional ossification set in. It is this pervasive institutional ossification that made the MacArthur awards so radical in the early days of the fellowships, when the MacArthur Foundation itself was young and adventuresome, but the institutional climate caught up with the institution and brought it to heel. It now comfortably reclines in respectable conventionality.


Preparing for the next economy

One of the consequences of a risk averse investment class (that nevertheless always talks about its “risk tolerance”) is that it tends to fund familiar technologies, and to fund businesses based on familiar technologies. Yet, in a technological economy the one certainty is that old technologies are regularly replaced by new technologies (a process that I have called technological succession). In some cases there is a straightforward process of technological succession in which old technologies are abandoned (as when cars displaced horse-drawn carriages), but in many cases what we see instead is that new technologies build on old technologies. In this way, the building of an electricity grid was once a cutting edge technological accomplishment; now it is simply part of the infrastructure upon which the economy is dependent (technologies I recently called facilitators of change), and which serves as the basis of new technologies that go on to become the next cutting edge technologies in their turn (technologies I recently called drivers of change).

What ought to concern us, then, is not the established infrastructure of technologies, which will continue to be gradually refined and improved (a process likely to yield profits proportional to the incremental nature of the progress), but the new technologies that will be built using the infrastructure of existing technologies. Technologies, when introduced, have the capability of providing a competitive advantage when one business enterprise has mastered them while other business enterprises have not yet mastered them. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all, and also ceases to be a driver of change. Thus a distinction can be made between technologies that are drivers of change and established technologies that are facilitators of change, driven by other technologies, that is to say, technologies that are tools for the technologies that are in the vanguard of economic, social, and political change.

From the point of view both of profitability and social change, the art of funding visionary business enterprises is to fund those that will focus on those technologies that will be drivers of change in the future, rather than those that have been drivers of change in the past. This can be a difficult art to master. We have heard that generals always prepare for the last war that was just fought rather than preparing for the next war. This is not always true — we can name a list of visionary military thinkers who saw the possibilities for future combat and bent every effort to prepare for it, such as Giulio Douhet, Billy Mitchell, B. H. Liddell Hart, and Heinz Guderian — but the point is well taken, and is equally true in business and industry: financiers and businessmen prepare for the economy that was rather than the economy that will be.

The prevailing investment climate now favors investment in new technology start ups, but the technology in question is almost always implicitly understood to be some kind of electronic device to add to the growing catalog of electronic devices routinely carried about today, or some kind of software application for such an electronic device.

The very fact of risk averse capital, coupled with entrepreneurs shaping their projects in such a way as to appeal to investors and thereby to gain access to capital for their enterprises, suggests the possibility of the path not taken. That path would be an enterprise constituted with the particular aim of building the future by funding its sciences, technology, engineering, and even its ideas, that is to say, by funding those developments that are yet to become drivers of change in the economy, rather than those that already are drivers of change in the economy, and therefore will slip into second place as established facilitators of the economy.


What is possible?

If there were more imagination on the part of those in control of capital, what might be funded? What are the possibilities? What might be realized by large scale investments into science, technology, and engineering, not to mention the arts and the best of human culture generally speaking? One possibility is that of explicitly funding a particular vision of the future by funding enterprises that are explicitly oriented toward the realization of aims that transcend the present.

Business enterprises explicitly oriented toward the future might be seen as the riskiest of risky investments, but there is another sense in which they are the most conservative of conservative investments: we know that the future will come, whether bidden or unbidden, although we don’t know what this inevitable future holds. Despite our ignorance as to what the future holds, we at least have the power — however limited and uncertain that power — to shape events in the future. We have no real power to shape events in the past, though many spin doctors try to conceal this impotency.

Those who think in explicit terms about the future are likely to seem like dreamers to an investor, and no one wants to be labeled a “dreamer,” as this is tantamount to being ignored as a crank or a fool. Nevertheless, we need dreamers to give us a sense of what might be possible in the future that we can shape, but of which we are as yet ignorant. The dreamer is one who has at least a partial vision of the future, and however imperfect this vision, it is at least a glimpse, and represents the first attempt to shape the future by imagining it.

Everyone who has ever dreamed big dreams knows what it is like to attempt to share these dreams and have them dismissed out of hand. Those who dismiss big dreams for the future usually are not content merely to ignore or to dismiss the dreamer, but seem to feel compelled to go beyond dismissal and to ridicule, if not attempt to shame, those who dream their dreams in spite of social disapproval.

The tactics of discouragement are painfully familiar, and are as unimaginative as they are unhelpful: that the idea is unworkable, that it is a mere fantasy, or it is “science fiction.” One also hears that one is wasting one’s time, that one’s time could be better spent, and there is also the patronizing question, “Don’t you want to have a real influence?”

There is no question that the attempt to surpass the present economic paradigm involves much greater risk than seeking to find a safe place for one’s money within the stability and apparent certainty of the present economic paradigm, but greater risks promise commensurate rewards. And the potential rewards are not limited to the particular vision of a particular business enterprise, however visionary or oriented toward the future. The large scale funding of an unconventional enterprise is likely to have unconventional economic outcomes. These outcomes will be unprecedented and therefore unpredictable, but they are far more likely to be beneficial than harmful.

There is a famous passage from Keynes’ General Theory of Employment, Interest and Money that is applicable here:

“If the Treasury were to fill old bottles with banknotes, bury them at suitable depths in disused coalmines which are then filled up to the surface with town rubbish, and leave it to private enterprise on well-tried principles of laissez-faire to dig the notes up again (the right to do so being obtained, of course, by tendering for leases of the note-bearing territory), there need be no more unemployment and, with the help of the repercussions, the real income of the community, and its capital wealth also, would probably become a good deal greater than it actually is. It would, indeed, be more sensible to build houses and the like; but if there are political and practical difficulties in the way of this, the above would be better than nothing.”

John Maynard Keynes, General Theory of Employment, Interest and Money, Book III, Chapter 10, VI

For Keynes, doing something is better than doing nothing, although it would be better still to build houses than to dig up banknotes buried for the purpose of stimulating economic activity. But if it is better to do something than to do nothing, and if it is better to do something constructive like building houses rather than to do something pointless like digging holes in the ground, how much better must it not be to build a future for humanity?

If some of the capital now in search of an investment were to be systematically directed into projects that promised a larger, more interesting, more exciting, and more comprehensive future for all human beings, the eventual result would almost certainly not be that which was originally intended, but whatever came out of an attempt to build the future would be an unprecedented future.

The collateral effect of funding a variety of innovative technologies is likely to be that, as Keynes wrote, “…the real income of the community, and its capital wealth also, would probably become a good deal greater than it actually is.” Even for the risk averse investor, this ought to be too good of a prospect to pass up.


Where there is no vision, the people perish

What is the alternative to funding the future? Funding the past. It sounds vacuous to say so, but there is not much of a future in funding the past. Nevertheless, it is the past that gets funded in the present socioeconomic investment climate.

Why should the future be funded? Despite our fashionable cynicism, even the cynical need a future in which they can believe. Funding a hopeful vision of the future is the best antidote to hopeless hand-wringing and despair.

Who could fund the future if they wanted to? Any of the risk averse investors who have been looking for returns on their capital and imagining that the world can continue as though nothing were going to change as the future unfolds.

What would it take to fund the future? A large scale investment in an enterprise conceived from its inception as being both a part of the future as it unfolds and focused on a long term future in which humanity, and the civilization it has created, will have an ongoing part.

. . . . .


. . . . .

Grand Strategy Annex

. . . . .

Sunday


Tamerlane enjoying a feast near Samarkand after a victory in battle.


The great age of horse nomads

In discussions of the large-scale historical structure of civilization I often have recourse to a tripartite distinction between pre-civilized nomadic foragers, settled agriculturalism (which I also call agrarian-ecclesiastical civilization), and settled industrialism (which I usually call industrial-technological civilization). I did not originate this tripartite distinction, and I cannot remember where I first encountered an exposition of human history in these terms, but this decomposition of human history serves the purposes of large-scale historiography — call it the “big picture” if you like, or Big History — so I continue to employ it.

In this model of the descent with modification of civilization, agriculturalism proved to be so successful a way of life that it eventually (after a period of several thousand years of transition) displaced nomadic hunter-gatherers, who became a minority and a marginalized population while agriculturalism came to literally dominate the landscape. Agriculturalism in turn has been and is being supplanted by industrialism, which holds such potential for economic and military expansion that no agricultural people can hope to stand against an industrialized people. As a result, agriculturalism in its turn is becoming a minority and marginalized activity, while the world continues its industrialization — a process which has been underway for a little more than two hundred years (or, say, five hundred years, if we date from the scientific revolution that made this new civilization possible).

Agrarian-ecclesiastical civilization persisted for more than ten thousand years, but these ten thousand or more years were in no sense static and unchanging. Agricultural civilization, especially pure agriculturalism, is an intensely local form of civilization, and as it is subject to the variability of local climatic conditions, it is subject to periodic famine. Thus agrarian-ecclesiastical civilization repeatedly fell into dark ages, sometimes triggered by climatic events. Socioeconomic stress is often manifested in armed conflict, so these low points in the history of civilization, besides being wracked by famine and pandemics, were also frequently made all the more miserable by pervasive, persistent violence. But agrarian-ecclesiastical civilization not only rebounded from its dark ages, but also seemed to gain in strength and extent, so that subsequent dark ages were shorter and less severe (thus perhaps making civilization itself an example of what Nassim Taleb calls antifragility).

What is missing in this narrative is that, prior to the industrial revolution, settled agricultural civilization underwent a great challenge — a challenge to its socioeconomic institutions almost as wrenching as that of the industrial revolution, although this challenge came in a very different form than machines. It came in the form of horses, that is to say, mounted horse warriors from the steppes of Eurasia, who brutally plundered the vast inland empires of the medieval and early modern periods as the Vikings had earlier brutally plundered the coastal areas of early medieval Europe. History mostly remembers these peoples as barbarians, but that is because histories are mostly written by settled agricultural peoples. The period of the miseries and sufferings of settled agricultural peoples at the hands of these barbarians was at the same time the great age of nomadic pastoralists, when the latter came close to seizing the momentum of history.

A distinct form of civilization

As western civilization stumbled with the collapse of Roman power in the west, and was repeatedly prevented from full recovery due to famine, plague, and violence, a very different form of socioeconomic organization was consolidated in the steppes of Central Asia: nomadic horse warriors. Whether one wishes to call this a distinct form of civilization — say, nomadic-pastoralist civilization — or a non-civilization, if civilization is understood to consist, by definition, of settled peoples, the form of social organization that emerged in Eurasia represented by nomadic pastoralists was both distinct and unique. It was also, for a time, highly successful, especially in armed conflict.

The nomadic pastoralists were not without precedent. In my post The Nature of Viking Power Projection, I wrote, “Ships came out of Scandinavia like horses came out of Mongolia.” I have elsewhere argued that Viking civilization represented a unique form of civilization not often recognized in histories of civilization. Here I would like to argue that nomadic pastoralists also represent a unique form of civilization; like the Vikings, this civilization is not based on settlement, but unlike the Vikings, it is a way of life based on the land and not the sea.

Nomadic pastoralists often adopt a semi-settled way of life called transhumance, which involves an annual migration between winter and summer pastures, ascending to higher elevations for summer pasture and descending into the valleys for winter pasture. Thus they may be considered to exemplify a transitional way of life between pure nomadism and settled life. But this is not the only difference between horse nomads and foragers. One important feature of life that distinguishes nomadic pastoralists from nomadic foragers is that the economy of the former is based on domesticated animals (generally, the horse) while that of the latter involves following herds of non-domesticated animals (generally, reindeer). The nomadic pastoralist exercises a far greater control over the landscape in which he makes his life, and a much greater control over the animals upon which he is dependent. It is in this sense that the nomadic pastoralists deserve to be called a civilization, because the relationship between these peoples and their horses was as central to their way of life as the relationship between settled peoples and their crops — only it was a different relationship of dependence.

An unparalleled weapons system

The military accomplishment of the Mongols and the other horse nomads of Eurasia was remarkable. To train, equip, and maintain a fighting force capable of defeating any other force in the world would be a challenge even for the greatest land empires, yet the horse nomads accomplished this without the established infrastructure of a settled civilization producing agricultural surpluses, which was what equipped and maintained the armies of settled agricultural peoples. John Keegan, famous for his The Face of Battle, also wrote A History of Warfare, in which he includes much interesting material on what he calls the “horse peoples” (especially Chapter 3, “Flesh”).

The most successful of the nomadic pastoralists from the Asian steppe were unquestionably the Mongols, sometimes called the Devil’s Horsemen. From the historical accounts of Mongol depredations upon Europe and the European periphery, the attacks of the Mongols sound like a natural disaster, like a plague of locusts, but the Mongols were in fact highly disciplined and employed battlefield tactics that the European armies of the period could not effectively counter for hundreds of years. This is an important point, and it is what accounts for the Mongols’ success: although predicated upon a profoundly different socioeconomic organization than that of the agrarian-ecclesiastical civilization of Europe and the European periphery, the Mongols created a land-based fighting force that for several centuries out-matched every military competitor in Eurasia.

The Mongols perfected a weapons system of mobile fire, which, as I have argued elsewhere, has always been the most potent instrument of warfare in any age. If the Mongols had achieved a level of political organization commensurate with their military organization, their socioeconomic system might have ultimately triumphed in Eurasia, and agrarian-ecclesiastical civilization would have been supplanted by nomadic-pastoralist civilization instead of later being supplanted by industrial-technological civilization.

A uniquely brutal conquest

The Mongols militarily defeated both China and Russia, two of the largest land empires on the planet, and would have permanently subjugated these peoples had they possessed the political structures capable of administering the territories they conquered. Instead of the brutality of horsemen, the Chinese were ultimately subject to the brutality of Chinese emperors and the Russians to the brutalities of their Tsars, which, despite being horrific, were less horrific than the depredations of horse nomads.

The conquests of the Mongols were destructive beyond the level of destruction typically inflicted by the armies of agrarian-ecclesiastical civilization, and this brutality possibly reached its peak with the depredations of Tamerlane, also called Timur the Lame, who is estimated to have been responsible for the death of about five percent of the total global population of the time (the Wikipedia article cites two sources for this claim). In this sense, Tamerlane had much in common with a natural disaster (I noted above that the depredations of horse nomads were often treated like natural disasters by the settled civilizations who suffered from them), as such mortality levels are usually confined to pandemics.

It may have been this brutality and destruction as much as the lack of higher order political organization that ultimately limited the ability of pastoral nomads to rule the peoples they defeated. Notorious leaders of horse nomads such as Attila the Hun, Genghis Khan, and Tamerlane seemed to be blind to the most basic forms of enlightened self-interest, as they could have extended their own rule, and had more wealth to plunder, if they had been less destructive in their conquests. This is part of the reason that the peoples they led are commonly called barbarians, and their way of life is denied the honorific of being called a civilization.

The end of horse nomads as an historical force

If we think of the Turkic peoples of Central Asia as the inheritors of the traditions of horse nomads, the period of the pastoralist challenge to settled agriculturalism continues into the early modern period of European history, up to the two sieges of Vienna: the Siege of Vienna in 1529 by the forces of Suleiman the Magnificent, and the Battle of Vienna in 1683. In both cases Turkish forces sought to take Vienna and were repulsed.

Western history remembers the turning back of the Turks at the Gates of Vienna as the turning point in the depredations of the Turkish Ottomans on Europe. In the following years, the Turks would be pushed back further, and lands would be recovered for Europe from the Turks. But we might also remember this as the last rally of the tradition of conquest that began with the horse nomads of Eurasia. By this time, the Turks had transformed themselves into an empire — the Ottoman Empire — and had adopted the ways of settled peoples. At this point, horse nomads dropped out of history and ceased to be a force shaping civilization.

The future of nomadic pastoralism

In several posts — Three Futures, Pastoralization, and The Argument for Pastoralization, inter alia — I formulated a kind of pastoralism that could define a future pathway of development for human civilization (note that “development” does not here mean “progress”). If this idea of a future for pastoralism is integrated with the realization I have attempted to describe above — viz. that nomadic pastoralism was the greatest challenge to settled agricultural civilization until industrialization — it is easy to see the possibility of a neo-pastoralist future in which industrial-technological civilization itself is challenged by technologically sophisticated pastoral nomads.

While this scenario of technologically sophisticated nomads sounds more like a script for a science fiction film than a likely scenario for the future, it describes possible forms of existential risk, such as permanent stagnation and flawed realization — the former if such a development took us below the level of technological progress necessary to maintain the momentum of industrial-technological civilization, and the latter if this technological progress continues but issues in a society (or, more likely, two or more distinct societies in conflict, i.e., settled and nomadic) that channels this progress into a new dark age, made the more protracted by the lights of a perverted science.

. . . . .

The Battle of Vienna in 1683, when the Turks were turned back from further penetration into Europe.


. . . . .


. . . . .

Grand Strategy Annex

. . . . .

Wednesday



Technologies may be drivers of change or facilitators of change, the latter employed by the former as the technologies that enable the development of technologies that are drivers of change; that is to say, technologies that are facilitators of change are tools for the technologies that are in the vanguard of economic, social, and political change. Technologies, when introduced, have the capability of providing a competitive advantage when one business enterprise has mastered them while other business enterprises have not yet mastered them. Once a technology has been mastered by all elements of the economy it ceases to provide a competitive advantage to any one firm but is equally possessed and employed by all. At that point of its mature development, a technology also ceases to be a driver of change and becomes a facilitator of change.

Any technology that has become a part of the infrastructure may be considered a facilitator of change rather than a driver of change. Civilization requires an infrastructure; industrial-technological civilization requires an industrial-technological infrastructure. We are all familiar with infrastructure such as roads, bridges, ports, railroads, schools, and hospitals. There is also the infrastructure that we think of as “utilities” — water, sewer, electricity, telecommunications, and now computing — which we build into our built environment, retrofitting old buildings and sometimes entire older cities in order to bring them up to the standards of technology assumed by the industrialized world today.

All of the technologies that now constitute the infrastructure of industrial-technological civilization were once drivers of change. Before the industrial revolution, the building of ports and shipping united coastal communities in many regions of the world; the Romans built a network of roads and bridges; in medieval Europe, schools and hospitals became a routine part of the structure of cities; early in the industrial revolution railroads became the first mechanized form of rapid overland transportation. Consider how the transcontinental railroad in North America and the trans-Siberian railway in Russia knitted together entire continents, and their role as transformative technologies should be clear.

Similarly, the technologies we think of as utilities were once drivers of change. Hot and cold running water and indoor plumbing, still absent in much of the world, did not become common in the industrialized world until the past century, but early agricultural and urban centers only came into being with the management of water resources, which reached a height in the most sophisticated cities of classical antiquity, with water supplied by aqueducts and sewage taken away by underground drainage systems that were superior to many in existence today. With the advent of natural gas and electricity as fuels for home and industry, industrial cities were retrofitted for these services, and have since been retrofitted again for telecommunications, and now computers.

The most recent technology to have a transformative effect on socioeconomic life was computing. In the past several decades — since the Second World War, when the first digital, programmable electronic computers were built for code breaking (the Colossus in the UK) — computer technology grew exponentially and eventually affected almost every aspect of life in industrialized nation-states. During this period of time, computing has been a driver of change across socioeconomic institutions. Building a faster and more sophisticated computer has been an end in itself for technologists and computer science researchers. While this will continue to be the case for some time, computing has begun to make the transition from being a driver of change in and of itself to being a facilitator of change in other areas of technological innovation. In other words, computers are becoming a part of the infrastructure of industrial-technological civilization.

The transformation of the transformative technology of computing from a driver of change into a facilitator of change for other technologies has been recognized for more than ten years. In 2003 an article by Nicholas G. Carr, Why IT Doesn’t Matter Anymore, stirred up a significant controversy when it was published. More recently, Mark R. DeLong, in Research computing as substrate, calls computing a substrate instead of an infrastructure, though the idea is much the same. DeLong writes of computing: “It is a common base that supports and nurtures research work and scholarly endeavor all over the university.” Although computing is also a focus of research work and scholarly endeavor in and of itself, it also serves a larger supporting role, not only in the university, but also throughout society.

Although today we still fall far short of computational omniscience, the computer revolution has happened, as evidenced by the pervasive presence of computers in contemporary socioeconomic institutions. Computers have been rapidly integrated into the fabric of industrial-technological civilization, to the point that those of us born before the computer revolution, and who can remember a world in which computers were a negligible influence, can nevertheless only with difficulty remember what life was like without computers.

Despite, then, what technology enthusiasts tell us, computers are not going to revolutionize our world a second time. We can imagine faster computers, smaller computers, better computers, computers with more storage capacity, and computers running innovative applications that make them useful in unexpected ways, but the pervasive use of computers that has already been achieved gives us a baseline for predicting future computer capacities, and these capacities will be different in degree from earlier computers, but not different in kind. We already know what it is like to see exponential growth in computing technology, and so we can account for this; computers have ceased to be a disruptive technology, and will not become a disruptive technology a second time.

Recently quantum computing made the cover of TIME magazine, together with a number of hyperbolic predictions about how quantum computing will change everything (the quantum computer is called “the infinity machine”). There have been countless articles about how “big data” is going to change everything also. Similar claims are made for artificial intelligence, and especially for “superintelligence.” An entire worldview has been constructed — the technological singularity — in which computing remains an indefinitely disruptive technology, the development of which eventually brings about the advent of the Millennium — the latter suitably re-conceived for a technological age.

Predictions of this nature are made precisely because a technology has become widely familiar, which is almost a guarantee that the technology in question is now part of the infrastructure of the ordinary business of life. One can count on being understood when one makes predictions about the future of the computer, in the same way that one might have been understood in the late nineteenth or early twentieth century if making predictions about the future of railroads. But in so far as this familiarity marks the transition in the life of a technology from being a driver of change to being a facilitator of change, such predictions are misleading at best, and flat out wrong at worst. The technologies that are going to be drivers of change in the coming century are not those that have devolved to the level of infrastructure; they are (or will be) unfamiliar technologies that can only be understood with difficulty.

The distinction between technologies that are drivers of change and technologies that are facilitators of change (like almost all distinctions) admits of a certain ambiguity. In the present context, one of these ambiguities is that of what constitutes a computing technology. Are computing applications distinct from computing? What of technologies for which computing is indispensable, and which could not have come into being without computers? This line of thought can be pursued backward: computers could not exist without electricity, so should computers be considered anything new, or merely an extension of electrical power? And electrical power could not have come about without the steam- and fossil-fueled industry that preceded it. This can be pursued back to the first stone tools, and the argument can be made that nothing new has happened, in essence, since the first chipped flint blade.

Perhaps the most obvious point of dispute in this analysis is the possibility of machine consciousness. I will acknowledge without hesitation that the emergence of machine consciousness would be a potentially revolutionary development, and it would constitute a disruptive technology. Machine consciousness, however, is frequently conflated with artificial intelligence and with superintelligence, and these must be distinguished. Artificial intelligence of a rudimentary form is already crucial to the automation of industry; machine consciousness would be the artificial production, in a machine substrate, of the kind of consciousness that we personally experience as our own identity, and which we infer to be at the basis of the action of others (what philosophers call the problem of other minds).

What makes the possibility of machine consciousness interesting to me, and potentially revolutionary, is that it would constitute a qualitatively novel emergent from computing technology, and not merely another application of computing. Computers stand in the same relationship to electricity that machine consciousness would stand in relation to computing: a novel and transformational technology emergent from an infrastructural technology, that is to say, a driver of change that emerges from a facilitator of change.

The computational infrastructure of industrial-technological civilization is more or less in place at present, a familiar part of our world, like the early electrical grids that appeared in the industrialized world once electricity became sufficiently commonplace to become a utility. Just as the electrical grid has been repeatedly upgraded, and will continue to be upgraded for the foreseeable future, so too the computational infrastructure of industrial-technological civilization will be continually upgraded. But the upgrades to our computational infrastructure will be incremental improvements that will no longer be major drivers of change either in the economy or in sociopolitical institutions. Other technologies will emerge that will take that role, and they will emerge from an infrastructure that is no longer driving socioeconomic change, but is rather the condition of the possibility of this change.

. . . . .

Colossus

. . . . .


Thursday


Michel Foucault


Among the many theoretical innovations for which Michel Foucault is remembered is the idea of biopower. We can think of biopower as a reformulation of perennial Foucauldian themes of the exercise of power through institutions that do not explicitly present themselves as being about power. That is to say, the subjugation of populations is brought about not through the traditional institutions of state power, but by way of new institutions purposefully constituted for the monitoring and administering of the unruly bodies of the individuals who collectively constitute the body politic.

Foucault introduced the idea of biopower in The History of Sexuality, Vol. 1, in the chapter, “Right of Death and Power over Life.” Like his predecessor in France, Descartes, Foucault writes in long sentences and long paragraphs, so that it is difficult to quote him accurately without quoting him at great length. His original exposition of biopower needs to be read in full in its context to appreciate it, but I will try to pick out a few manageable quotes to give a sense of Foucault’s exposition.

Here is something like a definition of biopower from Foucault:

“…a power that exerts a positive influence on life, that endeavors to administer, optimize, and multiply it, subjecting it to precise controls and comprehensive regulations.”

Michel Foucault, The History of Sexuality, Vol. 1, translated from the French by Robert Hurley, New York: Pantheon, 1978, p. 137

Later Foucault names specific institutions and practices implicated in the emergence of biopower:

“During the classical period, there was a rapid development of various disciplines — universities, secondary schools, barracks, workshops; there was also the emergence, in the field of political practices and economic observation, of the problems of birthrate, longevity, public health, housing, and migration. Hence there was an explosion of numerous and diverse techniques for achieving the subjugation of bodies and the control of populations, marking the beginning of an era of ‘biopower’.”

Michel Foucault, The History of Sexuality, Vol. 1, translated from the French by Robert Hurley, New York: Pantheon, 1978, p. 140

Prior to the above quotes, Foucault begins his exposition of biopower with an examination of the traditional “power of life and death” held by sovereigns, which Foucault says was in fact restricted to the power of death, i.e., the right of a sovereign to deprive subjects of their life, and of the fundamental change in emphasis by which the “power of life and death” became the power over life, i.e., biopower. The shift from the right of death to power over life is what marks the emergence of biopower. Foucault, however, explicitly acknowledged that,

“…wars were never as bloody as they have been since the nineteenth century, and all things being equal, never before did regimes visit such holocausts on their own populations.”

Michel Foucault, The History of Sexuality, Vol. 1, translated from the French by Robert Hurley, New York: Pantheon, 1978, pp. 136-137

This thanatogenous phenomenon is what Edith Wyschogrod called “The Death Event” (which I wrote about in Existential Risk and the Death Event), but if Foucault is right, it is not the Death Event that defines the social milieu of industrial-technological civilization, but rather a “Life Event” that we must postulate parallel to the Death Event.

What is the Life Event parallel to the Death Event? This is nothing other than the loss of belief in an otherworldly reward after death (which defined social institutions from the Axial Age to the Death of God, and which may be the source of the relation between agriculture and the macabre), and the response to this lost possibility of eternal bliss by the quest for health and felicity in this world and in this life.

A key idea in Foucault’s exposition of biopower is how the contemporary power over life that has replaced the arbitrary right of death on the part of the sovereign has been seamlessly integrated into state institutions, so that state institutions are the mechanism by which biopower is applied, enforced, expanded, and preserved over time. From this perspective, biopower becomes the unifying theme of Foucault’s series of earlier books on asylums for the insane, prisons for the criminal, and clinics for the diseased, all of which institutions had the character of the “subjugation of bodies and the control of populations” through “precise controls and comprehensive regulations.” (At this point Foucault could have profited from the work of Erving Goffman, who identified a particular subset of “total institutions” that completely regulated the life of the individual.)

What we are seeing today is that the “success” of the imperative of biopower has resulted in longer and healthier lives among docile populations, who dutifully report to their mind-numbing labor of choice and rarely riot. To step outside the confines of acceptable social behavior is to find oneself committed to a total institution such as an asylum or a prison, so that the individual self-censors and self-restrains in order to preempt state action that would bring his behavior into conformity with the norm. With the imperative of biopower largely established and largely uncontested, the next frontier is the imperative of extending biopower to the mind, and rendering the population intellectually docile in the way that bodies have been regulated and rendered docile.

The extension of biopower to the life of the mind might be called psychopower. This extension presumably involves parallel regimes of psychic hygiene that will give the individual mind a longer, healthier life, as biopower has bequeathed a longer, healthier life to the body, but the healthy and hygienic mind is also a mind that has been subjected to precise controls and comprehensive regulation. Cognitive pathology here becomes a pretext for state intervention into the private consciousness of the individual.

The proliferating regimes of therapy, counseling, psychiatric services, so-called “social” services that today almost invariably have a psychiatric component, not to mention the bewildering range of psychotropic medications available to the public — and apparently prescribed as widely as they are known and available — are formulated with an eye to regimenting the intellectual life of the body politic. And this “eye” is none other than the medical gaze now trained upon the individual’s introspection.

The mechanism by which psychopower is obtained has, to date, been the same state institutions that have overseen biopower, but this is already changing. The emergence of biopower in the period of European history that Foucault called “The Classical Age” (“l’âge classique”) was a product of agricultural civilization (specifically, agrarian-ecclesiastical civilization) at its most mature and sophisticated stage of development, shortly before all that agrarian-ecclesiastical civilization had built in terms of social institutions would be swept away by the unprecedented social change resulting from the industrial revolution, which would eventually begin to converge upon a new civilizational paradigm, that of industrial-technological civilization.

Thus biopower at its inception was the ultimate regulation of a biocentric civilization. As civilization makes a transition from being biocentric to technocentric, new instrumentalities of power will be required to implement a regime of docility under radically changed socioeconomic conditions, i.e., technocentric socioeconomic conditions, and this will require technopower, which will take up where biopower leaves off. Biopower conceived after the manner of biocentric civilization, of which agrarian-ecclesiastical civilization is an expression, cannot answer to the regulatory needs of a technocentric civilization, which thus will require a regime of technopower.

Already this process has begun, though the transition from biocentric civilization is likely to be as slow and as gradual as the transition from hunter-gatherer nomadism to the discipline of settled civilization, in which the institutions of biopower first began to assume their inchoate forms. What we are beginning to see is the transition from state power being embodied in and exercised through social institutions to state power being embodied in and exercised through technological infrastructure. Central to this development is the emergence of the universal surveillance state, in which the structures of power are identical to the structures of electronic surveillance.

The individual participates in social media for the presumptive opportunities for self-expression and self-development, which are believed to have many of the positive social effects that the regulation of docile bodies has had upon longevity and physical comfort. The structure of these networks, however, serves only to reinforce the distribution of power within society. The more alternatives we have for media, the more we hear only of celebrities (in what is coming to be called a “winner take all” economic model). At the same time that the masses are encouraged to occlude their identity through the iteration of celebrity culture that renders the individual invisible and powerless, the individual self is relentlessly marginalized. In Is the decontextualized photograph the privileged semiotic marker of our time? I argued that the proliferating “selfies” that populate social media, as a self-objectification of the self, are nothing but the “death of the self” prognosticated by post-modernists.

It is unlikely in the extreme that most or even many individuals have any kind of ideological commitment to the emerging universal surveillance state or to the death of the subject, but the technological institutions that are increasingly the mediators of all expression and commerce are becoming inescapable, and as they converge upon totality they will effect a reconstruction of society that will consolidate technopower in the hands of the systems administrators of the technocentric state. These structures are already being constituted, and the channeling of power through apparently benign networks will be the triumph of technopower as it replaces biopower.

. . . . .


Monday


surveillance

The national security state came of age during the Cold War, under perpetual threat of a sudden, catastrophic nuclear exchange that could terminate civilization almost instantaneously at any time, and which was therefore an era of institutionalized paranoia. In the national security state, the response to perpetual danger was perpetual vigilance — one often heard the line, the price of peace is eternal vigilance, which has been attributed to many different sources — and this vigilance primarily took the form of military preparedness. The emergence of the surveillance state as the natural successor to the national security state is a development of the post-Cold War period, and is partly the result of a changed threat narrative, but it is also partly a response to technological advances. During the Cold War the technological resources to construct a universal surveillance state did not yet exist; today these technological resources exist, and they are being systematically implemented.

In the universal surveillance state, the state takes on the role of panopticon — a now-familiar motif originating in the thought of English Utilitarian Jeremy Bentham, but brought to wide attention in the work of French philosopher Michel Foucault (cf. A Flock of Drones) — which has profound behavioral implications for all citizens. It is well known not only to science but even to the most superficial observer of human nature that people tend to behave differently when they know they are being watched, as compared to when they believe themselves to be unobserved. The behavioral significance of universal surveillance is that of putting all citizens on notice that they are being observed at all times. In other words, we are all living inside the panopticon at all times.

Rather than the rational reconstruction of the state, this is the perceptual reconstruction of the state, in which all citizens have a reason to believe that they are under surveillance at all times, and at all places as well, including within the confines of their homes. The tracking of electronic telecommunications — today, primarily cell phone calls and internet-based communication — means that the state reaches into the private world of the individual citizen, his casual conversations with friends, relatives, colleagues, and neighbors, and monitors the ordinary business of life.

In order to effectively monitor the ordinary business of life of the presumptively “typical” or “average” citizen, the state security monitors must develop protocols for the observation and analysis of this vast body of data that will differentiate the “typical” or “average” citizen from the citizen (or resident, for that matter) who is to be the object of special surveillance. In other words, the total surveillance state must develop an algorithm of normalcy, in contrast to which the pathological is defined — the “normal” and the “pathological” are polar concepts which derive their meaning from their contrast with the opposite polar concept. Any established pattern of life that deviates from the normalcy algorithm would be flagged as suspicious. Even if such flagging incidents fail to reveal criminality, disloyalty, or other behaviors stigmatized by the state, such examples can be used to further refine the algorithm of normalcy in order to rule out the “noise” of the ordinary business of life in favor of the “signal” of pathological behavior patterns.
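
To make the idea of a normalcy algorithm more concrete, here is a minimal sketch, in Python, of the most primitive form such an algorithm could take: it summarizes an observed pattern of life as a statistical baseline and flags any deviation beyond a chosen threshold. The data, names, and threshold are purely illustrative assumptions on my part, not a description of any actual surveillance system.

from statistics import mean, stdev

def build_baseline(daily_counts):
    # Summarize "typical" activity as the mean and spread of observed daily counts.
    return mean(daily_counts), stdev(daily_counts)

def flag_anomalies(daily_counts, baseline, threshold=3.0):
    # Flag days whose activity deviates from the baseline by more than
    # `threshold` standard deviations -- the crude polar contrast between
    # the "normal" and the "pathological" described above.
    mu, sigma = baseline
    return [(day, count) for day, count in enumerate(daily_counts)
            if sigma > 0 and abs(count - mu) / sigma > threshold]

# Hypothetical month of daily call counts, with one conspicuous outlier on day 20.
history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 2, 4, 3, 4, 5, 3, 2, 40, 3, 4, 2, 3]
print(flag_anomalies(history, build_baseline(history)))  # expected: [(20, 40)]

Everything of consequence in such a system lies in the choice of what to count and where to set the threshold, which is to say, in the operational definition of the “normal.”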

Those with a hunger for conformity will perhaps interpret a descriptive algorithm for the identification of normalcy as a prescriptive guide to a life that will not attract the attention of the authorities. Many, of course, will give no time to the thought of surveillance. There will be others, however, who are neither indifferent nor conformist, but who will court if not provoke surveillance. And just as the algorithm of normalcy gives a recipe for conformity, it also gives a recipe for non-conformity. Spectacular instances of non-conformity to an algorithm of normalcy will invite surveillance, and this will have potentially unexpected consequences.

One can only wonder how long it will take before individuals hungry for either fame or notoriety — and not caring which one results from their actions — manage to hack the pervasive surveillance state, pinging the system to see how it responds, and using this same system against itself to catapult some individual into the center of national if not global media attention. One could, I imagine, obtain a number of cell phones, land lines, and email addresses, and begin using them to exchange suspect information, and eventually be identified as a special surveillance target. If this activity resulted in an arrest, such an experience could be used by the arrested individual as the basis for a book contract or a lawsuit about compromised civil rights. Indeed, if the perpetrator were sufficiently clever, they could construct the ruse in such a manner as to implicate “sensitive” individuals or to cast serious doubt upon the claims made by law enforcement officials. Such a gambit might be milked for considerable gain.

Given the currency of celebrity in our society, it is nearly inevitable that such an event will occur, whether motivated by the desire for fame, infamy, wealth, power, or self-aggrandizement. As Dostoyevsky wrote in a note appended to the beginning of his short story “Notes from Underground” (a passage of some interest to me that I previously quoted in An Interview in Starbucks), such individuals must exist in our society:

The author of the diary and the diary itself are, of course, imaginary. Nevertheless it is clear that such persons as the writer of these notes not only may, but positively must, exist in our society, when we consider the circumstances in the midst of which our society is formed. I have tried to expose to the view of the public more distinctly than is commonly done, one of the characters of the recent past. He is one of the representatives of a generation still living.

The overt celebrity state and the covert surveillance state are set to collide, perhaps spectacularly, the more power is organized around the universal surveillance state. Given the fungibility of power, the political power represented by the universal surveillance state can be readily translated into other forms of power, such as wealth and fame, and the more political power is concentrated in the universal surveillance state, the riper that state is for being used against its express intention. In other words, the attempt to turn the state into a hard target through universal surveillance turns the state into a soft target for attacks that exapt the surveillance regime for unintended ends.

Politicians, while savvy within their own métier, can, like anyone else, be woefully naïve in other areas of life, which virtually guarantees that, at the very moment when they believe they have secured themselves by way of the implementation of a total surveillance regime, they are likely to be blindsided by a completely unprecedented and unanticipated exaptation of their power by another party with an agenda so different that it is unrecognizable as a threat by those who study threats to national security. In the way that hackers sometimes cause mayhem and damage for the pure joy of stirring up a ruckus, hackers of the total surveillance state may be motivated by ends that have no place within the threat narratives of the architects of the total surveillance state.

. . . . .


Sunday


proprioception

In the spring of 1914, just before the outbreak of World War 1 (and exactly one hundred years ago as I write this), Bertrand Russell gave a series of Lowell Lectures later published as Our Knowledge of the External World. This is a classic exposition of Russell’s thought which had a significant influence on Anglo-American analytical philosophy.

In the audience for one of the later iterations of these lectures was Will Durant, the noted American historian, whose The Story of Philosophy was so successful in the inter-war years that it freed him up to write his multi-volume The Story of Civilization. In The Story of Philosophy Durant wrote of Russell’s 1914 lectures:

“When Bertrand Russell spoke at Columbia University in 1914, he looked like his subject, which was epistemology — thin, pale, and moribund; one expected to see him die at every period. The Great War had just broken out, and this tender-minded, peace-loving philosopher had suffered from the shock of seeing the most civilized of continents disintegrate into barbarism. One imagined that he spoke of so remote a subject as ‘Our Knowledge of the External World’ because he knew it was remote, and wished to be as far as possible from actualities that had become so grim. And then, seeing him again, ten years later, one was happy to find him, though fifty-two, hale and jolly, and buoyant with a still rebellious energy. This despite an intervening decade that had destroyed almost all his hopes, loosened all his friendships, and broken almost all the threads of his once sheltered and aristocratic life.”

Will Durant, The Story of Philosophy, New York: Time Incorporated, 1962, pp. 442-443

Others were more moved by Russell’s thin, pale, and moribund epistemology. Rudolf Carnap read the lectures in book form, and describes the experience in terms reminiscent of a religious conversion:

…in my philosophical thinking in general I learned most from Bertrand Russell. In the winter of 1921 I read his book, Our Knowledge of the External World, as a Field For Scientific Method in Philosophy. Some passages made an especially vivid impression on me because they formulated clearly and explicitly a view of the aim and method of philosophy which I had implicitly held for some time. In the Preface he speaks about “the logical-analytic method of philosophy” and refers to Frege’s work as the first complete example of this method. And on the very last pages of the book he gives a summarizing characterization of this philosophical method in the following words:

The study of logic becomes the central study in philosophy: it gives the method of research in philosophy, just as mathematics gives the method in physics…

All this supposed knowledge in the traditional systems must be swept away, and a new beginning must be made… To the large and still growing body of men engaged in the pursuit of science,… the new method, successful already in such time-honored problems as number, infinity, continuity, space and time, should make an appeal which the older methods have wholly failed to make… The one and only condition, I believe, which is necessary in order to secure for philosophy in the near future an achievement surpassing all that has hitherto been accomplished by philosophers, is the creation of a school of men with scientific training and philosophical interests, unhampered by the traditions of the past, and not misled by the literary methods of those who copy the ancients in all except their merits.

I felt as if this appeal had been directed to me personally. To work in this spirit would be my task from now on. And indeed henceforth the application of the new logical instrument for the purposes of analyzing scientific concepts and of clarifying philosophical problems has been the essential aim of my philosophical activity.

Rudolf Carnap, “Intellectual Autobiography,” in The Philosophy of Rudolf Carnap, edited by Paul Arthur Schilpp, p. 13

Russell’s works set the tone and, to a slightly lesser extent, set the agenda for analytical philosophy, in writing words such as these that inspired and influenced the next generation of philosophers. While Carnap felt himself to be called to a new kind of philosophical work by Russell’s stirring pages, Russell was nevertheless following in a long and distinguished line, which is nothing other than the mainstream of Western philosophy from Aristotle through Descartes and Kant to Russell himself. Descartes is usually remembered for the “epistemological turn” that defines modern Western philosophy, but Descartes was very much schooled in Scholasticism, and Scholasticism was deeply Aristotelian, so that the unbroken line of European philosophy from Aristotle to Russell and beyond may be compared to the “Golden Chain” of philosophers in the Platonic succession of classical antiquity.

The Aristotelian succession of scientifically-minded philosophers tends to be logical rather than intuitive (Aristotle was the first to formulate a formal logic), analytical in its method rather than synthetic or eclectic, and empirical rather than idealistic. But all philosophers, Platonic or Aristotelian, are interested in ideas, and it is the way in which ideas are expressed and incorporated that differs between the two camps. The Aristotelians can no more do without ideas than the Platonists, though ideas tend to enter into Aristotelian thought by way of schematic conceptions that leave their imprint upon the empirical data, and subtly guide the interpretation of all experience.

Aristotle himself is perhaps the best exemplification of this schematization of empirical knowledge according to philosophical categories. The canonical quinquepartite division of the senses goes back at least to Aristotle’s On the Soul (commonly known as De anima). That our senses consist of seeing, hearing, smelling, tasting, and touching is an idea due to Aristotle’s De anima, and while this division is based on human faculties of perception and has intuitive plausibility, there are ways in which the division is arbitrary. This is one of my favorite works by Aristotle, so I hope the reader will understand that when I say Aristotle’s division of experience into five senses is arbitrary, I say so as a reader who is sympathetic to Aristotle’s account.

The Aristotelian division of the senses into five has bequeathed us an impoverished conception of the self. If we think of how the sense of touch is described and incorporated into accounts of the senses, it is as though we were only capable of experiencing bodies as objectified, touched (or touching) from the outside but not felt from within. And yet we experience ourselves from within more continuously than any other form of human experience — even when we close our eyes and stop our ears. Interoception is how we experience our own bodies from the inside. That is to say, a part of the world is “wired” from within by our nervous system (which is itself part of the world in turn), and reveals itself to us viscerally. This is one of the consequences of the fact that we human beings constitute the universe experiencing itself (albeit not the whole of the universe, but only a very small part thereof).

Recently philosophy has made significant strides in doing justice to what we feel and what we know through our bodies, which is both complex and subtle, and therefore particularly vulnerable to schematic over-simplifying accounts such as Aristotle’s. (I have noted in several posts that recent philosophy of mind has focused on the embodiment of mind, which may be considered another expression of the felt need to do justice to the body.) There is, for example, a wide recognition of what are called kinesthetic sensations, which are the kind of sensations that you feel when you engage in physical activities. When you run, for example, you don’t merely feel the onrush of air evaporating your sweat on the surface of your skin, you also feel your muscles straining, and if something goes wrong you will really feel that. And unless you have one of many disorders, your body has an almost perfect subconscious knowledge of where each limb is in relation to every other limb, which is why we are able to feed ourselves without thinking about it. Because we don’t think about it, but have reduced this knowledge to habit, we don’t think of it as either sensation or knowledge, but it is both.

Even Sam Harris, who doesn’t spend much time on general epistemological inquiries in his books, made a point of citing a litany of bodily sensations:

“Your nervous system sections the undifferentiated buzz of the universe into separate channels of sight, sound, smell, taste, and touch, as well as other senses of lesser renown — proprioception, kinesthesia, enteroreception, and even echolocation.”

Sam Harris, The End of Faith: Religion, Terror, and the Future of Reason, New York and London: W. W. Norton & Company, 2005, “Reason in Exile,” p. 41

In this quote, with its allusion to the “undifferentiated buzz” of experience, there is a hint of William James:

“The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing, confusion; and to the very end of life, our location of all things in one space is due to the fact that the original extents or bignesses of all the sensations which came to our notice at once, coalesced together into one and the same space.”

William James, The Principles of Psychology, 1890, Chapter XIII, “Discrimination and Comparison”

James in this short passage has put his finger right on two crucial aspects of perception: that the world comes to us in an undifferentiated welter of sensations, and that we somehow seamlessly knit together this welter into one and the same world. Much as our familiar senses are fully integrated in our experience, so that we experience one world, and not a world of sight, a world of sound, so too our visceral sensations of proprioception, kinaesthesia, and interoception are so subtly integrated that it is only with difficulty that we can distinguish them.

The example of echolocation (which Harris includes in his litany while admitting in a footnote that it is not very acute in human beings, but is still present in a limited sense) is especially interesting, because it is a function of hearing that is not exactly identical to hearing as we usually think of it (that is to say, hearing that lies outside the Aristotelian template). Moreover, the sensory apparatus inside our skulls that is responsible for hearing is also responsible for vestibular sensations (see glossary below), so that one and the same sense organ allows us more than one perspective on one and the same world.

The seamless integration of sense experience is one of the great unappreciated aspects of the senses in philosophy. Of course, Kant’s transcendental aesthetic was centrally concerned with this problem; there is Husserl on passive synthesis; there is (or was) Gestalt psychology; and there are other theories as to how this happens; but none of these is quite right. None of these formulations really drives home the blooming, buzzing confusion of sensation and the unity of the world this sensation reveals. This is the paradox of the one and the many as it manifests itself in sensation.

The feeling of weight, of how one’s body relates to the Earth and to other bodies, is a sensation so subtle and complex, involving both the senses recognized by Aristotle and the bodily sensations that Aristotle passed over in silence, that it is extraordinarily difficult to say where one sensation of weight leaves off and another picks up. Consequently, the feeling of weight is difficult to analyze, and, most notably, its relation to sight — which seems to provide the greater part of our conscious experience of the world — is negligible. When we realize how typically we express knowledge in visual metaphors — e.g., I see what you mean — the disconnect between sight and the feeling of weight takes on a special significance.

To introduce the feeling of weight immediately suggests also the feeling of weightlessness — zero gravity or microgravity conditions, as one experiences in Earth orbit or in deep space. Only a very small number of human beings have experienced weightlessness, and I am not among those few, but I will assume that interoception is fully implicated in the experience of weightlessness. But it is much more than this. Simply put, the experience of weight is the experience of gravity, and, by way of interoception, our body entire is an organ for the sensation of the very fabric of spacetime — our knowledge of the external world by way of our knowledge of the internal world.

When we stand on the surface of Earth and look up at the stars, we also feel the gravity of Earth throughout our body, pulling insistently on every part of us and forcing us to recognize continuously and without exception our physical relationship to Earth. In the most intimate and visceral ways we sense through our animal bodies the great forces that shape planets, stars, galaxies, and the universe entire. We know spacetime not as a mere abstraction, but as a constitutive part of our being. This intimate knowledge of spacetime has shaped our intuitive knowledge and understanding of our place in the cosmos, much as our ability to see the stars has similarly shaped our sense of ourselves as part of the universe. (This is what I called, in a recent post on my other blog, Visceral Cosmology.)

It is not only the visceral sensation of our own spatiality that we know through interoception, but also our own temporality. We not only sense time in the Aristotelian sense as the measure of motion (seeing change in the world), but our minds also give us a personal consciousness of the passage of time. This is as remarkable as our sensation of gravity (i.e., of spacetime curvature). Our internal time consciousness, so tied up in our personal identity, reflects the larger temporal structure of the universe, pointing in the same direction as the other arrows of time, and giving us another immediate form of intuition into the very structure of the world. The gnawing tooth of time that ultimately shapes everything in the world also gnaws away inside us.

Our minds, and the intuitions they have about the world, have been no less shaped by gravity and time than have our bodies. And in so far as gravity is the distortion of spacetime in the presence of mass, our visceral feelings of weight, as well as our consciousness of time, give us an immediate intuitive perception of the curvature of spacetime. We possess a kind of interoception of the cosmos. We feel the world in our bones and sinews, as it were.

Here lies a crucial clue to understanding the Overview Effect (cf. The Epistemic Overview Effect, The Overview Effect as Perspective Taking, Hegel and the Overview Effect, and The Overview Effect in Formal Thought). Discussions of the overview effect tend to focus on seeing the Earth whole from space, and this is no doubt crucial to the experience, but the viscerality of the experience comes from the countless sensations of microgravity that are too subtle to describe and too numerous to clearly differentiate. It is the visceral experience of being off the surface of Earth, combined with the evidence of one’s eyes that Earth lies before one, suspended in space as one is oneself suspended in space, that is the overview effect.

All human history up until the flight of Yuri Gagarin had taken place on the surface of Earth. In Wittgensteinian terms, nothing up to that point in time had contrasted with the form of terrestrial experience (cf. Nothing contrasts with the form of the world). With the visceral experience of being in space, suddenly there is a contrast where before there was none: the sensation of being on Earth, and the sensation of being off the surface of Earth, and subject to distinct (and distinctively different) gravitational conditions. The conditions of weight and weightlessness now define polar concepts, between which are a continuum of graded sensation; the polar concepts take part of their meaning from their contrast with the opposite polar concept, as do all points of experience along the continuum of the experience of weight.

Further technological developments that allow for unprecedented forms of human experience will also result in novel experiences of interoception. When we eventually build large artificial structures in space and spin them in order to imitate terrestrial gravity, there may be some individuals who cannot distinguish between this imitation of gravity and gravity on the surface of Earth, while other individuals may feel a difference. Some individuals may be made ill by the sensation, and in this way artificial structures will be strongly selective of who remains there — and therefore strongly selective of who does and does not create the human future in space.

When, in the further future, our technology allows us to travel at relativistic velocities, we will have yet further experiences of acceleration and of our personal consciousness of time in relation to time dilation, and the twin paradox that I have recently discussed (e.g., in Kierkegaard and Futurism) will prove to be not a limitation, but rather a revelation. We will learn things about ourselves and about the human condition that could not be learned in any other way than the actual experience of living in various extraterrestrial environments.
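
Since time dilation is doing real work in this thought, it may help to recall the standard special-relativistic formula behind the twin paradox, which is all that the following small calculation uses; the numbers are illustrative assumptions, not a claim about any particular journey.

import math

def traveler_elapsed_years(earth_years, speed_fraction_of_c):
    # Proper time experienced by the traveler: tau = t * sqrt(1 - v^2/c^2)
    return earth_years * math.sqrt(1.0 - speed_fraction_of_c ** 2)

# At 90 percent of light speed, ten years of Earth time pass as roughly 4.36 years
# for the traveler; this asymmetry is the twin paradox referred to above.
print(round(traveler_elapsed_years(10.0, 0.9), 2))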

The overview effect is only the beginning of the human, all-too-human experience of space travel. The exploration of space will not only open new worlds to us beyond Earth, but will also open new inner worlds to us as the human condition expands to comprise unprecedented experiences that can have no parallel on Earth.

. . . . .

A Note on Terminology: terminology is important, because our vocabulary for the internal experience of our bodies is relatively impoverished in comparison with the vocabulary at our command when it comes to our knowledge of the external world. Neither “interoception” nor “enteroreception” appears in the Oxford English Dictionary. The Free Online Dictionary defines “interoception” as “sensitivity to stimuli originating inside of the body.”

I found this distinction made between “enteroreception” and “exteroreception”: “Enteroreception or changes within the organism that are detected by receptor cells within the organism. Exteroreception or changes that occur outside the organism that are detected by receptor cells at the surface of the organism.”

I am here using “interoception” as a blanket term to cover all forms of visceral perception and sensation, though it might be worth considering coining a new term to cover all these uses, such as, for example, endoception.

There is an interesting glossary of terms related to interoception in The Senses of Touch: Haptics, Affects and Technologies by Mark Paterson (New York and Oxford: Berg, 2007):

Haptic: Relating to the sense of touch in all its forms, including those below.

Proprioception: Perception of the position, state and movement of the body and limbs in space. Includes cutaneous, kinaesthetic, and vestibular sensations.

Vestibular: Pertaining to the perception of balance, head position, acceleration and deceleration. Information obtained from semi-circular canals in the inner ear.

Kinaesthesia: The sensation of movement of body and limbs. Relating to sensations originating in muscles, tendons and joints.

Cutaneous: Pertaining to the skin itself or the skin as a sense organ. Includes sensation of pressure, temperature and pain.

Tactile: Pertaining to the cutaneous sense, but more specifically the sensation of pressure (from mechanoreceptors) rather than temperature (thermoceptors) or pain (nociceptors).

Force Feedback: Relating to the mechanical production of information sensed by the human kinaesthetic system. Devices provide cutaneous and kinaesthetic feedback that usually correlates to the visual display.

. . . . .

Astronaut-in-Microgravity

. . . . .


The Genocidal Species

15 March 2014

Saturday


hominid-evolution

Homo sapiens is the genocidal species. I have long had it on my mind to write about this. I have the idea incorporated in an unpublished manuscript, but I don’t know if it will ever see the light of day, so I will give a brief exposition here. What does it mean to say that Homo sapiens is the genocidal species (or, if you prefer, a genocidal animal)?

Early human history is a source of controversy that exceeds the controversy over the scientific issues at stake. It is not difficult to understand why this is the case. Controversies over human origins are about us, what we are as a species, notwithstanding the obvious fact that we are in no way limited by our past, and we may become many things that have no precedent in our long history. Moreover, the kind of evidence that we have of human origins is not such as to provide us with the kind of narrative that we would like to have of our early ancestors. We have the evidence of scientific historiography, but no poignant human interest stories. In so far as our personal experience of life paradoxically provides the big picture narrative by which we understand the world (a point I tried to make in Kierkegaard and Futurism), the absence of a personal account of our origins is an ellipsis of great consequence.

To assert that humanity is a genocidal species is obviously a tendentious, if not controversial, claim to make. I make this claim partly because it is controversial, because we have seen the human past treated with excessive care and caution, because, as I said above, it is about us. We don’t like to think of ourselves as intrinsically genocidal in virtue of our biology. Indeed, when a controversial claim such as this is made, one can count on such a claim being dismissed not on grounds of evidence, or the lack thereof, but because it is taken to imply biological determinism. According to this reasoning, an essentialist reading of our history shows us that we are genocidal, therefore we cannot be anything other than genocidal. Apart from being logically flawed, this response misses the point and fails to engage the issue.

Yet, in saying that man is a genocidal species, I am obviously making an implicit reference to a long tradition of pronouncing humanity to be this or that, as when Plato said that man is a featherless biped. This is, by the way, a rare moment providing a glimpse into Plato’s naturalism. There is a story that, hearing this definition, Diogenes of Sinope plucked a chicken and brought it to Plato’s Academy, saying, “Here is Plato’s man.” (Perhaps he should have said, “Ecce homo!”) This, in turn, reveals Diogenes’ non-naturalism (as uncharacteristic as Plato’s naturalism). Plato is supposed to have responded by adding to his definition, “with broad, flat nails.”

Aristotle, most famously of all, said that man is by nature a political animal. This has been variously translated from the Greek as, “Man is by nature an animal that lives in a polis,” and, “Man is by nature a social animal.” This I do not dispute. However, once we recognize that homo sapiens is a social or political animal (and Aristotle, as the Father of the Occidental sciences, would have enthusiastically approved of the transition from “man” to “homo sapiens”), we must then take the next step and ask what exactly is the nature of human sociability, or human political society. What does it mean for homo sapiens to be a political animal?

If Clausewitz was right, political action is one pole of a smoothly graduated continuum, the other pole of which is war, because, according to Clausewitz, war is the continuation of policy by other means (cf. The Clausewitzean Continuum). This claim is equivalent to the claim that politics is the continuation of war by other means (the Foucauldian inversion of Clausewitz). Thus war and politics are substitutable salva veritate, so that homo sapiens the political animal is also homo sapiens the military animal.

I don’t know if anyone has ever said, man is a military animal, but Freud came close to this in a powerful passage that I have quoted previously (in A Note on Social Contract Theory):

“…men are not gentle creatures who want to be loved, and who at the most can defend themselves if they are attacked; they are, on the contrary, creatures among whose instinctual endowments is to be reckoned a powerful share of aggressiveness. As a result, their neighbor is for them not only a potential helper or sexual object, but also someone who tempts them to satisfy their aggressiveness on him, to exploit his capacity for work without compensation, to use him sexually without his consent, to seize his possessions, to humiliate him, to cause him pain, to torture and to kill him. Homo homini lupus. Who, in the face of all his experience of life and of history, will have the courage to dispute this assertion? As a rule this cruel aggressiveness waits for some provocation or puts itself at the service of some other purpose, whose goal might also have been reached by milder measures. In circumstances that are favorable to it, when the mental counter-forces which ordinarily inhibit it are out of action, it also manifests itself spontaneously and reveals man as a savage beast to whom consideration towards his own kind is something alien.”

Is it unimaginable that it is this aggressive instinct, at least in part, that made it possible for homo sapiens to out-compete every other branch of the hominid tree, and to leave itself as the only remaining hominid species? We are, existentially speaking, El último hombre — the last man standing.

What was the nature of the competition by which homo sapiens drove every other hominid to extinction? Over the multi-million-year history of hominids on Earth, it seems likely that the competition among hominids assumed every possible form at one time or another. Some anthropologists have observed that a reproductive success rate only marginally higher than that of other hominid species would have, over time, guaranteed our demographic dominance. This gives the comforting picture of a peaceful and very slow process of one hominid species supplanting another. No doubt some of homo sapiens’ triumphs were of this nature, but there must also have been, at some time in the deep time of our past, violent and brutal episodes when we actively drove our fellow hominids into extinction — much as throughout the later history of homo sapiens one community frequently massacred another.

A recent book on genocide, The Specter of Genocide: Mass Murder in Historical Perspective (edited by Robert Gellately, Clark University, and Ben Kiernan, Yale University), is limited in its “historical perspective” to the twentieth century. I think we must go much deeper into our history. In an even larger evolutionary framework than that employed above, if we take the conception of humanity as a genocidal species in the context of Peter Ward’s Medea Hypothesis, according to which life itself is biocidal, then humanity’s genocidal instincts are merely a particular case (with the added element of conscious agency) of a universal biological imperative. Here is how Ward defines his Medea Hypothesis:

Habitability of the Earth has been affected by the presence of life, but the overall effect of life has been and will be to reduce the longevity of the Earth as a habitable planet. Life itself, because it is inherently Darwinian, is biocidal, suicidal, and creates a series of positive feedbacks to Earth systems (such as global temperature and atmospheric carbon dioxide and methane content) that harm later generations. Thus it is life that will cause the end of itself, on this or any planet inhabited by Darwinian life, through perturbation and changes of either temperature, atmospheric gas composition, or elemental cycles to values inimical to life.

Ward, Peter, The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? Princeton and Oxford: Princeton University Press, 2009, p. 35

Ward goes on to elaborate his Medea Hypothesis in greater detail in the following four hypotheses:

1. All species increase in population not only to the carrying capacity as defined by some or a number of limiting factors, but to levels beyond that capacity, thus causing a death rate higher than would otherwise have been dictated by limiting resources.

2. Life is self-poisoning in closed systems. The byproduct of species metabolism is usually toxic unless dispersed away. Animals produce carbon dioxide and liquid and solid waste. In closed spaces this material can build up to levels lethal either through direct poisoning or by allowing other kinds of organisms living at low levels (such as the microbes living in animal guts and carried along with fecal wastes) to bloom into populations that also produce toxins from their own metabolisms.

3. In ecosystems with more than a single species there will be competition for resources, ultimately leading to extinction or emigration of some of the original species.

4. Life produces a variety of feedbacks in Earth systems. The majority are positive, however.

Ward, Peter, The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? Princeton and Oxford: Princeton University Press, 2009, pp. 35-36

The experience of industrial-technological civilization has added a new dimension to hypothesis 2 above, as industrial processes and their wastes have been added to biological processes and their wastes, leading to forms of poisoning that do not occur unless facilitated by civilization. Moreover, a corollary to hypothesis 3 above (call it 3a, if you like) might be formulated such that those species within an ecosystem that seek to fill the same niche (i.e., that feed off the same trophic level) will be in more direct competition than those species feeding off distinct trophic levels. In this way, multiple hominid species that found themselves in the same ecosystem would be trying to fill the same niche, leading to extinction or emigration. Once homo sapiens achieved extensive totality in the distribution of its species range, however, there was nowhere else for competitors to emigrate, so if they were out-competed, they simply went extinct.
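
The competitive exclusion suggested by corollary 3a can be illustrated with a toy model. The following sketch uses the textbook Lotka-Volterra competition equations with illustrative parameters of my own choosing; it is a generic ecological illustration, not a reconstruction of hominid population dynamics. When one species presses harder on its competitor’s shared niche than the competitor presses back, the competitor is driven toward extinction even though both begin on equal terms.

# A toy Lotka-Volterra competition model: two populations competing for the same
# niche, integrated with a simple Euler step. Parameters are purely illustrative.
def compete(n1, n2, steps=20000, dt=0.01,
            r1=0.2, r2=0.2, k1=1000.0, k2=1000.0, a12=0.9, a21=1.2):
    # a21 > 1 means species 1 presses harder on species 2's niche than
    # species 2 presses on its own, so species 2 is competitively excluded.
    for _ in range(steps):
        dn1 = r1 * n1 * (1.0 - (n1 + a12 * n2) / k1)
        dn2 = r2 * n2 * (1.0 - (n2 + a21 * n1) / k2)
        n1, n2 = max(n1 + dn1 * dt, 0.0), max(n2 + dn2 * dt, 0.0)
    return n1, n2

# Starting from equal populations, species 2 dwindles toward extinction
# while species 1 approaches its carrying capacity.
print(compete(100.0, 100.0))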

Ward was not the first to focus on the destructive aspects of life. I have previously quoted the great biologist Ernst Haeckel, who defined ecology as the science of the struggle for existence (cf. Metaphysical Ecology Reformulated), and of course in the same vein there is the whole tradition of nature red in tooth and claw. Such visions of nature no longer hold the attraction that they exercised in the nineteenth century, and such phrases have been criticized, but it may be that these expressions of the deadly face of nature did not go far enough.

There is a sense in which all life is genocidal, and this is the Medean Hypothesis; what distinguishes human beings is that we have made genocide planned, purposeful, systematic, and conscious. The genocidal campaigns that have punctuated modern history, and especially those of the twentieth century, represent the conscious implementation of Medean life. We knowingly engage in genocide. Genocide is now a policy option for political societies, and in so far as we are political animals all policy options are “on the table,” so to speak. It is this that makes us the uniquely genocidal species.

. . . . .


Friday


franklin quote

It is often said that there has never been a good war or a bad peace. I disagree with this. There have been many periods of human history that have been called peaceful but which have not constituted peace worthy of the name. We must allow at least the possibility that if a short, decisive war can bring a rapid end to a peace not worthy of the name, and substitute for it something more closely approximating an ideal peace, then such a war would not necessarily be a bad thing. I am not making the claim that such a situation is often exemplified in human history (i.e., “good” wars are not often exemplified, although history has many examples of a bad peace), nor even that, when such a condition obtains, it is recognizable by us, but only that it is possible that such a condition obtains.

Yet to focus on war and peace as though they were polar opposites is likely to be counter-productive because misleading. War and peace are related in a way not unlike love and hatred. As we have all heard, it is indifference that is the antithesis of love, not hate. In other words, war and peace lie along a continuum, and a continuum is characterized by a smooth gradation between two opposed states. And so the complexity of history often reveals to us the smooth, imperceptible gradation between war and peace. In escalation, we have the gradual transition from peace to war, and in de-escalation we have the gradual transition from war to peace.

The dialectic of war and peace, unfolding as the pendulum of history swings between the poles of war and peace, yields distinct species of war and peace as the development of history forces the realization of each polar concept in turn to take novel forms in the light of unprecedented historical developments. I have elsewhere argued that war is likely an ineradicable feature of civilization (cf. Invariant Properties of Civilization), i.e., the two — war and peace — are locked together in a co-evolutionary spiral so that you cannot have the one without the other.

We would like to think that peace is the equilibrium state to which society returns, and in which equilibrium it remains until this equilibrium is disturbed by war, and that war is a disequilibrium condition which must inevitably give way to the equilibrium condition of peace. This is wishful thinking. Of course, if one is dedicated to this idea one can certainly interpret history in this way, but the fit between the interpretation and the facts is not a good one, and considerable hermeneutical ingenuity must be invested to try to make the interpretation look plausible. In other words, we must tie ourselves in knots in order to try to make this interpretation work; it is not prima facie plausible.

This last point is sufficiently interesting that I would like to pause over it for a moment. I can remember the first time that I came to realize that history is a powerful tool for conveying an interpretation, not a vehicle for the conveyance of facts. History isn’t just an account of the past, a chronicle of names, dates, and places, that only becomes distorted when an historian with an agenda twists the material in order to make it serve a moral, social, or political function. All history, one way or another, conveys an interpretation. I came to this conclusion not from the study of war, but from the study of logic. Many years ago I was trying to write a comprehensive history of logic, and the more deeply I penetrated into the subject matter from the perspective of the historian that I wanted to be, the more I realized that, no matter how I told the story, it would still be my story.

That all history — including contemporary history — involves interpretation does not make it arbitrary or merely idiosyncratic. The best histories robustly embody the temperament of their authors, and one knows when one is reading what the author’s point of view is, whether or not one agrees with it. This is true of all the great histories from Herodotus to Braudel.

One certainly could write a history of civilization in which peace is an equilibrium condition, from which war is a pathological departure, and this might well be a powerful interpretation of the human condition. One could just as easily write a history of civilization in which war is the equilibrium condition, from which peace is the pathological departure. We have histories of the first variety, but very few of the second variety, mostly because people simply do not want to believe that war is the norm and peace a suspension of the norm.

Clausewitz famously held that war and peace are two sides of the same coin:

War is a mere continuation of policy by other means. We see, therefore, that war is not merely a political act, but also a real political instrument, a continuation of political commerce, a carrying out of the same by other means. All beyond this which is strictly peculiar to war relates merely to the peculiar nature of the means which it uses. That the tendencies and views of policy shall not be incompatible with these means, the art of war in general and the commander in each particular case may demand, and this claim is truly not a trifling one. But however powerfully this may react on political views in particular cases, still it must always be regarded as only a modification of them; for the political view is the object, war is the means, and the means must always include the object in our conception.

Carl von Clausewitz, On War, Book 1, Chapter 1, section 24

This is the Clausewitzean continuum: war and peace are what philosophers call polar concepts — concepts that anchor two ends of a single continuum — and each derives its meaning from its contrast with the other. Between the two polar concepts is a graduated continuum in which one is either closer to one end or the other of the continuum, but the positions on the intervening continuum do not perfectly exemplify the polar concepts, which are sometimes idealizations never realized in actual fact.

Foucault made the obvious inversion of this Clausewitzean dictum, namely, that politics is the continuation of war by other means (cf. Foucault on Strategy and A Clausewitzean Conception of Philosophy).

In light of Clausewitz’s dictum on the convertibility of war and politics, Clausewitz’s philosophy of war is at the same time a philosophy of politics, and, by extension, a philosophy of civilization, as I have characterized it in A Clausewitzean Conception of Civilization and Civilization, War, and Industrial Technology.

Whether or not we can transcend this dialectic of polar concepts and attain a realization of civilization that does not derive its meaning from its polar opposite, warfare, will be an inquiry for another time.

. . . . .

Carl von Clausewitz

. . . . .
