Induced Failure in Complex Adaptive Systems

Friday 12 August 2011


Decoy tanks have been in use almost as long as tanks themselves, employed in an attempt to persuade an enemy to learn the wrong lesson from its intelligence gathering activities.

To speak of induced failure in complex adaptive systems is already to acknowledge a distinction between induced and non-induced failure, and beyond this we can distinguish further between failure that is purposefully induced and failure that is induced as the unintended consequence of some other action, i.e., failure as an externality. An obvious example of purposefully induced failure is military action undertaken with the intention of causing catastrophic failure on the part of enemy forces. An equally obvious example of induced failure as an unintended consequence is the environmental damage that results from pollution and the pressures of industrialized society on the ecosystem.

It could be argued that the 1997 Asian financial crisis, which was precipitated by the collapse of the value of the Thai baht (much as the global financial crisis of 2008-2009 was precipitated by the subprime mortgage crisis), was, for all intents and purposes, an intended failure, as investors had placed the Thai baht under considerable pressure by shorting the currency on international currency markets. When a nation-state (or even a quasi-state entity like the Eurozone) finds its currency under pressure from international speculators, it will often protest that the speculators are at fault, while the speculators will say that they are only trying to make a profit, and that they serve a valuable function within the financial community by bringing vulnerabilities to light.

In the past few days we have seen some dramatic examples of this sort of thing. The downgrading of US government securities by Standard & Poor’s was called a “mistake” by Gene Sperling (director of the National Economic Council), when it was clearly a carefully deliberated decision, especially in the timing of its announcement after the close of business on Friday, giving markets time to absorb the news before opening on Monday. Meanwhile, Greece and Turkey enacted bans on short selling, although European regulators could not agree on a wide-ranging ban on short sales. Are we to say that this week’s market turmoil was induced by Standard & Poor’s downgrade, so that the ratings agency has a measure of historical agency in bringing it about, or is the ratings agency merely the canary in the coal mine?

Clearly, it becomes a matter of how the boundary is drawn between agency and the absence of agency, just as it is a matter of how we draw the boundary between induced and non-induced failure. I think it would be quite difficult to formulate an adequate theoretical definition of non-induced failure, and in fact I am not prepared even to suggest one at this time. Since non-induced failure is failure that “just happens,” there will always be claims made for agency in failure, including the agency of natural forces (say, friction). So the better method here is to try to understand induced failure better, and then to define non-induced failure as the complement of the cases of induced failure.

In the present context, we will call a purposefully induced failure a formal failure, while a failure that results from unintended consequences will be called an informal failure. According to this terminology, an old building that has been dynamited to bring it down has experienced a formal failure, while a building that collapses because of shoddy design or construction practices (as in the Hyatt Regency walkway collapse) is an example of informal failure.

It is a standard mode of argument among conspiracy theorists to claim that an informal failure is really a formal failure, though the mechanisms of purpose in the failure have been disguised by nefarious agents, so that what appears at first sight to be an informal failure is in fact a formal failure. It could be argued that the attempt to impose purpose upon informal failures is a consequence of what evolutionary psychologists call the agency detector. On an intuitive level, it doesn’t take much sophistication to understand that 1) individuals want to believe that they understand things others do not, and 2) this intellectual form of self-aggrandizement plays a role in drawing the boundary between formal and informal failures so as to exclude all informal failures. This, however, is ultimately uninteresting, and I maintain that there is a valid distinction between formal and informal failure. Just as a cigar is sometimes just a cigar, so too failure is sometimes just failure and involves no agency.

At a somewhat higher level of sophistication, it is a standard mode of argument among ideologically motivated partisans that, although informal failures are technically informal, any reasonable and responsible person should have foreseen the unintended consequences that would follow from their actions, so that if people would just take their blinders off they would see informal failures for the formal failures that they are. Such an argument implies self-deception at some level, whether on the part of participants who are following orders or on the part of those issuing the orders. This argument is important because it calls attention to the role of self-deception in understanding the world (and I believe the role of self-deception in human affairs to be underestimated), but it is easy to make sweeping claims in this regard which, when pressed, lead to the denial of the very possibility of informal failure, and this denial in turn leads to the search for agents responsible for the supposed formal failure: scapegoating and witch-hunts.

The universal search for scapegoats is just as uninteresting as the universal search for nefarious and hidden agents, and so I reject the ideological attempt to draw the boundary between formal and informal failure so as to exclude all informal failure. I have said elsewhere, in another context, that the facts do not speak for themselves. This bears repeating, as does the observation that what is obvious to one person in terms of unintended consequences is in no sense obvious to another person.

There is as yet no standard definition for complex adaptive systems; the discipline is too recent to have settled upon the requisite conventions. The Wikipedia article on complex adaptive systems cites a definition by John Henry Holland: “Cas [complex adaptive systems] are systems that have large numbers of components, often called agents, that interact and adapt or learn.”

If the agents that constitute a complex adaptive system fail to adapt, or adapt poorly, fail to learn, or learn the wrong lessons, then the system is vulnerable to failure. If a complex adaptive system can be induced to adapt poorly, or induced to learn the wrong lesson, then it can be induced to reveal vulnerabilities. If a vulnerability is induced intentionally (that is to say, if it is a formal failure), it can be exploited to bring about catastrophic failure cascading from the point of the vulnerability.
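To make this concrete, here is a minimal sketch in Python of how a single induced failure might cascade. It uses a simple threshold-contagion model as a stand-in for a complex adaptive system; the network, the thresholds, and all the names and parameters are hypothetical, chosen only to illustrate the idea.

```python
import random

def make_network(n_agents=100, links_per_agent=2, seed=42):
    """Build a toy system: each agent interacts with a few random neighbors."""
    rng = random.Random(seed)
    neighbors = {i: set() for i in range(n_agents)}
    for i in range(n_agents):
        while len(neighbors[i]) < links_per_agent:
            j = rng.randrange(n_agents)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

def cascade(neighbors, induced_agent, threshold=0.25):
    """An adversary induces one agent to 'learn the wrong lesson' (it fails);
    any agent whose fraction of failed neighbors reaches its adaptive
    threshold then fails in turn, and the failure propagates."""
    failed = {induced_agent}
    changed = True
    while changed:
        changed = False
        for agent, links in neighbors.items():
            if agent in failed or not links:
                continue
            if len(links & failed) / len(links) >= threshold:
                failed.add(agent)
                changed = True
    return failed

# With these toy parameters a single induced failure is usually enough
# to tip each neighbor, so the failure cascades through most of the system.
net = make_network()
failed = cascade(net, induced_agent=0)
print(f"{len(failed)} of {len(net)} agents failed")
```

The point of the sketch is only that the catastrophic outcome is wholly out of proportion to the single induced failure that set it off.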

As we follow out this reasoning we must be careful, because matters become complicated very quickly. In all of the above cases we must distinguish between formally inducing failure and informally inducing failure. Taking the example of environmental degradation, we know that some industrial chemicals allowed into the biosphere mimic naturally occurring substances and replace them, sometimes to deleterious effect. This is an informally induced poor adaptation that results in a vulnerability. Taking the example of military defeat, a campaign of disinformation can cause the enemy to “learn” the wrong lesson, and this can be calculated to open a vulnerability. This is a formally induced learning of an incorrect lesson.

Adaptation and learning occur in the context of interaction, and interaction takes place at many different levels. Following my adaptation of Bronfenbrenner’s bioecological model (cf. Metaphysical Ecology), I hold that interaction takes place on five levels of metaphysical ecology:

micro-interaction
meso-interaction
exo-interaction
macro-interaction
metaphysical interaction

Such interaction may take place simultaneously across many different ecological levels, or at one or several levels. All of these interactions carry with them the possibility of adaptation and learning on the part of the agents primarily functioning on the levels in question, and all of these interactions carry with them the possibility of formal or informal failure.

We know from ordinary experience how a complex adaptive system can fail on one level, and how this failure can cascade, bringing about a catastrophic failure of the entire system, even when other ecological levels of the complex adaptive system have learned and adapted appropriately. For example, during wars one always hears of soldiers learning lessons on the battlefield (micro- and meso-level learning) that have not been learned at an institutional level (meso- and exo-level learning), so that the institution goes on making the same mistake, a mistake the soldiers know to be a mistake but cannot correct because they are not empowered to bring about institutional change. These kinds of failures are also very common in business, when frontline employees know policies to be failing but are required by management to continue a failing policy because the lesson has not yet been learned at an institutional level.
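A minimal sketch may help fix the idea. Assuming nothing beyond the five levels listed above, the following Python fragment models lessons that propagate only along explicit feedback channels between levels; the System class and the channel rule are hypothetical illustrations, not a fixed formalism.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Level(Enum):
    """The five levels of metaphysical ecology listed above."""
    MICRO = auto()
    MESO = auto()
    EXO = auto()
    MACRO = auto()
    METAPHYSICAL = auto()

@dataclass
class System:
    # Levels at which a given lesson has been learned.
    learned: set = field(default_factory=set)
    # Feedback channels (from_level, to_level) along which lessons can
    # propagate; an empty set means complete compartmentalization.
    channels: set = field(default_factory=set)

    def learn(self, level):
        """A lesson learned at one level reaches other levels only
        through existing feedback channels."""
        if level in self.learned:
            return
        self.learned.add(level)
        for src, dst in self.channels:
            if src == level:
                self.learn(dst)

# Soldiers learn a battlefield lesson (micro- and meso-levels), but no
# channel reaches the institutional (exo) level, so the institution
# goes on making the same mistake:
army = System(channels={(Level.MICRO, Level.MESO)})
army.learn(Level.MICRO)
print(Level.EXO in army.learned)  # False: the lesson never reaches the institution
```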

On the other side of this dialectic, it is often the case that people who see the big picture clearly understand the nature of a problem and have learned their lessons (on the meso- and exo-levels), but are, for one reason or another, unable to communicate this understanding to the meso- and micro-levels, where the same mistakes continue to be made. This is clearly the case with social workers who understand the roots of interpersonal violence (IPV) in families and communities: although they seek to educate families and communities with all the resources they have available, the same problems continue to appear over and over again.

I assume both of the above examples to be generalizable throughout metaphysical ecology, which means that even in ecological systems (and complex adaptive systems are ecological systems) there is just enough compartmentalization for an isolated failure to develop to the point that it can cause a cascading catastrophic failure, even if successful adaptation and effective learning are taking place on other ecological levels.

I assume that in a highly sensitive complex adaptive system, minor failures and disturbances would be rapidly transmitted up and down through all ecological levels of the system. Insofar as learning and adaptation are global (meaning not that they take place on the highest ecological level, but that they take place across all ecological levels, with a feedback loop that allows one level to learn from the adaptations and learning of other levels), I suggest that a highly sensitive complex adaptive system, while superficially fragile, may represent the more robust and resilient form of order.
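Returning to the hypothetical sketch above, a highly sensitive system in this sense is one with feedback channels between every pair of levels, so that a lesson (or a disturbance) arising at any one level is transmitted to all of them:

```python
from itertools import permutations

# Channels between every ordered pair of levels: maximal sensitivity.
sensitive = System(channels=set(permutations(Level, 2)))
sensitive.learn(Level.MICRO)
print(sensitive.learned == set(Level))  # True: learning is global
```

Such a system looks fragile, since nothing that happens at any level is sealed off from the others, but on the argument above it is precisely this transmissibility that lets every level adapt before a local failure grows into a catastrophe.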

The ability to learn from what others have learned (which I have expressed here as learning lessons and adaptations from other ecological levels) might be called higher-order learning, but this is a fancy name for a simple idea: that you don’t have to be the one to burn your finger on the stove to know that it is hot. There is a kind of intellectual maturity involved in learning from the lessons of others, and when this intellectual maturity can be integrated into institutions, the resulting institutions possess a much higher degree of resiliency than those that lack this capacity.

. . . . .

Grand Strategy Annex

. . . . .

2 Responses to “Induced Failure in Complex Adaptive Systems”

  1. You are reinventing the wheel. Have you read Jean Baudrillard on this? The Japanese believe that there is a demon inside every piece of technology.

    Kind of like gremlins in air force planes in World War II.

    • geopolicraticus said

      Well, I suppose that there will always be a certain unavoidable amount of the duplication of philosophical labor, since no one can take in everything. If you’ll steer me toward the appropriate place in Baudrillard I would be happy to see what he has to say about this.

      However, it seems to me, intuitively speaking, that the notion of “gremlins” or of a demon inside every technology is a paradigm case of technological failure that just happens, which I mentioned above but did not characterize because I had no ready definition. I wanted to focus on technological failures that are induced, and an induced failure, whether intentional or unintentional, seems to me to be very different from a failure attributable to gremlins in the works — that is to say, attributable to no known cause.

      I guess I am assuming that an induced cause of technological failure is discoverable through rational inquiry. I could either re-cast this assumption as a principle, or I could pose it as a question for further inquiry.

      Best wishes,

      Nick
