Adaptation and Viability

14 June 2011


The Tacoma Narrows Bridge, completed in 1940, suffered catastrophic failure as a result of aeroelastic flutter.

A few more words about failure, if you will. Some days ago in Complex Systems and Complex Failure I wanted to make the point that, while failures in complex systems may have simple beginnings, the actual collapse of the complex system is as complex as the system itself. There is a sense in which this is logically, even tautologically, true. A complex system can’t be said to have experienced catastrophic failure until it has been compromised across a broad range of functionality.

Here is my formulation from Complex Systems and Complex Failure:

“Complex systems fail in complex ways. Moreover, the scope of a catastrophic failure of a complex system is commensurate with the scope of the complex system. This is easy to see intuitively since a catastrophic cascading failure in a complex system must penetrate through all levels of the system and encompass both core and periphery.”

There are many ways that this might be formulated. I regard the above formulation as tentative, since I haven’t yet thought about this enough to have converged upon an optimal formulation. But I’m sure you get the idea.

Ultimately, the failures of complex adaptive systems are as interesting as the complex adaptive systems themselves, and so these failures merit careful study. The theoretical justification for the study of catastrophic and cascading failures is not far to seek. When searching for materials related to the failure of complex systems I found many intuitive formulations of the idea that one learns through failure. I believe that this is true. However, one must be careful, because the case for failure can be overstated.

Theoretical failure is refutation, and Karl Popper famously made a virtue of refutation by asserting that the difference between science and pseudo-science is that scientific hypotheses can be tested and falsified, whereas non-science cannot be falsified. Thus falsifiability — i.e., theoretical failure — is a theoretical virtue. A theory that cannot be falsified also cannot be of much use to us as knowledge.

However, if we look closely we can discern that there are different varieties of falsification. To give Popper an exposition in Kuhnian terms, we can distinguish between falsification in normal science and falsification in revolutionary science. We could also call this local and global falsification. With local falsification, the bulk of the theory remains untouched, and we need only tweak the details in order to set matters right. With global falsification, the bulk of a theory is shown to be false, and a new and perhaps revolutionary theory must be formulated to take its place.

There are pragmatic analogues to these theoretical forms of failure. That is to say, there are “normal,” local failures that do not call a given (social or industrial) infrastructure into question. A flat tire is like this. Most people don’t give up on driving because of a flat tire. But there are “revolutionary” or global failures of infrastructure. For example, chronic traffic congestion may cause someone to give up on driving.

As there are different varieties of failure, so too there are different responses to failure, both on the part of the systems that fail and on the part of those who stand in some relation to the systems that fail.

Complex adaptive systems fail when their ability to adapt to changed conditions is impaired. However, the kind of failure will determine the kind of response that is necessary to the survival of the complex adaptive system. A local failure will call forth a local adaptation; a global failure will call forth a global adaptation. Put otherwise, faced with a local failure, a complex adaptive system will locally adapt or the failure will cascade and perhaps become catastrophic. Faced with a global failure, a complex adaptive system must adapt globally or be annihilated.
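The local-versus-global dynamic described above can be sketched in a toy model. This is my own illustration, not anything from the original post: the ring topology, the load-shedding rule, and the capacity numbers are all assumptions chosen only to make the point visible. A single failure either cascades around the whole ring or is absorbed by local adaptation:

```python
from collections import deque

def cascade(n, capacity, adapt, seed_node=0):
    """Propagate a failure through a ring of n nodes.

    Each node carries load 1.0 and fails when its load exceeds
    `capacity`. A failed node sheds its load onto its two ring
    neighbors. If `adapt` is True, a surviving neighbor responds
    locally by raising its own capacity to absorb the shed load,
    which can arrest the cascade. Returns the number of failed nodes.
    """
    load = [1.0] * n
    cap = [capacity] * n
    failed = [False] * n
    queue = deque([seed_node])
    failed[seed_node] = True
    while queue:
        i = queue.popleft()
        shed = load[i] / 2.0   # split the failed node's load between neighbors
        load[i] = 0.0
        for j in ((i - 1) % n, (i + 1) % n):
            if failed[j]:
                continue
            load[j] += shed
            if adapt:
                cap[j] += shed  # local adaptation: grow capacity to meet the new load
            if load[j] > cap[j]:
                failed[j] = True
                queue.append(j)
    return sum(failed)

# Without adaptation, one seed failure overloads its neighbors and the
# collapse cascades around the entire ring; with local adaptation the
# failure stays local.
print(cascade(100, capacity=1.2, adapt=False))  # 100
print(cascade(100, capacity=1.2, adapt=True))   # 1
```

The model is deliberately crude, but it exhibits the claim in the paragraph above: the same initiating failure either remains local, because the system adapts at the point of stress, or penetrates the whole system and becomes catastrophic.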

These ideas can also be formulated in terms of Toynbee’s challenge and response model for civilizations. If a civilization is faced with a local challenge, it must adapt locally or the failure will cascade; faced with a global failure of its institutions, a civilization must respond globally or become extinct.

Change is the price of historical viability: complex adaptive systems “succeed” by adapting to changed conditions. But when the conditions are globally changed, a global response is necessary, and a global adaptation means that the complex adaptive system has been changed beyond recognition. In other words, the “success” of a complex adaptive system, extrapolated over time, means that in order to survive the system must change its nature; and if it is an entity of a wholly changed nature that ultimately “survives” as a result of adaptation, it is at least arguable that the entity of the original nature has not survived, and therefore that the survival strategy of the complex adaptive system has failed.

This paradox of survival is widely applicable to real world entities, and bears a strong resemblance to what philosophers call sorites paradoxes. Sorites paradoxes are paradoxes of identity: how many hairs must you remove from a head before it is bald? How many grains of sand must you pile up before they constitute a heap? Do the individual grains of sand lose their identity as individuals in order to become incorporated into the identity of the heap?

There are answers to these questions, but there is no one, universally accepted answer. Any one answer to a philosophical paradox assumes a particular metaphysic, and those who subscribe to another metaphysic will find themselves at odds with such an answer. It would seem that complex adaptive systems are paradoxical, therefore open to distinct metaphysical expositions. Thus the existing literature of complex adaptive systems ought to be interrogated for its metaphysical presuppositions. I will save any such inquiry for another time.

. . . . .

Grand Strategy Annex

. . . . .


6 Responses to “Adaptation and Viability”

  1. djmarsay said

    People can fail in complex ways, but they can also be blown up. So complex systems can fail in simple ways. But only complex systems can fail in complex ways – and they often do.

    • geopolicraticus said

      Dear Dr. Marsay,

      Good point. If a complex system is annihilated in one fell swoop, it doesn’t experience a cascading failure throughout the scope and complexity of its internal structures; rather, it simply vanishes. But I think that vanishing is relatively rare in comparison to cascading failure that leaves a trace of such failure in the form of fragments of a former whole.

      The example you give of being blown up is certainly a paradigmatic instance of vanishing quickly, and the quickly is important in this context. Take, for example, failure on the battlefield: we may say that an opposing force is “annihilated” in a single afternoon of combat. This is a rapid failure in comparison to the failure of a civilization, but it is a slow and complex failure in comparison to an atom that has been split in an atom smasher.

      The point I am making is that we must consider the temporal scale of the failure to determine the extent to which a system vanishes in a puff of smoke or instead experiences a rapidly cascading failure that merely resembles vanishing in a puff of smoke.

      In any case, your point is well taken that only complex systems can fail in complex ways. In other words, complexity is the conditio sine qua non of complex failure. In yet other terms, complexity is a necessary but not a sufficient condition of complex failure.

      Since I notice that you are affiliated with the UCL Institute for Security & Resilience Studies, I hope that you’ve also read what I wrote about resilience in Combat Power and Battle Ecology. While what I wrote there about resilience is formulated in terms specific to war, I could just as well give a more general formulation of the same conception of resiliency that would address non-combat forms of failure. Specifically, the distinction that I make between internal and external aspects of resilience should have obvious implications for all species of complex failures.

      Very Respectfully Yours,


  2. djmarsay said


    I have just posted on the ISRS’s first output. It drew on some of my work but I wasn’t involved in drafting it. The thinking seems in the same general area as yours. But I have some quibbles.

    Stereotypical military thinking is ‘attritional’, seeking to ‘mark down’ people, equipment, fuel etc. Attrition can take place slowly. Attrition can put pressure on resilience, but pressure can lead people to ‘pull together’ more. On the other hand, in Libya a ‘strategic pause’ might give Gadaffi’s followers a chance to re-think their positions. But then it would also give the allies a chance to re-think. Anyway, a pause in attrition could be more productive than an intensification. It depends on the nature of the combatants. I don’t know of a similar effect in biological ecologies. So I see these analogies as important, but not the whole story.

    Regards, Dave

    • geopolicraticus said

      Dear Dave,

      Thanks for the link. I read your comments, and, since this was a little abstract without knowing that to which you were responding, I downloaded the full text of Cyber Doctrine: Towards a coherent evolutionary framework for learning resilience by JP MacIntosh, J Reid and LR Tyler. Since this is a document of some 140 pages, I have only skimmed it so far.

      But so far, I have come across this interesting formulation on page 1 (Preamble, section 2): “Self-evidently, the addition of the first man-made environmental domain to maritime, land, space and air is far from just a bureaucratic detail. Unfortunately, the implications can over-excite people from many different fields.”

      On this point I differ from the authors, since I regard the use of electromagnetic telecommunications (beginning with the use of the telegraph) to have opened up the domain of the airwaves. Therefore, cyberspace is not the first new such domain, though it is certainly an important development, and, I would argue, ultimately an extension of electromagnetic telecommunications.

      This is relevant to your remarks above because of the apparent lack of analogies if indeed cyberspace does represent “the first man-made environmental domain.” If I am right, and it is not the first, then analogies can be expected to be found between early telecommunications and early cyberspace. As you point out, analogies are important, but they aren’t the whole story. Similarly, the absence of an analogy on which to base our thinking of what is essentially a new human reality would also be important.

      By the way, the XGW conception of warfare in terms of gradients classifies attrition as the second gradient of warfare, and thereby distinguishes it from other gradients (such as the third gradient, maneuver warfare) in which attrition is de-emphasized in favor of seizing the initiative, avoiding the enemy’s hardened, secure points and striking instead at critical vulnerabilities.

      Best wishes,


  3. djmarsay said

    Nick, I agree that cyberspace didn’t suddenly arise from nothing.

    The XGW framework looks interesting. My understanding of the Great War was that the initial intent was manoeuvrist on both sides, but they got bogged down until 1918, when they adopted strategies that I don’t see in the XGW framework, unless they were Darwinian. Taking a current conflict, Libya, it isn’t obvious what NATO’s approach is. Maybe it thought it was moral, but is it now resorting to attrition?

    Is a higher level somehow thought to be better? Or is Darwinian actually the ultimate? Is it true that ‘our’ approach to Libya is somehow ‘on a higher level’ than the approaches of the Great War? I find such analogies beg more questions than they answer.

    Cheers, Dave

    • geopolicraticus said

      Dear Dave,

      Thanks for these observations.

      I was made aware of the XGW “gradient” conception after I posted a critique (The Generational Warfare Model) of its predecessor, the “generational” conception of war. I then followed my earlier post with Generations and Gradients and Gradient Superiority.

      While I regard the gradient conception as an improvement over the generational conception, it still strikes me as inadequate. The generational conception implied a sequence in time that did not in fact describe the history of armed conflict, while the gradient conception implies a sequence in intensity of escalation that does not in fact reflect the progress of armed conflict. For example, the fifth gradient is supposed to describe a very subtle form of fifth column, but an effective campaign of moral warfare and subversion can be escalated in intensity until it causes the equivalent of a national nervous breakdown, which is in no sense a subtle result.

      For these reasons, among others, I can’t say that a higher level of gradient is “better” or “worse” than any other level of gradient. Thus I regard my writings on the gradient conceptions to be hypothetical in nature, exploring the conception for its intrinsic interest, but not a conception that I would myself advocate.

      For the particular example you mention, I have tried to explain the structural features of what is happening in Libya in The Devolution of War, as I previously attempted to describe what dictators like Gaddafi have been doing in The Weaponization of Eliminationism. These reflections exist entirely outside the framework of the gradient conception.

      Partly because, as you say, “such analogies beg more questions than they answer,” and partly because of other dissatisfactions I have with the gradient conception, and partly because I was working out the ideas anyway, I have started to formulate a very different schematic approach to understanding armed conflict, and I began to articulate this in Axioms and Postulates in Strategy. Obviously this is merely an initial sketch and cannot be considered a fully formulated doctrine, but it expresses my own point of view and is therefore something I would defend as I would not defend the gradient conception. While I am still exploring the idea of a formal conception of strategy in a tentative manner, here I am also an advocate and my exposition is not hypothetical.

      Very Respectfully Yours,


      PS – I hope you’ll take a look at my Induced Failure in Complex Adaptive Systems, as it was partially spurred by your recent comments on what I wrote about failure. This, too, is a hypothetical exploration of ideas, as I do not feel that I have yet got hold of a definitive formulation.
