Sunday


In several posts I have described what I called the STEM cycle, which typifies our industrial-technological civilization. In the STEM cycle, scientific discoveries are employed in new technologies, which are in turn engineered into industries, which supply new instruments to science, resulting in further scientific discoveries. For more on the STEM cycle you can read my posts The Industrial-Technological Thesis, Industrial-Technological Disruption, The Open Loop of Industrial-Technological Civilization, Chronometry and the STEM Cycle, and The Institutionalization of the STEM Cycle.

Industrial-technological civilization is a species of the genus of scientific civilizations (on which cf. David Hume and Scientific Civilization and The Relevance of Philosophy of Science to Scientific Civilization). Ultimately, it is the systematic pursuit of science that drives industrial-technological civilization forward in its technological progress. While it is arguable whether contemporary civilization can be said to embody moral, aesthetic, or philosophical progress, it is unquestionable that it does embody technological progress and, almost as an epiphenomenon, the growth of scientific knowledge. And while knowledge may not grow evenly across the entire range of human intellectual accomplishment, so that we cannot unambiguously speak of “intellectual progress,” we can unambiguously speak of scientific progress, which is tightly coupled with technological and industrial progress.

Now, it is a remarkable feature of science that there are no secrets in science. Science is out in the open, as it were (which is one reason the appeal to embargoed evidence is a fallacy). There are scientific mysteries, to be sure, but as I argued in Scientific Curiosity and Existential Need, scientific mysteries are fundamentally distinct from the religious mysteries that exercised such power over the human mind during the epoch of agrarian-ecclesiastical civilization. You can be certain that you have encountered a complete failure to understand the nature of science when you hear (or read) of scientific mysteries being assimilated to religious mysteries.

That there are no secrets in science has consequences for the warfare practiced by industrial-technological civilization, i.e., industrialized war based on the application of scientific method to warfare and the exploitation of technological and industrial innovations. On the one hand, all wars since the first global industrialized war have been industrialized wars; on the other hand, since the end of the Second World War, now seventy years ago, no wars have been mass wars, or, if you prefer, total wars, as a result of the devolution of warfare.

Today, for example, any competent chemist could produce phosgene or mustard gas, and anyone who cares to inform themselves can learn the basic principles and design of nuclear weapons. I made this point some time ago in Weapons Systems in an Age of High Technology: Nothing is Hidden. In that post I wrote:

Wittgenstein in his later work — no less pregnantly aphoristic than the Tractatus — said that nothing is hidden. And so it is in the age of industrial-technological civilization: Nothing is hidden. Everything is, in principle, out in the open and available for public inspection. This is the very essence of science, for science progresses through the repeatability of its results. That is to say, science is essentially an iterative enterprise.

Although science is out in the open, technology and engineering are (or can be made) proprietary. There is no secret science or sciences, but technologies and industrial engineering can be kept secret to a certain degree, though the closer they approximate science, the less secret they are.

I do not believe that this is well understood in our world, given the pronouncements and policies of our politicians. There are probably many who believe that science can be kept secret and pursued in secret. Human history is replete with examples of the sequestered development of weapons systems that rely upon scientific knowledge, from Greek Fire to the atom bomb. But if we take the most obvious example — the atomic bomb — we can easily see that the science is out in the open, even while the technological and engineering implementation of that science was kept secret, and is still kept secret today. However, while no nation-state that produces nuclear weapons makes its blueprints openly available, any competent technologist or engineer familiar with the relevant systems could probably design for themselves the triggering systems for an implosion device. Perhaps fewer could design the trigger for a hydrogen bomb — this came to Stanislaw Ulam in a moment of insight, and so represents a higher level of genius, but Andrei Sakharov also figured it out — however, a team assembled for the purpose would also certainly hit on the right solution if given the time and resources.

Science nears optimality when it is practiced openly, in full view of an interested public, with its results published in journals that are read by many others working in the field. These others have their own ideas — whether to extend research already performed, to reproduce it, or to attempt to turn it on its head — and when they in turn pursue their research and publish their results, the field of knowledge grows. This process is exponentially duplicated and iterated in a scientific civilization, and so scientific knowledge grows.

When Lockheed’s Skunkworks recently announced that they were working on a compact fusion generator, many fusion scientists were irritated that the Skunkworks team did not publish their results. The fusion research effort is quite large and diverse (something I wrote about in One Hundred Years of Fusion), and there is an expectation that those working in the field will follow scientific practice. But, as with nuclear weapons, a lot is at stake in fusion energy. If a private firm can bring proprietary fusion electrical generation technology to market, it stands to be the first trillion-dollar enterprise in human history. With the stakes that high, Lockheed’s Skunkworks keeps its research tightly controlled. But this same control slows down the process of science. If Lockheed opened its fusion research to peer review, and others sought to duplicate the results, the science would be driven forward faster, but Lockheed would stand to lose its monopoly on proprietary fusion technology.

Fusion science is out in the open — it is the same as nuclear science — but particular aspects and implementations of that science are pursued under conditions of industrial secrecy. There is no black and white line that separates fusion science from fusion technology research and fusion engineering. Each gradually fades over into the other, even when the core of each of science, technology, and engineering can be distinguished (this is an instance of what I call the Truncation Principle).

The stakes involved generate secrecy, and the secrecy involved generates industrial espionage. Perhaps the best known example of industrial espionage of the 20th century was the acquisition of the plans for the supersonic Concorde, which allowed the Russians to get their “Konkordski” TU-144 flying before the Concorde itself flew. Again, the science of flight and jet propulsion cannot be kept secret, but the technological and engineering implementations of that science can be hidden to some degree — although not perfectly. Supersonic, and now hypersonic, flight technology is a closely guarded secret of the military, but any enterprise with the funding and the mandate can eventually master the technology, and will eventually produce better technology and better engineering designs once the process is fully open.

Because science cannot be effectively practiced in private (it can be practiced, but it will not be as good as a research program pursued jointly by a community of researchers), governments seek the control and interdiction of technologies and materials. Anyone can learn nuclear science, but it is very difficult to obtain fissionables. Any car manufacturer can buy their rival’s products, disassemble them, and reverse engineer their components, but patented technologies are protected by the court system for a certain period of time. But everything in this process is open to dispute. Different nation-states have different patent protection laws. When you add industrial espionage to constant attempts to game the system on an international level, there are few if any secrets even in proprietary technology and engineering.

The technologies that worry us the most — such as nuclear weapons — are purposefully retarded in their development by stringent secrecy and international laws and conventions. Moreover, mastering the nuclear fuel cycle requires substantial resources, which mostly limits such an undertaking to nation-states. Most nation-states want to go along to get along, so they accept the limitations on nuclear research and choose not to build nuclear weapons even if they possess the industrial infrastructure to do so. And now, since the end of the Cold War, even the nation-states with nuclear arsenals do not pursue the development of nuclear technology; so-called “fourth generation nuclear weapons” may be pursued in the secrecy of government laboratories, but not with the kind of resources that would draw attention. It is very unlikely that they are actually being produced.

Why should we care that nuclear technology is purposefully slowed and regulated to the point of stifling innovation? Should we not consider ourselves fortunate that governments that seem to love warfare have at least limited the destruction of warfare by limiting nuclear weapons? Even the limitation of nuclear weapons comes at a cost. Just as there is no black and white line separating science, technology, and engineering, there is no black and white line that separates nuclear weapons research from other forms of research. By clamping down internationally on nuclear materials and nuclear research, the world has, for all practical purposes, shut down the possibility of nuclear rockets. Yes, there are a few firms researching nuclear rockets that can be fueled without the fissionables that could also be used to make bombs, but these research efforts are attempts to “design around” the interdictions of nuclear technology and nuclear materials.

We have today the science relevant to nuclear rocketry; to master this technology would require practical experience. It would mean producing numerous designs, testing them, and seeing what works best. What works best makes its way into the next iteration, which is then in its turn improved. This is the practical business of technology and engineering, and it cannot happen without an immersion in practical experience. But the practical experience in nuclear rocketry is exactly what is missing, because the technology and materials are tightly controlled.

Thus we can already cite a clear instance of how existential risk mitigation becomes the loss of an existential opportunity. A demographically significant spacefaring industry would be an existential opportunity for humanity, but whether the nuclear rocket would have been the breakout technology that actualized this existential opportunity we do not know, and we may never know. Nuclear weapons were recognized early as an existential risk, and our response to this existential risk was to consciously and purposefully put a brake on the development of nuclear technology. Anyone who knows the history of nuclear rockets, of the NERVA and DUMBO programs, of the many interesting designs that were produced in the early 1960s, knows that this was an entire industry effectively strangled in the cradle, sacrificed to nuclear non-proliferation efforts as though to Moloch. Because science cannot be kept secret, entire industries must be banned.

. . . . .

Nuclear rocketry: an industry that never happened.

. . . . .


Wednesday


Eighth in a Series on Existential Risk:

Every Risk is also an Opportunity


It is a commonplace that every risk is an opportunity, and every opportunity is a risk; risk and opportunity are two sides of the same coin. This can also be expressed by distinguishing negative risk (what we ordinarily call “risk” simpliciter) from positive risk (what we ordinarily call “opportunity”). What this means in terms of existential thought is that every existential risk is an existential opportunity, and every existential opportunity is at the same time an existential risk.

If we understand by risk the uncertainty of frequency and uncertainty of magnitude of future loss, then by opportunity we should understand the uncertainty of frequency and uncertainty of magnitude of future gain. The relative probability of a loss is offset by the relative probability of a gain, and the relative probability of a gain is offset by the relative probability of a loss; both are calculable; both are, in principle, insurable. Thus these risks and opportunities represent the subset of uncertainties that present actionable mitigation strategies. Where uncertainty exceeds the possibility of actionable mitigation, we pass beyond insurable risk to uncertainty proper.
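To make the parallel concrete, this definition admits a rough frequency-severity formalization of the kind used in insurance mathematics. This is only a minimal sketch, and the symbols below (N_L, M_L, N_G, M_G) are introduced here for illustration rather than drawn from the definitions above: if N_L is the uncertain frequency of loss events over a given period and M_L the uncertain magnitude of each loss, with N_G and M_G the corresponding quantities for gains, and if frequency and magnitude are assumed independent, then

\mathbb{E}[\text{loss}] = \mathbb{E}[N_L] \cdot \mathbb{E}[M_L], \qquad \mathbb{E}[\text{gain}] = \mathbb{E}[N_G] \cdot \mathbb{E}[M_G]

Risk, in this sense, is the calculable expectation of loss, opportunity the calculable expectation of gain, and whatever resists such calculation falls outside insurable risk into uncertainty proper.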

In existential risk scenarios, our very existence is at stake; in existential opportunity scenarios, again, our very existence is at stake. To formulate this parallel to the above, we can assert that existential risk is the uncertainty of frequency and uncertainty of magnitude of future loss of earth-originating life and civilization, while existential opportunity is the uncertainty of frequency and uncertainty of magnitude of future gain for earth-originating life and civilization. In formulating the existential condition of humanity, there is little that is risk sensu stricto, since much of the big picture of the human future is given over to uncertainty that lies beyond presently actionable risk. However, the calculus of risk and reward remains, even if we are not speaking strictly of risk that can be fully calculated and thus fully insured. In other words, the existential uncertainties facing humanity admit of a distinction between positive uncertainties and negative uncertainties. Any valuation of this kind, however, is intrinsically disputable and controversial.

Given that our very existence is at stake in existential opportunity no less than in existential risk, a future defined by the realization of an existential opportunity might be unrecognizable as a human future. Indeed, the realization of an existential opportunity might be every bit as unrecognizable as the realization of an existential threat, which means that the two futures might be indistinguishable, which means in turn that existential opportunity might be mistaken for existential risk, and vice versa.

Faced with a stark choice (i.e., faced with an existential choice), I think few would choose extinction, flawed realization, permanent stagnation, or subsequent ruination over species survival, flawless realization, permanent amelioration, or subsequent escalation. (If, in moments of decision in our life, we make our choice in fear and trembling, how must we fear and tremble in moments of decision for our species?) Any such choice, however, is not likely to be visited upon us in this form.

Much more likely than an explicit choice between a utopian future of astonishing wonders and a dystopian future of dismal oppression is an imperceptibly gradual process whereby a promising future suggests certain day-to-day decisions (seemingly seizing an opportunity) which lead incrementally to a future with unintended consequences that greatly outweigh the promises that prompted the daily decisions that led to the future in question. This is how history generally works: by degrees, and not by intention. (Notwithstanding the Will Durant quote — “The future never just happened, it was created.” — that I mentioned in Predicting the Human Future in Space.)

In so far as industrial-technological civilization continues its exponential growth of technology (growing incrementally and often imperceptibly by degrees, and not always by intention), and therefore also the growth of human agency in shaping our environment, the expanding scope of this civilization will mitigate certain existential risks even as it exposes humanity to new and unprecedented risks. That is to say, industrial-technological civilization itself is at once both a risk and an opportunity. Civilization centered on escalating industrial-technological development exposes us to escalating industrial accidents and unintended consequences of technology, unprecedented pollution from industrial processes, changes in our way of life, and indeed changes to our very being as a result of the technological transformation of humanity (i.e., transhumanism).

At the same time, escalating industrial-technological development offers the unprecedented possibility of a spacefaring civilization, which could establish earth-originating life off the surface of the earth and thereby secure the minimum redundancy necessary to the long-term survival of such life. The transition of the terrestrial economy to an economy fully integrated with the industrialization of space — a process that I have called extraterrestrialization — could not take place without the advent of industrial-technological civilization.

Yet the expansion of business operations and interests into extraterrestrial space is a paradigm of uncertainty — no such effort has been made on a large scale, and so the risks of such an enterprise are unknown and cannot be calculated, fully managed, or insured against. Space operations therefore exemplify uncertainty rather than risk, and for the same reason that such operations are uncertain, their execution is potentially beset with contingencies unknown to us today. This does not make such an enterprise too risky to contemplate — this is the only imaginable contribution that industrial-technological civilization can make to the long-term survival of earth-originating life — but we must undertake such enterprises without illusions, or the subsequent losses endured may become socially unsustainable, leading to the end of the enterprise. Subsequent unforeseen losses resulting from the transition to a spacefaring civilization may even be interpreted as a form of subsequent ruination, and thereby conceived by many as an existential threat. How we understand existential risk, then, affects what we understand to be a risk and what we understand to be a reward.

In the larger context of industrial-technological civilization we can identify individual industries and technologies that represent in themselves both risks and opportunities. The most fantastic speculations of transhumanist utopias, like the most dismal speculations on transhumanist dystopias, constitute unprecedented opportunities (or risks) implied by the present trajectories of technology. One of the best examples of risk and opportunity in future technology is the possibility of nano-scale robots. The development of nano-scale robots could, on the one hand, provide for unprecedented medical technologies — robots that could be injected like an inoculation, which would treat medical conditions from the inside out, repairing the body on a microscopic scale and potentially greatly improving health and extending longevity. On the other hand, nano-scale robots loose in the biosphere could potentially cause great harm, if not havoc, perhaps even resulting in a gray goo scenario.

In so far as any proposed existential risk mitigation initiatives prioritize safety over opportunity, any concern for existential risk could itself become an existential risk by lending support for policies that address risk through calculated stagnation instituted as a risk-averse response to existential threats. The question then becomes how humanity can lower its exposure to existential risks without reducing its existential opportunities. The attempt to answer this question, even if it does not issue in clear, unambiguous imperatives, may at least provide a framework in which to conceptualize problematic scenarios for the human future that some may identify as desirable while others would identify the same as a moral horror — such as transhumanism.

. . . . .


Existential Risk: The Philosophy of Human Survival

1. Moral Imperatives Posed by Existential Risk

2. Existential Risk and Existential Uncertainty

3. Addendum on Existential Risk and Existential Uncertainty

4. Existential Risk and the Death Event

5. Risk and Knowledge

6. What is an existential philosophy?

7. An Alternative Formulation of Existential Risk

8. Existential Risk and Existential Opportunity

. . . . .
