The Law of Stalled Technologies
9 February 2009
Yesterday, in The Singularity has no Clothes, I wrote that, “some scientists find themselves recapitulating old errors that philosophy worked through in its two-and-a-half-thousand-year history, in a needless duplication of labor. There is a sense in which this constitutes a threat to the continued progress of science, but this is not the question I wish to examine today.” This is a pregnant assertion that is well worth developing, but my remarks of yesterday on the “technological singularity” (named by analogy with gravitational singularities for the supposed effect it would have on conventional scientific laws) contained many unfulfilled promises. There is much to be said here, as the promoters of the technological singularity have had much to say, and it can’t all be said at once.
As I mentioned yesterday, Kurzweil’s futurism makes for some fun reading. Unfortunately, it will not age well, and will become merely humorous over time (this is not to be confused with his very real technological achievements, which may well develop into robust and durable technologies). I have a copy of Kurzweil’s book that preceded The Singularity is Near, namely The Age of Spiritual Machines (published ten years ago in 1999), which is already becoming humorous. Part III, Chapter Nine of The Age of Spiritual Machines, contains his prophecies for 2009, and now it seems that the future is upon us, because it is the year AD 2009 as I write this. Kurzweil predicted that “People typically have at least a dozen computers on and around their bodies.” It is true that many people do carry multiple gadgets with microprocessors, and some of these are linked together via Bluetooth, so this prophecy does not come off too badly. He also notes that “Cables are disappearing” and this is undeniably true.
Kurzweil goes a little off the rails, however, when it comes to matters that touch directly on human consciousness and its expressions such as language. He predicted that, “The majority of text is created using continuous speech recognition”, and I think it is safe to say that this is not the case. I don’t want to parse all his predictions, but I need to be specific about a few particularly damning failures. Among them is the prediction that, “Translating Telephone technology … is commonly used for many language pairs.” Here we step beyond the limits of technological competence and run up against the limitations of even the most imaginative engineers. While machine translation is common today for text, everyone knows that it is a joke — quite literally so, as the results can be very funny though not terribly helpful.
Accurate and dependable machine translation will turn on the emergence of strong AI capabilities, and machines will not have strong AI capabilities until they have consciousness. Expert systems galore can help us and reduce the burden of our workload, but they are algorithms and they aren’t conscious. They are number crunchers — very helpful number crunchers, too — and they manipulate electrical signals, not meanings. Human minds manipulate meanings. Meanings are often intolerably vague, and only become precise with the utmost effort. (How many people do you know who can competently do logic and mathematics?)
In The Age of Spiritual Machines, Kurzweil has an answer for this, and his answer is “simple methods combined with heavy doses of computation” (Part One, Chapter Four). But no amount of computation will add up to consciousness, subjectivity, and the ability to manipulate meanings and to conceive ideas. Actually, no… let me take that back. In some cases, a sufficiently complex machine can spawn consciousness as an emergent property. This is the case with the brains of the more advanced mammals. But it doesn’t happen automatically as a response to sheer computational power and complexity. A certain kind of complexity is required. Some of the largest computers today have as many “switches” as does the human brain (although “switches” is metaphorical), and the way these switches are hooked together allows for computation far faster than any human brain could manage, but it does not allow for consciousness to emerge. We can expect this trend to continue. Computers will become bigger, faster, more powerful, and make more things possible for us, but they are not on the verge of consciousness quite yet.
I mentioned in the previous paragraph that it is metaphorical to speak of the human brain being composed of “switches.” It has become one of my pet peeves to hear people speak of human beings as being “hardwired” for anything (you name it). The brain is not a computer or an electrical appliance. It does have electrical signals, but it also has important chemical signals. From before birth, the brain cooks in a stew of chemicals that circulate through it and around it. Most adults are familiar with the effect that artificially introduced chemicals can have on our thinking, and human thought is similarly subject to our own body chemistry. Each neuron is connected to thousands of other neurons — common estimates run from about a thousand to ten thousand synapses per neuron. A typical transistor (or its integrated circuit equivalent) has three or four connections. A neuron and a transistor are significantly different pieces of “hardware.” If we were to prioritize the creation of machine consciousness, our best bet would be to try to re-create a brain, but then that wouldn’t quite be a machine, would it?
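The contrast in “hardware” can be put in rough numbers. The figures below are common order-of-magnitude estimates from the neuroscience literature, not precise measurements, and they are meant only to illustrate how differently the two kinds of device are wired:

```python
# Rough, order-of-magnitude comparison of neural vs. transistor connectivity.
# All constants are commonly cited estimates, used here only for illustration.
NEURONS = 8.6e10           # roughly 86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 7e3  # on the order of thousands of synapses per neuron
TRANSISTOR_TERMINALS = 3   # a MOSFET has three terminals: gate, source, drain

total_synapses = NEURONS * SYNAPSES_PER_NEURON

print(f"Neuron: ~{SYNAPSES_PER_NEURON:,.0f} connections per cell, "
      f"~{total_synapses:.0e} synapses in the whole brain")
print(f"Transistor: {TRANSISTOR_TERMINALS} terminals per device")
```

A fan-out of several thousand per element, as against three or four, is not a difference of degree that more transistors can close; it is a difference in the kind of network being built.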
Kurzweil’s “simple methods combined with heavy doses of computation” brings us back to our earlier observation that contemporary scientists are recapitulating a couple thousand years of philosophical effort, and as a result they run the risk of spending time in some blind alleys. Consciousness is highly selective. It gives its focus to one or two items at a time, and its waking, rational time is strictly limited, so very little that bombards the senses captures the attention of the mind. Two first-rate thinkers are applicable here: Kant and Freud. It was one of the observations of Kant’s transcendental aesthetic that the mind is bombarded with perceptions and only makes use of a select few, and to these few it must give shape and order. Freud spoke of the “preconscious” as that in the mind which can readily be brought to consciousness but is not necessarily held in the full light of awareness. Kant was an idealist, and Freud a naturalist, but both have something to contribute to the understanding of the mind. Machines that are to possess strong AI will need to be similarly selective; they will need a transcendental aesthetic and they will need a preconscious faculty. Enormous amounts of computational power are no substitute.
In line with Kurzweil’s assumptions about massive computing power is his interest in “Moore’s Law”, which he frequently cites and discusses in both of the above-mentioned books. The gains predicted by Moore’s Law are to supply the future computing power that will drive the intelligence explosion of the coming technological singularity. Kurzweil likes to talk as though “Moore’s Law” is a law of nature with predictable consequences, but it is not. Moore’s Law is more like Murphy’s Law: it is a sociological generalization, a law of the social sciences, not of the natural sciences. It is, if you will, a law of human ambition. Moore himself is well aware that the predictions of his “law” must stall out for reasons of simple engineering, and he has publicly said as much on several occasions.
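Moore’s observation can be stated as a simple doubling rule. The sketch below uses the Intel 4004’s roughly 2,300 transistors (1971) as a baseline and a two-year doubling period — conventional illustrative figures, not anything Kurzweil specifies:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Naive Moore's-Law extrapolation: the transistor count doubles
    every `doubling_years` years from a 1971 baseline (Intel 4004)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# The bare doubling rule predicts chips on the order of a billion
# transistors by 2009 -- roughly where the industry in fact stood.
print(f"{transistors(2009):.2e}")
```

The point is that nothing in the arithmetic enforces itself: the exponent keeps growing forever on paper, while the physical and economic circumstances that made it hold cannot.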
Moore’s Law has been largely correct for some years now because it was formulated relatively early in the history of integrated circuits, when this technology had yet to experience its most impressive gains. And it has been embodied in concrete technologies because engineers have seen it as a challenge, and capitalism has pitted major industries in competition with each other. These historical circumstances, like all good things, must come to an end at some point in time. Moore’s Law will eventually be subsumed under the Law of Stalled Technologies, which is a more general and pervasive law, hence a law to which more specific and finely targeted laws must subordinate themselves.
The Law of Stalled Technologies, which I am here proposing, is that technological growth does not follow an exponential curve (sometimes called a “farmer’s dog” curve, or the curve of pursuit, and here the object of pursuit would be the technological singularity) of ever-increasing growth, but rather the sigmoid curve (the same kind of curve that describes the carrying capacity of a given ecosystem, for example) of logistic growth. The sigmoid curve resembles an exponential curve in its early stages; the difference is that the sigmoid eventually flattens out into a plateau. Stalled technologies reach this plateau, at which point, after their early exponential growth, further growth is gradual and incremental. Furthermore, we often find that a mature or stalled technology of this kind is displaced by the initial rapid growth of a completely different technology that seemingly comes out of nowhere and could not have been predicted on the basis of progress made in the slow and gradual advance of the old technology.
Futurists and technophiles would do well in their predictions to focus not on what a technology might become if only it continued to follow the curve it establishes in its early phases of development, but instead to forecast when a given technology will stall, so that its future improvements will be consistently gradual and incremental rather than sudden and revolutionary. At that point, it is time to move on to another technology that will eventually displace the old one — though when that happens is, again, a matter of uncertain prediction.
So here I have spent some 1,800 words and I have only scratched the surface of the interesting questions surrounding the AI angle on the technological singularity (a one-sentence summary of Kant cannot really be taken seriously). There is much more to say, but for the moment I am tired of typing, and since keyboards are still widely in use in 2009 for inputting texts into computers, I will now rest my fingers.
. . . . .