The Law of Stalled Technologies

9 February 2009


Yesterday, in The Singularity has no Clothes, I wrote that “some scientists find themselves recapitulating old errors that philosophy worked through in its two and a half thousand year history, in a needless duplication of labor. There is a sense in which this constitutes a threat to the continued progress of science, but this is not the question I wish to examine today.” This is a pregnant assertion that is well worth developing, but my remarks of yesterday on the “technological singularity” (named by analogy with gravitational singularities for the supposed effect it would have on conventional scientific laws) contained many unfulfilled promises. There is much to be said here, as the promoters of the technological singularity have had much to say, and it can’t be said all at once.

As I mentioned yesterday, Kurzweil’s futurism makes for some fun reading. Unfortunately, it will not age well, and will become merely humorous over time (this is not to be confused with his very real technological achievements, which may well develop into robust and durable technologies). I have a copy of Kurzweil’s book that preceded The Singularity is Near, namely The Age of Spiritual Machines (published ten years ago in 1999), which is already becoming humorous. Part III, Chapter Nine of The Age of Spiritual Machines contains his prophecies for 2009, and now it seems that the future is upon us, because it is the year AD 2009 as I write this. Kurzweil predicted that “People typically have at least a dozen computers on and around their bodies.” It is true that many people do carry multiple gadgets with microprocessors, and some of these are linked together via Bluetooth, so this prophecy does not come off too badly. He also notes that “Cables are disappearing”, and this is undeniably true.

Kurzweil’s The Age of Spiritual Machines (1999).

Kurzweil goes a little off the rails, however, when it comes to matters that touch directly on human consciousness and its expressions, such as language. He predicted that “The majority of text is created using continuous speech recognition”, and I think it is safe to say that this is not the case. I don’t want to parse all his predictions, but I need to be specific about a few particularly damning failures. Among them is the prediction that “Translating Telephone technology … is commonly used for many language pairs.” Here we step over the line between the competence of technology and the limitations of even the most imaginative engineers. While machine translation of text is common today, everyone knows that it is a joke — quite literally so, as the results can be very funny though not terribly helpful.

Accurate and dependable machine translation will turn on the emergence of strong AI capabilities, and machines will not have strong AI capabilities until they have consciousness. Expert systems galore can help us and reduce the burden of our workload, but they are algorithms and they aren’t conscious. They are number crunchers — very helpful number crunchers, too — and they manipulate electrical signals, not meanings. Human minds manipulate meanings, and meanings are often intolerably vague, becoming precise only with the utmost effort. (How many people do you know who can competently do logic and mathematics?)

Of what is this a picture? Recent philosophical work has come to emphasize the role of vagueness in human cognition and language. The same picture illustrates the difference between information and knowledge.

In The Age of Spiritual Machines, Kurzweil has an answer for this: “simple methods combined with heavy doses of computation” (Part One, Chapter Four). But no amount of computation will add up to consciousness, subjectivity, and the ability to manipulate meanings and to conceive ideas. Actually, no… let me take that back. In some cases, a sufficiently complex machine can spawn consciousness as an emergent property. This is the case with the brains of the more advanced mammals. But it doesn’t happen automatically as a response to sheer computational power and complexity; a certain kind of complexity is required. Some of the largest computers today have as many “switches” as the human brain (although “switches” is metaphorical), and the way these switches are hooked together allows for computation far faster than any human brain could manage, but it does not allow for consciousness to emerge. We can expect this trend to continue. Computers will become bigger, faster, and more powerful, and will make more things possible for us, but they are not on the verge of consciousness quite yet.

Of what is this a picture? Certainly not a pipe. But, then again, it is a pipe. When machines are ready to grapple with this, they will be ready to translate for us.

I mentioned in the previous paragraph that it is metaphorical to speak of the human brain as being composed of “switches.” It has become one of my pet peeves to hear people speak of human beings as being “hardwired” for anything (you name it). The brain is not a computer or an electrical appliance. It does carry electrical signals, but it also carries important chemical signals. From before birth, the brain cooks in a stew of chemicals that circulate through it and around it. Most adults are familiar with the effect that artificially introduced chemicals can have on our thinking, and human thought is similarly subject to our own body chemistry. Each neuron is hooked up to thousands of other neurons; a typical transistor (or its integrated circuit equivalent) has three or four connections. A neuron and a transistor are significantly different pieces of “hardware.” If we were to prioritize the creation of machine consciousness, our best bet would be to try to re-create a brain, but then that wouldn’t quite be a machine, would it?

The transcendental aesthetic of Immanuel Kant (1724-1804) is a lesson in not confusing information with knowledge.

Kurzweil’s “simple methods combined with heavy doses of computation” brings us back to our earlier observation that contemporary scientists are recapitulating a couple thousand years of philosophical effort, and as a result they run the risk of spending time in some blind alleys. Consciousness is highly selective. It gives its focus to one or two items at a time, and its waking, rational time is strictly limited, so very little that bombards the senses captures the attention of the mind. Two first-rate thinkers are applicable here: Kant and Freud. It was one of the observations of Kant’s transcendental aesthetic that the mind is bombarded with perceptions and only makes use of a select few, and to these few it must give shape and order. Freud spoke of the “preconscious” as that in the mind which can readily be brought to consciousness but is not necessarily held in the full light of awareness. Kant was an idealist, and Freud a naturalist, but both have something to contribute to the understanding of the mind. Machines that are to possess strong AI will need to be similarly selective; they will need a transcendental aesthetic and they will need a preconscious faculty. Enormous amounts of computational power are no substitute.

Freud's conception of the preconscious could provide another powerful lesson to AI researchers on the selectivity of consciousness.

In line with Kurzweil’s assumptions about massive computing power is his interest in “Moore’s Law”, which he frequently cites and discusses in both of the above-mentioned books. The gains predicted by Moore’s Law are to supply the future computing power that will drive the intelligence explosion of the coming technological singularity. Kurzweil likes to talk as though “Moore’s Law” were a law of nature with predictable consequences, but it is not. Moore’s Law is more like Murphy’s Law: it is a sociological generalization, a law of the social sciences rather than of the natural sciences. It is, if you will, a law of human ambition. Moore himself is well aware that the predictions of his “law” must stall out for reasons of simple engineering, and he has publicly said as much on several occasions.
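For readers who want the law in symbols, Moore’s observation is usually stated as a simple doubling law. As a minimal sketch (assuming a doubling period T, commonly cited as anywhere from eighteen months to two years):

$$N(t) = N_0 \cdot 2^{(t - t_0)/T}$$

where N(t) is the transistor count at time t and N_0 is the count at some baseline time t_0. Notice that nothing in the formula says why the doubling should continue; that part is supplied by engineering ambition and market competition, which is precisely my point.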

Moore’s Law has been largely correct for some years now because it was formulated relatively early in the history of integrated circuits, when this technology had yet to experience its most impressive gains. And it has been embodied in concrete technologies because engineers have seen it as a challenge, and capitalism has pitted major industries in competition with each other. These historical circumstances, like all good things, must come to an end at some point in time. Moore’s Law will eventually be subsumed under the Law of Stalled Technologies, which is a more general and pervasive law, and hence a law to which more specific and finely targeted laws must subordinate themselves.

Kurzweil loves exponential growth curves, and his books are filled with them.

The Law of Stalled Technologies, which I am here proposing, is that technological growth does not follow an exponential curve (sometimes called a “farmer’s dog” curve, or the curve of pursuit, where the object of pursuit would here be the technological singularity) of ever-increasing growth, but rather the sigmoid curve of logistic growth (the same kind of curve that describes the carrying capacity of a given ecosystem, for example). The sigmoid curve resembles an exponential curve in its early stages; the difference is that the sigmoid curve eventually flattens out into a plateau. Stalled technologies reach a plateau, at which point, after early exponential growth, further growth is gradual and incremental. Furthermore, we often find that a mature or stalled technology of this kind is displaced by the initial rapid growth of a completely different technology, one that seemingly comes out of nowhere and that could not have been predicted on the basis of progress made in the slow and gradual advance of the old technology.
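To make the contrast concrete, here is a minimal sketch of the logistic curve I have in mind, with K the plateau (the “carrying capacity”), r the growth rate, and t_0 the midpoint of the curve:

$$P(t) = \frac{K}{1 + e^{-r(t - t_0)}}$$

For t well below t_0 this behaves almost exactly like the exponential curve $K e^{r(t - t_0)}$, which is why a young technology invites exponential extrapolation; for t well above t_0 it flattens toward K, which is the stall.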

The Law of Stalled Technologies posits that technological growth is most closely approximated by a sigmoid curve.

Futurists and technophiles would do well to focus their predictions not on what a technology might become if its extrapolation continued to follow the curve established in its early phases of development, but on forecasting when a given technology will stall, so that its future improvements will be consistently gradual and incremental rather than sudden and revolutionary. At that point, it is time to move on to another technology that will eventually displace the old technology — though when that happens is, again, a matter of uncertain prediction.

The "qwerty" keyboard has proved itself a robust and durable technology and will be with us for some time yet, futurist's forecasts notwithstanding. Unfortunately, it is also one of those examples of technology that was not selected for its optimality.

The "qwerty" keyboard has proved itself a robust and durable technology and will be with us for some time yet, futurist's forecasts notwithstanding. Unfortunately, it is also one of those examples of technology that was not selected for its optimality.

So here I have spent some 1,800 words and have only scratched the surface of the interesting questions surrounding the AI angle on the technological singularity (a one-sentence summary of Kant cannot really be taken seriously). There is much more to say, but for the moment I am tired of typing, and since keyboards are still widely in use in 2009 for inputting text into computers, I will now rest my fingers.

. . . . .


9 Responses to “The Law of Stalled Technologies”

  1. Mark Luppi said

    Good article. This subject obsesses me, so I hope you don’t mind a couple of comments that might be a little tangential to the topic:

    1. It’s easy to challenge Kurzweil, but the basic evolution he describes will probably occur at some point, whether it’s a few decades or a few centuries. My own view is that the evolution of artificial decision-making agents, either autonomous or as adjuncts to our own activity, will depend less on advances in classic AI than on “brute-force” technologies (e.g., a “monkey-see monkey-do” imitation of fine-grained human cognitive physical processes combined with massively commoditized parallel hardware—I’d use the term “neural networks” if it weren’t so discredited).

    It seems to me that market-driven forces will move us inexorably in this direction. This is disturbing since most of the outcomes that I can envision (speaking as one committed to the current human enterprise) are dystopic. I assume that agents of the future will disagree.

    There’s no winner here between socialistic attempts to control this evolution (based on politically correct “defense of human values”, etc), and market forces based on economic imperatives. There’s nothing but the blind hope that evolutionary logic creates a synthesis that has some meaning from our present perspective (which, parochial though it may sound, is the only one I care about).

    2. A minor nit on your Kant reference. His “Transcendental Aesthetic” (object recognition, temporal organization) is where we’ve made some serious progress in the technology. It’s emulation of the layers on top — the “Transcendental Deduction” and the rest — where we’ve really fallen short.

    • geopolicraticus said

      Dear Mr. Luppi,

      Thanks much for your response to my thoughts. All comments, tangential and otherwise, are welcome.

      About your two points:

      1. I remain unconvinced concerning the inevitability of the technological singularity — or even the “basic evolution” that Kurzweil describes — on the basis of brute force massive parallelism. Historical inevitability is a strong claim, and in fact I have been intending to devote a post to this topic. That being said, I hold a position of qualified inevitability for the future of machine consciousness. The qualifications are these: A) that civilization in its present technological-industrial form continues to develop, and B) that other technologies, and not those primarily in use today, will provide the hardware from which machine consciousness can or will emerge. This is in no sense to be taken as a devalorization of present technologies, which have robustly demonstrated their usefulness, and we can continue to expect further use from them. Indeed, there are already certain senses in which our technological infrastructure incorporates autonomous decision-making agents. Computerized trading on stock exchanges has driven dramatic market swings as the result of allowing machines to autonomously make trading decisions. We need not wait for “classic AI” or machine consciousness for this to be the case in fact.

      2. We’ll have to respectfully disagree about the degree to which the transcendental aesthetic has been realized by hardware technologies. There certainly is progress in such things as image recognition, but this is based on the same kind of “brute force” massive parallelism that Kurzweil advocates for AI. In the language of philosophy, there are already the rudiments of machine perception, but machine apperception remains elusive. Have you considered the possibility that machine realization of the transcendental deduction has “fallen short” precisely because it is pursuing the same essential methods of a transcendental aesthetic based not on principles of consciousness but rather upon what is most convenient for automation? If I am right in this, it is another instance of what I wrote about in Selection for non-sentience.

      It would be easy to write a volume in response to your interesting comments, but I will let it rest there.

      Sincerely,

      Nick

  2. [...] always have more information than the people of the past. As Nick Neilson writes in two well-worded rebuttals to the Singularity on his blog, “Kurzweil’s futurism makes for some fun reading. Unfortunately, it will [...]

  3. Tom said

    I think that AI (computational anthropomorphism: computation shaped by human qualities) has a great future as long as there are proper motivations for its advancement.

    Autonomous/automatic software will get more widespread as software demands and hardware capabilities increase.

    There are many things that a computer does that seem sentient, such as when it “thinks,” “sleeps,” “helps,” “annoys,” etc.

    • geopolicraticus said

      Thanks for your comment.

      By “proper motivations” do you mean ethically good motivations or socio-economically effective motivations? The attempt to guide the development of science or technology according to currently accepted notions of what is ethically admirable is almost certain to go wrong. But this is not likely what you meant. Social and economic motivations have in fact driven the pursuit of computational anthropomorphism to date, and are likely to continue to be the drivers of such research. But why do social and economic motivations drive research into computational anthropomorphism? Because of who and what we are. This is essentially a human interest story.

      Best wishes,

      Nick

  4. Camilo said

    Good article. I disagree in a couple of things though:

    The brain is an electro-chemical computer; the fact that the signals involve chemical processes does not mean otherwise. Or is there a fundamental difference between processes involving only electrons (i.e., silicon) and those involving both electrons and whole molecules, for you to make the distinction you describe?

    In contrast to your idea, I believe that Complexity is what spawns Consciousness, not the other way around. Who are we to say that a super-computer (or a tree, for that matter) does not experience something when running? Only it knows.

    This is not to say that any of this excludes God.

    Cheers

    • geopolicraticus said

      Dear Camilo,

      I don’t believe that I made the claim that consciousness spawns complexity, and in fact I would argue the exact opposite: the emergence of consciousness makes possible the intuitive (and simple) manipulation of ideas, which if done without consciousness is fiendishly difficult. Think of how difficult it is to formalize the simplest of mathematical ideas, and multiply that by the complexity of the ideas that the mind routinely engages in a given day (in comparison to, for example, the natural numbers).

      As for knowing what a supercomputer or a tree experiences, this is an interesting point. I understand that some of the speculative realists maintain that all things have an inner life. I do not necessarily disagree with this, but certain kinds of sentient life (such as human beings) not only have an inner life, but are able to communicate this inner life and leave written records of it. When machines are able to do this, then we will have a better idea of their inner life. I don’t deny this, but I will admit that I am very skeptical that machines so far experience something like subjective awareness, and if you maintain that objects (be they trees or computers) experience subjective awareness without being able to give some sign of this, I suggest that this is an unfalsifiable proposition. Moreover, I have no reason to believe that it is true, falsifiable or not.

      To take up your first point last of all, is there a difference between an electro-chemical computer and an exclusively electronic computer? I think there is, just as there is a difference between analogue computers (such as were used extensively on the B-29 Superfortress) and digital computers. However, I do not know enough about electronics, neurology, or chemistry to argue this point in detail. Except I will say this: it is easy to come up with a reductivist schema based on digital information processing, so I believe that people who are instinctively positivists (even if they have never read a word of philosophy and don’t know what a positivist is) are drawn to digital formulations. Analogue formulations are continuous, not involving the finite precision errors and finite dimension errors inevitably involved in digital information processing; however, they certainly have other limitations, as subsequent history has shown in the meteoric rise of digital computing even while analogue computing has languished.

      Best wishes,

      Nick

  5. Great post. I’ve always suspected the singularity wasn’t coming.

    • geopolicraticus said

      Dear Mr. August,

      Glad you enjoyed it. No one knows what the future holds, except that it probably doesn’t hold one thing and one thing only. In further posts I have tried to address this. For example, cf. my Three Futures post.

      Best wishes,

      Nick
