Sunday, July 19, 2009

A word of caution on the technological singularity

SUMMARY - Famous futurist Ray Kurzweil thinks the boundaries between "humans" and "machines" will soon be blurred or gone. He also thinks it's good to enhance ourselves beyond our biology and become super-human beings with superior intelligence. Critics contend we will lose our peculiar humanity that way and basically annihilate ourselves. I think that both camps ignore important social and philosophical aspects that we must consider before taking sides. I agree with Kurzweil that we will eventually transcend our current human nature and become superhuman. I also agree that we should. But I am convinced we are not prepared for it, and that if we do it as quickly as he advocates we will effectively destroy ourselves.


Futurist and computer scientist Ray Kurzweil holds (The Singularity Is Near, 2005) the transhumanist view that technology will continue to grow until humans and their machines merge. In principle, he's correct. Consider the frequency of paradigm shifts: radical innovations, brief periods of intense social, scientific, and technological change. Examples are the discovery of fire, the advent of writing, the rise of democracy, and the first computers. If we chart paradigm shifts over time, we see that they occur at an exponentially accelerating rate (see picture above).

Millennia passed between the discovery of agriculture and the invention of writing, but only about 3,000 years between writing and democracy, a much faster change. Likewise, 500 years passed from the Renaissance to quantum mechanics, and a mere 40 from there to nuclear reactors. This accelerating rate of change is most impressive in computer science. When I was a kid, 486 processors were the big thing. They operated at around 50 megahertz. After only 15 years, today's typical commercial processors run at 4 gigahertz, about 80 times faster. The capacity of integrated circuits doubles every two years, and according to Kurzweil the world's combined computing power doubles each year.
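As a back-of-the-envelope check (a minimal sketch in Python, using only the rough figures quoted above rather than precise benchmarks), that clock-speed jump is indeed consistent with a doubling time of roughly two years:

```python
import math

# Rough figures quoted above (illustrative, not precise benchmarks).
f_old = 50e6   # ~50 MHz, a typical 486 clock
f_new = 4e9    # ~4 GHz, a typical modern desktop clock
years = 15

speedup = f_new / f_old            # 80x
doublings = math.log2(speedup)     # ~6.3 doublings in 15 years
doubling_time = years / doublings  # ~2.4 years per doubling

print(f"speedup: {speedup:.0f}x")
print(f"doublings: {doublings:.1f}")
print(f"doubling time: {doubling_time:.1f} years")
```

A doubling time of about 2.4 years lands right in line with the two-year figure for integrated circuits.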

Why is this important to understand? Because (Kurzweil claims, and I agree) we project the future linearly from the past, so our estimates are always too conservative. Most of us don't realize just how fast things change. Just think that in 1909 household electricity was rare and commercial flights didn't exist. Only 8 years passed from the first manned spaceflight to the moon landing, and today we have space stations and thousands of satellites. We think that history and progress move slowly and gradually, perhaps because exponential growth looks like linear growth at first, but that has never been the case: exponential growth soon snowballs. With this in mind, in only 30 years we'll see machines that pass the Turing test convincingly, we'll have colonized Mars, we'll be able to replicate virtually any type of solid object from thin air, much of medicine will be handed over to nanotechnology, and we'll be able to upload our minds to a computer and replace any part of our bodies with mechanical implants, thus prolonging our lives almost indefinitely. In other words, Star Trek is here.
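To see why exponential change is so easy to underestimate, here is a toy numeric comparison (the growth rates are assumed purely for illustration, not Kurzweil's data): a process compounding 10% per step against a straight line with the same initial slope.

```python
# A 10% compounding process vs. a straight line with the same initial
# slope: for small t they are nearly indistinguishable, which is why
# exponential progress is so easy to mistake for slow, gradual change.
for t in [0, 1, 5, 10, 25, 50, 100]:
    linear = 1 + 0.1 * t     # constant increments
    exponential = 1.1 ** t   # compounding 10% per step
    print(f"t={t:3d}  linear={linear:7.1f}  exponential={exponential:10.1f}")
```

For the first few steps the two columns are nearly identical; by t=100 the compounding process is over a thousand times larger.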

Kurzweil's main point is that nonbiological intelligence will inevitably overtake the biological mind. Contrary to the popular belief that the human brain is "the best computer," Kurzweil argues that our brain only excels at some tasks. It allows for exceptional pattern recognition (which is why brains have evolved the way they have), but even in the 1990s most computer processors could calculate faster, store more data, and recall it faster and more accurately. Human brains will soon need mechanical implants to keep pace with technological progress, and those implants will in turn accelerate progress, again producing a snowball effect. In short, in less than half a century "unenhanced" humans will be unable to understand the world around them, let alone live in it.

Now, Kurzweil is obviously an enthusiast. Perhaps his predictions are too optimistic, but the facts are there, and I have little doubt that our lifestyle will be radically different by the time I'm 60. What I do doubt, and he doesn't, is whether any of this is good. In fact, I am quite torn and can't yet pick a side. I'm neither a conservative nor a Luddite; I make extensive use of technology, which facilitates both my work and my relationships. I don't think "life is a miracle," and I certainly don't believe in God and the good ol' ways. But the projected rate of progress still worries me. I can also see the good in it, though, so I'll tackle the conservative objections first and try to defend Kurzweil.

Suppose Kurzweil is right and we basically let intelligent machines take over. Critics say we will have lost our humanity then, but for Kurzweil that will still be human intelligence: we need only expand our definition of "human." I see his point here. When we integrate ourselves with mechanical implants and our creations become virtually indistinguishable from us, aren't those machines also "human," being the product of our own making? Consider this from the outside, looking in. Why shouldn't we enhance ourselves? Is there some higher obligation to do otherwise? Nothing we have is so sacred that it mustn't be changed. "But we'll no longer be human," says the critic. Yet "being human" is just whatever we make it out to be. It's not self-evidently true that we must remain true to our original nature, especially since "natural" doesn't equal "good." Yes, we evolved through natural selection, which is a slow and gradual process. But for one thing, the whole of life on Earth also follows the exponential paradigm-shift curve (see chart above); and even if it didn't, nothing says we shouldn't depart from what has been and move into what will be. Of course, nothing says we should just because we can, either. At this point, then, we should take a fairly neutral stance.

On the other hand, the considerations I've just made hinge on my own metaphysical view. I am a nonreligious materialist who thinks humans are beautiful cosmic accidents. I see no overarching purpose (divine or otherwise) for our species, which is the sole architect of its own future or demise. I am thus open to radical change, even in human nature itself. But what about a person who doesn't share my bias: one who believes humanity has a "manifest destiny" and must answer to a creator, or simply one for whom our roots are as important as our potential, and who feels that as soon as we depart from them we are lost? Clearly this person's outlook will be quite different. It is very narrow-minded to judge Kurzweil's predictions without checking your own metaphysics first. For once, abstraction and neutrality may be liabilities in philosophical inquiry.

Kurzweil's own analysis is one-sided and narrow-minded, for it downplays all social considerations. Take, for example, the ambiguous term "we" that he and I have been using. Who is "we"? All humans? The 10-15% who make up technologically proficient societies? Only the scientific community? It's true that what scientists pioneer, people ultimately adopt, but that adoption is slow and scrutinized by ethics and politics. Conservative forces are always defeated in the end, but they serve a crucial purpose: they restrain us from adopting new things until we're ready for them. Does Kurzweil factor all this into his predictions of perfect androids by 2030? I think it could be done, but I have yet to see someone do it.

Perhaps more importantly, Kurzweil forgets that the vast majority of human beings are a half-century behind in the adoption of technology, and here exponential growth backfires: 50 years may not have meant much a millennium ago, but it's a night-and-day difference now. How will Turing-worthy AIs affect people who have just gotten used to cell phones and color TVs? It seems as if technological progress thus accelerated will further facilitate Western imperialism in its dominance over third-world countries (a dominance which has always been driven by technology anyway, from Pizarro to the free market).

With that in mind, my previous claims need revision. It may be okay to outdo our own nature and enhance ourselves, but must the price be the annihilation of "lesser" peoples? Kurzweil is right that by 2030 new technologies may eliminate world hunger, but will they actually be allowed to do so? It seems that as long as the new technologies are in Western hands, we will do all we can to keep them to ourselves and maintain power over the third world. At that point, non-Westerners will be forced to adopt the new high-tech standard just to eat, because if Kurzweil is right, unenhanced humans will be unable to live in the new world order at all. It's going to be "conform or die out." It's then easy to envision a new type of world war, which Kurzweil himself predicts: third-world, neo-Luddite "bio-humans" on one side and elite "mech-humans" on the other... and there's no doubt how that one would end.

So the logical and philosophical considerations must perforce be matched against the social and ethical ones. I doubt there's anything deeply wrong with enhancing ourselves. Sure, let's go ahead and become super-mech-badass-brainmachine-humans... but perhaps we should make dead certain that "we" means everyone and not just an elite. It pains me to say that the problem, as with almost everything in the world, is money. If Kurzweil is really right, new matter-energy conversion technologies (as well as new methods to harvest energy directly from the sun) will render money obsolete, and we will enjoy virtually infinite resources. Perhaps we should wait until at least then before becoming trans-humans.
