The Future of the Mind
by Michio Kaku
In light of this, AI researchers are beginning to reexamine the “top-down approach” they have followed for the last fifty years (e.g., putting all the rules of common sense on a CD). Now they are giving the “bottom-up approach” a second look. This approach tries to follow Mother Nature, which has created intelligent beings (us) via evolution, starting with simple animals like worms and fish and then creating more complex ones. Neural networks must learn the hard way, by bumping into things and making mistakes.
Dr. Rodney Brooks, former director of the famed MIT Artificial Intelligence Laboratory, and cofounder of iRobot, which makes those mechanical vacuum cleaners found in many living rooms, introduced an entirely new approach to AI. Instead of designing big, clumsy robots, why not build small, compact, insectlike robots that have to learn how to walk, just as in nature?
When I interviewed him, he told me that he used to marvel at the mosquito, which had a nearly microscopic brain with very few neurons, yet was able to maneuver in space better than any robot airplane. He built a series of remarkably simple robots, affectionately called “insectoids” or “bugbots,” which scurried around the floors of MIT and could run circles around the more traditional robots. The goal was to create robots that follow the trial-and-error method of Mother Nature. In other words, these robots learn by bumping into things.
(At first, it may seem that this requires a lot of programming. The irony, however, is that neural networks require no programming at all. The only thing that the neural network does is rewire itself, by changing the strength of certain pathways each time it makes a right decision. So programming is nothing; changing the network is everything.)
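To make this concrete, here is a minimal Python sketch of that kind of trial-and-error rewiring. Nothing below encodes any rules about the world; the only thing that changes is the strength of each pathway. The toy environment, the actions, and the update numbers are all invented for illustration.

```python
import random

# Pathway strengths for each possible action; no rules, just weights.
weights = {"forward": 1.0, "left": 1.0, "right": 1.0}

def bumped_into_wall(action):
    # Hypothetical world: moving forward usually hits an obstacle.
    return action == "forward" and random.random() < 0.8

for trial in range(1000):
    # Pick an action in proportion to current pathway strengths.
    total = sum(weights.values())
    r, cumulative = random.uniform(0, total), 0.0
    for action, w in weights.items():
        cumulative += w
        if r <= cumulative:
            break
    # Strengthen the pathway after a good outcome, weaken it after a collision.
    weights[action] *= 0.9 if bumped_into_wall(action) else 1.1
    weights[action] = min(weights[action], 100.0)  # keep strengths bounded

print(weights)  # "forward" ends up far weaker than "left" and "right"
```

After a thousand bumps, the network has "learned" to avoid the wall without a single rule ever being programmed in; only the connection strengths changed.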
Science-fiction writers once envisioned that robots on Mars would be sophisticated humanoids, walking and moving just like us, with complex programming that gave them human intelligence. The opposite has happened. The grandchildren of this bottom-up approach, like the Mars Curiosity rover, are now roaming over the surface of Mars. They are not programmed to walk like a human. Instead, they have the intelligence of a bug, yet they do quite well on that terrain. These Mars rovers have relatively little programming; instead, they learn as they bump into obstacles.
ARE ROBOTS CONSCIOUS?
Perhaps the clearest way to see why true robot automatons do not yet exist is to rank their level of consciousness. As we have seen in Chapter 2, we can rank consciousness in four levels. Level 0 consciousness describes thermostats and plants; that is, it involves a few feedback loops in a handful of simple parameters such as temperature or sunlight. Level I consciousness describes insects and reptiles, which are mobile and have a central nervous system; it involves creating a model of the world in relationship to a new parameter, space. Then we have Level II consciousness, which creates a model of the world in relationship to others of its kind, requiring emotions. Finally we have Level III consciousness, which describes humans, who incorporate time and self-awareness to simulate how things will evolve in the future and determine our own place in these models.
We can use this theory to rank the robots of today. The first generation of robots was at Level 0: static machines, without wheels or treads. Today’s robots are at Level I, since they are mobile, but they occupy a very low echelon because they have tremendous difficulty navigating the real world. Their consciousness can be compared to that of a worm or a slow insect. To fully produce Level I consciousness, scientists will have to create robots that can realistically duplicate the consciousness of insects and reptiles. Even insects have abilities that current robots lack, such as rapidly finding hiding places, locating mates in the forest, recognizing and evading predators, or finding food and shelter.
As we mentioned earlier, we can numerically rank consciousness by the number of feedback loops at each level. Robots that can see, for example, may have several feedback loops because they have visual sensors that can detect shadows, edges, curves, geometric shapes, etc., in three-dimensional space. Similarly, robots that can hear require sensors that can detect frequency, intensity, stress, pauses, etc. The total number of these feedback loops may total ten or so (while an insect, because it can forage in the wild, find mates, locate shelter, etc., may have fifty or more feedback loops). A typical robot, therefore, may have Level I:10 consciousness.
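As a back-of-the-envelope illustration, this Level I score amounts to a simple tally of feedback loops. The loop names below are invented placeholders, not an actual robot's sensor list:

```python
# Tally a robot's sensory feedback loops to get its Level I score.
# The loop names are illustrative placeholders.
vision_loops = ["shadows", "edges", "curves", "geometric shapes", "depth"]
hearing_loops = ["frequency", "intensity", "stress", "pauses"]

level_1_score = len(vision_loops) + len(hearing_loops)
print(f"Level I:{level_1_score}")  # Level I:9, roughly the "ten or so" above
```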
Robots will have to be able to create a model of the world in relation to others if they are to enter Level II consciousness. As we mentioned before, Level II consciousness, to a first approximation, is computed by multiplying the number of members of its group times the number of emotions and gestures that are used to communicate between them. Robots would thus have a consciousness of Level II:0. But hopefully, the emotional robots being built in labs today may soon raise that number.
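In code, the Level II estimate above is just a product. Here is a rough sketch; all the sample numbers are invented for illustration:

```python
# A rough sketch of the Level II estimate: the number of members in a group
# times the number of emotions and gestures used among them.
# Sample numbers are invented for illustration.
def level_2_score(group_members, signals):
    return group_members * signals

print(level_2_score(group_members=10, signals=15))  # a social animal: Level II:150
print(level_2_score(group_members=0, signals=0))    # today's robots: Level II:0
```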
Current robots view humans as simply a collection of pixels moving on their TV sensors, but some AI researchers are beginning to create robots that can recognize emotions in our facial expressions and tone of voice. This is a first step toward robots’ realizing that humans are more than just random pixels, and that they have emotional states.
In the next few decades, robots will gradually rise in Level II consciousness, becoming as intelligent as a mouse, rat, rabbit, and then a cat. Perhaps late in this century, they will be as intelligent as a monkey, and will begin to create goals of their own.
Once robots have a working knowledge of common sense and the Theory of Mind, they will be able to run complex simulations into the future featuring themselves as the principal actors, and thus enter Level III consciousness. They will leave the world of the present and enter the world of the future. This is many decades beyond the capability of any robot today. Running simulations of the future means that you have a firm grasp of the laws of nature, causality, and common sense, so that you can anticipate future events. It also means that you understand human intentions and motivations, so you can predict their future behavior as well.
The numerical value of Level III consciousness, as we mentioned, is calculated by the total number of causal links one can make in simulating the future in a variety of real-life situations, divided by the average value of a control group. Computers today are able to make limited simulations in a few parameters (e.g., the collision of two galaxies, the flow of air around an airplane, the shaking of buildings in an earthquake), but they are totally unprepared to simulate the future in complex, real-life situations, so their level of consciousness would be something like Level III:5.
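The Level III estimate above is a simple ratio. Here is a rough sketch, with the scores scaled so that the control-group average comes out to 100; all the inputs are invented for illustration:

```python
# A rough sketch of the Level III estimate: causal links a subject can chain
# when simulating the future, divided by the control-group average, scaled
# so that the control average scores 100. Inputs are illustrative.
def level_3_score(causal_links, control_average):
    return 100 * causal_links / control_average

print(level_3_score(causal_links=100, control_average=100))  # an average human: 100.0
print(level_3_score(causal_links=5, control_average=100))    # today's computers: Level III:5
```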
As we can see, it may take many decades of hard work before we have a robot that can function normally in human society.
SPEED BUMPS ON THE WAY
So when might robots finally match and exceed humans in intelligence? No one knows, but there have been many predictions. Most of them rely on Moore’s law extending decades into the future. However, Moore’s law is not a law of nature at all, and in fact it ultimately runs afoul of a fundamental physical theory: the quantum theory.
As such, Moore’s law cannot last forever. In fact, we can already see it slowing down now. It might flatten out by the end of this or the next decade, and the consequences could be dire, especially for Silicon Valley.
The problem is simple. Right now, you can place hundreds of millions of silicon transistors on a chip the size of your fingernail, but there is a limit to how much you can cram onto these chips. Today the smallest layer of silicon in your Pentium chip is about twenty atoms in width, and by 2020 that layer might be five atoms across. But then Heisenberg’s uncertainty principle kicks in: you wouldn’t be able to determine precisely where the electron is, and it could “leak out” of the wire. (See the Appendix, where we discuss the quantum theory and the uncertainty principle in more detail.) The chip would short-circuit. In addition, it would generate enough heat to fry an egg on it. So leakage and heat will eventually doom Moore’s law, and a replacement will soon be necessary.
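The arithmetic behind this timetable is straightforward. As a rough sketch, if the smallest feature halves every few years (a three-year cadence is assumed here purely for illustration), a twenty-atom layer reaches five atoms after just two halvings:

```python
# Rough scaling arithmetic: halve the smallest silicon feature every ~3 years.
# The cadence and starting year are assumptions for illustration only.
width_atoms = 20
year = 2014
while width_atoms > 5:
    width_atoms /= 2
    year += 3
    print(f"~{year}: about {width_atoms:.0f} atoms wide")
# At ~5 atoms, quantum uncertainty lets electrons tunnel out of the wire.
```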
If packing transistors onto flat chips is reaching its limit, Intel is making a multibillion-dollar bet that chips can rise into the third dimension. Time will tell if this gamble pays off (one major problem with 3-D chips is that the heat they generate rises rapidly with the height of the chip).
Microsoft is looking into other options, such as expanding in two dimensions with parallel processing. One possibility is to spread chips horizontally in a row: you break up a software problem into pieces, work out each piece on a separate chip, and reassemble the results at the end. However, coordinating these chips can be a difficult process, and software written this way speeds up at a much slower pace than the supercharged exponential rate we are accustomed to with Moore’s law.
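Here is a minimal sketch of that divide-and-reassemble idea in Python, using processes to stand in for separate chips; summing squares stands in for a real workload:

```python
from multiprocessing import Pool

def work_on_piece(chunk):
    # Each worker solves its own piece of the problem independently.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    chunks = [numbers[i::4] for i in range(4)]          # break the problem into 4 pieces
    with Pool(processes=4) as pool:
        partial_sums = pool.map(work_on_piece, chunks)  # one piece per chip/core
    print(sum(partial_sums))                            # reassemble at the end
```

The catch, as noted above, is that splitting, coordinating, and reassembling carries overhead of its own, which is why parallelism has never delivered the automatic doubling that shrinking transistors did.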
These stopgap measures may add years to Moore’s law. But eventually all of this must pass, too: the quantum theory inevitably takes over. This is why physicists are already experimenting with a wide variety of alternatives for the day the Age of Silicon draws to a close: quantum computers, molecular computers, nanocomputers, DNA computers, optical computers, and so on. None of these technologies, however, is ready for prime time.
THE UNCANNY VALLEY
But assume for the moment that one day we will coexist with incredibly sophisticated robots, perhaps using chips with molecular transistors instead of silicon. How closely do we want our robots to resemble us? Japan is the world’s leader in creating robots that resemble cuddly pets and children, but their designers are careful not to make their robots appear too human, which can be unnerving. This phenomenon was first studied by Dr. Masahiro Mori in Japan in 1970 and is called the “uncanny valley.” It posits that robots look creepy if they look too much like humans. (The effect was actually first mentioned by Darwin in 1839 in The Voyage of the Beagle, and again by Freud in 1919 in an essay titled “The Uncanny.”) Since then, it has been studied very carefully not just by AI researchers but also by animators, advertisers, and anyone promoting a product involving humanlike figures. For instance, in a review of the movie The Polar Express, a CNN writer noted, “Those human characters in the film come across as downright … well, creepy. So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying.”
According to Dr. Mori, the more a robot looks like a human, the more we feel empathy toward it, but only up to a point. There is a dip in empathy as the robot approaches actual human appearance—hence the uncanny valley. If the robot looks very similar to us save for a few features that are “uncanny,” it creates a feeling of revulsion and fear. If the robot appears 100 percent human, indistinguishable from you and me, then we’ll register positive emotions again.
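The shape of Mori's curve can be caricatured in a few lines of code. The piecewise function and all of its numbers below are invented for illustration; they merely reproduce the rise, the dip, and the recovery described above:

```python
# A toy rendering of the uncanny valley: empathy rises with human likeness,
# plunges near (but not at) full likeness, then recovers.
# Shape and numbers are illustrative, not taken from Mori's paper.
def empathy(likeness):  # likeness in [0, 1]
    if likeness < 0.7:
        return likeness                      # steady rise: toy robot -> cuddly pet
    if likeness < 0.99:
        return 0.7 - 3 * (likeness - 0.7)    # the valley: "almost human" reads as creepy
    return 1.0                               # indistinguishable from human: empathy returns

for x in (0.3, 0.6, 0.85, 0.95, 1.0):
    print(f"likeness {x:.2f} -> empathy {empathy(x):+.2f}")
```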
This has practical implications. For example, should robots smile? At first, it seems obvious that robots should smile to greet people and make them feel comfortable. Smiling is a universal gesture that signals warmth and welcome. But if the robot smile is too realistic, it makes people’s skin crawl. (For example, Halloween masks often feature fiendish-looking ghouls that are grinning.) So robots should smile only if they are childlike (i.e., with big eyes and a round face) or perfectly human, and nothing in between. (When we force a smile, we activate facial muscles via the prefrontal cortex. But when we smile because we are in a good mood, the signal comes from the limbic system, which activates a slightly different set of muscles. Our brains can tell the subtle difference between the two, which was beneficial for our evolution.)
This effect can also be studied using brain scans. Let’s say that a subject is placed into an MRI machine and is shown a picture of a robot that looks perfectly human, except that its bodily motions are slightly jerky and mechanical. The brain, whenever it sees anything, tries to predict that object’s motion into the future. So when looking at a robot that appears to be human, the brain predicts that it will move like a human. But when the robot moves like a machine, there is a mismatch, which makes us uncomfortable. In particular, the parietal lobe lights up (specifically, the part of the lobe where the motor cortex connects with the visual cortex). It is believed that mirror neurons exist in this area of the parietal lobe. This makes sense, because the visual cortex picks up the image of the humanlike robot, and its motions are predicted via the motor cortex and by mirror neurons. Finally, it is likely that the orbitofrontal cortex, located right behind the eyes, puts everything together and says, “Hmmm, something is not quite right.”
Hollywood filmmakers are aware of this effect. When spending millions on making a horror movie, they realize that the scariest scene is not when a gigantic blob or Frankenstein’s monster pounces out of the bushes. The scariest scene is when there is a perversion of the ordinary. Think of the movie The Exorcist. What scene made moviegoers vomit as they ran to escape the theater, or faint right in their seats? Was it the scene when a demon appears? No. Theaters across the world erupted in shrill screams and loud sobs when Linda Blair turned her head completely around.
This effect can also be demonstrated in young monkeys. If you show them pictures of Dracula or Frankenstein, they simply laugh and rip the pictures apart. But what sends these young monkeys screaming in terror is a picture of a decapitated monkey. Once again, it is the perversion of the ordinary that elicits the greatest fear. (In Chapter 2, we mentioned that the space-time theory of consciousness explains the nature of humor, since the brain simulates the future of a joke and then is surprised to hear the punch line. This also explains the nature of horror. The brain simulates the future of an ordinary, mundane event, but then is shocked when things suddenly become horribly perverted.)
For this reason, robots will continue to look somewhat childlike in appearance, even as they approach human intelligence. Only when robots can act realistically like humans will their designers make them look fully human.