The Design of Future Things

Car+driver is a conscious, emotional, intelligent system. When automobiles were first available at the very start of the twentieth century, the human driver provided all processing levels: visceral, behavioral, and reflective. As the technology improved, more and more visceral elements were added, so that the car took care of internal engine and fuel adjustments and shifting. With antiskid braking, stability controls, cruise control, and now lane-keeping functionality, the car has taken on more and more of the behavioral side of driving. So, with most modern cars, the car provides the visceral part, and the driver the reflective part, with both active at the behavioral level.

The twenty-first-century automobile has more and more reflective components: the conscious, reflective parts of the car+driver are being taken over by the car itself. The reflective powers are evident in the adaptive cruise control that continually
assesses how close the car is to other vehicles, navigation systems that monitor how well the driver conforms to instructions, and all systems that monitor the driver's behavior. When the car's reflective analyses find problems, they signal the person to change behavior or simply correct it when possible; the car will even take complete control when it determines that this is required.

Someday cars will no longer need drivers. Instead, people will all be passengers, able to gossip, read, or even sleep while the car chauffeurs them to their destination. Do you enjoy driving? Fine, there will be special places set aside for people to drive their cars, just as those who enjoy horseback riding today have special places set aside for that activity. When this day arrives, and I expect it to happen some time in the twenty-first century, the entity known as car+driver will be extinct. Instead, we will have cars, and we will have people, just as we used to, except now the car will be visceral, behavioral, and reflective: a truly intelligent, autonomous machine, at least for the purposes of transportation, which will include not only the navigation and driving but also taking care of the comfort and well-being of the passengers, providing the right lighting, temperature, food and drink, and entertainment.

Will passengers be able to have meaningful conversations with their cars? In the past, the human tendency to assign beliefs, emotions, and personality traits to all sorts of things has been criticized as anthropomorphism. As machines gain in their cognitive and emotional capacities, the anthropomorphism may not be so erroneous. These assignments might very well be appropriate and correct.

The Gulfs of Goals, Action, and Perception

People have many unique capabilities that cannot be replicated in machines—at least not yet. As we introduce automation and intelligence into the machines we use today, we need to be humble and to recognize the problems and the potential for failure. We also need to recognize the vast discrepancy between the workings of people and of machines.

Today, there are “intelligent systems” in many everyday things. We have intelligent washing machines, dishwashers, robot vacuum cleaners, automobiles, computers, telephones, and computer games. Are these systems really intelligent? No, they are responsive. The intelligence is all in the heads of the design team, people who carefully try to anticipate all possible conditions and program into the system the appropriate responses. In other words, the design team is mind reading, trying to assess all of the possible future states and how a person would respond in each situation. On the whole, these responsive systems are valuable and helpful—but they often fail.
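The difference between responsive and intelligent can be made concrete. A minimal sketch of the rule table inside a "smart" washing machine might look like the following; the sensor names, thresholds, and cycle settings are all invented for illustration, but the structure is the point: every behavior was written in advance by the design team, and the machine never reasons about a case its designers did not anticipate.

```python
# A "responsive" appliance: a fixed mapping from sensor readings to
# pre-programmed actions. All the intelligence lives in the rules the
# designers wrote, not in the machine. (Sensor names and values here
# are hypothetical, chosen only to illustrate the structure.)

def choose_cycle(load_kg: float, soil_level: str, fabric: str) -> dict:
    """Map sensed conditions to a wash cycle the designers anticipated."""
    if fabric == "delicate":
        return {"temp_c": 30, "spin_rpm": 400, "minutes": 40}
    if soil_level == "heavy":
        return {"temp_c": 60, "spin_rpm": 1200, "minutes": 90}
    if load_kg < 2.0:
        return {"temp_c": 40, "spin_rpm": 800, "minutes": 35}
    # The fallback: the designers' guess at a safe, general-purpose cycle.
    return {"temp_c": 40, "spin_rpm": 1000, "minutes": 60}

print(choose_cycle(1.5, "light", "cotton"))
```

Any situation outside these anticipated conditions, say, a sleeping cat in the drum, simply falls through to the default rule: the machine responds, but it does not understand.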

Why the failure? Because these systems can seldom measure directly the object of interest: they can only measure things their sensors can detect. Human beings have an incredibly rich sensorimotor system that allows continuous assessment of the state of the world and of our own bodies. We have tens of millions of specialized nerve cells for detecting light and sound, touch and taste, feel and balance, temperature and pressure, and pain, and internal sensors for our muscles and body position. In addition, we have built up complex representations of the world and our actions upon it, as well as accurate expectations based
upon a long history of interaction. Machines don't even come close.

Machines' sensors are not only limited, but they measure different things from those of people. Psychological perception is not the same as physical sensing. Machines can detect light frequencies, infrared and radio waves that people cannot. They can detect sound frequencies that lie outside the range of human perception. The same is true for many other variables, as well as for action systems. We humans have flexible muscles and limbs, with dexterous fingers and toes. Machines are much less flexible but also more powerful.

Finally, people's goals are very different from those of machines. Indeed, many people would even deny that machines have goals. As machines get smarter and smarter, more and more intelligent, however, they will assess the situation and decide upon a course of action, with some distinct goals that they wish to accomplish. As for emotions, well, human emotions are central to our behavior and interpretation of the world. Machine emotions don't exist, and even when machines do start to have rudimentary emotions, they will differ considerably from those of people.

Common Ground: The Fundamental Limitation in Human-Machine Interaction

Alan and Barbara begin with a great mass of knowledge, beliefs, and suppositions they believe they share. This is what I call their common ground. . . . [T]hey assume to be common ground what has taken place in conversations
they have jointly participated in, including the current conversation so far. The more time Alan and Barbara spend together, the larger their common ground. . . . [T]hey cannot coordinate their actions without rooting them in their common ground.

—Herbert Clark, Using Language

Communication and negotiation require what linguists call a “common ground”: a shared basis of understanding that serves as the platform for the interaction. In the quotation by the psycholinguist Herbert Clark, above, the fictitious couple, Alan and Barbara, involve their shared common ground in all joint activities, whether linguistic or not. When people from the same culture and social group interact, their shared beliefs and experiences allow for rapid and efficient interactions. Ever eavesdrop on the conversations of others? I do it often while walking through shopping malls and parks, in the name of science, of course. I am continually amazed by the lack of content, even between two people heavily engaged in discussion. A typical conversation might go like this:

Alan: “You know?”

Barbara: “Yeah.”

To Alan and Barbara this interchange might very well be deep and significant. You and I will never know because all the critical knowledge we need to understand what is being referred to is missing: their common ground is unavailable to us.

The lack of common ground is the major cause of our inability to communicate with machines. People and machines have
so little in common that they lack any notion of common ground. People and people? Machine and machine? That's different: those pairs function quite well. People can share with other people. Machines can share with other machines. But people and machines? Nope.

It might surprise you to hear that machines can share common ground with one another, but that is because their designers, usually engineers, spend a great deal of time ensuring that all the background information required for efficient communication is indeed shared. When two machines start to interact, they first go through a ritual to ensure that there is mutual agreement about shared information, states, and even the syntax of the interaction. In the jargon of communication engineers, this is called "handshaking." It is so important that the engineering world has developed a huge framework of international committees to develop worldwide standards ensuring that communicating devices share the same assumptions and background knowledge. Standards are difficult to work out, for they require complex negotiations among otherwise competing companies, with technical, legal, and political issues all having to be resolved. The end results are worth it, however: they establish the common language, protocols, and background knowledge required for the establishment of a common ground and, therefore, for effective communication.

Want an example of how two machines establish common ground? Although the handshaking is usually quiet and invisible to us humans, it is involved in almost every use of electronic devices that wish to communicate with one another, whether it is your television set talking to the cable box and the cable box to the transmitting equipment, your computer connecting to a
website, or your cell phone searching for a signal when you first turn it on. The most accessible example, however, comes from all those peculiar sounds that come out of a fax machine. After you have dialed the phone number (note that the dial tone and ringing sounds are also forms of handshaking), you then hear a series of warbling tones as your fax machine negotiates with the receiving machine what coding standard to use, what transmission rate, and what resolution on the page. Then, as the fax proceeds, one machine transmits the signals, and the other continually acknowledges correct receipt. It's a more restricted and mechanized version of the interaction between two people meeting for the first time as they try to figure out whom they know in common and what skills and interests they might share.
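The fax negotiation just described can be caricatured in a few lines of code. In this toy sketch, each machine advertises the options it supports, best first, and the pair settle on the best option both share for each parameter. The capability names and values are invented for illustration; real fax machines negotiate under the ITU-T T.30 protocol, which is far more elaborate.

```python
# Toy handshake: two machines establish "common ground" by intersecting
# their advertised capabilities and adopting the best shared option.
# (Parameter names and values are illustrative, not real T.30 fields.)

def handshake(caller: dict, receiver: dict) -> dict:
    """Agree on the first (best) value both sides support, per parameter."""
    agreed = {}
    for param in caller:
        shared = [v for v in caller[param] if v in receiver.get(param, [])]
        if not shared:
            # No overlap at all: the machines cannot communicate.
            raise ValueError(f"no common ground on {param!r}")
        agreed[param] = shared[0]  # capability lists are ordered best-first
    return agreed

sender   = {"rate_bps": [14400, 9600, 4800], "resolution": ["fine", "standard"]}
old_unit = {"rate_bps": [9600, 4800],        "resolution": ["standard"]}

print(handshake(sender, old_unit))
# → {'rate_bps': 9600, 'resolution': 'standard'}
```

Note what makes this easy for machines: the designers agreed in advance on what the parameters are and how they are ranked. People arrive at a conversation with no such pre-negotiated schema, which is precisely why human common ground must be built up over a shared history instead.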

People can share common ground with other people. Machines can negotiate a common ground with other machines. But machines and people inhabit two different universes, one of logically prescribed rules that govern their interaction, the other of intricate, context-dependent actions, where the same apparent condition will give rise to different actions because “circumstances are different.” Moreover, the fundamental gulfs of goals, actions, and perception mean that machines and people will not even be able to agree upon such fundamental things as, What is happening in the world? What actions can we take? What are we trying to accomplish? The lack of common ground is a supergulf, keeping machines and humans far apart.

People learn from their pasts, modifying their behavior to account for what they have learned. This also means that the common ground between people grows over time. Moreover, people
are sensitive to which activities have been shared, so that Alan may interact with Barbara quite differently than he does with Charles, even in similar circumstances, because Alan realizes that the common ground he shares with Barbara is quite different from what he shares with Charles. Alan, Charles, and Barbara have the capacity to exchange new information; they can learn from their experiences and modify their behavior accordingly.

In contrast, machines can barely learn. Yes, they can make modifications in their performance as they experience success or failure, but their ability to generalize is very weak and, except in a few laboratory systems, pretty much nonexistent. Machine capabilities are continually improving, of course; throughout the world, research laboratories are working on all of these issues. But the gulf between what people have in common with one another and what machines and people have in common is huge and unlikely to be bridged in the foreseeable future.

Consider the three opening scenarios of future capabilities that started this chapter. Are they possible? How can machines know a person's private thoughts? How can they know what other activities are happening outside the range of their sensors? How can machines share enough knowledge about people to be so cocky in their suggestions? The answer is, they can't.

My refrigerator won't let me eat eggs? Maybe I'm not going to eat them; maybe I'm cooking for someone else. Yes, the refrigerator could detect that I was removing eggs, could know my weight and cholesterol levels through a medical information network that included both my home and some parts of my medical record from my physician's office, but that still doesn't give it the ability to read my mind and determine my intentions.

Can my automobile check my schedule and select an interesting route for me to drive? Yes, everything in that scenario is possible except, perhaps, the natural language interaction, but systems that speak are getting pretty good, so I wouldn't rule that out. Would I agree with the choice? If the car acted as described, it wouldn't matter: it is presenting an interesting suggestion, one I might not have thought of, but allowing me to choose. That's a nice, friendly interaction, one I certainly approve of.

Could my house actually be jealous of other nearby homes? This is unlikely, although comparing the equipment and operation of nearby homes is a perfectly sensible way to keep up to date. In businesses, this is called “benchmarking” and following “best practices.” So, once again, the scenario is possible, although not necessarily with the same jaunty language.

Machines are very limited in learning and predicting the consequences of new interactions. Their designers have incorporated whatever limited sensors their budget and the state of technology will allow. Beyond that, the designers are forced to imagine how the world might appear to the machine. From the limited data provided by the sensors, the designers must infer what might actually be going on and what actions the machine ought to take. Many of these systems do remarkably well as long as the task is well constrained and there are no unexpected occurrences. Once the situation goes outside the simple parameters for which they were designed, their simple sensors and intelligent decision-making and problem-solving routines are simply insufficient for the task. The gulf that separates people from machines is immense.
