
As a first step, and it would be her only step, Lindman constructs a device that manipulates her face with a set of mechanical pliers, levers, and wires, “just to begin with the experience of having my face put into different positions.” It is painful and prompts Lindman to reconsider the direct plug-in she hopes some day to achieve. “I’m not afraid of too much pain,” she says. “I’m more afraid of damage, like real damage, biological damage, brain damage. I don’t think it’s going to happen, but it’s scary.” And Lindman imagines another kind of damage. If some day she does hook herself up to a robot’s program, she believes she will have knowledge of herself that no human has ever had. She will have the experience of what it feels like to be “taken over” by an alien intelligence. Perhaps she will feel its pull and her lack of resistance to it. The “damage” she fears relates to this. She may learn something she doesn’t want to know. Does the knowledge of the extent to which we are machines mark the limit of our communion with machines? Is this knowledge taboo? Is it harmful?
Lindman’s approach is novel, but the questions she raises are not new. Can machines develop emotions? Do they need emotions to develop full intelligence? Can people only relate to machines by projecting their own emotions onto them, emotions that machines cannot achieve? The fields of philosophy and artificial intelligence have a long history of addressing such matters. In my own work, I argue that artificial comprehension has limits because neither computer agents nor robots have a human life cycle.17
For me, this objection is captured by the man who challenged the notion of having a computer psychotherapist with the comment, “How can I talk about sibling rivalry to something that never had a mother?” These days, AI scientists respond to the concern about the lack of machine emotion by proposing to build some. In AI, the position that begins with “computers need bodies in order to be intelligent” becomes “computers need affect in order to be intelligent.”
Computer scientists who work in the field known as “affective computing” feel supported by the work of social scientists who underscore that people always project affect onto computers, which helps them to work more constructively with the machines.18
For example, psychologist Clifford Nass and his colleagues review a set of laboratory experiments in which “individuals engage in social behavior towards technologies even when such behavior is entirely inconsistent with their beliefs about machines.”19
People attribute personality traits and gender to computers and even adjust their responses to avoid hurting the machines’ “feelings.” In one dramatic experiment, a first group of people is asked to perform a task on computer A and to evaluate the task on the same computer. A second group is asked to perform the task on computer A but to evaluate it on computer B. The first group gives computer A far higher grades. Basically, participants do not want to insult a computer “to its face.”
Nass and his colleagues suggest that “when we are confronted with an entity that [behaves in humanlike ways, such as using language and responding based on prior inputs,] our brains’ default response is to unconsciously treat the entity as human.”20
Given this, they propose that technologies be made more “likeable” for practical reasons. People will buy them and they will be easier to use. But making a machine “likeable” has moral implications. “It leads to various secondary consequences in interpersonal relationships (for example, trust, sustained friendship, and so forth).”21
For me, these secondary consequences are the heart of the matter. Making a machine easy to use is one thing. Giving it a winning personality is another. Yet, this is one of the directions taken by affective computing (and sociable robotics).
Computer scientists who work in this tradition want to build computers able to assess their users’ affective states and respond with “affective” states of their own. At MIT, Rosalind Picard, widely credited with coining the phrase “affective computing,” writes, “I have come to the conclusion that if we want computers to be genuinely intelligent, to adapt to us, and to interact naturally with us, then they will need the ability to recognize and express emotions, and to have what has come to be called ‘emotional intelligence.’”22
Here the line is blurred between computers having emotions and behaving as if they did. Indeed, for Marvin Minsky, “Emotion is not especially different from the processes that we call ‘thinking.’”23
He joins Antonio Damasio on this point but holds the opposite view of where the idea takes us. For Minsky, it means that robots are going to be emotional thinking machines. For Damasio, it means they can never be, unless robots acquire bodies with the same characteristics and problems as living bodies.
In practice, researchers in affective computing try to avoid the word “emotion.” Talk of emotional computers reliably raises strong objections. How would computers get these emotions? “Affect” sounds more cognitive. Giving machines a bit of “affect” to make them easier to use sounds like common sense, more a user interface strategy than a philosophical position. But synonyms for “affective” include “emotional,” “feeling,” “intuitive,” and “noncognitive,” just to name a few.24
“Affect” loses these meanings when it becomes something computers have. The word “intelligence” underwent a similar reduction in meaning when we began to apply it to machines. Intelligence once denoted a dense, layered, complex attribute. It implied intuition and common sense. But when computers were declared to have it, intelligence started to denote something more one-dimensional, strictly cognitive.
Lindman talks about her work with Domo and Mertz as a contribution to affective computing. She is convinced that Domo needs an additional layer of emotional intelligence. Since it wasn’t programmed in, she says she had to “add it herself” when she enacted the robot’s movements. But listening to Lindman describe how she had to “add in” yearning and tenderness to the relationship between Domo and Edsinger, I have a different reaction. Perhaps it is better that Lindman had to “add in” emotion. It put into sharp relief what is unique about people. The idea of affective computing intentionally blurs the line.
THROUGH THE EYES OF THE ROBOT
 
Domo and Mertz are advanced robots. But we know that feelings of communion are evoked by far simpler ones. Recall John Lester, the computer scientist who thought of his AIBO as both machine and creature. Reflecting on AIBO, Lester imagines that robots will change the course of human evolution.25
In the future, he says, we won’t simply enjoy using our tools, “we will come to care for them. They will teach us how to treat them, how to live with them. We will evolve to love our tools; our tools will evolve to be loveable.”
Like Lindman and Edsinger, Lester sees a world of creature-objects burnished by our emotional attachments. With a shy shrug that signals he knows he is going out on a limb, he says, “I mean, that’s the kind of bond I can feel for AIBO now, a tool that has allowed me to do things I’ve never done before.... Ultimately [tools like this] will allow society to do things that it has never done.” Lester sees a future in which something like an AIBO will develop into a prosthetic device, extending human reach and vision.26
It will allow people to interact with real, physical space in new ways. We will see “through its eyes,” says Lester, and interact “through its body. . . . There could be some parts of it that are part of you, the blending of the tools and the body in a permanent physical way.” This is how Brooks talks about the merging of flesh and machine. There will be no robotic “them” and human “us.” We will either merge with robotic creatures, or in a long first step, we will become so close to them that we will integrate their powers into our sense of self. In this first step, a robot will still be an other, but one that completes you.
These are close to the dreams of Thad Starner, one of the founders of MIT’s Wearable Computing Group, earlier known as the “cyborgs.” He imagines bringing up a robot as a child in the spirit of how Brooks set out to raise Cog. But Starner insists that Cog—and successor robots such as Domo and Mertz—are “not extreme enough.”27
They live in laboratories, so no matter how good the designers’ intentions, the robots will never be treated like human babies. Starner wants to teach a robot by having it learn from his life—by transmitting his life through sensors in his clothes. The sensors will allow “the computer to see as I see, hear as I hear, and experience the world around me as I experience it,” Starner says. “If I meet somebody at a conference it might hear me say, ‘Hi, David,’ and shake a hand. Well, if it then sees me typing in somebody’s name or pulling up that person’s file, it might actually start understanding what introductions are.” Starner’s vision is “to create something that’s not just an artificial intelligence. It’s me.”
In a more modest proposal, the marriage of connectivity and robotics is also the dream of Greg, twenty-seven, a young Israeli entrepreneur who has just graduated from business school. It is how he intends to make his fortune—and in the near future. In Greg’s design, data from his cell phone will animate a robot. He says,
I will walk around with my phone, but when I come home at night, I will plug it into a robotic body, also intelligent but in different ways. The robot knows about my home and how to take care of it and to take care of me if I get sick. The robot would sit next to me and prepare the documents I need to make business calls. And when I travel, I would just have to take the phone, because another robot will be in Tel Aviv, the same model. And it will come alive when I plug in my phone. And the robot bodies will offer more, say, creature comforts: a back rub for sure and emergency help if you get into medical trouble. It will be reassuring for a young person, but so much more for an old person.
 
We will animate our robots with what we have poured into our phones: the story of our lives. When the brain in your phone marries the body of your robot, document preparation meets therapeutic massage. Here is a happy fantasy of security, intellectual companionship, and nurturing connection. How can one not feel tempted?
Lester dreams of seeing the world through AIBO’s eyes: it would be a point of access to an enhanced environment. Others turn this around, saying that the robot will become the environment; the physical world will be laced with the intelligence we are now trying to put into machines. In 2008, I addressed a largely technical audience at a software company, and a group of designers suggested that in the future people will not interact with stand-alone robots at all—that will become an old fantasy. What we now want from robots, they say, we will begin to embed in our rooms. These intellectually and emotionally “alive” rooms will collaborate with us. They will understand speech and gesture. They will have a sense of humor. They will sense our needs and offer comfort. Our rooms will be our friends and companions.
CONSIDERING THE ROBOT FOR REAL
 
The story of robots, communion, and moments of more opens up many conversations, both philosophical and psychological. But these days, as people imagine robots in their daily lives, their conversations become quite concrete as they grapple with specific situations and try to figure out if a robot could help.
Tony, a high school teacher, has just turned fifty. Within just the past few years, his life has entered a new phase. All three of his children are in college. His parents are dead. He and his wife, Betty, find themselves in constant struggle with her mother, Natasha, eighty-four, who is recuperating from a stroke and also showing early signs of Alzheimer’s. Even when she was younger and at her best, Natasha had been difficult. Now, she is anxious and demanding, often capricious. She criticizes her daughter and son-in-law when they try to help; nothing seems enough. Tony, exhausted, considers their options. With some dread, he and Betty have been talking about moving Natasha into their home. But they both work, so Natasha will require a caretaker to tend to her as she declines. He hears of work in progress on robots designed for child and elder care. This is something new to consider, and his first thoughts are positive.
Well, if I compare having a robot with an immigrant in my house, the kind of person who is available to take care of an elderly person, the robot would be much better. Sort of like flying Virgin Atlantic and having your own movie. You could have the robot be however you wanted it. It wouldn’t be rude or illiterate or steal from you. It would be very safe and specialized. And personalized. I like that. Natasha’s world is shrinking because of the Alzheimer’s. A robot geared to the Alzheimer’s—that could sort of measure where she was on any given day and give her some stimulation based on that—that would be great.
 
And then there is a moment of reconsideration:
But maybe I’m getting it backwards. I’m not sure I would want a robot taking care of me when I’m old. Actually, I think I would rather not be alive than be maintained by a robot. The human touch is so important. Even when people have Alzheimer’s, even when they are unconscious, in a coma, I’ve read that people still have the grasping reflex. I suppose I want Natasha to have the human touch. I would want to have the human touch at the end. Other than that it is like that study where they substituted the terry cloth monkeys for the real monkeys and the baby monkeys clung to the wire monkeys with terry cloth wrapped around them. I remember studying that in college and finding it painfully sad. No, you need the real monkey to preserve your dignity. Your dignity as a person. Without that, we’re like cows hooked up to a milking machine. Or like our life is an assembly line where at the end you end up with the robot.
 
