Alone Together
Author: Sherry Turkle

People are surprised by how upset they get in this theater of distress. And then they get upset that they are upset. They often try to reassure themselves, saying things like, “Chill, chill, it’s only a toy!” They are experiencing something new: you can feel bad about yourself for how you behave with a computer program. Adults come to the upside-down test knowing two things: the Furby is a machine and they are not torturers. By the end, with a whimpering Furby in tow, they are on new ethical terrain.12
We are at the point of seeing digital objects as both creatures and machines. A series of fractured surfaces—pet, voice, machine, friend—come together to create an experience in which knowing that a Furby is a machine does not alter the feeling that you can cause it pain. Kara, a woman in her fifties, reflects on holding a moaning Furby that says it is scared. She finds it distasteful, “not because I believe that the Furby is really scared, but because I’m not willing to hear anything talk like that and respond by continuing my behavior. It feels to me that I could be hurt if I keep doing this.” For Kara, “That is not what I do.... In that moment, the Furby comes to represent how I treat creatures.”
When the toy manufacturer Hasbro introduced its My Real Baby robot doll in 2000, it tried to step away from these complex matters. My Real Baby shut down in situations where a real baby might feel pain. This was in contrast to its prototype, a robot called “IT,” developed by a team led by MIT roboticist Rodney Brooks. “IT” evolved into “BIT” (for Baby IT), a doll with “states of mind” and facial musculature under its synthetic skin to give it expression.13
When touched in a way that would induce pain in a child, BIT cried out. Brooks describes BIT in terms of its inner states:
If the baby were upset, it would stay upset until someone soothed it or it finally fell asleep after minutes of heartrending crying and fussing. If BIT . . . was abused in any way—for instance, by being swung upside down—it got very upset. If it was upset and someone bounced it on their knee, it got more upset, but if the same thing happened when it was happy, it got more and more excited, giggling and laughing, until eventually it got overtired and started to get upset. If it were hungry, it would stay hungry until it was fed. It acted a lot like a real baby.14
 
BIT, with its reactions to abuse, became the center of an ethical world that people constructed around its responses to pleasure and pain. But when Hasbro put BIT into mass production as My Real Baby, the company decided not to present children with a toy that responded to pain. The theory was that a robot’s response to pain could “enable” sadistic behavior. If My Real Baby were touched, held, or bounced in a way that would hurt a real baby, the robot shut down.
In its promotional literature, Hasbro marketed My Real Baby as “the most real, dynamic baby doll available for young girls to take care of and nurture.” They presented it as a companion that would teach and encourage reciprocal social behavior as children were trained to respond to its needs for amusement as well as bottles, sleep, and diaper changes. Indeed, it was marketed as realistic in all things—except that if you “hurt” it, it shut down. When children play with My Real Baby, they do explore aggressive possibilities. They spank it. It shuts down. They shake it, turn it upside down, and box its ears. It shuts down.
Hasbro’s choice—maximum realism, but with no feedback for abuse—inspires strong feelings, especially among parents. For one group of parents, what matters most is that the toy not encourage a child’s aggression, though they disagree about which design accomplishes this. Some believe that if you market realism but show no response to “pain,” children are encouraged to inflict it because doing so seems to have no cost. Others think that if a robot simulates pain, it enables mistreatment.
Another group of parents wishes that My Real Baby would respond to pain for the same reason that they justify letting their children play violent video games: they see such experiences as “cathartic.” They say that children (and adults too) should express aggression (or sadism or curiosity) in situations that seem “realistic” but where nothing “alive” is being hurt. But even these parents are sometimes grateful for My Real Baby’s unrealistic show of “denial.” They do not want to see their children tormenting a screaming baby.
No matter what position one takes, sociable robots have taught us that we do not shrink from harming realistic simulations of life. This is, of course, how we now train people for war. First, we learn to kill the virtual. Then, desensitized, we are sent to kill the real. The prospect of studying these matters raises awful questions. Freedom Baird had people hold a whining, complaining Furby upside down, much to their discomfort. Do we want to encourage the abuse of increasingly realistic robot dolls?
When I observe children with My Real Baby in an after-school playgroup for eight-year-olds, I see a range of responses. Alana, to the delight of a small band of her friends, flings My Real Baby into the air and then shakes it violently while holding it by one leg. Alana says the robot has “no feelings.” Watching her, one wonders why it is necessary then to “torment” something without feelings. She does not behave this way with the many other dolls in the playroom. Scott, upset, steals the robot and brings it to a private space. He says, “My Real Baby is like a baby and like a doll.... I don’t think she wants to get hurt.”
As Scott tries to put the robot’s diaper back on, some of the other children stand beside him and put their fingers in its eyes and mouth. One asks, “Do you think that hurts?” Scott warns, “The baby’s going to cry!” At this point, one girl tries to pull My Real Baby away from Scott because she sees him as an inadequate protector: “Let go of her!” Scott resists. “I was in the middle of changing her!” It seems a good time to end the play session. As the research team, exhausted, packs up to go, Scott sneaks behind a table with the robot, gives it a kiss, and says good-bye, out of the sight of the other children.
In the pandemonium of Scott and Alana’s playgroup, My Real Baby is alive enough to torment and alive enough to protect. The adults watching this—a group of teachers and my research team—feel themselves in an unaccustomed quandary. If the children had been tossing around a rag doll, neither we, nor presumably Scott, would have been as upset. But it is hard to see My Real Baby treated this way. All of this—the Furbies that complain of pain, the My Real Babies that do not—creates a new ethical landscape. The computer toys of the 1980s only suggested ethical issues, as when children played with the idea of life and death by “killing” their Speak & Spells, taking out the toys’ batteries. Now, relational artifacts pose these questions directly.
One can see the new ethics at work in my students’ reactions to Nexi, a humanoid robot at MIT. Nexi has a female torso, an emotionally expressive face, and the ability to speak. In 2009, one of my students, researching a paper, made an appointment to talk with the robot’s development team. Due to a misunderstanding about scheduling, my student waited alone, near the robot. She was upset by her time there: when not interacting with people, Nexi was put behind a curtain and blindfolded.
At the next meeting of my graduate seminar, my student shared her experience of sitting alongside the robot. “It was very upsetting,” she said. “The curtain—and why was she blindfolded? I was upset because she was blindfolded.” The story of the shrouded and blindfolded Nexi ignited the seminar. In the conversation, all the students talked about the robot as a “she.” The designers had done everything they could to give the robot gender. And now, the act of blindfolding signaled sight and consciousness. In class, questions tumbled forth: Was the blindfold there because it would be too upsetting to see Nexi’s eyes? Perhaps when Nexi was turned off, “her” eyes remained open, like the eyes of a dead person? Perhaps the robot makers didn’t want Nexi to see “out”? Perhaps they didn’t want Nexi to know that when not in use, “she” is left in a corner behind a curtain? This line of reasoning led the seminar to an even more unsettling question: If Nexi is smart enough to need a blindfold to protect “her” from fully grasping “her” situation, does that mean that “she” is enough of a subject to make “her” situation abusive? The students agreed on one thing: blindfolding the robot sends a signal that “this robot can see.” And seeing implies understanding and an inner life, enough of one to make abuse possible.
I have said that Sigmund Freud saw the uncanny as something long familiar that feels strangely unfamiliar. The uncanny stands between standard categories and challenges the categories themselves. It is familiar to see a doll at rest. But we don’t need to cover its eyes, for it is we who animate it. It is familiar to have a person’s expressive face beckon to us, but if we blindfold that person and put them behind a curtain, we are inflicting punishment. The Furby with its expressions of fear and the gendered Nexi with her blindfold are the new uncanny in the culture of computing.
I feel even more uncomfortable when I learn about a beautiful “female” robot, Aiko, now on sale, that says, “Please let go . . . you are hurting me,” when its artificial skin is pressed too hard. The robot also protests when its breast is touched: “I do not like it when you touch my breasts.” I find these programmed assertions of boundaries and modesty disturbing because it is almost impossible to hear them without imagining an erotic body braced for assault.
FROM THE ROMANTIC REACTION TO THE ROBOTIC MOMENT
 
Soon, it may seem natural to watch a robot “suffer” if you hurt it. It may seem natural to chat with a robot and have it behave as though pleased you stopped by. As the intensity of experiences with robots increases, as we learn to live in new landscapes, both children and adults may stop asking the questions “Why am I talking to a robot?” and “Why do I want this robot to like me?” We may simply be charmed by the pleasure of its company.
The romantic reaction of the 1980s and 1990s put a premium on what only people can contribute to each other: the understanding that grows out of shared human experience. It insisted that there is something essential about the human spirit. In the early 1980s, David, twelve, who had learned computer programming at school, contrasted people and programs this way: “When there are computers who are just as smart as the people, the computers will do a lot of the jobs, but there will still be things for the people to do. They will run the restaurants, taste the food, and they will be the ones who will love each other, have families and love each other. I guess they’ll still be the only ones who go to church.”15
Adults, too, spoke of life in families. To me, the romantic reaction was captured by how one man rebuffed the idea that he might confide in a computer psychotherapist: “How can I talk about sibling rivalry to something that never had a mother?”
Of course, elements of this romantic reaction are still around us. But a new sensibility emphasizes what we share with our technologies. With psychopharmacology, we approach the mind as a bioengineerable machine.16
Brain imaging trains us to believe that things—even things like feelings—are reducible to what they look like. Our current therapeutic culture turns from the inner life to focus on the mechanics of behavior, something that people and robots might share.
A quarter of a century stands between two conversations I had about the possibilities of a robot confidant, the first in 1983, the second in 2008. For me, the differences between them mark the movement from the romantic reaction to the pragmatism of the robotic moment. Both conversations were with teenage boys from the same Boston neighborhood; both were Red Sox fans with close relationships with their fathers. In 1983, thirteen-year-old Bruce talked about robots and argued for the unique “emotionality” of people. Bruce rested his case on the idea that computers and robots are “perfect,” while people are “imperfect,” flawed and frail. Robots, he said, “do everything right”; people “do the best they know how.” But for Bruce it was human imperfection that makes for the ties that bind. Specifically, his own limitations made him feel close to his father (“I have a lot in common with my father.... We both have chaos”). Perfect robots could never understand this very important relationship. If you ever have a problem, you go to a person.
Twenty-five years later, a conversation on the same theme goes in a very different direction. Howard, fifteen, compares his father to the idea of a robot confidant, and his father does not fare well in the comparison. Howard thinks the robot would be better able to grasp the intricacies of high school life: “Its database would be larger than Dad’s. Dad has knowledge of basic things, but not enough of high school.” In contrast to Bruce’s sense that robots are not qualified to have an opinion about the goings-on in families, Howard hopes that robots might be specially trained to take care of “the elderly and children”—something he doesn’t see the people around him as much interested in.
Howard has no illusions about the uniqueness of people. In his view, “they don’t have a monopoly” on the ability to understand or care for each other. Each human being is limited by his or her own life experience, says Howard, but “computers and robots can be programmed with an infinite amount of information.” Howard tells a story to illustrate how a robot could provide him with better advice than his father. Earlier that year, Howard had a crush on a girl at school who already had a boyfriend. He talked to his father about asking her out. His father, operating on an experience he had in high school and what Howard considers an outdated ideal of “macho,” suggested that he ask the girl out even though she was dating someone else. Howard ignored his father’s advice, fearing it would lead to disaster. He was certain that in this case, a robot would have been more astute. The robot “could be uploaded with many experiences” that would have led to the right answer, while his father was working with a limited data set. “Robots can be made to understand things like jealousy from observing how people behave.... A robot can be fully understanding and open-minded.” Howard thinks that as a confidant, the robot comes out way ahead. “People,” he says, are “risky.” Robots are “safe.”
