Most social Web sites leave moderation to the public. Facebook, Twitter, and the like incorporate a button that allows users to “flag this as inappropriate” when they see something they disapprove of. In the age of crowdsourced knowledge like Wikipedia’s, such user-driven moderation sounds like common sense, and perhaps it is.
“But what happens,” Dinakar explains, “is that all flagging goes into a stream where a moderation team has to look at it. Nobody gets banned automatically, so the problem becomes how do you deal with eight hundred million users throwing up content and flagging each other?” (Indeed, Facebook has well over one billion users whose actions it must manage.) “The truth is that the moderation teams are so shockingly small compared with the amount of content they must moderate that there’s simply no way it can be workable. What I realized was that technology must help the moderators. I found that, strangely, nobody was working on this.”
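The arithmetic behind that impossibility is worth making concrete. Here is a minimal back-of-envelope sketch, in which every figure is an illustrative assumption rather than a real platform statistic:

```python
# Back-of-envelope arithmetic on the moderation backlog Dinakar describes.
# Every figure here is an illustrative assumption, not a real platform's data.
users = 1_000_000_000          # active accounts
daily_flag_rate = 0.001        # fraction of users filing one flag per day
reviews_per_moderator = 500    # flags one human moderator can assess per day

flags_per_day = users * daily_flag_rate
moderators_needed = flags_per_day / reviews_per_moderator

print(f"{flags_per_day:,.0f} flags per day")          # 1,000,000
print(f"{moderators_needed:,.0f} moderators needed")  # 2,000
```

Even at these conservative rates, a human-only pipeline demands thousands of full-time reviewers, which is why Dinakar concluded that technology must help the moderators.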
The most rudimentary algorithms, when searching for abusive behavior online, can spot a word like “faggot” or “slut” but remain incapable of contextualizing those words. For example, such an algorithm would flag this paragraph as bully material simply because those words appear in it. Our brains and our meaning, however, do not work in an “on” and “off” way. The attainment of meaning requires a subtle understanding of context, which is something computers have trouble with. What Dinakar wanted to deliver was a way to identify abusive themes. “The brain,” he told me, “is multinomial.” We think, in other words, by combining several terms in relation to one another, not merely by identifying particular words. “If I tell a guy he’d look good in lipstick,” says Dinakar, “a computer would not pick that up as a potential form of abuse. But a human knows that this could be a kind of bullying.” Now Dinakar just had to teach a computer to do the same.
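To see how blunt that keyword matching is, here is a minimal sketch of such a filter; the word list and test sentences are illustrative, not drawn from any real moderation system:

```python
# A rudimentary keyword filter of the kind described above: it matches
# words but understands nothing about the context they appear in.
ABUSIVE_WORDS = {"faggot", "slut"}

def naive_flag(text: str) -> bool:
    """Flag text if any blacklisted word appears, ignoring all context."""
    tokens = (word.strip('.,!?"') for word in text.lower().split())
    return any(token in ABUSIVE_WORDS for token in tokens)

# This very discussion gets flagged, because the filter cannot tell
# mentioning a slur apart from hurling one:
print(naive_flag('an algorithm can spot a word like "slut"'))  # True
# And the lipstick taunt sails straight through:
print(naive_flag("you would look so good in lipstick"))        # False
```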
The solution came in the form of latent Dirichlet allocation (LDA), a complex language-processing model introduced in 2003 that can discover topics from within the mess of infinite arrangements of words the human brain spews forth. LDA is multinomial, like our brains, and works with what Dinakar calls “that bag-of-associations thing.” Dinakar began with a simple assumption about the bag of associations he was looking for: “If we try to detect power differentials between people, we can begin to weed out cases of bullying.”
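A minimal sketch of topic discovery with LDA, here using scikit-learn on a toy corpus; the messages, topic count, and library choice are my illustrative assumptions, not Dinakar’s actual pipeline:

```python
# Latent Dirichlet allocation: each document is modeled as a mixture of
# topics, and each topic as a multinomial distribution over words --
# the "bag-of-associations" view described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

messages = [
    "you would look so good in lipstick",
    "nobody likes you just leave already",
    "great game last night see you at practice",
    "anyone want to study for the math test together",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(messages)  # bag-of-words counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the highest-weight words in each discovered topic.
vocab = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [vocab[j] for j in topic.argsort()[::-1][:4]]
    print(f"topic {i}: {', '.join(top_words)}")
```

On a corpus this small the topics are meaningless; the point is only the shape of the method: co-occurring terms, not individual words, define what a message is “about,” which is what lets a signal like the lipstick taunt surface at all.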
The work was barely under way when Dinakar received a letter from the Executive Office of the President of the United States. Would he like to come to a summit in D.C.? Yes, he said.
There he met Aneesh Chopra—the inaugural chief technology officer of the United States—who was chairing the panel on cyberbullying that Dinakar had been invited to join. Three years later, Dinakar has the White House’s backing on a new project he’s calling the National Helpline, a combined governmental and NGO effort that means to, for the first time, begin dealing with the billions of desperate messages in bottles that teens are throwing online. The National Helpline incorporates artificial intelligence to analyze problems that are texted in, then produces resources and advice specific to each problem. It is one of the most humane nonhuman systems yet constructed. The effort is fueled in part by Dinakar’s frustration with the limits of traditional psychiatry, which is “mostly based on single-subject studies and is often very retrospective. They come up with all these umbrella terms that are very loosely defined. And there’s no data anywhere. There is no data anywhere. I think it’s a very peculiar field.”
By contrast, Dinakar’s National Helpline—in addition to providing its automated and tailored advice—will amass an enormous amount of data, which will be stored and analyzed in a kind of e-health bank. “We’ll be analyzing every instance with such granularity,” says Dinakar. “And hopefully this will help psychiatry to become a much more hard science. Think about it. This is such an unexplored area. . . . We can mine photos of depressed people and get information on depression in a way that no one at any other point in history could have done.”
The reduction of our personal lives to mere data does run the risk of collapsing things into a Big Brother scenario, with algorithms like Dinakar’s scouring the Internet for “unfriendly” behavior and dishing out “correction” in one form or another.
One Carnegie Mellon researcher, Alessandro Acquisti, has shown that in some cases facial recognition software can analyze a photo and within thirty seconds deliver that person’s Social Security number. Combine this with algorithms like Dinakar’s and perhaps I could ascertain a person’s emotional issues after snapping his or her photo on the street. The privacy issues that plague our online confessions are something Dinakar is aware of, but he leaves policy to the policy makers. “I don’t have an answer about that,” he told me. “I guess it all depends on how we use this technology. But I don’t have an answer as to how that should be.”
I don’t think any of us do, really. In our rush toward confession and connection—all those happy status updates and geo-tagged photo uploads—rarely do we consider how thorough a “confession” we’re really making. Nor do we consider to what authority we’re doing the confessing. This is because the means of confession—the technology itself—is so very amiable. Dinakar is building a more welcoming online world, and it’s a good thing he is. But we need to remain critical as we give over so much of ourselves to algorithmic management.
• • • • •
In a sense, Dinakar and others at the Media Lab are still pursuing Alan Turing’s dream. “I want to compute for empathy,” Dinakar told me as our time together wound down. “I don’t want to compute for banning anyone. I just want . . . I want the world to be a less lonely place.” Of course, for such affective computing to work the way its designers intend, we must be prepared to give ourselves over to its care.
How far would such handling by algorithms go? How cared for shall we be? “I myself can sometimes think in a very reactive way,” says Dinakar. He imagines that, one day, technologies like the software he’s working on could help us manage all kinds of baser instincts. “I’d like it if my computer read my e-mail and told me about the consequences when I hit a Send button. I would like a computer that would tell me to take five deep breaths. A technology that could make me more self-aware.”
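As a thought experiment, the e-mail pause Dinakar imagines might look something like the following minimal sketch; the word list, the prompt, and the whole interaction are hypothetical, not a description of any system he has built:

```python
# A hypothetical send-time pause: scan an outgoing draft for reactive
# language and ask the writer to reconsider before it goes out.
REACTIVE_WORDS = {"hate", "stupid", "idiot", "furious", "never"}

def confirm_send(draft: str) -> bool:
    """Return True only if the draft is calm, or the writer insists."""
    tokens = {w.strip(".,!?") for w in draft.lower().split()}
    hits = sorted(tokens & REACTIVE_WORDS)
    if not hits:
        return True
    print(f"This draft sounds reactive ({', '.join(hits)}).")
    print("Take five deep breaths before deciding.")
    return input("Send anyway? [y/N] ").strip().lower() == "y"

# confirm_send("I never want to see your stupid report again")
# -> prompts for a pause before the message is sent
```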
A part of me has a knee-jerk reaction against the management Dinakar is describing. Do we want to abstract, monitor, and quantify ourselves so?
Then I think again about the case of Amanda Todd and whether such online watchdogs might have helped her. Only one in six suicides is accompanied by an actual suicide note, but it’s estimated that three-quarters of suicide attempts are preceded by some warning signs—signs we hapless humans fail to act on. Sometimes the signs are explicit: Tyler Clementi updated his Facebook page to read, “Jumping off the GW Bridge sorry.” Sometimes the messages are more obscure: Amanda Todd’s video merely suggests deep depression. How much can be done when those warning signs are issued in the empathic vacuum of the Internet? Are we not obliged to try to humanize that which processes so much of our humanity? Dinakar’s software could help those who reach out directly to it—but here’s the rub: When we go online, we commit ourselves to the care of online mechanisms. Digital Band-Aids for digital wounds.
We feed ourselves into machines, hoping some algorithm will digest the mess that is our experience into something legible, something more meaningful than the “bag of associations” we fear we are. Nor do the details of our lives need to be drawn from us by force. We do all the work ourselves.
We all of us love to broadcast, to call ourselves into existence against the obliterating silence that would otherwise dominate so much of our lives. Perhaps teenage girls offer the ultimate example, projecting their avatars insistently into social media landscapes with an army of selfies, those ubiquitous self-portraits taken from a phone held at arm’s length; the pose—often pouting—is a mainstay of Facebook (one that sociologist Ben Agger has called “the male gaze gone viral”). But as Nora Young points out in her book The Virtual Self, fervent self-documentation extends far beyond the problematic vanities of teenage girls. Some of us wear devices that track our movements and sleep patterns, then post results on Web sites devoted to constant comparison; others share their sexual encounters and exercise patterns; we “check in” to locations using GPS-enabled services like Foursquare.com; we publish our minute-by-minute musings, post images of our meals and cocktails before consuming them, as devotedly as others say grace. Today, when we attend to our technologies, we elect to divulge information, free of charge and all day long. We sing our songs to the descendants of Alan Turing’s machines, now designed to consume not merely neutral computations, but the triumphs, tragedies, and minutiae of lived experience—we deliver children opening their Christmas presents; middle-aged men ranting from their La-Z-Boys; lavishly choreographed wedding proposals.
There’s a basic pleasure in accounting for a life that, in reality, is always somewhat inchoate. Young discusses the “gold star” aspect of that moment when we broadcast ourselves: “Self-tracking is . . . revelatory, and consequently, for some of us at least, motivating.” In reality, life outside of orderly institutions like schools, jobs, and prisons is lacking in “gold star” moments; it passes by in a not-so-dignified way, and nobody tells us whether we’re getting it right or wrong. But publish your experience online and an institutional approval system rises to meet it—your photo is “liked,” your status is gilded with commentary. It’s even a way to gain some sense of immortality, since online publishing creates a lasting record, a living scrapbook. This furthers our enjoyable sense of an ordered life. We become consistent, we are approved, we are a known and sanctioned quantity.
If a good life, today, is a recorded life, then a great life is a famous one. Yalda T. Uhls, a researcher at UCLA’s Children’s Digital Media Center, delivered a conference presentation in the spring of 2013 called “Look at ME,” in which she analyzed the most popular TV shows for tween audiences from 1967 to 2007. The post-Internet television content (typified by American Idol and Hannah Montana) had swerved dramatically from family-oriented shows like Happy Days in previous decades. “Community feeling” had been a dominant theme in content from 1967 to 1997; then, in the final decade leading up to 2007, fame became an overwhelming focus (it was one of the least important values in tween television in earlier years). Uhls points out that the most significant environmental change in that final decade was the advent of the Internet and, more to the point, platforms such as YouTube and Facebook, which “encourage broadcasting yourself and sharing aspects of your life to people beyond your face-to-face community. . . . In other words, becoming famous.”
One recent survey of three thousand British parents confirmed this position when it found that the top three job aspirations of children today are sportsman, pop star, and actor. Twenty-five years ago, the top three aspirations were teacher, banker, and doctor.
If the glory of fame has indeed trumped humbler ambitions, then the ethos of YouTube is an ideal medium for the message. Its tantalizing tagline: “Broadcast Yourself.”
• • • • •
We feel a strange duality when watching a YouTube video like Amanda Todd’s. The video is at once deeply private and unabashedly public. This duality seems familiar, though: The classic handwritten diary, secured perhaps with a feeble lock and key, shoved to the bottom of the underwear drawer, suggests an abhorrence of the casual, uninvited reader; but isn’t there also a secret hope that those confessions will be read by an idealized interloper? We desire both protection and revelation for our soul’s utterance. W. H. Auden wrote, “The image of myself which I try to create in my own mind in order that I may love myself is very different from the image which I try to create in the minds of others in order that they may love me.” But broadcast videos like Amanda Todd’s attempt to collapse those two categories. Bending both inward and outward, they confuse the stylized public persona and the raw private confession.
What, then, is the material difference between making our confessions online, to the bewildering crowds of comment makers, and making our confessions in the calm and private cloister of a paper diary? What absence have we lost?
When we make our confessions online, we abandon the powerful workshop of the lone mind, where we puzzle through the mysteries of our own existence without reference to the demands of an often ruthless public.
Our ideas wilt when exposed to scrutiny too early—and that includes our ideas about ourselves. But we almost never remember that. I know that in my own life, and in the lives of my friends, it often seems natural, now, to reach for a broadcasting tool when anything momentous wells up. The first time I climbed the height of the Eiffel Tower I was alone, and when at last I reached the summit and looked out during a sunset at that bronzed and ancient city, my first instinct was not to take in the glory of it all, but to turn to someone next to me and say, “Isn’t it awesome?” But I had come alone. So I texted my boyfriend, long distance, because the experience wouldn’t be real until I had shared it, confessed my “status.”
Young concludes The Virtual Self by asking us to recall that much of life is not “trackable,” that we must be open to “that which cannot be articulated in an objective manner or reduced to statistics.” It’s a caution worth heeding. The idea that technology must always be a way of opening up the world to us, of making our lives richer and never poorer, is a catastrophic one. But the most insidious aspect of this trap is the way online technologies encourage confession while simultaneously alienating the confessor. I wish, for example, I had just looked out at Paris and left it at that. When I gave in to “sharing” the experience, I fumbled and dropped the unaccountable joy that life was offering up. Looking back, it seems obvious that efficient communication is not the ultimate goal of human experience.