Our Final Invention: Artificial Intelligence and the End of the Human Era

Michael Vassar is a trim, compact man of about thirty. He holds degrees in biochemistry and business, and is fluent in assessments of human annihilation, so words like “Ebola” and “plutonium” come out of his mouth without hesitation or irony. One wall of his high-rise condo is a floor-to-ceiling window, and it frames a red suspension bridge that links San Francisco to Oakland, California. This isn’t the elegant Golden Gate—that’s across town. This one has been called its ugly stepsister. Vassar told me people bent on committing suicide have been known to drive over this bridge to get to the nice one.

Vassar has devoted his life to thwarting suicide on a larger scale. He’s the president of the Machine Intelligence Research Institute, a San Francisco–based think tank established to fight the extinction of the human race at the hands, or bytes, of artificial intelligence. On its Web site, MIRI posts thoughtful papers on dangerous aspects of AI, and once a year it organizes the influential Singularity Summit. At the two-day conference, programmers, neuroscientists, academics, entrepreneurs, ethicists, and inventors hash out advances and setbacks in the ongoing AI revolution. MIRI invites talks from believers and nonbelievers alike, people who don’t think the Singularity will ever happen, and people who think MIRI is an apocalyptic techno cult.

Vassar smiled at the cult idea. “People who come to work for MIRI are the opposite of joiners. Usually they realize AI’s dangers before they even know MIRI exists.”

I didn’t know MIRI existed until after I’d heard about the AI-Box Experiment. A friend had told me about it, but in the telling he got a lot wrong about the lone genius and his millionaire opponents. I tracked the story to a MIRI Web site, and discovered that the experiment’s creator, Eliezer Yudkowsky, had cofounded MIRI (then called the Singularity Institute for Artificial Intelligence) with entrepreneurs Brian and Sabine Atkins. Despite his reputed reticence, Yudkowsky and I exchanged e-mails and he gave me the straight dope about the experiment.

The bets placed between the AI played by Yudkowsky and the Gatekeeper assigned to rein him in were at most thousands of dollars, not millions. The game had been held just five times, and the AI in the box won three of these times. Meaning, the AI usually got out of the box, but it wasn’t a blowout.

Some parts of the AI-Box rumor had been true—Yudkowsky was reclusive, stingy with his time, and secretive about where he lived. I had invited myself to Michael Vassar’s home because I was pleased and amazed that a nonprofit had been founded to combat the dangers of AI, and young, intelligent people were devoting their lives to the problem. And I hoped my conversation with Vassar would smooth my final steps to Yudkowsky’s front door.

Before jumping feet first into AI danger advocacy, Vassar had earned an MBA and made money cofounding Sir Groovy, an online music-licensing firm. Sir Groovy pairs independent music labels with TV and film producers to provide fresh soundtracks from lesser known and hence cheaper artists. Vassar had been toying with the idea of applying himself to the dangers of nanotechnology until 2003. That year he met Eliezer Yudkowsky, after having read his work online for years. He learned about MIRI, and a threat more imminent and dangerous than nanotechnology: artificial intelligence.

“I became extremely concerned about global catastrophic risk from AGI after Eliezer convinced me that it was plausible that AGI could be developed in a short time frame and on a relatively small budget. I didn’t have any convincing reason to think that AGI could not happen, say, in the next twenty years.” That was sooner than predictions for nanotech. And AGI’s development would take a lot less overhead. So Vassar changed course.

When we met, I confessed I hadn’t thought much about the idea that small groups with small budgets could come up with AGI. From the polls I’d seen, only a minority of experts predicted such a team would be the likely parents.

So, could Al Qaeda create AGI? Could FARC? Or Aum Shinrikyo?

Vassar doesn’t think a terrorist cell will come up with AGI. There’s an IQ gap.

“The bad guys who actually want to destroy the world are reliably not very capable. You know, the sorts of people who do want to destroy the world lack the long-term planning abilities to execute anything.”

But what about Al Qaeda? Didn’t all the attacks up to and including 9/11 require high levels of imagination and planning?

“They do not compare to creating AGI. Writing code for an application that does any one thing better than a human, never mind the panoply of capabilities of AGI, would require orders of magnitude more talent and organization than demonstrated by Al Qaeda’s entire catalogue of violence. If AGI were that easy, someone smarter than Al Qaeda would have already done it.”

But what about governments like those of North Korea and Iran?

“As a practical matter the quality of science that bad regimes produce is shit. The Nazis are the only exception and, well, if the Nazis happen again we have very big problems with or without AI.”

I disagreed, though not about the Nazis. Iran and North Korea have found high-tech ways to blackmail the rest of the world with the development of nuclear weapons and intercontinental missiles. So I wouldn’t cross them off the short list of potential AGI makers with a track record of blowing raspberries in the face of international censure. Plus, if AGI can be created by small groups, any rogue state could sponsor one.

When Vassar talked about small groups, he included companies working under the radar. I’d heard about so-called stealth companies that are privately held, hire secretly, never issue press releases or otherwise reveal what they’re up to. In AI, the only reason for a company to be stealthy is if they’ve had some powerful insight, and they don’t want to reward competitors with information about what their breakthrough is. By definition, stealth companies are hard to discover, though rumors abound. PayPal founder Peter Thiel funds three stealth companies devoted to AI.

Companies in “stealth mode,” however, are different and more common. These companies seek funding and even publicity, but don’t reveal their plans. Peter Voss, an AI innovator known for developing voice-recognition technology, pursues AGI with his company, Adaptive AI, Inc. He has gone on record saying AGI can be achieved within ten years. But he won’t say how.

*   *   *

Stealth companies come with another complication. A small, motivated company could exist within a larger company with a big public presence. What about Google? Why wouldn’t the cash-rich megacorp take on AI’s Holy Grail?

When I questioned him at an AGI conference, Google’s Director of Research Peter Norvig, coauthor of the classic AI textbook Artificial Intelligence: A Modern Approach, said Google wasn’t looking into AGI. He compared the quest to NASA’s plan for manned interplanetary travel. It doesn’t have one. But it will continue to develop the component sciences of traveling in space—rocketry, robotics, astronomy, et cetera—and one day all the pieces will come together, and a shot at Mars will look feasible.

Likewise, narrow AI projects do lots of intelligent jobs like search, voice recognition, natural language processing, visual perception, data mining, and much more. Separately they are well-funded, powerful tools, dramatically improving each year. Together they advance the computer sciences that will benefit AGI systems.

However, Norvig told me, no AGI program for Google exists. But compare that statement to what his boss, Google cofounder Larry Page, said at a London conference called Zeitgeist ’06:

People always make the assumption that we’re done with search. That’s very far from the case. We’re probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything … some people could call that artificial intelligence.… The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly.… You could ask “what should I ask Larry?” and it would tell you.

That sounds like AGI to me.

Through several well-funded projects, IBM pursues AGI, and DARPA seems to be backing every AGI project I look into. So, again, why not Google? When I asked Jason Freidenfelds, from Google PR, he wrote:

… it’s much too early for us to speculate about topics this far down the road. We’re generally more focused on practical machine learning technologies like machine vision, speech recognition, and machine translation, which essentially is about building statistical models to match patterns—nothing close to the “thinking machine” vision of AGI.

But I think Page’s quotation sheds more light on Google’s attitudes than Freidenfelds’s. And it helps explain Google’s evolution from the visionary, insurrectionist company of the 1990s, with the much touted slogan DON’T BE EVIL, to today’s opaque, Orwellian, personal-data-aggregating behemoth.

The company’s privacy policy shares your personal information among Google services, including Gmail, Google+, YouTube, and others. Who you know, where you go, what you buy, who you meet, how you browse—Google collates it all. Its purported goal: to improve your user experience by making search virtually omniscient about the subject of you. Its parallel goal: to shape what ads you see, and even your news, videos, and music consumption, and automatically target you with marketing campaigns. Even the Google camera cars that take “Street View” photographs for Google Maps are part of the plan—for three years, Google used its photo-taking fleet to grab data from private Wi-Fi networks in the United States and elsewhere. Passwords, Internet usage history, personal e-mails—nothing was off limits.

It’s clear they’ve put us once loyal customers in our place, and it’s not first place. So it seemed inconceivable that Google did not have AGI in mind.

Then, about a month after my last correspondence with Freidenfelds, The New York Times broke a story about Google X.

Google X was a stealth company. The secret Silicon Valley laboratory was initially headed by AI expert and developer of Google’s self-driving car, Sebastian Thrun. It is focused on one hundred “moon-shot” projects such as the Space Elevator, which is essentially a scaffolding that would reach into space and facilitate the exploration of our solar system. Also on board at the stealth facility is Andrew Ng, former director of Stanford University’s Artificial Intelligence Lab, and a world-class roboticist.

Finally, late in 2012, Google hired esteemed inventor and author Ray Kurzweil to be its director of engineering. As we’ll discuss in chapter 9, Kurzweil has a long track record of achievements in AI, and has promoted brain research as the most direct route to achieving AGI.

It doesn’t take Google glasses to see that if Google employs at least two of the world’s preeminent AI scientists, and Ray Kurzweil, AGI likely ranks high among its moon-shot pursuits.

Seeking a competitive advantage in the marketplace, Google X and other stealth companies may come up with AGI away from public view.

*   *   *

Stealth companies may represent a surprise track to AGI. But according to Vassar the quickest path to AGI will be very public, and cost serious money. That route calls for reverse engineering the human brain, using a combination of programming skill and brute force technology. “Brute force” is the term for overpowering a problem with sheer hardware muscle—racks of fast processors, petabytes of memory—along with clever programming.

“The extreme version of brute force is coming out of biology,” Vassar told me. “If people continue to use machines to analyze biological systems, work out metabolisms, work out these complex relationships inside biology, eventually they’ll accumulate a lot of information on how neurons process information. And once they have enough information about how neurons process information, that information can be analyzed for AGI purposes.”

It works like this: thinking runs on biochemical processes performed by parts of the brain called neurons, synapses, and dendrites. With a variety of techniques, including PET and fMRI brain scanning, and applying neural probes inside and outside the skull, researchers determine what individual neurons and clusters of neurons are doing in a computational sense. Then they express each of these processes with a computer program or algorithm.
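To make that concrete, here is a minimal sketch, not from the book, of what “expressing a neuron’s processing as an algorithm” can look like. It uses the leaky integrate-and-fire model, one of the simplest abstractions in computational neuroscience; the function name and parameter values are illustrative assumptions, and real models of brain circuits are vastly more detailed.

```python
# A minimal sketch (illustrative only): the leaky integrate-and-fire neuron.
# Membrane voltage decays toward rest, is driven up by input current, and
# emits a "spike" whenever it crosses a threshold -- a crude algorithmic
# stand-in for what one neuron does computationally.

def leaky_integrate_and_fire(input_current, dt=0.001, tau=0.02,
                             v_rest=-0.065, v_reset=-0.065,
                             v_threshold=-0.050, resistance=1e7):
    """Return spike times (seconds) for a list of input currents (amps),
    one sample per time step dt."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leak toward resting voltage, plus drive from the input current.
        dv = (-(v - v_rest) + resistance * i_in) * (dt / tau)
        v += dv
        if v >= v_threshold:           # threshold crossed: the neuron "fires"
            spike_times.append(step * dt)
            v = v_reset                # reset after the spike
    return spike_times

# Example: a constant 2 nA input for 100 ms produces a regular spike train.
print(leaky_integrate_and_fire([2e-9] * 100))
```

A whole-brain emulation of the kind the paragraph describes would need billions of far richer models like this, wired together the way real neurons are.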

That’s the thrust of the new field of computational neuroscience. One of the field’s leaders, Dr. Richard Granger, director of Dartmouth College’s Brain Engineering Laboratory, has created algorithms that mimic circuits in the human brain. He’s even patented a hugely powerful computer processor based on how these brain circuits work. When it gets to market, we’ll see a giant leap forward in how computer systems visually identify objects, because they’ll do it the way our brain does.

There are still many brain circuits remaining to probe and map. But once you’ve created algorithms for all the brain’s processes, congratulations, you have a brain. Or do you? Maybe not. Maybe what you have is a machine that emulates a brain. This is a big question in AI. For instance, does a chess-playing program think?

When IBM set out to create Deep Blue, and defeat the world’s best chess players, they didn’t program it to play chess like World Champion Garry Kasparov, only better. They didn’t know how. Kasparov developed his virtuosity by playing a vast number of games, and studying more games. He developed a huge repository of openings, attacks, feints, blockades, decoys, gambits, endgames—strategies and tactics. He recognizes board patterns, remembers, and thinks. Kasparov normally thinks three to five moves ahead, but can go as far as fourteen. No current computer can do all that.

So instead, IBM programmed a computer to evaluate 200 million positions per second.

First Deep Blue would make a hypothetical move, and evaluate all of Kasparov’s possible responses. It would make its hypothetical response to each of those responses, and again evaluate all of Kasparov’s responses. This two-levels-deep modeling is called a two-ply search—Deep Blue would sometimes search up to six plies deep. That’s each side “moving” three times along every hypothetical line of play.
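The look-ahead described here is, in essence, a minimax game-tree search with depth counted in plies. The sketch below is mine, not IBM’s: Deep Blue’s actual engine used alpha-beta pruning, a hand-tuned evaluation function, and custom chess chips. The helpers evaluate, legal_moves, and apply_move are assumed placeholders for a real chess implementation; the toy demo just shows the alternation of “my best move” and “the opponent’s best reply.”

```python
# A bare-bones minimax sketch (illustrative, not Deep Blue's engine).
# Each level of recursion is one ply: one side's hypothetical move.

def minimax(position, plies, maximizing, evaluate, legal_moves, apply_move):
    """Best score reachable from `position`, searching a fixed number of plies."""
    moves = list(legal_moves(position))
    if plies == 0 or not moves:
        return evaluate(position)          # leaf: score the position statically
    if maximizing:                         # our hypothetical move
        return max(minimax(apply_move(position, m), plies - 1, False,
                           evaluate, legal_moves, apply_move) for m in moves)
    return min(minimax(apply_move(position, m), plies - 1, True,
                       evaluate, legal_moves, apply_move) for m in moves)

# Toy demonstration (not chess): positions are integers, each "move" adds
# 1 or 2, and the static score is the position itself. Six plies means three
# of "our" moves and three opponent replies along every line examined.
if __name__ == "__main__":
    score = minimax(
        0, 6, True,
        evaluate=lambda p: p,
        legal_moves=lambda p: [1, 2],
        apply_move=lambda p, m: p + m,
    )
    print(score)   # the best outcome we can force against a minimizing opponent
```

The brute-force part is simply that every added ply multiplies the number of positions to evaluate, which is why Deep Blue needed hardware capable of scoring 200 million positions per second.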
