Our Final Invention: Artificial Intelligence and the End of the Human Era

 

The author and publisher have provided this e-book to you for your personal use only. You may not make this e-book publicly available in any way.
Copyright infringement is against the law. If you believe the copy of this e-book you are reading infringes on the author’s copyright, please notify the publisher at:
us.macmillanusa.com/piracy.

 

To my wife, Alison Barrat, whose love and support sustain me

 

Contents

 

Title Page

Copyright Notice

Dedication

Acknowledgments

Introduction

1. The Busy Child

2. The Two-Minute Problem

3. Looking into the Future

4. The Hard Way

5. Programs that Write Programs

6. Four Basic Drives

7. The Intelligence Explosion

8. The Point of No Return

9. The Law of Accelerating Returns

10. The Singularitarian

11. A Hard Takeoff

12. The Last Complication

13. Unknowable by Nature

14. The End of the Human Era

15. The Cyber Ecosystem

16. AGI 2.0

Notes

Index

About the Author

Copyright

 

Acknowledgments

While researching and writing this book I was humbled by the willingness of scientists and thinkers to make room in their busy lives for prolonged, inspired, and sometimes contentious conversations with me. Many then joined the cadre of readers who helped me stay accurate and on target. In particular I’m deeply grateful to Michael Anissimov, David L. Banks, Bradford Cottel, Ben Goertzel, Richard Granger, Bill Hibbard, Golde Holtzman, and Jay Rixse.

 

Introduction

A few years ago I was surprised to discover I had something in common with a large number of strangers. They were men and women I had never met—scientists and college professors, Silicon Valley entrepreneurs, engineers, programmers, bloggers, and more. They were scattered around North America, Europe, and India—I would never have known about any of them if the Internet hadn’t existed. What my network of strangers and I had in common was a rational skepticism about the safe development of advanced artificial intelligence. Individually and in groups of two or three, we studied the literature and built our arguments. Eventually I reached out and connected to a far more advanced and sophisticated web of thinkers, and even small organizations, focused on the issue than I had imagined existed. Misgivings about AI weren’t the only thing we shared; we also believed that the time to take action and avoid disaster was running out.

*   *   *

For more than twenty years I’ve been a documentary filmmaker. In 2000, I interviewed science-fiction great Arthur C. Clarke, inventor Ray Kurzweil, and robot pioneer Rodney Brooks. Kurzweil and Brooks painted a rosy, even rapturous picture of our future coexistence with intelligent machines. But Clarke hinted that we would be overtaken. Until then, I had been drunk with AI’s potential. Now skepticism about the rosy future slunk into my mind and festered.

My profession rewards critical thinking—a documentary filmmaker has to be on the lookout for stories too good to be true. You could waste months or years making a documentary about a hoax, or take part in perpetrating one. Among other subjects, I’ve investigated the credibility of a gospel according to Judas Iscariot (real), of a tomb belonging to Jesus of Nazareth (hoax), of Herod the Great’s tomb near Jerusalem (unquestionable), and of Cleopatra’s tomb within a temple of Osiris in Egypt (very doubtful). Once a broadcaster asked me to present UFO footage in a credible light. I discovered the footage was an already discredited catalogue of hoaxes—thrown pie plates, double exposures, and other optical effects and illusions. I proposed to make a film about the hoaxers, not the UFOs. I got fired.

Being suspicious of AI was painful for two reasons. First, learning about its promise had planted a seed in my mind that I wanted to cultivate, not question. And second, I did not doubt AI’s existence or power. What I was skeptical about was advanced AI’s safety, and the recklessness with which modern civilization develops dangerous technologies. I was convinced that the knowledgeable experts who did not question AI’s safety at all were suffering from delusions. I continued talking to people who knew about AI, and what they said was more alarming than what I’d already surmised. I resolved to write a book reporting their feelings and concerns, and to reach as many people as I could with these ideas.

*   *   *

In writing this book I spoke with scientists who create artificial intelligence for robotics, Internet search, data mining, voice and face recognition, and other applications. I spoke with scientists trying to create human-level artificial intelligence, which will have countless applications, and will fundamentally alter our existence (if it doesn’t end it first). I spoke with chief technology officers of AI companies and the technical advisors for classified Department of Defense initiatives. Every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes.

This is a surprising but not particularly controversial assertion. Computers already undergird our financial system, and our civil infrastructure of energy, water, and transportation. Computers are at home in our hospitals, cars, and appliances. Many of these computers, such as those running buy-sell algorithms on Wall Street, work autonomously with no human guidance. The price of all the labor-saving conveniences and diversions computers provide is dependency. We get more dependent every day. So far it’s been painless.

But artificial intelligence brings computers to life and turns them into something else. If it’s inevitable that machines will make our decisions, then when will the machines get this power, and will they get it with our compliance? How will they gain control, and how quickly? These are questions I’ve addressed in this book.

Some scientists argue that the takeover will be friendly and collaborative—a handover rather than a takeover. It will happen incrementally, so only troublemakers will balk, while the rest of us won’t question the improvements to life that will come from having something immeasurably more intelligent decide what’s best for us. Also, the superintelligent AI or AIs that ultimately gain control might be one or more augmented humans, or a human’s downloaded, supercharged brain, and not cold, inhuman robots. So their authority will be easier to swallow. The handover to machines described by some scientists is virtually indistinguishable from the one you and I are taking part in right now—gradual, painless, fun.

*   *   *

The smooth transition to computer hegemony would proceed unremarkably and perhaps safely if it were not for one thing: intelligence. Intelligence isn’t unpredictable just some of the time, or in special cases. For reasons we’ll explore, computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time. We won’t know at a deep level what self-aware systems will do or how they will do it. That inscrutability will combine with the kinds of accidents that arise from complexity, and from novel events that are unique to intelligence, such as one we’ll discuss called an “intelligence explosion.”

*   *   *

And how will the machines take over? Is the best, most realistic scenario threatening to us or not?

When posed this question, some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils.

I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem. This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious, nature achieved it in full just once—intelligence.

 

Chapter One

The Busy Child

artificial intelligence (abbreviation: AI) noun

the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.


The New Oxford American Dictionary, Third Edition

On a supercomputer operating at a speed of 36.8 petaflops, or about twice the speed of a human brain, an AI is improving its intelligence. It is rewriting its own program, specifically the part of its operating instructions that increases its aptitude in learning, problem solving, and decision making. At the same time, it debugs its code, finding and fixing errors, and measures its IQ against a catalogue of IQ tests. Each rewrite takes just minutes. Its intelligence grows exponentially on a steep upward curve. That’s because with each iteration it’s improving its intelligence by 3 percent. Each iteration’s improvement contains the improvements that came before.
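
To see how fast 3 percent compounds, here is a back-of-the-envelope sketch (the twelve-minute rewrite time is a hypothetical stand-in; only the 3 percent gain and the thousandfold figure that appears two paragraphs below come from the scenario):

    # Compounding self-improvement: each rewrite multiplies capability
    # by 1.03, so capability after n rewrites is 1.03 ** n.
    import math

    GAIN_PER_REWRITE = 1.03   # 3 percent improvement per iteration
    TARGET_MULTIPLE = 1000    # "one thousand times more intelligent"

    # Smallest n with 1.03 ** n >= 1000
    n = math.ceil(math.log(TARGET_MULTIPLE) / math.log(GAIN_PER_REWRITE))
    print(n)                  # 234 rewrites

    # At a hypothetical twelve minutes per rewrite, the thousandfold
    # gain arrives in roughly two days:
    print(n * 12 / 60 / 24)   # ~1.95 days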

During its development, the Busy Child, as the scientists have named the AI, had been connected to the Internet, and accumulated exabytes of data (one exabyte is one billion billion characters) representing mankind’s knowledge in world affairs, mathematics, the arts, and sciences. Then, anticipating the intelligence explosion now underway, the AI makers disconnected the supercomputer from the Internet and other networks. It has no cable or wireless connection to any other computer or the outside world.

Soon, to the scientists’ delight, the terminal displaying the AI’s progress shows the artificial intelligence has surpassed the intelligence level of a human, known as AGI, or artificial general intelligence. Before long, it becomes smarter by a factor of ten, then a hundred. In just two days, it is one thousand times more intelligent than any human, and still improving.

The scientists have passed a historic milestone! For the first time humankind is in the presence of an intelligence greater than its own. Artificial superintelligence, or ASI.

Now what happens?

AI theorists propose it is possible to determine what an AI’s fundamental drives will be. That’s because once it is self-aware, it will go to great lengths to fulfill whatever goals it’s programmed to fulfill, and to avoid failure. Our ASI will want access to energy in whatever form is most useful to it, whether actual kilowatts of energy or cash or something else it can exchange for resources. It will want to improve itself because that will increase the likelihood that it will fulfill its goals. Most of all, it will not want to be turned off or destroyed, which would make goal fulfillment impossible. Therefore, AI theorists anticipate our ASI will seek to expand out of the secure facility that contains it to have greater access to resources with which to protect and improve itself.
