Regenesis

George M. Church

They named the lab Livly. By the summer of 2010 Livly had received a round of seed funding from ImmunePath, a Silicon Valley start-up aimed at curing cancer with stem cell immunotherapy. Gentry, meanwhile, inspired by the DIYbio movement, investigated the idea of creating a community lab in the San Francisco Bay area. The idea was to open the lab to interested citizen-scientists who, for a monthly membership fee of about $200, could perform their own molecular biology experiments, everything from DNA sequencing to the rite-of-passage project of inserting green fluorescent protein genes into bacteria. If successful, it would be a case of wresting science from its conventional setting in industry and academia and returning it to something like the days of the gentleman-scientist-scholar who was not formally connected with any established organization: people such as Newton, Darwin, Alfred Russel Wallace, Benjamin Franklin, and others.

Gentry and two partners came up with the name BioCurious (biocurious.org) and started a fund drive on the website Kickstarter.com (“A New Way to Fund and Follow Creativity”), where inventors, entrepreneurs, and dreamers of every stripe could post their wild schemes and pet projects and ask for money to fund them. BioCurious announced an initial goal of $30,000. The partners were soon oversubscribed, almost overwhelmed, with 239 backers pledging $35,319. In the fall of 2010 Gentry and her partners were looking to lease 3,000 square feet of industrial space in Mountain View, but in the end settled for 2,400 square feet in Sunnyvale, calling it “Your Bay Area hackerspace for biotech.” In December 2010, meanwhile, another DIY biohacker lab, Genspace, opened in Brooklyn, New York. The founders referred to it as “the world's first permanent, biosafety level 1 community laboratory” (genspace.org). Many others soon followed, in the United States, Canada, Europe, and Asia.

With free synthetic biology kits, DIYbio, Livly lab, BioCurious, Genspace, and others, the synthetic biology genie was well and truly out of the bottle.

That did not mean we were headed for some sort of synthetic biology holocaust, Armageddon, or meltdown, however. For one thing, despite efforts by iGEMites to make biological engineering “easy,” it is still reasonably difficult to design and implement biological circuitry that actually works (much less works reliably and exactly as planned). Biological systems are complex, “noisy,” and susceptible to mutations, evolution, and emergent behaviors. For these reasons their operations are full of surprises. A random change to any given genome is more likely to weaken the organism than to strengthen it, and the same is often true of changes that have been carefully designed and engineered in advance.

For another, in parallel with the development of the biohacker, DIYbio, and garage biology movements (and in fact slightly predating them), many of those doing hands-on synthetic biology work had written white papers, position papers, and opinion pieces, and had participated in conferences, study groups, and other organized attempts aimed at dealing with the risks and dangers posed by engineering life. In 2004, for example, I wrote “A Synthetic Biohazard Nonproliferation Proposal.” Here I advanced two main ideas for enhancing the safety and security of synthetic biology research. The first was that the sale and maintenance of oligonucleotide (DNA) synthesis machines and supplies would be restricted to government-licensed nonprofit, government, and for-profit institutions; in addition, the use of reagents and oligos would be tracked, much after the manner of nuclear materials. Second, orders placed with commercial oligo suppliers would be screened for similarity to known infectious agents. Though unpopular at the time, this second measure is now standard practice among many suppliers.
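The logic of that second proposal can be sketched in a few lines of code. The toy Python below is my own illustration, not any supplier's actual pipeline: it flags an order if the ordered sequence shares a long exact subsequence (a k-mer) with a reference set of sequences of concern. Real screening systems use curated pathogen databases and alignment tools such as BLAST, and the sequences here are made-up stand-ins, not real pathogen DNA.

```python
# Illustrative sketch of order screening (assumption: a simple exact
# k-mer overlap test; production systems use alignment against curated
# databases). All sequences below are invented toy data.

K = 20  # window length; real screens compare much longer windows

def kmers(seq, k=K):
    """Return the set of all length-k substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq, concern_db):
    """Return names of database entries sharing any k-mer with the order."""
    order_kmers = kmers(order_seq)
    return [name for name, ref in concern_db.items()
            if order_kmers & kmers(ref)]

# Hypothetical "sequence of concern" database (toy data only).
db = {"toy_agent_A": "ATGGCGTACGTTAGCCGATACGGTTACCGTAGGCTAAC"}

# An order embedding a 22-base fragment of the toy entry is flagged;
# an unrelated sequence passes.
hits = screen_order("CCCC" + "ATGGCGTACGTTAGCCGATACG" + "TTTT", db)
clean = screen_order("A" * 40, db)
```

The same intersection test, run order-by-order at every commercial supplier, is what turns a list of dangerous genomes into a practical checkpoint; the hard part in practice is curating the database and handling near-matches, not the matching itself.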

Admittedly, both measures had limitations. Restricting the sale of DNA synthesis machines to licensed or “legitimate” users would be easier to apply to future sales and do little to restrict the use of the machines that were already out there. Also, it's an open question how stringently enforced such licensing could be. Further, the proposal to screen orders placed with DNA synthesis companies would not affect firms that, for proprietary, competitive, or other reasons, did their DNA synthesis with their own equipment, in-house, unless those instruments fell under the same regulations.

Still, further measures to keep people safe and to keep engineered organisms under control have been proposed by myself and others. One is that synthetic biology research be done in physical isolation, employing the same safeguards that are routinely employed in biosafety labs: safety cabinets, protective clothing and gloves, plus face protection. (But as we have seen, accidents happen even in biosafety labs.) Another is to build one or more self-destruct mechanisms into engineered organisms so that they would die if they escaped from a lab. They could be made dependent on nutrients found only in a laboratory setting, or they could be programmed with suicide genes that would kill the organism after a certain predetermined number of replications, or in response to high-density levels of the organism, or even in response to an externally generated chemical signal. Farther into the future, you could also base the organism on a genetic code different from the one used by natural organisms. Such a code change, I have argued, or the introduction of “mirror life” (see Chapter 1) would mean that such organisms would not be recognizable to natural organisms, and therefore would be unable to exchange genetic material with them.
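To see why a replication-counting suicide gene bounds the damage from an escape, consider a toy population model. The Python below is my own illustration, not from the proposal itself: each cell lineage carries a division counter, and once a lineage exceeds an assumed limit it leaves no descendants, so even an escaped cell's growth is capped and then collapses.

```python
# Toy model (illustrative assumption, not an actual genetic design):
# each engineered cell tracks how many times its lineage has divided,
# and a "suicide gene" fires once the count exceeds MAX_DIVISIONS.

MAX_DIVISIONS = 3  # assumed replication limit for illustration

def grow(population, generations):
    """Simulate doubling; lineages past the division limit die out.

    `population` is a list of per-cell division counts.
    """
    for _ in range(generations):
        next_gen = []
        for divisions in population:
            if divisions < MAX_DIVISIONS:
                # The cell divides: two daughters, each one division older.
                next_gen += [divisions + 1, divisions + 1]
            # else: the suicide gene fires; no descendants.
        population = next_gen
    return population

escaped = [0]                  # one escaped cell, counter at zero
print(len(grow(escaped, 2)))   # 4 cells after two doublings
print(len(grow(escaped, 10)))  # 0: the lineage has died out
```

With the limit at three divisions the population peaks at eight cells and then vanishes, whereas an unlimited doubling would reach over a thousand cells in ten generations; the engineering challenge, of course, is making such a counter mutation-proof, which the model above does not address.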

Further safety measures have been proposed by others. In their 2006 piece, “The Promise and Perils of Synthetic Biology,” Jonathan Tucker and Raymond Zilinskas suggested that before a genetically engineered organism was released to the wild, it first be tested in a microcosm, a model ecosystem ranging in size from a few millimeters to fifteen cubic meters (about the size of a one-car garage), and then in a “mesocosm,” a model system larger than fifteen cubic meters. “Ideally,” they wrote, “a model ecosystem should be sufficiently realistic and reproducible to serve as a bridge between the laboratory and a field test in the open environment.”

Synthetic biology needs a suite of safety measures comparable to those that we now have for cars. Modern cars represent powerful technology, and so we require licensing, surveillance, safety design, and safety testing of both the cars and their drivers. Despite the complex technology of today's automobiles and traffic management, the average citizen is expected to be able to pass a written and practical licensing exam. In addition, we enact numerous surveillance procedures, including radar monitoring of speed, license plate photos, timing speed between tolls, annual inspections, visual checks for erratic behavior, weighing trucks, and checking registration papers. The cars are designed for safety, including anti-hydroplaning tires; antilock brake systems; seat belts; shoulder harnesses; front, side, and supplemental air bags; and so on. In addition, cars are tested using actual cars and synthetic humans—originally cadavers, and later increasingly realistic crash dummies. But even with all these systems, checks, devices, and procedures in place, there are still about 40,000 automotive deaths annually in the United States alone.

We could similarly require licensing for all aspects and users, even DIYbio; computer-aided surveillance of all commerce; designing cells that self-destruct outside of the lab; and rigorous testing of what would happen if a cell escaped from the lab, by bringing the outside ecosystem into the lab in a secure physical containment setting.

Regulations, however, can be circumvented by anyone who is sufficiently determined to evade them. In other words, security is far more difficult to achieve than safety. This point was made repeatedly by the authors of the 2007 report, Synthetic Genomics: Options for Governance. The document was the fruit of an exhaustive two-year study funded by the Alfred P. Sloan Foundation. The study involved eighteen core members (including Drew Endy, Tom Knight, Craig Venter, and myself) and three institutes: the J. Craig Venter Institute, MIT's Department of Biological Engineering, and the Center for Strategic and International Studies.

Our final report advanced many policy options along the lines of those already mentioned. We made no bones about the fact that their “security benefits would be modest because no such regime could have high confidence in preventing illegitimate synthesis.” DNA synthesizers, after all, were relatively small (desktop-size) machines, easy to acquire and hide from view. Even if the registration of synthesizers were legally required, the policy would be difficult to enforce because it would be virtually impossible to ensure that all existing machines had been identified and incorporated into the registry. Furthermore, DNA synthesis machines can be built from scratch, can be stolen, and can be misused at “legitimate” institutions by someone posing as benign and genuine while nevertheless engaging in illicit activity (the Bruce Ivins paradigm).

The group tackling options for governance considered the difficulty of synthesizing several pathogenic viruses, including the 1918 influenza virus, poliovirus, Marburg and Ebola viruses, and foot and mouth disease virus. (Foot and mouth disease affects only certain hoofed animals such as sheep and cattle, but it is highly contagious and could trigger the wholesale loss of herds that in turn would entail carcass removal and decontamination costs. An outbreak would destroy consumer confidence, cripple the economy, and provoke trade embargoes.) Of these viruses, we classified two—poliovirus and foot and mouth disease virus—as easy to synthesize by “someone with knowledge of and experience in virology and molecular biology and an equipped lab but not necessarily with advanced experience (‘difficulty' includes obtaining the nucleic acid and making the nucleic acid infectious).”

In the end, we found no magic bullets for absolutely preventing worst-case scenarios, no fail-safe fail-safes, but in my opinion the measures we proposed are worth implementing anyway, since their costs are low, the risks are high, and their effectiveness would be measurable and subject to improvement.

To be on the safe side, then, why not prohibit the entire enterprise, or at least the riskiest parts of it? Given the amount of information, machinery, and engineered organisms that already exist in the world, total prohibition would be unrealistic even if it were desirable, which it is not: synthetic genomics offers too many benefits in comparison to its risks. And there are powerful arguments against prohibiting even a subset of experiments or research directions that might be considered relatively dangerous. The first is that prohibitions mostly don't work; instead, they merely drive the prohibited activity underground and create black markets, or clandestine labs and lab work that are more difficult to monitor and control than open markets and open laboratories. The second is that they also produce a raft of adverse unintended consequences, many of them foreseeable.

The classic case, of course, is Prohibition, which took effect in 1920 under the Eighteenth Amendment, outlawing “the manufacture, sale, or transportation of intoxicating liquors” within the United States. The law stopped none of that activity and instead created a huge network of illegal alcohol production, distribution, and transportation, including massive smuggling across the Canadian border. At one point there were 20,000 speakeasies in Detroit alone (one for every thirty adults). Millions of formerly law-abiding citizens suddenly became habitual lawbreakers. Many drinkers were poisoned by poorly made bootlegged liquor. The net effect, in sum, was to increase crime, violence, and death.

The other major prohibition story of our time is the war on drugs, which has created a cottage industry of illegal drug production, transportation, distribution, and sale in the United States and abroad. It also fosters ingenious methods of drug smuggling, including the use of false-paneled pickup trucks, vans, and tractor trailers, and the building of air-conditioned tunnels under the US-Mexican border. But the ultimate high-tech drug-running innovation was the home-built submarine (called narco subs in the trade), used to move large amounts of cocaine underwater. In July 2010 Ecuadorean police discovered a so-called supersub in the jungle, a seventy-four-foot (23-meter), camouflage-painted submarine that was almost twice as long as a city bus, was equipped with diesel engines, battery-powered electric motors, and twin propellers, and was topped by a five-foot conning tower. With a crew of four, the sub had a range of 6,800 nautical miles, and in its cargo hold could carry nine tons of cocaine, worth a total of about $250 million. Jay Bergman, the top US Drug Enforcement Administration official in South America, said of the sub, which he praised as “a quantum leap in technology,” that “it poses some formidable challenges.”
