Authors: George M. Church
3. Personal versus generic:
Do-it-yourself versus outsourcing. Examples of the former include the use of home solar chemistry and solar power, 3D printers capable of producing complex objects on demand, DIY personal medicine, personal genomes, highly personalized solar foods, and personalized movies. At Harvard's Wyss Institute for Biologically Inspired Engineering, a regenerative medicine project is under way with the goal of reprogramming adult cells and then multiplexing the printing of scaffolding at very high resolution. This huge step toward individualized tissue engineering ultimately could seem no more intimidating than tattooing is today, and possibly less so if it's easily reversible. Many of us already get stem cell transplants, both blood and skin, as part of routine medical care. If we are up for a transplant anyway and are presented with the options of cells from an adequately matched donor (probably requiring immune suppression) or our own cells engineered to be more resistant to our genetic, microbial, or cancer-based diseases, many of us would opt for the new, improved cells, especially since our ability to do quality assurance has improved so much recently.
The goal of optogenetics, the process of reading from and writing to brain cells with light, is to move measurements of neural activity from electrical to optical methods, for the purpose of understanding and controlling how our brain operates at the cellular level. However, it's not obvious how much more scalable optical methods will be than stimulation by multi-electrodes, which currently number in the dozens per brain. We need a way of getting a recording device to each of our 60 billion neurons (40,000 per cubic millimeter). This potential logjam argues for an alternative to hardwiring: the use of a wireless network or DNA recording of neuronal activity, rather than electrical or optical cables, perhaps employing 10-micron chips disguised as white blood cells that naturally penetrate the blood-brain barrier. This is the logical extrapolation of the demand for growing the bandwidth of the interface between our electronic computers and our biological (brain) computer, and for catching pathological brain events (ministrokes, the onset of psychiatric disorders, etc.) as early as possible.
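As a rough sanity check on the scale implied by these figures (60 billion neurons at 40,000 per cubic millimeter, with hypothetical 10-micron recording chips), a back-of-the-envelope sketch using only the numbers quoted above; the chip-packing figure is purely illustrative:

```python
# Back-of-the-envelope scale check for whole-brain recording,
# using the figures quoted in the text.
NEURONS = 60e9          # neurons to record
DENSITY = 40_000        # neurons per cubic millimeter
CHIP_SIZE_UM = 10       # edge of a hypothetical recording chip, microns

# Implied brain volume covered by these neurons, in cubic centimeters.
volume_mm3 = NEURONS / DENSITY   # 1.5e6 mm^3
volume_cm3 = volume_mm3 / 1000   # 1,500 cm^3

# How many 10-micron chips could tile one cubic millimeter (1 mm = 1,000 um)?
chips_per_mm3 = (1000 // CHIP_SIZE_UM) ** 3   # 100^3 = 1,000,000

print(volume_cm3, chips_per_mm3)
```

The implied volume, about 1,500 cubic centimeters, is roughly the size of a human brain, a useful check that the two quoted figures cohere.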
4. Priceless versus worthless:
The cost of materials today ranges from $0.1 per kg for wood to $4 trillion per kg for certain pharmaceuticals (reimbursable by health insurance). With revolutions in smart materials and molecular engineering, all materials and objects could be reduced to the range of $0.2 per kg (electronics, clothes, foods, cosmetics, and so on), or people could spend more and more for less and less via clever branding, copyright and patent laws, elaborate licensing and regulatory schemes, and the like. Or is there a way of artfully combining and integrating all of the above?
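The spread between those two prices covers more than thirteen orders of magnitude, which a one-line calculation (using only the two figures quoted above) makes explicit:

```python
import math

CHEAPEST = 0.1      # $/kg, wood
PRICIEST = 4e12     # $/kg, certain pharmaceuticals

# How many powers of ten separate the cheapest and priciest materials?
orders_of_magnitude = math.log10(PRICIEST / CHEAPEST)
print(round(orders_of_magnitude, 1))   # 13.6
```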
5. Rich versus poor:
A growing global middle class can shrink the gap between rich and poor. As our species has moved from 2 percent of the population living in cities in ancient days to 80 percent in cities in the near future, the number of births per family has dropped from 8 to 1.2 (whereas 2.1 is needed for break-even). The fraction of the world living in extreme poverty and experiencing violence is decreasing; several technologies combined with earth shrinking and flattening (due to transportation, telecommunications, and trade) could accelerate this trend. Access to information and hence personalization means more resources available for further improvements.
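The demographic arithmetic here is easy to make concrete. A minimal per-generation projection, using only the fertility figures quoted above (and ignoring mortality shifts, migration, and generation-length effects):

```python
# Per-generation population multiplier implied by below-replacement fertility.
FERTILITY = 1.2      # births per family, from the text
REPLACEMENT = 2.1    # break-even births per family

factor = FERTILITY / REPLACEMENT   # ~0.571 per generation

def remaining(n_generations: int) -> float:
    """Fraction of the original population left after n generations."""
    return factor ** n_generations

for n in (1, 3, 5):
    print(n, round(remaining(n), 3))
```

At 1.2 births per family, each generation is roughly 57 percent the size of the last, so these figures imply a population shrinking by more than 80 percent within three generations, absent immigration.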
6. Privacy versus publicity:
Even if governments were to stay clear of spying on their own citizens or others, we will be increasingly motivated to share data about ourselves and everything that we see and hear, for reasons of security, personalization, and entertainment. Privacy is a relatively new social phenomenon. When our ancestors died in the tiny village of their birth, surrounded by relatives, there were no secrets. Privacy versus security is in fact a false dichotomy. A third option, lowering the need for secrets, seems to be gaining momentum. Consider that over the past decades the number of people concealing their psychiatric status, sexual orientation, STDs, cancer, and salary has shrunk for a variety of reasons. Some of these characteristics seem less susceptible to extortion or scandal than in the past. People share more now because new technologies make data more accessible (2012 Google versus 1950 private eye), or make the process of sharing exciting, fun, and chic (Facebook). Technologies also make sharing more personally valuable, for example, discussing which new drugs to take for cancer, AIDS, or depression. In forums such as PatientsLikeMe and PersonalGenomes.org, individuals communicate information to benefit people around the world.
So in addition to economic incentives to voluntarily give up privacy, attempts to sell secrecy result in false security. The memory hole of Orwell's 1984, a purposefully disingenuous illusion of information security, is not so fictional (e.g., deleted personal web searches are restored in crime investigations). Add to that human error and willful individuals and teams, and we see strong arguments against dark secrets with nowhere to hide. Secrets are symptoms, demanding not a better bandage but a treatment for the underlying disease. AIDS created an activism that led to open discussion and reduced the stigma associated with sexual preference. Even “essential” secrets maintained by police and war fighters are symptoms of a failure of diplomacy and a failure of technology policy to provide a decent life for all.
There may be a trend toward less violence with global improvements in the standard of living or education, and as we learn more about the underlying biology of violence. What will we see as anachronistic as we look back from a few decades hence? Will it be the wimpiness of our security or the prevalence of our secrets?
Yes, evolution has evolved deceptions, for example, camouflage or the mimicking of poisonous species. Could those sometimes serve the common good? Maybe. But evolution also gave us malaria, HIV, tuberculosis, and smallpox, among other things. Perhaps better than deception would be the notion that we need to separate ideas/memes/cultures for long enough to test them, and then recombine the parts we like best. But separation could be done without deception.
Let's say that we had strong artificial intelligence. Would we prefer to be able to read computers' minds or teach them how to lie to us and each other? Hopefully we will want the former. We would prefer to have them explain and justify their decisions rather than just strut off while taunting “It's for me to know and you to find out!” We generally engage the rule “Never hide information from the programmers.” The ability to check the robot's logic when a bug is found is valuable.
We can imagine scenarios where a person might not want to know something, but that should be their choice, not the choice of some paternalistic data-hoarding computer. Software should help people decide whether information is useless, harmful, or helpful to them (before they see the details).
*
Deception and secrecy (as typically implemented) do not have such choices as top priorities. What kinds of information would hurt people's feelings for no purpose? If you tell me I'm ugly, that could hurt my feelings because society favors handsome people. If I need a shave in order to succeed, then it helps me more to hear about it now rather than a decade from now, after I've been passed over in the marketplace of life.
You could hurt me by telling me that I invested stupidly, or had cancer, or was conceived out of wedlock, or that someone was insulted by my narcolepsy or by false rumors. Overall, openness gives me a better chance of nipping any such rumors in the bud.
We may decide to dispense with our current vast and destructive mechanisms for secret keeping (and deception). Cases in which new societal trends favoring secrecy override the need to know (or want to know) of one or more individuals may become rare, as they were centuries ago.
7. Will we become a new species?
This new species is sometimes called Homo evolutis, posthuman, transhuman, parahuman, or H+. It seems likely that legal, moral, and ethical concerns will loom larger and sooner due more to selection than to speciation, and due more to mixing of species than to isolating one species from another. We are already using parts from hundreds of previously separate species/kingdoms in human-induced pluripotent stem cells, DNA enzymes from fungi that promote genetic recombination, transcription factors from Xanthomonas, green fluorescent protein (GFP) from jellyfish, enzyme genes from plants to make essential fatty acids, and so on down a long list (see Chapter 2). The interspecies barrier is falling as fast as the Berlin Wall did in 1989. Not just occasional horizontal transfer but massive and intentional exchange: there is a global marketplace for genes. Not the isolating effect of islands or valleys resulting in genetic drift and xenophobia, but a growing addiction to foreign gene products, for example, humans “mating” with wormwood for the antimalarial drug precursor artemisinin, and with Clostridium for Botox. Maybe instead of the genus name Homo, we should adopt the genus name of our chimpanzee cousins, Pan (derived from the mischievous Greek god of that name, but note also the prefix “pan” for “all-inclusive” in this context, giving us a double entendre pointing to our genus increasingly using bits of DNA from the whole biosphere).
One fundamental question is, How much does speciation need isolation, for example, the type of ecological isolation between finch populations on Galapagos that, in part, led Darwin to develop the theory of evolution by natural selection? The earth used to be large relative to migrations. Humans died within a mile of their birth (one 25,000th of the earth's circumference). Now ordinary people commute 8.8 million miles in twenty-seven years. In addition to greatly enhanced travel, Stewart Brand would say that progressive urbanization means that by 2050, 80 percent of the population will live on just 3 percent of the land. So we will be literally running into each other on foot (without need for jets), as we are already doing in major cities. Such extreme population density will radically reduce isolation as a speciation driver. We would need a new way to achieve geographical isolation. Even space colonies may not do it if we can exchange ideas and DNA recipes at the speed of light.
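Two of the figures in this paragraph are easy to unpack with quick arithmetic. A sketch, with the earth's circumference (roughly 24,900 miles) supplied by me rather than by the text:

```python
EARTH_CIRCUMFERENCE_MI = 24_901   # approximate equatorial circumference, miles (assumed)

# "Within a mile of their birth" as a fraction of one 25,000th of the circumference:
fraction = EARTH_CIRCUMFERENCE_MI / 25_000
print(round(fraction, 2))   # ~1.0: one mile is indeed about 1/25,000 of the earth

# Density multiplier if 80 percent of people occupy 3 percent of the land:
density_multiplier = 0.80 / 0.03
print(round(density_multiplier, 1))   # ~26.7x the average population density
```

The second figure shows why urbanization so sharply reduces isolation: city dwellers would live at more than twenty-five times the average land density.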
Furthermore, the trend of relaxing rules against intermarriage seems to be taking us in the opposite direction from what's needed for speciation and for the evolution of a new human species. In Loving v. Virginia (1967), for example, the Supreme Court declared Virginia's anti-miscegenation law unconstitutional; the South African Immorality Act of 1927 (which banned sexual relations between whites and blacks) was repealed in 1985; and so on. Nevertheless, one way to achieve isolation would be for some of us to get far from our mother planet (a good survival idea in any case). Another way would be to design a species that is instantly incompatible with the current edition of Homo sapiens. Yes, I know: it's hard enough to design the properties of a truly new protein, much less a truly new genome! Still, one exceptional example is that of mirror life. Mirror people, as we have seen, would be immune to nearly all current pathogens, and their sperm and eggs would be incompatible with ours. Their blood and tissues would be immunologically rejected too (see Chapter 3).
One scenario for the distant future is the appearance of the technological singularity, an idea that goes back to 1847, when Richard Thornton wrote of “thinking machines” that they might “remedy all their own defects and then grind out ideas beyond the ken of mortal mind!” Other proponents of the idea of the singularity, the notion that at some point in the near to distant future technology will change us so much that, essentially, all bets are off, include mathematicians Alan Turing (in 1951), Stan Ulam (1958), I. J. Good (1965), Vernor Vinge (1983), and most recently Ray Kurzweil.
The singularity is the subject of many books. There is the Singularity University (where I am a lecturer), located at NASA Moffett Field in California (where else?), and even a documentary film, Transcendent Man (2009), and a science fiction film, Transcendence, in which a brain implant allows you to control a computer network. The argument goes basically that many of our key technologies, in particular computing and nanotechnology, are improving exponentially and synergizing, and as a consequence the rate of change is increasing. At some point computers may become artificially intelligent and then smarter than humans, able to recode their own code. This seems like a perfect recipe for an autocatalytic explosion, but unlike a literal explosion there is no dilution at the end of it, just unthinkable levels of smartness.
The fifth question for vitalism (posed way back in Chapter 1) is whether consciousness (or a mind) can be engineered synthetically. This is often framed as a method of mapping a human mind into a silicon-based computer, because of the perceived potential speed of thinking, sharing, and backing up mind states. Well before then, we will continue to engineer biological minds (with words, drugs, computer interfaces, and genetics). If we freeze and thaw a midge larva, it survives. If we slice its brain in half and realign the two pieces perfectly, will it remember its previous thoughts? (Midge-level thoughts like “Gee. Is winter early this year?”) Can we make variations on this brain and see the effects, perhaps on a large robotic scale? (Midges are cheap.) We could genomically engineer the midge to help, or engineer larger animals to be more midgelike in surviving brain freeze.
Some of us might be uncomfortable extrapolating further the measured exponential curves (e.g., those in Figure 7.1), or feel that theoretical limits to improvement, such as the minimum size of circuits set by atoms, thermodynamics, or quantum uncertainty, may kick in soon and impede progress. But an alternative critique accepts that we will soon be able to compute at currently inconceivable (but still finite) speeds, at which point we won't find much to compute. It may turn out that many of the most interesting things to model, to number-crunch, and to think about are already computed at high speed by nature, for example, the many-body problem in physics for predicting trajectories of complex collections of particles, or the folding of polymers into developing humans. Analytic intelligence, where we merely observe and predict, gives way to synthetic intelligence, where we predict the future by changing it. It may turn out that well before we hit the limits of computing, we will discover that we can achieve most of our goals that don't defy the laws of physics without radical change from whatever passes for human nature at the time, possibly even recognizable to current humans: probably no eugenics (government control of genetic inheritance), but heavily laden with W-genics (you-eu-genics, individuals' control over their own body genetics) and euphenics (changing traits by changing environments, drugs, and devices).