What Technology Wants
by Kevin Kelly

Ubiquitous Motors.
Machinery for grinding crankshafts at the Ford Motor Company, 1915.
By the 1910s, electric motors had started their inevitable spread into homes. They had been domesticated. Unlike a steam engine, they did not smoke or belch or drool. Just a tidy, steady whirr from a five-pound hunk. As in factories, these single “home motors” were designed to drive all the machines in one home. The 1916 Hamilton Beach “Home Motor” had a six-speed rheostat and ran on 110 volts. Designer Donald Norman points out a page from the 1918 Sears, Roebuck and Co. catalog advertising the Home Motor for $8.75 (which is equivalent to about $100 these days). This handy motor would spin your sewing machine. You could also plug it into the Churn and Mixer Attachment (“for which you will find many uses”) and the Buffer and Grinder Attachments (“will be found very useful in many ways around the home”). The Fan Attachment “can be quickly attached to Home Motor,” as well as the Beater Attachment to whip cream and beat eggs.
A 1918 magazine advertisement for the Sears Home Motor.
One hundred years later, the electric motor has seeped into ubiquity and invisibility. There is no longer one home motor in a household; there are dozens of them, and each is nearly invisible. No longer stand-alone devices, motors are now integral parts of many appliances. They actuate our gadgets, acting as the muscles for our artificial selves. They are everywhere. I made an informal census of all the embedded motors I could find in the room I am sitting in while I write:
5 spinning hard disks
3 analog tape recorders
3 cameras (move zoom lenses)
1 video camera
1 watch
1 clock
1 printer
1 scanner (moves scan head)
1 copier
1 fax machine (moves paper)
1 CD player
1 pump in radiant floor heat
That's 20 home motors in one room of my home. A modern factory or office building has thousands. We don't think about motors. We are unconscious of them, even though we depend on their work. They rarely fail, yet they have changed our lives. We aren't aware of roads and electricity either, because they are ubiquitous and usually work. We don't think of paper and cotton clothing as technology because their reliable presence is everywhere.
In addition to a deep embeddedness, ubiquity also breeds certainty. The advent of a new technology is always disruptive. The first version of an innovation is cumbersome and finicky. It is, to repeat Danny Hillis's definition of technology, “stuff that does not work yet.” A newfangled type of plow, waterwheel, saddle, lamp, phone, or automobile can offer only uncertain advantages in exchange for certain trouble. Even after an invention has been perfected elsewhere, when it is first introduced into a new zone or culture it requires the retraining of old habits. The new type of waterwheel may require less water to run but also require a different type of milling stone that is hard to find, or it may produce a different quality of flour. A new plow may speed tilling but demand planting seed later, thus disrupting ancient traditions. A new kind of automobile may have a longer range but less reliability, or greater efficiency but less range, altering driving and fueling patterns. The first version is almost always only marginally better than what it hopes to displace. That is why only a few eager pioneers are inclined to adopt an innovation at first: the new primarily promises headaches and the unknown. As an innovation is perfected, its benefits become evident and the education needed to use it gets sorted out; it becomes less uncertain, and the technology spreads. That diffusion is neither instantaneous nor even.
In every technology's life span, then, there will be a period of haves and have-nots. Clear advantages may flow to the individuals or societies who first take a risk with unproven guns or the alphabet or electrification or laser eye surgery over those who do not. The distribution of these advantages may depend on wealth, privilege, or lucky geography as much as desire. This divide between the haves and the have-nots was most recently and most visibly played out at the turn of the last century when the internet blossomed.
The internet was invented in the 1970s and offered very few benefits at first. It was primarily used by its inventors, a very small clique of professionals fluent in programming languages, as a tool to improve itself. From birth the internet was constructed in order to make talking about the idea of an internet more efficient. Likewise, the first ham radio operators primarily broadcast discussions about ham radio; the early world of CB radio was filled with talk about CB; the first blogs were about blogging; the first several years of twitterings concerned Twitter. By the early 1980s, early adopters who mastered the arcane commands of network protocols in order to find kindred spirits interested in discussing this tool moved onto the embryonic internet and told their nerdy friends. But the internet was ignored by everyone else as a marginal, teenage male hobby. It was expensive to connect to; it demanded patience, the ability to type, and a willingness to deal with obscure technical languages; and very few other nonobsessive people were online. Its attraction was lost on most people.
But once the early adopters modified and perfected the tool to give it pictures and a point-and-click interface (the web), its advantages became clearer and more desirable. As the great benefits of digital technology became apparent, the question of what to do about the have-nots became a contested issue. The technology was still expensive, requiring a personal computer, a telephone line, and a monthly subscription fee—but those who adopted it acquired power through knowledge. Professionals and small businesses grasped its potential. The initial users of this empowering technology were—on the global scale—the same set of people who had so many other things: cars, peace, education, jobs, opportunities.
The more evident the power of the internet as an uplifting force became, the more evident the divide between the digital haves and have-nots. One sociological study concluded that there were “two Americas” emerging. The citizens of one America were poor people who could not afford a computer, and of the other, wealthy individuals equipped with PCs who reaped all the benefits. During the 1990s, when technology boosters like me were promoting the advent of the internet, we were often asked: What are we going to do about the digital divide? My answer was simple: nothing. We didn't have to do anything, because the natural history of a technology such as the internet was self-fulfilling. The have-nots were a temporary imbalance that would be cured (and more) by technological forces. There was so much profit to be made connecting up the rest of the world, and the unconnected were so eager to join, that they were already paying higher telecom rates (when they could get such service) than the haves. Furthermore, the costs of both computers and connectivity were dropping by the month. At that time most poor people in America owned televisions and had monthly cable bills. Owning a computer and having internet access was no more expensive and would soon be cheaper than TV. In a decade, the necessary outlay would become just a $100 laptop. Within the lifetimes of all born in the last decade, computers of some sort (connectors, really) will cost $5.
This was simply a case, as computer scientist Marvin Minsky once put it, of the “haves and have-laters.” The haves (the early adopters) overpay for crummy early editions of technology that barely work. They purchase flaky first versions of new goods that finance cheaper and better versions for the have-laters, who will get things that work for dirt cheap not long afterward. In essence, the haves fund the evolution of technology for the have-laters. Isn't that how it should be, that the rich fund the development of cheap technology for the poor?
We saw this have-later cycle play out all the more clearly with cell phones. The very first cell phones were larger than bricks, extremely costly, and not very good. I remember an early-adopter techie friend who bought one of the first cell phones for $2,000; he carried it around in its own dedicated briefcase. I was incredulous that anyone would pay that much for something that seemed more toy than tool. It seemed equally ludicrous at that time to expect that within two decades, the $2,000 devices would be so cheap as to be disposable, so tiny as to fit in a shirt pocket, and so ubiquitous that even the street sweepers of India would have one. While internet connection for sidewalk sleepers in Calcutta seemed impossible, the long-term trends inherent in technology aim it toward ubiquity. In fact, in many respects the cell coverage of these “later” countries overtook the quality of the older U.S. system, so that the cell phone became a case of the haves and have-sooners, in that the later adopters got the ideal benefits of mobile phones sooner.
The fiercest critics of technology still focus on the ephemeral have-and-have-not divide, but that flimsy border is a distraction. The significant threshold of technological development lies at the boundary between commonplace and ubiquity, between the have-laters and the “all have.” When critics asked us champions of the internet what we were going to do about the digital divide and I said “nothing,” I added a challenge: “If you want to worry about something, don't worry about the folks who are currently offline. They'll stampede on faster than you think. Instead you should worry about what we are going to do when everyone is online. When the internet has six billion people, and they are all e-mailing at once, when no one is disconnected and everyone is always on, day and night, when everything is digital and nothing offline, when the internet is ubiquitous. That will produce unintended consequences worth worrying about.”
I would say the same today about DNA sequencing, GPS location tracking, dirt-cheap solar panels, electric cars, or even nutrition. Don't worry about those who don't own a personal fiber-optic cable to their school; worry what happens when everyone does. We were so focused on those who don't have plenty to eat that we missed what happens when everyone does have plenty. A few isolated manifestations of a technology can reveal its first-order effects. But not until technology saturates a culture do the second- and third-order consequences erupt. The unintended consequences that so scare us in technology usually arrive with ubiquity.
And most of the good things as well. The trend toward embedded ubiquity is most pronounced in technologies that are convivially open-ended: communications, computation, socialization, and digitization. There appears to be no end to their possibilities. The amount of computation and communication that can be crowded into matter and materials seems infinite. There is nothing we have invented to date about which we've said, “It's smart enough.” In this way the ubiquity of this type of technology is insatiable. It constantly stretches toward a pervasive presence. It follows the trajectory that pushes all technology into ubiquity.
FREEDOM
As with our other traits, our free will is not unique. Choice, in a primitive and unconscious form, exists in animals. Every animal has primitive wants and will make choices to satisfy them. But free will precedes even life. Some theoretical physicists, including Freeman Dyson, argue that free will occurs in atomic particles, and therefore free choice was born in the great fire of the big bang and has been expanding ever since.
As an example Dyson notes that the exact moment when a subatomic particle decays or changes the direction of its spin must be described as an act of free will. How can this be? Well, all the other microscopic motions of that cosmic particle are absolutely predetermined by the particle's previous state. If you know where a particle is and its energy and direction, you can predict exactly, without fail, where it will be in the next moment. This utter allegiance to a path predetermined by its previous state is the foundation of the “laws of physics.” Yet a particle's spontaneous dissolution into subparticles and energy rays is not predictable, nor predetermined by laws of physics. We tend to call this decay into cosmic rays a “random” event. Mathematician John Conway (with Simon Kochen) proposed a proof arguing that neither the mathematics of randomness nor the logic of determinism can properly explain the sudden (why right now?) decay or shift of spin direction in cosmic particles. The only mathematical or logical option left is free will. The particle simply chooses in a way that is indistinguishable from the tiniest quantum bit of free will.
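The contrast Dyson draws, between lawful motion that follows exactly from the previous state and a decay moment that no law fixes, can be sketched in a toy simulation. This is illustrative only: the numbers, the ballistic update, and the exponential-decay model are my assumptions, not anything from the text.

```python
import random

def next_position(position, velocity, dt=1.0):
    """Deterministic update: the next state follows exactly from the last."""
    return position + velocity * dt

def decay_time(half_life, rng):
    """Random update: each run yields a different, unpredictable decay time,
    drawn from an exponential distribution with rate ln(2)/half_life."""
    return rng.expovariate(0.693147 / half_life)

# The trajectory is identical on every run: three steps at velocity 2.0
# from position 0.0 always land on 6.0.
p = 0.0
for _ in range(3):
    p = next_position(p, velocity=2.0)
print(p)  # always 6.0

# But the decay moment differs on every run; only its statistics are lawful.
rng = random.Random()
print(decay_time(half_life=10.0, rng=rng))  # varies run to run
```

Knowing the half-life pins down the distribution of decay times, yet nothing in the model picks out the particular instant; that gap between statistical law and individual event is exactly where Dyson and Conway locate the particle's "choice."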
Theoretical biologist Stuart Kauffman argues that this “free will” is a result of the mysterious quantum nature of the universe, by which quantum particles can be in two places at once, or be both wave and particle at once. Kauffman points out that when physicists shoot photons of light (which are wave/particles) through two tiny parallel slits (a famous experiment), the photon can pass through only as either a wave or a particle, but not both. The photon must “choose” which form it manifests. But the weird and telling thing about this experiment, which has been done many times, is that the wave/particle only chooses its form (either a wave or a particle) after it has already passed through the slit and is measured on the other side. According to Kauffman, the particle's shift from the undecided state (a quantum superposition, or coherence) to the decided state (via quantum decoherence) is a type of volition and thus the source of free will in our own brains, since these quantum effects happen in all matter.
