Terminator and Philosophy: I'll Be Back, Therefore I Am

Authors: Richard Brown, William Irwin, Kevin S. Decker


 
This isn’t to say that the results of our technologies couldn’t be harmful to us in some way or other. As the examples of Hiroshima, Chernobyl, the Titanic, and other disasters remind us, there are very significant threats posed by the machines we create. We might even create machines that would bring about our ultimate demise, like the fate depicted in The Terminator. But this would be our own nature turning back upon itself, a proverbial shooting ourselves in the foot, rather than the creation of a fundamentally alien foe. Skynet, no matter how intelligent and conscious it may be, would be an extension of our own intelligence and consciousness in a seemingly external form. If it were to destroy humanity, it would not be a case of genocide, but rather suicide.
 
So there is no reason to fear machines as forces unto themselves. Rather than being concerned with whether technology itself is a good thing or a bad thing, we should instead be concerned with the values that we bring to the table in using it. Technology is neither intrinsically good nor evil, but rather takes on the form we give it as active, creating beings. Where we go with it is ultimately up to us. As John Connor puts it, “There is no fate but what we make for ourselves.” So the next time you look at a computer and start to wonder whether an infant Skynet might be lurking within, take a step back and recognize the connectedness you have with the machine. Rest assured that it is not independently growing into an evil self-awareness bent on demolishing the human race, but rather is an extended component of humanity itself, inseparable from whatever functions we carry out through our various thoughts and actions.
 
NOTES
 
1. From the director’s commentary, Terminator 2: Judgment Day, dir. James Cameron (Live/Artisan, DVD release 2000).
 
2. See Bruce Mazlish’s The Fourth Discontinuity: The Co-Evolution of Humans and Machines (New Haven: Yale University Press, 1993), which offers an interesting historical analysis of the relationship between humans and machines, culminating in the dissolution of their apparent separateness.
 
3. Taking this a step further, we could even say that we are machines produced by our genes as vehicles for their survival and reproduction. See Richard Dawkins, The Selfish Gene (New York: Oxford University Press, 1976), for a fascinating portrayal of this perspective.
 
4. For an introductory exploration of the functionalist conception of the mind as a kind of machine, take a look at the fascinating collection of articles put together by Daniel Dennett and Douglas Hofstadter in their now-famous anthology The Mind’s I: Fantasies and Reflections on Self and Soul (New York: Basic Books, 1981). Dennett’s and Hofstadter’s more recent work on the nature of the mind is also worth checking out. For one of the best-known views opposing computational models of the human mind, see Hubert Dreyfus, What Computers Still Can’t Do (Cambridge, MA: MIT Press, 1992).
 
5. William Lycan, Consciousness (Cambridge, MA: MIT Press, 1987).
 
6. Richard Dawkins, The Extended Phenotype (New York: Oxford University Press, 1982), vi.
 
7. Andy Clark, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (New York: Oxford University Press, 2003), 78.
 
8. Aristotle, Nicomachean Ethics, trans. Christopher Rowe (New York: Oxford University Press, 2002), 233. Thanks to Benjamin Rider for drawing this connection.
 
9. Clark, Natural-Born Cyborgs, 6.
 
PART TWO
 
WOMEN AND REVOLUTIONARIES
 
5
 
“I KNOW NOW WHY YOU CRY”: TERMINATOR 2, MORAL PHILOSOPHY, AND FEMINISM
 
Harry Chotiner
 
 
 
 
 
The first spoken words in Terminator 2: Judgment Day belong to Sarah Connor: in a flat, emotionless voice she tells us, “Three billion lives ended on August 29, 1997.” That knowledge is her curse. It defines her primary task in the first Terminator, staying alive so that someday her unborn son can lead the resistance to the machines that will precipitate the holocaust and wage war against humanity. And in T2 it gives her an additional burden: not only must she protect her now adolescent son, John, but she must try to stop that future Judgment Day of extermination.
 
But at what cost? What would she, or for that matter, what would we do to save three billion lives? Kill one innocent person? Almost all of us would happily make that trade-off, and that’s precisely what Sarah Connor sets off to do. Laden with death-dealing hardware, she moves with military precision to the home of Miles Dyson, whose research will produce the genocidal machines.
 
The idea of sacrificing one innocent life to save many seems compelling to us. But what if my doctor had four transplant patients who would die without donor organs, and so she decided to kill me and harvest my organs to save them? Does my more-than-squeamish discomfort simply reflect my selfish attachment to my own life, or is there something morally wrong with what she’s doing? Would it be acceptable, even moral, for my doctor to use me like this if she could save ten lives? A hundred lives? A million lives? Maybe the numbers don’t matter: maybe there’s something wrong with the very principle that Sarah Connor and my doctor both adhere to. Or maybe, as some feminist philosophers suggest, their reliance on abstract principles is poor moral thinking in the first place.
 
The choices facing Sarah Connor and my soon-to-be ex-doctor concern how we make difficult moral decisions. Philosophers reject the world’s most common approach: “Trust an authority.” Whether it be a king or the laws, the Bible or the pope, a parent or customs and tradition, almost all modern philosophers reject moral choices based on the commands of a traditional authority.[1] But without traditional authorities, on what basis can Sarah justify her decision? To answer this question, let’s look at the two most widely accepted philosophical answers, utilitarianism and deontological ethics, and then explore how their application to theories of children’s moral development opened the door for feminist criticism of both.
 
“You Can’t Just Go Around Killing People.” “Why?”
 
Utilitarianism is the idea that an action can be judged as good or bad based on its consequences, and in particular, on how much pleasure or pain is produced. Jeremy Bentham (1748-1832), the modern father of utilitarianism, believed that while a few “moral heroes” act only on the basis of some disinterested greater good, most of us use our emotions and act on the basis of what will bring us the most happiness. For Bentham, that principle is so ingrained, so fundamental, and so obvious that it needs almost no defense. And society would be better off if governments made laws based on this principle of the greatest happiness for the greatest number. Bentham even developed a set of formulae (called the “hedonistic calculus”) that would measure happiness and pain, allowing individuals and governments to make rational decisions about maximizing the former and minimizing the latter.[2] So, a moral action is right if its consequences maximize happiness and/or diminish pain for the greatest number of people.
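 
Bentham never reduced the hedonistic calculus to one canonical formula, but its spirit can be sketched as a crude expected-value tally. The following Python sketch is purely illustrative: the Effect type, the magnitudes, and the probabilities are all invented assumptions for this example, not anything Bentham specified.

# A toy rendering of Bentham's "hedonistic calculus": tally the expected
# pleasure and pain an option produces for everyone affected, then pick
# the option with the highest net score. Every name and number below is
# an invented illustration; Bentham gave no canonical units or weights.

from dataclasses import dataclass

@dataclass
class Effect:
    hedons: float     # net pleasure (+) or pain (-) per affected person
    certainty: float  # probability the effect actually occurs (0.0 to 1.0)
    extent: int       # Bentham's "extent": how many people are affected

def net_utility(effects: list[Effect]) -> float:
    # Expected pleasure minus pain, summed over everyone affected.
    return sum(e.hedons * e.certainty * e.extent for e in effects)

# Sarah Connor's dilemma in these toy terms:
options = {
    "kill Dyson": [
        Effect(hedons=-100.0, certainty=1.0, extent=1),             # one innocent death
        Effect(hedons=1.0, certainty=0.8, extent=3_000_000_000),    # lives probably saved
    ],
    "spare Dyson": [
        Effect(hedons=-100.0, certainty=0.8, extent=3_000_000_000), # Judgment Day proceeds
    ],
}

for name, effects in options.items():
    print(name, net_utility(effects))
print("utilitarian verdict:", max(options, key=lambda n: net_utility(options[n])))

On any such tally, killing Dyson wins overwhelmingly, and that is exactly the worry raised above: the same arithmetic endorses my organ-harvesting doctor whenever the numbers line up.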
 
When Sarah Connor decided to kill one innocent man (the scientist, Miles Dyson) to save the lives of billions, she was thinking of consequences as a utilitarian would. Bentham wouldn’t have been interested in her motivations. Perhaps she wanted to save lives so she would become famous, or perhaps she wanted to kill Miles Dyson because she hated scientists. Bentham wouldn’t care. For him, the only question would have involved the consequences of her actions.
 
Bentham’s ideas were developed and refined by John Stuart Mill (1806-1873). In his 1863 book Utilitarianism, Mill argued that happiness was too vague and too hard to calculate with the mathematical precision that Bentham claimed. He also believed that different types of happiness had different value and merit: as Mill put it, it’s better to be “Socrates dissatisfied than a fool satisfied.”[3] But though he modified and developed Bentham’s ideas, Mill still worked from the premise that a decision or an action is right or wrong based solely on its consequences.
 
But neither Bentham nor Mill resolves our problem with Sarah Connor’s utilitarian calculation. Our moral intuition recoils at the idea of sacrificing innocent lives for some greater good. Even if we approved of shooting Miles Dyson to save three billion people, most of us would not sanction sacrificing humans in medical experiments that would lead to a cure for AIDS or cancer.[4]
 
This limitation brings us to the other moral system that many philosophers embrace: Immanuel Kant’s (1724-1804) deontology.[5] His thoughts about ethics are worked out with great complexity, but we need only focus on three of his conclusions. First, for Kant, unlike for Bentham, our motivations are crucial: only an action that stems from our sense of duty to obey moral rules is an ethically correct act.[6] So Kant’s judgment about Sarah Connor’s decision would necessarily have to include a look at her motivations.
 
Kant’s second moral conclusion is linked to his assertion that motives matter. Only an act done from a good will, a desire to do one’s duty for its own sake, can be moral (deontology means a duty-based ethics). But what kinds of actions are done purely from duty? Kant argues that there are two types of imperatives: hypothetical and categorical. Hypothetical imperatives are what we must do if we want to obtain a desired goal; they are the means to ends. So, for example, if I want to make the football or chess team, I ought to practice hard. Categorical imperatives command an action because of the inherent value of that action in itself, not because it’s a means to an end. These types of imperatives are unconditional, not based on any desire (like making a varsity team). They are absolute, and don’t allow for exceptions based on circumstances. This categorical law is unconcerned with consequences and results. These moral duties represent the injunctions of reason, and they are universal principles of conduct. As Kant says, “One ought never to act except in such a way that one can also will that one’s maxim should become a universal law.”[7]
 
Suppose, for example, that I borrow a hundred dollars from someone and then decide not to pay him back because he’s an unpleasant person. The categorical imperative tells me that I must then condone everyone in the world who borrows money doing just as I intend to do: acting on the principle that they needn’t pay money back if they find the lender unpleasant. Or if I lie to someone when it would be embarrassing for me to tell the truth, then I have to condone lying by anyone who’s embarrassed. For Kant, breaking promises and lying are wrong in and of themselves, regardless of the consequences. And given my concern about my doctor’s intention to harvest my organs to save numerous other patients, I’m drawn to this ethical system that precludes the calculation of consequences as the basis for judging an action.
 
Yet there’s an example from T2’s opening chase through the mall that shows us a problem with the Kantian approach. The T-1000 is showing John’s photo to other kids at the mall and asking if they’ve seen the boy. When he asks John’s friend, the friend lies, saying he hasn’t seen John and doesn’t know where he is. But Kant finds it wrong to lie: we can’t universalize lying for all humanity. Moreover, Kant believes we can’t know the consequences of our actions. Suppose the T-1000, realizing that the kid lied, ended up killing the kid and his whole family; the kid would have been better off telling the truth. Or suppose John had gone to hide in the garage, so the kid lied and told the Terminator that John was on the roof, but, unbeknownst to the kid, John had changed his mind and gone to the roof to hide; the kid would have sent the T-1000 straight to John. Since we cannot know the consequences of our actions, and since lying is wrong in itself, Kant would say that John’s friend should not have lied, even if that meant telling the T-1000 where to find John.[8]
 
Kant’s third moral principle would be the most difficult for Sarah Connor. Kant believed that what most makes us human is our capacity to use reason. Unlike every other animal, we use reason to determine our goals and endeavors rather than having them determined by our biology. Insects, fish, lions, and whales may have very complicated lives, but whom they interact and reproduce with, and how they hunt, kill, nurture, and play, are all dictated by their biology. We humans can use our reason to define what we want our lives to be about. We can think about the future, imagine how things can be different, and make plans to bring about that future. Looking at every other form of life, it’s hard not to be impressed with this capacity of ours.
