The Internet of Us

Michael P. Lynch

My mom's skepticism about the reliability of testimony has deep roots in our culture. The seventeenth-century philosopher John Locke snarled at the idea that you can know something merely because someone else tells you: “I hope it will not be thought arrogance to say,” he begins, “that perhaps we should make greater progress in the discovery of rational and contemplative knowledge if we sought it in the fountain . . . of things in themselves, and made use rather of our own thoughts than other men's to find it.” He goes on, “The floating of other men's opinions in our brains makes us not one jot the more knowing, though they happen to be true. What in them was science is in us but opiniatrety.”[8]

Locke's point seems to be that real knowledge only comes from your own personal observations, or from the use of your memory, logical reasoning, and so on. Real knowers, he seems to say, are self-reliant: they drink from the fountain of “things in themselves.” That is, they believe only when they have, or at least can easily obtain, reasons—reasons based on personal observations and critical thinking—for one side of the given issue or another.

An emphasis on self-reliance makes sense given that Locke was a founding figure of the Enlightenment, known for celebrating the individual's political rights and autonomy. For Locke, citizens had a natural right to their property, and the government needed to be relatively standoffish with regard to how people used that property. This trumpeting of the rights of the individual had a natural epistemic correlate, call it Locke's command: thou shalt figure things out thyself. The sentiment is echoed again and again throughout the period. Kant, in fact, defined enlightenment partly in terms of it: as humanity's “emergence from a self-imposed immaturity”—immaturity due to a lack of courage to think for oneself as opposed to going with the flow.[9] One could not even enter the British Royal Society without passing under its motto (then and now) of nullius in verba (“take nobody's word for it”).

These sentiments were largely a reaction to an older idea—that all knowledge requires deferring to, and trusting, authority. Education in the sixteenth century was still very much a matter of mastering certain religious and classical texts, and what you knew came from those, and only those, texts. But as it became apparent that these texts were often wrong (think of Galileo's and Copernicus' discoveries, for example), the method of consulting them for knowledge came to seem naive. Thus by 1641, we find Descartes wiping away such methods with the very first sentence of his most famous book: “Some years ago I was struck by the large number of falsehoods that I had accepted as true in my childhood, and by the highly doubtful nature of the whole edifice that I had subsequently based on them.”[10] Descartes sat down and attempted to reconstruct what he really knew—using only materials that he could find with his own mind—and he implicitly urged his reader to do the same. In other words, don't trust someone else's say-so; question authority.

This is still good advice—to a point. That's because, really, “Locke's command” is impossible to follow strictly. We can't figure everything out for ourselves. So, if we interpret Locke as telling us that you only really know that which you discover on your own, then you and I don't know very much. And neither did Locke. Even in Locke's time, when someone like himself could master so much about the world (and write a book titled, non-ironically, An Essay concerning Human Understanding), there was still a cognitive division of labor. Expertise was acknowledged and encouraged in mathematics, navigation, engineering, farming and warfare. Education systems—universities—were already centuries old by Locke's time, and the printing press, and growing literacy, were allowing more and more people to learn from the knowledge of others.

The point is partly economic. It simply isn't efficient to try and have everyone know everything—any more than it is efficient to try and get everyone to grow all their own food. After all, how much of the information that you get from your phone could you work out personally? My experiment from last summer demonstrates that the answer is: not very much. Even if we can figure some things out offline, we still consult experts and outside resources. In all these sorts of cases, we must defer to the testimony and expertise of others.

Sure, when the zombie apocalypse comes, you want to be able to stand on your own. But this isn't the zombie apocalypse. In real life, self-reliant folks get that way because of what other people have done for them. Economically self-reliant people typically have benefited from the help of others (and from those public entities that maintain the roads, maintain armies and teach the workers how to read and write). Likewise for the mythic self-reliant believer. This is the fantasy of the rugged individual judging the truth for himself without dependence on anyone else. In the TV commercial version, he is the man in the white lab coat, smoking his pipe, squinting into a microscope. But how did he get there? By learning from others, of course—from education. And while some of the information we learn this way can, at least in principle, be verified by our own reasoning and observation, the fact is that not everything we learn is like this. All of us are finite creatures with limited life spans. We can't check out all the information we rely on in any given day, let alone over a whole lifetime.

So, while information cascades, rumors and ignorance do spread like wildfire, we aren't going to give up on Google-knowing because of that. Nor should we, any more than we should give up learning from others. The lesson I glean from Mom and Locke, therefore, isn't to be an intellectual hermit. You don't have to throw your iPhone away and stop using Twitter. What we—both as individuals and as a society—should learn from Mom and Locke is that we must be extremely careful about allowing online information acquisition—Google-knowing—to swamp other ways of knowing. And yet that is, increasingly, precisely what we are prone to do.

Being Reasonable: Uploading Reasons

Imagine you want to buy a good apple from folks who have their own apple-sorting device. They claim it is great at picking out the good apples from the bad. Does that help you decide whether to buy one of their apples? Not really—even if they were to later turn out to be right. For unless you already have reason to trust them, the mere fact they say they have a reliable apple-sorter is of little use to you. And that remains the case even if they are actually right—they really do have the good apples. Analogously, where the issue isn't sorting good apples from bad but true information from false, my merely saying I've got good information isn't generally enough to make you want to buy it—even if I really do have a way of telling the difference between what's true and what isn't.

This tells us something important: if we were to define knowing as being merely receptive—as accurate downloading and nothing more—we would be ignoring a crucial fact about the human condition.

In some cases—many cases, in fact—we trade information in situations where trust has already been established to some degree. It is a spectrum. On one end of the spectrum are cases where we have a reasonably high degree of trust in the person we are asking for information: as when you talk to your trusted family doctor about your health, your spouse about your children, your professor about the material which she is professing. Of course, no human is infallible, but you ask those you trust for information because you think they have a good probability of being right—or at least more right than yourself. Farther down the spectrum is the case where you stop a passerby to ask for directions. Here too, you already have some degree of trust, since a) you have some reason to think he is local (he is walking down the street) and b) you have no reason to think he will lie. We all know that both of these conditions can be defeated, of course (sometimes it turns out that you're asking another tourist), but nonetheless, we ask directions with the reasonable expectation of getting useful information.

Yet while many of our interactions are like this, many are farther down the spectrum still. In some cases, we may need information but have very little to go on. That's why we seek evidence to help us assess other people's opinions. When, for example, we aren't an expert on something ourselves, we seek advice from those who say they are. But if we are wise, we also get evidence of that person's expertise: references, degrees, or word of mouth. Moreover, we look for them to explain their opinions to us in ways that make sense given what we do know. We don't trust blindly—we look for reasons.

This is another place where a philosophical thought experiment can help. The seventeenth-century philosopher Thomas Hobbes postulated that governments are rational responses to the nasty, brutish and short lives we are doomed to lead in what he called the state of nature—a state where everyone is against everyone and no one works together to distribute resources. His thought was that those in the state of nature would face strong pressures to form a government, allowing them to coordinate and share these resources: to stop the “war of all against all.” Usually, when we think about this idea, we are thinking of shelter, food and water as resources. But it is pretty clear that justified accurate information—knowledge—is a resource as well. In order to escape the state of nature, we would need to exchange information in a situation where, at least to begin with, we would have fairly low levels of preexisting trust. In other words, we would face what we might call the information coordination problem.[11]

The information coordination problem isn't just hypothetical. All societies face it, since no society can survive without its citizens trading information with one another. But how do we solve it? You can't just look and see the truth in my brain. What you need is some reflectively appreciable evidence—you are looking for a reason to believe that my apple sorting is reliable, so to speak. By a “reason” here, I mean a consideration in favor of believing something. Not all reasons are good ones, of course. But when we are consciously deciding what to believe, we are engaging our capacity for reflection, or “system 2” as Kahneman calls it. We are trying to sort the true from the false. When we do so successfully, we are knowing in a different way: we are not just being receptive. We are being reflective, responsible believers.

A key challenge to living in the Internet of Us is not letting our super-easy access to so much information lull us into being passive receptacles for other people's opinions. Locke and Descartes may have overemphasized the role of reason in our lives. But we can't fall into the opposite error either. Knowing now is both faster and more dependent on others than Descartes or Locke would have ever imagined. If we are not careful, that can encourage in us the thought that all knowing is downloading—that all knowing is passive. That would be a serious mistake. If we want to be more than just passive, receptive knowers, we need to struggle to be autonomous in our thought. To do that is to believe based on reasons you can own—stemming from principles you would, on reflection, endorse.[12]

But if the principles we use to evaluate one another's information are forever hidden from view, they aren't of much use. In order to solve the information coordination problem we can't just live up to our own standards. We need to be willing to explain ourselves to one another in terms we can both understand.[13] It is not enough to be receptive downloaders and reflective, responsible believers. We also need to be reasonable.

Reasonableness isn't a matter of being polite. It has a public point. Exchanging reasons matters because it is a useful way of laying out evidence of credibility. It is why we often demand that people give us arguments for their views, reasons that they can upload onto our shared public workspace. We use these reasons, for good or ill, as trust-tags. And the converse holds as well—if I want you to trust me, I will find it useful to give you some publicly appreciable evidence for thinking of me as credible. One way to do that is to upload a reason into our shared workspace.

Public workspaces require public rules. If we are going to live together and share resources, we need people to play by at least some of the same rules. We need them to be reasonable, ethical actors. The same holds when it comes to sharing information. If that is going to work, we need people to be reflective and reasonable believers—to be willing to play the game of giving and asking for reasons by rules most of us could accept were we to stop to think about it. Only if we can hold one another accountable for following the rules can we make sense of having a fair market of information.

But how realistic is this? Digital media gives us more means for self-expression and autonomous opinion-forming than any human has ever had. But it also allows us to find support for any view, no matter how wacky. And that raises an important question: What if our digital form of life has already exposed “reason” as a naive philosopher's fantasy? What if we no longer recognize the same rules of reason?

What if it is too late to be reasonable?

