Statistics for Dummies

Author: Deborah Jean Rumsey

Surveys (polls)

A survey (more commonly known as a poll) is a measurement tool that is most often used to gather people's opinions along with some relevant demographic information. Because so many policymakers, marketers, and others want to "get at the pulse of the American public" and find out what the average American is thinking and feeling, many people now feel that they cannot escape the barrage of requests to take part in surveys and polls. In fact, you've probably received many requests to participate in surveys, and you may even have become numb to them, simply throwing away surveys received in the mail or saying "no" when you're asked to participate in a telephone survey.

If done properly, a survey can really be informative. People use surveys to find out what TV programs Americans (and others) like, how consumers feel about Internet shopping, and whether the United States should have a nuclear defense system. Companies use surveys to assess how satisfied their customers are, to find out what products their customers want, and to determine who is buying their products. TV stations use surveys to get instant reactions to news stories and events, and movie producers use them to determine how to end their movies.

If I had to choose one word to describe the general state of surveys in the media today, I'd have to use the word quantity, rather than quality. In other words, you'll find no shortage of bad surveys. You can ask a few basic questions to determine whether a survey has been conducted properly; these issues are covered in detail in Chapter 16.

Estimation

One of the biggest uses of statistics is to guesstimate something (the statistical term is estimation), as in the following examples:

  • What's the average household income in America?

  • What percentage of households tuned in to the Academy Awards this year?

  • What's the average life expectancy of a baby born today?

  • How effective is this new drug?

  • How clean is the air today, compared to ten years ago?

All of these questions require some sort of numerical estimate to answer, yet the business of coming up with a fair and accurate estimate can be quite involved. The following sections cover major elements in that process. For more information on making and interpreting estimates, see Chapter 11.

Margin of error

You've probably heard someone report, "This survey had a margin of error of plus or minus 3 percentage points." What does this mean? All surveys are based on information collected from a sample of individuals, not the entire population. A certain amount of error is bound to occur, not in the sense of calculation error (although there may be some of that, too), but in the sense of sampling error: error that's bound to happen simply because the researchers aren't asking everyone. The margin of error measures the maximum amount by which the sample results are expected to differ from those of the actual population. Because the results of most survey questions can be reported in terms of percentages, the margin of error most often appears as a percentage as well.

How do you interpret a margin of error? Suppose you know that 51% of those sampled say that they plan to vote for Miss Calculation in the upcoming election. Now, projecting these results to the whole voting population, you would have to add and subtract the margin of error and give a range of possible results in order to have sufficient confidence that you're bridging the gap between your sample and the population. So, in this case (supposing a margin of error of plus or minus 3 percentage points), you would be pretty confident that between 48% and 54% of the population will vote for Miss Calculation in the election, based on the sample results. In this case, Miss Calculation may get slightly more or slightly less than the majority of votes and could either win or lose the election. This has become a familiar situation in recent years, where the media want to report results on Election Night, but based on survey results, the election is "too close to call." For more on the margin of error, see Chapter 10.
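As a sketch of where that plus-or-minus figure comes from, here is a minimal calculation of the margin of error for a sample proportion. It assumes a simple random sample, a 95% confidence level, and a hypothetical sample size of 1,000 (the text doesn't specify one):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion,
    using the normal approximation (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 51% of 1,000 respondents favor Miss Calculation.
p_hat, n = 0.51, 1000
moe = margin_of_error(p_hat, n)
print(f"margin of error: +/- {moe:.1%}")                      # about 3.1 points
print(f"plausible range: {p_hat - moe:.1%} to {p_hat + moe:.1%}")
```

Note that a poll of about 1,000 people yields roughly the familiar "plus or minus 3 points" quoted in the media.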

HEADS UP 

The margin of error measures accuracy; it does not measure the amount of bias that may be present. Results that look numerically scientific and precise don't mean anything if they were collected in a biased way.

Confidence interval

When you combine your estimate with the margin of error, you come up with a confidence interval. For example, suppose the average time it takes you to drive to work each day is 35 minutes, with a margin of error of plus or minus 5 minutes. You estimate that the average time to work would be anywhere from 30 to 40 minutes. This estimate is a confidence interval. It takes into account the fact that sample results will vary and gives an indication of how much variation to expect. For more on confidence interval basics, see Chapter 11.

Some confidence intervals are wider than others (and wide isn't good, because it means less accuracy). Several factors influence the width of a confidence interval: the sample size, the amount of variability in the population being studied, and how confident you want to be in your results. (Most researchers are happy with a 95% level of confidence.) For more on factors that influence confidence intervals, see Chapter 12.
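To illustrate how sample size drives the width, here is a minimal sketch using the large-sample formula for a confidence interval for a mean. The commute numbers (mean 35 minutes, standard deviation 10) are hypothetical:

```python
import math

def ci_for_mean(xbar, s, n, z=1.96):
    """Approximate confidence interval for a mean (large-sample z interval)."""
    moe = z * s / math.sqrt(n)
    return xbar - moe, xbar + moe

# Hypothetical commute data: sample mean 35 minutes, standard deviation 10.
# Quadrupling the sample size halves the width of the interval.
for n in (16, 64, 256):
    low, high = ci_for_mean(35, 10, n)
    print(f"n={n:3d}: {low:.1f} to {high:.1f} minutes (width {high - low:.1f})")

# Demanding more confidence (z = 2.58 for roughly 99%) widens the interval.
print(ci_for_mean(35, 10, 64, z=2.58))
```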

TECHNICAL STUFF 

Many different types of confidence intervals are used in scientific research, including confidence intervals for means, proportions, the difference of two means or proportions, and paired differences. For specifics on the most common confidence intervals, see Chapter 13.

Probability versus odds

A probability is a measurement of the likelihood of an event happening. In other words, a probability is the chance that something will happen. For example, if the chance of rain tomorrow is 30%, it's less likely to rain than not rain tomorrow, but the chance of rain is still 3 out of 10. (Given those chances, will you bring your umbrella with you tomorrow?) A chance of rain of 30% also means that over many, many days with the same conditions as those predicted for tomorrow, it rained 30% of the time.

Probabilities are calculated in many different ways:

  • Math is used to grind out the numbers (for example, figuring your chances of winning the lottery or determining the hierarchy of hands in poker).

  • Data are collected, and the probabilities are estimated based on the history of the data (for example, to predict the weather).

  • Complex math and computer models are used to try to predict future behavior and occurrence of natural phenomena (for example, hurricanes and earthquakes).

The laws of probability often go against your intuition and your own beliefs about what you think can happen (that's why casinos stay in business). See Chapter 6 for more on probability.

HEADS UP 

Odds and probability are slightly different. The best way to describe this difference is by looking at an example. Suppose the probability that a certain race horse is going to win the race is 1 out of 10. That means his probability of winning is 1 in 10, or 1 ÷ 10, or 0.10. A probability reflects the chances of winning. Now what are this horse's odds of winning? They are 9 to 1. That's because odds are actually a ratio of the chances of losing to the chances of winning. This horse has a 9 in 10 chance of losing and a 1 in 10 chance of winning. Take 9/10 over 1/10 and the 10s cancel, leaving you with 9/1, which in odds lingo is stated as "9 to 1." For more on gambling, see Chapter 7.
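The conversion described above is mechanical enough to sketch in a couple of lines. This is just the losing-to-winning ratio from the example, not a betting formula:

```python
def prob_to_odds(p):
    """Odds against an event: the ratio of the chance of losing
    to the chance of winning, e.g. 0.10 -> 9 (read "9 to 1")."""
    return (1 - p) / p

# The race horse from the example: probability of winning is 1 in 10.
odds = prob_to_odds(0.10)
print(f"{odds:g} to 1")
```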

The law of averages

You've probably heard people mention the law of averages before. Perhaps it was the local baseball reporter lamenting that his team, which defied the odds by winning 50 games and losing only 12 in the first 3 months of the season, was now starting to lose, giving in to the law of averages. Or maybe the context was gambling ("The law of averages is bound to catch up with me — I'm on too hot of a winning streak!"). What is the law of averages, exactly, and are people using this term properly?

The law of averages is a rule of probability. It says that, in the long term, results will average out to their expected value, but in the short term, no one knows what will happen. For example, casinos set up all of their games so that the chances of the house winning are slightly in their favor. That means that in the long term, as long as people keep playing, the casinos are going to come out ahead, on average. Of course, there will be some winners; that's what keeps people playing, knowing they could be among them. But in the long term, the losers outweigh the winners (not to mention that people who win big often put their money right back into the games and end up losing). For more on the law of averages, see Chapter 7.
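The casino example can be simulated. The sketch below assumes a hypothetical game in which the player wins a dollar with probability 0.48 and loses a dollar otherwise, so the expected winnings per play are -0.04: short runs bounce around, but long runs settle near that value.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def average_winnings(n_plays, p_win=0.48):
    """Average winnings per play in a game with a slight house edge:
    win $1 with probability p_win, otherwise lose $1."""
    total = sum(1 if random.random() < p_win else -1 for _ in range(n_plays))
    return total / n_plays

for n in (10, 1000, 100_000):
    print(f"after {n:6d} plays, average winnings per play: {average_winnings(n):+.3f}")
```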

Hypothesis testing

Hypothesis test is a term you probably haven't run across in your everyday dealings with numbers and statistics. But I guarantee that hypothesis tests have been a big part of your life and your workplace, simply because of the major role they play in industry, medicine, agriculture, government, and a host of other areas. Any time you hear someone talking about their results showing a "statistically significant difference," you're encountering the results of a hypothesis test. Basically, a hypothesis test is a statistical procedure in which data are collected and measured against a claim about a population.

For example, if a pizza delivery chain claims to deliver pizzas within 30 minutes of placing the order, you could test whether this claim is true by collecting a random sample of delivery times over a certain period of time and looking at the average delivery time for that sample.

HEADS UP 

Because your decision is based on a sample and not the entire population, a hypothesis test can sometimes lead you to the wrong conclusion. However, statistics are all you have, and if done properly, they can get as close to the truth as is humanly possible without actually knowing the truth. For more on the basics of hypothesis testing, see Chapter 14.

TECHNICAL STUFF 

A variety of hypothesis tests are done in scientific research, including t-tests, paired t-tests, and tests of proportions or means for one or more populations. For specifics on the most common hypothesis tests, see Chapter 15.

P-value

Hypothesis tests are used to confirm or deny a claim that is made about a population. This claim that's on trial, in essence, is called the null hypothesis. The evidence in the trial is your data and the statistics that go along with it. All hypothesis tests ultimately use a p-value to weigh the strength of the evidence (what the data are telling you about the population). The p-value is a number between 0 and 1 that reflects the strength of the data being used to evaluate the null hypothesis. If the p-value is small, you have strong evidence against the null hypothesis; a large p-value indicates weak evidence against it. For example, if a pizza chain claims to deliver pizzas in less than 30 minutes (this is the null hypothesis), and your random sample of 100 delivery times has an average of 40 minutes (which is more than 2 standard deviations above what the average delivery time is supposed to be), the p-value for this test would be small, and you would say you have strong evidence against the pizza chain's claim.
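As a rough sketch of the pizza example, the following computes a p-value with a one-sided z test under the normal approximation (reasonable for a sample of 100). The standard deviation of 15 minutes is an assumption, since the text doesn't give one:

```python
import math

def z_test_pvalue(xbar, mu0, sigma, n):
    """P-value for testing H0: mean <= mu0 against the alternative
    that the true mean is larger, via the normal approximation."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # Upper-tail normal probability, computed from the complementary
    # error function: P(Z > z) = erfc(z / sqrt(2)) / 2.
    return 0.5 * math.erfc(z / math.sqrt(2))

# Claimed 30-minute delivery; sample of n=100 with mean 40 minutes
# and an assumed standard deviation of 15 minutes.
p = z_test_pvalue(xbar=40, mu0=30, sigma=15, n=100)
print(f"p-value: {p:.2e}")  # tiny: strong evidence against the claim
```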

Statistically significant

Whenever data are collected to perform a hypothesis test, the researcher is typically looking for a significant result. Usually, this means that the researcher has found something out of the ordinary. (Research that simply confirms something that was already well known doesn't make headlines, unfortunately.) A statistically significant result is one that would have had a very small probability of happening just by chance. The p-value reflects that probability.

For example, if a drug is found to be more effective at treating breast cancer than the current treatment is, researchers say that the new drug shows a statistically significant improvement in the survival rate of patients with breast cancer (or something to that effect). That means that based on their data, the difference in the results from patients on the new drug compared to those using the old treatment is so big that it would be hard to say it was just a coincidence.

HEADS UP 

Sometimes, a sample doesn't represent the population (just by chance) and this results in a wrong conclusion. For example, a positive effect that's experienced by a sample of people who took the new treatment may have just been a fluke. (Assume for the moment that you know that the data were not fabricated, fudged, or exaggerated.) The beauty of medical research is that as soon as someone gives a press release saying that he or she found something significant, the rush is on to try to replicate the results, and if the results can't be replicated, this probably means that the original results were wrong, for some reason. Unfortunately, a press release announcing a "major breakthrough" tends to get a lot of play in the media, but follow-up studies refuting those results often don't show up on the front page.

REMEMBER 

One statistically significant result shouldn't lead to quick decisions on anyone's part. In science, what counts is not a single remarkable study, but a body of evidence that is built up over time, along with a variety of well-designed follow-up studies. Take any major breakthroughs you hear about with a grain of salt and wait until the follow-up work has been done before using the information from a single study to make important decisions in your life.
