Experimental Design

This section is adapted from Chapter 1 of OpenIntro Statistics, second edition.

Studies where the researchers assign treatments to cases are called experiments. When this assignment includes randomization, for example using a coin flip to decide which treatment a patient receives, it is called a randomized experiment. Randomized experiments are fundamentally important when trying to show a cause-and-effect connection between two variables.

Principles of experimental design
Randomized experiments are generally built on four principles.

Controlling: Researchers assign treatments to cases, and they do their best to control any other differences among the groups. For example, when patients take a drug in pill form, some patients take the pill with only a sip of water while others may have it with an entire glass of water. To control for water consumption, a doctor may ask all patients to drink a 12-ounce glass of water with the pill.

Randomization: Researchers randomize patients into treatment groups to account for variables that cannot be controlled. For example, some patients may be more susceptible to a disease than others due to their dietary habits. Randomizing patients into the treatment or control group helps even out such differences, and it also prevents accidental bias from entering the study.
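
Randomly assigning cases to two groups can be sketched in a few lines of Python. The `randomize` helper below is a hypothetical illustration, not part of any standard library: it shuffles the list of cases and splits it in half.

```python
import random

def randomize(patients, seed=None):
    """Shuffle the cases, then split them into two equal-sized groups.

    Returns a (treatment, control) pair. The optional seed makes the
    assignment reproducible, which is useful for demonstration.
    """
    rng = random.Random(seed)
    shuffled = list(patients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# 100 hypothetical patient IDs, split 50/50 at random
treatment, control = randomize(range(100), seed=1)
```

Because every arrangement of patients is equally likely after the shuffle, uncontrolled variables (such as dietary habits) tend to be spread evenly across the two groups.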

Replication: The more cases researchers observe, the more accurately they can estimate the effect of the explanatory variable on the response. In a single study, we replicate by collecting a sufficiently large sample. Additionally, a group of scientists may replicate an entire study to verify an earlier finding.
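
The claim that larger samples give more accurate estimates can be checked with a small simulation. The numbers below (a hypothetical population with mean 10 and standard deviation 2) are chosen only for illustration: we repeatedly estimate the population mean from samples of two different sizes and compare how much those estimates vary.

```python
import random
import statistics

# A large hypothetical population with mean 10 and sd 2
rng = random.Random(0)
population = [rng.gauss(10, 2) for _ in range(100_000)]

spread = {}
for n in (10, 1000):
    # Estimate the population mean 200 times using samples of size n
    means = [statistics.mean(rng.sample(population, n)) for _ in range(200)]
    # How much the estimates bounce around from sample to sample
    spread[n] = statistics.stdev(means)
    print(f"n={n:>4}: sample means vary with sd about {spread[n]:.3f}")
```

The estimates based on 1,000 cases cluster much more tightly around the true mean than those based on 10 cases, which is exactly what the replication principle predicts.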

Blocking: Researchers sometimes know or suspect that variables, other than the treatment, influence the response. Under these circumstances, they may first group individuals based on this variable into blocks and then randomize cases within each block to the treatment groups. This strategy is often referred to as blocking. For instance, if we are looking at the effect of a drug on heart attacks, we might first split patients in the study into low-risk and high-risk blocks, then randomly assign half the patients from each block to the control group and the other half to the treatment group. This strategy ensures each treatment group has an equal number of low-risk and high-risk patients.
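
The two-step procedure described above (group into blocks, then randomize within each block) can be sketched directly. The `block_randomize` function and the patient data here are hypothetical, assuming each patient is a pair of an ID and a risk level.

```python
import random

def block_randomize(patients, block_of, seed=None):
    """Split patients into blocks by block_of(patient), then randomly
    assign half of each block to treatment and half to control."""
    rng = random.Random(seed)
    blocks = {}
    for p in patients:
        blocks.setdefault(block_of(p), []).append(p)
    treatment, control = [], []
    for members in blocks.values():
        rng.shuffle(members)           # randomize within the block
        half = len(members) // 2
        treatment += members[:half]
        control += members[half:]
    return treatment, control

# 40 hypothetical patients: (id, risk level), 20 high-risk and 20 low-risk
patients = [(i, "high" if i % 2 else "low") for i in range(40)]
treatment, control = block_randomize(patients, block_of=lambda p: p[1], seed=7)
```

Unlike simple randomization, this guarantees (rather than merely makes likely) that the treatment and control groups contain the same number of low-risk and high-risk patients.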

It is important to incorporate the first three experimental design principles into any study, and this course includes applicable methods for analyzing data from such experiments. Blocking is a slightly more advanced technique, and statistical methods in this course may be extended to analyze data collected using blocking.

Reducing bias in human experiments
Randomized experiments are the gold standard for data collection, but they do not guarantee an unbiased view of cause-and-effect relationships in all cases. Human studies are prime examples of settings where bias can arise unintentionally. Here we reconsider a study in which a new drug was used to treat heart attack patients. In particular, a group of researchers wanted to know whether the drug reduced deaths among patients.

The researchers designed a randomized experiment because they wanted to draw causal conclusions about the drug's effect. Study volunteers were randomly placed into two study groups. One group, the treatment group, received the drug. The other group, called the control group, did not receive any drug treatment.

Put yourself in the place of a person in the study. If you are in the treatment group, you are given a fancy new drug that you anticipate will help you. On the other hand, a person in the other group doesn't receive the drug and sits idly, hoping her participation doesn't increase her risk of death. These perspectives suggest there are actually two effects: the one of interest is the effectiveness of the drug, and the second is an emotional effect that is difficult to quantify.

Researchers aren't usually interested in the emotional effect, which might bias the study. To circumvent this problem, researchers do not want patients to know which group they are in. When researchers keep the patients uninformed about their treatment, the study is said to be blind. But there is one problem: if a patient doesn't receive a treatment, she will know she is in the control group. The solution to this problem is to give fake treatments to patients in the control group. A fake treatment is called a placebo, and an effective placebo is the key to making a study truly blind. A classic example of a placebo is a sugar pill that is made to look like the actual treatment pill. Often, a placebo results in a slight but real improvement among patients in the control group. This effect has been dubbed the placebo effect.

The patients are not the only ones who should be blinded: doctors and researchers can accidentally bias a study. When a doctor knows a patient has been given the real treatment, she might inadvertently give that patient more attention or care than a patient who she knows is on the placebo. To guard against this bias, which again has been found to have a measurable effect in some instances, most modern studies employ a double-blind setup where doctors or researchers who interact with patients are, just like the patients, unaware of who is or is not receiving the treatment.

Exercises

1. Mason-Dixon, a nonpartisan polling firm based in Jacksonville, Florida, conducted a phone survey of 800 registered Florida voters (whom the company deemed likely to vote in the November 2012 election) on behalf of the Tampa Bay Times, Miami Herald, El Nuevo Herald, Bay News 9 and Central Florida News 13. The poll, conducted Sept. 17–19, found that 46% of those surveyed plan to vote to reelect President Barack Obama, while 45% plan to vote for his Republican rival, former Massachusetts governor Mitt Romney.

a) Was this an observational study or an experiment?

b) Was randomization employed in this study?

2. On September 21, 2012, The New England Journal of Medicine published an article online ("A Randomized Trial of Sugar-Sweetened Beverages and Adolescent Body Weight") reporting the results of research conducted by a team led by Dr. David S. Ludwig of Boston Children's Hospital. The article reports that after one year, the average weight gain for study participants who consumed non-sugary beverages was 1.9 kg less than the average for those who did not modify their beverage consumption.

a) Download the article from the link above and read the abstract on the first page. (Don't worry about terms you don't understand; you'll become familiar with many of them by the end of this course.) Was this an observational study or an experiment?

b) Identify, if possible, explanatory and response variables for this study.

c) Was randomization employed in this study?