Controlling for a variable
In statistics, controlling for a variable is the attempt to reduce the effect of confounding variables in an observational study or experiment. It means that when looking at the effect of one variable, the effects of all other predictor variables are taken into account, either by holding the other variables at fixed values (in an experiment) or by including them in a regression to separate their effects from those of the explanatory variable of interest (in an observational study).
Experiments attempt to assess the effect of manipulating one or more independent variables on one or more dependent variables. To ensure the measured effect is not influenced by external factors, other variables must be held constant. These variables that are made to remain constant during an experiment are referred to as the control variables.
For example, in an outdoor experiment comparing how different wing designs of a paper airplane (the independent variable) affect how far it can fly (the dependent variable), one would want to conduct the trials under the same weather conditions, because weather should not be allowed to influence the outcome. In this case, the control variables might be wind speed, wind direction, and precipitation. If the experiment began when it was sunny with no wind but the weather then changed, one would postpone the remaining trials until the control variables (wind and precipitation) matched the conditions at the start.
In controlled experiments of medical treatment options on humans, researchers randomly assign individuals to a treatment group or control group. This is done to reduce the confounding effect of irrelevant variables that are not being studied, such as the placebo effect.
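The random-assignment step described above can be sketched as follows. This is a minimal illustration with hypothetical participant IDs, not a description of any particular study's protocol:

```python
# Minimal sketch of random assignment to treatment and control groups.
# Randomization balances unmeasured confounders between groups on average,
# which is why any remaining difference can be attributed to the treatment.
import random

participants = list(range(20))   # hypothetical participant IDs (assumption)
random.seed(42)                  # fixed seed so the sketch is reproducible
random.shuffle(participants)     # random order, independent of any covariate

half = len(participants) // 2
treatment = participants[:half]  # first half receives the treatment
control = participants[half:]    # second half serves as the control group
```

Because assignment depends only on the shuffle and not on any characteristic of the participants, factors such as the placebo effect are expected to affect both groups equally.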
Observational studies are used when controlled experiments may be unethical or impractical. For instance, if a researcher wished to study the effect of unemployment (the independent variable) on health (the dependent variable), it would be considered unethical by most institutional review boards to randomly assign some participants to have jobs and some not to. Instead, the researcher will have to create a sample where some people are employed and some are unemployed. However, there could be factors that affect both whether someone is employed and how healthy he or she is. Any observed association between the independent variable and the dependent variable could be due instead to these outside, spurious factors rather than indicating a true link between them. This can be problematic even in a true random sample. By controlling for the extraneous variables, the researcher can come closer to understanding the true effect of the independent variable on the dependent variable.
In this context the extraneous variables can be controlled for by using multiple regression. The regression uses as independent variables not only the one or ones whose effects on the dependent variable are being studied, but also any potential confounding variables, thus avoiding omitted variable bias.
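The effect of including a confounder in the regression can be seen in a small simulation. This is a sketch with synthetic data (the variable names and coefficients are assumptions for illustration): omitting the confounder biases the estimated effect of the variable of interest, while including it recovers the true coefficient:

```python
# Multiple regression with and without a confounder, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                       # confounder: affects both x and y
x = z + rng.normal(size=n)                   # explanatory variable of interest
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true effect of x on y is 2.0

# Naive regression of y on x alone: omitted variable bias,
# because z is correlated with both x and y.
X_naive = np.column_stack([np.ones(n), x])
b_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)

# Multiple regression that controls for z by including it as a regressor.
X_ctrl = np.column_stack([np.ones(n), x, z])
b_ctrl, *_ = np.linalg.lstsq(X_ctrl, y, rcond=None)

print(f"naive slope on x:      {b_naive[1]:.2f}")   # biased upward (about 3.5)
print(f"controlled slope on x: {b_ctrl[1]:.2f}")    # close to the true 2.0
```

The naive estimate absorbs part of the confounder's effect on the dependent variable; adding the confounder as an independent variable separates the two effects, which is exactly the sense in which the regression "controls for" it.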