Non-response bias occurs in statistical surveys when the answers of respondents differ from the potential answers of those who did not respond. It can arise from several factors, as outlined in Deming (1990) and Mittal (2015).
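Under a deterministic view of non-response, the bias of the respondent mean equals the proportion of non-respondents multiplied by the difference between the respondent and non-respondent means. A minimal numeric sketch (all figures hypothetical):

```python
# Deterministic decomposition of non-response bias:
#   bias = (proportion of non-respondents) * (respondent mean - non-respondent mean)
# All numbers below are hypothetical.

def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    """Bias of the respondent mean relative to the full-population mean."""
    return (1 - response_rate) * (mean_respondents - mean_nonrespondents)

# Example: 40% respond; respondents report 45 h/week of work,
# non-respondents would have averaged 55 h/week.
bias = nonresponse_bias(0.40, 45.0, 55.0)
print(bias)  # -6.0: the survey understates the true mean workload by 6 hours
```

Note that the bias vanishes either when everyone responds (response rate 1) or when respondents and non-respondents happen to have the same mean.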
Suppose one selects a sample of 1,000 managers in a field and polls them about their workload. Managers with a high workload may not answer the survey because they do not have enough time, while those with a low workload may decline for fear that their supervisors or colleagues will perceive them as unnecessary (either immediately, if the survey is non-anonymous, or in the future, should their anonymity be compromised). Non-response bias may therefore make the measured workload too low, too high, or, if these effects happen to offset each other, "right for the wrong reasons." For a simple example of this effect, consider a survey that includes, "Agree or disagree: I have enough time in my day to complete a survey."
In the 1936 U.S. presidential election, The Literary Digest mailed out 10 million questionnaires, of which 2.3 million were returned. On this basis, it predicted that Republican Alf Landon would win with 370 of 531 electoral votes; he actually received 8. Research published in 1976 and 1988 concluded that non-response bias was the primary source of this error, although the Digest's sampling frame also differed markedly from the electorate at large.
There are several ways to test for non-response bias. A common technique compares the first and fourth quartiles of responses (early versus late respondents) for differences in demographics and key constructs, on the assumption that late respondents resemble non-respondents. In e-mail surveys, some values (e.g., age or branch of the firm) are already known for all potential participants and can be compared with the values prevailing in the subgroup of those who answered. If there is no significant difference, this is an indicator that non-response bias may be absent.
In e-mail surveys, those who did not answer can also be phoned systematically and asked a small number of the survey questions. If their answers do not differ significantly from those of the survey respondents, non-response bias may be negligible. This technique is sometimes called non-response follow-up.
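The comparisons described above reduce to a two-sample test on an attribute known for both respondents and non-respondents (or collected in the follow-up). A minimal sketch with hypothetical data, using Welch's t statistic computed from the standard library (in practice a statistical package would be used):

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference between two sample means."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = (var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5
    return (mean_a - mean_b) / standard_error

# Hypothetical ages, known for everyone in the sampling frame.
ages_respondents = [34, 41, 29, 38, 45, 33, 40, 36]
ages_nonrespondents = [52, 48, 55, 47, 50, 49, 53, 46]

t = welch_t(ages_respondents, ages_nonrespondents)
# A large |t| (here well beyond the rough critical value of about 2)
# indicates that respondents differ systematically from non-respondents
# on this attribute -- a warning sign for non-response bias.
print(abs(t) > 2)  # True
```

A non-significant difference on known attributes does not prove the absence of bias, since respondents may still differ on the survey's target variables; it merely fails to detect one.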
Generally speaking, the lower the response rate, the greater the likelihood that non-response bias is at play.
Self-selection bias is a type of bias in which individuals voluntarily select themselves into a group, thereby potentially biasing the response of that group.
Participation bias is bias that arises due to the characteristics of those who choose to participate in a survey or poll.
Response bias is not the opposite of non-response bias, but instead relates to a possible tendency of respondents to give inaccurate or untruthful answers for various reasons.
- Deming, W. Edwards (1990). Sample Design in Business Research. Vol. 23. John Wiley & Sons.
- Mittal, Vikas (2015). "Sample Design for Customer-Focused Research". SSRN: http://ssrn.com/abstract=2638086
- Armstrong, J.S.; Overton, T. (1977). "Estimating Nonresponse Bias in Mail Surveys". Journal of Marketing Research 14 (3): 396–402.
- Special issue of Public Opinion Quarterly (Volume 70, Issue 5) about "Nonresponse Bias in Household Surveys": http://poq.oxfordjournals.org/content/70/5.toc