A Likert scale (pronounced LIK-ərt, though more commonly pronounced LY-kərt) is a psychometric scale commonly involved in research that employs questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term (or more accurately the Likert-type scale) is often used interchangeably with rating scale, even though the two are not synonymous.
The scale is named after its inventor, psychologist Rensis Likert. Likert distinguished between a scale proper, which emerges from collective responses to a set of items (usually eight or more), and the format in which responses are scored along a range. Technically speaking, a Likert scale refers only to the latter. The difference between these two concepts has to do with the distinction Likert made between the underlying phenomenon being investigated and the means of capturing variation that points to the underlying phenomenon.
When responding to a Likert questionnaire item, respondents specify their level of agreement or disagreement on a symmetric agree-disagree scale for a series of statements. Thus, the range captures the intensity of their feelings for a given item.
A scale can be created as the simple sum of questionnaire responses over the full range of the scale. In so doing, Likert scaling assumes distances between each response option are equal. Importantly, "All items are assumed to be replications of each other or in other words items are considered to be parallel instruments" (p. 197). By contrast, modern test theory treats the difficulty of each item (the ICCs) as information to be incorporated in scaling items.
A Likert scale is the sum of responses on several Likert items. Because many Likert scales pair each constituent Likert item with its own instance of a visual analogue scale (e.g., a horizontal line, on which a subject indicates his or her response by circling or checking tick-marks), an individual item is itself sometimes erroneously referred to as a scale, with this error creating pervasive confusion in the literature and parlance of the field.
A Likert item is simply a statement that the respondent is asked to evaluate by giving it a quantitative value on any kind of subjective or objective dimension, with level of agreement/disagreement being the dimension most commonly used. Well-designed Likert items exhibit both "symmetry" and "balance". Symmetry means that they contain equal numbers of positive and negative positions whose respective distances apart are bilaterally symmetric about the "neutral"/zero value (whether or not that value is presented as a candidate). Balance means that the distance between each candidate value is the same, allowing for quantitative comparisons such as averaging to be valid across items containing more than two candidate values. Often five ordered response levels are used, although many psychometricians advocate using seven or nine levels; an empirical study found that items with five or seven levels may produce slightly higher mean scores relative to the highest possible attainable score, compared to those produced from the use of 10 levels, and this difference was statistically significant. In terms of the other data characteristics, there was very little difference among the scale formats in terms of variation about the mean, skewness or kurtosis.
The format of a typical five-level Likert item, for example, could be:
- Strongly disagree
- Disagree
- Neither agree nor disagree
- Agree
- Strongly agree
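In analysis, these verbal labels are conventionally coded as consecutive integers. A minimal sketch (the specific values are a researcher's convention, not part of the method itself):

```python
# Conventional 1-5 integer coding for a five-level agree-disagree item.
# The values themselves are arbitrary; only their order is meaningful.
LIKERT_5 = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def code_response(label: str) -> int:
    """Map a verbal response label to its conventional integer code."""
    return LIKERT_5[label]

print(code_response("Agree"))  # 4
```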
Likert scaling is a bipolar scaling method, measuring either positive or negative response to a statement. Sometimes an even-point scale is used, where the middle option of "Neither agree nor disagree" is not available. This is sometimes called a "forced choice" method, since the neutral option is removed. The neutral option can be seen as an easy option to take when a respondent is unsure, and so whether it is a true neutral option is questionable. A 1987 study found negligible differences between the use of "undecided" and "neutral" as the middle option in a 5-point Likert scale.
Likert scales may be subject to distortion from several causes. Respondents may:
- Avoid using extreme response categories (central tendency bias), especially out of a desire to avoid being perceived as having extremist views (an instance of social desirability bias). Early in a test, respondents may also expect that questions about which they hold stronger views will follow, and so "leave room" for stronger responses later in the test; this expectation creates a bias that is especially pernicious because its effects are not uniform throughout the test and cannot be corrected through simple across-the-board normalization;
- Agree with statements as presented (acquiescence bias), with this effect especially strong among persons, such as children, developmentally disabled persons, and the elderly or infirm, who are subjected to a culture of institutionalization that encourages and incentivizes eagerness to please;
- Disagree with sentences as presented out of a defensive desire to avoid making erroneous statements and/or avoid negative consequences that respondents may fear will result from their answers being used against them, especially if misinterpreted and/or taken out of context;
- Provide answers that they believe will be evaluated as indicating strength or lack of weakness/dysfunction ("faking good");
- Provide answers that they believe will be evaluated as indicating weakness or presence of impairment/pathology ("faking bad");
- Try to portray themselves or their organization in a light that they believe the examiner or society to consider more favorable than their true beliefs (social desirability bias, the intersubjective version of objective "faking good" discussed above);
- Try to portray themselves or their organization in a light that they believe the examiner or society to consider less favorable / more unfavorable than their true beliefs (norm defiance, the intersubjective version of objective "faking bad" discussed above).
Designing a scale with balanced keying (an equal number of positive and negative statements and, especially, an equal number of positive and negative statements regarding each position or issue in question) can obviate the problem of acquiescence bias, since acquiescence on positively keyed items will balance acquiescence on negatively keyed items, but defensive, central tendency, and social desirability biases are somewhat more problematic.
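For balanced keying to cancel acquiescence, negatively keyed items must be reverse-scored before summing, so that uniform agreement inflates some coded values and deflates others. A common sketch of the reverse-scoring arithmetic (the function name is illustrative):

```python
def reverse_score(x: int, low: int = 1, high: int = 5) -> int:
    """Reverse-score a response on a low..high scale: 1<->5, 2<->4, 3 stays."""
    return low + high - x

# A respondent who acquiesces, answering "Agree" (coded 4) to everything:
positive_item = 4                 # positively keyed: coded as-is
negative_item = reverse_score(4)  # negatively keyed: becomes 2
# Summed, the upward and downward distortions partially cancel.
print(positive_item + negative_item)  # 6, the midpoint for two 1-5 items
```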
Scoring and analysis
After the questionnaire is completed, each item may be analyzed separately or in some cases item responses may be summed to create a score for a group of items. Hence, Likert scales are often called summative scales.
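The summative scoring described above can be sketched as follows (the responses are hypothetical data):

```python
# Hypothetical responses of one person to a four-item scale, coded 1-5.
responses = [4, 5, 3, 4]

scale_score = sum(responses)               # summative (Likert) scale score
mean_score = scale_score / len(responses)  # often reported on the item range
print(scale_score, mean_score)  # 16 4.0
```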
Whether individual Likert items can be considered interval-level data, or should be treated as ordered-categorical data, is the subject of considerable disagreement in the literature, with strong convictions about the most applicable methods. This disagreement can be traced back, in many respects, to the extent to which Likert items are interpreted as ordinal data.
There are two primary considerations in this discussion. First, Likert scales are arbitrary. The value assigned to a Likert item has no objective numerical basis, either in terms of measure theory or scale (from which a distance metric can be determined). The value assigned to each Likert item is simply determined by the researcher designing the survey, who makes the decision based on a desired level of detail. However, by convention Likert items tend to be assigned progressive positive integer values. The number of response levels typically ranges from 2 to 10, with 5 or 7 being the most common. Further, this progressive structure of the scale is such that each successive value is treated as indicating a ‘better’ response than the preceding one. (This may differ in cases where reverse ordering of the Likert scale is needed.)
The second, and possibly more important, point is whether the ‘distance’ between each successive response category is equivalent, as is traditionally inferred. For example, in the above five-point Likert item, the inference is that the ‘distance’ between categories 1 and 2 is the same as between categories 3 and 4. In terms of good research practice, an equidistant presentation by the researcher is important; otherwise a bias in the analysis may result. For example, a four-point Likert item with categories "Poor", "Average", "Good", and "Very Good" is unlikely to have equidistant categories, since only one category can receive a below-average rating. This would arguably bias any result in favor of a positive outcome. On the other hand, even if a researcher presents what he or she believes are equidistant categories, the respondent may not interpret them as such.
A good Likert scale, as above, will present a symmetry of categories about a midpoint with clearly defined linguistic qualifiers. In such symmetric scaling, equidistant attributes will typically be more clearly observed or, at least, inferred. It is when a Likert scale is symmetric and equidistant that it will behave more like an interval-level measurement. So while a Likert scale is indeed ordinal, if well presented it may nevertheless approximate an interval-level measurement. This can be beneficial since, if it were treated just as an ordinal scale, some valuable information could be lost if the ‘distance’ between Likert items were not available for consideration. The important idea here is that the appropriate type of analysis is dependent on how the Likert scale has been presented.
Notions of central tendency are often applicable at the item level; that is, responses often show a quasi-normal distribution. The validity of such measures depends on the underlying interval nature of the scale.
Responses to several Likert questions may be summed providing that all questions use the same Likert scale and that the scale is a defensible approximation to an interval scale, in which case the Central Limit Theorem allows treatment of the data as interval data measuring a latent variable. If the summed responses fulfill these assumptions, parametric statistical tests such as the analysis of variance can be applied. A typical cutoff for treating this approximation as acceptable is a minimum of 4, and preferably 8, items in the sum.
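A minimal sketch of such a parametric comparison on summed scores, using hypothetical data for two groups (with two groups, one-way ANOVA is equivalent to an independent-samples t-test, computed here with the standard library for self-containment):

```python
from statistics import mean, variance
from math import sqrt

# Hypothetical summed scale scores (8 items coded 1-5, range 8-40) per group.
group_a = [28, 31, 25, 33, 29, 30, 27, 32]
group_b = [22, 24, 26, 21, 25, 23, 27, 20]

def t_statistic(a, b):
    """Independent-samples t statistic with pooled (equal) variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))

print(round(t_statistic(group_a, group_b), 2))  # 4.59
```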
To model binary Likert responses directly, they may be represented in a binomial form by summing agree and disagree responses separately. The chi-squared, Cochran's Q, or McNemar test are common statistical procedures used after this transformation. Non-parametric tests such as the chi-squared test, Mann–Whitney test, Wilcoxon signed-rank test, or Kruskal–Wallis test are often used in the analysis of Likert scale data.
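The binomial collapse and a subsequent chi-squared test of independence can be sketched as follows; the response data, the cutoffs for agreement, and the decision to drop neutral responses are all illustrative choices:

```python
# Collapse hypothetical 1-5 Likert responses to agree (4-5) vs disagree (1-2);
# neutral responses (3) are dropped in this sketch.
def collapse(responses):
    agree = sum(1 for r in responses if r >= 4)
    disagree = sum(1 for r in responses if r <= 2)
    return agree, disagree

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

group_a = [5, 4, 4, 2, 5, 4, 1, 5]
group_b = [2, 1, 4, 2, 1, 2, 5, 1]
a, b = collapse(group_a)  # (6, 2)
c, d = collapse(group_b)  # (2, 6)
print(round(chi2_2x2(a, b, c, d), 2))  # 4.0
```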
Consensus-based assessment (CBA) can be used to create an objective standard for Likert scales in domains where no generally accepted or objective standard exists, and to refine or even validate generally accepted standards.
Visual presentation of Likert-type data
An important part of data analysis and presentation is the visualization (or plotting) of data. The subject of plotting Likert (and other) rating scales is discussed at length in a paper by Robbins and Heiberger. They recommend the use of what they call diverging stacked bar charts.
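The essence of a diverging stacked bar chart is computing segment offsets so that each item's bar straddles zero, with disagreement extending left, agreement extending right, and the neutral category split across the middle. A minimal sketch of that layout computation (not Robbins and Heiberger's own code; the percentages are hypothetical):

```python
def diverging_offsets(percents):
    """Given category percentages ordered from strongly-disagree to
    strongly-agree (odd length, middle category neutral), return
    (start, width) pairs so that negative categories extend left of
    zero and the neutral category straddles it."""
    mid = len(percents) // 2
    # Total mass left of zero: all negative categories plus half the neutral.
    start = -(sum(percents[:mid]) + percents[mid] / 2)
    segments = []
    for p in percents:
        segments.append((start, p))
        start += p
    return segments

# One item: 10% SD, 20% D, 20% neutral, 30% A, 20% SA.
print(diverging_offsets([10, 20, 20, 30, 20]))
```

Each (start, width) pair can then be fed to any horizontal bar-plotting routine; the neutral segment here runs from -10 to +10, centered on zero.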
Level of measurement
The five response categories are often believed to represent an interval level of measurement. But this can only be the case if the intervals between the scale points correspond to empirical observations in a metric sense. Reips and Funke (2008) show that this criterion is much better met by a visual analogue scale. Indeed, phenomena may appear that call even the ordinal level of Likert scales into question. For example, in a set of items A, B, C rated with a Likert scale, circular relations such as A > B, B > C and C > A can appear. This violates the axiom of transitivity for an ordinal scale.
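Such transitivity violations can be detected mechanically. A small sketch over hypothetical pairwise "rated higher than" relations:

```python
from itertools import permutations

def has_cycle(prefs):
    """Return True if the pairwise preferences contain a 3-cycle such as
    A>B, B>C, C>A, which violates transitivity (an ordinal-scale axiom).
    `prefs` is a set of (higher, lower) pairs."""
    items = {x for pair in prefs for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs:
            return True
    return False

print(has_cycle({("A", "B"), ("B", "C"), ("C", "A")}))  # True: circular
print(has_cycle({("A", "B"), ("B", "C"), ("A", "C")}))  # False: transitive
```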
Research by Labovitz and Traylor provides evidence that, even with rather large distortions of perceived distances between scale points, Likert-type items perform closely to scales that are perceived as equal intervals. So these items and other equal-appearing scales in questionnaires are robust to violations of the equal distance assumption that many researchers believe is required for parametric statistical procedures and tests.
Munshi has shown that the equal interval assumption may not be valid and that careful construction of the scale paying attention to both the number of choices and their placement on the scale (and therefore their weight) may be necessary if the data are to be treated as interval data.
Likert scale data can, in principle, be used as a basis for obtaining interval level estimates on a continuum by applying the polytomous Rasch model, when data can be obtained that fit this model. In addition, the polytomous Rasch model permits testing of the hypothesis that the statements reflect increasing levels of an attitude or trait, as intended. For example, application of the model often indicates that the neutral category does not represent a level of attitude or trait between the disagree and agree categories.
Again, not every set of Likert scaled items can be used for Rasch measurement. The data has to be thoroughly checked to fulfill the strict formal axioms of the model.
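Under the polytomous Rasch (rating scale) model, the probability of responding in category k depends on the person's trait level θ, the item difficulty δ, and a set of ordered category thresholds τ. A minimal sketch of the category probabilities, with illustrative parameter values (this computes the model's response probabilities, not a full estimation or fit check):

```python
from math import exp

def rating_scale_probs(theta, delta, taus):
    """Category probabilities under the polytomous Rasch (rating scale)
    model: P(X = k) is proportional to exp(sum_{j<=k}(theta - delta - tau_j)),
    with an empty sum (exp(0) = 1) for k = 0. `taus` are the thresholds."""
    numers = [1.0]  # k = 0
    s = 0.0
    for tau in taus:
        s += theta - delta - tau
        numers.append(exp(s))
    total = sum(numers)
    return [n / total for n in numers]

# Illustrative: five categories (four thresholds), person at theta = 0.5,
# item difficulty delta = 0, equally spaced ordered thresholds.
probs = rating_scale_probs(0.5, 0.0, [-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])
```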
Rensis Likert, the developer of the scale, pronounced his name LIK-ərt. Some have claimed that Likert's name "is among the most mispronounced in [the] field", because many people pronounce the name of the scale as LY-kərt.
- Wuensch, Karl L. (October 4, 2005). "What is a Likert Scale? and How Do You Pronounce 'Likert?'". East Carolina University. Retrieved April 30, 2009.
- Likert, Rensis (1932). "A Technique for the Measurement of Attitudes". Archives of Psychology. 140: 1–55.
- Carifio, James and Rocco J. Perla. (2007) "Ten Common Misunderstandings, Misconceptions, Persistent Myths and Urban Legends about Likert Scales and Likert Response Formats and their Antidotes." Journal of Social Sciences 3 (3): 106-116
- Burns, Alvin; Burns, Ronald (2008). Basic Marketing Research (Second ed.). New Jersey: Pearson Education. p. 245. ISBN 978-0-13-205958-9.
- A. van Alphen, R. Halfens, A. Hasman and T. Imbos. (1994). Likert or Rasch? Nothing is more applicable than good theory. Journal of Advanced Nursing. 20, 196-201
- Burns, Alvin; Burns, Ronald (2008). Basic Marketing Research (Second ed.). New Jersey: Pearson Education. p. 250. ISBN 978-0-13-205958-9.
- Dawes, John (2008). "Do Data Characteristics Change According to the number of scale points used? An experiment using 5-point, 7-point and 10-point scales". International Journal of Market Research. 50 (1): 61–77.
- Allen, Elaine and Seaman, Christopher (2007). "Likert Scales and Data Analyses". Quality Progress. pp. 64–65.
- Armstrong, Robert (1987). "The midpoint on a Five-Point Likert-Type Scale". Perceptual and Motor Skills. 64 (2): 359–362. doi:10.2466/pms.1987.64.2.359.
- Jamieson, Susan (2004). "Likert Scales: How to (Ab)use Them". Medical Education. 38 (12): 1217–1218.
- Norman, Geoff (2010). "Likert scales, levels of measurement and the 'laws' of statistics". Advances in Health Science Education. 15 (5): 625–632.
- Mogey, Nora (March 25, 1999). "So You Want to Use a Likert Scale?". Learning Technology Dissemination Initiative. Heriot-Watt University. Retrieved April 30, 2009.
- B Robbins, Naomi; M Heiberger, Richard (2011). "Plotting Likert and Other Rating Scales". JSM 2011: 1058–1066.
- Reips, Ulf-Dietrich; Funke, Frederik (2008). "Interval level measurement with visual analogue scales in Internet-based research: VAS Generator". Behavior Research Methods. 40 (3): 699–704. doi:10.3758/BRM.40.3.699. PMID 18697664.
- Johanson, George A.; Gips, Crystal J. (1993). "Paired Comparison Intransitivity: Useful Information or Nuisance?" (PDF). Paper presented at the Annual Meeting of the American Educational Research Association (Atlanta, GA, April 12–16, 1993).
- Labovitz, S (1967). "Some observations on measurement and statistics". Social Forces. 46: 151–160. doi:10.2307/2574595.
- Traylor, Mark (October 1983). "Ordinal and interval scaling". Journal of the Market Research Society. 25 (4): 297–303.
- Munshi, Jamal. "A Method for Constructing Likert Scales". ssrn.com. Retrieved 4 April 2014.
- Babbie, Earl R. (2005). The Basics of Social Research. Belmont, CA: Thomson Wadsworth. p. 174. ISBN 0-534-63036-7.
- Meyers, Lawrence S.; Anthony Guarino; Glenn Gamst (2005). Applied Multivariate Research: Design and Interpretation. Sage Publications. p. 20. ISBN 1-4129-0412-9.
- Latham, Gary P. (2006). Work Motivation: History, Theory, Research, And Practice. Thousand Oaks, Calif.: Sage Publications. p. 15. ISBN 0-7619-2018-8.
- Trochim, William M. (October 20, 2006). "Likert Scaling". Research Methods Knowledge Base, 2nd Edition. Retrieved April 30, 2009.
- Uebersax, John S. (2006). "Likert Scales: Dispelling the Confusion". Retrieved August 17, 2009.
- "A search for the optimum feedback scale". Getfeedback.