Cronbach's alpha

From Wikipedia, the free encyclopedia

Cronbach's alpha (Cronbach's α), also known as tau-equivalent reliability (ρ_T) or coefficient alpha (coefficient α), is a reliability coefficient that provides a method of measuring the internal consistency of tests and measures.[1][2][3] Numerous studies warn against using it unconditionally, and note that reliability coefficients based on structural equation modeling (SEM) or generalizability theory are suitable alternatives in many situations.[4][5][6][7][8][9]


Cronbach (1951)

Like several earlier studies,[10][11][12][13] Cronbach (1951)[14] published a method of deriving what is now called Cronbach's alpha. His interpretation was more intuitively attractive than those of previous studies and became quite popular.[15]

After 1951

Novick and Lewis (1967)[16] proved the necessary and sufficient condition for α to be equal to reliability and named it the condition of being essentially tau-equivalent.

Cronbach (1978)[2]: 263  mentioned that the reason Cronbach (1951) received a lot of citations was "mostly because [he] put a brand name on a common-place coefficient".[3] He explained that he had originally planned to name other types of reliability coefficients (e.g., inter-rater reliability or test-retest reliability) after consecutive Greek letters (e.g., β, γ, δ), but later changed his mind.

Cronbach and Shavelson (2004)[9] encouraged readers to use generalizability theory rather than α. Cronbach opposed the use of the name Cronbach's alpha, and explicitly denied the existence of studies that had published the general formula of KR-20 prior to Cronbach (1951).

Prerequisites for using Cronbach's alpha

In order to use Cronbach’s alpha as a reliability coefficient, the data from the measure must satisfy the following conditions:[17][18]

  1. The data are normally distributed and linearly related
  2. Tau-equivalence (essential)
  3. Independence between errors

Formula and calculation

Cronbach’s alpha is calculated by taking the score from each scale item, correlating it with the total score for each observation, and then comparing that with the variance of all individual item scores. Cronbach’s alpha is best understood as a function of the number of questions or items in a measure, the average covariance between pairs of items,[19] and the overall variance of the total measured score:[20]

α = (k / (k − 1)) × (1 − Σᵢ σ²ᵢ / σ²_X)

where:

  • k is the number of items in the measure
  • σ²ᵢ is the variance associated with item i
  • σ²_X is the variance associated with the total scores
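The formula translates directly into code. Below is a minimal sketch of the calculation (the function name and the sample scores are illustrative, not from any particular package):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (observations x items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Four respondents answering three related items on a 1-5 scale
scores = [[2, 3, 3],
          [4, 4, 5],
          [1, 2, 2],
          [5, 4, 5]]
print(round(cronbach_alpha(scores), 3))  # 0.958
```

Note the use of sample variances (ddof=1); some software uses population variances, which changes nothing here because the correction cancels between numerator and denominator.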

Common misconceptions[7]

The value of Cronbach's alpha ranges between zero and one

By definition, reliability cannot be less than zero and cannot be greater than one. Many textbooks mistakenly equate α with reliability and give an inaccurate explanation of its range. α can be less than reliability when applied to data that are not tau-equivalent. Suppose that X₂ copied the value of X₁ as it is, and X₃ copied the value of X₁ multiplied by −1. The covariance matrix between the items is as follows, and α = −3.

Observed covariance matrix (unit item variances):

  X₁:  1   1  −1
  X₂:  1   1  −1
  X₃: −1  −1   1

Negative α can occur for reasons such as negative discrimination or mistakes in processing reversely scored items.
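Because α depends only on the item variances and covariances, the negative value above can be checked directly from the covariance matrix using the equivalent form α = (k / (k − 1)) × (1 − trace(C) / sum(C)). A sketch (the function name is illustrative; the unit item variances do not matter, since α is invariant to rescaling the whole matrix):

```python
import numpy as np

def alpha_from_cov(C):
    """Cronbach's alpha from a k x k item covariance matrix."""
    C = np.asarray(C, dtype=float)
    k = C.shape[0]
    return (k / (k - 1)) * (1 - np.trace(C) / C.sum())

# X2 = X1 and X3 = -X1 (a reverse-scored copy), each with unit variance
C = [[ 1,  1, -1],
     [ 1,  1, -1],
     [-1, -1,  1]]
print(alpha_from_cov(C))  # -3.0
```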

Unlike α, SEM-based reliability coefficients (e.g., ρ_C) are always greater than or equal to zero.

This anomaly was first pointed out by Cronbach (1943)[21] to criticize α, but Cronbach (1951)[14] did not comment on this problem in his article, which otherwise discussed all conceivable issues related to α.[9]: 396 

If there is no measurement error, the value of Cronbach's alpha is one

This anomaly also originates from the fact that α underestimates reliability. Suppose that X₂ copied the value of X₁ as it is, and X₃ copied the value of X₁ multiplied by two. The covariance matrix between the items is as follows.

Observed covariance matrix (unit variance for X₁):

  X₁: 1  1  2
  X₂: 1  1  2
  X₃: 2  2  4

For the above data, both the congeneric reliability ρ_C and the true reliability have a value of one, while α = 0.9375 is less than one despite the absence of measurement error.
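This underestimation can be verified numerically from the covariance matrix above, using the covariance-matrix form of α (the function name is illustrative):

```python
import numpy as np

def alpha_from_cov(C):
    """Cronbach's alpha from a k x k item covariance matrix:
    alpha = k/(k-1) * (1 - trace(C) / sum(C))."""
    C = np.asarray(C, dtype=float)
    k = C.shape[0]
    return (k / (k - 1)) * (1 - np.trace(C) / C.sum())

# X2 = X1 and X3 = 2*X1: perfectly deterministic, error-free items,
# but not tau-equivalent, so alpha falls short of the true reliability of 1
C = [[1, 1, 2],
     [1, 1, 2],
     [2, 2, 4]]
print(alpha_from_cov(C))  # 0.9375
```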

The above example is presented by Cho and Kim (2015).[7]

A high value of Cronbach's alpha indicates homogeneity between the items

Many textbooks refer to α as an indicator of homogeneity[22] between items. This misconception stems from the inaccurate explanation of Cronbach (1951)[14] that high α values show homogeneity between the items. Homogeneity is a term rarely used in the modern literature, and related studies interpret it as referring to unidimensionality. Several studies have provided proofs or counterexamples showing that high α values do not indicate unidimensionality.[23][7][24][25][26][27] See the counterexamples below.

The counterexample data (covariance matrices not reproduced here) illustrate the following:

  • Unidimensional data and multidimensional data can yield the same value of α.
  • Multidimensional data can have extremely high reliability.
  • Unidimensional data can have unacceptably low reliability.

Unidimensionality is a prerequisite for α. You should check unidimensionality before calculating α, rather than calculating α to check unidimensionality.[3]

A high value of Cronbach's alpha indicates internal consistency

The term internal consistency is commonly used in the reliability literature, but its meaning is not clearly defined. The term is sometimes used to refer to a certain kind of reliability (e.g., internal consistency reliability), but it is unclear exactly which reliability coefficients are included here, in addition to α. Cronbach (1951)[14] used the term in several senses without an explicit definition. Cho and Kim (2015)[7] showed that α is not an indicator of any of these.

Removing items using "alpha if item deleted" always increases reliability

Removing an item using "alpha if item deleted" may result in 'alpha inflation,' where sample-level reliability is reported to be higher than population-level reliability.[28] It may also reduce population-level reliability.[29] The elimination of less-reliable items should be based not only on statistical grounds but also on theoretical and logical grounds. It is also recommended that the whole sample be divided in two and cross-validated.[28]
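"Alpha if item deleted" simply recomputes α with each item left out in turn. A minimal sketch (function names and sample data are illustrative, not from any particular package):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (observations x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                                / items.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item removed in turn."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]

# Three related items plus a fourth, poorly related item
scores = np.array([[2, 3, 3, 5],
                   [4, 4, 5, 1],
                   [1, 2, 2, 4],
                   [5, 4, 5, 2]])
print(alpha_if_item_deleted(scores))
```

In this toy data set, dropping the fourth item raises α sharply; this is exactly the situation in which a theoretical justification for deletion (not just the statistic) should be sought.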

Ideal reliability level and how to increase reliability

Nunnally's recommendations for the level of reliability

The most frequently cited source for how high reliability coefficients should be is Nunnally's book.[30][31][32] However, his recommendations are often cited contrary to his intentions: he meant that different criteria should be applied depending on the purpose or stage of the study. Nevertheless, a criterion of 0.7 is used almost universally, regardless of whether the research is exploratory, applied, or scale-development research.[33] 0.7 is the criterion Nunnally recommended for the early stages of a study, which most published studies are not. Rather than 0.7, the criterion of 0.8, which Nunnally recommended for applied research, is more appropriate for most empirical studies.[33]

Nunnally's recommendations on the level of reliability:

                                   1st edition[30]      2nd[31] & 3rd[32] editions
  Early stage of research          0.5 or 0.6           0.7
  Applied research                 0.8                  0.8
  When making important decisions  0.95 (minimum 0.9)   0.95 (minimum 0.9)

His recommended levels did not imply cutoff points. If a criterion were a cutoff point, what would matter is whether it is met, not by how much it is exceeded or missed. He did not mean that reliability should be strictly at least 0.8 when referring to the criterion of 0.8; if reliability is near 0.8 (e.g., 0.78), his recommendation can be considered met.[34]

Cost to obtain a high level of reliability

Nunnally's idea was that there is a cost to increasing reliability, so there is no need to try to obtain maximum reliability in every situation.

Trade-off with validity

Measurements with perfect reliability lack validity.[7] For example, a person who takes a test with a reliability of one will get either a perfect score or a zero score, because an examinee who answers one item correctly (or incorrectly) will answer every other item the same way. The phenomenon in which validity is sacrificed to increase reliability is called the attenuation paradox.[35][36]

A high value of reliability can be in conflict with content validity. For high content validity, each item should be constructed to be able to comprehensively represent the content to be measured. However, a strategy of repeatedly measuring essentially the same question in different ways is often used only for the purpose of increasing reliability.[37][38]

Trade-off with efficiency

All other conditions being equal, reliability increases as the number of items increases. However, increasing the number of items hinders the efficiency of measurement.
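The diminishing returns of adding items can be illustrated with the standardized form of α for k items with average inter-item correlation r̄: α = k·r̄ / (1 + (k − 1)·r̄), a Spearman–Brown-type relationship. The sketch below assumes r̄ = 0.3 purely for illustration:

```python
def standardized_alpha(k, r):
    """Standardized alpha for k items with average inter-item correlation r."""
    return k * r / (1 + (k - 1) * r)

# Each additional item helps less than the last one did
for k in (2, 5, 10, 20):
    print(k, round(standardized_alpha(k, 0.3), 3))  # 0.462, 0.682, 0.811, 0.896
```

Going from 2 to 5 items gains about 0.22 in α here, while going from 10 to 20 items gains less than 0.09, so lengthening a scale quickly stops paying for its cost in respondent burden.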

Methods to increase reliability

Despite the costs associated with increasing reliability discussed above, a high level of reliability may be required. The following methods can be considered to increase reliability.

Before data collection:

  • Eliminate the ambiguity of the measurement item.
  • Do not measure what the respondents do not know.[39]
  • Increase the number of items. However, care should be taken not to excessively inhibit the efficiency of the measurement.
  • Use a scale that is known to be highly reliable.[40]
  • Conduct a pretest to discover reliability problems in advance.
  • Exclude or modify items that are different in content or form from other items (e.g., reverse-scored items).

After data collection:

  • Remove the problematic items using "alpha if item deleted". However, this deletion should be accompanied by a theoretical rationale.
  • Use a more accurate reliability coefficient than α. For example, ρ_C is 0.02 larger than α on average.[41]

Which reliability coefficient to use

α is used in an overwhelming proportion of studies: one estimate is that approximately 97% of studies use α as a reliability coefficient.[3]

However, simulation studies comparing the accuracy of several reliability coefficients have consistently found that α is an inaccurate reliability coefficient.[42][43][6][44][45]

Methodological studies are critical of the use of α. The conclusions of existing studies can be simplified and classified as follows.

  1. Conditional use: Use α only when certain conditions are met.[3][7][8]
  2. Opposition to use: α is inferior and should not be used.[46][5][47][6][4][48]

Alternatives to Cronbach's alpha

Existing studies are practically unanimous in opposing the widespread practice of using α unconditionally for all data. However, opinions differ on which reliability coefficient should be used instead of α.

Different reliability coefficients ranked first in each simulation study[42][43][6][44][45] comparing the accuracy of several reliability coefficients.[7]

The majority opinion is to use SEM-based reliability coefficients as an alternative to α.[3][7][46][5][47][8][6][48]

However, there is no consensus on which of the several SEM-based reliability coefficients (e.g., unidimensional or multidimensional models) is the best to use.

Some people suggest ω_H[6] as an alternative, but ω_H conveys information that is completely different from reliability; it is a type of coefficient comparable to Revelle's β.[49][6] Such coefficients do not substitute for reliability, but complement it.[3]

Among SEM-based reliability coefficients, multidimensional reliability coefficients are rarely used, and the most commonly used is ρ_C,[3] also known as composite or congeneric reliability.

Software for SEM-based reliability coefficients

General-purpose statistical software such as SPSS and SAS includes a function to calculate α. Users who do not know the formula of α can obtain the estimate with just a few mouse clicks.

SEM software such as AMOS, LISREL, and Mplus does not have a function to calculate SEM-based reliability coefficients; users need to compute the result by plugging the estimates into the formula themselves. To avoid this inconvenience and possible errors, even studies that report using SEM rely on α instead of SEM-based reliability coefficients.[3] A few alternatives automatically calculate SEM-based reliability coefficients.

  1. R (free): The psych package[50] calculates various reliability coefficients.
  2. EQS (paid):[51] This SEM software has a function to calculate reliability coefficients.
  3. RelCalc (free):[3] Available with Microsoft Excel. ρ_C can be obtained without the need for SEM software. Various multidimensional SEM reliability coefficients and various types of ρ_C can be calculated based on the results of SEM software.


  1. ^ Cronbach, Lee J. (1951). "Coefficient alpha and the internal structure of tests". Psychometrika. Springer Science and Business Media LLC. 16 (3): 297–334. doi:10.1007/bf02310555. hdl:10983/2196. ISSN 0033-3123. S2CID 13820448.
  2. ^ a b Cronbach, L. J. (1978). "Citation Classics" (PDF). Current Contents. 13: 263.
  3. ^ a b c d e f g h i j Cho, Eunseong (2016-07-08). "Making Reliability Reliable". Organizational Research Methods. SAGE Publications. 19 (4): 651–682. doi:10.1177/1094428116656239. ISSN 1094-4281. S2CID 124129255.
  4. ^ a b Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach's alpha. Psychometrika, 74(1), 107–120. doi:10.1007/s11336-008-9101-0
  5. ^ a b c Green, S. B., & Yang, Y. (2009). Commentary on coefficient alpha: A cautionary tale. Psychometrika, 74(1), 121–135. doi:10.1007/s11336-008-9098-4
  6. ^ a b c d e f g Revelle, W., & Zinbarg, R. E. (2009). Coefficients alpha, beta, omega, and the glb: Comments on Sijtsma. Psychometrika, 74(1), 145–154. doi:10.1007/s11336-008-9102-z
  7. ^ a b c d e f g h i Cho, E., & Kim, S. (2015). Cronbach's coefficient alpha: Well known but poorly understood. Organizational Research Methods, 18(2), 207–230. doi:10.1177/1094428114555994
  8. ^ a b c Raykov, T., & Marcoulides, G. A. (2017). Thanks coefficient alpha, we still need you! Educational and Psychological Measurement, 79(1), 200–210. doi:10.1177/0013164417725127
  9. ^ a b c Cronbach, L. J., & Shavelson, R. J. (2004). My Current Thoughts on Coefficient Alpha and Successor Procedures. Educational and Psychological Measurement, 64(3), 391–418. doi:10.1177/0013164404266386
  10. ^ Hoyt, C. (1941). Test reliability estimated by analysis of variance. Psychometrika, 6(3), 153–160. doi:10.1007/BF02289270
  11. ^ Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10(4), 255–282. doi:10.1007/BF02288892
  12. ^ Jackson, R. W. B., & Ferguson, G. A. (1941). Studies on the reliability of tests. University of Toronto Department of Educational Research Bulletin, 12, 132.
  13. ^ Gulliksen, H. (1950). Theory of mental tests. John Wiley & Sons. doi:10.1037/13240-000
  14. ^ a b c d Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16 (3), 297–334. doi:10.1007/BF02310555
  15. ^ Cronbach, Lee (1978). "Citation Classics" (PDF). Current Contents. 13 (8).
  16. ^ Novick, M. R., & Lewis, C. (1967). Coefficient alpha and the reliability of composite measurements. Psychometrika, 32(1), 1–13. doi:10.1007/BF02289400
  17. ^ Spiliotopoulou, Georgia (2009). "Reliability reconsidered: Cronbach's alpha and paediatric assessment in occupational therapy". Australian Occupational Therapy Journal. 56 (3): 150–155. doi:10.1111/j.1440-1630.2009.00785.x. PMID 20854508.
  18. ^ Cortina, Jose M. (1993). "What is coefficient alpha? An examination of theory and applications". Journal of Applied Psychology. 78 (1): 98–104. doi:10.1037/0021-9010.78.1.98. ISSN 1939-1854.
  19. ^ "Covariance", Wikipedia, 2023-02-03, retrieved 2023-02-20
  20. ^ Goforth, Chelsea (November 16, 2015). "Using and Interpreting Cronbach's Alpha | University of Virginia Library Research Data Services + Sciences". University of Virginia Library. Retrieved 2022-09-06.
  21. ^ Cronbach, L. J. (1943). On estimates of test reliability. Journal of Educational Psychology, 34(8), 485–494. doi:10.1037/h0058608
  22. ^ "APA Dictionary of Psychology". Retrieved 2023-02-20.
  23. ^ Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98–104. doi:10.1037/0021-9010.78.1.98
  24. ^ Green, S. B., Lissitz, R. W., & Mulaik, S. A. (1977). Limitations of coefficient alpha as an Index of test unidimensionality. Educational and Psychological Measurement, 37(4), 827–838. doi:10.1177/001316447703700403
  25. ^ McDonald, R. P. (1981). The dimensionality of tests and items. The British Journal of Mathematical and Statistical Psychology, 34(1), 100–117. doi:10.1111/j.2044-8317.1981.tb00621.x
  26. ^ Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350–353. doi:10.1037/1040-3590.8.4.350
  27. ^ Ten Berge, J. M. F., & Sočan, G. (2004). The greatest lower bound to the reliability of a test and the hypothesis of unidimensionality. Psychometrika, 69(4), 613–625. doi:10.1007/BF02289858
  28. ^ a b Kopalle, P. K., & Lehmann, D. R. (1997). Alpha inflation? The impact of eliminating scale items on Cronbach's alpha. Organizational Behavior and Human Decision Processes, 70(3), 189–197. doi:10.1006/obhd.1997.2702
  29. ^ Raykov, T. (2007). Reliability if deleted, not 'alpha if deleted': Evaluation of scale reliability following component deletion. The British Journal of Mathematical and Statistical Psychology, 60(2), 201–216. doi:10.1348/000711006X115954
  30. ^ a b Nunnally, J. C. (1967). Psychometric theory. New York, NY: McGraw-Hill.
  31. ^ a b Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
  32. ^ a b Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
  33. ^ a b Lance, C. E., Butts, M. M., & Michels, L. C. (2006). What did they really say? Organizational Research Methods, 9(2), 202–220. doi:10.1177/1094428105284919
  34. ^ Cho, E. (2020). A comprehensive review of so-called Cronbach's alpha. Journal of Product Research, 38(1), 9–20.
  35. ^ Loevinger, J. (1954). The attenuation paradox in test theory. Psychological Bulletin, 51(5), 493–504. doi:10.1002/j.2333-8504.1954.tb00485.x
  36. ^ Humphreys, L. (1956). The normal curve and the attenuation paradox in test theory. Psychological Bulletin, 53(6), 472–476. doi:10.1037/h0041091
  37. ^ Boyle, G. J. (1991). Does item homogeneity indicate internal consistency or item redundancy in psychometric scales? Personality and Individual Differences, 12(3), 291–294. doi:10.1016/0191-8869(91)90115-R
  38. ^ Streiner, D. L. (2003). Starting at the beginning: An introduction to coefficient alpha and internal consistency. Journal of Personality Assessment, 80(1), 99–103. doi:10.1207/S15327752JPA8001_18
  39. ^ Beatty, P.; Herrmann, D.; Puskar, C.; Kerwin, J. (July 1998). ""Don't know" responses in surveys: is what I know what you want to know and do I want you to know it?". Memory (Hove, England). 6 (4): 407–426. doi:10.1080/741942605. ISSN 0965-8211. PMID 9829099.
  40. ^ Lee, H. (2017). Research Methodology (2nd ed.), Hakhyunsa.
  41. ^ Peterson, R. A., & Kim, Y. (2013). On the relationship between coefficient alpha and composite reliability. Journal of Applied Psychology, 98(1), 194–198. doi:10.1037/a0030767
  42. ^ a b Kamata, A., Turhan, A., & Darandari, E. (2003). Estimating reliability for multidimensional composite scale scores. Annual Meeting of American Educational Research Association, Chicago, April 2003, April, 1–27.
  43. ^ a b Osburn, H. G. (2000). Coefficient alpha and related internal consistency reliability coefficients. Psychological Methods, 5(3), 343–355. doi:10.1037/1082-989X.5.3.343
  44. ^ a b Tang, W., & Cui, Y. (2012). A simulation study for comparing three lower bounds to reliability. Paper Presented on April 17, 2012 at the AERA Division D: Measurement and Research Methodology, Section 1: Educational Measurement, Psychometrics, and Assessment, 1–25.
  45. ^ a b van der Ark, L. A., van der Palm, D. W., & Sijtsma, K. (2011). A latent class approach to estimating test-score reliability. Applied Psychological Measurement, 35(5), 380–392. doi:10.1177/0146621610392911
  46. ^ a b Dunn, T. J., Baguley, T., & Brunsden, V. (2014). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105(3), 399–412. doi:10.1111/bjop.12046
  47. ^ a b Peters, G. Y. (2014). The alpha and the omega of scale reliability and validity comprehensive assessment of scale quality. The European Health Psychologist, 1(2), 56–69.
  48. ^ a b Yang, Y., & Green, S. B. (2011). Coefficient alpha: A reliability coefficient for the 21st century? Journal of Psychoeducational Assessment, 29(4), 377–392. doi:10.1177/0734282911406668
  49. ^ Revelle, W. (1979). Hierarchical cluster analysis and the internal structure of tests. Multivariate Behavioral Research, 14(1), 57–74. doi:10.1207/s15327906mbr1401_4
  50. ^ Revelle, William (7 January 2017). "An overview of the psych package" (PDF).
  51. ^ "Multivariate Software, Inc". Archived from the original on 2001-05-21.
