Cronbach's alpha


Tau-equivalent reliability ($\rho_T$), also known as Cronbach's alpha or coefficient alpha, is the most common test score reliability coefficient for a single test administration (i.e., the reliability of persons over items, holding occasion fixed).[1][2][3]

Recent studies recommend not using it unconditionally.[4][5][6][7][8][9] Reliability coefficients based on structural equation modeling (SEM) are often recommended as its alternative.

Formula and calculation

Systematic and conventional formula

Let $X_i$ denote the observed score of item $i$ and $X = \sum_{i=1}^{k} X_i$ denote the sum of all items in a test consisting of $k$ items. Let $\sigma_{ij}$ denote the covariance between $X_i$ and $X_j$, $\sigma_{ii}$ denote the variance of $X_i$, and $\sigma_X^2$ denote the variance of $X$. $\sigma_X^2$ consists of the item variances and the inter-item covariances:

$$\sigma_X^2 = \sum_{i=1}^{k}\sigma_{ii} + \sum_{i \neq j}\sigma_{ij}$$

Let $\bar{\sigma}_{ij}$ denote the average of the inter-item covariances:

$$\bar{\sigma}_{ij} = \frac{\sum_{i \neq j}\sigma_{ij}}{k(k-1)}$$

$\rho_T$'s "systematic"[3] formula is

$$\rho_T = \frac{k^2 \bar{\sigma}_{ij}}{\sigma_X^2}$$

The more frequently used version of the formula is

$$\rho_T = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{ii}}{\sigma_X^2}\right)$$
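The two formulas above are algebraically equivalent, which can be illustrated with a minimal NumPy sketch (the function names are mine, not from the literature) that computes $\rho_T$ from a persons-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """Conventional formula: rho_T = k/(k-1) * (1 - sum of item variances / total variance)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # sigma_ii for each item
    total_var = scores.sum(axis=1).var(ddof=1)    # sigma_X^2
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def cronbach_alpha_systematic(scores):
    """'Systematic' formula: rho_T = k^2 * mean inter-item covariance / sigma_X^2."""
    k = scores.shape[1]
    cov = np.cov(scores, rowvar=False)            # k x k sample covariance matrix
    mean_offdiag = (cov.sum() - np.trace(cov)) / (k * (k - 1))
    return k**2 * mean_offdiag / cov.sum()        # cov.sum() equals sigma_X^2
```

Both functions return the same value for any data set, since $\sigma_X^2$ is the sum of all elements of the covariance matrix and the two formulas differ only in how that sum is decomposed.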
Calculation example

When applied to appropriate data

$\rho_T$ is applied to the following data, which satisfy the condition of being tau-equivalent.

Observed covariance matrix

When applied to inappropriate data

$\rho_T$ is applied to the following data, which do not satisfy the condition of being tau-equivalent.

Observed covariance matrix

Compare this value of $\rho_T$ with the value obtained by applying congeneric reliability to the same data.

Prerequisites for using tau-equivalent reliability

To use $\rho_T$ as a reliability coefficient, the data must satisfy the following conditions:

1) unidimensionality;

2) (essential) tau-equivalence;

3) independence between errors.

The conditions of being parallel, tau-equivalent, and congeneric

Parallel condition

At the population level, parallel data have equal inter-item covariances (i.e., the off-diagonal elements of the covariance matrix) and equal variances (i.e., the diagonal elements of the covariance matrix). For example, the following data satisfy the parallel condition. In parallel data, even if a correlation matrix is used instead of a covariance matrix, there is no loss of information. All parallel data are also tau-equivalent, but the reverse is not true: of the three conditions, the parallel condition is the most difficult to meet.

Observed covariance matrix

Tau-equivalent condition

A tau-equivalent measurement model is a special case of the congeneric measurement model in which all factor loadings are assumed to be equal, i.e. $\lambda_1 = \lambda_2 = \cdots = \lambda_k$.

At the population level, tau-equivalent data have equal covariances, but their variances may differ. For example, the following data satisfy the condition of being tau-equivalent. All items in tau-equivalent data have equal discrimination or importance. All tau-equivalent data are also congeneric, but the reverse is not true.

Observed covariance matrix

Congeneric condition

Congeneric measurement model

At the population level, congeneric data need not have equal variances or covariances, provided they are unidimensional. For example, the following data meet the condition of being congeneric. Items in congeneric data may differ in discrimination or importance.

Observed covariance matrix
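The three conditions can be made concrete by constructing model-implied covariance matrices $\Lambda\Lambda^\top + \Theta$ under a one-factor model. The loadings and error variances below are illustrative values, not the article's example data:

```python
import numpy as np

def implied_cov(loadings, error_vars):
    """Model-implied covariance matrix Lambda Lambda' + Theta of a one-factor model
    (unit factor variance, uncorrelated errors assumed)."""
    lam = np.asarray(loadings, dtype=float)
    return np.outer(lam, lam) + np.diag(error_vars)

# parallel: equal loadings AND equal error variances -> equal covariances and equal variances
parallel = implied_cov([1.0, 1.0, 1.0], [0.5, 0.5, 0.5])
# tau-equivalent: equal loadings, unequal error variances -> equal covariances only
tau_equivalent = implied_cov([1.0, 1.0, 1.0], [0.3, 0.5, 0.7])
# congeneric: unequal loadings -> covariances and variances may all differ
congeneric = implied_cov([0.8, 1.0, 1.2], [0.3, 0.5, 0.7])
```

Inspecting the three matrices shows exactly the patterns described above: `parallel` has a constant diagonal and constant off-diagonal, `tau_equivalent` has a constant off-diagonal only, and `congeneric` has neither.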

Relationship with other reliability coefficients

Classification of single-administration reliability coefficients

Conventional names

There are numerous reliability coefficients. Among them, the conventional names of reliability coefficients that are related and frequently used are summarized as follows:[3]

Conventional names of reliability coefficients

Parallel: Spearman-Brown formula (split-half); standardized alpha (unidimensional); no conventional name (multidimensional).

Tau-equivalent: Flanagan formula, Rulon formula, Flanagan-Rulon formula (split-half); Cronbach's alpha, KR-20, Hoyt reliability (unidimensional).

Congeneric: Angoff-Feldt coefficient, Raju (1970) coefficient (split-half); composite reliability, construct reliability, congeneric reliability (unidimensional); Raju (1977) coefficient (multidimensional).

Combining row and column names gives the prerequisites for the corresponding reliability coefficient. For example, Cronbach's $\alpha$ and Guttman's $\lambda_3$ are reliability coefficients derived under the conditions of unidimensionality and tau-equivalence.

Systematic names

Conventional names are disordered and unsystematic. They give no information about the nature of each coefficient, or give misleading information (e.g., composite reliability). They are also inconsistent: some are formulas and others are coefficients; some are named after the original developer, some after someone who is not the original developer, and others include no person's name at all. While one formula is referred to by multiple names, multiple formulas are referred to by one notation (e.g., the alphas and the omegas). The proposed systematic names and notation for these reliability coefficients are as follows:[3]

Systematic names of reliability coefficients

Parallel: split-half parallel reliability (split-half); parallel reliability (unidimensional); multidimensional parallel reliability (multidimensional).

Tau-equivalent: split-half tau-equivalent reliability (split-half); tau-equivalent reliability (unidimensional); multidimensional tau-equivalent reliability (multidimensional).

Congeneric: split-half congeneric reliability (split-half); congeneric reliability (unidimensional); bifactor reliability (bifactor model), second-order factor reliability (second-order factor model), and correlated factor reliability (correlated factor model) (multidimensional).

Relationship with parallel reliability

$\rho_T$ is often referred to as coefficient alpha, and $\rho_P$ (parallel reliability) is often referred to as standardized alpha. Because of the standardized modifier, $\rho_P$ is often mistaken for a more standard version of alpha than $\rho_T$, but there is no historical basis for referring to $\rho_P$ as standardized alpha. Cronbach (1951)[10] did not refer to this coefficient as alpha, nor did he recommend using it. $\rho_P$ was rarely used before the 1970s; as SPSS began to provide it under the name standardized alpha, the coefficient began to be used occasionally.[11] The use of $\rho_P$ is not recommended, because the parallel condition is difficult to meet in real-world data.

Relationship with split-half tau-equivalent reliability

$\rho_T$ equals the average of the split-half reliability values obtained for all possible split-halves. This relationship, proved by Cronbach (1951),[10] is often used to explain the intuitive meaning of $\rho_T$. However, this interpretation overlooks the fact that $\rho_T$ underestimates reliability when applied to data that are not tau-equivalent. At the population level, the maximum of all possible split-half values is closer to reliability than their average.[7] This mathematical fact was known even before the publication of Cronbach (1951).[12] A comparative study[13] reports that the maximum of the split-half values is the most accurate reliability coefficient.

Revelle (1979) refers to the minimum of all possible split-half values as coefficient $\beta$,[14] and argues that $\beta$ provides complementary information that $\rho_T$ does not.[6]
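Cronbach's (1951) identity can be checked numerically: for a four-item test, the mean of the Flanagan-Rulon split-half values $4\sigma_{AB}/\sigma_X^2$ over the three possible splits into two halves reproduces $\rho_T$ exactly. A sketch with simulated data:

```python
import numpy as np
from itertools import combinations

def cronbach_alpha(X):
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

def split_half(X, half):
    """Flanagan-Rulon split-half reliability 4*Cov(A, B) / Var(A + B) for one split."""
    other = [i for i in range(X.shape[1]) if i not in half]
    a = X[:, list(half)].sum(axis=1)
    b = X[:, other].sum(axis=1)
    return 4 * np.cov(a, b)[0, 1] / np.var(a + b, ddof=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
# the three distinct ways to split four items into two halves of two
splits = [h for h in combinations(range(4), 2) if 0 in h]
mean_split_half = np.mean([split_half(X, h) for h in splits])
# mean_split_half equals cronbach_alpha(X) up to floating-point error
```

The equality is exact (not approximate) because each inter-item covariance appears as a cross-half covariance in the same fraction of splits.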

Relationship with congeneric reliability

If the assumptions of unidimensionality and tau-equivalence are satisfied, $\rho_T$ equals $\rho_C$.

If unidimensionality is satisfied but tau-equivalence is not, $\rho_T$ is smaller than $\rho_C$.[7]

$\rho_C$ is the most commonly used reliability coefficient after $\rho_T$. Users tend to present both, rather than replacing $\rho_T$ with $\rho_C$.[3]

A study of articles that presented both coefficients reports that $\rho_T$ is .02 smaller than $\rho_C$ on average.[15]

Relationship with multidimensional reliability coefficients and $\rho_C$

If $\rho_T$ is applied to multidimensional data, its value is smaller than that of multidimensional reliability coefficients and larger than that of $\rho_C$.[3]

Relationship with intraclass correlation

$\rho_T$ is said to be equal to the stepped-up consistency version of the intraclass correlation coefficient, which is commonly used in observational studies. But this is only conditionally true. In terms of variance components, the condition is, for item sampling: if and only if the item (rater, in the case of rating) variance component equals zero. If this variance component is negative, $\rho_T$ underestimates the stepped-up intraclass correlation coefficient; if it is positive, $\rho_T$ overestimates it.


History

Before 1937

Split-half reliability[16][17] was the only known reliability coefficient at the time. The problem was that reliability estimates depended on how the items were split in half (e.g., odd/even or first/second half). Criticism was raised against this arbitrariness, but for more than 20 years no fundamental solution was found.[18]

Kuder and Richardson (1937)

Kuder and Richardson (1937)[19] developed several reliability coefficients that could overcome the problem of split-half reliability. They did not give the coefficients particular names. The formula appearing as equation 20 in their article is now known as Kuder-Richardson Formula 20, or KR-20. They dealt with cases where the observed scores were dichotomous (e.g., correct or incorrect), so the expression of KR-20 is slightly different from the conventional formula of $\rho_T$. A review of this paper reveals that they did not present a general formula because they did not need to, not because they were unable to. Let $p_i$ denote the proportion of correct answers to item $i$, and $q_i$ denote the proportion of incorrect answers ($q_i = 1 - p_i$). The formula of KR-20 is as follows:

$$\rho_{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)$$

Since $\sigma_{ii} = p_i q_i$ for dichotomous items, KR-20 and $\rho_T$ have the same meaning.
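A sketch showing that KR-20 computed from item proportions coincides with the conventional formula on dichotomous data. Population-style variances (`ddof=0`) are used throughout so that $\sigma_{ii} = p_i q_i$ holds exactly in the sample:

```python
import numpy as np

def kr20(X):
    """KR-20 for an n x k matrix of dichotomous (0/1) scores."""
    k = X.shape[1]
    p = X.mean(axis=0)                         # proportion answering each item correctly
    total_var = X.sum(axis=1).var(ddof=0)      # sigma_X^2
    return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

def cronbach_alpha(X):
    """Conventional formula with the same variance convention (ddof=0)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=0).sum() / X.sum(axis=1).var(ddof=0))

rng = np.random.default_rng(2)
scores = (rng.random((30, 5)) < 0.6).astype(int)   # simulated right/wrong responses
# kr20(scores) and cronbach_alpha(scores) agree exactly, because p*q is the
# ddof=0 variance of a 0/1 item
```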

Between 1937 and 1951

Several studies published the general formula of KR-20

Kuder and Richardson (1937) made unnecessary assumptions to derive $\rho_T$. Several studies have since derived $\rho_T$ in other ways.

Hoyt (1941)[20] derived $\rho_T$ using ANOVA (analysis of variance). Cyril Hoyt may be considered the first developer of the general formula of KR-20, but he did not explicitly present the formula of $\rho_T$.

The first expression of the modern formula of $\rho_T$ appears in Jackson and Ferguson (1941).[21] Edgerton and Thomson (1942)[22] used the same version.

Guttman (1945)[12] derived six reliability formulas, denoted $\lambda_1$ through $\lambda_6$. Louis Guttman proved that all of these formulas are always less than or equal to reliability, and based on this property referred to them as 'lower bounds of reliability'. Guttman's $\lambda_3$ is $\rho_T$, and $\lambda_4$ is split-half reliability. He proved that the maximum of $\lambda_4$ over all possible splits is always greater than or equal to $\lambda_3$ (i.e., more accurate). Since all calculations at the time were done with paper and pencil, and the formula of $\lambda_3$ was simpler to calculate, he mentioned that $\lambda_3$ was useful under certain conditions.

Gulliksen (1950)[23] derived $\rho_T$ with fewer assumptions than previous studies. The assumption he used is, in modern terms, essential tau-equivalence.

Recognition of KR-20's original formula and general formula at the time

The two formulas were recognized as exactly identical, and the expression "general formula of KR-20" was not used. Hoyt[20] explained that his method "gives precisely the same result" as KR-20 (p. 156). Jackson and Ferguson[21] stated that the two formulas are "identical" (p. 74). Guttman[12] said that $\lambda_3$ is "algebraically identical" to KR-20 (p. 275). Gulliksen[23] likewise admitted that the two formulas are "identical" (p. 224).

Even studies critical of KR-20 did not point out that its original formula could only be applied to dichotomous data.[24]

Criticism of KR-20's underestimation of reliability

The developers[19] of this formula reported that $\rho_T$ consistently underestimates reliability. Hoyt[25] argued that this characteristic alone made $\rho_T$ more recommendable than the traditional split-half technique, for which it was unknown whether reliability was underestimated or overestimated.

Cronbach (1943)[24] was critical of the underestimation of $\rho_T$. He was concerned that it was not known how much $\rho_T$ underestimated reliability. He criticized that the underestimation was likely to be excessively severe, such that $\rho_T$ could sometimes be negative. Because of these problems, he argued that $\rho_T$ could not be recommended as an alternative to the split-half technique.

Cronbach (1951)

As previous studies had done,[20][12][21][23] Cronbach (1951)[10] invented yet another method to derive $\rho_T$. His interpretation was more intuitively attractive than those of previous studies: he proved that $\rho_T$ equals the average of the split-half reliability values obtained for all possible split-halves. He criticized the name KR-20 as awkward and suggested a new name, coefficient alpha. His approach has been a huge success. However, he not only omitted some key facts, but also gave an incorrect explanation.

First, he positioned coefficient alpha as the general formula of KR-20, but omitted the explanation that existing studies had already published the precisely identical formula. Readers of Cronbach (1951) without background knowledge could misunderstand that he was the first to develop the general formula of KR-20.

Second, he did not explain under what conditions $\rho_T$ equals reliability. Non-experts could misunderstand $\rho_T$ as a general reliability coefficient that can be used for all data regardless of prerequisites.

Third, he did not explain why he had changed his attitude toward $\rho_T$. In particular, he provided no clear answer to the underestimation problem of $\rho_T$, which he himself[24] had criticized.

Fourth, he argued that a high value of $\rho_T$ indicated homogeneity of the data.

After 1951

Novick and Lewis (1967)[26] proved the necessary and sufficient condition for $\rho_T$ to equal reliability, and named it the condition of being essentially tau-equivalent.

Cronbach (1978)[2] mentioned that the reason Cronbach (1951) received so many citations was "mostly because [he] put a brand name on a common-place coefficient" (p. 263).[3] He explained that he had originally planned to name other types of reliability coefficients (e.g., inter-rater reliability or test-retest reliability) after consecutive Greek letters (e.g., $\beta$, $\gamma$, $\delta$), but later changed his mind.

Cronbach and Shavelson (2004)[27] encouraged readers to use generalizability theory rather than $\rho_T$. Cronbach opposed the use of the name Cronbach's alpha, and explicitly denied that existing studies had published the general formula of KR-20 prior to Cronbach (1951).

Common misconceptions about tau-equivalent reliability[7]

The value of tau-equivalent reliability ranges between zero and one

By definition, reliability cannot be less than zero and cannot be greater than one. Many textbooks mistakenly equate $\rho_T$ with reliability and give an inaccurate explanation of its range. $\rho_T$ can be less than reliability when applied to data that are not tau-equivalent, and it can even be negative. Suppose that $X_2$ copies the value of $X_1$ as it is, and $X_3$ copies the value of $X_1$ multiplied by -1. The covariance matrix between the items is as follows, and $\rho_T = -3$.

Observed covariance matrix

Negative $\rho_T$ can occur for reasons such as negative discrimination or mistakes in processing reverse-scored items.

Unlike $\rho_T$, SEM-based reliability coefficients (e.g., $\rho_C$) are always greater than or equal to zero.

This anomaly was first pointed out by Cronbach (1943)[24] to criticize $\rho_T$, but Cronbach (1951)[10] did not comment on the problem in his article, which discussed all conceivable issues related to $\rho_T$ and which he himself[27] described as "encyclopedic" (p. 396).
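The negative-$\rho_T$ construction described above can be reproduced directly. The sample values below are simulated, but the resulting $\rho_T = -3$ follows algebraically from the construction itself:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
x2 = x1.copy()      # X2 copies X1 as it is
x3 = -x1            # X3 is X1 multiplied by -1 (e.g., an un-recoded reverse-scored item)
X = np.column_stack([x1, x2, x3])

k = X.shape[1]
rho_T = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))
# the item variances sum to 3*Var(x1) while the total score reduces to x1,
# so rho_T = (3/2) * (1 - 3) = -3, even though reliability cannot be below zero
```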

If there is no measurement error, the value of tau-equivalent reliability is one

This anomaly also originates from the fact that $\rho_T$ underestimates reliability. Suppose that $X_2$ copies the value of $X_1$ as it is, and $X_3$ copies the value of $X_1$ multiplied by two. The covariance matrix between the items is as follows, and $\rho_T = 0.9375$.

Observed covariance matrix

For the above data, both reliability and $\rho_C$ have a value of one.

The above example is presented by Cho and Kim (2015).[7]
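The same construction can be reproduced numerically. With items $X_1$, $X_2 = X_1$, and $X_3 = 2X_1$ there is no measurement error at all, yet $\rho_T = 0.9375 < 1$, while the congeneric coefficient computed from the loadings (known by construction) equals one:

```python
import numpy as np

rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1, 2 * x1])    # error-free items with loadings 1, 1, 2

k = X.shape[1]
total_var = X.sum(axis=1).var(ddof=1)    # Var(4*x1) = 16 * Var(x1)
rho_T = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / total_var)
# item variances sum to 6*Var(x1), so rho_T = (3/2) * (1 - 6/16) = 0.9375

lam = np.array([1.0, 1.0, 2.0])          # loadings known by construction
rho_C = lam.sum() ** 2 * x1.var(ddof=1) / total_var
# rho_C = 16*Var(x1) / (16*Var(x1)) = 1: the true reliability is recovered
```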

A high value of tau-equivalent reliability indicates homogeneity between the items

Many textbooks refer to $\rho_T$ as an indicator of homogeneity between items. This misconception stems from the inaccurate explanation of Cronbach (1951)[10] that high $\rho_T$ values show homogeneity between the items. Homogeneity is a term that is rarely used in the modern literature, and related studies interpret it as referring to unidimensionality. Several studies have provided proofs or counterexamples showing that high $\rho_T$ values do not indicate unidimensionality.[28][7][29][30][31][32] See the counterexamples below.

Unidimensional data

Multidimensional data

The unidimensional data and the multidimensional data above illustrate that the value of $\rho_T$ does not distinguish the two cases.

Multidimensional data with extremely high reliability

The above data have a high value of $\rho_T$, but are multidimensional.

Unidimensional data with unacceptably low reliability

The above data have a low value of $\rho_T$, but are unidimensional.

Unidimensionality is a prerequisite for $\rho_T$. One should check unidimensionality before calculating $\rho_T$, rather than calculating $\rho_T$ to check unidimensionality.[3]

A high value of tau-equivalent reliability indicates internal consistency

The term internal consistency is commonly used in the reliability literature, but its meaning is not clearly defined. It is sometimes used to refer to a certain kind of reliability (e.g., internal consistency reliability), but it is unclear exactly which reliability coefficients are included here, in addition to $\rho_T$. Cronbach (1951)[10] used the term in several senses without an explicit definition. Cho and Kim (2015)[7] showed that $\rho_T$ is an indicator of none of these.

Removing items using "alpha if item deleted" always increases reliability

Removing an item using "alpha if item deleted" may result in 'alpha inflation,' where sample-level reliability is reported to be higher than population-level reliability.[33] It may also reduce population-level reliability.[34] The elimination of less-reliable items should be based not only on a statistical basis, but also on a theoretical and logical basis. It is also recommended that the whole sample be divided into two and cross-validated.[33]

Ideal reliability level and how to increase reliability

Nunnally's recommendations for the level of reliability

The most frequently cited source on how high reliability coefficients should be is Nunnally's book.[35][36][37] However, his recommendations are cited contrary to his intentions: he meant that different criteria should be applied depending on the purpose or stage of a study. Nevertheless, a criterion of .7 is used universally, regardless of the nature of the research (e.g., exploratory research, applied research, or scale development research).[38] The criterion of .7 is the one he recommended for the early stages of research, which most published studies are not. Rather than .7, the criterion of .8, which Nunnally recommended for applied research, is more appropriate for most empirical studies.[38]

Nunnally's recommendations on the level of reliability

Early stage of research: .5 or .6 (1st edition[35]); .7 (2nd[36] and 3rd[37] editions).

Applied research: .8 (all editions).

When making important decisions: .95 (minimum .9) (all editions).

His recommended levels did not imply cutoff points. If a criterion were a cutoff point, only whether it is met would matter, not by how much it is exceeded or missed. He did not mean that reliability should be strictly .8 when referring to the criterion of .8; if reliability is near .8 (e.g., .78), his recommendation can be considered met.[39]

His idea was that increasing reliability has a cost, so there is no need to pursue maximum reliability in every situation.

Cost of obtaining a high level of reliability

Many textbooks explain that the higher the value of reliability, the better; the potential side effects of high reliability are rarely discussed. However, the principle that gaining one thing requires sacrificing another also applies to reliability.

Trade-off between reliability and validity

Measurements with perfect reliability lack validity.[7] For example, a person who takes a test with a reliability of one will receive either a perfect score or a zero score, because an examinee who answers one item correctly or incorrectly will answer all other items in the same way. The phenomenon in which validity is sacrificed to increase reliability is known as the attenuation paradox.[40][41]

A high value of reliability can conflict with content validity. To achieve high content validity, each item should be constructed to comprehensively represent the content to be measured. However, a strategy of repeatedly measuring essentially the same question in different ways is often used solely to increase reliability.[42][43]

Trade-off between reliability and efficiency

When the other conditions are equal, reliability increases as the number of items increases. However, the increase in the number of items hinders the efficiency of measurements.

Methods to increase reliability

Despite the costs associated with increasing reliability discussed above, a high level of reliability may be required. The following methods can be considered to increase reliability.

Before data collection

Eliminate the ambiguity of the measurement item.

Do not measure what the respondents do not know.

Increase the number of items. However, care should be taken not to excessively inhibit the efficiency of the measurement.

Use a scale that is known to be highly reliable.[44]

Conduct a pretest to discover reliability problems in advance.

Exclude or modify items that are different in content or form from other items (e.g., reversely scored items).

After data collection

Remove the problematic items using "alpha if item deleted". However, this deletion should be accompanied by a theoretical rationale.

Use a more accurate reliability coefficient than $\rho_T$. For example, $\rho_C$ is .02 larger than $\rho_T$ on average.[15]

Which reliability coefficient to use

Should we continue to use tau-equivalent reliability?

$\rho_T$ is used in an overwhelming proportion of studies: one estimate is that approximately 97% of studies use $\rho_T$ as a reliability coefficient.[3]

However, simulation studies comparing the accuracy of several reliability coefficients have consistently concluded that $\rho_T$ is an inaccurate reliability coefficient.[45][13][6][46][47]

Methodological studies are critical of the use of $\rho_T$. Simplifying and classifying the conclusions of existing studies gives the following:

(1) Conditional use: use $\rho_T$ only when certain conditions are met.[3][7][9]

(2) Opposition to use: $\rho_T$ is inferior and should not be used.[48][5][49][6][4][50]

Alternatives to tau-equivalent reliability

Existing studies are practically unanimous in opposing the widespread practice of using $\rho_T$ unconditionally for all data. However, opinions differ on which reliability coefficient should be used instead.

Different reliability coefficients ranked first in each of the simulation studies[45][13][6][46][47] comparing the accuracy of several reliability coefficients.[7]

The majority opinion is to use SEM-based reliability coefficients as an alternative to $\rho_T$.[3][7][48][5][49][9][6][50]

However, there is no consensus on which of the several SEM-based reliability coefficients (e.g., those based on unidimensional or multidimensional models) is best.

Some suggest $\omega_h$[6] as an alternative, but $\omega_h$ shows information that is completely different from reliability; it is a type of coefficient comparable to Revelle's $\beta$.[14][6] Such coefficients do not substitute for, but rather complement, reliability.[3]

Among SEM-based reliability coefficients, multidimensional reliability coefficients are rarely used; the most commonly used is $\rho_C$.[3]
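Given estimates from a fitted one-factor model, $\rho_C$ can be computed directly from the loadings and error variances. A minimal sketch, assuming unit factor variance and uncorrelated errors (the function name and parameter values are illustrative, not from the cited software):

```python
import numpy as np

def congeneric_reliability(loadings, error_vars):
    """rho_C (also called composite reliability or omega) from one-factor model
    estimates, assuming unit factor variance and uncorrelated errors."""
    lam = np.asarray(loadings, dtype=float)
    theta = np.asarray(error_vars, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

# illustrative parameter estimates for a three-item scale
rho_C_example = congeneric_reliability([0.8, 1.0, 1.2], [0.3, 0.5, 0.7])
```

In the tau-equivalent special case (all loadings equal), this expression reduces to $\rho_T$ computed on the model-implied covariance matrix, consistent with the equality of $\rho_T$ and $\rho_C$ under tau-equivalence noted earlier.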

Software for SEM-based reliability coefficients

General-purpose statistical software such as SPSS and SAS includes a function to calculate $\rho_T$: users who do not know its formula can obtain the estimate with just a few mouse clicks.

SEM software such as AMOS, LISREL, and Mplus does not include a function to calculate SEM-based reliability coefficients; users need to compute the result by entering the estimates into a formula themselves. To avoid this inconvenience and the possibility of error, even studies reporting the use of SEM rely on $\rho_T$ instead of SEM-based reliability coefficients.[3] A few alternatives can calculate SEM-based reliability coefficients automatically:

1) R (free): the psych package[51] calculates various reliability coefficients.

2) EQS (paid):[52] this SEM software has a function to calculate reliability coefficients.

3) RelCalc (free):[3] available with Microsoft Excel. $\rho_C$ can be obtained without SEM software. Various multidimensional SEM reliability coefficients and various types of $\rho_C$ can be calculated based on the results of SEM software.

Derivation of formula[3]

Assumption 1. The observed score of an item consists of the true score of the item and the error of the item, which is independent of the true score:

$$X_i = T_i + e_i$$

Assumption 2. Errors are independent of each other:

$$\mathrm{Cov}(e_i, e_j) = 0 \quad (i \neq j)$$

Assumption 3 (essential tau-equivalence). The true score of an item consists of the true score common to all items and a constant specific to the item:

$$T_i = t + c_i$$

Let $T = \sum_{i=1}^{k} T_i$ denote the sum of the item true scores. The variance of $T$ is called the true score variance:

$$\sigma_T^2 = k^2 \sigma_t^2$$

Definition. Reliability is the ratio of true score variance to observed score variance:

$$\rho = \frac{\sigma_T^2}{\sigma_X^2}$$

The following relationships are established from the above assumptions:

$$\sigma_{ij} = \sigma_t^2 \ (i \neq j), \qquad \sigma_{ii} = \sigma_t^2 + \sigma_{e_i}^2$$

Therefore, the covariance matrix between items is as follows.

Observed covariance matrix

You can see that $\sigma_t^2$ equals the mean of the inter-item covariances. That is,

$$\bar{\sigma}_{ij} = \sigma_t^2$$

Let $\rho_T$ denote the reliability when the above assumptions are satisfied. $\rho_T$ is:

$$\rho_T = \frac{\sigma_T^2}{\sigma_X^2} = \frac{k^2 \sigma_t^2}{\sigma_X^2} = \frac{k^2 \bar{\sigma}_{ij}}{\sigma_X^2} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{ii}}{\sigma_X^2}\right)$$
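The derivation can be verified numerically at the population level: build the implied covariance matrix of essentially tau-equivalent items and compare $\rho_T$ with the defined reliability $\sigma_T^2/\sigma_X^2$ (the parameter values below are illustrative):

```python
import numpy as np

# population-level check of the derivation: essentially tau-equivalent items
k = 4
sigma_t2 = 1.0                                   # common true score variance
theta = np.array([0.4, 0.6, 0.8, 1.0])           # item error variances (may differ)
Sigma = sigma_t2 * np.ones((k, k)) + np.diag(theta)   # implied covariance matrix

sigma_X2 = Sigma.sum()                           # sigma_X^2
reliability = k**2 * sigma_t2 / sigma_X2         # sigma_T^2 / sigma_X^2 by definition
mean_offdiag = (Sigma.sum() - np.trace(Sigma)) / (k * (k - 1))
rho_T = k**2 * mean_offdiag / sigma_X2                        # systematic formula
rho_T_conv = k / (k - 1) * (1 - np.trace(Sigma) / sigma_X2)   # conventional formula
# all three quantities coincide under (essential) tau-equivalence
```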

  1. ^ Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. doi:10.1007/bf02310555
  2. ^ a b Cronbach, L. J. (1978). "Citation Classics" (PDF). Current Contents. 13: 263.
  3. ^ a b c d e f g h i j k l m n o p Cho, E. (2016). Making reliability reliable. Organizational Research Methods, 19(4), 651–682. doi:10.1177/1094428116656239
  4. ^ a b Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika, 74(1), 107–120. doi:10.1007/s11336-008-9101-0
  5. ^ a b c Green, S. B., & Yang, Y. (2009). Commentary on coefficient alpha: A cautionary tale. Psychometrika, 74(1), 121–135. doi:10.1007/s11336-008-9098-4
  6. ^ a b c d e f g h Revelle, W., & Zinbarg, R. E. (2009). Coefficients alpha, beta, omega, and the glb: Comments on Sijtsma. Psychometrika, 74(1), 145–154. doi:10.1007/s11336-008-9102-z
  7. ^ a b c d e f g h i j k Cho, E., & Kim, S. (2015). Cronbach’s coefficient alpha: Well known but poorly understood. Organizational Research Methods, 18(2), 207–230. doi:10.1177/1094428114555994
  8. ^ McNeish, D. (2017). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412–433. doi:10.1037/met0000144
  9. ^ a b c Raykov, T., & Marcoulides, G. A. (2017). Thanks coefficient alpha, we still need you! Educational and Psychological Measurement, 79(1), 200–210. doi:10.1177/0013164417725127
  10. ^ a b c d e f Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16 (3), 297–334. doi:10.1007/BF02310555
  11. ^ a b Cho, E. and Chun, S. (2018), Fixing a broken clock: A historical review of the originators of reliability coefficients including Cronbach's alpha. Survey Research, 19(2), 23–54.
  12. ^ a b c d Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10(4), 255–282. doi:10.1007/BF02288892
  13. ^ a b c Osburn, H. G. (2000). Coefficient alpha and related internal consistency reliability coefficients. Psychological Methods, 5(3), 343–355. doi:10.1037/1082-989X.5.3.343
  14. ^ a b Revelle, W. (1979). Hierarchical cluster analysis and the internal structure of tests. Multivariate Behavioral Research, 14(1), 57–74. doi:10.1207/s15327906mbr1401_4
  15. ^ a b Peterson, R. A., & Kim, Y. (2013). On the relationship between coefficient alpha and composite reliability. Journal of Applied Psychology, 98(1), 194–198. doi:10.1037/a0030767
  16. ^ Brown, W. (1910). Some experimental results in the correlation of mental abilities. British Journal of Psychology, 3(3), 296–322. doi:10.1111/j.2044-8295.1910.tb00207.x
  17. ^ Spearman, C. (1910). Correlation calculated from faulty data. British Journal of Psychology, 3(3), 271–295. doi:10.1111/j.2044-8295.1910.tb00206.x
  18. ^ Kelley, T. L. (1924). Note on the reliability of a test: A reply to Dr. Crum’s criticism. Journal of Educational Psychology, 15(4), 193–204. doi:10.1037/h0072471
  19. ^ a b Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2(3), 151–160. doi:10.1007/BF02288391
  20. ^ a b c Hoyt, C. (1941). Test reliability estimated by analysis of variance. Psychometrika, 6(3), 153–160. doi:10.1007/BF02289270
  21. ^ a b c Jackson, R. W. B., & Ferguson, G. A. (1941). Studies on the reliability of tests. University of Toronto Department of Educational Research Bulletin, 12, 132.
  22. ^ Edgerton, H. A., & Thomson, K. F. (1942). Test scores examined with the lexis ratio. Psychometrika, 7(4), 281–288. doi:10.1007/BF02288629
  23. ^ a b c Gulliksen, H. (1950). Theory of mental tests. John Wiley & Sons. doi:10.1037/13240-000
  24. ^ a b c d Cronbach, L. J. (1943). On estimates of test reliability. Journal of Educational Psychology, 34(8), 485–494. doi:10.1037/h0058608
  25. ^ Hoyt, C. J. (1941). Note on a simplified method of computing test reliability: Educational and Psychological Measurement, 1(1). doi:10.1177/001316444100100109
  26. ^ Novick, M. R., & Lewis, C. (1967). Coefficient alpha and the reliability of composite measurements. Psychometrika, 32(1), 1–13. doi:10.1007/BF02289400
  27. ^ a b Cronbach, L. J., & Shavelson, R. J. (2004). My Current Thoughts on Coefficient Alpha and Successor Procedures. Educational and Psychological Measurement, 64(3), 391–418. doi:10.1177/0013164404266386
  28. ^ Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98–104. doi:10.1037/0021-9010.78.1.98
  29. ^ Green, S. B., Lissitz, R. W., & Mulaik, S. A. (1977). Limitations of coefficient alpha as an Index of test unidimensionality. Educational and Psychological Measurement, 37(4), 827–838. doi:10.1177/001316447703700403
  30. ^ McDonald, R. P. (1981). The dimensionality of tests and items. The British Journal of Mathematical and Statistical Psychology, 34(1), 100–117. doi:10.1111/j.2044-8317.1981.tb00621.x
  31. ^ Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350–353. doi:10.1037/1040-3590.8.4.350
  32. ^ Ten Berge, J. M. F., & Sočan, G. (2004). The greatest lower bound to the reliability of a test and the hypothesis of unidimensionality. Psychometrika, 69(4), 613–625. doi:10.1007/BF02289858
  33. ^ a b Kopalle, P. K., & Lehmann, D. R. (1997). Alpha inflation? The impact of eliminating scale items on Cronbach’s alpha. Organizational Behavior and Human Decision Processes, 70(3), 189–197. doi:10.1006/obhd.1997.2702
  34. ^ Raykov, T. (2007). Reliability if deleted, not ‘alpha if deleted’: Evaluation of scale reliability following component deletion. The British Journal of Mathematical and Statistical Psychology, 60(2), 201–216. doi:10.1348/000711006X115954
  35. ^ a b Nunnally, J. C. (1967). Psychometric theory. New York, NY: McGraw-Hill.
  36. ^ a b Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
  37. ^ a b Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
  38. ^ a b Lance, C. E., Butts, M. M., & Michels, L. C. (2006). What did they really say? Organizational Research Methods, 9(2), 202–220. doi:10.1177/1094428105284919
  39. ^ Cho, E. (2020). A comprehensive review of so-called Cronbach's alpha. Journal of Product Research, 38(1), 9–20.
  40. ^ Loevinger, J. (1954). The attenuation paradox in test theory. Psychological Bulletin, 51(5), 493–504. doi:10.1002/j.2333-8504.1954.tb00485.x
  41. ^ Humphreys, L. (1956). The normal curve and the attenuation paradox in test theory. Psychological Bulletin, 53(6), 472–476. doi:10.1037/h0041091
  42. ^ Boyle, G. J. (1991). Does item homogeneity indicate internal consistency or item redundancy in psychometric scales? Personality and Individual Differences, 12(3), 291–294. doi:10.1016/0191-8869(91)90115-R
  43. ^ Streiner, D. L. (2003). Starting at the beginning: An introduction to coefficient alpha and internal consistency. Journal of Personality Assessment, 80(1), 99–103. doi:10.1207/S15327752JPA8001_18
  44. ^ Lee, H. (2017). Research Methodology (2nd ed.), Hakhyunsa.
  45. ^ a b Kamata, A., Turhan, A., & Darandari, E. (2003). Estimating reliability for multidimensional composite scale scores. Annual Meeting of American Educational Research Association, Chicago, April 2003, April, 1–27.
  46. ^ a b Tang, W., & Cui, Y. (2012). A simulation study for comparing three lower bounds to reliability. Paper Presented on April 17, 2012 at the AERA Division D: Measurement and Research Methodology, Section 1: Educational Measurement, Psychometrics, and Assessment., 1–25.
  47. ^ a b van der Ark, L. A., van der Palm, D. W., & Sijtsma, K. (2011). A latent class approach to estimating test-score reliability. Applied Psychological Measurement, 35(5), 380–392. doi:10.1177/0146621610392911
  48. ^ a b Dunn, T. J., Baguley, T., & Brunsden, V. (2014). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105(3), 399–412. doi:10.1111/bjop.12046
  49. ^ a b Peters, G. Y. (2014). The alpha and the omega of scale reliability and validity comprehensive assessment of scale quality. The European Health Psychologist, 1(2), 56–69.
  50. ^ a b Yang, Y., & Green, S. B. (2011). Coefficient alpha: A reliability coefficient for the 21st century? Journal of Psychoeducational Assessment, 29(4), 377–392. doi:10.1177/0734282911406668
  51. ^
  52. ^
