Receiver operating characteristic

ROC curve of three epitope predictors

In signal detection theory, a receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied. It is created by plotting the fraction of true positives out of the positives (TPR = true positive rate) vs. the fraction of false positives out of the negatives (FPR = false positive rate), at various threshold settings. (TPR is also known as sensitivity, and FPR is one minus the specificity, the true negative rate.)

ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making. The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battle fields and was soon introduced to psychology to account for perceptual detection of stimuli. ROC analysis since then has been used in medicine, radiology, biometrics, and other areas for many decades and is increasingly used in machine learning and data mining research.

The ROC is also known as a Relative Operating Characteristic curve, because it is a comparison of two operating characteristics (TPR & FPR) as the criterion changes.[1]

Basic concept

Terminology and derivations from a confusion matrix:

  • true positive (TP): eqv. with hit
  • true negative (TN): eqv. with correct rejection
  • false positive (FP): eqv. with false alarm, Type I error
  • false negative (FN): eqv. with miss, Type II error
  • sensitivity or true positive rate (TPR): eqv. with hit rate, recall; TPR = TP / P = TP / (TP + FN)
  • false positive rate (FPR): eqv. with fall-out; FPR = FP / N = FP / (FP + TN)
  • accuracy (ACC): ACC = (TP + TN) / (P + N)
  • specificity (SPC) or true negative rate (TNR): SPC = TN / N = TN / (FP + TN) = 1 − FPR
  • positive predictive value (PPV): eqv. with precision; PPV = TP / (TP + FP)
  • negative predictive value (NPV): NPV = TN / (TN + FN)
  • false discovery rate (FDR): FDR = FP / (FP + TP) = 1 − PPV
  • Matthews correlation coefficient (MCC): MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN))
  • F1 score: F1 = 2TP / (2TP + FP + FN)

Source: Fawcett (2006).

A classification model (classifier or diagnosis) is a mapping of instances into certain classes/groups. The classifier or diagnosis result can be a real value (continuous output), in which case the boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measurement); or it can be a discrete class label, indicating one of the classes.

Let us consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however, if the actual value is n, then it is said to be a false positive (FP). Conversely, a true negative (TN) has occurred when both the prediction outcome and the actual value are n, and a false negative (FN) has occurred when the prediction outcome is n while the actual value is p.

For a real-world example, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.

Let us define an experiment with P positive instances and N negative instances. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:

                           actual value
                    p                     n                     total
prediction   p'     true positive (TP)    false positive (FP)   P'
outcome      n'     false negative (FN)   true negative (TN)    N'
             total  P                     N
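
For readers who want to reproduce the tally programmatically, here is a minimal sketch in Python (the 'p'/'n' label encoding and the function name are illustrative choices, not part of the article):

```python
# Tally the four outcomes of a binary classifier into a 2x2 confusion
# matrix, using the 'p'/'n' labels from the table above.
def confusion_matrix(actual, predicted):
    tp = sum(a == 'p' and pr == 'p' for a, pr in zip(actual, predicted))
    fp = sum(a == 'n' and pr == 'p' for a, pr in zip(actual, predicted))
    fn = sum(a == 'p' and pr == 'n' for a, pr in zip(actual, predicted))
    tn = sum(a == 'n' and pr == 'n' for a, pr in zip(actual, predicted))
    return tp, fp, fn, tn

actual    = ['p', 'p', 'n', 'n', 'p', 'n']
predicted = ['p', 'n', 'p', 'n', 'p', 'n']
print(confusion_matrix(actual, predicted))  # (2, 1, 1, 2)
```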

ROC space

The ROC space and plots of the four prediction examples.

Several evaluation metrics can be derived from the contingency table (see the terminology list above). To draw an ROC curve, only the true positive rate (TPR) and the false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR measures how many correct positive results occur among all positive samples available during the test; the FPR measures how many incorrect positive results occur among all negative samples available during the test.

An ROC space is defined by FPR and TPR as x and y axes, respectively, and depicts the relative trade-off between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR equals 1 − specificity, the ROC graph is sometimes called the sensitivity vs. (1 − specificity) plot. Each prediction result or instance of a confusion matrix represents one point in the ROC space.

The best possible prediction method would yield a point in the upper left corner, coordinate (0,1), of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a perfect classification. A completely random guess would give a point along the diagonal line (the so-called line of no discrimination) from the bottom left to the top right corner, regardless of the positive and negative base rates. An intuitive example of random guessing is a decision by flipping a coin; as the sample size increases, a fair coin's ROC point converges towards (0.5, 0.5).

The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than random), points below the line poor results (worse than random). Note that the output of a consistently poor predictor could simply be inverted to obtain a good predictor.

Let us look into four prediction results from 100 positive and 100 negative instances:

A:                       B:                       C:                       C′:
TP=63  FP=28 | 91        TP=77  FP=77 | 154       TP=24  FP=88 | 112       TP=76  FP=12 | 88
FN=37  TN=72 | 109       FN=23  TN=23 | 46        FN=76  TN=12 | 88        FN=24  TN=88 | 112
  100    100 | 200         100    100 | 200         100    100 | 200         100    100 | 200

TPR = 0.63               TPR = 0.77               TPR = 0.24               TPR = 0.76
FPR = 0.28               FPR = 0.77               FPR = 0.88               FPR = 0.12
ACC = 0.68               ACC = 0.50               ACC = 0.18               ACC = 0.82
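
The rates follow mechanically from the counts; a short sketch (variable names ours):

```python
# Recompute TPR, FPR and ACC for the four example tables A, B, C, C'.
examples = {
    "A":  (63, 28, 37, 72),   # (TP, FP, FN, TN)
    "B":  (77, 77, 23, 23),
    "C":  (24, 88, 76, 12),
    "C'": (76, 12, 24, 88),
}
for name, (tp, fp, fn, tn) in examples.items():
    p, n = tp + fn, fp + tn               # 100 positives, 100 negatives
    tpr = tp / p                          # sensitivity
    fpr = fp / n                          # 1 - specificity
    acc = (tp + tn) / (p + n)
    print(f"{name}: TPR={tpr}, FPR={fpr}, ACC={acc}")
```

(The accuracy for A prints as 0.675, which the table above rounds to 0.68.)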

Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows the best predictive power among A, B, and C. The result of B lies on the random-guess line (the diagonal), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point (0.5, 0.5), the resulting method C′ is even better than A. This mirrored method simply reverses the predictions of whatever method or test produced the C contingency table: when the C method predicts p or n, the C′ method predicts n or p, respectively. Although the original C method performs worse than chance, simply reversing its decisions yields a new method C′ with positive predictive power, and in this example C′ performs best of all.

The closer a result from a contingency table is to the upper left corner, the better it predicts; but the distance from the random-guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random-guess line.

Curves in ROC space

Oftentimes, objects are classified based on a continuous random variable. For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the threshold (black vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve. The actual shape of the curve is determined by how much overlap the two distributions have.
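
A sketch of this example follows, assuming (since the article does not state them) unit standard deviations for both groups; sweeping the threshold traces out (FPR, TPR) pairs:

```python
# Trace the ROC of the blood-protein example: healthy ~ N(1, 1) and
# diseased ~ N(2, 1) g/dL; a sample is called 'diseased' if its level
# exceeds the threshold. norm.sf is the survival function P(X > t).
from scipy.stats import norm

for threshold in (0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    tpr = norm.sf(threshold, loc=2.0, scale=1.0)  # diseased above threshold
    fpr = norm.sf(threshold, loc=1.0, scale=1.0)  # healthy above threshold
    print(f"threshold={threshold:.1f}: FPR={fpr:.3f}, TPR={tpr:.3f}")
# Raising the threshold lowers both rates: a leftward move along the curve.
```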

Further interpretations

Sometimes, the ROC is used to generate a summary statistic. Common versions are:

  • the intercept of the ROC curve with the line at 90 degrees to the no-discrimination line (also called Youden's J statistic; written out after this list)
  • the area between the ROC curve and the no-discrimination line [citation needed]
  • the area under the ROC curve, or "AUC" ("Area Under Curve"), or A' (pronounced "a-prime"),[2] or "c-statistic".[3]
  • d' (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal-alone conditions, divided by their standard deviation, under the assumption that both these distributions are normal with the same standard deviation. Under these assumptions, it can be proved that the shape of the ROC depends only on d'.
  • C (Concordance) Statistic: This is a rank order statistic related to Somers' D statistic. It is commonly used in the medical literature to quantify the capacity of the estimated risk score in discriminating among subjects with different event times. It varies between 0.5 and 1.0 with higher values indicating a better predictive model. For binary outcomes C is identical to the area under the receiver operating characteristic curve. Although bootstrapping to generate confidence intervals is possible, the power of testing the differences between two (or more) C statistics is low and alternative methods such as logistic regression should probably be used.[4] The C statistic has been generalized for use in survival analysis[5] and it is also possible to combine this with statistical weighting systems. Other extensions have been proposed.[6][7]
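
Written out, the first of these summary statistics takes the standard form below (the threshold variable t is our notation):

```latex
J = \max_t \bigl(\mathrm{TPR}(t) - \mathrm{FPR}(t)\bigr)
  = \max_t \bigl(\mathrm{sensitivity}(t) + \mathrm{specificity}(t) - 1\bigr)
```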

However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm.

Detection Error Tradeoff graph

Example DET graph

An alternative to the ROC curve is the Detection Error Tradeoff (DET) graph, which plots the False Negative Rate (missed detections) vs. the False Positive Rate (false alarms) on non-linearly transformed x- and y-axes. This alternative spends more graph area on the region of interest. Most of the ROC area is of little interest; one primarily cares about the region tight against the y-axis and the top left corner.
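
As an illustrative sketch (not from the article), the non-linear warping is typically the probit transform, under which Gaussian score distributions give straight DET lines; the d′ = 1 detector below is a hypothetical example:

```python
# Plot a DET curve on probit-warped axes for an equal-variance Gaussian
# detector with d' = 1: the transformed curve is a straight line.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

fpr = np.linspace(0.001, 0.999, 500)
fnr = norm.cdf(norm.ppf(1.0 - fpr) - 1.0)   # miss rate of the d' = 1 detector
plt.plot(norm.ppf(fpr), norm.ppf(fnr))      # probit-transformed axes
plt.xlabel("probit(false positive rate)")
plt.ylabel("probit(false negative rate)")
plt.savefig("det_example.png")
```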

Z-transformation

If a z-transformation is applied to the ROC curve, i.e. each rate is mapped through the inverse of the standard normal cumulative distribution function (mean zero, standard deviation one), the curve becomes a straight line.[8] In memory strength theory, one must assume that the zROC is not only linear but has a slope of 1.0. The assumed normal distributions of the strengths of targets (studied objects that the subjects need to recall) and lures (non-studied objects that the subjects attempt to recall) are what make the zROC linear.

The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than that of the lure strength distribution, the slope will be smaller than 1.0. In most studies the zROC slopes have been found to fall consistently below 1, usually between 0.5 and 0.9.[9] Many experiments yielded a zROC slope of 0.8; a slope of 0.8 implies that the variability of the target strength distribution is 25% larger than the variability of the lure strength distribution.[10]
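
The 25% figure follows from the unequal-variance Gaussian model. Writing lure strengths as N(0, σ_l²) and target strengths as N(μ_t, σ_t²) (our notation), the zROC line and its slope are:

```latex
z(\mathrm{HR}) = \frac{\mu_t}{\sigma_t} + \frac{\sigma_l}{\sigma_t}\, z(\mathrm{FAR}),
\qquad \mathrm{slope} = \frac{\sigma_l}{\sigma_t}
```

so a slope of 0.8 gives σ_t = σ_l / 0.8 = 1.25 σ_l, i.e. 25% more variability for targets.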

Another variable used is d′, a measure of sensitivity for yes-no recognition that can easily be expressed in terms of z-values. d′ measures sensitivity in that it quantifies the degree of overlap between the target and lure distributions: it is the mean of the target distribution minus the mean of the lure distribution, expressed in standard deviation units. For a given hit rate and false alarm rate, d′ can be calculated as d′ = z(hit rate) − z(false alarm rate). Although d′ is a commonly used parameter, it is only relevant when strictly adhering to the very strong assumptions of strength theory made above.[11]
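
A minimal sketch of this calculation, with scipy's norm.ppf as the z-transformation (the example rates are ours):

```python
# d' = z(hit rate) - z(false alarm rate), where z is the inverse of the
# standard normal cumulative distribution function.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hits at 84% and false alarms at 16% are each about one SD from their
# distribution means, so d' comes out near 2.
print(d_prime(0.84, 0.16))  # ~1.99
```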

The z-transformation of an ROC curve is linear, as assumed above, except in special situations. The Yonelinas familiarity-recollection model is a two-dimensional account of recognition memory. Instead of the subject simply answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the original ROC curve. What changes is the added parameter for recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, the zROC would have a predicted slope of 1. However, when the recollection component is added, the zROC curve becomes concave up with a decreased slope. This difference in shape and slope results from an added element of variability due to some items being recollected. Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve has a slope close to 1.0.[12]

Area Under Curve

The Area Under Curve (AUC), when using normalized units, is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').[13] It can be shown that the area under the ROC curve is closely related to the Mann–Whitney U,[14][15] which tests whether positives are ranked higher than negatives. It is also equivalent to the Wilcoxon test of ranks.[15] The AUC is related to the Gini coefficient ($G_1$) by the formula $G_1 = 2\,\mathrm{AUC} - 1$, where:

$G_1 = 1 - \sum_{k=1}^{n} (X_k - X_{k-1})(Y_k + Y_{k-1})$ [16]

In this way, it is possible to calculate the AUC by using an average of a number of trapezoidal approximations.
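
A sketch of both routes to the AUC mentioned above: the trapezoidal sum over ROC points and the Mann–Whitney-style count of correctly ordered positive/negative score pairs (function names and example data are ours):

```python
# Two equivalent AUC computations: (1) trapezoidal integration of the
# ROC points, (2) the Mann-Whitney statistic: the fraction of
# positive/negative score pairs ranked correctly (ties count 1/2).
import numpy as np

def auc_trapezoid(fpr, tpr):
    fpr, tpr = np.asarray(fpr), np.asarray(tpr)
    order = np.argsort(fpr)
    f, t = fpr[order], tpr[order]
    return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2.0))

def auc_rank(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc_rank([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ~ 0.889
```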

It is also common to calculate the Area Under the Convex Hull (AUCH), since any point on the line segment between two prediction results can be achieved by randomly using one or the other system with probabilities proportional to the relative length of the opposite component of the segment.[17] It is also possible to invert concavities: just as the worse solution in the figure can be reflected to become a better solution, concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit the data.[18]

The machine learning community most often uses the ROC AUC statistic for model comparison.[19] However, this practice has recently been questioned based upon new machine learning research that shows that the AUC is quite noisy as a classification measure[20] and has some other significant problems in model comparison.[21][22] A reliable and valid AUC estimate can be interpreted as the probability that the classifier will assign a higher score to a randomly chosen positive example than to a randomly chosen negative example. However, the critical research[20][21] suggests frequent failures in obtaining reliable and valid AUC estimates. Thus, the practical value of the AUC measure has been called into question,[22] raising the possibility that the AUC may actually introduce more uncertainty into machine learning classification accuracy comparisons than resolution.

One recent explanation of the problem with ROC AUC is that reducing the ROC curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted, not the performance of an individual system, and that it ignores the possibility of concavity repair. Related alternative measures such as Informedness[23] or DeltaP are therefore recommended.[24] These measures are essentially equivalent to the Gini for a single prediction point, with DeltaP′ = Informedness = 2·AUC − 1, whilst DeltaP = Markedness represents the dual (viz. predicting the prediction from the real class), and their geometric mean is the Matthews correlation coefficient.[23] Alternatively, ROC AUC may be divided into two components: its Certainty (ROC-Cert), which corresponds to the single-point AUC, and its Consistency (ROC-Con), which corresponds to the multipoint AUC minus the single-point AUC. The pair of measures (ROC-ConCert) is argued to capture some of the additional information that ROC adds to the single-point measures (noting that it can also be applied to ROCH, and should be if it is to capture the real potential of the system whose parameterization is being investigated).[25]
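
A sketch of these single-point relations, evaluated on example A from the ROC-space section (the ROC-Cert/ROC-Con decomposition is not shown):

```python
# Informedness = TPR - FPR (= DeltaP' = 2*AUC - 1 at a single point);
# Markedness = PPV + NPV - 1 is its dual; the geometric mean of the two
# recovers the Matthews correlation coefficient when both are positive.
import math

tp, fp, fn, tn = 63, 28, 37, 72
informedness = tp / (tp + fn) - fp / (fp + tn)      # 0.35
markedness   = tp / (tp + fp) + tn / (tn + fn) - 1  # ~0.353
mcc = math.sqrt(informedness * markedness)          # ~0.351
print(informedness, markedness, mcc)
```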

Other measures

In engineering, the area between the ROC curve and the no-discrimination line is often preferred, due to its useful mathematical properties as a non-parametric statistic.[citation needed] This area is often simply known as the discrimination. In psychophysics, the sensitivity index d′, ΔP′ or DeltaP′ is the most commonly used measure,[26] and it is equivalent to twice the discrimination, being equal also to Informedness, deskewed WRAcc, and the Gini coefficient in the single-point case (single parameterization or single system).[27] These measures all have the advantage that 0 represents chance performance, whilst Informedness = 1 represents perfect performance and −1 represents the "perverse" case of full informedness used to always give the wrong response; Informedness has been shown to be the probability of making an informed decision (rather than guessing).[28] ROC AUC and AUCH have the related property that chance performance has a fixed value, but that value is 0.5; the normalization 2·AUC − 1 brings this to 0 and allows Informedness and Gini to be interpreted as Kappa statistics. Informedness, however, has been shown to have desirable characteristics for machine learning versus other common definitions of Kappa, such as Cohen's Kappa and Fleiss's Kappa.[29]

The illustration at the top right of the page shows the use of ROC graphs for the discrimination between the quality of different algorithms for predicting epitopes. The graph shows that if one detects at least 60% of the epitopes in a virus protein, at least 30% of the output is falsely marked as epitopes.

Sometimes it can be more useful to look at a specific region of the ROC Curve rather than at the whole curve. It is possible to compute partial AUC.[30] For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests.[31] Another common approach for classification problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.[32]
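
A minimal sketch of a partial AUC restricted to a low-FPR window (the 0.1 cutoff, the example curve, and the linear interpolation are our illustrative choices):

```python
# Trapezoidal area under the ROC curve restricted to FPR <= max_fpr.
# Assumes fpr is sorted in increasing order, as from a threshold sweep.
import numpy as np

def partial_auc(fpr, tpr, max_fpr=0.1):
    grid = np.linspace(0.0, max_fpr, 200)
    vals = np.interp(grid, fpr, tpr)   # interpolate TPR on the window
    return float(np.sum(np.diff(grid) * (vals[1:] + vals[:-1]) / 2.0))

print(partial_auc([0.0, 0.1, 0.5, 1.0], [0.0, 0.6, 0.9, 1.0]))  # ~0.03
```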

History

The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal detection theory.[33] Following the attack on Pearl Harbor in 1941, the United States Army began new research to increase the accuracy with which Japanese aircraft could be detected from their radar signals.

In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals.[33] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic tests.[34][35] ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to evaluate new radiology techniques.[36] In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models.

ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.[37]


General references

  • Zhou, Xiao-Hua; Obuchowski, Nancy A.; McClish, Donna K. (2002). Statistical Methods in Diagnostic Medicine. New York, NY: Wiley & Sons. ISBN 978-0-471-34772-9.


References

  1. Swets, John A. (1996). Signal Detection Theory and ROC Analysis in Psychology and Diagnostics: Collected Papers. Mahwah, NJ: Lawrence Erlbaum Associates.
  2. Fogarty, James; Baker, Ryan S.; Hudson, Scott E. (2005). "Case studies in the use of ROC curve analysis for sensor-based estimates in human computer interaction". Proceedings of Graphics Interface 2005. Waterloo, ON: Canadian Human-Computer Communications Society.
  3. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.).
  4. LaValley, M. P. (2008). "Logistic regression". Circulation. 117: 2395–2399. doi:10.1161/CIRCULATIONAHA.106.682658.
  5. Heagerty, P. J.; Zheng, Y. (2005). "Survival model predictive accuracy and ROC curves". Biometrics. 61: 92–105.
  6. Gonen, M.; Heller, G. (2005). "Concordance probability and discriminatory power in proportional hazards regression". Biometrika. 92: 965–970.
  7. Chambless, L. E.; Diao, G. (2006). "Estimation of time-dependent area under the ROC curve for long-term risk prediction". Statistics in Medicine. 25: 3474–3486.
  8. MacMillan, Neil A.; Creelman, C. Douglas (2005). Detection Theory: A User's Guide (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 1-4106-1114-0.
  9. Glanzer, Murray; Kim, Kisok; Hilford, Andy; Adams, John K. (1999). "Slope of the receiver-operating characteristic in recognition memory". Journal of Experimental Psychology: Learning, Memory, and Cognition. 25 (2): 500–513.
  10. Ratcliff, Roger; McKoon, Gail; Tindall, Michael (1994). "Empirical generality of data from recognition memory receiver-operating characteristic functions and implications for the global memory models". Journal of Experimental Psychology: Learning, Memory, and Cognition. 20: 763–785.
  11. Zhang, Jun; Mueller, Shane T. (2005). "A note on ROC analysis and non-parametric estimate of sensitivity". Psychometrika. 70: 203–212.
  12. Yonelinas, Andrew P.; Kroll, Neal E. A.; Dobbins, Ian G.; Lazzara, Michele; Knight, Robert T. (1998). "Recollection and familiarity deficits in amnesia: Convergence of remember-know, process dissociation, and receiver operating characteristic data". Neuropsychology. 12: 323–339.
  13. Fawcett, Tom (2006). "An introduction to ROC analysis". Pattern Recognition Letters. 27: 861–874.
  14. Hanley, James A.; McNeil, Barbara J. (1982). "The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve". Radiology. 143 (1): 29–36. PMID 7063747.
  15. Mason, Simon J.; Graham, Nicholas E. (2002). "Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: Statistical significance and interpretation". Quarterly Journal of the Royal Meteorological Society. 128: 2145–2166.
  16. Hand, David J.; Till, Robert J. (2001). "A simple generalization of the area under the ROC curve to multiple class classification problems". Machine Learning. 45: 171–186.
  17. Provost, F.; Fawcett, T. (2001). "Robust classification for imprecise environments". Machine Learning. 44: 203–231.
  18. Flach, P. A.; Wu, S. (2005). "Repairing concavities in ROC curves". Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI'05). pp. 702–707.
  19. Hanley, James A.; McNeil, Barbara J. (1983). "A method of comparing the areas under receiver operating characteristic curves derived from the same cases". Radiology. 148 (3): 839–843. PMID 6878708.
  20. Hanczar, Blaise; Hua, Jianping; Sima, Chao; Weinstein, John; Bittner, Michael; Dougherty, Edward R. (2010). "Small-sample precision of ROC-related estimates". Bioinformatics. 26 (6): 822–830.
  21. Lobo, Jorge M.; Jiménez-Valverde, Alberto; Real, Raimundo (2008). "AUC: a misleading measure of the performance of predictive distribution models". Global Ecology and Biogeography. 17: 145–151.
  22. Hand, David J. (2009). "Measuring classifier performance: A coherent alternative to the area under the ROC curve". Machine Learning. 77: 103–123.
  23. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Factor to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
  24. Powers, David M. W. (2012). "The Problem of Area Under the Curve". International Conference on Information Science and Technology.
  25. Powers, David M. W. (2012). "ROC-ConCert". Spring Conference on Engineering Technology.
  26. Perruchet, P.; Peereman, R. (2004). "The exploitation of distributional information in syllable processing". Journal of Neurolinguistics. 17: 97–119.
  27. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Factor to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
  28. Powers, David M. W. (2003). "Recall and Precision versus the Bookmaker". Proceedings of the International Conference on Cognitive Science (ICSC-2003), Sydney, Australia. pp. 529–534.
  29. Powers, David M. W. (2012). "The Problem with Kappa". Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012), Joint ROBUS-UNSUP Workshop.
  30. McClish, Donna Katzman (1989). "Analyzing a Portion of the ROC Curve". Medical Decision Making. 9 (3): 190–195. doi:10.1177/0272989X8900900307. PMID 2668680.
  31. Dodd, Lori E.; Pepe, Margaret S. (2003). "Partial AUC Estimation and Regression". Biometrics. 59 (3): 614–623. doi:10.1111/1541-0420.00071. PMID 14601762.
  32. Karplus, Kevin (2011). "Better than Chance: the importance of null models". Proceedings of the First International Workshop on Pattern Recognition in Proteomics, Structural Biology and Bioinformatics (PR PS BB 2011). University of California, Santa Cruz.
  33. Green, David M.; Swets, John A. (1966). Signal Detection Theory and Psychophysics. New York, NY: John Wiley and Sons. ISBN 0-471-32420-5.
  34. Zweig, Mark H.; Campbell, Gregory (1993). "Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine". Clinical Chemistry. 39 (8): 561–577. PMID 8472349.
  35. Pepe, Margaret S. (2003). The Statistical Evaluation of Medical Tests for Classification and Prediction. New York, NY: Oxford University Press.
  36. Obuchowski, Nancy A. (2003). "Receiver operating characteristic curves and their use in radiology". Radiology. 229 (1): 3–8. doi:10.1148/radiol.2291010898. PMID 14519861.
  37. Spackman, Kent A. (1989). "Signal detection theory: Valuable tools for evaluating inductive learning". Proceedings of the Sixth International Workshop on Machine Learning. San Mateo, CA: Morgan Kaufmann. pp. 160–163.