COMPAS (software)
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a case management and decision support tool developed and owned by Northpointe (now Equivant) used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.[1][2]
COMPAS has been used by the U.S. states of New York, Wisconsin, California, Florida's Broward County, and other jurisdictions.[3]
Risk Assessment
The COMPAS software uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism, and for pretrial misconduct. According to the COMPAS Practitioner's Guide, the scales were designed using behavioral and psychological constructs "of very high relevance to recidivism and criminal careers."[4]
- Pretrial Release Risk scale: Pretrial risk is a measure of the potential for an individual to fail to appear and/or to commit new felonies while on release. According to the research that informed the creation of the scale, "current charges, pending charges, prior arrest history, previous pretrial failure, residential stability, employment status, community ties, and substance abuse" are the most significant indicators affecting pretrial risk scores.[4]
- General Recidivism scale: The General Recidivism scale is designed to predict new offenses upon release, and after the COMPAS assessment is given. The scale uses an individual's criminal history and associates, drug involvement, and indications of juvenile delinquency.[5]
- Violent Recidivism scale: The Violent Recidivism score is meant to predict violent offenses following release. The scale uses data or indicators that include a person's "history of violence, history of non-compliance, vocational/educational problems, the person’s age-at-intake and the person’s age-at-first-arrest."[6]
The Violent Recidivism Risk Scale is calculated as follows:

s = a(−w) + a_first(−w) + h_violence·w + v_edu·w + h_nc·w

where s is the violent recidivism risk score, w is a weight multiplier, a is current age, a_first is the age at first arrest, h_violence is the history of violence, v_edu is the vocation education level, and h_nc is the history of noncompliance. The weight, w, is "determined by the strength of the item’s relationship to person offense recidivism that we observed in our study data."[7]
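The linear form described above can be sketched in code. Everything below is hypothetical: Northpointe's actual coefficients are proprietary, and the weights and feature encodings here are invented purely to illustrate the structure (age terms enter with a negative weight, the other items positively).

```python
# Illustrative sketch of the Violent Recidivism Risk Scale's linear form.
# All weights and input encodings are hypothetical, NOT Northpointe's.

def violent_risk_score(age, age_first_arrest, hist_violence,
                       vocation_edu, hist_noncompliance, w):
    """Linear combination: the two age terms are subtracted (older
    defendants score lower), the remaining items are added."""
    return (age * -w["age"]
            + age_first_arrest * -w["age_first"]
            + hist_violence * w["violence"]
            + vocation_edu * w["vocation"]
            + hist_noncompliance * w["noncompliance"])

# Hypothetical weights chosen only so the arithmetic is easy to follow.
weights = {"age": 0.05, "age_first": 0.04, "violence": 1.0,
           "vocation": 0.6, "noncompliance": 0.8}
score = violent_risk_score(age=30, age_first_arrest=18, hist_violence=2,
                           vocation_edu=3, hist_noncompliance=1,
                           w=weights)
```

Raw scores of this kind are then typically mapped onto a decile scale for presentation; that conversion is not modeled here.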
Critiques and legal rulings
In July 2016, the Wisconsin Supreme Court ruled that COMPAS risk scores can be considered by judges during sentencing, but the scores must be accompanied by warnings describing the tool's "limitations and cautions."[3]
A general critique of the use of proprietary software such as COMPAS is that, because the algorithms it uses are trade secrets, they cannot be examined by the public or by affected parties, which may be a violation of due process. Additionally, simple, transparent and more interpretable algorithms (such as linear regression) have been shown to make predictions approximately as well as the COMPAS algorithm.[8][9][10][11]
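The interpretability point can be illustrated with a transparent scoring model: when the coefficients are public, anyone can verify exactly how each input moves the prediction, which is impossible with a trade-secret algorithm. The model and coefficients below are hypothetical, not drawn from any published study.

```python
import math

# A transparent logistic-style model. Because every coefficient is
# visible, the effect of each input on the output can be audited.
# All coefficients are hypothetical, for illustration only.
COEFFS = {"intercept": -0.5, "prior_arrests": 0.30, "age": -0.04}

def recidivism_probability(prior_arrests, age):
    """Logistic transform of a linear score: 1 / (1 + e^(-z))."""
    z = (COEFFS["intercept"]
         + COEFFS["prior_arrests"] * prior_arrests
         + COEFFS["age"] * age)
    return 1 / (1 + math.exp(-z))

p = recidivism_probability(prior_arrests=3, age=25)
```

A defendant could trace the output of such a model line by line, which is the core of the due-process argument against opaque proprietary tools.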
Another general criticism of machine-learning-based algorithms is that, because they are data-dependent, biased data will likely yield biased results.[12][page needed]
Accuracy
In 2016, Julia Angwin was co-author of a ProPublica investigation of the algorithm.[13] The team found that “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend,” whereas COMPAS “makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes.”[13][8][14] They also found that only 20 percent of people predicted to commit violent crimes actually went on to do so.[13]
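The disparity ProPublica described is a difference in error rates between groups: the false positive rate (non-reoffenders labeled high risk) and the false negative rate (reoffenders labeled low risk). A minimal sketch of how such rates are computed follows; the toy data are invented for illustration and are not ProPublica's dataset.

```python
# Compute the two error rates at the center of the ProPublica analysis.
# labeled_high[i] is True if person i was scored high risk;
# reoffended[i] is True if person i actually reoffended.

def error_rates(labeled_high, reoffended):
    false_pos = sum(h and not r for h, r in zip(labeled_high, reoffended))
    false_neg = sum(not h and r for h, r in zip(labeled_high, reoffended))
    non_reoffenders = sum(not r for r in reoffended)
    reoffenders = sum(reoffended)
    return false_pos / non_reoffenders, false_neg / reoffenders

# Hypothetical toy data for one group of five people.
labeled_high = [True, True, False, False, True]
reoffended   = [False, True, False, True, False]
fpr, fnr = error_rates(labeled_high, reoffended)
```

ProPublica's claim was that the false positive rate was much higher for Black defendants while the false negative rate was much higher for white defendants; computing these rates separately per group makes that comparison.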
In a letter, Northpointe criticized ProPublica’s methodology and stated that: “[The company] does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model.”[13]
Another team at Community Resources for Justice, a criminal justice think tank, published a rebuttal of the investigation's findings.[15] Among several objections, the CRJ rebuttal concluded that ProPublica's results "contradict several comprehensive existing studies concluding that actuarial risk can be predicted free of racial and/or gender bias."[15]
A subsequent study has shown that COMPAS software is more accurate than individuals with little or no criminal justice expertise, but less accurate than pooled groups of such individuals.[16] The researchers found that: "On average, they got the right answer 63 percent of the time, and the group’s accuracy rose to 67 percent if their answers were pooled. COMPAS, by contrast, has an accuracy of 65 percent."[8]
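Pooling answers in this comparison amounts to taking a majority vote across respondents for each case and scoring the votes against the known outcomes. A toy sketch (the predictions and outcomes below are invented, not the study's data):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-respondent boolean guesses for one case."""
    return Counter(predictions).most_common(1)[0][0]

def pooled_accuracy(all_predictions, truths):
    """all_predictions[i] holds the individual guesses for case i;
    truths[i] is whether person i actually reoffended."""
    votes = [majority_vote(p) for p in all_predictions]
    return sum(v == t for v, t in zip(votes, truths)) / len(truths)

# Three hypothetical cases, three respondents each.
preds = [[True, True, False], [False, False, True], [True, False, False]]
truth = [True, False, False]
acc = pooled_accuracy(preds, truth)
```

Aggregating independent guesses tends to cancel individual errors, which is consistent with the study's pooled accuracy exceeding the average individual accuracy.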
Further reading
- Northpointe (15 March 2015). "A Practitioner's Guide to COMPAS Core" (PDF).
- Angwin, Julia; Larson, Jeff (2016-05-23). "Machine Bias". ProPublica. Retrieved 2019-11-21.
- Flores, Anthony; Lowenkamp, Christopher; Bechtel, Kristin. "False Positives, False Negatives, and False Analyses" (PDF). Community Resources for Justice. Retrieved 2019-11-21.
- Sample COMPAS Risk Assessment
See also
- Algorithmic bias
- Algorithmic governance
- Garbage in, garbage out
- Legal expert systems
- Loomis v. Wisconsin
- Criminal sentencing in the United States
References
- ^ Sam Corbett-Davies, Emma Pierson, Avi Feller and Sharad Goel (October 17, 2016). "A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear". The Washington Post. Retrieved January 1, 2018.
- ^ Aaron M. Bornstein (December 21, 2017). "Are Algorithms Building the New Infrastructure of Racism?". Nautilus. No. 55. Retrieved January 2, 2018.
- ^ a b Kirkpatrick, Keith (2017-01-23). "It's not the algorithm, it's the data". Communications of the ACM. 60 (2): 21–23. doi:10.1145/3022181.
- ^ a b Northpointe 2015, p. 27.
- ^ Northpointe 2015, p. 26.
- ^ Northpointe 2015, p. 28.
- ^ Northpointe 2015, p. 29.
- ^ a b c Yong, Ed (2018-01-17). "A Popular Algorithm Is No Better at Predicting Crimes Than Random People". Retrieved 2019-11-21.
- ^ Angelino, Elaine; Larus-Stone, Nicholas; Alabi, Daniel; Seltzer, Margo; Rudin, Cynthia (August 3, 2018). "Learning Certifiably Optimal Rule Lists for Categorical Data". arXiv:1704.01701 – via arXiv.org.
- ^ Robin A. Smith. "Opening the lid on criminal sentencing software". Duke Today, 19 July 2017.
- ^ Angelino, Elaine; Larus-Stone, Nicholas; Alabi, Daniel; Seltzer, Margo; Rudin, Cynthia (2017-04-06). "Learning Certifiably Optimal Rule Lists for Categorical Data". arXiv:1704.01701 [stat.ML].
- ^ O'Neil, Cathy (2016). Weapons of Math Destruction. ISBN 0553418815.
- ^ a b c d Angwin, Julia; Larson, Jeff (2016-05-23). "Machine Bias". ProPublica. Retrieved 2019-11-21.
- ^ Israni, Ellora (2017-10-26). "When an Algorithm Helps Send You to Prison (Opinion)". Retrieved 2019-11-21.
- ^ a b Flores, Anthony; Lowenkamp, Christopher; Bechtel, Kristin. "False Positives, False Negatives, and False Analyses" (PDF). Community Resources for Justice. Community Resources for Justice. Retrieved 2019-11-21.
- ^ Dressel, Julia; Farid, Hany (2018-01-17). "The accuracy, fairness, and limits of predicting recidivism". Science Advances. 4 (1). doi:10.1126/sciadv.aao5580. Retrieved 2019-11-21.