User:Friend of facts2/sandbox

From Wikipedia, the free encyclopedia

Unit-Weighted Regression Update[edit]



Rank-biserial correlation[edit]

Rank correlation is used as the effect size in some common nonparametric methods of statistical inference, including the Mann–Whitney U test and the Wilcoxon signed-rank test.


Correction for Wendt Formula[edit]

The formula uses only the test value of U from the Mann-Whitney U test and the sample sizes of the two groups: r = 1 − (2U)/(n1 × n2).

Kerby simple difference formula[edit]

The Kerby simple difference formula states that the rank correlation can be expressed as the proportion of favorable evidence (f) minus the proportion of unfavorable evidence (u): r = f − u.

  • Kerby, Dave S. (2014). "The Simple Difference Formula: An Approach to Teaching Nonparametric Correlation". Comprehensive Psychology. 3 (1): 11.IT.3.1. doi:10.2466/11.IT.3.1.

Rank correlation[edit]

In some studies, the test statistic reported is T. In such cases, Dave Kerby (2014) has shown that the rank correlation can be computed with the Kerby simple difference formula.[1] This formula states that the rank correlation is the simple difference between the proportion of favorable evidence and the proportion of unfavorable evidence. For the signed-rank test, the evidence comes from the two rank sums. Knowing that T is the smaller rank sum allows for computing the rank correlation. To continue with the above example, the total rank sum is 45, and the smaller rank sum is 18; thus, T = 18, and the other rank sum is 27. The two rank-sum proportions are 27/45 = 60% and 18/45 = 40%. By the Kerby simple difference formula, the rank correlation is the difference between the two proportions (.60 minus .40), hence r = .20.

Example and interpretation[edit]

The maximum value for the correlation is r = 1, which means that 100% of the pairs favor the hypothesis. A correlation of r = 0 indicates that half the pairs favor the hypothesis and half do not. In other words, the groups do not differ in ranks, so there is no evidence that the two groups differ. An effect size of r = 0 can be said to describe no relationship between group membership and the members' rank.

The Rank-biserial correlation[edit]

Gene Glass (1965) noted that the rank-biserial can be derived from Spearman's rho: "One can derive a coefficient defined on X, the dichotomous variable, and Y, the ranking variable, which estimates Spearman's rho between X and Y in the same way that biserial r estimates Pearson's r between two normal variables" (p. 91).

Dave Kerby (2014) recommended the rank-biserial as a way to introduce students to rank correlation, because the general logic can be explained at an introductory level. The rank-biserial is the correlation used with the Mann-Whitney U test, a method commonly covered in introductory college courses on statistics. The data for this test consist of two groups, and for each member of the groups, the outcome is ranked for the study as a whole. Kerby shows that this rank correlation can be expressed in terms of two concepts: the percent of data that support a stated hypothesis, and the percent of data that do not support it. The correlation is the simple difference between the two.

To illustrate the computation, suppose a coach trains long-distance runners for one month using two methods. Group A has 5 runners, and Group B has 4 runners. The stated hypothesis is that method A produces faster runners. The race to assess the results finds that the runners from Group A do indeed run faster, with the following ranks: 1, 2, 3, 4, and 6. The slower runners from Group B thus have ranks of 5, 7, 8, and 9.

The analysis is conducted on pairs, defined as a member of one group compared to a member of the other group. For example, the fastest runner in the study is a member of four pairs: (1,5), (1,7), (1,8), and (1,9). All four of these pairs support the hypothesis, because in each pair the runner from Group A is faster than the runner from Group B. There are a total of 20 pairs, and 19 pairs support the hypothesis. The only pair that does not support the hypothesis is the one formed by the runners with ranks 5 and 6, because in this pair the runner from Group B had the faster time. By the Kerby simple difference formula, 95% of the data support the hypothesis (19 of 20 pairs) and 5% do not (1 of 20 pairs), so the rank correlation is r = .95 − .05 = .90.
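The pair counting above can be sketched in Python (a minimal illustration; the function name is ours, not from the source):

```python
from itertools import product

def rank_biserial_from_ranks(ranks_a, ranks_b):
    """Kerby simple difference formula: r = f - u, where f and u are the
    proportions of cross-group pairs that are favorable and unfavorable
    to the hypothesis that Group A outranks Group B (a lower rank number
    means a faster runner)."""
    pairs = list(product(ranks_a, ranks_b))
    favorable = sum(1 for a, b in pairs if a < b)
    unfavorable = sum(1 for a, b in pairs if a > b)
    return favorable / len(pairs) - unfavorable / len(pairs)

# Group A ranks: 1, 2, 3, 4, 6; Group B ranks: 5, 7, 8, 9
r = rank_biserial_from_ranks([1, 2, 3, 4, 6], [5, 7, 8, 9])  # 19/20 - 1/20 = .90
```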

References[edit]

  • Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.CP.3.1. link to pdf
  • Glass, G. V. (1965). A ranking variable analogue of biserial correlation: implications for short-cut item analysis. Journal of Educational Measurement, 2(1), 91–95. DOI: 10.1111/j.1745-3984.1965.tb00396.x

  • Everitt, B. S. (2002), The Cambridge Dictionary of Statistics, Cambridge: Cambridge University Press, ISBN 0-521-81099-X
  • Diaconis, P. (1988), Group Representations in Probability and Statistics, Lecture Notes-Monograph Series, Hayward, CA: Institute of Mathematical Statistics, ISBN 0-940600-14-5
  • Kendall, M. G. (1970), Rank Correlation Methods, London: Griffin, ISBN 0-85264-199-0

See Also[edit]

Brooks, M.E., Dalal, D.K., & Nolan, K.P. (2013 online before publication). Are common language effect sizes easier to understand than traditional effect sizes? Journal of Applied Psychology doi:10.1037/a0034745

McGraw, K.O., & Wong, S.P. (1992). A common language effect size statistic. Psychological Bulletin, volume 111(2), pages 361-365. doi:10.1037/0033-2909.111.2.361

Signed-Rank Test[edit]

Effect Size[edit]

To compute an effect size for the signed-rank test, one can use the rank correlation.

If the test statistic W is reported, Kerby (2014) has shown that the rank correlation r is equal to W divided by the total rank sum S, or r = W/S.[1] Using the above example, the test statistic is W = 9, and the sample size of 9 has a total rank sum of S = (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9) = 45. Hence, the correlation is 9/45, so r = .20.

If the test statistic T is reported, an equivalent way to compute the effect size is with the difference in proportion between the two rank sums, which is the Kerby (2014) simple difference formula.[1] To continue with the current example, the sample size is 9, so the total rank sum is 45. T is the smaller of the two rank sums, so T is 3 + 4 + 5 + 6 = 18. From this information alone, the remaining rank sum can be computed, because it is the total sum minus T, or in this case 45 - 18 = 27. Next, the two rank proportions are 27/45 = 60% and 18/45 = 40%. Finally, the correlation is the difference between the two proportions (.60 minus .40), hence r = .20.
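Both routes to the effect size can be sketched in Python (an illustrative sketch; the helper names are ours):

```python
def signed_rank_r_from_W(W, n):
    """r = W / S, where S = n(n+1)/2 is the total rank sum."""
    return W / (n * (n + 1) // 2)

def signed_rank_r_from_T(T, n):
    """Kerby simple difference formula: the larger and smaller rank sums
    as proportions of the total rank sum, then their difference."""
    total = n * (n + 1) // 2
    return (total - T) / total - T / total

# n = 9 gives a total rank sum of 45; W = 9 and T = 18 both yield r = .20
```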


Mann-Whitney U test: Effect Size[edit]

It is standard practice among scientists to report an effect size for an inferential test.[2][3]

Common language effect size[edit]

One method of reporting the effect size for the Mann-Whitney U test is with the common language effect size.[4] As a sample statistic, the common language effect size is computed by forming all possible pairs between the two groups, then finding the proportion of pairs that support a hypothesis.[5] To illustrate, in a study with a sample of ten hares and ten tortoises, the total number of pairs is ten times ten, or 100 pairs of hares and tortoises. Suppose the results show that the hare ran faster than the tortoise in 90 of the 100 sample pairs; in that case, the sample common language effect size is 90%. This sample value is an unbiased estimator of the population value, so the sample suggests that the best estimate of the common language effect size in the population is 90%.[6]

Rank-biserial correlation[edit]

An effect size related to the common language effect size is the rank-biserial correlation. This measure was introduced by Cureton as an effect size for the Mann-Whitney U test.[7] That is, there are two groups, and scores for the groups have been converted to ranks. The Kerby simple difference formula[8] computes the rank-biserial correlation from the common language effect size. Letting f be the proportion of pairs favorable to the hypothesis (the common language effect size), and letting u be the proportion of pairs not favorable, the rank-biserial r is the simple difference between the two proportions: r = f − u. For example, if the common language effect size is 60%, then the rank-biserial r equals 60% minus 40%, or r = .20. The Kerby formula is directional, with positive values indicating that the results support the hypothesis.

A non-directional formula for the rank-biserial correlation was provided by Wendt, such that the correlation is always positive. [9] The advantage of the Wendt formula is that it can be computed with information that is readily available in published papers. The formula uses only the test value of U from the Mann-Whitney U test, and the sample sizes of the two groups: r = 1 – (2U)/ (n1 * n2).

Mann-Whitney: Rank-biserial correlation[edit]

A second method of reporting the effect size for the Mann-Whitney U test is with the rank-biserial correlation. Edward Cureton introduced and named the measure. [10] Like other correlational measures, the rank-biserial correlation can range from minus one to plus one, with a value of zero indicating no relationship. Dave Kerby [11] introduced the simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs that support the hypothesis minus the proportion that do not. Stated another way, the correlation is the difference between the common language effect size and its complement; for example, if the common language effect size is 90%, then the rank-biserial correlation is 90% minus 10%; so the rank-biserial r = .80.

Hans Wendt[12] described a formula to compute the rank-biserial from the Mann-Whitney U and the sample sizes of each group: r = 1 − (2U)/(n1 × n2). This formula is useful when the raw data are not available but a published report is, because U and the sample sizes are routinely reported. Using the example above with 90 pairs that favor the hares and 10 pairs that favor the tortoise, U is the smaller of the two, so U = 10. The Wendt formula is then r = 1 − (2 × 10)/(10 × 10) = .80, which of course is the same result as with the Kerby simple difference formula.



An example can illustrate the use of the two formulas. Consider a study of twenty older adults, with ten in the treatment group and ten in the control group; hence, there are ten times ten, or 100 pairs. The health program uses diet, exercise, and supplements to improve memory, and memory is measured by a standardized test. A Mann-Whitney U test shows that the adult in the treatment group had the better memory in 75 of the 100 pairs, and the poorer memory in 25 pairs. The Mann-Whitney U is the smaller of 75 and 25, so U = 25. The correlation between treatment and memory by the Kerby simple difference formula is r = (75/100) − (25/100) = .50. The correlation by the Wendt formula is r = 1 − (2 × 25)/(10 × 10) = .50.
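The two formulas can be checked against each other in Python (a minimal sketch; the function names are ours):

```python
def wendt_r(U, n1, n2):
    """Wendt formula: r = 1 - 2U / (n1 * n2)."""
    return 1 - (2 * U) / (n1 * n2)

def kerby_r(favorable, unfavorable):
    """Kerby simple difference formula: r = f - u, with f and u taken
    as proportions of all pairs."""
    n = favorable + unfavorable
    return favorable / n - unfavorable / n

# Memory study: 75 favorable pairs, 25 unfavorable, so U = 25;
# both formulas give r = .50
```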







Correctional Psychology[edit]

Correctional psychology is an area of psychology that focuses on applying psychology to a correctional setting. According to researcher Michael Decaire, "The correctional psychologist's primary mission is to assist in offender rehabilitation and reintegration."[17] Additionally, the psychologist enhances staff and inmate safety by promoting a healthy institutional environment.[18] The correctional psychologist clearly has varied responsibilities: providing direct psychological services to inmates, evaluating the prison population, managing inmates, and making release evaluations and recommendations. While correctional psychology has become a highly popular sub-discipline of psychology, it is also riddled with unique ethical dilemmas and conflicts.[19] Unfortunately, many of the ethical dilemmas within correctional psychology appear far from resolution. There is little recent academic literature concerning the ethical problems in corrections, and even fewer recommendations on how one should proceed when faced with such problems (Weinberger & Sreenivasan, 1994). The ethical guidelines that govern psychological practice are equally unhelpful.[20][21]


Challenges in Correctional Psychology[edit]

While psychologists in and out of prison face many similar issues, the prison environment often adds special challenges.

Deliberate Indifference[edit]

One challenge facing correctional psychologists is the doctrine of deliberate indifference. The concept of deliberate indifference was first stated by the U.S. Supreme Court in the 1976 decision Estelle v. Gamble. The ruling was based on the Eighth Amendment ban on cruel and unusual punishment, holding that this right is violated if a prisoner in need of medical attention does not receive reasonable medical care.

Federal courts have extended this idea to include mental health.[22]



Thus, while a psychologist in the community can accept clients when they come to a health care center to seek services, the correctional psychologist has the additional ethical responsibility to seek out those who need services, and prisons have the legal responsibility to provide the resources for those services.

One recent case involved prisoners in Indiana state prisons.[24] A bench trial occurred in July 2011 in the U.S. District Court for the Southern District of Indiana, and Judge Tanya Walton Pratt issued a ruling on December 31, 2012 (Case No. 1:08-cv-01317-TWP-MJD). The ruling noted that of the 26,700 state inmates, about 22% (nearly 6,000 inmates) were diagnosed as mentally ill, yet the mental health unit had room for only 250 inmates. Thus, the plaintiffs claimed that if an inmate began to show typical signs of schizophrenia and did not comply with rules, then he was seldom treated; rather, he was punished in some way, including the use of force and segregation. The ruling by Judge Pratt concluded that such treatment shows deliberate indifference to the mental health needs of the inmates. "The Plaintiffs' thesis that the effect of segregation on mentally ill prisoners in Indiana is toxic to their welfare is supported by a preponderance of the evidence," wrote Judge Pratt. The evidence includes reports of use of force, acts of self-harm, and suicides.


The decision by Judge Pratt noted, "The deterioration and injury caused to mentally ill prisoners by segregation is documented by the IDOC in prisoner medical records, suicide and self-harm reports, and reports of use of force incidents".




Prison Suicide[edit]

Another challenge is suicide. The decision by Judge Pratt said that over about a four-year period, "11 of 23 suicides were committed by mentally ill offenders in a segregated setting."

See Also[edit]

Criminal Psychology, Forensic Psychology

Notes[edit]

  1. ^ a b c d e Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.IT.3.1. link to pdf
  2. ^ Wilkinson, Leland (1999). "Statistical methods in psychology journals: Guidelines and explanations". American Psychologist. 54 (8): 594–604. doi:10.1037/0003-066X.54.8.594.
  3. ^ Nakagawa, Shinichi; Cuthill, Innes C. (2007). "Effect size, confidence interval and statistical significance: a practical guide for biologists". Biological Reviews Cambridge Philosophical Society. 82 (4): 591–605. doi:10.1111/j.1469-185X.2007.00027.x. PMID 17944619.
  4. ^ Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.IT.3.1. link to pdf
  5. ^ McGraw, K.O., & Wong, S.P. (1992). A common language effect size statistic. Psychological Bulletin, volume 111(2), pages 361-365. doi:10.1037/0033-2909.111.2.361
  6. ^ Grissom RJ (1994). "Statistical analysis of ordinal categorical status after therapies". Journal of Consulting and Clinical Psychology. 62 (2): 281–284. doi:10.1037/0022-006X.62.2.281. PMID 8201065.
  7. ^ Cureton, E.E. (1956). Rank-biserial correlation. Psychometrika, volume 21(3), pages 287-290. doi:10.1007/BF02289138
  8. ^ Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.IT.3.1. link to pdf
  9. ^ Wendt, H. W. (1972). Dealing with a common problem in social science: A simplified rank-biserial coefficient of correlation based on the U statistic. European Journal of Social Psychology, 2(4), 463–465. doi:10.1002/ejsp.2420020412
  10. ^ Cureton, E.E. (1956). Rank-biserial correlation. Psychometrika, volume 21(3), pages 287-290. doi:10.1007/BF02289138
  11. ^ Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.IT.3.1. link to pdf
  12. ^ Wendt, H. W. (1972). Dealing with a common problem in social science: A simplified rank-biserial coefficient of correlation based on the U statistic. European Journal of Social Psychology, 2(4), 463–465. doi:10.1002/ejsp.2420020412
  13. ^ Wendt, H. W. (1972). Dealing with a common problem in social science: A simplified rank-biserial coefficient of correlation based on the U statistic. European Journal of Social Psychology, 2(4), 463–465. doi:10.1002/ejsp.2420020412
  14. ^ Cureton, E.E. (1956). Rank-biserial correlation. Psychometrika, volume 21(3), pages 287-290. doi:10.1007/BF02289138
  15. ^ Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.IT.3.1. link to pdf
  16. ^ Grissom, R.J. (1994). Statistical analysis of ordinal categorical status after therapies. Journal of Consulting and Clinical Psychology, volume 62(2), pages 281-284. doi:10.1037/0022-006X.62.2.281
  17. ^ http://www.uplink.com.au/lawlibrary/Documents/Docs/Doc93.html
  18. ^ Hawk, K. M. (1997). Personal reflections on a career in correctional psychology. Professional Psychology: Research and Practice, 28(4), 335-337
  19. ^ Van Voorhis, P. & Spencer, K. (1999). When programs "don’t work" with everyone: Planning for differences among correctional clients. Corrections Today, 2, 38-42
  20. ^ American Psychological Association (1992). Ethical principles of psychologists and code of conduct. Washington, DC: Author
  21. ^ Canadian Psychological Association (1991). The Canadian Code of Ethics for Psychologists.[1]
  22. ^ Bober, D. I., & Pinals, D. A. Prisoners' rights and deliberate indifference. Journal of the American Academy of Psychiatry and the Law vol 35(3), pages 388-391
  23. ^ Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.IT.3.1. link to pdf
  24. ^ Ruling by Judge Pratt, Dec 31, 2012

External links[edit]


  • Chris Stucchio blog - Why a pro/con list is 75% as good as your fancy machine learning algorithm


Types of Effect Sizes[edit]

Effect sizes based on ranks[edit]

Among the easiest to understand effect sizes are those based on ranks.

Common language effect size[edit]

As the name implies, the common language effect size is designed to communicate the meaning of an effect size in plain English, so that those with little background in statistics can grasp the meaning. This effect size was proposed and named by Kenneth McGraw and S. P. Wong (1992).[1]

The core concept of the common language effect size is the notion of a pair, defined as a score in group one paired with a score in group two. For example, if a study has ten people in a treatment group and ten people in a control group, then there are 100 pairs. The common language effect size ranks all the scores, compares the pairs, and reports the results in the common language of the percent of pairs that support the hypothesis.

As an example, consider a treatment for a chronic disease such as arthritis, with the outcome a scale that rates mobility and pain; further consider that there are ten people in the treatment group and ten people in the control group, for a total of 100 pairs. The sample results may be reported as follows: "When a patient in the treatment group was compared with a patient in the control group, in 80 of 100 pairs the treated patient showed a better treatment outcome."

This sample value is an unbiased estimator of the population value.[2] The population value for the common language effect size can be reported in terms of pairs randomly chosen from the population. McGraw and Wong[3] use the example of the heights of men and women, and they describe the population value of the common language effect size as follows: "in any random pairing of young adult males and females, the probability of the male being taller than the female is .92, or in simpler terms yet, in 92 out of 100 blind dates among young adults, the male will be taller than the female" (p. 381).
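As a sketch of the pair-counting idea, the sample common language effect size can be computed from raw scores (an illustration; the function name and the tie handling are our assumptions, since the source does not discuss ties):

```python
def common_language_effect_size(group_one, group_two):
    """Proportion of (group_one, group_two) pairs in which the score
    from group one is higher; tied pairs are simply not counted as wins."""
    pairs = [(a, b) for a in group_one for b in group_two]
    wins = sum(1 for a, b in pairs if a > b)
    return wins / len(pairs)

# Ten treated and ten control patients would form 100 pairs; here a
# small example where 8 of 9 pairs favor the first group
cles = common_language_effect_size([3, 4, 5], [1, 2, 3])  # 8/9
```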



Rank-biserial correlation[edit]

Closely related to the common language effect size is the rank-biserial correlation, a rank-based correlation which was first proposed as an effect size for the Mann-Whitney U test (Cureton, 1956). As a measure of correlation, the rank-biserial r has a range from -1 to +1.

One formula for the rank-biserial r is the Kerby simple difference formula (Kerby, 2014). The first step in using the formula is to state a hypothesis; for example, consider the hypothesis that males on average have greater leg strength than females. The second step is to rate each pair as favorable or unfavorable to the hypothesis: in the current example, a pair is favorable when the male has greater strength, and unfavorable when the female has greater strength. Finally, the rank-biserial is the simple difference between the two proportions. For example, suppose that the male has greater strength in 90% of the pairs, while the female has greater strength in 10%; in this case, the rank-biserial r is .90 − .10 = .80. The relationship to the common language effect size is readily seen, because the proportion of favorable pairs (90% in the example) is the common language effect size.
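When every pair is decided one way or the other, u = 1 − f, so the simple difference reduces to r = 2f − 1 (a sketch under that no-ties assumption; the function name is ours):

```python
def rank_biserial_from_cles(f):
    """Kerby simple difference with no tied pairs: u = 1 - f, so
    r = f - u = 2f - 1."""
    return 2 * f - 1

# A common language effect size of 90% gives r = .80
```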

Gottfredson paragraph[edit]

Gottfredson and Snyder (2005) compared the Burgess method of unit-weighted regression to other methods, using a cross-validation sample of N = 7,552. Using the Pearson point-biserial, the effect size in the cross validation sample for the unit-weights model was r = .392, which was somewhat larger than for logistic regression (r = .368) and predictive attribute analysis (r = .387), and less than multiple regression only in the third decimal place (r = .397).

CART link[edit]

the Kerby method uses classification and regression tree (CART) analysis.

See also robust regression.

Unit-weighted intro[edit]

In statistics, unit-weighted regression is a simplified and robust version (Wainer & Thissen, 1976) of multiple regression analysis where only the intercept term is estimated.


Unit-weighted regression is a method of robust regression that proceeds in three steps.

  • Wainer, H., & Thissen, D. (1976). "Three steps toward robust regression." Psychometrika, volume 41(1), pages 9-34. doi:10.1007/BF02291695


Robust Statistics: Unit weights[edit]


Ernest Burgess (1928) used unit weights to predict success on parole. He scored 21 positive factors as present (e.g., "no prior arrest" = 1) or absent ("prior arrest" = 0), then summed to yield a predictor score, which was shown to be a useful predictor of parole success.
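The Burgess scoring scheme can be sketched as follows (an illustration; the factor names are hypothetical examples, not Burgess's actual 21 factors):

```python
def burgess_score(factors):
    """Unit-weighted sum: each favorable factor scores 1 if present,
    0 if absent; the predictor is the plain count."""
    return sum(1 if present else 0 for present in factors.values())

# Hypothetical case: three of four favorable factors present
case = {"no prior arrest": True, "steady employment": True,
        "stable residence": False, "first offense": True}
score = burgess_score(case)  # 3
```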

Another robust method is the use of unit weights (Wainer & Thissen, 1976). Samuel S. Wilks (1938) showed that nearly all sets of regression weights, including unit weights, yield composites that are very highly correlated with one another, a result referred to as Wilk's theorem (Ree, Carretta, & Earles, 1998). Robyn Dawes (1979) demonstrated the usefulness of models with unit weights for decision making in applied settings. Bobko, Roth, and Buster (2007) reviewed the literature on unit weights and concluded that decades of empirical studies show that unit weights perform similarly to ordinary regression weights on cross-validation.

  • Bobko, P., Roth, P. L., & Buster, M. A. (2007). "The usefulness of unit weights in creating composite scores: A literature review, application to content validity, and meta-analysis". Organizational Research Methods, volume 10, pages 689-709. doi:10.1177/1094428106294734
  • Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.CP.3.1. link to pdf
  • Ree, M. J., Carretta, T. R., & Earles, J. A. (1998). "In top-down decisions, weighting variables does not matter: A consequence of Wilk's theorem." Organizational Research Methods, volume 1(4), pages 407-420. doi:10.1177/109442819814003
  • Wilks, S. S. (1938). "Weighting systems for linear functions of correlated variables when there is no dependent variable". Psychometrika, volume 3, pages 23-40. doi:10.1007/BF02287917

Link to idea and not exact word[edit]

Correctional psychology is an area of psychology that focuses on applying psychology to a correctional setting. In making decisions in an applied setting, correctional psychologists may apply unit weights to the predictors.

Practice with references[edit]

Here is my first attempt to use the reference list template.[5]

He applied linear models to human decision making, including models with equal weights,[6][7] a method known as unit-weighted regression.

References[edit]

  1. ^ McGraw KO, Wong SP (1992). "A common language effect size statistic". Psychological Bulletin. 111 (2): 361–365. doi:10.1037/0033-2909.111.2.361.
  2. ^ Grissom RJ (1994). "Statistical analysis of ordinal categorical status after therapies". Journal of Consulting and Clinical Psychology. 62 (2): 281–284. doi:10.1037/0022-006X.62.2.281. PMID 8201065.
  3. ^ McGraw KO, Wong SP (1992). "A common language effect size statistic". Psychological Bulletin. 111 (2): 361–365. doi:10.1037/0033-2909.111.2.361.
  4. ^ Brooks ME, Dalal DK, Nolan KP (2014). "Are common language effect sizes easier to understand than traditional effect sizes?". Journal of Applied Psychology. 99 (2): 332–340. doi:10.1037/a0034745. PMID 24188393.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  5. ^ Author, A. A. (2013). "Title of the article". Journal Name, volume 56(2), pages 1-20.
  6. ^ Dawes, R. M., & Corrigan, B. (1974). "Linear models in decision making." Psychological Bulletin, volume 81, pages 95-106. doi:10.1037/h0037613
  7. ^ Dawes, Robyn M. (1979). "The robust beauty of improper linear models in decision making". American Psychologist, volume 34, pages 571-582. doi:10.1037/0003-066X.34.7.571 archived pdf .


Samuel Wilks[edit]

He also conducted work on unit-weighted regression: he proved that under a wide variety of common conditions, almost all sets of weights will yield composites that are very highly correlated, a result that has been dubbed Wilk's theorem (Ree, Carretta, & Earles, 1998).

  • Ree, M. J., Carretta, T. R., & Earles, J. A. (1998). "In top-down decisions, weighting variables does not matter: A consequence of Wilk's theorem." Organizational Research Methods, volume 1(4), pages 407-420. doi:10.1177/109442819814003

Robyn Dawes[edit]

Dawes was born in Pittsburgh, Pennsylvania, where he grew up. He attended Harvard College, where he majored in philosophy.

His first publication was on the importance of base rates for making decisions (Dawes, 1962). When base rates are very low, high diagnostic accuracy can be achieved merely by predicting that the condition will not occur. For example, suicide attempts are rare, so a clinician can achieve a high rate of accuracy by ignoring all clinical evidence and simply predicting that no suicide attempt will occur.
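The arithmetic behind this point is simple; with hypothetical figures:

```python
# hypothetical figures: suppose 10 of 1,000 patients attempt suicide
n_patients = 1000
n_attempts = 10

# a "predict no attempt for anyone" rule is wrong only for the 10 attempts
accuracy = (n_patients - n_attempts) / n_patients
print(accuracy)  # 0.99
```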

  • Dawes, R. M. (1962). A note on base rates and psychometric efficiency. Journal of Consulting Psychology, volume 26(5), pages 422-424.


  • Dana, J., & Dawes, R. M. (2004). "The superiority of simple alternatives to regression for social science predictions". Journal of Educational and Behavioral Statistics, volume 29(3), pages 317-331. doi:10.3102/10769986029003317


The Kerby method[edit]

The Kerby method is similar to the Burgess method, but differs in two ways. First, while the Burgess method uses subjective judgment to select a cutoff score for a multi-valued predictor with a binary outcome, the Kerby method uses classification and regression tree (CART) analysis. In this way, the selection of the cutoff score is based not on subjective judgment, but on a statistical criterion, such as the point where the chi-square value is a maximum.

The second difference is that while the Burgess method is applied to a binary outcome, the Kerby method can apply to a multi-valued outcome, because CART analysis can identify cutoff scores in such cases, using a criterion such as the point where the t-value is a maximum. Because CART analysis is not only binary but also recursive, a predictor variable may be divided more than once, yielding two cutoff scores. In standard form, a predictor's score then increases by one point at each cutoff the CART analysis creates.
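The cutoff-selection idea can be sketched as follows (a brute-force scan over candidate cutoffs with toy data, not a full CART implementation): for a binary outcome, pick the split of the predictor that maximizes the chi-square statistic of the resulting 2x2 table.

```python
def chi_square(table):
    # chi-square for a 2x2 table ((a, b), (c, d)), no continuity correction
    (a, b), (c, d) = table
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0

def best_cutoff(x, y):
    """Scan candidate cutoffs on predictor x (binary outcome y),
    returning the cutoff whose split maximizes chi-square."""
    best = (None, -1.0)
    for cut in sorted(set(x))[:-1]:
        a = sum(1 for xi, yi in zip(x, y) if xi <= cut and yi == 1)
        b = sum(1 for xi, yi in zip(x, y) if xi <= cut and yi == 0)
        c = sum(1 for xi, yi in zip(x, y) if xi > cut and yi == 1)
        d = sum(1 for xi, yi in zip(x, y) if xi > cut and yi == 0)
        chi = chi_square(((a, b), (c, d)))
        if chi > best[1]:
            best = (cut, chi)
    return best

# toy data: higher predictor scores clearly go with outcome 1
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [0, 0, 0, 0, 1, 1, 1, 1]
cut, chi = best_cutoff(x, y)
print(cut, chi)  # cutoff 4 gives a perfect split, chi-square 8.0
```

Cases at or below the chosen cutoff would then be coded 0, and cases above it coded 1, as in the Burgess method.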

One study (Kerby, 2003) used the five traits of the Big Five personality model as predictors of a multi-valued measure of suicidal ideation. Next, the personality scores were converted into standard form with CART analysis. When the CART analysis yielded one partition, the result was like the Burgess method in that the predictor was coded as either zero or one. But for the measure of neuroticism, the result was two cutoff scores. Because higher neuroticism scores correlated with more suicidal thinking, the two cutoff scores led to the following coding: “low Neuroticism” = 0, “moderate Neuroticism” = 1, “high Neuroticism” = 2 (Kerby, 2003).


Other Stuff[edit]

  • Dawes, R. M. (1976). Shallow psychology. In J. S. Carroll & J. W. Payne (Eds.), Cognition and social behavior (pages 3-12). Hillsdale, NJ: Lawrence Erlbaum.
  • Burgess, E. W. (1928). Factors determining success or failure on parole. In A. A. Bruce (Ed.), The Workings of the Indeterminate Sentence Law and Parole in Illinois (pp. 205-249). Springfield, Illinois: Illinois State Parole Board.


I want to make a link to the Big five personality traits.

  • Newman, J. R., Seaver, D., Edwards, W. (1976). Unit versus differential weighting schemes for decision making: A method of study and some preliminary results. Los Angeles, CA: Social Science Research Institute. http://www.dtic.mil/dtic/tr/fulltext/u2/a033183.pdf

http://www-stat.wharton.upenn.edu/~hwainer/Readings/Wainer_Estimating%20Coefficients%20in%20Linear%20Models.pdf




Burgess Method of Unit-weighted regression[edit]

In the field of criminology, Burgess conducted work on predicting the success or failure of inmates on parole. He identified 21 measures believed to be associated with success on parole, converting these measures to a score of zero or one, with a score of one associated with success on parole. For example, a man lacking in job skills would have a score of zero, while a man with job skills would have a score of one. He then added the scores to obtain a scale in which higher scores predicted a greater chance of success on parole.

The results showed that the scale worked well. To illustrate, for men with the highest scores from 14 to 21, the rate of parole success was 98%; for men with scores of 4 or less, the rate of parole success was only 24%. This method of combining scores has come to be called the Burgess method of unit-weighted regression. Hakeem (1948) reported that the Burgess method had "remarkable accuracy in prediction". Though more advanced methods of analysis have become common in the social sciences (such as multiple regression), they have yet to show a clear advantage over unit-weighted methods, so the Burgess method is still used in criminology (Gottfredson & Snyder, 2005).
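The Burgess scoring scheme amounts to a simple sum of binary indicators. A minimal sketch (the three factor names below are hypothetical stand-ins for Burgess's 21 measures):

```python
# three hypothetical factors standing in for Burgess's 21 measures
FACTORS = ["has_job_skills", "no_prior_record", "stable_home"]

def burgess_score(case):
    """Unit-weighted score: each favorable factor contributes one point."""
    return sum(1 for factor in FACTORS if case.get(factor, False))

inmate = {"has_job_skills": True, "no_prior_record": False, "stable_home": True}
print(burgess_score(inmate))  # 2
```

Higher totals would predict a greater chance of success on parole, exactly as with Burgess's 0-to-21 scale.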

How does the reference thing work?

  • Gottfredson, D. M., & Snyder, H. N. (July 2005). The mathematics of risk classification: Changing data into valid instruments for juvenile courts. Pittsburgh, Penn.: National Center for Juvenile Justice. NCJ209158. http://files.eric.ed.gov/fulltext/ED485849.pdf
  • Hakeem, M. (1948). The validity of the Burgess method of parole prediction. American Journal of Sociology, volume 53(5), pages 376-386. http://www.jstor.org/stable/2771477

He introduced the method of converting each predictor to a score of zero or one (Burgess, 1928).


Park, R. E., Burgess, E. W., & McKenzie, R. D. (1925). The city. Chicago, Illinois: The University of Chicago Press. http://www.esperdy.net/wp-content/uploads/2009/09/Park-The-City.pdf

Park, R. E., & Burgess, E. W. (1921). Introduction to the science of sociology. Chicago, Illinois: University of Chicago Press. http://www.gutenberg.org/files/28496/28496-h/28496-h.htm

The z-score Method[edit]

Another method can be applied when the predictors are measured on a continuous scale. In such a case, each predictor can be converted into a standard score, or z-score, so that all the predictors have a mean of zero and a standard deviation of one. With this method of unit-weighted regression, the variate is a sum of the z-scores (Bobko, Roth, & Buster, 2007).
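A minimal sketch of this composite (using the population standard deviation and toy data for three cases):

```python
import math

def zscores(values):
    """Convert raw scores to z-scores: mean 0, SD 1 (population SD)."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sd for v in values]

def unit_weighted_composite(predictors):
    """Sum each case's z-scores across all predictors."""
    standardized = [zscores(p) for p in predictors]
    return [sum(case) for case in zip(*standardized)]

# two toy predictors measured on very different scales, three cases
composite = unit_weighted_composite([[1, 2, 3], [10, 20, 30]])
print(composite)
```

Because each predictor is standardized first, variables on different raw scales contribute equally to the sum.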



Jacob Cohen (statistician)

Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuristics? In G. Gigerenzer & P. M. Todd (Eds.), Simple heuristics that make us smart (pp. 97-118). New York: Oxford University Press.

R2

In a review of the literature on unit weights, Bobko, Roth, and Buster (2007) noted that "unit weights and regression weights perform similarly in terms of the magnitude of cross-validated multiple correlation, and empirical studies have confirmed this result across several decades" (p. 693).


  • Bobko, P., Roth, P. L., & Buster, M. A. (2007). The usefulness of unit weights in creating composite scores: A literature review, application to content validity, and meta-analysis. Organizational Research Methods, volume 10, pages 689-709. doi:10.1177/1094428106294734