# Incremental validity

Incremental validity is a type of validity that is used to determine whether a new psychometric assessment will increase predictive ability beyond that provided by an existing method of assessment.[1] In other words, incremental validity seeks to answer whether the new test adds information beyond what can be obtained with simpler, already existing methods.[2]

## Definition and Examples

When an assessment is used with the purpose of predicting an outcome (perhaps another test score or some other behavioral measure), a new instrument must show that it is able to increase our knowledge or prediction of the outcome variable beyond what is already known based on existing instruments.[3]

A positive example would be a clinician who uses both an interview technique and a specific questionnaire to determine whether a patient has a mental illness, and who is more successful at that determination than a clinician who uses the interview alone. Because the questionnaire, in conjunction with the interview, produced more accurate determinations and gave the clinician added information, the questionnaire is considered incrementally valid.

## Statistical Tests

Incremental validity is usually assessed using multiple regression methods. A regression model containing the other variables is fitted to the data first, and then the focal variable is added to the model. A significant increase in the R-square statistic (assessed with an F-test) is interpreted as an indication that the newly added variable offers significant additional predictive power for the dependent variable over the variables previously included in the regression model.
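The hierarchical procedure described above can be sketched in Python. This is a minimal illustration on synthetic data, not a reference implementation: ordinary least squares is fit via NumPy, and the change in R-square is tested with the standard F-statistic for nested regression models.

```python
import numpy as np

def r_squared(X, y):
    """R-squared from an OLS fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

def incremental_f(X_base, X_new, y):
    """F-statistic for the R-square change when X_new is added to X_base."""
    X_full = np.column_stack([X_base, X_new])
    r2_base = r_squared(X_base, y)
    r2_full = r_squared(X_full, y)
    n = len(y)
    q = X_new.shape[1]           # number of predictors added
    p = X_full.shape[1]          # predictors in the full model (no intercept)
    f = ((r2_full - r2_base) / q) / ((1 - r2_full) / (n - p - 1))
    return r2_base, r2_full, f

# Synthetic data: the new predictor x2 overlaps with x1 but
# still carries some unique information about y.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)
y = x1 + 0.3 * x2 + rng.normal(size=200)

r2_base, r2_full, f = incremental_f(x1[:, None], x2[:, None], y)
print(r2_base, r2_full, f)
```

The F-statistic would then be compared against an F distribution with (q, n − p − 1) degrees of freedom to judge whether the R-square change is significant.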

Recall that the R-square statistic in multiple regression reflects the percent of variance in the Y variable accounted for by all of the X variables. Thus, the change in R-square reflects the additional percent of variance explained by the variable added to the model. The change in R-square is more appropriate than simply looking at the raw correlations, because the raw correlations do not reflect the overlap between the newly introduced measure and the existing measures.[3]

An example of this method is the prediction of college grade point average (GPA), where high school GPA and admissions test scores (e.g., SAT, ACT) together usually account for a large proportion of variance in college GPA. The use of admissions tests is supported by incremental validity evidence. For example, the pre-2000 SAT correlated .34 with freshman GPA, while high school GPA correlated .36 with freshman GPA.[4] It might seem that both measures are strong predictors of freshman GPA, but high school GPA and SAT scores are themselves strongly correlated, so we need to test how much predictive power the SAT adds once high school GPA is accounted for. The incremental validity is indicated by the change in R-square when SAT scores are added to a model that already includes high school GPA. In this case, high school GPA accounts for 13% of the variance in freshman GPA, while high school GPA and the SAT together account for 20%. Therefore, the SAT adds 7 percentage points to our predictive power. If this change is statistically significant and deemed an important improvement, we can say that the SAT has incremental validity over high school GPA alone in predicting freshman GPA. Any new admissions criterion or test must likewise add predictive power (show incremental validity) in order to be useful when high school GPA and existing test scores are already known.
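The arithmetic behind this example can be checked directly. The sketch below assumes the 13% figure comes from squaring the .36 correlation of high school GPA with freshman GPA (a simplification, since the cited study's exact model may differ); it also shows why simply summing the squared raw correlations overstates the combined R-square when the predictors overlap.

```python
# Figures cited above for the pre-2000 SAT example
r_sat = 0.34        # correlation of SAT with freshman GPA
r_hs = 0.36         # correlation of high school GPA with freshman GPA
r2_combined = 0.20  # reported R-square for HS GPA + SAT together

# Variance explained by high school GPA alone (~13%)
r2_hs_alone = r_hs ** 2

# Incremental validity of the SAT: the R-square change (~7 points)
delta_r2 = r2_combined - r2_hs_alone

# Naive sum of squared correlations ignores predictor overlap,
# so it exceeds the actual combined R-square (~0.25 vs 0.20)
naive_sum = r_hs ** 2 + r_sat ** 2

print(round(r2_hs_alone, 2), round(delta_r2, 2), round(naive_sum, 2))
```

The gap between `naive_sum` and `r2_combined` is exactly the redundancy that the change-in-R-square approach is designed to account for.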