Cross-battery assessment

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Trappist the monk (talk | contribs) at 15:59, 9 March 2022 (cite repair;). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Cross-battery assessment is the process by which psychologists use information from multiple test batteries (i.e., various IQ tests) to help guide diagnostic decisions and to gain a fuller picture of an individual's cognitive abilities than can be ascertained through the use of single-battery assessments. The cross-battery approach (XBA) was first introduced in the late 1990s[1] by Dawn Flanagan, Samuel Ortiz and Kevin McGrew. It offers practitioners the means to make systematic, valid and up-to-date interpretations of intelligence batteries and to augment them with other tests in a way that is consistent with the empirically supported Cattell–Horn–Carroll (CHC) theory of cognitive abilities.[2]

Three Foundational Sources of Information

The XBA approach is a time-efficient method for reliably measuring a wider (or more in-depth but selective) range of cognitive abilities/processes than any single intelligence battery can measure. It is based on three foundational sources of information (i.e., practice, research and test development) that provide the knowledge necessary to organise theory-driven, comprehensive, reliable, and valid assessments of cognitive abilities.[2]

Practice

R. W. Woodcock conducted joint factor analyses suggesting that cross-battery assessment is necessary to measure a broad range of cognitive abilities, since no single intellectual battery is sufficient.[2] For instance, he found that most of the major intellectual batteries in use prior to 2000 failed to measure three or more broad CHC abilities considered essential to understanding and predicting school achievement. This finding provided the impetus for developing the XBA approach. The XBA approach also facilitates communication among professionals, which guards against misinterpretation, and offers practitioners a psychometrically defensible way of identifying normative strengths and weaknesses in cognitive abilities.[2]

Research

The XBA approach has promoted a greater understanding of the relations between cognitive abilities and important outcome criteria. Furthermore, improving the validity of CHC ability measures will further elucidate the relations between CHC cognitive abilities and different outcomes, such as academic achievement and occupational outcomes.[2]

Test Development

Test authors have utilized CHC theory and XBA CHC test classifications as a blueprint for test development (e.g., the WJ III, SB5, KABC-II, and DAS-II). Although cognitive ability tests now cover the broad CHC abilities more thoroughly than in previous years, there is still a need to use the XBA approach for assessment.[2]

Application of the XBA Approach

It is recommended that practitioners adhere to several guiding principles in order to ensure that XBA procedures are psychometrically and theoretically sound.[2] First, select an intelligence battery that best addresses the referral concerns. Second, use subtests and clusters or composites from a single battery whenever possible in order to best represent the broad CHC abilities (i.e., use actual norms whenever possible). Third, construct CHC broad and narrow ability clusters through acceptable methods, such as CHC theory-driven factor analyses or expert-consensus content-validity studies.[2] Fourth, when the core battery does not include two or more qualitatively different indicators of a broad ability of interest, supplement it with indicators of that ability from another battery. Fifth, when crossing batteries, select tests that were developed and normed within a few years of one another. Finally, in order to minimize the effect of spurious differences between test scores, select tests from the smallest number of batteries.[2] Evaluation requires professional judgement and should include direct observations, including interviews with those who know the test subject. Sound decisions require an explanatory framework that is logical and consistent, with an explanation for any conflicting data.[3]

Implementation of the XBA Approach Step-by-Step

  1. Select primary intelligence battery for assessment
  2. Identify represented CHC abilities
  3. Select tests to measure CHC abilities not measured by the primary battery
  4. Administer the primary battery (and any other supplemental tests)
  5. Enter data into the XBA DMIA (provided in "Essentials of Cross Battery Assessment: Second Edition")[2]
  6. Follow XBA guidelines

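The selection logic in the steps above can be sketched in code. The battery names and CHC coverage maps below are hypothetical examples, not actual test contents; the sketch simply shows how one might identify the broad abilities a primary battery misses and choose supplements from the fewest additional batteries, per the guiding principles.

```python
# Hypothetical sketch of XBA test selection: cover all target broad CHC
# abilities while crossing as few batteries as possible (greedy choice).
TARGET_ABILITIES = {"Gf", "Gc", "Gv", "Ga", "Gsm", "Glr", "Gs"}

# Coverage maps are illustrative only, not real battery contents.
BATTERIES = {
    "PrimaryBattery": {"Gf", "Gc", "Gv", "Gsm", "Gs"},
    "SupplementA": {"Ga", "Glr"},
    "SupplementB": {"Ga"},
    "SupplementC": {"Glr", "Gv"},
}

def plan_assessment(primary, batteries, targets):
    """Return supplemental batteries covering the broad abilities that the
    primary battery does not measure, preferring at each step the battery
    that covers the most still-missing abilities."""
    missing = set(targets) - batteries[primary]
    candidates = {k: v for k, v in batteries.items() if k != primary}
    plan = []
    while missing:
        best = max(candidates, key=lambda b: len(candidates[b] & missing))
        if not candidates[best] & missing:
            break  # no remaining candidate covers any missing ability
        plan.append(best)
        missing -= candidates.pop(best)
    return plan

print(plan_assessment("PrimaryBattery", BATTERIES, TARGET_ABILITIES))
# → ['SupplementA']  (it covers both missing abilities, Ga and Glr)
```

Here a single supplement suffices, consistent with the principle of selecting tests from the smallest number of batteries.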

Use of XBA in Specific Learning Disability (SLD) Evaluation

The "Seven Deadly Sins" in SLD Evaluation

Specific learning disability (SLD) is the most commonly identified disability among school-aged children. According to Flanagan, Ortiz and Alfonso,[2] a diagnosis of SLD requires that the following criteria be met, in order: (1) a deficit in academic functioning is determined; (2) the academic difficulties are not due to exclusionary factors (e.g., neurological issues); (3) a deficit in cognitive ability is determined; (4) exclusionary factors are reviewed again to confirm that the academic and cognitive deficits are not due to secondary factors; (5) underachievement is established; and (6) the academic deficits are shown to have a negative effect on daily life. Flanagan, Ortiz and Alfonso[2] suggest "seven deadly sins" as a metaphor for the misconceptions surrounding SLD evaluation that continue to undermine its reliability and validity.

1. Relentless searching for ipsative or intra-individual discrepancies

One of the most common practices in SLD evaluations is ipsatizing scores. Ipsatized scores are obtained by computing the individual's average across subtests and subtracting that average from each subtest score, so that each score expresses its deviation from the individual's own mean. Scores that deviate from this personal mean are then treated as clinically important indicators of relative weaknesses (lower) or relative strengths (higher), and weaknesses are taken as evidence of SLD. This approach focuses only on discrepancies that exist within the individual. The vast majority of people do not have flat cognitive profiles; instead, they show significant variability across their cognitive ability scores. The assumption that people who score at a certain level in one domain will show similar ability in all domains is erroneous. Rather than searching for discrepancies wherever they might be found, theory should guide comparisons between different sub-tests.[2]
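The ipsatization arithmetic can be made concrete. The profile below is a hypothetical example using standard scores (population mean 100):

```python
# Illustrative only: ipsatizing a hypothetical profile of standard scores.
scores = {"Gc": 112, "Gf": 104, "Gv": 98, "Gsm": 86}

personal_mean = sum(scores.values()) / len(scores)  # 100.0 for this profile
ipsative = {ability: s - personal_mean for ability, s in scores.items()}
print(ipsative)
# → {'Gc': 12.0, 'Gf': 4.0, 'Gv': -2.0, 'Gsm': -14.0}
```

Note that the Gsm score of 86 emerges as a 14-point relative weakness even though, in absolute terms, it lies within one standard deviation of the population mean, which is precisely the kind of within-person-only comparison the text cautions against.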

2. Failure to distinguish between a relative weakness and a normative weakness

A lower score does not automatically gain clinical significance simply because the discrepancy has been determined to be real (statistically significant). Statistical significance only means that the difference between the two scores is unlikely to be due to chance (i.e., that they are genuinely different from one another); it does not mean that the difference is clinically meaningful or indicative of impairment.
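The standard psychometric test of whether a score difference is "real" can be sketched as follows. The reliability values are illustrative assumptions, not figures from any particular battery:

```python
import math

# Hedged sketch: how large must a difference between two standard scores
# (SD = 15) be to reach statistical significance at the .05 level?
# Reliabilities of .90 are assumed for illustration.
SD = 15.0
r1 = r2 = 0.90

sem1 = SD * math.sqrt(1 - r1)            # standard error of measurement, test 1
sem2 = SD * math.sqrt(1 - r2)            # standard error of measurement, test 2
se_diff = math.sqrt(sem1**2 + sem2**2)   # standard error of the difference

critical_95 = 1.96 * se_diff             # difference needed for p < .05
print(round(critical_95, 1))
# → 13.1 points
```

Differences of this size are common in the general population, which is the point of this "sin": crossing the statistical threshold says nothing by itself about clinical meaning.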

3. Obsession with the severe discrepancy calculation

The ability-achievement discrepancy has been regarded as so important to definitions and diagnostic criteria of SLD that practitioners often resort to calculating discrepancies among all of the sub-test scores obtained in an evaluation. Given the high number of discrepancies available to calculate, it would be surprising if at least one significant discrepancy were not found. A significant ability-achievement discrepancy should be neither synonymous with, nor a necessary condition for, an SLD diagnosis.
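The multiple-comparison arithmetic behind this "sin" is straightforward. The sub-test count below is hypothetical, and treating the comparisons as independent is a simplification, but it illustrates why at least one "significant" discrepancy is nearly guaranteed:

```python
from math import comb

# With n sub-test scores there are C(n, 2) pairwise discrepancies, and the
# chance of at least one spuriously significant difference (comparisons
# treated as independent, alpha = .05) rises rapidly with n.
n = 10                        # hypothetical number of sub-tests administered
pairs = comb(n, 2)            # 45 possible pairwise discrepancies
p_spurious = 1 - 0.95 ** pairs
print(pairs, round(p_spurious, 2))
# → 45 0.9
```

With ten sub-tests, the chance of finding at least one "significant" discrepancy by chance alone is roughly 90%.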

4. Belief that IQ is a near perfect predictor of potential

The ability-achievement discrepancy criterion was likely fostered by the notion that IQ and other global ability composites are near-perfect predictors of an individual's academic achievement. In fact, scores of general ability, like the FSIQ, account for only about 35 to 50% of the total variance in achievement, leaving about 50 to 65% of the variance unexplained. Practitioners must therefore recognize that factors beyond global ability explain significant variance in achievement.
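The variance figures follow directly from squaring the ability-achievement correlation. The correlation values below are an illustrative range, not results from a specific study:

```python
# Sketch of the variance-explained arithmetic: shared variance is the
# squared correlation. Correlations of .60-.70 are assumed for illustration.
for r in (0.60, 0.70):
    explained = r ** 2
    print(f"r = {r:.2f}: {explained:.0%} explained, "
          f"{1 - explained:.0%} unexplained")
# → r = 0.60: 36% explained, 64% unexplained
# → r = 0.70: 49% explained, 51% unexplained
```

This matches the roughly 35-50% explained (50-65% unexplained) range cited above.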

5. Failure to apply current theory and research

In evaluating SLD, practitioners may not always be aware of, or able to implement, procedures that are based on modern theory and research. Practitioners often neglect contemporary psychometric theory and current research on SLD that aid in its identification and diagnosis.

6. Over-reliance on findings from a single sub-test

Diagnostic decisions are often based on the results of either a single sub-test score or scores used to screen individuals. Reliance on such single scores may not be suitable for diagnosis or other high-stakes decision making. A fundamental principle of psychometrics is that a single sub-test cannot, by itself, be considered a reliable indicator of the construct it is intended to measure; one sub-test is not sufficient to indicate the presence of an SLD or other impairment.

7. Belief that aptitude and ability are the same

Aptitude and ability are two concepts that are often mistakenly conflated. It is important to differentiate between the two, given the shift toward an understanding of SLD that rests on the difference between ability and aptitude. When evaluating SLD, examining aptitude is important because aptitudes are associated with long-term academic outcomes.

References

  1. ^ McGrew, K. S. & Flanagan, D. P. (1997). "A cross-battery approach to assessing and interpreting cognitive abilities: Narrowing the gap between practice and cognitive science". In Flanagan, Dawn P.; Harrison, Patti L. (eds.). Contemporary intellectual assessment: Theories, tests, and issues. New York: The Guilford Press. pp. 314–325. ISBN 978-1-59385-125-5.
  2. ^ a b c d e f g h i j k l m n Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2007). Essentials of Cross-Battery Assessment (2nd ed.). New Jersey: Wiley.
  3. ^ Stephens; Reuter (11 January 2009). "Hits and Myths of XBA". Education Faculty Publications and Presentations.