Curriculum-based measurement

Curriculum-based measurement (CBM), also referred to as a general outcome measure (GOM), is a measure of a student's performance in either basic skills or content knowledge.

Early history

CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota.[1] Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that were (a) easy to construct, (b) brief to administer and score, (c) technically adequate (with reliability and several types of validity evidence for use in making educational decisions), and (d) available in alternate forms so that time-series data could be collected on student progress.[2] This focus on the three language arts areas was eventually expanded to include mathematics, though the technical research in mathematics continues to lag behind that published in the language arts. A still later development was the application of CBM to middle and secondary school content: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (using the maze task), and Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning.[3]

Increasing importance

Early research on CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and finally benchmarking. Indeed, with the implementation of the No Child Left Behind Act of 2001, and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to, and relevant for understanding, students' progress toward and achievement of state standards.

Key feature

Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects of interventions, which include both instruction and curriculum. This points to one of the most important conundrums to surface around CBM: to evaluate the effects of a curriculum, a measurement system needs to provide an independent "audit" and not be biased toward only that which is taught. Early work framed this difference as mastery monitoring versus experimental analysis. Mastery monitoring is embedded in the curriculum itself, which forces the metric to be the number (and rate) of curriculum units traversed in learning; experimental analysis instead relies on metrics such as oral reading fluency (words read correctly per minute) and correct word or letter sequences per minute (in writing or spelling), both of which can serve as GOMs. In mathematics, the metric is often correct digits per minute. The metric of CBM is typically rate-based so as to capture "automaticity" in learning basic skills.[4]
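
The computation behind these metrics is simple enough to sketch. The following is a minimal illustration (not taken from any published CBM tool; the probe counts and durations are hypothetical) of converting a raw count of correct responses from a short, timed probe into a per-minute rate, so that probes of different lengths yield comparable scores:

    # Convert a raw count of correct responses from a timed probe
    # into a per-minute rate (the core CBM metric described above).
    def rate_per_minute(correct_count: int, probe_seconds: float) -> float:
        if probe_seconds <= 0:
            raise ValueError("Probe duration must be positive.")
        return correct_count * 60.0 / probe_seconds

    # Oral reading fluency: 117 words read correctly in a one-minute probe.
    wcpm = rate_per_minute(correct_count=117, probe_seconds=60)

    # Mathematics: 42 correct digits in a two-minute computation probe.
    dcpm = rate_per_minute(correct_count=42, probe_seconds=120)

    print(f"Words correct per minute: {wcpm:.0f}")   # 117
    print(f"Digits correct per minute: {dcpm:.0f}")  # 21

The resulting rate (for example, words correct per minute) is what is charted over time for progress monitoring or compared against norms for screening and benchmarking.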

Recent advancements

The most recent advancements in CBM have occurred in three areas. First, it has been applied to students with low-incidence disabilities; this work is best represented by Zigmond in the Pennsylvania Alternate Assessment and Tindal in the Oregon and Alaska Alternate Assessments. The second advancement is the use of generalizability theory with CBM, best represented by the work of John Hintze, in which the focus is on parceling the error term into components such as time, grade, setting, and task. Finally, Yovanoff, Tindal, and colleagues at the University of Oregon have applied item response theory (IRT) to the development of statistically calibrated equivalent forms in their progress monitoring system.[5]
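
As a rough illustration of the generalizability-theory approach mentioned above (standard G-theory notation, not drawn from the cited studies), the variance of observed CBM scores can be partitioned into a component reflecting true differences among students and components for facets such as task (probe form), time, and setting, together with their interactions and residual error:

    \sigma^2(X) = \sigma^2_{\text{person}} + \sigma^2_{\text{task}} + \sigma^2_{\text{time}}
                + \sigma^2_{\text{setting}} + \sigma^2_{\text{interactions}} + \sigma^2_{\text{residual}}

The relative size of each component indicates how much that facet contributes to measurement error in CBM scores.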

Critique

Curriculum-based measurement emerged from behavioral psychology, yet several behaviorists have become disenchanted with what they see as its lack of attention to the dynamics of the process.[6][7]

References

  1. ^ Deno, S.L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–32.
  2. ^ Skinner, Neddenriep, Bradley-Klug & Ziemann (2002). Advances in Curriculum-Based Measurement: Alternative Rate Measures for Assessing Reading Skills in Pre- and Advanced Readers. The Behavior Analyst Today, 3(3), 270–83.
  3. ^ Espin, C. & Tindal, G. (1998). Curriculum-based measurement for secondary students. In M.R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 214–53). New York: Guilford Press.
  4. ^ Hale, A.D.; Skinner, C.H.; Williams, J.; Hawkins, R.; Neddenriep, C.E. & Dizer, J. (2007). Comparing Comprehension Following Silent and Aloud Reading across Elementary and Secondary Students: Implication for Curriculum-Based Measurement. The Behavior Analyst Today, 8(1), 9–23.
  5. ^ Stewart, R.M.; Martella, R.C.; Marchand-Martella, N.E. & Benner, G.J. (2005). Three-Tier Models of Reading and Behavior. JEIBI, 2(3), 115–24.
  6. ^ Williams, R.L.; Skinner, C.H. & Jaspers, K. (2008). Extending Research on the Validity of Brief Reading Comprehension Rate and Level Measures to College Course Success. The Behavior Analyst Today, 8(2), 163–74.
  7. ^ Ardoin et al. Evaluating Curriculum-Based Measurement from a Behavioral Assessment Perspective. The Behavior Analyst Today, 9(1), 36–49.

Further reading

  • Fletcher, J.M.; Francis, D.J.; Morris, R.D. & Lyon, G.R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 506–22.
  • Fuchs, L.S. & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28(4), 659–71.
  • Hosp, M.; Hosp, J. & Howell, K. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York: Guilford Press.
  • Martínez, R.S.; Nellis, L.M. & Prendergast, K.A. (2006). Closing the achievement gap series: Part II: Response to intervention (RTI) – basic elements, practical applications, and policy recommendations (Education Policy Brief: Vol. 4, No. 11). Bloomington: Indiana University, School of Education, Center for Evaluation and Education Policy.
  • Jones, K.M. & Wickstrom, K.F. (2002). Done in sixty seconds: Further analysis of the brief assessment model for academic problems. School Psychology Review, 31(4), 554–68.
  • Shinn, M.R. (2002). Best practices in using curriculum-based measurement in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology IV (pp. 671–93). Bethesda, MD: National Association of School Psychologists. ISBN 0-932955-85-1