
Value-added modeling

From Wikipedia, the free encyclopedia


Value-added modeling (also known as value-added analysis and value-added assessment) is a method of teacher evaluation that measures a teacher's contribution in a given year by comparing the current school year test scores of their students to those same students' scores in the previous school year, as well as to the scores of other students in the same grade. In this manner, value-added modeling seeks to isolate the contribution that each teacher makes in a given year, which can then be compared to the performance measures of other teachers.[1]
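
Although the statistical models actually used by districts and researchers are considerably more elaborate, the basic calculation can be illustrated with a minimal sketch. The Python sketch below uses hypothetical teacher names and scores: it predicts each student's current score from the prior-year score with an ordinary least-squares fit and treats the average residual of a teacher's students as that teacher's raw value-added estimate. It is not the model used by any particular district or vendor; real models typically add multiple prior years of scores, student and classroom covariates, and statistical shrinkage.

    # Minimal illustrative sketch of a value-added estimate (hypothetical data;
    # not the specific model used by any district or vendor).
    from collections import defaultdict

    import numpy as np

    # Hypothetical records: (teacher, prior_year_score, current_year_score)
    records = [
        ("Smith", 62.0, 70.0), ("Smith", 55.0, 58.0), ("Smith", 71.0, 80.0),
        ("Jones", 64.0, 63.0), ("Jones", 58.0, 55.0), ("Jones", 75.0, 72.0),
    ]

    prior = np.array([r[1] for r in records])
    current = np.array([r[2] for r in records])

    # Ordinary least squares: expected current score given the prior-year score.
    slope, intercept = np.polyfit(prior, current, 1)
    residuals = current - (slope * prior + intercept)

    # A teacher's raw value-added estimate is the mean residual of their students,
    # i.e. how far their students' scores sit above or below expectation.
    by_teacher = defaultdict(list)
    for (teacher, _, _), resid in zip(records, residuals):
        by_teacher[teacher].append(resid)

    for teacher, resids in sorted(by_teacher.items()):
        print(f"{teacher}: {np.mean(resids):+.2f} points relative to expectation")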

As of 2010, school districts across the United States had adopted the system, including the Chicago Public Schools, the New York City Department of Education and the District of Columbia Public Schools. The rankings have been used to decide issues of teacher retention and the awarding of bonuses, as well as to identify those teachers who would benefit most from teacher training.[1] Under programs developed by the Obama administration advocating better means of evaluating teacher performance, districts have looked to value-added modeling as a replacement for observing teachers in classrooms.[1]

The Los Angeles Times reported on the use of the program in that city's schools, creating a searchable web site that provided the score calculated by the value-added modeling system for 6,000 elementary school teachers in the district. United States Secretary of Education Arne Duncan praised the newspaper's reporting on the teacher scores, citing it as a model of increased transparency, though he noted that greater openness must be balanced against concerns regarding "privacy, fairness and respect for teachers". Statistician William Sanders, a senior research manager at SAS, has developed value-added models for school districts in North Carolina and Tennessee. First created as a teacher evaluation tool for school programs in Tennessee in the 1990s, the technique came into wider use with the passage of the No Child Left Behind legislation in 2002. Based on his experience and research, Sanders argued that "if you use rigorous, robust methods and surround them with safeguards, you can reliably distinguish highly effective teachers from average teachers and from ineffective teachers."[1]

A 2003 study by the RAND Corporation, prepared for the Carnegie Corporation of New York, noted that value-added modeling "holds out the promise of separating the effects of teachers and schools from the powerful effects of such noneducational factors as family background" and that studies had shown a wide variance in teacher scores when using such models, which could make value-added modeling an effective tool for evaluating and rewarding teacher performance if the variability could be substantiated as linked to the performance of individual teachers.[2]

Louisiana legislator Frank Hoffmann introduced a bill to authorize the use of value-added modeling techniques in the state's public schools as a means to reward strong teachers, to identify successful pedagogical methods, and to provide additional professional development for those teachers identified as weaker than others. Despite opposition from the Louisiana Federation of Teachers, the bill passed the Louisiana State Senate on May 26, 2010, and was immediately signed into law by Governor Bobby Jindal.[3]

Criticism and concerns

A report issued by the Economic Policy Institute in August 2010 recognized that "American public schools generally do a poor job of systematically developing and evaluating teachers" but expressed concern that using performance on standardized tests as a measuring tool would not lead to better performance. The EPI report recommended that measures of performance based on standardized test scores be only one factor among many considered, so as to "provide a more accurate view of what teachers in fact do in the classroom and how that contributes to student learning." The study called value-added modeling a fairer means of comparing teachers that allows for better measures of educational methodologies and overall school performance, but argued that student test scores were not sufficiently reliable as a means of making "high-stakes personnel decisions".[4]

Edward Haertel, who led the Economic Policy Institute research team, wrote that the methodologies being pushed as part of the Race to the Top program placed "too much emphasis on measures of growth in student achievement that have not yet been adequately studied for the purposes of evaluating teachers and principals" and that the techniques of value-added modeling need to be more thoroughly evaluated and should only be used "in closely studied pilot projects".[1]

References

  1. ^ a b c d e Dillon, Sam. "Method to Grade Teachers Provokes Battles", The New York Times, August 31, 2010. Accessed September 1, 2010.
  2. ^ McCaffrey, Daniel F.; Lockwood, J. R.; Koretz, Daniel M.; and Hamilton, Laura S. "Evaluating Value-Added Models for Teacher Accountability", RAND Corporation, 2003. Accessed September 1, 2010.
  3. ^ Staff. "Value-added evaluation bill is now law", Louisiana Federation of Teachers Weekly Legislative Digest, May 28, 2010. Accessed September 1, 2010.
  4. ^ Baker, Eva L.; Barton, Paul E.; Darling-Hammond, Linda; Haertel, Edward; Ladd, Helen F.; Linn, Robert L.; Ravitch, Diane; Rothstein, Richard; Shavelson, Richard J.; and Shepard, Lorrie A. "Problems with the Use of Student Test Scores to Evaluate Teachers", Economic Policy Institute, August 29, 2010. Accessed September 1, 2010.