Standardized test
A standardized test is a test designed in such a way that the questions, conditions for administering, scoring procedures, and interpretations are consistent[1] and are administered and scored in a predetermined, standard manner.[2]
History
The earliest evidence of standardized testing based on merit comes from China during the Han dynasty. The concept of a state ruled by men of ability and virtue was an outgrowth of Confucian philosophy. The imperial examinations covered the Six Arts, which included music, archery and horsemanship, arithmetic, writing, and knowledge of the rituals and ceremonies of both public and private life. Later, the five studies (military strategies, civil law, revenue and taxation, agriculture and geography) were added to the testing. In this form, the examinations were institutionalized during the sixth century CE, under the Sui dynasty. These examinations are regarded by most historians as the first standardized tests based on merit.
Standardized testing was not traditionally a part of Western pedagogy; based on the sceptical and open-ended tradition of debate inherited from Ancient Greece, Western academia favored the essay. However, given the large number of school students during and after the Industrial Revolution, open-ended assessment of all students was not viable. Moreover, the lack of a standardized process introduces a substantial source of measurement error.
United States
The use of standardized testing in the United States is a 20th-century phenomenon with its origins in World War I; it was also given a major boost during the Cold War. More recently it has been driven in part by the ease of computer grading of standardized tests and the comparative difficulty of grading essays by computer. In the United States, the need for the Federal government to make meaningful comparisons across a highly decentralized (locally controlled) public education system has also contributed to the debate about standardized testing.
The first large-scale use of standardized assessment methods in the US, related to the IQ test, came during World War I (circa 1914–18).
The U.S.-based Educational Testing Service (ETS), established in 1948, is the world's largest private educational testing and measurement organization, operating on an annual budget of approximately $900 million.
The Elementary and Secondary Education Act of 1965 required standardized testing in public schools. US Public Law 107-110, known as the No Child Left Behind Act of 2001, further ties public school funding to standardized testing.
Design and scoring
In practice, standardized tests can be composed of multiple-choice, true-false and/or essay questions. Multiple-choice and true-false items can be scored inexpensively and quickly, either by running special answer sheets through a computer or via computer-adaptive testing. Some tests also have short-answer or essay-writing components that are assigned a score by independent evaluators who use rubrics (rules or guidelines) and benchmark papers (examples of papers for each possible score) to determine the grade to be given to a response. Most assessments, however, are not scored by people; people are used only to score items that cannot easily be scored by computer, such as essays. For example, the Graduate Record Exam is a computer-adaptive assessment that requires no scoring by people (except for the writing portion).[3]
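Purely as an illustration, a minimal sketch of the keyed, right/wrong machine scoring described above; the answer key and the responses are invented, not taken from any real test:

```python
# Minimal sketch of machine scoring against a fixed answer key.
# The item names, key, and responses are invented examples.

ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}

def score_answer_sheet(responses):
    """Return the number of answers matching the key (right/wrong scoring)."""
    return sum(1 for item, key in ANSWER_KEY.items() if responses.get(item) == key)

sheet = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C"}  # one examinee's responses
print(score_answer_sheet(sheet))  # 3
```

Because every answer sheet is compared against the same key by the same rule, this kind of scoring is fast, cheap, and identical for every examinee.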
Scoring issues
There can be issues with human scoring, which is one reason for the preference given to computer scoring. For example, the Seattle Times reported that for Washington State's WASL, temporary employees who were paid $8.75 an hour spent as little as 20 seconds on each math problem and 2.5 minutes on essay items that might determine whether a student graduates from high school. Some believe this is a matter of concern given the high-stakes nature of such tests. Pearson scores many other state tests similarly.[4] Agreement between scorers can vary from 60 to 85 percent depending on the test and the scoring session. Sometimes states pay to have two or more scorers read each paper to improve reliability, though this does not eliminate the possibility of a response receiving different scores.[5] Note, however, that open-ended components are often only a small proportion of a test.
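The agreement figures above are typically simple percent-agreement statistics. A minimal sketch, with invented scores, of how such a figure is computed for two scorers rating the same essays:

```python
# Percent agreement between two human scorers on the same set of essays.
# The score lists are invented illustrative data.

scorer_a = [4, 3, 2, 4, 1, 3, 4, 2]
scorer_b = [4, 2, 2, 4, 1, 3, 3, 2]

matches = sum(a == b for a, b in zip(scorer_a, scorer_b))
percent_agreement = 100 * matches / len(scorer_a)
print(f"{percent_agreement:.0f}% agreement")  # 75% agreement
```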
Score
There are two types of standardized test score interpretations: a norm-referenced score interpretation or a criterion-referenced score interpretation. Norm-referenced score interpretations compare test-takers to a sample of peers. Criterion-referenced score interpretations compare test-takers to a criterion (a formal definition of content), regardless of the scores of other examinees. These may also be described as standards-based assessments as they are aligned with the standards-based education reform movement.[6] Norm-referenced test score interpretations are associated with traditional education, which measures success by rank ordering students using a variety of metrics, including grades and test scores, while standards-based assessments are based on the belief that all students can succeed if they are assessed against standards which are required of all students regardless of ability or economic background.[citation needed]
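As a minimal sketch (the peer scores and the cut score are invented for illustration), the two interpretations of the same raw score can be contrasted as follows: the norm-referenced view reports where the score falls relative to a peer group, while the criterion-referenced view compares it to a fixed standard.

```python
# Contrasting norm-referenced and criterion-referenced interpretations
# of the same raw score; the norm group and cut score are invented.
from bisect import bisect_left

peer_scores = sorted([48, 52, 55, 61, 63, 67, 70, 74, 81, 90])  # norm group
cut_score = 65                                                  # fixed criterion

def percentile_rank(score):
    """Percent of the norm group scoring below the given score."""
    return 100 * bisect_left(peer_scores, score) / len(peer_scores)

raw = 67
print(f"Norm-referenced: {percentile_rank(raw):.0f}th percentile")        # 50th percentile
print(f"Criterion-referenced: {'pass' if raw >= cut_score else 'fail'}")  # pass
```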
Standards
The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any standardized test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any standardized test as a whole within a given context.
Evaluation standards
In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation[7] has published three sets of standards for evaluations. The Personnel Evaluation Standards[8] was published in 1988, The Program Evaluation Standards (2nd edition)[9] was published in 1994, and The Student Evaluation Standards[10] was published in 2003.
Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance.
Testing standards
In the field of psychometrics, the Standards for Educational and Psychological Testing[11] set standards for validity and reliability, along with errors of measurement and the testing of individuals with disabilities. A third major topic covers standards for testing applications, credentialing, and testing in program evaluation and public policy.
Advantages
One of the main advantages of standardized testing is that the results can be empirically documented; therefore, the test scores can be shown to have a relative degree of validity and reliability, as well as results which are generalizable and replicable.[12] This is often contrasted with grades on a school transcript, which are assigned by individual teachers. It may be difficult to account for differences in educational culture across schools, the difficulty of a given teacher's curriculum, differences in teaching style, and techniques and biases that affect grading. This makes standardized tests useful for admissions purposes in higher education, where a school is trying to compare students from across the nation or across the world.
Another advantage is aggregation. A well-designed standardized test provides an assessment of an individual's mastery of a domain of knowledge or skill which at some level of aggregation will provide useful information. That is, while individual assessments may not be accurate enough for practical purposes, the mean scores of classes, schools, branches of a company, or other groups may well provide useful information because of the reduction of error accomplished by increasing the sample size.
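This error reduction follows the usual standard-error-of-the-mean behavior: if an individual score carries measurement error with standard deviation sigma, the error of a group mean shrinks roughly in proportion to sigma divided by the square root of the group size. A minimal sketch with an invented sigma:

```python
# Standard error of a group mean shrinks as the group grows, which is why
# aggregated scores are more dependable than individual ones.
# sigma is an invented per-student measurement error (in score points).
import math

sigma = 15.0  # assumed standard deviation of an individual score's error
for n in (1, 25, 100, 400):  # individual, class, school, district-sized groups
    print(f"n={n:4d}  standard error of mean = {sigma / math.sqrt(n):5.2f}")
# n=1 -> 15.00, n=25 -> 3.00, n=100 -> 1.50, n=400 -> 0.75
```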
Disadvantages and criticism
"Standardized tests can't measure initiative, creativity, imagination, conceptual thinking, curiosity, effort, irony, judgment, commitment, nuance, good will, ethical reflection, or a host of other valuable dispositions and attributes. What they can measure and count are isolated skills, specific facts and function, content knowledge, the least interesting and least significant aspects of learning."
Though many educators recognize that standardized tests have a place in the arsenal of tools used to assess student achievement, critics feel that overuse and misuse of these tests are having serious negative consequences for teaching and learning. According to the group FairTest, when standardized tests are the primary factor in accountability, the temptation is to use the tests to define curriculum and focus instruction. What is not tested is not taught, and how the subject is tested becomes a model for how to teach the subject. Critics say this disfavors higher-order learning. Of course, this effect can also be used to focus instruction on desired outcomes,[14] such as basic reading and math. Moreover, Popham[15] points out that standardized test scores are problematic tools for school accountability because examinee scores are influenced by three things: what kids learn in school, what kids learn outside of school, and innate intelligence. New value-added models have been proposed to cope with this criticism by statistically controlling for innate ability and out-of-school contextual factors.[16]
While it is possible to use a standardized test and not let its limits control curriculum and instruction, this can result in a school putting itself at risk for producing lower test scores, with negative political consequences. For example, under the federal No Child Left Behind law in the United States, low test scores mean schools and districts can be labeled "in need of improvement" and punished. If the test is the only method of accountability, then parents and the community are less likely to know how well children are learning in untested areas.
Supporters of standardized testing respond that these are not reasons to abandon testing, but rather criticisms of poorly designed testing regimes. They argue that testing focuses educational resources on the most important aspects of education — imparting a pre-defined set of knowledge and skills — and that other aspects are either less important, or should be added to the testing scheme. If "knowledge and skills" include the ability to write an essay, for example, then it clearly lies outside the province of standardized testing.
Some critics say [attribution needed] that some children do not do well on standardized tests, despite mastery of the material, due to testing anxiety or lack of time management or test-taking skills. This reflects the fact that tests cannot directly measure student knowledge, only the ability of students to apply knowledge in a stressful situation. Testing anxiety has been linked to trait Neuroticism, which is related to generalized anxiety.
The growing influence of test preparation is also a concern for some. As the importance of standardized testing rises, many students attempt to prepare themselves for a test through free sample tests and programs, books designed to prepare the student for a test, or private tutoring sessions. Some parents are willing to pay thousands of dollars to prepare their children for tests,[17] a financial barrier that may give children of wealthier parents an advantage over those from less affluent families. However, this criticism would probably apply even more to testing alternatives such as portfolios or essays. Many studies also show that test coaching has little effect on scores of well-built tests[citation needed]. The ability of wealthy families to pay for higher-quality education is not specifically related to standardized testing.
Scoring information loss
When tests are scored right-wrong, an important assumption has been made about learning. The number of right answers or the sum of item scores (where partial credit is given) is assumed to be the appropriate and sufficient measure of current performance status. In addition, a secondary assumption is made that there is no meaningful information in the wrong answers.
In the first place, a correct answer can be achieved using memorization without any profound understanding of the underlying content or conceptual structure of the problem posed. Moreover, when more than one step is required for a solution, there are often a variety of approaches to answering that will lead to a correct result. The fact that the answer is correct does not indicate which of the several possible procedures was used. When the student supplies the answer (or shows the work), this information is readily available from the original documents.
Second, if the wrong answers were blind guesses, there would be no information to be found among them. On the other hand, if wrong answers reflect departures in interpretation from the expected one, these answers should show an ordered relationship to whatever the overall test is measuring. This departure should depend upon the level of psycholinguistic maturity of the student choosing or giving the answer in the vernacular in which the test is written.
In this second case it should be possible to extract this order from the responses to the test items.[18] Such extraction processes, the Rasch model for instance, are standard practice for item development among professionals. However, because the wrong answers are discarded during the scoring process, attempts to interpret these answers for the information they might contain are seldom undertaken.
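The Rasch model mentioned above expresses the probability of a correct response in terms of the gap between a person's ability and an item's difficulty, both on a logit scale. A minimal sketch of that probability function, with invented ability and difficulty values:

```python
# Rasch model: probability of a correct response as a function of
# person ability (theta) and item difficulty (b), both in logits.
# The theta and b values below are invented illustrations.
import math

def rasch_probability(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(round(rasch_probability(theta=0.0, b=0.0), 2))   # 0.5  (ability matches difficulty)
print(round(rasch_probability(theta=1.0, b=-1.0), 2))  # 0.88 (easy item, able examinee)
```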
Third, although topic-based subtest scores are sometimes provided, the more common practice is to report the total score or a rescaled version of it. This rescaling is intended to compare these scores to a standard of some sort. This further collapse of the test results systematically removes all the information about which particular items were missed.
Thus, scoring a test right–wrong loses 1) how students achieved their correct answers, 2) what led them astray towards unacceptable answers, and 3) where within the body of the test this departure from expectation occurred.
This commentary suggests that the current scoring procedure conceals the dynamics of the test-taking process and obscures the capabilities of the students being assessed. Current scoring practice oversimplifies these data in the initial scoring step. The result of this procedural error is to obscure the diagnostic information that could help teachers serve their students better. It further prevents those who are diligently preparing these tests from observing the information that would otherwise have alerted them to the presence of this error.
A solution to this problem, known as Response Spectrum Evaluation (RSE),[19] is currently being developed; it appears to be capable of recovering all three of these forms of lost information, while still providing a numerical scale to establish current performance status and to track performance change.
The RSE approach provides an interpretation of the thinking processes behind every answer, both right and wrong, telling teachers how students were thinking when they chose or supplied each answer.[20] Among other findings, this chapter reports that the recoverable information explains between two and three times more of the test variability than considering only the right answers. This massive loss of information can be explained by the fact that the "wrong" answers are removed from the information collected during the scoring process and are no longer available to reveal the procedural error inherent in right-wrong scoring. The procedure bypasses the limitations produced by the linear dependencies inherent in test data.
Testing bias
Testing bias occurs when a test systematically favors one group over another, even though both groups are equal on the trait the test measures. Critics allege that test makers and facilitators tend to represent a middle-class, white background. Critics claim that standardized tests match the values, habits, and language of the test makers[citation needed]. However, although most tests come from a white, middle-class background, the highest-scoring groups are not people of that background, but rather tend to come from Asian populations.[21]
Not all tests are well written; some, for example, contain multiple-choice questions with ambiguous answers or provide poor coverage of the desired curriculum. Some standardized tests include essay questions, and some have criticized the effectiveness of the grading methods. Recently, partial computerized grading of essays has been introduced for some tests, which is even more controversial.[22]
Educational decisions
Test scores are in some cases used as a sole, mandatory, or primary criterion for admissions or certification. For example, some U.S. states require high school graduation examinations. Adequate scores on these exit exams are required for high school graduation. The General Educational Development test is often used as an alternative to a high school diploma.
Other applications include tracking (deciding whether a student should be enrolled in the "fast" or "slow" version of a course) and awarding scholarships. In the United States, many colleges and universities automatically translate scores on Advanced Placement tests into college credit, satisfaction of graduation requirements, or placement in more advanced courses. Generalized tests such as the SAT are more often used as one measure among several, when making admissions decisions. Some public institutions have cutoff scores for the SAT, GPA, or class rank, for creating classes of applicants to automatically accept or reject.
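As a minimal sketch, the kind of automatic accept/reject rule described above might look as follows; the cutoff values are entirely hypothetical and not those of any actual institution:

```python
# Hypothetical automatic-decision rule combining an SAT score and GPA;
# the cutoff values are invented for illustration only.

def auto_decision(sat, gpa):
    if sat >= 1400 and gpa >= 3.7:
        return "auto-accept"
    if sat < 900 or gpa < 2.0:
        return "auto-reject"
    return "full review"

print(auto_decision(sat=1450, gpa=3.8))  # auto-accept
print(auto_decision(sat=1100, gpa=3.2))  # full review
```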
Heavy reliance on standardized tests for decision-making is often controversial, for the reasons noted above. Critics often propose emphasizing cumulative or even non-numerical measures, such as classroom grades or brief individual assessments (written in prose) from teachers. Supporters argue that test scores provide a clear-cut, objective standard that minimizes the potential for political influence or favoritism.
The National Academy of Sciences recommends that major educational decisions not be based solely on a test score.[23] The use of minimum cut-scores for entrance or graduation does not imply a single standard, since test scores are nearly always combined with other minimal criteria such as number of credits, prerequisite courses, attendance, etc. Test scores are often perceived as the "sole criteria" simply because they are the most difficult, or the fulfillment of other criteria is automatically assumed. One exception to this rule is the GED, which has allowed many famous individuals to have their skills recognized even though they did not meet traditional criteria.
See also
Major topics
- Assessment
- Evaluation
- List of standardized tests in the United States
- Psychometrics
- Standards-based assessment
- Test (student assessment)
Other topics
- Alternative assessment
- Campbell's Law
- Criterion-referenced test
- High school graduation exam
- Norm-referenced test
- Standards-based education reform
- Standardized testing and public policy
References
- ^ Sylvan Learning glossary, retrieved online, source no longer available
- ^ Popham, W. J. (1999). Why standardized tests don't measure educational quality. Educational Leadership, 56(6), 8–15.
- ^ ETS web page about scoring the GRE.
- ^ Houtz, Jolayne (August 27, 2000). "Temps spend just minutes to score state test: A WASL math problem may take 20 seconds; an essay, 2 1/2 minutes". Seattle Times. "In a matter of minutes, a $10-an-hour temp assigns a score to your child's test."
- ^ Why the WASL is Awful
- ^ Where We Stand: Standards-Based Assessment and Accountability (American Federation of Teachers) [dead link]
- ^ Joint Committee on Standards for Educational Evaluation
- ^ Joint Committee on Standards for Educational Evaluation. (1988). The Personnel Evaluation Standards: How to Assess Systems for Evaluating Educators. Newbury Park, CA: Sage Publications.
- ^ Joint Committee on Standards for Educational Evaluation. (1994). The Program Evaluation Standards, 2nd Edition. Newbury Park, CA: Sage Publications.
- ^ Joint Committee on Standards for Educational Evaluation. (2003). The Student Evaluation Standards: How to Improve Evaluations of Students. Newbury Park, CA: Corwin Press.
- ^ The Standards for Educational and Psychological Testing
- ^ Kuncel, N. R., & Hezlett, S. A. (2007). Science, 315, 1080-81.
- ^ To teach: the journey of a teacher, by William Ayers, Teachers College Press, 1993, ISBN 0807739855, 9780807739853, pg. 116
- ^ The College Work Readiness Assessment. http://www.cae.org/content/pro_collegework.htm
- ^ Popham, W.J. (1999). Why Standardized Test Scores Don't Measure Educational Quality. Educational Leadership, 56(6) 8–15.
- ^ Hassel, B. & Rosch, J. (2008) "Ohio Value-Added Primer." Fordham Foundation. http://www.edexcellence.net/doc/Ohio_Value_Added_Primer_FINAL_small.pdf
- ^ Associated Press (August 4, 1998). "Tackling the SAT? Test-prep help abounds". Christian Science Monitor. 90 (175): B3. ISSN 0882-7729. Retrieved 2007-07-09. "Some parents spend thousands of dollars for private sessions..."
- ^ Powell, J. C. and Shklov, N. (1992). The Journal of Educational and Psychological Measurement, 52, 847–865.
- ^ A Paradigm Shift in Test Scoring!
- ^ Powell, Jay C. (2010, in press). Testing as Feedback to Inform Teaching. Chapter 3 in: Learning and Instruction in the Digital Age: Making a Difference through Cognitive Approaches. New York: Springer.
- ^ Race and intelligence (test data)#IQ test score gap in the US
- ^ Weighing In On the Elements of Essay by Jay Mathews. Washington Post, 1 Aug 2004, p. A01.
- ^ "High Stakes: Testing for Tracking, Promotion, and Graduation"