Any test in which the same test is given in the same manner to all test takers is a standardized test. Standardized tests need not be high-stakes tests, time-limited tests, or multiple-choice tests. The opposite of a standardized test is a non-standardized test. Non-standardized testing gives significantly different tests to different test takers, gives the same test under significantly different conditions (e.g., one group is permitted far less time to complete the test than the next group), or evaluates responses differently (e.g., the same answer is counted right for one student but wrong for another).
Standardized tests are perceived as fairer than non-standardized tests, and their consistency permits more reliable comparison of outcomes across all test takers.
History
The earliest evidence of standardized testing comes from China, where the imperial examinations covered the Six Arts, which included music, archery and horsemanship, arithmetic, writing, and knowledge of the rituals and ceremonies of both public and private life. Sections on military strategy, civil law, revenue and taxation, agriculture, and geography were later added. In this form, the examinations were institutionalized for more than a millennium.
Standardized testing was introduced into Europe in the early 19th century, modeled on the Chinese mandarin examinations, through the advocacy of British colonial administrators, the most "persistent" of whom was Britain's consul in Guangzhou, China, Thomas Taylor Meadows. Meadows warned that the British Empire would collapse if standardized testing was not implemented throughout the empire immediately.
Prior to its adoption, standardized testing was not traditionally part of Western pedagogy; drawing on the skeptical and open-ended tradition of debate inherited from Ancient Greece, Western academia favored non-standardized assessments based on essays written by students. Because of this, the first European implementation of standardized testing occurred not in Europe proper but in British India. Inspired by the Chinese example, in the early 19th century British "company managers hired and promoted employees based on competitive examinations in order to prevent corruption and favouritism." The practice was adopted in the late 19th century by the British mainland, and the ensuing parliamentary debates made many references to the "Chinese mandarin system."
From Britain, standardized testing spread not only throughout the British Commonwealth but to Europe and then America. Its spread was fueled by the Industrial Revolution: as compulsory education laws swelled student populations during and after that period, open-ended assessment of all students decreased. Moreover, non-standardized grading introduces a substantial source of measurement error, since graders may show favoritism or disagree with each other about the relative merits of different answers.
More recently, standardized testing has been shaped in part by the ease and low cost of grading multiple-choice tests by computer. Grading essays by computer is more difficult but is also done. In other instances, essays and other open-ended responses are graded by trained graders according to a pre-determined assessment rubric.
In the United States, immigration in the mid-1800s contributed to the early growth of standardized testing, but its widespread use is a 20th-century phenomenon with origins in World War I and the Army Alpha and Beta tests developed by Robert Yerkes and colleagues.
Another milestone in American standardized testing was the ACT (American College Testing), now one of the best-known standardized tests, created by Everett Franklin Lindquist, a professor from Iowa.
In the United States, the federal government's need to make meaningful comparisons across a highly decentralized (locally controlled) public education system has also driven a series of testing mandates, including the Elementary and Secondary Education Act of 1965, which required standardized testing in public schools. U.S. Public Law 107-110, known as the No Child Left Behind Act of 2001, further ties public school funding to standardized testing.
Design and scoring
Standardized testing can be composed of multiple-choice questions, true-false questions, essay questions, authentic assessments, or nearly any other form of assessment. Multiple-choice and true-false items are often chosen because they can be given and scored inexpensively and quickly by scoring special answer sheets by computer or via computer-adaptive testing. Some standardized tests have short-answer or essay writing components that are assigned a score by independent evaluators who use rubrics (rules or guidelines) and benchmark papers (examples of papers for each possible score) to determine the grade to be given to a response. Most assessments, however, are not scored by people; people are used to score items that cannot easily be scored by computer (e.g., essays). For example, the Graduate Record Exam is a computer-adaptive assessment that requires no scoring by people (except for the writing portion).
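Machine scoring of selected-response items is conceptually simple, which is part of why it is so inexpensive. As a minimal illustration (the answer key and the student's responses below are invented, not drawn from any actual test), scoring reduces to comparing each response against a key:

```python
# Minimal sketch of machine scoring for a multiple-choice answer sheet.
# The key and the student's responses are invented for illustration.
key       = "BACDA"   # correct option for each of five items
responses = "BACBA"   # one student's bubbled answers

# Raw score = number of positions where the response matches the key.
raw_score = sum(k == r for k, r in zip(key, responses))
print(raw_score)  # 4 of 5 items correct
```

Computer-adaptive testing adds item selection on top of this, but the per-item scoring step remains a simple comparison of this kind.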
Human scoring is often variable, which is why computer scoring is preferred when feasible; some believe, for example, that poorly paid scorers will score tests badly. Agreement between scorers can vary from 60 to 85 percent, depending on the test and the scoring session. Sometimes states pay to have two or more scorers read each paper; if their scores do not agree, the paper is passed to additional scorers.
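The escalation workflow described above can be sketched as follows. The function names and the tie-break rule are illustrative assumptions, not any testing program's actual procedure:

```python
# Two independent raters score each essay; if they disagree, the paper
# escalates to a third rater whose reading decides. (Illustrative sketch;
# real programs differ in how the final score is adjudicated.)

def resolve_score(rater_a: int, rater_b: int, third_rater) -> int:
    """Return the final score for one essay on an integer rubric scale."""
    if rater_a == rater_b:
        return rater_a               # exact agreement: no escalation needed
    third = third_rater()            # disagreement: get an independent read
    # Adopt whichever original score is closer to the third rater's score.
    return min((rater_a, rater_b), key=lambda s: abs(s - third))

print(resolve_score(4, 4, lambda: 3))  # agreement: final score is 4
print(resolve_score(4, 1, lambda: 3))  # third rater (3) sides with the 4
```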
Open-ended components are often only a small proportion of a test; most commonly, a major test includes both human-scored and computer-scored sections. Such tests, however, do not measure a student's overall learning ability.
For example, consider a history test item asking students to explain the causes of World War II:

| Student answer | Standardized grading | Non-standardized grading |
| --- | --- | --- |
| | Grading rubric: answers must be marked correct if they mention at least one of the following: Germany's invasion of Poland, Japan's invasion of China, or economic issues. | No grading standards. Each teacher grades however he or she wants, considering factors like the answer, the student's academic potential, and attitude. |
| WWII was caused by Hitler and Germany invading Poland. | Correct (mentions the invasion of Poland) | Varies from teacher to teacher |
| WWII was caused by multiple factors, including the Great Depression and the general economic situation, the rise of nationalism, fascism, and imperialist expansionism, and unresolved resentments related to WWI. The war in Europe began with the German invasion of Poland. | Correct (mentions economic issues and the invasion of Poland) | Varies from teacher to teacher |
| WWII was caused by the assassination of Archduke Ferdinand. | Incorrect (mentions none of the rubric items; the assassination preceded WWI, not WWII) | Varies from teacher to teacher |
Two types of score interpretation are used in standardized testing:
- Norm-referenced score interpretations compare test-takers to a sample of peers. The goal is to rank students as being better or worse than other students. Norm-referenced score interpretations are associated with traditional education: students who perform better than others pass the test, and students who perform worse than others fail the test.
- Criterion-referenced score interpretations compare test-takers to a criterion (a formal definition of content), regardless of the scores of other examinees. These may also be described as standards-based assessments, as they are aligned with the standards-based education reform movement. Criterion-referenced score interpretations are concerned solely with whether or not this particular student's answer is correct and complete. Under criterion-referenced systems, it is possible for all students to pass the test, or for all students to fail the test.
Either of these systems can be used in standardized testing. What is important to standardized testing is whether all students are asked equivalent questions, under equivalent circumstances, and graded equally. In a standardized test, if a given answer is correct for one student, it is correct for all students. Graders do not accept an answer as good enough for one student but reject the same answer as inadequate for another student.
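The two interpretations above can be made concrete with a small sketch. The scores and the cut score below are invented for illustration; real norming samples and performance standards are far larger and set through formal procedures:

```python
# Norm-referenced: a score is interpreted by ranking against peers.
# Criterion-referenced: the same score is compared to a fixed cut score.
norm_group = [12, 15, 18, 18, 20, 22, 25, 27, 29, 30]  # invented peer scores

def percentile_rank(score, group):
    """Percentage of the norm group scoring strictly below this score."""
    return 100.0 * sum(1 for s in group if s < score) / len(group)

def meets_criterion(score, cut_score=21):
    """Pass/fail against the fixed standard, ignoring everyone else."""
    return score >= cut_score

print(percentile_rank(25, norm_group))  # 60.0: outscored 6 of 10 peers
print(meets_criterion(25))              # True: at or above the cut score
```

Note that under the criterion-referenced reading every student in the group could pass (or fail), whereas the percentile rank always spreads students across the distribution.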
Standards
The considerations of validity and reliability are typically viewed as essential elements for determining the quality of any standardized test. However, professional and practitioner associations have frequently placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any standardized test within a given context.
In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. The Personnel Evaluation Standards was published in 1988, The Program Evaluation Standards (2nd edition) was published in 1994, and The Student Evaluation Standards was published in 2003.
Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance.
In the field of psychometrics, the Standards for Educational and Psychological Testing set standards for validity and reliability, along with errors of measurement and issues related to accommodating individuals with disabilities. The third and final major topic covers standards related to testing applications, credentialing, and testing in program evaluation and public policy.
Advantages
One of the main advantages of standardized testing is that the results can be empirically documented; the test scores can therefore be shown to have a relative degree of validity and reliability, and results that are generalizable and replicable. This is often contrasted with grades on a school transcript, which are assigned by individual teachers: it may be difficult to account for differences in educational culture across schools, the difficulty of a given teacher's curriculum, differences in teaching style, and techniques and biases that affect grading. This makes standardized tests useful for admissions purposes in higher education, where a school is trying to compare students from across the nation or across the world.
Another advantage is aggregation. A well designed standardized test provides an assessment of an individual's mastery of a domain of knowledge or skill which at some level of aggregation will provide useful information. That is, while individual assessments may not be accurate enough for practical purposes, the mean scores of classes, schools, branches of a company, or other groups may well provide useful information because of the reduction of error accomplished by increasing the sample size.
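A quick numerical sketch of that error-reduction argument (the ability value and noise level below are invented, and a single Gaussian noise term is a deliberate oversimplification of real measurement error):

```python
# Individual scores are noisy, but group means are far more stable,
# because the standard error of a mean shrinks with the square root
# of the group size.
import random
import statistics

random.seed(0)
TRUE_ABILITY = 70.0

def noisy_score():
    # One administration of the test: true ability plus measurement noise.
    return TRUE_ABILITY + random.gauss(0, 10)

one_student = noisy_score()
class_mean  = statistics.mean(noisy_score() for _ in range(30))
school_mean = statistics.mean(noisy_score() for _ in range(900))
# Theoretical standard errors: 10 for one student, 10/sqrt(30) ≈ 1.8 for
# the class mean, and 10/sqrt(900) ≈ 0.33 for the school mean, so the
# aggregated means sit much closer to the true ability of 70.
```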
Standardized tests, which by definition give all test-takers the same test under the same (or reasonably equal) conditions, are also perceived as being more fair than assessments that use different questions or different conditions for students according to their race, socioeconomic status, or other considerations.
Disadvantages and criticism
Standardized tests are useful tools for assessing student achievement, and can be used to focus instruction on desired outcomes, such as reading and math skills. However, critics feel that overuse and misuse of these tests harms teaching and learning by narrowing the curriculum. According to the group FairTest, when standardized tests are the primary factor in accountability, schools use the tests to narrowly define curriculum and focus instruction. FairTest says that negative consequences of test misuse include narrowing the curriculum, teaching to the test, pushing students out of school, driving teachers out of the profession, and undermining student engagement and school climate. Critics say that "teaching to the test" disfavors higher-order learning. While it is possible to use a standardized test without letting its contents determine curriculum and instruction, frequently, what is not tested is not taught, and how the subject is tested often becomes a model for how to teach the subject.
Uncritical use of standardized test scores to evaluate teacher and school performance is inappropriate, because the students' scores are influenced by three things: what students learn in school, what students learn outside of school, and the students' innate intelligence. The school only has control over one of these three factors. Value-added modeling has been proposed to cope with this criticism by statistically controlling for innate ability and out-of-school contextual factors. In a value-added system of interpreting test scores, analysts estimate an expected score for each student, based on factors such as the student's own previous test scores, primary language, or socioeconomic status. The difference between the student's expected score and actual score is presumed to be due primarily to the teacher's efforts.
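A minimal sketch of the value-added idea follows, using a single prior-score predictor and an ordinary least-squares line. The scores are invented, and real value-added models use many more predictors and much more careful statistics; this only illustrates the "actual minus expected" logic described above:

```python
# Fit expected = intercept + slope * prior_score from historical data,
# then read a student's "value added" as actual minus expected.
prior  = [50, 60, 70, 80, 90]    # last year's scores (invented)
actual = [55, 62, 75, 79, 94]    # this year's scores (invented)

n  = len(prior)
mx = sum(prior) / n
my = sum(actual) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(prior, actual))
         / sum((x - mx) ** 2 for x in prior))
intercept = my - slope * mx

def value_added(prior_score, actual_score):
    expected = intercept + slope * prior_score
    return actual_score - expected   # positive: student beat expectation

print(round(value_added(70, 75), 2))  # 2.0: two points above expected
```

In a teacher-evaluation setting, such residuals would then be averaged over a teacher's students, which is precisely where the attribution assumption criticized above enters.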
Supporters of standardized testing respond that these are not reasons to abandon standardized testing in favor of either non-standardized testing or of no assessment at all, but rather criticisms of poorly designed testing regimes. They argue that testing does and should focus educational resources on the most important aspects of education — imparting a pre-defined set of knowledge and skills — and that other aspects are either less important, or should be added to the testing scheme.
In her book Now You See It, Cathy Davidson criticizes standardized tests, describing our youth as "assembly line kids on an assembly line model" and the use of standardized tests as part of a one-size-fits-all educational model. She also criticizes the narrowness of the skills being tested and the labeling of children without these skills as failures or as students with disabilities. She further observes that widespread, organized cheating has grown alongside today's test-driven school reforms.
Education theorist Bill Ayers has commented on the limitations of the standardized test, writing that "Standardized tests can't measure initiative, creativity, imagination, conceptual thinking, curiosity, effort, irony, judgment, commitment, nuance, good will, ethical reflection, or a host of other valuable dispositions and attributes. What they can measure and count are isolated skills, specific facts and function, content knowledge, the least interesting and least significant aspects of learning."
Scoring information loss
A test question might require a student to calculate the area of a triangle. A bare numeric answer and a fully worked solution can both be marked "correct," yet they provide very different amounts of information about the student's understanding.
When tests are scored right-wrong, an important assumption has been made about learning. The number of right answers or the sum of item scores (where partial credit is given) is assumed to be the appropriate and sufficient measure of current performance status. In addition, a secondary assumption is made that there is no meaningful information in the wrong answers.
First, a correct answer can be achieved through memorization, without any profound understanding of the underlying content or conceptual structure of the problem. Moreover, when more than one step is required for a solution, a variety of approaches can lead to a correct result, and the fact that the answer is correct does not indicate which of the possible procedures was used. When the student supplies the answer (or shows the work), this information is readily available from the original documents.
Second, if wrong answers were blind guesses, there would be no information to be found among them. If, on the other hand, wrong answers reflect departures in interpretation from the expected one, these answers should show an ordered relationship to whatever the overall test is measuring. This departure should depend upon the level of psycholinguistic maturity of the student choosing or giving the answer in the vernacular in which the test is written.
In this second case, it should be possible to extract this order from the responses to the test items. Such extraction processes, the Rasch model for instance, are standard practice in item development among professionals. However, because wrong answers are discarded during the scoring process, attempts to interpret them for the information they might contain are seldom undertaken.
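For reference, the Rasch model mentioned above expresses the probability of a correct response as a logistic function of the gap between a person's ability θ and an item's difficulty b. A minimal sketch (the logit values below are illustrative; estimating θ and b from real response data is the substantive work of item development):

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """P(correct) = 1 / (1 + exp(-(theta - b))) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_probability(0.0, 0.0))  # ability equals difficulty: 0.5
print(rasch_probability(2.0, 0.0))  # abler student, same item: ~0.88
```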
Third, although topic-based subtest scores are sometimes provided, the more common practice is to report the total score or a rescaled version of it. This rescaling is intended to compare these scores to a standard of some sort. This further collapse of the test results systematically removes all the information about which particular items were missed.
Thus, scoring a test right–wrong loses 1) how students achieved their correct answers, 2) what led them astray towards unacceptable answers and 3) where within the body of the test this departure from expectation occurred.
This commentary suggests that the current scoring procedure conceals the dynamics of the test-taking process and obscures the capabilities of the students being assessed. Current scoring practice oversimplifies these data in the initial scoring step, obscuring diagnostic information that could help teachers serve their students better and preventing those who prepare these tests from observing the information that would otherwise have alerted them to the presence of this error.
A solution to this problem, known as Response Spectrum Evaluation (RSE), is currently being developed that appears to be capable of recovering all three of these forms of information loss, while still providing a numerical scale to establish current performance status and to track performance change.
This RSE approach provides an interpretation of the thinking processes behind every answer, both right and wrong, telling teachers how students were thinking in each answer they gave. Among other findings, Powell (2010) reports that the recoverable information explains between two and three times more of the test variability than considering only the right answers. This massive loss of information occurs because the "wrong" answers are removed during the scoring process and so are no longer available to reveal the procedural error inherent in right-wrong scoring. The RSE procedure also bypasses the limitations produced by the linear dependencies inherent in test data.
Testing bias occurs when a test systematically favors one group over another, even though both groups are equal on the trait the test measures. Critics allege that test makers and facilitators tend to represent a middle-class, white background, and that standardized tests match the values, habits, and language of the test makers. However, although most tests are written from a white, middle-class perspective, the highest-scoring groups tend not to come from that background but rather from Asian populations.
Not all tests are well written: some contain multiple-choice questions with ambiguous answers, or cover the desired curriculum poorly. Some standardized tests include essay questions, and some have criticized the effectiveness of the grading methods. Recently, partial computerized grading of essays has been introduced for some tests, which is even more controversial.
Educational decisions
Test scores are in some cases used as a sole, mandatory, or primary criterion for admissions or certification. For example, some U.S. states require high school graduation examinations; adequate scores on these exit exams are required for high school graduation. The General Educational Development (GED) test is often used as an alternative to a high school diploma.
Other applications include tracking (deciding whether a student should be enrolled in the "fast" or "slow" version of a course) and awarding scholarships. In the United States, many colleges and universities automatically translate scores on Advanced Placement tests into college credit, satisfaction of graduation requirements, or placement in more advanced courses. Generalized tests such as the SAT or GRE are more often used as one measure among several when making admissions decisions. Some public institutions set cutoff scores for the SAT, GPA, or class rank to create classes of applicants who are automatically accepted or rejected.
Heavy reliance on standardized tests for decision-making is often controversial, for the reasons noted above. Critics often propose emphasizing cumulative or even non-numerical measures, such as classroom grades or brief individual assessments (written in prose) from teachers. Supporters argue that test scores provide a clear-cut, objective standard that minimizes the potential for political influence or favoritism.
The National Academy of Sciences recommends that major educational decisions not be based solely on a test score. The use of minimum cut-scores for entrance or graduation does not imply a single standard, since test scores are nearly always combined with other minimal criteria such as number of credits, prerequisite courses, attendance, etc. Test scores are often perceived as the "sole criteria" simply because they are the most difficult, or the fulfillment of other criteria is automatically assumed. One exception to this rule is the GED, which has allowed many people to have their skills recognized even though they did not meet traditional criteria.
See also
- Concept inventory
- Educational assessment
- List of standardized tests in the United States
- Standards-based assessment
- Test (assessment)
- Alternative assessment
- Campbell's Law
- Criterion-referenced test
- High school graduation exam
- IBM 805 Test Scoring Machine
- Norm-referenced test
- Standardized testing and its effects
- Standards-based education reform
- Standardized testing and public policy
References
- Encyclopædia Britannica
- Huddleston and Boyer (1996), 9–10.
- Kazin, Edwards, and Rothman (2010), 142.
- Gould, S. J., "A Nation of Morons", New Scientist (6 May 1982), 349–352.
- Johnson, Robert. "Standardized Tests." Encyclopedia of Educational Reform and Dissent. SAGE Publications, INC. 2010. 853-856.Web.
- Fletcher, Dan. "Standardized Testing." Time. Time Inc., 11 Dec. 2009. Web. 09 Mar. 2014.
- ETS webpage about scoring the GRE.
- Houtz, Jolayne (August 27, 2000). "Temps spend just minutes to score state test: A WASL math problem may take 20 seconds; an essay, 2½ minutes". Seattle Times. "In a matter of minutes, a $10-an-hour temp assigns a score to your child's test."
- Where We Stand: Standards-Based Assessment and Accountability (American Federation of Teachers)
- Joint Committee on Standards for Educational Evaluation
- Joint Committee on Standards for Educational Evaluation. (1988). The Personnel Evaluation Standards: How to Assess Systems for Evaluating Educators. Newbury Park, CA: Sage Publications.
- Joint Committee on Standards for Educational Evaluation. (1994). The Program Evaluation Standards, 2nd Edition. Newbury Park, CA: Sage Publications.
- Joint Committee on Standards for Educational Evaluation. (2003). The Student Evaluation Standards: How to Improve Evaluations of Students. Newbury Park, CA: Corwin Press.
- The Standards for Educational and Psychological Testing
- Kuncel, N. R., & Hezlett, S. A. (2007). Science, 315, 1080-81.
- The College Work Readiness Assessment. http://www.cae.org/content/pro_collegework.htm
- Popham, W.J. (1999). Why Standardized Test Scores Don't Measure Educational Quality. Educational Leadership, 56(6) 8–15.
- Hassel, B. & Rosch, J. (2008) "Ohio Value-Added Primer." Fordham Foundation. http://www.edexcellence.net/doc/Ohio_Value_Added_Primer_FINAL_small.pdf
- Davidson, Cathy (2011). Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn. New York: Viking.
- To teach: the journey of a teacher, by William Ayers, Teachers College Press, 1993, ISBN 0-8077-3985-5, ISBN 978-0-8077-3985-3, pg. 116
- Powell, J. C. and Shklov, N. (1992) The Journal of Educational and Psychological Measurement, 52, 847–865
- "A Paradigm Shift in Test Scoring!"
- Powell, Jay C. (2010) Testing as Feedback to Inform Teaching. Chapter 3 in; Learning and Instruction in the Digital Age, Part 1. Cognitive Approaches to Learning and Instruction. (J. Michael Spector, Dirk Ifenthaler, Pedro Isaias, Kinshuk and Demetrios Sampson, Eds.), New York: Springer. ISBN 978-1-4419-1551-1, "http://dx.doi.org/10.1007/978-1-4419-1551-1"
- Weighing In On the Elements of Essay by Jay Mathews. Washington Post, 1 Aug 2004, p. A01.
- "High Stakes: Testing for Tracking, Promotion, and Graduation"
- FairTest, "What's Wrong With Standardized Tests," Fact Sheet.
- Ravitch, Diane, “The Uses and Misuses of Tests”, in The Schools We Deserve (New York: Basic Books, 1985), pp. 172–181.
Further reading
- Huddleston, Mark W., and William W. Boyer. The Higher Civil Service in the United States: Quest for Reform. (University of Pittsburgh Press, 1996)
- Phelps, Richard P., Ed. Correcting Fallacies about Educational and Psychological Testing. (Washington, DC: American Psychological Association, 2008)
- Phelps, Richard P., Standardized Testing Primer. (New York, NY: Peter Lang, 2007)
- Harris, Smith, and Harris. The Myths of Standardized Tests: Why They Don't Tell You What You Think They Do. (Rowman & Littlefield, 2011)
External links
- Joint Committee on Standards for Educational Evaluation
- Standardized Testing in School
- The Standards for Educational and Psychological Testing