- 1 History
- 2 Modern-day use
- 3 Types
- 4 Assessment formats
- 5 Preparations
- 6 Cheating
- 7 Support and criticisms
- 8 Other types of tests and other related terms
- 9 See also
- 10 References
- 11 Further reading
- 12 External links
A test or examination (informally, exam or evaluation) is an assessment intended to measure a test-taker's knowledge, skill, aptitude, physical fitness, or classification in many other topics (e.g., beliefs). A test may be administered verbally, on paper, on a computer, or in a predetermined area that requires a test taker to demonstrate or perform a set of skills. Tests vary in style, rigor and requirements. For example, in a closed book test, a test taker is usually required to rely upon memory to respond to specific items, whereas in an open book test, a test taker may use one or more supplementary tools such as a reference book or calculator when responding. A test may be administered formally or informally. An example of an informal test is a reading test administered by a parent to a child. A formal test might be a final examination administered by a teacher in a classroom or an I.Q. test administered by a psychologist in a clinic. Formal testing often results in a grade or a test score. A test score may be interpreted with regard to a norm or criterion, or occasionally both. The norm may be established independently, or by statistical analysis of a large number of participants.
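The difference between norm-referenced and criterion-referenced interpretation of the same raw score can be sketched numerically. The functions, cohort scores, and cutoff below are made-up illustrations, not any official scoring procedure:

```python
# Interpreting one raw score two ways (illustrative numbers only).

def percentile_rank(score, norm_group):
    """Norm-referenced: percent of the norm group scoring below `score`."""
    below = sum(1 for s in norm_group if s < score)
    return 100.0 * below / len(norm_group)

def meets_criterion(score, cutoff):
    """Criterion-referenced: pass/fail against a fixed standard."""
    return score >= cutoff

norm_group = [52, 60, 61, 65, 70, 74, 78, 80, 85, 90]  # hypothetical cohort
score = 74

rank = percentile_rank(score, norm_group)    # compares against peers
passed = meets_criterion(score, cutoff=70)   # compares against a standard

print(f"percentile rank: {rank:.0f}, meets criterion: {passed}")
```

The same score of 74 yields a middling norm-referenced rank here but a clear criterion-referenced pass, which is why the two interpretations can disagree about what a score "means".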
A standardized test is any test that is administered and scored in a consistent manner to ensure legal defensibility. Standardized tests are often used in education, professional certification, psychology (e.g., MMPI), the military, and many other fields.
A non-standardized test is usually flexible in scope and format, and variable in difficulty and significance. Since these tests are usually developed by individual instructors, their format and difficulty may not be widely adopted or used by other instructors or institutions. A non-standardized test may be used to determine the proficiency level of students, to motivate students to study, and to provide feedback to students. In some instances, a teacher may develop non-standardized tests that resemble standardized tests in scope, format, and difficulty for the purpose of preparing their students for an upcoming standardized test. Finally, the frequency and setting in which non-standardized tests are administered are highly variable and are usually constrained by the duration of the class period. A class instructor may, for example, administer a test on a weekly basis or just twice a semester. Depending on the policy of the instructor or institution, each test may last anywhere from five minutes to an entire class period.
In contrast to non-standardized tests, standardized tests are widely used, fixed in terms of scope, difficulty and format, and are usually significant in consequences. Standardized tests are usually held on fixed dates as determined by the test developer, educational institution, or governing body, and may or may not be administered by the instructor, held within the classroom, or constrained by the classroom period. Although there is little variability between different copies of the same type of standardized test (e.g., SAT or GRE), there is variability between different types of standardized tests.
Any test with important consequences for the individual test taker is referred to as a high-stakes test.
A test may be developed and administered by an instructor, a clinician, a governing body, or a test provider. In some instances, the developer of the test may not be directly responsible for its administration. For example, Educational Testing Service (ETS), a nonprofit educational testing and assessment organization, develops standardized tests such as the SAT but may not directly be involved in the administration or proctoring of these tests. As with the development and administration of educational tests, the format and level of difficulty of the tests themselves are highly variable and there is no general consensus or invariable standard for test formats and difficulty. Often, the format and difficulty of the test is dependent upon the educational philosophy of the instructor, subject matter, class size, policy of the educational institution, and requirements of accreditation or governing bodies. In general, tests developed and administered by individual instructors are non-standardized whereas tests developed by testing organizations are standardized.
Ancient China was the first country in the world to implement a nationwide standardized test, called the imperial examination. The main purpose of this examination was to select able candidates for specific governmental positions. The imperial examination was established by the Sui dynasty in 605 AD and was abolished by the Qing dynasty 1300 years later, in 1905. England adopted this examination system in 1806 to select specific candidates for positions in Her Majesty's Civil Service, modeled on the Chinese imperial examination. This examination system was later applied to education, and it started to influence other parts of the world as it became a prominent standard (e.g. regulations to prevent markers from knowing the identity of candidates) for delivering standardised tests.
As the profession transitioned to the modern mass-education system, the style of examination became fixed, with the stress on standardized papers to be sat by large numbers of students. Leading the way in this regard was the burgeoning Civil Service that began to move toward a meritocratic basis for selection in the mid 19th century in England.
The British civil service was influenced by the imperial examination system and meritocratic system of China. Thomas Taylor Meadows, Britain's consul in Guangzhou, China, argued in his Desultory Notes on the Government and People of China, published in 1847, that "the long duration of the Chinese empire is solely and altogether owing to the good government which consists in the advancement of men of talent and merit only," and that the British must reform their civil service by making the institution meritocratic. As early as 1806, the Honourable East India Company established a college near London to train and examine administrators of the Company's territories in India. Examinations for the Indian 'civil service' – a term coined by the Company – were introduced in 1829.
In 1853 the Chancellor of the Exchequer, William Gladstone, commissioned Sir Stafford Northcote and Charles Trevelyan to look into the operation and organisation of the Civil Service. Influenced by the ancient Chinese imperial examination, the Northcote–Trevelyan Report of 1854 made four principal recommendations: that recruitment should be on the basis of merit determined through standardized written examination, that candidates should have a solid general education to enable inter-departmental transfers, that recruits should be graded into a hierarchy, and that promotion should be through achievement rather than 'preferment, patronage or purchase'. A Civil Service Commission was also set up in 1855 to oversee open recruitment and end patronage, and most of the other Northcote–Trevelyan recommendations were implemented over some years.
The Northcote–Trevelyan model of meritocratic examination remained essentially stable for a hundred years. This was a tribute to its success in removing corruption, delivering public services (even under the stress of two world wars), and responding effectively to political change. It also had a great international influence and was adapted by members of the Commonwealth. The Pendleton Civil Service Reform Act established a similar system in the United States.
Before 1702, written examinations were unheard of in European education. "The Chinese examinations were described repeatedly in Western literature on China of the seventeenth and eighteenth centuries." Standardized testing began to influence the method of examination in British universities from the 1850s, where oral examination had been the norm since the Middle Ages. In the US, the transition happened under the influence of the educational reformer Horace Mann. This shift decisively helped to move education into the modern era, by standardizing expanding curricula in the sciences and humanities, creating a rationalized method for the evaluation of teachers and institutions, and creating a basis for the streaming of students according to ability.
Both World War I and World War II demonstrated the necessity of standardized testing and the benefits associated with these tests. Tests were used to determine the mental aptitude of recruits to the military. The US Army used the Stanford–Binet Intelligence Scale to test the IQ of the soldiers.
After World War II, industry began using tests to evaluate applicants for various jobs based on performance. In 1952, the first Advanced Placement (AP) test was administered to begin closing the gap between high schools and colleges.
Some countries, such as the United Kingdom and France, require all their secondary school students to take a standardized test on individual subjects, such as the General Certificate of Secondary Education (GCSE) in England and the Baccalauréat in France, as a requirement for graduation. These tests are used primarily to assess a student's proficiency in specific subjects such as mathematics, science, or literature. In contrast, high school students in other countries, such as the United States, may not be required to take a standardized test to graduate. Moreover, students in these countries usually take standardized tests only to apply for a position in a university program and are typically given the option of taking different standardized tests such as the ACT or SAT, which are used primarily to measure a student's reasoning skill. High school students in the United States may also take Advanced Placement tests on specific subjects to earn university-level credit. Depending on the policies of the test maker or country, administration of standardized tests may be done in a large hall, classroom, or testing center. A proctor or invigilator may also be present during the testing period to provide instructions, to answer questions, or to prevent cheating.
Grades or test scores from standardized tests may also be used by universities to determine if a student applicant should be admitted into one of their academic or professional programs. For example, universities in the United Kingdom admit applicants into their undergraduate programs based primarily or solely on an applicant's grades on pre-university qualifications such as the GCE A-levels or Cambridge Pre-U. In contrast, universities in the United States use an applicant's test score on the SAT or ACT as just one of many admission criteria to determine if an applicant should be admitted into one of their undergraduate programs. The other criteria in this case may include the applicant's grades from high school, extracurricular activities, personal statement, and letters of recommendation. Once admitted, undergraduate students in the United Kingdom or United States may be required by their respective programs to take a comprehensive examination as a requirement for passing their courses or for graduating from their respective programs.
Standardized tests are sometimes used by certain countries to manage the quality of their educational institutions. For example, the No Child Left Behind Act in the United States requires individual states to develop assessments for students in certain grades. In practice, these assessments typically appear in the form of standardized tests. Test scores of students in specific grades of an educational institution are then used to determine the status of that educational institution, i.e., whether it should be allowed to continue to operate in the same way or to receive funding.
Finally, standardized tests are sometimes used to compare proficiencies of students from different institutions or countries. For example, the Organisation for Economic Co-operation and Development (OECD) uses Programme for International Student Assessment (PISA) to evaluate certain skills and knowledge of students from different participating countries.
Licensing and certification
Standardized tests are sometimes used by certain governing bodies to determine if a test taker is allowed to practice a profession, to use a specific job title, or to claim competency in a specific set of skills. For example, a test taker who intends to become a lawyer is usually required by a governing body such as a governmental bar licensing agency to pass a bar exam.
Immigration and naturalization
Standardized tests are also used in certain countries to regulate immigration. For example, intended immigrants to Australia are legally required to pass a citizenship test as part of that country's naturalization process.
Language testing in the naturalization process
When analyzed in the context of language testing in naturalization processes, ideology can be found from two distinct but closely related points of view. One concerns the construction and deconstruction of the constitutive elements that make up a nation's identity, while the second takes a more restricted view of the notion of a specific language and the ideologies it may serve for a specific purpose.
Tests are sometimes used as a tool to select for participants that have potential to succeed in a competition such as a sporting event. For example, serious skaters who wish to participate in figure skating competitions in the United States must pass official U.S. Figure Skating tests just to qualify.
Tests are sometimes used by a group to select for certain types of individuals to join the group. For example, Mensa International is a high I.Q. society that requires individuals to score at the 98th percentile or higher on a standardized, supervised IQ test.
- Formative assessments are informal and formal tests taken during the learning process. These assessments modify the later learning activities to improve student achievement. They identify strengths and weaknesses and help target areas that need work.
- Summative assessments evaluate competence at the end of an instructional unit. Final exams allow assessors to determine if the candidate has assimilated the knowledge or skills to the required standard.
- Norm-referenced tests compare a student's performance against a national or other "norm" group.
- Performance-based assessments require students to solve real-world problems or produce something with real-world application. These assessments allow the educator to gauge how well the students think critically and analytically.
- Authentic assessment is the measurement of accomplishments on meaningful, real-world tasks, in contrast to multiple choice standardized tests.
- Criterion-referenced tests are designed to measure student performance against a fixed set of criteria or learning standards.
Written tests are tests that are administered on paper or on a computer (as an eExam). A test taker who takes a written test could respond to specific items by writing or typing within a given space of the test or on a separate form or document.
In some tests, where knowledge of many constants or technical terms is required to answer questions effectively, as in chemistry or biology, the test developer may allow every test taker to bring a cheat sheet.
A test developer's choice of which style or format to use when developing a written test is usually arbitrary, given that there is no single invariant standard for testing. Even so, certain test styles and formats have become more widely used than others. Below is a list of test item formats widely used by educators and test developers to construct paper- or computer-based tests. These tests may consist of only one type of test item format (e.g., multiple choice test, essay test) or may have a combination of different test item formats (e.g., a test that has both multiple choice and essay items).
In a test that has items formatted as multiple choice questions, a candidate would be given a number of set answers for each question, and the candidate must choose which answer or group of answers is correct. There are two families of multiple choice questions. The first family is known as the True/False question and it requires a test taker to choose all answers that are appropriate. The second family is known as One-Best-Answer question and it requires a test taker to answer only one from a list of answers.
There are several reasons for using multiple choice questions in tests. In terms of administration, multiple choice questions usually require less time for test takers to answer, are easy to score and grade, provide greater coverage of material, allow for a wide range of difficulty, and can easily diagnose a test taker's difficulty with certain concepts. As an educational tool, multiple choice items test many levels of learning as well as a test taker's ability to integrate information, and they provide feedback to the test taker about why distractors were wrong and why correct answers were right. Nevertheless, there are difficulties associated with the use of multiple choice questions. In administrative terms, effective multiple choice items usually take a great deal of time to construct. As an educational tool, multiple choice items do not allow test takers to demonstrate knowledge beyond the choices provided and may even encourage guessing or approximation due to the presence of at least one correct answer. For instance, a test taker who cannot work out a product exactly may still estimate it from a nearby known fact and choose the closest listed answer. Moreover, test takers may misinterpret these items and, in the process, perceive them to be tricky or picky. Finally, multiple choice items do not test a test taker's attitudes towards learning because correct responses can be easily faked.
True/False questions present candidates with a binary choice – a statement is either true or false. This method presents problems: depending on the number of questions, a significant number of candidates could get 100% just by guesswork, and guessers should on average get 50%.
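The guessing problem above is easy to quantify. A minimal sketch (the question counts are arbitrary examples):

```python
# Chance that a pure guesser gets every question right on an
# n-question true/false test, where each guess is correct with p = 1/2.

def p_perfect_by_guessing(n):
    """Probability of n independent correct guesses at p = 1/2 each."""
    return 0.5 ** n

# With few questions, a perfect score by luck alone is quite plausible;
# with more questions it becomes vanishingly rare, even though the
# guesser's expected score stays at 50% regardless of test length.
for n in (5, 10, 20):
    print(f"{n} questions: P(all correct by guessing) = {p_perfect_by_guessing(n):.6f}")
```

This is why longer true/false tests, or scoring schemes that penalize wrong answers, are sometimes used to blunt the effect of guesswork.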
A matching item is an item that provides a defined term and requires a test taker to match identifying characteristics to the correct term.
A fill-in-the-blank item provides a test taker with identifying characteristics and requires the test taker to recall the correct term. There are two types of fill-in-the-blank tests. The easier version provides a word bank of possible words that will fill in the blanks. For some exams all words in the word bank are used exactly once. If a teacher wanted to create a test of medium difficulty, they would provide a test with a word bank, but some words may be used more than once and others not at all. The hardest variety of such a test is a fill-in-the-blank test in which no word bank is provided at all. This generally requires a higher level of understanding and memory than a multiple choice test. Because of this, fill-in-the-blank tests (with no word bank) are often feared by students.
Items such as short answer or essay typically require a test taker to write a response to fulfill the requirements of the item. In administrative terms, essay items take less time to construct. As an assessment tool, essay items can test complex learning objectives as well as the processes used to answer the question. The items can also provide a more realistic and generalizable task for a test. Finally, these items make it difficult for test takers to guess the correct answers and require test takers to demonstrate their writing skills as well as correct spelling and grammar.
The difficulties with essay items are primarily administrative: for example, test takers require adequate time to be able to compose their answers. When these questions are answered, the answers themselves are usually poorly written because test takers may not have time to organize and proofread their answers. In turn, it takes more time to score or grade these items. When these items are being scored or graded, the grading process itself becomes subjective as non-test related information may influence the process. Thus, considerable effort is required to minimize the subjectivity of the grading process. Finally, as an assessment tool, essay questions may potentially be unreliable in assessing the entire content of a subject matter.
Instructions to exam takers rely on the use of command words which direct the examinee to respond in a particular way, for example by describing or defining a concept, comparing and contrasting two or more scenarios or events. In the UK, Ofqual maintains an official list of command words explaining their meaning.
A quiz is a brief assessment which may cover a small amount of material that was given in a class. Some quizzes cover the material from the previous two or three lectures, a reading assignment, or an exercise summarizing the most important part of the class. A single quiz usually does not count for very much, and instructors typically provide this type of test as a formative assessment to help determine whether the student is learning the material. However, the combined scores of all the quizzes an instructor collects can make up a significant part of the final course grade.
Most mathematics questions, or calculation questions from subjects such as chemistry, physics or economics, employ a style which does not fall into any of the above categories, although some papers, notably the Maths Challenge papers in the United Kingdom, employ multiple choice. Instead, most mathematics questions state a mathematical problem or exercise that requires a student to write a freehand response. Marks are given more for the steps taken than for the correct answer. If the question has multiple parts, later parts may use answers from previous sections, and marks may be granted if an earlier incorrect answer was used but the correct method was followed, and an answer which is correct (given the incorrect input) is returned.
Higher level mathematical papers may include variations on true/false, where the candidate is given a statement and asked to verify its validity by direct proof or stating a counterexample.
Though not as popular as the closed-note test, open-note tests are slowly rising in popularity. An open-note test allows the test taker to bring in all of their notes and use them while taking the test. The questions asked on open-note exams are typically more thought-provoking and intellectual than questions on a closed-note exam. Rather than testing which facts a test taker knows, open-note exams require the test taker to apply those facts to a broader question. The main benefit seen from open-note tests is that they are better preparation for the real world, where little has to be memorized and needed resources are at one's disposal.
An oral test is a test that is answered orally (verbally). The teacher or oral test assessor will verbally ask a question to a student, who will then answer it using words.
Physical fitness tests
A physical fitness test is a test designed to measure physical strength, agility, and endurance. They are commonly employed in educational institutions as part of the physical education curriculum, in medicine as part of diagnostic testing, and as eligibility requirements in fields that focus on physical ability such as military or police. Throughout the 20th century, scientific evidence emerged demonstrating the usefulness of strength training and aerobic exercise in maintaining overall health, and more agencies began to incorporate standardized fitness testing. In the United States, the President's Council on Youth Fitness was established in 1956 as a way to encourage and monitor fitness in schoolchildren.
Common tests include timed running or the multi-stage fitness test (commonly known as the "beep test"), and counts of the push-ups, sit-ups/abdominal crunches and pull-ups that the individual can perform. More specialised tests may be used to assess the ability to perform a particular job or role. Many gyms, private organisations and event organizers have their own fitness tests, drawing on military techniques developed by the British Army and on modern tests such as the Illinois Agility Run and the Cooper Test.
Stopwatch timing was the norm until recent years, when hand timing was shown to be inaccurate and inconsistent. Electronic timing is the new norm, promoting accuracy and consistency and lessening bias.
A performance test is an assessment that requires an examinee to actually perform a task or activity, rather than simply answering questions referring to specific parts. The purpose is to ensure greater fidelity to what is being tested.
An example is a behind-the-wheel driving test to obtain a driver's license. Rather than only answering simple multiple-choice items regarding the driving of an automobile, a student is required to actually drive one while being evaluated.
Performance tests are commonly used in workplace and professional applications, such as professional certification and licensure. When used for personnel selection, the tests might be referred to as a work sample. A licensure example would be cosmetologists being required to demonstrate a haircut or manicure on a live person. The Group-Bourdon test is one of a number of psychometric tests which trainee train drivers in the UK are required to pass.
Some performance tests are simulations. For instance, the assessment to become certified as an ophthalmic technician includes two components, a multiple-choice examination and a computerized skill simulation. The examinee must demonstrate the ability to complete seven tasks commonly performed on the job, such as retinoscopy, that are simulated on a computer.
From the perspective of a test developer, there is great variability with respect to the time and effort needed to prepare a test. Likewise, from the perspective of a test taker, there is great variability with respect to the time and effort needed to obtain a desired grade or score on any given test. When a test developer constructs a test, the amount of time and effort is dependent upon the significance of the test itself, the proficiency of the test taker, the format of the test, class size, deadline of the test, and experience of the test developer.
The process of test construction has been aided in several ways. For one, many test developers were themselves students at one time, and therefore are able to modify or outright adopt questions from their previous tests. In some countries, book publishers often provide teaching packages that include test banks to university instructors who adopt their published books for their courses. These test banks may contain up to four thousand sample test questions that have been peer-reviewed and time-tested. The instructor who chooses to use this testbank would only have to select a fixed number of test questions from this test bank to construct a test.
As with test construction, the time needed for a test taker to prepare for a test is dependent upon the frequency of the test, the test developer, and the significance of the test. In general, nonstandardized tests that are short, frequent, and do not constitute a major portion of the test taker's overall course grade or score do not require the test taker to spend much time preparing for the test. Conversely, nonstandardized tests that are long, infrequent, and do constitute a major portion of the test taker's overall course grade or score usually require the test taker to spend great amounts of time preparing for the test. To prepare for a nonstandardized test, test takers may rely upon their reference books, class or lecture notes, the Internet, and past experience. Test takers may also use various learning aids to study for tests, such as flashcards and mnemonics. Test takers may even hire tutors to coach them through the process so that they may increase the probability of obtaining a desired test grade or score. In countries such as the United Kingdom, demand for private tuition has increased significantly in recent years. Finally, test takers may rely upon past copies of a test from previous years or semesters to study for a future test. These past tests may be provided by a friend or a group that has copies of previous tests, by instructors and their institutions, or by the test provider (such as an examination board) itself.
Unlike a nonstandardized test, the time needed by test takers to prepare for standardized tests is less variable and usually considerable. This is because standardized tests are usually uniform in scope, format, and difficulty and often have important consequences with respect to a test taker's future such as a test taker's eligibility to attend a specific university program or to enter a desired profession. It is not unusual for test takers to prepare for standardized tests by relying upon commercially available books that provide in-depth coverage of the standardized test or compilations of previous tests (e.g., 10 year series in Singapore). In many countries, test takers even enroll in test preparation centers or cram schools that provide extensive or supplementary instructions to test takers to help them better prepare for a standardized test. In Hong Kong, it has been suggested that the tutors running such centers are celebrities in their own right. This has led to private tuition being a popular career choice for new graduates in developed economies. Finally, in some countries, instructors and their institutions have also played a significant role in preparing test takers for a standardized test.
Cheating on a test is the process of using unauthorized means or methods for the purpose of obtaining a desired test score or grade. This may range from bringing and using notes during a closed book examination, to copying another test taker's answer or choice of answers during an individual test, to sending a paid proxy to take the test.
Several common methods have been employed to combat cheating. They include the use of multiple proctors or invigilators during a testing period to monitor test takers. Test developers may construct multiple variants of the same test to be administered to different test takers at the same time, or write tests with few multiple-choice options, based on the theory that fully worked answers are difficult to imitate. In some cases, instructors themselves may not administer their own tests but will leave the task to other instructors or invigilators, which may mean that the invigilators do not know the candidates, and thus some form of identification may be required. Another method, used in some mastery-based systems, is to require a student who fails too many tests at a given level to work through additional skill builders, drop down a level, or acquire missing prerequisite skills before retesting; a student who passes the required number of tests at a level then records the achievement next to his or her name on a large wall chart showing the status of the entire class. Finally, instructors or test providers may compare the answers of suspected cheaters on the tests themselves to determine if cheating did occur.
Support and criticisms
Despite their widespread use, the validity, quality, and use of tests, particularly standardized tests in education, continue to be both widely supported and widely criticized. Like the tests themselves, support and criticism of tests are varied and may come from a variety of sources such as parents, test takers, instructors, business groups, universities, or governmental watchdogs.
Supporters of standardized tests in education often provide the following reasons for promoting testing in education:
- Feedback or diagnosis of test taker's performance
- Fair and efficient
- Promotes accountability
- Prediction and selection
- Improves performance
Critics of standardized tests in education often provide the following reasons for revising or removing standardized tests in education:
- Narrows curricular format and encourages teaching to the test.
- Poor predictive quality.
- Grade inflation of test scores or grades.
- Culturally or socioeconomically biased.
- ordinary exam: an exam taken during the corresponding course;
- sufficiency exam or examination for credit: an exam taken to obtain official credits from the academic institution;
- revalidation exam or equivalence exam: an exam granting credit for an exam previously taken at another institution;
- extraordinary exam: an exam taken after the period of ordinary exams corresponding to the course.
- Academic dishonesty
- Homework, also known as Assignment (education) – tasks assigned to students to be completed outside of class
- Bar examination
- Blue book exam, used in free response exams
- Computerized adaptive testing – A form of computer-based test that adapts to the examinee's ability level
- Computerized classification test
- Concept inventory – A criterion-referenced test to help determine whether a student has an accurate working knowledge of a specific set of concepts
- Cooper test, used by Law, Military and Fire services
- Driver's license
- Electronic assessment
- E-scape, a technology and approach that looks specifically at the assessment of creativity and collaboration.
- Educational software – Software intended for an educational purpose.
- Test anxiety
- General Educational Development
- Grading in education
- Harvard step test, a cardiovascular test
- List of standardized tests in the United States
- Matriculation examination
- Medical College Admission Test
- Optical mark recognition
- Performance testing – An assessment that requires the subject to actually perform a task or activity
- Physical examination – Process by which a medical professional investigates the body of a patient for signs of disease
- Pilot certification in the United States – Pilot certification
- Progress testing
- Project Talent (in the US)
- Vertical jump, a leg power test
- Trial and error, a method of problem solving
- Abitur—used in Germany.
- GCSE and A-level—used in the UK except Scotland.
- International Baccalaureate Diploma Programme—International examination.
- International General Certificate of Secondary Education (IGCSE)—international examination.
- Junior Certificate and Leaving Certificate—used in the Republic of Ireland.
- Matura/Maturita—used in Austria, Bosnia and Herzegovina, Bulgaria, Croatia, the Czech Republic, Italy, Liechtenstein, Hungary, Macedonia, Montenegro, Poland, Serbia, Slovenia, Switzerland and Ukraine; previously used in Albania.
- Nationella prov—used in Sweden.
- National 5, Higher Grade, and Advanced Higher—used in Scotland.
- "Definition of TEST".
- Thissen, D., & Wainer, H. (2001). Test Scoring. Mahwah, NJ: Erlbaum. Page 1, sentence 1.
- North Central Regional Educational Laboratory, NCREL.org Archived 2008-03-05 at the Wayback Machine
- Goswami, U. (1991). "Put to the Test: The Effects of External Testing on Teachers". Educational Researcher. 20: 8–11. Archived from the original on 2013-02-02.
- Advanced Level Examination, Chinese Language and Culture, Paper 1A
- Bodde, D., Chinese Ideas in the West
- Bodde, Derke. "China: A Teaching Workbook". Columbia University.
- (Bodde 2005)
- Mark W. Huddleston, William W. Boyer (1996). The Higher Civil Service in the United States: Quest for Reform. University of Pittsburgh Press. ISBN 9780822974734.
- Kazin, Edwards, and Rothman (2010), 142.
- Walker, David (2003-07-09). "Fair game". The Guardian. London. Retrieved 2003-07-09.
- Bodde, D., Chinese Ideas in the West, p.9
- David R. Russell (2002). Writing in the Academic Disciplines: A Curricular History. SIU Press. pp. 158–159. ISBN 9780809324675.
- Kaplan, R. M., & Saccuzzo, D. P. (2009) Psychological Testing Belmont, CA: Wadsworth
- "Archived copy" (PDF). Archived from the original (PDF) on 2009-02-05. Retrieved 2009-01-29.
- "GCSEs: The official guide to the system" (PDF). Archived from the original (PDF) on 2012-06-04.
- "About the SAT". 2016-11-28.
- "About ACT: History". Archived from the original on October 8, 2006. Retrieved October 31, 2006. Name changed in 1996.
- "Cambridge Pre-U".
- "International Qualifications - University of Oxford". Archived from the original on 2010-08-22.
- "Harvard College Admissions".
- "Australian Citizenship - Australian Citizenship test".
- "Language ideology and citizenship: a comparative analysis of language testing in naturalisation processes" (October 2012). European Journal of Language Policy. Liverpool University Press. 4 (2): 217–236. doi:10.3828/ejlp.2012.13.
- "Welcome to U.S. Figure Skating". Archived from the original on 2010-07-27.
- "What is Mensa?".
- "Constructing Written Test Questions For the Basic and Clinical Sciences" (PDF).
- "Types of Test Item Formats".
- "MFO Topic C5: Developing Test Questions".
- AQA, Command words, accessed 27 December 2018
- Tobias, S (1995). Overcoming Math Anxiety. New York: W.W. Norton and Company. p. 85 (Chapter 4).
- "Different Exam Types - Different Approaches". ExamTime. 2012-02-21. Retrieved 2017-12-11.
- Johanns, Beth; Dinkens, Amber; Moore, Jill (2017-11-01). "A systematic review comparing open-book and closed-book examinations: Evaluating effects on development of critical thinking skills". Nurse Education in Practice. 27: 89–94. doi:10.1016/j.nepr.2017.08.018. ISSN 1471-5953. PMID 28881323.
- "Army Fitness Standards".
- "RAF Fitness Standards".
- "USMC Personal Fitness Test (Chapter 2 - Conduct of the PFT)" (PDF).
- "Welcome". Fittest.live. Retrieved 2016-11-10.
- Mayhew, Jerry L.; Houser, Jeremy J.; Briney, Ben B.; Williams, Tyler B.; Piper, Fontaine C.; Brechue, William F. (2010). "Comparison Between Hand and Electronic Timing of 40-yd Dash Performance in College Football Players". Journal of Strength and Conditioning Research. 24 (2): 447–451. doi:10.1519/JSC.0b013e3181c08860. PMID 20072055.
- "Group-Bourdon tool". Digital Reality. Archived from the original on 3 January 2011. Retrieved 2 March 2011.
- WEHMEIER, Nicolas. "Oxford University Press | Online Resource Centre | Learn about Test banks". global.oup.com. Retrieved 2016-12-09.
- "How to study for Quizzes and Exams in Biochemistry" (PDF). Archived from the original (PDF) on 2010-12-31.
- "Study strategies". Archived from the original on 2011-10-07.
- Weale, Sally (2016-09-07). "Sharp rise in children receiving private tuition". The Guardian. ISSN 0261-3077. Retrieved 2016-12-09.
- "Past Exam Papers". Archived from the original on 2010-08-10.
- "Past papers and mark schemes". www.aqa.org.uk. AQA. Archived from the original on 2016-12-21. Retrieved 2016-12-09.
- Sharma, Yojana (2012-11-27). "Meet the 'tutor kings and queens'". BBC News. Retrieved 2016-12-09.
- Lomax, Robert. "How to become a private tutor". Retrieved 2016-12-09.
- Cohen, Daniel H. (2013-10-25). "The new boom in home tuition – if you can pay £40 an hour". The Guardian. ISSN 0261-3077. Retrieved 2016-12-09.
- "Proxy test takers, item harvesters and cheaters... be very afraid". ccie-in-3-months.blogspot.co.uk. Retrieved 2016-12-09.
- "Easy Ways to Prevent Cheating". TeachHUB. Retrieved 2016-12-09.
- Cizek, Gregory J. (1999). Cheating on Tests: How to Do It, Detect It, and Prevent It. Lawrence Erlbaum Associates.
- Phelps, Richard (2005). Defending standardized testing. London: Psychology Press. ISBN 978-0-8058-4912-7.
- Hirsch Jr., Eric (1999). The Schools We Need: And Why We Don't Have Them. New York: Anchor. ISBN 978-0-385-49524-0.
- "FairTest criticism of the SAT". fairtest.org.
- Paton, Graeme (July 6, 2010). "Universities criticise exam 'grade inflation'". The Daily Telegraph. London.
- Vasagar, Jeevan (August 2, 2010). "Fears for state pupils as top universities insist on A* at A-level". The Guardian. London.
- Finch, Julia (March 10, 2010). "They can't read, can't write, keep time or be tidy: Tesco director's verdict on school-leavers". The Guardian. London.
- Hedges, Larry V.; Laine, Richard D.; Greenwald, Rob (1994). "An Exchange: Part I: Does Money Matter? A Meta-Analysis of Studies of the Effects of Differential School Inputs on Student Outcomes". Educational Researcher. 23 (3): 5–14. doi:10.3102/0013189X023003005.
- Coughlan, Sean. Bright poor 'held back for decades', BBC, October 16, 2013. Retrieved on October 17, 2013.
- Airasian, P. (1994) "Classroom Assessment," Second Edition, NY" McGraw-Hill.
- Cangelosi, J. (1990) "Designing Tests for Evaluating Student Achievement." NY: Addison-Wesley.
- Gronlund, N. (1993) "How to make achievement tests and assessments," 5th edition, NY: Allyn and Bacon.
- Haladyna, T.M. & Downing, S.M. (1989) Validity of a Taxonomy of Multiple-Choice Item-Writing Rules. "Applied Measurement in Education," 2(1), 51-78.
- Monahan, T. (1998) The Rise of Standardized Educational Testing in the U.S. – A Bibliographic Overview.
- Ravitch, Diane, "The Uses and Misuses of Tests", in The Schools We Deserve (New York: Basic Books, 1985), pp. 172–181.
- Wilson, N. (1997) Educational standards and the problem of error. Education Policy Analysis Archives, Vol 6 No 10
Wikisource has the text of the 1911 Encyclopædia Britannica article Examinations.
- "About the Joint Committee on Testing Practices". American Psychological Association (apa.org). Retrieved 2 Aug 2011.
The Joint Committee on Testing Practices (JCTP) was established in 1985 by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME). In 2007 the JCTP disbanded, but JCTP publications are still available and may be obtained by contacting any of the groups listed in the product descriptions shown below.
- How the traditional Chinese system of exams worked