SAT Reasoning Test
|Type||Paper-based standardized test|
|Developer / administrator||College Board, Educational Testing Service.|
|Knowledge/skill(s) tested||Writing, critical reading, mathematics.|
|Purpose||Admissions to undergraduate programs of universities or colleges.|
|Duration||3 hours and 45 minutes|
|Score/grade range||200 to 800 (in 10-point increments) on each of the 3 sections, for a total of 600 to 2400; essay scored on a scale of 0 to 12, in 1-point increments.|
|Offered||8 times a year|
|Country(ies) / region(s)||Worldwide|
|Test takers||Over 1.66 million high school graduates in the class of 2013|
|Prerequisites / eligibility criteria||No official prerequisite. Intended for high school students. Fluency in English assumed.|
|Testing fee||US$ 51 to US$ 91, depending on country.|
|Scores/grades used by||Most universities and colleges offering undergraduate programs, in USA.|
The SAT is owned, published, and developed by the College Board, a private, nonprofit organization in the United States. It was formerly developed, published, and scored by the Educational Testing Service, which still administers the exam. The test is intended to assess a student's readiness for college. It was first introduced in 1926, and its name and scoring have changed several times: it was first called the Scholastic Aptitude Test, then the Scholastic Assessment Test.
The current SAT Reasoning Test, introduced in 2005, takes 3 hours and 45 minutes to finish and costs US$51 (US$91 international), excluding late fees. Possible scores on the SAT range from 600 to 2400, combining test results from three 800-point sections: Mathematics, Critical Reading, and Writing. However, the SAT does not mirror the high school curriculum, and some SAT experts assert that the test does not measure raw math or verbal abilities but is primarily a measure of how well one takes the SAT. Taking the SAT or its competitor, the ACT, is required for freshman entry to many, but not all, universities in the United States.
- 1 Function
- 2 Structure
- 3 Logistics
- 4 Preparations
- 5 Raw scores, scaled scores, and percentiles
- 6 SAT-ACT score comparisons
- 7 History
- 7.1 1901 test
- 7.2 1926 test
- 7.3 1928 and 1929 tests
- 7.4 1930 test and 1936 changes
- 7.5 1946 test and associated changes
- 7.6 1980 test and associated changes
- 7.7 1994 changes
- 7.8 1995 re-centering (raising mean score back to 500)
- 7.9 1995 re-centering controversy
- 7.10 2002 changes – Score Choice
- 7.11 2005 changes
- 7.12 Scoring problems of October 2005 tests
- 7.13 2008 changes
- 7.14 2012 changes
- 7.15 2016 changes
- 8 Name changes and recentered scores
- 9 Math-verbal achievement gap
- 10 Reuse of old SAT exams
- 11 Perception
- 12 See also
- 13 References
- 14 Further reading
- 15 External links
Function

The SAT is typically taken by high school sophomores, juniors, and seniors. The College Board states that the SAT measures literacy and writing skills that are needed for academic success in college, and that it assesses how well test takers analyze and solve problems, skills they learned in school that they will need in college. However, the test is administered under a tight time limit (speeded) to help produce a range of scores; this can cause brilliant students who are slow test takers to receive only average scores.
The College Board also states that use of the SAT in combination with high school grade point average (GPA) provides a better indicator of success in college than high school grades alone, as measured by college freshman GPA. Various studies conducted over the lifetime of the SAT show a statistically significant increase in correlation of high school grades and freshman grades when the SAT is factored in.
There are substantial differences in funding, curricula, grading, and difficulty among U.S. secondary schools due to U.S. federalism, local control, and the prevalence of private, distance, and home schooled students. SAT (and ACT) scores are intended to supplement the secondary school record and help admission officers put local data—such as course work, grades, and class rank—in a national perspective.
Historically, the SAT has been more popular among colleges on the coasts and the ACT more popular in the Midwest and South. Some colleges require the ACT for course placement, and a few formerly did not accept the SAT at all; today, nearly all colleges accept the test.
While the exact manner in which SAT scores help determine admission of a student at American institutions of higher learning is generally decided by the individual institution, some foreign countries have made SAT (and ACT) scores a legal criterion in deciding whether holders of U.S. high school diplomas will be admitted to their public universities. Most universities accept applications only from first-time freshmen, that is, applicants who have not enrolled in another degree course in the meantime.
Structure

The SAT consists of three major sections: Critical Reading, Mathematics, and Writing. Each section receives a score on the scale of 200–800; all scores are multiples of 10, and the total score is the sum of the three section scores. Each major section is divided into three parts. There are 10 sub-sections, including an additional 25-minute experimental or "equating" section that may be in any of the three major sections. The experimental section is used to normalize questions for future administrations of the SAT and does not count toward the final score. The test contains 3 hours and 45 minutes of actual timed sections; most administrations (after accounting for orientation, distribution of materials, completion of biographical sections, and fifteen minutes of timed breaks) run for about four and a half hours. Questions are categorized as easy, medium, or hard based on statistics gathered from the experimental sections. Easier questions typically appear closer to the beginning of a section, while harder questions appear toward the end. This is not true for every section (questions in the Critical Reading section follow the order of the passage), but it is the rule of thumb for math, grammar, and the 19 sentence completions in the reading sections.
The Critical Reading section of the SAT is made up of three scored sections: two 25-minute sections and one 20-minute section, with varying types of questions, including sentence completions and questions about short and long reading passages. Critical Reading sections normally begin with 5 to 8 sentence completion questions; the remainder of the questions are focused on the reading passages. Sentence completions generally test the student's vocabulary and understanding of sentence structure and organization by requiring the student to select one or two words that best complete a given sentence. The bulk of the Critical Reading section is made up of questions regarding reading passages, in which students read short excerpts on social sciences, humanities, physical sciences, or personal narratives and answer questions based on the passage. Certain sections contain passages asking the student to compare two related passages; generally, these consist of shorter reading passages. The number of questions about each passage is proportional to the length of the passage. Unlike in the Mathematics section, where questions go in the order of difficulty, questions in the Critical Reading section go in the order of the passage. Overall, question sets near the beginning of the section are easier, and question sets near the end of the section are harder.
The Mathematics section of the SAT is widely known as the Quantitative Section or Calculation Section. The mathematics section consists of three scored sections. There are two 25-minute sections and one 20-minute section, as follows:
- One of the 25-minute sections is entirely multiple choice, with 20 questions.
- The other 25-minute section contains 8 multiple choice questions and 10 grid-in questions. For grid-in questions, test-takers write the answer inside a grid on the answer sheet. Unlike multiple choice questions, there is no penalty for incorrect answers on grid-in questions because the test-taker is not limited to a few possible choices.
- The 20-minute section is all multiple choice, with 16 questions.
- New topics include Algebra II and scatter plots. These recent changes have resulted in a shorter, more quantitative exam requiring knowledge of higher-level mathematics courses than the previous exam.
Four-function, scientific, and graphing calculators are permitted on the SAT math section; however, calculators are not permitted on either of the other sections. Calculators with QWERTY keyboards, cell phone calculators, portable computers, and personal organizers are not permitted.
With the recent changes to the content of the SAT math section, the need to save time while maintaining accuracy of calculations has led some to use calculator programs during the test. These programs allow students to complete problems faster than would normally be possible when making calculations manually.
The use of a graphing calculator is sometimes preferred, especially for geometry problems and exercises involving multiple calculations. According to research conducted by the College Board, performance on the math sections of the exam is associated with the extent of calculator use: those using calculators on about a third to a half of the items averaged higher scores than those using calculators less frequently. Using a graphing calculator in mathematics courses, and becoming familiar with it outside the classroom, is known to have a positive effect on students' performance on the exam.
The writing portion of the SAT, based on but not directly comparable to the old SAT II subject test in writing (which in turn was developed from the old Test of Standard Written English (TSWE)), includes multiple choice questions and a brief essay. The essay subscore contributes about 28% to the total writing score, with the multiple choice questions contributing 70%. This section was implemented in March 2005 following complaints from colleges about the lack of uniform examples of a student's writing ability and critical thinking.
The multiple choice questions include error identification questions, sentence improvement questions, and paragraph improvement questions. Error identification and sentence improvement questions test the student's knowledge of grammar, presenting an awkward or grammatically incorrect sentence; in the error identification section, the student must locate the word producing the source of the error or indicate that the sentence has no error, while the sentence improvement section requires the student to select an acceptable fix to the awkward sentence. The paragraph improvement questions test the student's understanding of logical organization of ideas, presenting a poorly written student essay and asking a series of questions as to what changes might be made to best improve it.
The essay section, which is always administered as the first section of the test, is 25 minutes long. All essays must be in response to a given prompt. The prompts are broad and often philosophical and are designed to be accessible to students regardless of their educational and social backgrounds. For instance, test takers may be asked to expand on such ideas as their opinion on the value of work in human life or whether technological change also carries negative consequences to those who benefit from it. No particular essay structure is required, and the College Board accepts examples "taken from [the student's] reading, studies, experience, or observations." Two trained readers assign each essay a score between 1 and 6, where a score of 0 is reserved for essays that are blank, off-topic, non-English, not written with a Number 2 pencil, or considered illegible after several attempts at reading. The scores are summed to produce a final score from 2 to 12 (or 0). If the two readers' scores differ by more than one point, then a senior third reader decides. The average time each reader/grader spends on each essay is less than 3 minutes.
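The two-reader rule described above is simple enough to sketch in code. The following is a hypothetical model for illustration only: the function name is invented, and the senior third reader is modeled as simply supplying the final score, which the source does not specify in detail.

```python
def combined_essay_score(reader1: int, reader2: int, senior_score=None) -> int:
    """Combine two readers' scores (each 1 to 6) into a 2-12 essay score.

    If the two scores differ by more than one point, a senior third
    reader decides; here that decision is passed in as `senior_score`.
    The special score of 0 (blank or off-topic essays) is not modeled.
    """
    if not (1 <= reader1 <= 6 and 1 <= reader2 <= 6):
        raise ValueError("each reader assigns a score from 1 to 6")
    if abs(reader1 - reader2) > 1:
        if senior_score is None:
            raise ValueError("scores differ by more than one point; "
                             "a senior third reader must decide")
        return senior_score
    # Otherwise the two readers' scores are summed, giving 2 to 12.
    return reader1 + reader2

print(combined_essay_score(4, 5))                  # 9
print(combined_essay_score(2, 5, senior_score=8))  # 8 (readers disagreed)
```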
In March 2004, Les Perelman analyzed 15 scored sample essays contained in the College Board's ScoreWrite book along with 30 other training samples and found that in over 90% of cases, the essay's score could be predicted from simply counting the number of words in the essay. Two years later, Perelman trained high school seniors to write essays that made little sense but contained infrequently used words such as "plethora" and "myriad". All of the students received scores of "10" or better, which placed the essays in the 92nd percentile or higher.
Style of questions
Most of the questions on the SAT, except for the essay and the grid-in math responses, are multiple choice; all multiple-choice questions have five answer choices, one of which is correct. The questions of each section of the same type are generally ordered by difficulty. However, an important exception exists: questions that follow the long and short reading passages are organized in the order of the passage, rather than by difficulty. Ten of the questions in one of the math sub-sections are not multiple choice; they instead require the test taker to bubble in a number in a four-column grid.
The questions are weighted equally. For each correct answer, one raw point is added. For each incorrect answer one-fourth of a point is deducted. No points are deducted for incorrect math grid-in questions. This ensures that a student's mathematically expected gain from guessing is zero. The final score is derived from the raw score; the precise conversion chart varies between test administrations.
The College Board therefore recommends making only educated guesses, that is, guessing only when the test taker can eliminate at least one answer he or she thinks is wrong. Without eliminating any answers, one's probability of answering correctly is 20%. Eliminating one wrong answer increases this probability to 25% (and the expected gain to 1/16 of a point); two, a 33.3% probability (1/6 of a point); and three, a 50% probability (3/8 of a point).
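These figures follow directly from the quarter-point penalty: with five choices and k known-wrong answers eliminated, the expected gain is 1/(5-k) minus a quarter of the remaining miss probability. A few lines of code verify the arithmetic (the function name is illustrative):

```python
from fractions import Fraction

def expected_gain(eliminated: int) -> Fraction:
    """Expected raw-point gain from guessing on a five-choice question
    after eliminating `eliminated` answers known to be wrong.

    A correct guess earns 1 point; an incorrect one costs 1/4 point.
    """
    remaining = 5 - eliminated
    p_correct = Fraction(1, remaining)
    return p_correct * 1 - (1 - p_correct) * Fraction(1, 4)

for k in range(4):
    print(k, expected_gain(k))
# 0 0, 1 1/16, 2 1/6, 3 3/8 -- matching the figures in the text
```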
|Section||Average Score||Time (Minutes)||Content|
|Writing||493||60||Grammar, usage, and diction.|
|Mathematics||515||70||Number and operations; algebra and functions; geometry; statistics, probability, and data analysis|
|Critical Reading||501||70||Vocabulary, critical reading, and sentence-level reading|
Logistics

The SAT is offered seven times a year in the United States: in October, November, December, January, March (or April, alternating), May, and June. The test is typically offered on the first Saturday of the month for the November, December, May, and June administrations. In other countries, the SAT is offered on the same dates as in the United States except for the first spring test date (i.e., March or April), which is not offered. The test was taken by 1,660,047 high school graduates in the class of 2013.
Candidates may take either the SAT Reasoning Test or up to three SAT Subject Tests on any given test date, except the first spring test date, when only the SAT Reasoning Test is offered. Candidates wishing to take the test may register online at the College Board's website, by mail, or by telephone, at least three weeks before the test date.
The SAT Subject Tests are all given in one large book on test day. Therefore, it is actually immaterial which tests, and how many, the student signs up for; with the possible exception of the language tests with listening, the student may change his or her mind and take any tests, regardless of his or her initial sign-ups. Students who choose to take more subject tests than they signed up for will later be billed by College Board for the additional tests and their scores will be withheld until the bill is paid. Students who choose to take fewer subject tests than they signed up for are not eligible for a refund.
The SAT Reasoning Test costs $51 ($78 internationally; $99 in India and Pakistan, where an older testing system is in place). For the Subject Tests, students pay a $24.50 ($49 international, $73 for India and Pakistan) basic registration fee and $13 per test (except for language tests with listening, which cost $24 each). The College Board makes fee waivers available for low-income students. Additional fees apply for late registration, standby testing, registration changes, scores by telephone, and extra score reports (beyond the four provided for free).
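The Subject Test fee structure above (a flat registration fee plus a per-test charge) can be sketched as follows; the function name is hypothetical and the figures are the domestic rates quoted in the text:

```python
def subject_test_fee(n_standard: int, n_listening: int = 0) -> float:
    """Estimate the domestic SAT Subject Test fee:
    a $24.50 basic registration fee, plus $13 per standard test
    and $24 per language-with-listening test."""
    return 24.50 + 13 * n_standard + 24 * n_listening

# e.g. two standard Subject Tests plus one listening test
print(subject_test_fee(2, 1))  # 74.5
```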
Candidates whose religious beliefs prevent them from taking the test on a Saturday may request to take the test on the following day, except for the October test date in which the Sunday test date is eight days after the main test offering. Such requests must be made at the time of registration and are subject to denial.
Students with verifiable disabilities, including physical and learning disabilities, are eligible to take the SAT with accommodations. The standard time increase for students requiring additional time due to learning disabilities is time + 50%; time + 100% is also offered.
Preparations

SAT preparation is a highly lucrative field, and many companies and organizations offer test preparation in the form of books, classes, online courses, and tutoring. The College Board maintains that the SAT is essentially uncoachable; research by the College Board and the National Association for College Admission Counseling suggests that tutoring courses result in an average increase of about 20 points on the math section and 10 points on the verbal section. Other studies have shown significantly different results: a study from Ohio State showed that private SAT prep classes boost scores by an average of 60 points, and a study from Oxford showed that coaching courses boosted scores by an average of 56 points. Several private test prep companies have claimed much higher average results for their dedicated students: PrepScholar guarantees an increase of at least 240 points, and the founder of Ivy Bound Test Prep is on record as saying that diligent students of their courses will see at least a 150-point increase. When evaluating the results of studies and private institutions, it is important to keep in mind that prep courses vary widely in length, from one-day courses to 10-day courses and beyond.
Raw scores, scaled scores, and percentiles
Students receive their online score reports approximately three weeks after test administration (six weeks for mailed, paper scores), with each section graded on a scale of 200–800 and two subscores for the writing section: the essay score and the multiple-choice subscore. In addition to their score, students receive their percentile (the percentage of other test takers with lower scores). The raw score, or the number of points gained from correct answers and lost from incorrect answers (which ranges from just under 50 to just under 60, depending upon the test), is also included. Students may also receive, for an additional fee, the Question and Answer Service, which provides the student's answer, the correct answer to each question, and online resources explaining each question.
The corresponding percentile of each scaled score varies from test to test—for example, in 2003, a scaled score of 800 in both sections of the SAT Reasoning Test corresponded to a percentile of 99.9, while a scaled score of 800 in the SAT Physics Test corresponded to the 94th percentile. The differences in what scores mean with regard to percentiles are due to the content of the exam and the caliber of students choosing to take each exam. Subject Tests are subject to intensive study (often in the form of an AP, which is relatively more difficult), and only those who know they will perform well tend to take these tests, creating a skewed distribution of scores.
|Percentile||Score, 1600 Scale||Score, 2400 Scale|
|* The percentile of the perfect score was 99.98 on the 2400 scale and 99.93 on the 1600 scale.|
|** 99+ means better than 99.5 percent of test takers.|
The older SAT (before 1995) had a very high ceiling. In any given year, only seven of the million test-takers scored above 1580. A score above 1580 was equivalent to the 99.9995 percentile.
SAT-ACT score comparisons
The College Board and ACT, Inc. conducted a joint study of students who took both the SAT and the ACT between 2004 and 2006 and released a pair of concordance tables in 2009 that concord composite and writing scores separately. ACT, Inc. has also created its own "Estimated Relationship between ACT Composite Score and SAT CR+M+W Score" chart.
History

Originally used mainly by colleges and universities in the northeastern United States, and developed by Carl Brigham, one of the psychologists who worked on the Army Alpha and Beta tests, the SAT was developed as a way to eliminate test bias between people from different socio-economic backgrounds.
1901 test

The College Board began on June 17, 1901, when 973 students took its first test, across 67 locations in the United States, and two in Europe. Although those taking the test came from a variety of backgrounds, approximately one third were from New York, New Jersey, or Pennsylvania. The majority of those taking the test were from private schools, academies, or endowed schools. About 60% of those taking the test applied to Columbia University. The test contained sections on English, French, German, Latin, Greek, history, mathematics, chemistry, and physics. The test was not multiple choice, but instead was evaluated based on essay responses as "excellent", "good", "doubtful", "poor" or "very poor".
1926 test

The first administration of the SAT occurred on June 23, 1926, when it was known as the Scholastic Aptitude Test. This test, prepared by a committee headed by Princeton psychologist Carl Campbell Brigham, had sections of definitions, arithmetic, classification, artificial language, antonyms, number series, analogies, logical inference, and paragraph reading. It was administered to over 8,000 students at over 300 test centers. Men composed 60% of the test-takers. Slightly over a quarter of the men applied to Yale University, and slightly over a quarter of the women to Smith College. The test was paced rather quickly, test-takers being given only a little over 90 minutes to answer 315 questions.
1928 and 1929 tests
In 1928 the number of verbal sections was reduced to 7, and the time limit was increased to slightly under two hours. In 1929 the number of sections was again reduced, this time to 6. These changes in part loosened time constraints on test-takers. Math was eliminated entirely from these tests, which focused only on verbal ability.
1930 test and 1936 changes
In 1930 the SAT was first split into the verbal and math sections, a structure that would continue through 2004. The verbal section of the 1930 test covered a narrower range of content than its predecessors, examining only antonyms, double definitions (somewhat similar to sentence completions), and paragraph reading. In 1936, analogies were re-added. Between 1936 and 1946, students had between 80 and 115 minutes to answer 250 verbal questions (over a third of which were on antonyms). The mathematics test introduced in 1930 contained 100 free response questions to be answered in 80 minutes, and focused primarily on speed. From 1936 to 1941, as with the 1928 and 1929 tests, the mathematics section was eliminated entirely. When the mathematics portion of the test was re-added in 1942, it consisted of multiple choice questions.
1946 test and associated changes
Paragraph reading was eliminated from the verbal portion of the SAT in 1946, and replaced with reading comprehension, and "double definition" questions were replaced with sentence completions. Between 1946 and 1957 students were given 90 to 100 minutes to complete 107 to 170 verbal questions. Starting in 1958 time limits became more stable, and for 17 years, until 1975, students had 75 minutes to answer 90 questions. In 1959 questions on data sufficiency were introduced to the mathematics section, and then replaced with quantitative comparisons in 1974. In 1974 both verbal and math sections were reduced from 75 minutes to 60 minutes each, with changes in test composition compensating for the decreased time.
1980 test and associated changes
The Educational Testing Service, which administers the SAT, introduced the "Strivers" score study as part of its research into making the test fairer for minorities and individuals facing social and economic barriers.
The original "Strivers" project, which was in the research phase from 1980 to 1994, awarded special "Striver" status to test-takers who scored 200 points higher than expected for their race, gender, and income level. The belief was that this would give minorities a better chance at being accepted into a college of higher standard, e.g. an Ivy League school. In 1992, the Strivers Project was leaked to the public; as a result, it was terminated in 1993. After federal courts heard arguments from the ACLU, the NAACP, and the Educational Testing Service, the courts ordered the study to alter its data collection process, stating that only age, race, and zip code could be used to determine a test-taker's eligibility for "Strivers" points.
These changes were introduced to the SAT effective in 1994.
1994 changes

In 1994 the verbal section received a dramatic change in focus. Among these changes were the removal of antonym questions, and an increased focus on passage reading. The mathematics section also saw a dramatic change in 1994, thanks in part to pressure from the National Council of Teachers of Mathematics. For the first time since 1935, the SAT asked some non-multiple choice questions, instead requiring students to supply the answers. 1994 also saw the introduction of calculators into the mathematics section for the first time in the test's history. The mathematics section introduced concepts of probability, slope, elementary statistics, counting problems, median and mode.
The average score on the 1994 modification of the SAT I was usually around 1000 (500 on the verbal, 500 on the math). The most selective schools in the United States (for example, those in the Ivy League) typically had SAT averages exceeding 1400 on the old test.
1995 re-centering (raising mean score back to 500)
The test scoring was initially scaled to make 500 the mean score on each section with a standard deviation of 100. As the test grew more popular and more students from less rigorous schools began taking the test, the average dropped to about 428 Verbal and 478 Math. The SAT was "recentered" in 1995, and the average "new" score became again close to 500. Scores awarded after 1994 and before October 2001 are officially reported with an "R" (e.g. 1260R) to reflect this change. Old scores may be recentered to compare to 1995 to present scores by using official College Board tables, which in the middle ranges add about 70 points to Verbal and 20 or 30 points to Math. In other words, current students have a 100 (70 plus 30) point advantage over their parents.
1995 re-centering controversy
Certain educational organizations viewed the SAT re-centering initiative as an attempt to stave off international embarrassment in regard to continuously declining test scores, even among top students. As evidence, it was presented that the number of pupils who scored above 600 on the verbal portion of the test had fallen from a peak of 112,530 in 1972 to 73,080 in 1993, a 35% decline, despite the fact that the total number of test-takers had risen by over 500,000.
2002 changes – Score Choice
In October 2002, the College Board dropped the Score Choice option for SAT-II exams. Under this option, scores were not released to colleges until the student saw and approved of them. The College Board has since decided to re-implement Score Choice in the spring of 2009. It is described as optional, and it is not clear whether score reports will indicate whether a student has opted in. A number of highly selective colleges and universities, including Yale, the University of Pennsylvania, and Stanford, have announced they will require applicants to submit all scores; Stanford, however, only prohibits Score Choice for the traditional SAT. Others, such as MIT and Harvard, have fully embraced Score Choice.
2005 changes

In 2005, the test was changed again, largely in response to criticism by the University of California system. Because of issues concerning ambiguous questions, especially analogies, certain types of questions were eliminated (the analogies from the verbal section and the quantitative comparisons from the math section). The test was made marginally harder, as a corrective to the rising number of perfect scores. A new writing section, with an essay, based on the former SAT II Writing Subject Test, was added, in part to increase the chances of closing the widening gap between the highest and midrange scores; another factor was the desire to test the writing ability of each student, hence the essay. The new SAT (known as the SAT Reasoning Test) was first offered on March 12, 2005, after the last administration of the "old" SAT in January 2005. The Mathematics section was expanded to cover three years of high school mathematics, and the Verbal section's name was changed to the Critical Reading section.
Scoring problems of October 2005 tests
In March 2006, it was announced that a small percentage of the SATs taken in October 2005 had been scored incorrectly because the test papers were moist and did not scan properly, and that some students had received erroneous scores. The College Board announced it would change the scores for the students who were given a lower score than they earned, but at this point many of those students had already applied to colleges using their original scores. The College Board decided not to change the scores for the students who were given a higher score than they earned. A lawsuit was filed in 2005 by about 4,400 students who received an incorrect low score on the SAT. The class-action suit was settled in August 2007 when the College Board and another company that administers the college-admissions test announced they would pay $2.85 million to over 4,000 students; under the agreement, each student could either elect to receive $275 or submit a claim for more money if he or she felt the damage was even greater. A similar scoring error occurred on a secondary school admission test in 2010–2011, when the Educational Records Bureau (ERB) announced after the admission process was over that an error had been made in the scoring of the tests of 2,010 (17%) of the students who had taken the Independent School Entrance Examination for admission to private secondary schools for 2011. Commenting on the effect of the error on students' school applications in the New York Times, David Clune, president of the ERB, stated: "It is a lesson we all learn at some point — that life isn't fair."
2008 changes

In late 2008, a new variable came into play. Previously, applicants to most colleges were required to submit all scores, with some colleges that embraced Score Choice retaining the option of allowing their applicants not to submit all scores. In 2008, however, an initiative to make Score Choice universal began, with some opposition from colleges desiring to maintain existing score-report practices. While students theoretically now have the choice to submit their best score (in theory one could send any score one wishes) to the college of their choice, some popular colleges and universities, such as Cornell, ask that students send all test scores. This has led the College Board to indicate on its website which colleges agree with or dislike Score Choice, with continued claims that students will never have scores submitted against their will. Regardless of whether a given college permits applicants to exercise Score Choice options, most colleges do not penalize students who report poor scores along with high ones; many universities, such as Columbia and Cornell, expressly promise to overlook scores that may be undesirable to the student and to focus on the scores that are most representative of the student's achievement and academic potential. The College Board maintains a list of colleges and their respective Score Choice policies that is current as of November 2011.
Beginning in 2012, test takers have been required to submit a current, recognizable photo during registration. Students must present their photo admission ticket, or another acceptable form of photo ID, for admittance to their designated test center. Scores and registration information, including the photo provided, are made available to the student's high school. In the event of an investigation into the validity of a student's test scores, the photo may be made available to institutions to which the student has sent scores. Before being granted access to a student's photo, a college must first certify that the student has been admitted.
The College Board has announced changes to the SAT to take effect in 2016: the test will revert to the 1600-point scale and will no longer deduct points for incorrect answers.
The College Board made these changes to "focus more on skills essential for college", for example by testing fewer rare vocabulary words and more of the words actually encountered in college courses. However, many educators argue that the changes will have less effect than anticipated.
Name changes and recentered scores
The name originally stood for "Scholastic Aptitude Test", but in 1990, because of uncertainty about the SAT's ability to function as an intelligence test, the name was changed to Scholastic Assessment Test. In 1993 it was renamed SAT I: Reasoning Test (with the letters no longer standing for anything) to distinguish it from the SAT II: Subject Tests. In 2004, the Roman numerals were dropped from both tests, and the SAT I was renamed the SAT Reasoning Test. The scoring categories are now Critical Reading (comparable to parts of the Verbal portion of the old SAT I), Mathematics, and Writing. The Writing section includes an essay, whose score contributes to the overall Writing score, as well as grammar sections (also comparable to parts of the Verbal portion of the previous SAT).
The test was initially scaled to make 500 the mean score on each section, with a standard deviation of 100. The SAT was "recentered" in 1995, bringing the average score on each section back to about 500. Scores awarded after 1994 and before October 2001 are officially reported with an "R" (e.g., 1260R) to reflect this change. Old scores can be converted to the recentered scale using official College Board tables, which in the middle ranges add about 70 points to Verbal and 20 to 30 points to Math.
Math-verbal achievement gap
In 2002, Richard Rothstein (education scholar and columnist) wrote in The New York Times that U.S. math averages on the SAT and ACT had continued their decade-long rise over national verbal averages on the tests.
Reuse of old SAT exams
The College Board has been accused of reusing old SAT exams previously given in the United States. In 2007, there was a security breach in South Korea when the SAT administered internationally was identical to one given in the United States in 2005.
Correlations with IQ
Frey and Detterman (2003) investigated associations between SAT scores and intelligence test scores. Using an estimate of general mental ability, or g, based on the ASVAB test battery, which is best thought of as representing crystallized intelligence (learned abilities), they found SAT scores to be highly correlated with g (r = .82 in their sample, .857 when adjusted for non-linearity). However, several of the subtests that contributed to the construction of the ASVAB correlated even more highly with this learned-abilities g: Word Knowledge (.885), General Science (.881), Arithmetic Reasoning (.858), Electronics Info (.829), and Paragraph Comprehension (.825). Additionally, they found that the correlation of SAT results with scores on the Raven's Advanced Progressive Matrices, a test of fluid intelligence (reasoning), was .483, a moderate correlation at the low end. However, this correlation rose to .72 after correction for range restriction, and the authors noted an apparent ceiling effect on the Raven's scores, which may also have suppressed the correlation. Beaujean and colleagues (2006) reached similar conclusions.
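The jump from .483 to .72 reflects a standard statistical adjustment: because SAT takers are a more homogeneous group than the general population, the observed correlation understates the population correlation. A minimal sketch of the usual range-restriction (Thorndike Case 2) formula follows; the standard-deviation ratio used here is a hypothetical value chosen to reproduce the reported figure, not one taken from the study.

```python
import math

def correct_for_range_restriction(r, sd_ratio):
    """Thorndike Case 2 correction: estimate the population correlation
    from a restricted-sample correlation r, given the ratio of the
    unrestricted to the restricted standard deviation of the predictor."""
    u = sd_ratio
    return (r * u) / math.sqrt(1 + r**2 * (u**2 - 1))

# Observed SAT-Raven's correlation in the restricted sample:
r_observed = 0.483
# Hypothetical SD ratio (~1.88) chosen so the corrected value matches
# the reported .72; the study's actual ratio is not given in the text.
print(round(correct_for_range_restriction(r_observed, 1.88), 2))  # ≈ 0.72
```

Note that when the sample is as variable as the population (ratio of 1), the formula leaves the correlation unchanged, as expected.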
For decades, many critics have accused the designers of the verbal SAT of cultural bias toward the white and wealthy. The National Center for Education Statistics studied the accomplishments of high school students of high, medium, and low socioeconomic status; 32% of students with high socioeconomic status earned a score of 1100 on the SAT, while only 9% of students with low socioeconomic status did. A famous (and long-retired) example of this bias in the SAT I was the oarsman–regatta analogy question. The object of the question was to find the pair of terms whose relationship was most similar to the relationship between "runner" and "marathon"; the correct answer was "oarsman" and "regatta". Choosing the correct answer was thought to presuppose familiarity with rowing, a sport popular with the wealthy. However, according to Herrnstein and Murray, the black–white gap is smaller on culture-loaded questions like this one than on questions that appear to be culturally neutral. Analogy questions have since been replaced by short reading passages.
One college that has made the SAT optional is Drew University in New Jersey. After it adopted the policy, applications increased by 20%. Dean of Admissions Mary Beth Carey says that "Our own research showed us that high school grade point average is by far the most important predictor of success in college." The college reported that, as a result of the policy, it accepted its most diverse class ever.
Anyone involved in education should be concerned about how overemphasis on the SAT is distorting educational priorities and practices, how the test is perceived by many as unfair, and how it can have a devastating impact on the self-esteem and aspirations of young students. There is widespread agreement that overemphasis on the SAT harms American education.
In response to threats by the University of California to drop the SAT as an admission requirement, the College Entrance Examination Board announced the restructuring of the SAT, to take effect in March 2005, as detailed above.
In the 1960s and 1970s there was a movement to drop achievement tests. The countries, states, and provinces that dropped them later agreed that academic standards had fallen and that students had studied less and taken their studying less seriously. They reintroduced the tests after studies and research concluded that high-stakes tests produced benefits that outweighed the costs.
In 2005, MIT Writing Director Les Perelman plotted essay length against essay score on the new SAT using released essays and found a high correlation between them. After studying more than 50 graded essays, he found that longer essays consistently received higher scores; he argued that by simply gauging an essay's length, without reading it, its score could be predicted correctly more than 90% of the time. He also discovered that several of these essays were full of factual errors; the College Board does not claim to grade for factual accuracy.
Perelman, along with the National Council of Teachers of English, also criticized the 25-minute writing section for damaging standards of writing instruction in the classroom. They say that writing teachers who train their students for the SAT will not focus on revision, depth, or accuracy, but will instead produce long, formulaic, and wordy pieces. "You're getting teachers to train students to be bad writers," concluded Perelman.
Use by high-IQ societies
Certain high-IQ societies, such as Mensa, the Prometheus Society, and the Triple Nine Society, accept scores from certain years as one of their admission criteria. For instance, the Triple Nine Society accepts scores of at least 1450 on tests taken before April 1995, and of at least 1520 on tests taken between April 1995 and February 2005.
The SAT is sometimes given to students younger than 13 by organizations such as the Study of Mathematically Precocious Youth, who use the results to select, study and mentor students of exceptional ability.
See also
- ACT (test), a college entrance exam, competitor to the SAT
- College admissions in the United States
- List of admissions tests
- SAT calculator program
- SAT Subject Tests
References
- "2013 College-Bound Seniors Total Group Profile Report" (PDF). College Board. Retrieved March 22, 2014.
- "About the College Board". College Board. Retrieved May 29, 2007.
- "SAT Fees: 2010–11 Fees". College Board. Retrieved September 5, 2010.
- Jed Applerouth (September 18, 2013). "Preparing Students for a New Era of Admission Testing". Independent Educational Consultants Association. Retrieved February 15, 2014.
- Valerie Strauss (September 14, 2009). "The Answer Sheet: What Does the SAT Test?". The Washington Post. Retrieved February 15, 2014.
- O'Shaughnessy, Lynn (26 July 2009). "The Other Side of 'Test Optional'". The New York Times. p. 6. Retrieved 22 June 2011.
- "Official SAT Reasoning Test page". College Board. Retrieved June 2007.
- 01-249.RD.ResNoteRN-10 collegeboard.com
- Korbin, L. (2006). SAT Program Handbook. A Comprehensive Guide to the SAT Program for School Counselors and Admissions Officers, 1, 33+. Retrieved January 24, 2006, from College Board Preparation Database.
- "College Admissions – SAT & SAT Subject Tests". College Board. Retrieved November 2009.
- "SAT FAQ: Frequently Asked Questions". College Board. Retrieved May 29, 2007.
- collegeboard.org; Calculator Use and the SAT
- Winerip, Michael (May 5, 2005). "SAT Essay Test Rewards Length and Ignores Errors". New York Times. Retrieved 2008-03-06.
- Jaschik, Scott (March 26, 2007). "Fooling the College Board". Inside Higher Education. Retrieved 2010-07-17.
- "Collegeboard Test Tips". Collegeboard. Retrieved September 9, 2008.
- 2009 Worldwide Exam Preparation & Tutoring Industry Report – Market Research Reports – Research and Markets
- SAT Prep - Are SAT Prep Courses Worth the Cost?
- Jeff Grabmeier (August 7, 2006). "SAT TEST PREP TOOLS GIVE ADVANTAGE TO STUDENTS FROM WEALTHIER FAMILIES". Ohio State University. Retrieved February 23, 2014.
- Paul Montgomery and Jane Lilly (July 6, 2011). "Coaching has a ‘Significant Result’ on SAT Scores Says Oxford Study". University of Oxford. Retrieved February 23, 2014.
- "PrepScholar Results". PrepScholar. Retrieved February 23, 2014.
- Scott Jaschik (May 20, 2009). "Test Prep, to What End?". Inside Higher Ed. Retrieved February 23, 2014.
- My SAT: Help
- "SAT Percentile Ranks for Males, Females, and Total Group:2006 College-Bound Seniors—Critical Reading + Mathematics" (PDF). College Board. Retrieved May 29, 2007.
- "SAT Percentile Ranks for Males, Females, and Total Group:2006 College-Bound Seniors—Critical Reading + Mathematics + Writing" (PDF). College Board. Retrieved May 29, 2007.
- Membership Committee (1999). 1998/99 Membership Committee Report. Prometheus Society. Retrieved 2013-06-19.
- "2010 SAT Trends". The College Board. 2010.
- "frontline: secrets of the sat: where did the test come from?: the 1901 college board". Secrets of the SAT. Frontline. Retrieved 2007-10-20.
- Lawrence, Ida; Rigol, Gretchen W.; Van Essen, Thomas; Jackson, Carol A. (2002). "Research Report No. 2002-7: A Historical Perspective on the SAT: 1926–2001" (PDF). College Entrance Examination Board. Retrieved 2007-10-20.
- "frontline: secrets of the sat: where did the test come from?: the 1926 sat". Secrets of the SAT. Frontline. Retrieved 2007-10-20.
- Intelligence. MSN Encarta. Retrieved 2008-03-02.
- SAT I Individual Score Equivalents
- The Center for Education Reform (1996-08-22). "SAT Increase--The Real Story, Part II".
- Schoenfeld, Jane. College board drops 'score choice' for SAT-II exams. St. Louis Business Journal, May 24, 2002.
- "Freshman Requirements & Process: Testing". stanford.edu. Stanford University Office of Undergraduate Admissions. Retrieved 13 August 2011.
- College Board To Alter SAT I for 2005–06 – Daily Nexus
- "Chapter 12: Improving Paragraphs". The Official SAT Study Guide (Second ed.). The College Board. 2009. p. 169. ISBN 978-0-87447-852-5
- Hoover, Eric (2007-08-24). "$2.85-Million Settlement Proposed in Lawsuit Over SAT-Scoring Errors". The Chronicle of Higher Education. Archived from the original on 2007-09-30. Retrieved 2007-08-27.
- Maslin Nir, Sarah (April 8, 2011). "7,000 Private School Applicants Got Incorrect Scores, Company Says". New York Times.
- "Cornell Rejects SAT Score Choice Option". The Cornell Daily Sun. Retrieved 2008-02-13.
- "Universities Requesting All Scores" (PDF). Retrieved 2009-06-22.
- Test Security and Fairness
- "SAT FAQ". The College Board. Retrieved 2008-09-13.
- Rothstein, Richard (August 28, 2002). "Better sums than at summerizing; The SAT gap". The New York Times.
- "Old SAT Exams Get Reused". Washington Post.
- Frey, M. C.; Detterman, D. K. (2003). "Scholastic Assessment or g? The Relationship Between the Scholastic Assessment Test and General Cognitive Ability". Psychological Science 15 (6): 373–378. doi:10.1111/j.0956-7976.2004.00687.x. PMID 15147489.
- Beaujean, A. A.; Firmin, M. W.; Knoop, A. J.; Michonski, J. D.; Berry, T. B.; Lowrie, R. E. (2006). "Validation of the Frey and Detterman (2004) IQ prediction equations using the Reynolds Intellectual Assessment Scales". Personality and Individual Differences 41: 353–357.
- Zwick, Rebecca (2004). Rethinking the SAT: The Future of Standardized Testing in University Admissions. New York: RoutledgeFalmer. pp. 203–204. ISBN 0-415-94835-5.
- Don't Believe the Hype, Chideya, 1995; The Bell Curve, Hernstein and Murray, 1994
- Herrnstein, Richard J.; Murray, Charles (1994). The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press. pp. 281–282. ISBN 0-02-914673-9.
- Gilroy, Marilyn (December 2007). "Colleges Making SAT Optional as Admissions Requirement". Education Digest 73 (4): 35–39. Retrieved 5 October 2013.
- Achievement Versus Aptitude Tests in College Admissions
- Phelps, Richard (2003). Kill the Messenger. New Brunswick, New Jersey: Transaction Publishers. p. 220. ISBN 0-7658-0178-7.
- Winerip, Michael (May 4, 2005). "SAT Essay Test Rewards Length and Ignores Errors". The New York Times.
- Harris, Lynn (May 17, 2005). "Testing, testing". Salon.com.
- Coyle, T. R. & Pillow, D. R. (2008). "SAT and ACT predict college GPA after removing g". Intelligence 36 (6): 719–729. doi:10.1016/j.intell.2008.05.001.
- Coyle, T.; Snyder, A.; Pillow, D.; Kochunov, P. (2011). "SAT predicts GPA better for high ability subjects: Implications for Spearman's Law of Diminishing Returns". Personality and Individual Differences 50 (4): 470–474. doi:10.1016/j.paid.2010.11.009.
- Gould, Stephen Jay (1996). The Mismeasure of Man (Rev/Expd ed.). W. W. Norton & Company. ISBN 0-393-31425-1.
- Hoffman, Banesh (1962). The Tyranny of Testing. Orig. pub. Collier. ISBN 0-486-43091-X. (and others)
- Hubin, David R. (1988). The Scholastic Aptitude Test: Its Development and Introduction, 1900–1948. Ph.D. dissertation in American History at the University of Oregon.
- Owen, David (1999). None of the Above: The Truth Behind the SATs (Revised ed.). Rowman & Littlefield. ISBN 0-8476-9507-7.
- Sacks, Peter (2001). Standardized Minds: The High Price of America's Testing Culture and What We Can Do to Change It. Perseus. ISBN 0-7382-0433-1.
- Zwick, Rebecca (2002). Fair Game? The Use of Standardized Admissions Tests in Higher Education. Falmer. ISBN 0-415-92560-6.
- Gladwell, Malcolm (December 17, 2001). "Examined Life: What Stanley H. Kaplan taught us about the S.A.T.". The New Yorker.
Wikibooks has a book on the topic of: SAT Study Guide