Graduate Record Examinations
The Graduate Record Examinations (GRE) is a standardized test that is an admissions requirement for many graduate schools in the United States, in other English-speaking countries, and for English-taught graduate and business programs worldwide. Created in 1949 by Educational Testing Service (ETS), which still administers it, the exam aims to measure verbal reasoning, quantitative reasoning, analytical writing, and critical thinking skills that have been acquired over a long period of time and that are not related to any specific field of study. The GRE General Test is offered as a computer-based exam administered through qualified testing centers.
In the graduate school admissions process, the level of emphasis that is placed upon GRE scores varies widely between schools and between departments within schools. The importance of a GRE score can range from being a mere admission formality to an important selection factor.
The GRE was significantly overhauled in August 2011. The revised exam is adaptive not on a question-by-question basis but by section, so that performance on the first verbal and quantitative sections determines the difficulty of the second sections presented. Overall, the test retained the sections and many of the question types of its predecessor, but the scoring scale was changed from a 200–800 scale to a 130–170 scale.
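The section-level adaptivity described above can be sketched in a few lines. This is an illustrative model only: the thresholds, tier names, and function are hypothetical, not ETS's actual routing rules.

```python
# Illustrative sketch of section-level adaptivity: the score on the first
# scored section of a measure selects the difficulty of the second section.
# Thresholds and tier names are hypothetical, not ETS's actual values.

def route_second_section(first_section_correct: int, total: int = 20) -> str:
    """Pick a difficulty tier for the second section of a measure."""
    ratio = first_section_correct / total
    if ratio >= 0.75:
        return "hard"
    if ratio >= 0.40:
        return "medium"
    return "easy"

print(route_second_section(17, 20))  # strong first section -> "hard"
print(route_second_section(6, 20))   # weak first section -> "easy"
```

Under a scheme like this, a single weak section does not fix the final score; it only selects the pool from which the second section is drawn, and the score is computed from performance on both.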
The cost to take the test varies between US$130 and $210, depending on the country in which it is taken, although ETS will reduce the fee under certain circumstances. ETS also offers financial assistance to GRE applicants who can demonstrate economic hardship. ETS does not release scores that are older than five years, and graduate program policies on the acceptance of older scores vary.
The computer-based GRE General Test consists of six sections. The first section is always the analytical writing section involving separately timed issue and argument tasks. The next five sections consist of two verbal reasoning sections, two quantitative reasoning sections, and either an experimental or research section. These five sections may occur in any order. The experimental section does not count towards the final score but is not distinguished from the scored sections. Unlike on the computer adaptive test prior to August 2011, the examinee is free to skip back and forth within sections. The entire testing procedure lasts about 3 hours 45 minutes. One-minute breaks are offered after each section and a 10-minute break after the third section.
The paper-based GRE General Test consists of six sections and is only available in areas where computer-based testing is unavailable. The analytical writing is split up into two sections, one section for each issue and argument task. The next four sections consist of two verbal and two quantitative sections in varying order. There is no experimental section on the paper-based test.
Verbal section 
The computer-based verbal sections assess reading comprehension, critical reasoning, and vocabulary usage. The verbal test is scored on a scale of 130–170, in 1-point increments (before August 2011 the scale was 200–800, in 10-point increments). In a typical examination, each verbal section consists of 20 questions to be completed in 30 minutes: about 6 text completion, 4 sentence equivalence, and 10 critical reading questions. The changes in 2011 included a reduced emphasis on rote vocabulary knowledge and the elimination of antonyms and analogies. Text completion items replaced sentence completions, and new reading question types allowing the selection of multiple answers were added.
Quantitative section 
The computer-based quantitative sections assess basic high-school-level mathematical knowledge and reasoning skills. The quantitative test is scored on a scale of 130–170, in 1-point increments (before August 2011 the scale was 200–800, in 10-point increments). In a typical examination, each quantitative section consists of 20 questions to be completed in 35 minutes: about 8 quantitative comparisons, 9 problem-solving items, and 3 data interpretation questions. The changes in 2011 included the addition of numeric entry items, which require the examinee to fill in a blank, and multiple-choice items requiring the examinee to select multiple correct responses.
Analytical writing section 
The analytical writing section consists of two different essays, an "issue task" and an "argument task". The writing section is graded on a scale of 0–6, in half-point increments. The essays are written on a computer using a word processing program specifically designed by ETS. The program allows only basic computer functions and does not contain a spell-checker or other advanced features. Each essay is scored by at least two readers on a six-point holistic scale. If the two scores are within one point of each other, their average is taken; if they differ by more than a point, a third reader examines the response.
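The two-reader rule described above can be expressed as a short sketch. The function name and the adjudication hook are illustrative; how ETS actually combines a third reading with the first two is not specified here.

```python
# Sketch of the holistic essay-scoring rule: two readers rate on a 0-6
# scale; close scores are averaged, while larger gaps go to a third
# reader. Names and the adjudication step are illustrative, not ETS's.

def essay_score(reader1, reader2, third_reading=None):
    """Resolve an essay score from two holistic ratings on the 0-6 scale."""
    if abs(reader1 - reader2) <= 1:
        return (reader1 + reader2) / 2
    # Scores differ by more than one point: a third reading adjudicates.
    if third_reading is None:
        raise ValueError("third reading required to resolve the score")
    return third_reading

print(essay_score(4.0, 5.0))       # within one point -> average, 4.5
print(essay_score(3.0, 5.0, 4.0))  # two-point gap -> third reading, 4.0
```

The averaging of two half-point-apart or one-point-apart ratings is what produces final scores in half-point increments on the 0–6 scale.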
Issue task 
The test taker is presented with a statement on an issue of general interest and asked to write an essay developing and supporting a position on it with reasons and examples. The time allotted for this essay is 30 minutes. Issues are selected from a pool of topics.
Argument task 
The test taker will be given an argument (i.e. a series of facts and considerations leading to a conclusion) and will be asked to write an essay that critiques the argument. Test takers are asked to consider the argument's logic and to make suggestions about how to improve the logic of the argument. Test takers are expected to address the logical flaws of the argument, not to provide a personal opinion on the subject. The time allotted for this essay is 30 minutes. Arguments are selected from a pool of topics.
Experimental section 
The experimental section, which can be either a verbal, quantitative, or analytical writing task, contains new questions ETS is considering for future use. Although the experimental section does not count towards the test-taker's score, it is unidentified and appears identical to the scored sections. Because test takers have no definite way of knowing which section is experimental, it is typically advised that test takers try their best on every section. Sometimes an identified research section at the end of the test is given instead of the experimental section. There is no experimental section on the paper-based GRE.
An examinee can miss one or more questions on a multiple-choice section and still receive a perfect score of 170. Conversely, 130 is the lowest possible score, even if no question is answered correctly.
Scaled score percentiles 
The percentiles for the current revised General Test, and their concordance with the prior format, are as follows. Means and standard deviations for the measures on the new score scale are not yet available:
[Table: scaled scores (130–170) with corresponding Verbal and Quantitative Reasoning percentiles and prior-scale (200–800) equivalents — table data omitted]
[Table: Analytical Writing scores with percentage of test takers scoring below — table data omitted]
Comparisons for "Intended Graduate Major" are "limited to those who earned their college degrees up to two years prior to the test date." ETS provides no score data for "non-traditional" students who have been out of school more than two years, although its own report "RR-99-16" indicated that 22% of all test takers in 1996 were over the age of 30.
Use in admissions 
Many graduate schools in English-speaking countries (especially in the United States) require GRE results as part of the admissions process. The GRE is a standardized test intended to measure the abilities of all graduates in tasks of a general academic nature, regardless of their fields of specialization. The GRE is supposed to measure the extent to which undergraduate education has developed an individual's verbal and quantitative skills in abstract thinking.
Unlike other standardized admissions tests (such as the SAT, LSAT, and MCAT), the use and weight of GRE scores vary considerably not only from school to school, but from department to department and from program to program. Programs in the liberal arts may consider only the applicant's verbal score, while mathematics and science programs may focus on quantitative ability; however, since most applicants to mathematics, science, or engineering graduate programs have high quantitative scores, the verbal score can become a deciding factor even in these programs. Admission to graduate school depends on a complex mix of factors: letters of recommendation, statement of purpose, GPA, and GRE scores, among others. Some schools use the GRE in admissions decisions but not in funding decisions; others use it to select scholarship and fellowship candidates but not for admissions. In some cases, the GRE may be a general requirement for graduate admissions imposed by the university, while particular departments may not consider the scores at all. Graduate schools typically provide information about how the GRE is considered in admissions and funding decisions, along with the average scores of previously admitted students. The best way to find out how a particular school or program evaluates GRE scores in the admissions process is to contact the person in charge of graduate admissions for that specific program (not the graduate school in general).
GRE Subject Tests 
In addition to the General Test, there are eight GRE Subject Tests, covering Biochemistry, Cell and Molecular Biology; Biology; Chemistry; Computer Science; Literature in English; Mathematics; Physics; and Psychology. Each exam lasts 170 minutes.
In the past, subject tests were also offered in the areas of Economics, Revised Education, Engineering, Geology, History, Music, Political Science, and Sociology. In April 1998, the Revised Education and Political Science exams were discontinued. In April 2000, the History and Sociology exams were discontinued, and the other four were discontinued in April 2001. The Computer Science exam is being discontinued after April 20, 2013.
GRE and GMAT 
The GMAT (Graduate Management Admission Test) is a computer-adaptive standardized test in mathematics and the English language that measures aptitude to succeed academically in graduate business studies. Business schools commonly use the test as one of many selection criteria for admission into an MBA program. Starting in 2009, many business schools began accepting the GRE in lieu of a GMAT score. Policies varied widely for several years, but as of the 2012–2013 admissions season, all business schools accept both tests equally. Either a GMAT score or a GRE score can be submitted with an application to an MBA program. Business schools also accept either score for their other (non-MBA) master's and PhD programs.
The primary issue on which business school test acceptance policies vary is in how old a GRE or GMAT score can be before it is no longer accepted. The standard is that scores cannot be more than 5 years old (e.g., Wharton, MIT Sloan, Columbia Business School).
Test preparation 
A variety of resources are available for those wishing to prepare for the GRE. ETS provides preparation software called PowerPrep, which contains two practice tests of retired questions, as well as further practice questions and review material. Because the software replicates both the format and the questions of the actual test, it can be useful for predicting actual GRE scores. ETS does not license its past questions to any other company, making it the only source of official retired material. ETS formerly published the "BIG BOOK", which contained a number of actual GRE questions, but this publication has been discontinued. Several companies provide courses, books, and other unofficial preparation materials.
Some students taking the GRE use a test preparation company. Students who do not use these courses often rely on material from university text books, GRE preparation books, sample tests, and free web resources.
Testing locations 
While the general and subject tests are held at many undergraduate institutions, the computer-based general test is only held at test centers with appropriate technological accommodations. Students in major cities in the United States, or those attending large U.S. universities, will usually find a nearby test center, while those in more isolated areas may have to travel a few hours to an urban or university location. Many industrialized countries also have test centers, but at times test-takers must cross country borders.
An analysis of the GRE's validity in predicting graduate school success found a correlation of .30 to .45 between the GRE and both first year and overall graduate GPA. The correlation between GRE score and graduate school completion rates ranged from .11 (for the now defunct analytical section) to .39 (for the GRE subject test). Correlations with faculty ratings ranged from .35 to .50.
Critics have claimed that the computer-adaptive methodology may discourage some test takers, because question difficulty changes with performance (see "How does the computer-based GRE revised General Test work?" under Frequently Asked Questions About the GRE revised General Test). For example, if a test taker is presented with remarkably easy questions halfway into the exam, they may infer that they are performing poorly, and that perception may hurt their performance for the rest of the exam, even though perceived difficulty is subjective. By contrast, standard testing methods may discourage students by giving them more difficult items earlier on.
Critics have also stated that the computer-adaptive method of placing more weight on the first several questions is biased against test takers who typically perform poorly at the beginning of a test due to stress or confusion before becoming more comfortable as the exam continues. On the other hand, standard fixed-form tests could equally be said to be "biased" against students with less testing stamina since they would need to be approximately twice the length of an equivalent computer adaptive test to obtain a similar level of precision.
The GRE has also been subjected to the same racial bias criticisms that have been lodged against other admissions tests. In 1998, the Journal of Blacks in Higher Education noted that the mean score for black test takers in 1996 was 389 on the verbal section, 409 on the quantitative section, and 423 on the analytic, while white test takers averaged 496, 538, and 564, respectively. The National Association of Test Directors Symposia in 2004 argued that simple mean score differences may not constitute evidence of bias unless the populations are known to be equal in ability. A more effective, accepted, and empirical approach is the analysis of differential test functioning, which examines the differences in item response theory curves for subgroups; the best approach for this is the DFIT framework.
Weak predictor of graduate school performance 
The GRE has been criticized for not being a true measure of whether a student will succeed in graduate school. Robert Sternberg (now of Oklahoma State University–Stillwater; at Yale University at the time of the study), a long-time critic of modern intelligence testing in general, found that the GRE General Test was weakly predictive of success in graduate studies in psychology. The strongest relationship was found for the now-defunct analytical portion of the exam.
ETS published a report ("What is the Value of the GRE?") that defends the predictive value of the GRE on a student's index of success at the graduate level. The problem with earlier studies is the statistical phenomenon of restriction of range: a correlation coefficient is sensitive to the range sampled for the test. Specifically, if only students accepted to graduate programs are studied (as in Sternberg & Williams and other research), the relationship is attenuated. Validity coefficients range from .30 to .45 between the GRE and both first-year and overall graduate GPA in ETS's study.
Kaplan and Saccuzzo state that the criterion the GRE best predicts is first-year grades in graduate school, but even this correlation is only in the high teens to low twenties. "If the test correlates with a criterion at the .4 level, then it accounts for 16% of the variability in that criterion, with the other 84% resulting from unknown factors and errors" (p. 303). Graduate schools may be placing too much importance on standardized tests rather than on factors that more fully account for graduate school success, such as prior research experience, GPA, or work experience. While graduate schools do consider these areas, many will not consider applicants who score below a current score of roughly 314 (1301 on the prior scale). Kaplan and Saccuzzo also state that "the GRE predict[s] neither clinical skill nor even the ability to solve real-world problems" (p. 303).
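The "16% of the variability" figure quoted above is simply the square of the correlation coefficient (the coefficient of determination), as a quick check shows:

```python
# r^2 (the coefficient of determination) gives the proportion of criterion
# variance a predictor explains; the remainder is unknown factors and error.

def variance_explained(r: float) -> float:
    return r ** 2

r = 0.4
print(f"explained: {variance_explained(r):.0%}")        # 16%
print(f"unexplained: {1 - variance_explained(r):.0%}")  # 84%
```

The same arithmetic explains why a correlation "in the high teens to low twenties" is so weak: a correlation of .20 accounts for only 4% of the variance in first-year grades.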
Historical susceptibility to cheating 
In May 1994, Kaplan, Inc warned ETS, in hearings before a New York legislative committee, that the small question pool available to the computer-adaptive test made it vulnerable to cheating. ETS assured investigators that it was using multiple sets of questions and that the test was secure. This was later discovered to be incorrect.
In December 1994, prompted by student reports of recycled questions, Jose Ferreira, then director of GRE programs for Kaplan, Inc. (and later CEO of Knewton), led a team of 22 staff members deployed to 9 U.S. cities to take the exam. Kaplan, Inc. then presented ETS with 150 questions, representing 70–80% of the GRE. According to early news releases, ETS appeared grateful to Stanley H. Kaplan, Inc. for identifying the security problem. However, on December 31, ETS sued Kaplan, Inc. for violation of a federal electronic communications privacy act, copyright laws, breach of contract, fraud, and a confidentiality agreement signed by test takers on test day. On January 2, 1995, an agreement was reached out of court.
Additionally, in 1994, the scoring algorithm for the computer-adaptive form of the GRE was discovered to be insecure. ETS acknowledged that Kaplan, Inc. employees, led by Jose Ferreira, had reverse-engineered key features of the GRE scoring algorithm. The researchers found that a test taker's performance on the first few questions of the exam had a disproportionate effect on the final score. To preserve the integrity of scores, ETS revised its scoring to use a more sophisticated algorithm.
2011 Revision of the GRE 
In 2006, ETS announced plans to enact significant changes in the format of the GRE. Planned changes included a longer testing time, a departure from computer-adaptive testing, a new grading scale, and an enhanced focus on reasoning skills and critical thinking for both the quantitative and verbal sections.
On April 2, 2007, ETS announced the decision to cancel plans for revising the GRE, citing concerns over the ability to provide clear and equal access to the new test after the planned changes. ETS stated, however, that it planned "to implement many of the planned test content improvements in the future", although specific details regarding those changes had not yet been announced.
Changes to the GRE took effect on November 1, 2007, as ETS started to include new types of questions in the exam. The changes mostly centered on "fill in the blank" answers for the mathematics section that require the test taker to enter the answer directly, without being able to choose from a multiple-choice list. ETS planned to introduce two of these new question types in each quantitative section, while the majority of questions would be presented in the regular format.
Since January 2008, Reading Comprehension passages within the verbal sections have been reformatted: passages' "line numbers [are] replaced with highlighting when necessary in order to focus the test taker on specific information in the passage" and to "help students more easily find the pertinent information in reading passages."
In December 2009, ETS announced plans to move forward with significant revisions to the GRE in 2011. Changes included a new 130–170 scoring scale, the elimination of certain question types such as antonyms and analogies, the addition of an on-screen calculator, and the replacement of the question-by-question computer-adaptive format with section-by-section adaptation. The revised GRE General Test replaced the previous version on August 1, 2011. ETS describes the revised test as better by design and as providing a better test-taking experience, with new question types intended to test the skills needed in graduate and business school programs. In July 2012, ETS introduced ScoreSelect, an option that allows test takers to choose which scores to send to schools.
GRE prior to October 2002 
The earliest versions of the GRE tested only for verbal and quantitative ability. For a number of years prior to October 2002, the GRE had a separate Analytical Ability section which tested candidates on logical and analytical reasoning abilities. This section was replaced by the Analytical Writing Assessment.
See also 
- List of admissions tests
- Business School
- Graduate school
- Law School
- Medical School
- ACT (test)
- Master's degree
- Doctorate
- First professional degree
- Professional degree
- Terminal degree
- GRE Registration and Information Bulletin
- Alternative Admissions and Scholarship Selection Measures in Higher Education.
- MBA Channel: "GRE:Wharton joins the club" 31 July 2009
- GRE Test Content
- Weiner-Green, Sharon; Wolf, Ira K (2009), Barron's How to Prepare for the GRE (17 ed.), Barron's Educational Series, p. 9, ISBN 0-7641-7471-1
- GRE Revised Analytical Writing
- The Pool of Issue Topics
- The Pool of Argument Topics
- GRE Test Content
- Guide to the Use of Scores 2011–2012, http://www.ets.org/s/gre/pdf/gre_guide.pdf
- "GRE: Computer Science Test". Retrieved January 14, 2013.
- "Application Requirements: The Wharton MBA Program" 9 May 2013
- "MIT Sloan Application Instructions" 9 May 2013
- "Columbia Business School MBA Program Application Requirements" 9 May 2013
- Kuncel, N. R.; Hezlett, S. A.; Ones, D. S. (2001). "A comprehensive meta-analysis of the predictive validity of the Graduate Record Examinations: Implications for graduate student selection and performance". Psychological Bulletin 127 (1): 162–181.
- "Testing service cancels February GRE".
- Weiss, D. J.; Kingsbury, G. G. (1984). "Application of computerized adaptive testing to educational problems". Journal of Educational Measurement 21 (4): 361–375. doi:10.1111/j.1745-3984.1984.tb01040.x.
- "Estimating the Effect a Ban on Racial Preferences Would Have on African- American Admissions to the Nation's Leading Graduate Schools". The Journal of Blacks in Higher Education 19: 80–82. 1998. JSTOR 2998926.
- The Achievement Gap: Test Bias or School Structures? National Association of Test Directors 2004 Symposia 
- Oshima, T. C.; Morris, S. B. (2008). "Raju's Differential Functioning of Items and Tests (DFIT)". Educational Measurement: Issues and Practice 27 (3): 43–50.
- Sternberg, R. J.; Williams, W. M. (1997). "Does the Graduate Record Examinations predict meaningful success in the graduate training of psychology? A case study". American Psychologist 52: 630–641.
- Kaplan, R. M. & Saccuzzo, D. P. (2009). Psychological testing: Principles, applications, and issues. Belmont, CA: Wadsworth
- Frantz, Douglas; Nordheimer, Jon (September 28, 1997). "Giant of Exam Business Keeps Quiet on Cheating". The New York Times. Retrieved April 2, 2010.
- "Computer Admissions Test Found to Be Ripe for Abuse". The New York Times. December 16, 1994. Retrieved April 2, 2010.
- Boxall, Bettina (January 1, 1995). "Educational Testing Service Sues Exam-Coaching Firm". Los Angeles Times. Retrieved May 4, 2010.
- Comparison Chart of GRE Changes
- Plans for the Revised GRE Cancelled
- GRE General Test to Include New Question Types in November
- Revisions to the Computer-based GRE General Test in 2008 at the Wayback Machine (archived August 22, 2008)
- Revised General Test
- A New Look for Graduate Entrance Test
- Revised GRE FAQs
- "GRE Score Select".