College and university rankings

From Wikipedia, the free encyclopedia

In higher education, college and university rankings are listings of educational institutions ordered by some combination of factors. Rankings can be based on subjectively perceived "quality," on some combination of empirical statistics, or on surveys of educators, scholars, students, prospective students, or others. Such rankings are often consulted by prospective students as they decide which schools to apply to or attend. Rankings are compiled for both undergraduate and graduate programs; this article deals primarily with rankings of undergraduate programs. For details on the ranking of law programs, see Law School Rankings.

Rankings vary significantly from country to country. A Cornell University study found that the rankings in the United States significantly affected colleges' applications and admissions. In the United Kingdom, several newspapers publish league tables which rank universities.

International rankings

Several organizations provide worldwide rankings, including:

The Times Higher Education Supplement, a British publication, annually publishes the Times Higher World University Rankings[1], a list of 200 ranked universities from around the world.

The Webometrics ranking of universities is based entirely on each university's web presence (a computerised assessment of the size and sophistication of its website). As such it is unlikely to reflect academic performance directly, but it does capture universities' internet-based activities in a way that is free of national or language bias.

The much-publicized Shanghai Jiao Tong University ranking project[2] is a large-scale Chinese project to provide independent rankings of universities around the world on behalf of the Chinese government.

There are also rankings based on the number of Nobel Prize winners affiliated with each university[3].

Regional and national rankings

The U.S. News & World Report rankings of US universities

The best-known American college and university rankings have been compiled since 1983 by the magazine U.S. News & World Report, based on a combination of statistics provided by institutional researchers and surveys of university faculty and staff members. The college rankings were not published in 1984, but have been published every year since. The precise methodology used by the U.S. News rankings has changed many times, and the data are not all available to the public, so peer review of the rankings is limited. (A private 1997 review by the National Opinion Research Center, commissioned by U.S. News itself, was later published by the Washington Monthly; it contained several serious criticisms of the rankings' methodology.)

The U.S. News rankings, unlike some other such lists, create a strict hierarchy of colleges and universities in their "top tier," rather than ranking only groups or "tiers" of schools; the individual schools' order changes significantly every year the rankings are published. The most important factors in the rankings are:

  • Peer assessment: a survey of the institution's reputation among presidents, provosts, and deans of admission of other institutions
  • Retention: six-year graduation rate and first-year student retention rate
  • Student selectivity: standardized test scores of admitted students, proportion of admitted students in upper percentiles of their high-school class, and proportion of applicants accepted
  • Faculty resources: average class size, faculty salary, faculty degree level, student-faculty ratio, and proportion of full-time faculty
  • Financial resources: per-student spending
  • Graduation rate performance: difference between expected and actual graduation rate
  • Alumni giving rate

All these factors are combined according to statistical weights determined by U.S. News. The weighting is often changed by U.S. News from year to year, and is not empirically determined (the NORC methodology review said that these weights "lack any defensible empirical or theoretical basis"). The first four such factors account for the great majority of the U.S. News ranking (80%, according to U.S. News's 2005 methodology), and the "reputational measure" (which surveys high-level administrators at similar institutions about their perceived quality ranking of each college and university) is especially important to the final ranking (accounting by itself for 25% of the ranking according to the 2005 methodology).
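
The arithmetic of such a weighted combination can be illustrated with a short sketch in Python. The factor names follow the list above, and the 0.25 weight on peer assessment and the combined 0.80 weight on the first four factors match the 2005 figures cited here; the remaining individual weights and all of the scores are invented for illustration and are not U.S. News's actual values.

    # Hypothetical illustration of a weighted-sum composite score.
    # Only the 0.25 peer-assessment weight and the 0.80 total weight on the
    # first four factors follow the 2005 figures cited above; the other
    # individual weights and the scores are invented for this example.

    def composite_score(factors, weights):
        """Combine normalized factor scores (0-100) into one ranking score."""
        return sum(weights[name] * factors[name] for name in weights)

    weights = {
        "peer_assessment": 0.25,
        "retention": 0.20,
        "faculty_resources": 0.20,
        "student_selectivity": 0.15,
        "financial_resources": 0.10,
        "graduation_rate_performance": 0.05,
        "alumni_giving": 0.05,
    }

    # Made-up normalized scores for a single institution.
    example = {
        "peer_assessment": 90,
        "retention": 95,
        "faculty_resources": 80,
        "student_selectivity": 85,
        "financial_resources": 70,
        "graduation_rate_performance": 60,
        "alumni_giving": 40,
    }

    print(composite_score(example, weights))  # 82.25 with these invented numbers

Because the weights are set editorially rather than estimated from data, small changes to them can reorder closely scored institutions, which is the substance of the NORC criticism quoted above.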

A New York Times article reported that, given the U.S. News weighting methodology, "it's easy to guess who's going to end up on top: Harvard, Yale and Princeton round out the first three essentially every year. In fact, when asked how he knew his system was sound, Mel Elfin, the rankings' founder, often answered that he knew it because those three schools always landed on top. When a new lead statistician, Amy Graham, changed the formula in 1999 to what she considered more statistically valid, the California Institute of Technology jumped to first place. Ms. Graham soon left, and a slightly modified system pushed Princeton back to No. 1 the next year."[1]

Other rankings of US universities

Other organizations which compile general US annual college and university rankings include the Fiske Guide to Colleges and the Princeton Review. Many specialized rankings are available in guidebooks for undergraduate and graduate students, dealing with individual student interests, fields of study, and other concerns such as geographical location, financial aid, and affordability.

Among the best-known rankings dealing with individual fields of study is the Philosophical Gourmet Report or "Leiter Report" (after its founding author, Brian Leiter of the University of Texas at Austin), a ranking of departments of analytic philosophy. This report has been at least as controversial within its field as the general U.S. News rankings, attracting criticism from many different viewpoints, but it is also extremely popular and well regarded by many in the profession. Notably, practitioners of continental philosophy, who perceive the Leiter report as unfair to their field, have compiled alternative rankings.

Avery et al. recently published a working paper for the National Bureau of Economic Research titled "A Revealed Preference Ranking of U.S. Colleges and Universities." Rather than ranking programs by traditional criteria, their analysis uses a statistical model based on applicant preferences: which school a student actually chooses when admitted to more than one. Their data come from the applications and admission outcomes of 3,240 high school students. The authors argue that their ranking is less subject to manipulation than conventional rankings (see criticism below).
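
The intuition behind a revealed-preference ranking can be sketched in Python as follows; the data are invented, and the simple head-to-head win-rate tally shown here stands in for the formal statistical model the authors actually fit.

    # Toy revealed-preference tally with invented data. Each tuple records the
    # school a student enrolled at and the other schools that admitted them.
    from collections import defaultdict

    decisions = [
        ("School A", ["School B", "School C"]),
        ("School B", ["School C"]),
        ("School A", ["School B"]),
        ("School C", ["School B"]),
    ]

    wins = defaultdict(int)         # times a school was chosen in a head-to-head
    comparisons = defaultdict(int)  # times a school appeared in a head-to-head

    for chosen, declined in decisions:
        for other in declined:
            wins[chosen] += 1
            comparisons[chosen] += 1
            comparisons[other] += 1

    # Rank schools by the share of head-to-head "matchups" they won.
    ranking = sorted(comparisons, key=lambda s: wins[s] / comparisons[s], reverse=True)
    print(ranking)  # ['School A', 'School C', 'School B'] for this toy data

Because such a ranking is driven by students' enrollment choices rather than by figures the institutions report, a school cannot easily improve its position by adjusting reported statistics, which is the sense in which the authors consider it harder to manipulate.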

Rankings of Canadian universities

Maclean's, a Canadian news magazine, publishes an annual ranking of Canadian universities known as the Maclean's University Rankings. The criteria are based on a number of factors, including characteristics of the student body, classes, faculty, finances, the library, and reputation. The rankings are split into three categories: primarily undergraduate (schools that focus on undergraduate studies with few or no graduate programs), comprehensive (schools that focus on undergraduate studies but have a healthy selection of graduate programs), and medical doctoral (schools that have a very wide selection of graduate programs). As the most prominent ranking of Canadian universities, these rankings have received much scrutiny and criticism from universities, especially those that receive unfavourable rankings. For example, the University of Calgary produced a formal study examining the methodology of the ranking, illuminating the factors that determined the university's rank, and criticizing certain aspects of the methodology[4]. Top-ranked schools, on the other hand, tend to embrace the results and refrain from criticising. A notable difference from the US rankings is that Maclean's does not include privately funded universities; however, the vast majority of Canadian universities, including the best known, are publicly funded.

  • The primarily undergraduate rankings
  • The comprehensive university rankings
  • The medical doctoral rankings

Rankings of UK universities

The Research Assessment Exercises (RAE) are attempts by the UK government to evaluate the quality of research undertaken by British universities. Each subject, called a unit of assessment, is given a rating by a peer review panel. The ratings are used in allocating the funding each university receives from the government. The last assessment was made in 2001. The RAE provides quality ratings for research across all disciplines. Panels use a standard scale to award a rating for each submission, ranging from 1 to 5* according to how much of the work is judged to reach national or international levels of excellence. Higher education institutions (HEIs) which take part receive grants from one of the four higher education funding bodies in England, Scotland, Wales and Northern Ireland.

Standards of undergraduate teaching are assessed by the Quality Assurance Agency for Higher Education (QAA), an independent body established by the UK's universities and other higher education institutions in 1997. The QAA is under contract to the Higher Education Funding Council for England to assess quality for universities in England. This replaced a previous system of Teaching Quality Assessments (TQAs) which aimed to assess the administrative, policy and procedural framework within which teaching took place but did not directly assess teaching quality.

Criticisms of rankings

College and university rankings, especially the well-known U.S. News rankings, have drawn significant criticism from within and without higher education. Critics feel that the rankings are arbitrary and based on criteria unimportant to education itself (especially wealth and reputation); they also charge that, with little oversight, colleges and universities inflate their reported statistics. Beyond these criticisms, critics claim that the rankings impose ill-considered external priorities on college administrations, whose decisions are sometimes driven by the need to create the most desirable statistics for reporting to U.S. News rather than by sound educational goals.

Furthermore, some have suggested that the formulae and methodologies used to turn the various data into a ranking are arrived at, if unconsciously, in a way that keeps a few key institutions at the top of the chart. The argument is not that the editors are unduly partisan, but that a system whose results fly in the face of conventional wisdom is subconsciously assumed to be faulty, so editorial decisions tend to reinforce preconceptions. In other words, if the public looks to ranking publications not so much for guidance as for confirmation of its own assumptions, then the editors of U.S. News (however proud they are of the annual "fine-tuning" of their methodology) may be predisposed to overlook any methodology that "rocks the boat" far enough to drop, say, Harvard out of the top handful of schools.

Some of the specific data used for quantification are also frequently criticized. For instance, Rice University, with a top-five per-student endowment and a generous financial aid department, is ranked in the mid-twenties for per-student "Financial Resources". As another example, the "Peer Assessment" gives the opinions of administrators at lesser-known schools such as Florida Atlantic and North Dakota State the same weight as those of, say, Harvard and Stanford. Students with their sights set on the best graduate schools may not be interested in knowing which programs the administrators of lower-ranked schools have heard of, or vice versa.

Other critics, seeing the issue from students' and prospective students' points of view, claim that the quality of a college or university experience is not quantifiable, and that the ratings should thus not be weighed seriously in a decision about which school to attend. Individual, subjective, and random factors all influence the educational experience to such an overwhelming extent, they say, that no general ranking can provide useful information to an individual student.

Suppose, as these critics illustrate, that the difference between an "excellent" school and a "good" one is often that most of the departments in the excellent school are excellent, while only some of the departments in the good school are excellent. And the difference between an excellent department and a good one might be, similarly, that most of the professors in the excellent department are excellent, while only some in the good department are. For an individual student, depending on the student's choices of field of study and professors, this will often mean that there is no difference between an excellent college or university and a merely good one; the student will be able to find excellent departments and excellent faculty to work with even at an institution which might be ranked "second-tier" or lower. Statistically, the rankings are distributions with large variances and small differences between the individual universities' means (averages).
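
This statistical point can be made concrete with a small Python simulation using invented numbers: when the gaps between institutions' mean scores are small relative to the spread of individual outcomes, a single sampled outcome frequently disagrees with the ordering of the means.

    # Toy simulation: closely spaced means with a large spread make the sampled
    # ordering unstable. All numbers are invented for illustration.
    import random

    random.seed(0)
    means = {"University X": 70.0, "University Y": 69.0, "University Z": 68.0}
    spread = 5.0  # standard deviation, much larger than the 1-point gaps

    trials, disagreements = 1000, 0
    for _ in range(trials):
        draws = {name: random.gauss(mu, spread) for name, mu in means.items()}
        sampled_order = sorted(draws, key=draws.get, reverse=True)
        if sampled_order != ["University X", "University Y", "University Z"]:
            disagreements += 1

    print(f"{disagreements}/{trials} samples disagree with the ordering of the means")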

Complicating matters further, as most educators and students observe, individuals' opinions about the excellence of academic departments and, especially, of professors vary widely with personal preferences. The quality of an individual student's education is largely determined by whether the student happens to encounter a small number of professors who "click" with and inspire him or her. Similarly, the main difference between a "good" or "second-tier" large state university and an "excellent" or "top-tier" prestigious smaller institution, for the student, is often just that at the larger school the student needs to work a bit harder and be a bit more assertive and motivated in order to extract a good education. For many students this will not be difficult enough to justify a preference for the smaller institution, though some individuals do prefer a smaller school.

Forget U.S. News Coalition

In the 1990s a coalition of student activists calling themselves the Forget U.S. News Coalition (and occasionally substituting "fuck" for "forget") arose, based initially at Stanford University. FUNC attempted to influence college and university administrations to reconsider their cooperation with the U.S. News rankings. It met with limited success: some administrations encouraged the development of alternatives to the rankings, but most institutions (including Stanford) continued to cooperate with U.S. News. Critics of FUNC question its motives, claiming that the organization is dissatisfied with the rankings not out of principled objections to the ranking process but because Stanford has ranked below Harvard, Yale, and Princeton for the past 10 years. One school that has criticized U.S. News but has held steady in the rankings is Emory University, which generally ranks in the top 20 nationwide.

Colleges and criticism of U.S. News rankings

Reed College has neither cooperated with the U.S. News rankings nor submitted any institutional data to U.S. News since 1994; its administration has been outspoken in its criticism of the rankings. Critics charge, and Rolling Stone magazine reported, that Reed's "second-tier" or lower ranking in U.S. News's lists, which was based on U.S. News estimates of non-submitted data, was artificially depressed as retribution for Reed's harsh criticism of the rankings. Since refusing to cooperate with U.S. News, Reed has nonetheless continued to be successful among liberal arts colleges.[citation needed]

Ohio Wesleyan University and St. John's College have also declined to cooperate with the U.S. News rankings.

References

  1. ^ Thompson, Nicholas (2003): "The Best, The Top, The Most;" The New York Times, August 3, 2003, Education Life Supplement, p. 24

See also

  • World university rankings
  • Asian university rankings
  • Australian university rankings
  • Canadian university rankings
  • Germany university rankings
  • Italy university rankings
  • United States university rankings
  • United Kingdom university rankings

Rankings on the Web

  • Times Higher World University Rankings (http://www.thes.co.uk/worldrankings/)
  • 4 International Colleges & Universities (http://www.4icu.org/): directory of world-wide universities and colleges ranked by web popularity
  • TopUniversities.tk (http://www.topuniversities.tk/)
  • "G-Factor" Global University Ranking (http://www.universitymetrics.com/g-factor): world universities ranked by web links from other universities
  • Webometrics Ranking of World Universities (http://www.webometrics.info)

References

  1. ^ [5] - A 2005 ranking of the world's research universities from the Institute of Higher Education, Shanghai Jiao Tong University.
  2. ^ [6] - A 2005 ranking of the world's research universities from The Times Higher Education Supplement.
  3. ^ [7] - A list of universities with the most Nobel Prize winner affiliations. The totals are out of date for some universities.
