Template talk:Infobox US university ranking

WikiProject Universities (Rated Template-class)
This template is within the scope of WikiProject Universities, a collaborative effort to improve the coverage of universities and colleges on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Template: This template does not require a rating on the project's quality scale.

Additions and subtractions

To follow up on my comment in June (see the above section on Forbes), this template, frankly, is very problematic in its inclusions and exclusions of various rankings. That in itself introduces selection bias on the part of Wikipedia's editors. The inclusiveness of the table is therefore highly suspect. Here are some issues:

1) US News World Ranking is simply a republication of QS and should be eliminated. (see here and compare to QS)
2) Long-standing and highly respected, academically produced international rankings such as HEEACT, Leiden, SIR, and URAP are missing for no apparent reason. These are rankings of entire universities, not departments. And while these rankings focus on research impact and output, the methodology or focus of any particular ranking, whether it be undergraduate measures like US News and WAMO or research/graduate measures like ARWU and QS, is beside the point: it is not within Wikipedia's remit to determine the importance of one methodology over another. In fact, what makes these rankings particularly important is that their methodologies are driven solely by quantitative data, thus eliminating any survey or prestige bias. At the very least, a "research" section should be considered for the table that would include these rankings.
3) Webometrics, which is undoubtedly a major ranking with demonstrated staying power, should also likely be included because of the reasoning given above: "it's not our job to adjudicate the validity of ranking methodologies".
4) Major US evaluators of universities, particularly the Princeton Review and, perhaps the most widely respected inside academia, The Center for Measuring University Performance, are completely ignored simply because of the difficulty of incorporating their evaluations into a standard ordinal rank. For such evaluators, their scores could be reported instead, such as PR's academic and admissions scores and CMUP's number of measures in the top 25 (although CMUP's could be listed as clusters as well).

I aim to remove US News World and add, at minimum, HEEACT, Leiden, SIR, URAP, Webometrics, the Princeton Review academic score (perhaps the admissions score as well), and CMUP (either the number of top-25 measures or the cluster grouping). Comments before I get started? CrazyPaco (talk) 08:48, 27 January 2013 (UTC)

Please don't get started; let us discuss here first. This template impacts numerous articles. Let's discuss each separately (see below). Keep in mind that the goal of this template is to present overall rankings of universities. There are myriad rankings that reflect only single aspects of universities (academics, research, athletics, environmental friendliness, social presence, etc.) or university departments/schools/colleges -- too much and too complex to reflect in any one template; therefore, the community has decided to have a standard "overall rankings" template for all U.S. universities. More boutique rankings are welcome in prose in the body of the article. —Eustress talk 18:17, 27 January 2013 (UTC)
Agreed. I'd like evidence that these ranking systems are influential and taken seriously beyond the organizations who create them, local media who report anything about local institutions, university press releases, and general "we have nothing else to write about today so here's this thing that happened" "news" stories. In fact, I think it would be wonderful if we stepped back and did that even for the items currently included in this template.
So on those grounds I stand opposed to adding anything else to this template until it's actually demonstrated - not merely asserted without evidence - that a ranking system has significant influence and merit. ElKevbo (talk) 04:16, 28 January 2013 (UTC)
None of the suggested inclusions are of dubious merit as was CWUR above, unless you are suggesting that non-commercial government agencies and university-based academic researchers are dubious. HEEACT is the non-commercial Higher Education Evaluation and Accreditation Council of Taiwan, funded by the country's Ministry of Education, and has been releasing its study for 7 years. It is perhaps the most influential academic research ranking in the world, certainly in Asia. SCImago is a consortium of university statistical researchers out of Spain, and their methodology has been published in scientific journals (see this article in FASEB). Same with Leiden (here and in the journal Scientometrics). This ranking obviously comes from Leiden University in the Netherlands, at their Centre for Science and Technology Studies, and it is probably the most noted academic research ranking in Europe. URAP is one of the newer ones, but it arises from the non-profit Informatics Institute of Middle East Technical University in Turkey and holds annual academic symposiums. These are non-commercial, respected, academic-based rankings...truly international ones, that obviously don't get articles published regularly about them in the NY Times because they aren't best sellers on newsstands or hawked by guidance counselors to high school students and their parents. However, they are followed closely by academics and published and presented in academic forums. I recommend you click on the links to them above to see if they look anything like the fly-by-night commercial rankings you are concerned about. CrazyPaco (talk) 08:48, 28 January 2013 (UTC)
(Sorry, Crazypaco; this isn't personal or directed at you! We struggle to keep college and university articles neutral, and the constant addition of rankings both meritorious and dubious makes that job very difficult. It's bad enough that nearly all of our articles about U.S. colleges and universities make them out to be glowing paragons of academic rigor, student happiness, and research productivity without us adding fuel to the fire by continuing to pile on more ranking systems. ElKevbo (talk) 04:16, 28 January 2013 (UTC))
Difficulty in maintaining articles is not a guiding principle of Wikipedia or a valid criterion for inclusion or exclusion of content. Neutrality, precision and expert knowledge are. Exclusion of legitimate information by arbitrary criteria achieves the opposite of your objective, which in this case should be a well-rounded and inclusive use of a variety of evaluative methodologies. CrazyPaco (talk) 08:01, 28 January 2013 (UTC)
NPOV is a core policy and that is the problem I am addressing. ElKevbo (talk) 14:49, 28 January 2013 (UTC)
*The rankings you decry as being just about "research performance" are almost identical in scope to what the included ARWU ranking evaluates: "1) the number of alumni and staff winning Nobel Prizes and Fields Medals, 2) number of highly cited researchers selected by Thomson Scientific, 3) number of articles published in the journals Nature and Science, 4) number of articles indexed in Science Citation Index, and 5) per capita performance with respect to the size of an institution." The ARWU is a pure research ranking, ignoring all undergraduate and outreach/service components of any institution, and its inclusion in this template is entirely inconsistent with your definition of "overall". Further, ARWU is really no different from HEEACT, which prioritizes three key components, "research productivity, research impact, and research excellence", with many essentially identical metrics. Likewise, SIR maintains an emphasis on "scientific impact, thematic specialization, output size, and international collaboration networks of the institutions". SIR is in turn similar to URAP. In fact, only QS and THES seem to have an interest in the "overall" university, but then their methodologies are heavily weighted towards research, and several of their metrics are duplicated in the "research-focused" rankings above. For QS, 40% is an academic quality survey that asks for respondents' opinion of which institutions "they consider best for research", 20% is publication citations from Scopus, 20% is student-faculty ratio, 10% is international faculty and students, and 10% is employer reputation. So QS weights research, at minimum, as 60% of its score. Likewise, THES bases its ranking on research (30%), citations (30%), industry income (2.5%, which is research), international outlook (including research, 7.5%), student:faculty ratio (4.5%), PhDs-to-bachelors ratio (2.25%), PhDs awarded (6%), and an academic reputation survey of research (15%; the THES reputation survey questions scholars "at the level of their specific subject discipline" with such queries as "Which university would you send your most talented graduates to for the best postgraduate supervision?", a question that therefore doesn't evaluate undergraduate education at all). Therefore, conservatively, at least 85% of the THES ranking is based on research or graduate measures. CrazyPaco (talk) 07:33, 28 January 2013 (UTC)

--

IMO, there is a major flaw in your reasoning before you even begin to examine each of the rankings, because you are addressing these rankings without defining the template scope, which currently is inconsistent. You need to clarify this before you address individual rankings.
You state that the object of this template is to present "overall rankings". Your definition of "overall" is ambiguous. "Overall" can and does mean the university in toto, as opposed to individual components of the institution such as individual university schools, programs, departments, and centers. Each of the above rankings evaluates the university as a whole in this manner, i.e. it does not break things down to specific fields or disciplines as many rankings do. Since I stated this above and you did not comment on it, I have to assume that you are attempting to define "overall" as the inclusion of all aspects of a university's mission or function. The problem is that there is not a single ranking methodology in existence, including the ones currently included in the table, that measures any institution's overall mission or function.
For example, take the mission statement of, as a random choice, UCLA, which primarily states it exists for the "creation, dissemination, preservation and application of knowledge". This seems like it would apply to many universities, but most of the included rankings (US News, Forbes) do nothing to evaluate UCLA's core mission of the "creation" of knowledge, which is research. Further, UCLA states "civic engagement is fundamental to our mission as a public university", something that you'll find central to many university missions but ignored by almost all rankings except Washington Monthly. Therefore, your initial premise that any ranking evaluates the "overall" university, in terms of its function or mission, is fatally flawed. No one ranking comes close to evaluating the sum of the stated core missions or functions of the vast majority of universities and colleges.
For a particular example of this, Forbes' "America's Top Colleges" states right on the front of its website, "Our annual list of America's best undergraduate institutions focuses on educational outcomes..." Therefore, Forbes ONLY focuses on and thus evaluates the undergraduate components of institutions, NOT the "overall" university that includes graduate education, research, and community outreach. All other aspects of institutions are ignored other than what Forbes has determined to be its focus. Because of this, based on the definition of "overall" that you use to exclude many of the above rankings, I can find no justifiable way to also continue to include Forbes' rankings. Therefore, this template is entirely inconsistent with your stated criterion of "overall" rankings by your definition. You can make this same argument for every single ranking already included in the template, some more strongly than others. What this effectively results in is the inclusion of some rankings based on editorial WP:POV or WP:Bias, not on standardized criteria applicable across the plethora and diversity of American institutions of higher education.
What differs among all published rankings is their purpose, which is reflected in the unique methodology that each organization constructs to evaluate universities towards its own ends. Each ranking therefore brings its own priorities and points of view as to the importance of any one metric for the comparison of one university to another. For US News, the purpose of the annual "Best Colleges" ranking is to serve as a college guide for prospective undergraduate students. Therefore, its methodology reflects the evaluation of metrics that it has deemed important in the cross-institutional evaluation for these future college applicants. For Washington Monthly, their ranking states that they are interested in universities' "contribution to the public good", and thus their methodology focuses on social mobility, research, and service while largely ignoring undergraduate and graduate education itself. Every ranking is thus POV and/or biased, and it is not Wikipedia's place to editorialize on either the focus or methodology of any one ranking, whether this editorialization comes in the form of a written critique or via censorship, by exclusion, of any one established ranking's individual POV on university quality. The only way to avoid this editorialization-by-exclusion is for Wikipedia to be as inclusive as possible in providing available information to readers so that they can decide for themselves the importance or relevance of this information.
And with this stated, it is easily justifiable to include "research-focused" rankings in this table, perhaps in a separate "research" section, or to create a new, separate research ranking table that would be inclusive of these measures. In either case, because of the individuality and diversity of educational institutions in the US, along with the infinite number of possible motives and interests of any one reader, an editorialized POV of the merits of rankings' focus and methodology, as is manifested in the current form of this template, should not be foisted onto the Wikipedia articles of colleges and universities. Undoubtedly, in the template's current state, readers of the articles are better off with customized tables that adhere to WP:NPOV and are able to reflect the individual missions and focus of each particular institution: undergrad, grad, research, service, value, etc.
I look forward to you addressing these points. Respectful regards. CrazyPaco (talk) 07:33, 28 January 2013 (UTC)
  • Good points, although brevity would be appreciated in the future. I think the community needs to have a discussion about the scope of this template before discussing individual rankings any further. Once we can decide upon some criteria, I think the rest will flow. —Eustress talk 23:03, 28 January 2013 (UTC)
I do apologize. Being wordy is often a weakness of mine, especially as I try to show various examples and angles. CrazyPaco (talk) 07:03, 30 January 2013 (UTC)
  • The whole premise of this template is troubling--I would support a deletion altogether. If they are to be used at all (something I do grudgingly accept), these rankings need context within the article, not a hit-it-and-quit-it infobox.--GrapedApe (talk) 13:29, 28 January 2013 (UTC)
  • I think it's beneficial to have a table that concisely presents a handful of notable rankings. —Eustress talk 23:03, 28 January 2013 (UTC)
  • Without context, especially explaining the ranking, the rankings are useless.--GrapedApe (talk) 00:41, 29 January 2013 (UTC)

Template 2.0

This template has served enwp well for a while now, but the discussion above makes me feel a discussion about the fundamentals of the template is needed. That is, we need to decide upon the scope of the template: some criteria for which rankings should be included and which should not. Here are the key criteria as I see them:

  • Notability. The ranking should, as ElKevbo said above, be "influential and taken seriously beyond the organizations who create them, local media who report anything about local institutions, [and] university press releases". Anyone can create their own ranking and get it cited by universities for whom the ranking is favorable.
  • Reflects the university as a whole and not a subcomponent. The ranking should address the university, not a department/school/college of the larger university.
  • Reflects a broad aspect of the university. This one is complex. CrazyPaco is correct that "there is not a single ranking methodology in existence... that measures any institution's overall mission or function;" however, I think this template should strive to include rankings that are as broad in aspect as possible. A ranking of a university's undergraduate education, such as USNWR, is broad, but a ranking that considers only publications in two academic fields, such as Leiden, is too narrow.
  • Standardized format. Rankings should be in ordinal format (Jimbo University #1, Wales College #2, etc.) so all rankings in the template are parallel in format.

What do you think? Again, my hope is that once the criteria are nailed down, we can easily run current and proposed rankings through them. —Eustress talk 23:31, 28 January 2013 (UTC)

One way to further operationalize "notability" in this context would be to set a minimum number of years for which the system has been in place. ElKevbo (talk) 02:15, 29 January 2013 (UTC)
I have a dream, my friends, that some day we will have this and similar templates (e.g., parts of the university infobox, Carnegie Classifications) regularly populated and updated by one or more bots pulling from centralized sources of data (e.g., IPEDS, the annual NACUBO endowment listing, tables of rankings). Anyone up for helping me think about this some more and maybe moving it forward in some small way? ElKevbo (talk) 02:15, 29 January 2013 (UTC)
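To make the bot idea concrete, here is a minimal sketch of what one pass of such a job might look like, assuming a hypothetical CSV export of a rankings table; the URL, the CSV column names, and the |Forbes= parameter are illustrative placeholders, not actual data feeds or confirmed field names of this template:

# A rough sketch of a bot pass over one article's wikitext, under the
# assumptions stated above. A real bot would read and save pages through
# the MediaWiki API (e.g., via pywikibot) and follow the bot policy.
import csv
import io
import re

import requests

RANKINGS_CSV = "https://example.org/forbes_top_colleges.csv"  # hypothetical export


def load_rankings(url: str) -> dict:
    """Return {institution name: rank} parsed from the CSV feed."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    reader = csv.DictReader(io.StringIO(resp.text))
    return {row["institution"]: row["rank"] for row in reader}


def set_parameter(wikitext: str, param: str, value: str) -> str:
    """Overwrite |param = ... inside a template transclusion with a new value."""
    pattern = re.compile(r"(\|\s*" + re.escape(param) + r"\s*=\s*)[^|\n}]*")
    return pattern.sub(lambda m: m.group(1) + value, wikitext)


if __name__ == "__main__":
    ranks = load_rankings(RANKINGS_CSV)
    article = "{{Infobox US university ranking\n| Forbes = 12\n}}"
    print(set_parameter(article, "Forbes", ranks.get("Jimbo University", "12")))

The only article-specific piece is the regex that rewrites one parameter in place, so the same pass could be repeated across every transclusion of the template once the data sources and field names are pinned down.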
  • Thank you for starting this discussion. I agree it is needed. I will try to be briefer in my comments.
re: Notability I agree with your statements on notability, but this should not necessarily be the same as the WP:Notability policy for the appropriateness of inclusion of articles within Wikipedia. For instance, academic journals are generally thought to be sufficiently notable, within certain guidelines, to warrant their own articles despite the fact that few of them have ever received "significant coverage" from secondary sources. (See Wikipedia:Notability (academic journals).) Therefore, academically produced and utilized evaluations may suffer a similar lack of coverage (outside of ranked universities' press releases promoting their ranking). However, this does not make them necessarily less valid than highly commercialized rankings. Therefore, I would caution against the use of the word "notability" in order to avoid confusion with the Wikipedia policy that governs the worthiness of article inclusion within Wikipedia. For commercial rankings, or just random blogs and websites, yes, I do think WP:NOTE is a valid criterion, but academic ones may warrant special consideration. Thankfully, it is pretty easy to tell the difference between the two.
That said, academic-based rankings that have been covered in the academic literature, or have published their methodology in the academic literature, are in no danger of resembling ones "just anyone is making up". As you mentioned, it may be useful to note whether they publish updated rankings somewhat regularly as well, although I'm not sure that fact alone makes a ranking legitimate. Preemptively, here is an academic article comparing the HEEACT, ARWU, and THE-QS (now QS) methodologies. Here is a journal article on SCImago's methodology. Here's one on Leiden. An article in the journal Science discusses ARWU, THES, and Leiden here. Here's an article from The Chronicle of Higher Education that mentions the significance of CMUP's rankings within academia.
re: Reflects the university as a whole and not a subcomponent. I agree.
re: Reflects a broad aspect of the university The broad aspect is a difficult question to tackle, and may require more space than this reply. Are, in fact, undergraduate-focused rankings broad? You say they are, yet I say they are not. I think you can get into trouble by defining "broad" in ways that are, ironically, narrow. Is a "best value" ranking broad? Such rankings typically take into account both the financials of attendance and the quality of undergraduate teaching. Therefore, perhaps "best value" rankings are broad, or even broader, because they evaluate additional criteria. What then about other rankings of more limited scope, like "Top Publics"? Rankings are more meaningful for one school than another based on mission (public/private, research/non-research, etc.), and we likely cannot create a one-size-fits-all model. Moreover, is it actually problematic for this template to offer the ability to include such rankings if they meet the criterion of being from a reputable source? I don't think that it is. If you are going to offer this template as a tool for editors to help summarize rankings of a school, which effectively highlights those rankings in the school's article, then we have to be careful that such highlighting doesn't unfairly editorialize on the rankings by our inclusion or exclusion of them.
Regarding Leiden specifically, and perhaps this should wait until later, Leiden states upfront that it measures the sciences and social sciences but not the humanities and arts, because of insufficient accuracy of publication indices in those fields. However, it is somewhat disingenuous to claim all of science and social science constitute but two fields. These are major super-categories of human intellectual endeavor covering hundreds, perhaps thousands, of fields. Again, this gets into issues of institutional missions and differences for which a satisfactory one-size-fits-all model will not exist. BTW, the already included ARWU ranking also excludes arts and humanities from its evaluation. The truth is, most research-based rankings do exclude the humanities and arts for the same reason Leiden does...and because the evaluation of productivity and quality in the humanities and arts doesn't necessarily follow the peer-reviewed research publication model of science...and therefore it is very difficult to evaluate.
re: Standardized format. I disagree with the requirement for an ordinal rank. I don't think it is problematic to include non-ordinal evaluations. In particular, requiring ordinal ranks eliminates the most influential inter-academic evaluation in the US: CMUP. Notations can be added to the template to explain the values assigned. I don't see how that would be difficult, or why difficulty would be a valid reason to prevent an evaluation's inclusion. I also think it is more in line with Wikipedia's mission to err on the side of providing info, rather than to eliminate info because it doesn't fit neatly into a paradigm that, in this case, has been popularized by commercial evaluations like US News. CrazyPaco (talk) 09:06, 30 January 2013 (UTC)

2013 Forbes

Note: the built-in references for this template currently point to the 2012 Forbes ranking. This is problematic, as the 2013 Forbes ratings now are available. Thanks, DA Sonnenfeld (talk) 09:25, 25 July 2013 (UTC)

Along similar lines, the 2013/14 QS rankings are available: [1]. I'd try to fix things myself, but I honestly can't find the page where I could make the change. Zagalejo^^^ 02:44, 11 September 2013 (UTC)
I've updated info for Forbes, USNWR, and QS, thanks. —Eustress talk 20:19, 11 September 2013 (UTC)
Thanks for doing that. Zagalejo^^^ 05:11, 12 September 2013 (UTC)

Washington Monthly should be removed

They use totally different criteria -- e.g. rate of participation in ROTC -- to judge universities. This leads to highly unorthodox results, such as Texas A & M and the University of Texas at El Paso (which has a 99.8% acceptance rate (1)) being ranked nearly two dozen spots above Yale and higher than both Harvard and Princeton (2). They're entitled to their opinion, but they're sufficiently heterodox that simply putting them in the template box (which provides no explanation as to methodology, etc.) is misleading and WP:Undue. Steeletrap (talk) 22:46, 10 October 2013 (UTC)

Please stop jumping around with this. There's a discussion at Wikipedia talk:WikiProject Universities#Mass removal of Washington Monthly rankings and let's keep it there for the time being. Madcoverboy (talk) 22:49, 10 October 2013 (UTC)
The main point of that discussion is my alleged "misbehavior." Reread the section title. This is about the rankings. Steeletrap (talk) 22:58, 10 October 2013 (UTC)
I agree - that thread is focused on behavior and not on content or this desired change. Would anyone mind if I copied the substantive comments in that thread and in other places (e.g. Talk:University of Chicago) here to centralize the discussion? ElKevbo (talk) 02:06, 11 October 2013 (UTC)
Good idea. You might also post a {{moved to}} template. And hat the other comments above. – S. Rich (talk) 02:26, 11 October 2013 (UTC)
  • Keep Washington Monthly in template. It's no worse than any other of the ridiculous rankings.--GrapedApe (talk) 11:36, 11 October 2013 (UTC)
  • I agree that this ranking should be kept in this template. It's discussed in many reliable sources and it's a credible ranking system (within the paradigm of ranking systems; if you reject them entirely then of course I expect that you would also reject this particular one). ElKevbo (talk) 13:54, 11 October 2013 (UTC)
  • OP: Keep in article but not in template. The methodology of the WM rankings is extremely different from the other ones, so they can't simply be listed in the template without explanation. The natural assumption is that the WM rankings are determined according to "mainstream criteria" -- e.g. amount of research, endowment, student high school grades/test scores, employment/grad school prospects, etc. -- so presenting them without explanation misleads many of our users. It should be removed from the template but can be included in the body of articles, provided that its criteria (e.g. rate of ROTC participation) are made clear. Steeletrap (talk) 21:54, 11 October 2013 (UTC)
  • WM is appropriate for template. Describing the rankings' criteria is a matter for the Rankings of universities in the United States article. Once we accept WM as "notable", inasmuch as it has its own article, the RS which discuss the appropriateness of its university ranking methodologies can be included in that article. Trying to parse out how well WM performs its rankings cannot be done in individual school articles. If I read the OP's comments correctly, she would like an explanation of the methodology, including caveats about ROTC participation, in the prose for each of the school articles. Well, actually, the template serves to avoid such parsing, which is vulnerable to BOOSTERism. Moreover, consider what is to be done when making these comparisons in the prose. UCSD is ranked #1 by WM. Do they have an ROTC program? I don't see it described in the article. But UCSD mentions WM on their Campus Profile webpage, with the 1st for Positive Impact, which they achieved without an ROTC program. Should we say so in the prose and point out how they got the first without the ROTC participation factor? If anything is to be done, the template might be modified to say "Positive Impact" on the WM line. – S. Rich (talk) 02:22, 12 October 2013 (UTC)
  • Keep WM in the template. Reasoning has been posted below a couple times. Chris1834 (talk) 02:33, 12 October 2013 (UTC)

Discussion previously held at WikiProject Universities

I have sound reasons for this "mass removal", which I provided as edit summaries. Inclusion of the Washington Monthly rankings is WP:Undue because those rankings radically contradict all of the other (mainstream, reliable) rankings. For instance, Texas A & M, the University of California at Riverside, and the University of Texas at El Paso are ranked well ahead of Harvard, Princeton, and Yale (1). Some of the criteria used to determine the rankings are highly dubious, such as rate of ROTC participation (which has roughly as much weight as research expenditures). It is WP:Undue to include these fringe rankings alongside the notable, mainstream ones they contradict. Steeletrap (talk) 22:07, 10 October 2013 (UTC)

Ranking something as varied and complex as a university is an absurd endeavor in its own right. Each published methodology represents a separate viewpoint, and the results will certainly contradict each other because they are measuring very different aspects of the same institution. IMO, either all viewpoints should be included or they shouldn't be used at all, which is my primary personal problem with how the ranking template is set up to begin with (and discussed here). But picking and choosing the inclusion of sourced opinion (e.g. rankings) based solely on an editor's personal opinion of the outcomes of the employed methodology has no place on Wikipedia per WP:NPOV. All major rankings should be included, and Washington Monthly is certainly one of them. CrazyPaco (talk) 22:32, 10 October 2013 (UTC)
Again, this has nothing to do with who is "right" or "wrong" (though I have to emphatically reject your apparent view that the criteria of the mainstream rankings -- test scores and high school grades of students; size of endowment; reputation among peers; employment/grad school prospects -- are arbitrary). The question of which university is "better" is inherently subjective in any case. However, on Wikipedia, we go off of what mainstream sources say. Mainstream rankings do not say the University of Texas at El Paso is a better school than Harvard, Yale and Princeton. The school has a 99.8% acceptance rate. (1) — Preceding unsigned comment added by Steeletrap (talk • contribs)
I agree with Paco's arguments vis-a-vis the varying levels of insanity, inanity, and absurdity in ranking methodologies' POVs but come down on the other side as to my preferred outcome -- that rankings should be omitted entirely and the substance these rankings purport to synthesize be discussed instead. If A&M is notable for having lots of ROTC, great, let's unpack that. If Harvard is notable for having lots of money, great, let's unpack that. But there's little consensus for this, and the accommodation we've reached instead is to include a summary of notable rankings. WM explicitly markets its method as being purposefully orthogonal to other approaches, so it's not surprising that institutions come in higher or lower than popular perceptions. WM's rankings are recognized by other reliable sources as worthy of discussion, so UNDUE simply doesn't apply here as it's not a fringe ranking (though there are plenty of those linkbait rankings as well). The purpose of the infobox is in part to prevent editors from erecting post-hoc justifications for excluding rankings that are unflattering to their institution on the basis of methodological qualms. Madcoverboy (talk) 22:43, 10 October 2013 (UTC)

Discussion previously held at Talk:University of Chicago

These absurd "rankings" claim Yale is #41 in the U.S. while Texas A & M is 2nd (1). (They also have Princeton ranked 6 spots below the University of California at Santa Barbara.) These should be removed from all the college ranking pages because they contradict what the clearly reliable, mainstream rankings from major publications say. They appear to be a "populist" attempt to denigrate elite private schools (Ivies, UChicago, and others) and elevate low-tier public schools. Steeletrap (talk) 17:39, 10 October 2013 (UTC)

They're pretty widely reported so you need a much better reason than your own personal opinion to justify removing them. ElKevbo (talk) 19:29, 10 October 2013 (UTC)
Personal opinion? That's not what she said. Look at the most widely cited rankings -- US News, Princeton Review, and others. SPECIFICO talk

(edit conflict)

The WP:UNIGUIDE has suggested guidance related to this issue. Also see Rankings of universities in the United States. – S. Rich (talk) 19:53, 10 October 2013 (UTC)
The WM rankings, whether or not they are "right" or "wrong", radically contradict (e.g., through their claim that UT El Paso and Texas A & M are vastly superior institutions to Princeton and Yale) literally every other RS ranking cited. Therefore it is WP:Undue to include them uncritically alongside these other rankings. Steeletrap (talk) 20:01, 10 October 2013 (UTC)
Also: LOL at their methodology. Rate of "ROTC participation" is one of their criteria for a top university, which is why A&M is so high. Steeletrap (talk) 20:04, 10 October 2013 (UTC)
There was a little discussion re Washington Monthly at Template talk:Infobox US university ranking. As this question pertains to more than just Yale, UofC, or other particular schools, I suggest bringing up the issue on the UNIGUIDE talkpage Wikipedia talk:College and university article guidelines. – S. Rich (talk) 20:38, 10 October 2013 (UTC)
So you believe that this ranking system is not useful because it doesn't duplicate other existing systems...? First, that is your personal point of view which is wholly insufficient to remove the material or pass judgment on it; that is particularly evident in your unjustified and uncritical disdain for the methodology. Second, as a criterion for validity it's a very problematic one because of the differences in methodologies.
Look, I get that we need to employ some level of editorial judgment when deciding what material to include or exclude in encyclopedia articles. But we can't employ our personal judgment to omit material simply because we don't like it or have an amateur gut feeling that the material might be incorrect when reliable sources have clearly stated otherwise. This particular ranking differs from your own judgment of the comparable quality of U.S. colleges and universities; deal with it. ElKevbo (talk) 20:39, 10 October 2013 (UTC)
It's an especially bad idea to edit war over this (or any other) issue. You've made your case; please respect WP:BRD by not edit warring. ElKevbo (talk) 20:41, 10 October 2013 (UTC)
This is an accepted WP template we are talking about. If it was populated with only the best looking or worst looking numbers, then WP:BOOSTER would be in play. But if it is populated with all or most of the available rankings, then we go with it. We would really be in trouble if we went to the Tx A&M article and removed this particular data from the template because the methodology included ROTC. – S. Rich (talk) 21:03, 10 October 2013 (UTC)
However, it is a very problematic template in that its contents have been unnecessarily limited (and thus biased) to a very small subset of rankings that often excludes other major evaluations measuring other major components of universities, as discussed on the template's discussion page. CrazyPaco (talk) 22:16, 10 October 2013 (UTC)
Just because a ranking uses different criteria doesn't make it wrong. It can use whatever criteria it wants. If someone wants to know why it is ranked that way, they can continue on and find out, just like they can with any of the other rankings...which all use different methods to some degree. I don't believe anyone has brought just cause as to why these rankings should be excluded. Chris1834 (talk) 21:15, 10 October 2013 (UTC)
Again, it's not about "right or wrong." It's about "mainstream or fringe." By virtue of radically opposing the rankings of the mainstream RS, Washington Monthly's inclusion is WP:Undue. You seem to fundamentally misunderstand the rules of this community. Steeletrap (talk) 22:01, 10 October 2013 (UTC)
OK let me rephrase... Just because a ranking uses different criteria doesn't make it fringe. It can use whatever criteria it wants. If it was claiming to use the same criteria and came up with vastly different rankings, I may agree with you but these are rankings...with different criteria all published by reputable major sources. "Neutrality requires that each article or other page in the mainspace fairly represents all significant viewpoints that have been published by reliable sources, in proportion to the prominence of each viewpoint in the published, reliable sources." --WP:UNDUE. This ranking is named on the University rankings WP page. That seems like there is a consensus that it is major enough to be listed on that page and thus major enough to be included here. Chris1834 (talk) 22:42, 10 October 2013 (UTC)
You're engaged in disruptive edit warring to prove your POV. Until such time as there's consensus to remove WM as a ranking, it stays in this and other articles using this template. While I'm very much sympathetic to your critiques about the flaws in the methodology, you don't get to pick and choose which notable rankings are included in the article based on gut feelings. Madcoverboy (talk) 22:13, 10 October 2013 (UTC)
"Absurd" is your personal point of view, which apparently leads you to disapprove of what Washington Monthly is actually measuring with their particular methodology, which isn't the same methodologically to what other rankings measure. All rankings have their absurdities and biases and none of them are ranking the exact same aspects of schools. The fact remains that Washington Monthly is a ranking published by a highly cited, national source and represents an alternative view point on institutional rankings that should not be ignored because of personal preference. Frankly, as a scientist, I find Forbes to be the most ridiculous of all published rankings because it employes extremely faulty and biased statistical methodologies, but Forbes is still widely cited and distributed and should be included as an alternative view point.
Now, that said, the ranking template itself is very poor in that it provides a very limited view on institutional rankings based on its own set of biases, and the template (and Wikipedia's readers) would be greatly served by it being all-inclusive so that readers can form their own opinions. Wikipedia should be providing all pertinent sourced information, not casting judgements on it via inclusion or exclusion, which is the crux of WP:NPOV. (See template discussion.) CrazyPaco (talk) 22:16, 10 October 2013 (UTC)

QS & THE (national)

I wonder why only ARWU was incorporated into the national part but not QS and THE, which also rank universities around the world. Biomedicinal (contact)

CWUR Ranking

Please make CWUR visible in the US table ranking! Juicy fruit146 (talk) 00:45, 20 July 2014 (UTC)

Template-protected edit request on 30 July 2014

Please make the CWUR global rankings visible. It's in the template, but for some reason, it's not showing up in the code. These are notable rankings of the Top 1000 universities in the world. Thanks! -AllisonFoley (talk) 13:05, 30 July 2014 (UTC)

That would essentially mean reverting this edit; accordingly, Not done: please establish a consensus for this alteration before using the {{edit template-protected}} template. See Template talk:Infobox US university ranking/Archive 1#Notability of the CWUR rankings for previous discussion. --Redrose64 (talk) 14:31, 30 July 2014 (UTC)

Sources

Where do the sources come from, and how do they get updated? In the example, they just magically appear. BollyJeff | talk 18:15, 9 September 2014 (UTC)

They're coded into the six subtemplates Template:Infobox US university ranking/Global, Template:Infobox US university ranking/National, Template:Infobox US university ranking/Baccalaureate, Template:Infobox US university ranking/Masters, Template:Infobox US university ranking/LiberalArts, Template:Infobox US university ranking/Regional. --Redrose64 (talk) 19:30, 9 September 2014 (UTC)
Thank you. BollyJeff | talk 20:07, 9 September 2014 (UTC)

Merge discussion

FYI, Wikipedia:Templates_for_discussion/Log/2014_November_29#University_ranking_templates —Eustress 23:23, 30 November 2014 (UTC)

Expansion of inclusion

I read from the post above that only overall rankings with multiple factors should be included in the template. So, I suggest including two international rankings: the Best Global Universities by US News & World Report, which previously collaborated with QS, and the ARWU Alternative Ranking. Moreover, I saw two national measures on the Stanford page which are stated as using multiple factors - not sure if they should be included as well. Biomedicinal (contact)