Wikipedia talk:Manual of Style/Film

WikiProject Film (Rated Project-class)
This page is within the scope of WikiProject Film. If you would like to participate, please visit the project page, where you can join the discussion and see lists of open tasks and regional and topical task forces. To use this banner, please refer to the documentation. To improve this article, please refer to the guidelines.
Project: This page does not require a rating on the project's quality scale.
WikiProject Manual of Style
This page falls within the scope of WikiProject Manual of Style, a drive to identify and address contradictions and redundancies, improve language, and coordinate the pages that form the MoS guidelines.

Years in film[edit]

I really think we should develop a policy for the "Years in film" lists. There is a page for each year that lists all films released (or to be released) in that year, as well as important film-related events of the year. However, these pages (at least the ones from 2011 forward) list the full cast and crew for each film, as well as the entire filmography of each person in the "Notable deaths" section, which makes these pages very long and difficult to navigate in some cases. For instance, the page 2013 in film is currently 346,400 bytes without images, and it could easily be ten times longer, since it only includes a small fraction of the films in Category:2013 films. Since most of the content in these pages consists of the cast and crew for films and the filmographies of notable deaths, I suggest developing a policy limiting the number of crew members listed for each film and the number of works listed for each person in the "Notable deaths" section. For films, I think listing the director(s) and screenwriter(s) plus 3 of the main cast would be reasonable. For notable deaths, I believe that listing 3-5 works per person would be sufficient.

Aside from length, the release dates of films in these lists can also be unclear. The vast majority of films in the "Years in film" lists cite the same website as the source of information. Unfortunately, that site deals primarily with films released in the United States and does not have information for many films outside of the country. In addition, it only gives US release dates for films, which are not always the films' first release dates. It's common practice to place films under their first release dates in "Years in film" lists, but because of the source that is most widely cited, most films end up being placed under their US release dates. For example, most people recognize November 22, 2013 as the release date of The Hunger Games: Catching Fire (which is the date the film is placed under in 2013 in film). Nevertheless, its very first release date was actually November 11, at the world premiere in London. I think we should also make it a policy to go with films' first release dates when listing them in "Years in film". Eventhorizon51 (talk) 17:55, 26 April 2014 (UTC)

The "notable deaths" sections have become problematic. They used to include just a few films but in recent months editors have been adding entire filmographies. Personally I don't think they should include film credits, or at the very most should only list the first and final credit to indicate the span of the person's career. As for release dates, this has been previously discussed and as far as I am aware there is a standing consensus to list the film by the date of its first public exhibition i.e. the date listing should match up to the earliest date in the release field on the individual film articles. If MOS guidelines are created for these lists (which I think is a good idea) in addition I would like to see box office totals reflect the totals from that particular release i.e. 1997 in film should only list the box office total for Titanic from the 1997 release since the 2012 reissue technically has nothing to do with the 1997 release. Betty Logan (talk) 18:33, 26 April 2014 (UTC)
If a policy is developed for the "years in film" lists, I think it should be included in the "Lists" subsection of the "Guidelines for related topics" section. I actually think that some film credits should be given in the "Notable deaths" sections just so that each person's notability is immediately apparent in the article. Credits, in my opinion, should definitely be given for the first and final works of each person, and maybe one or two in between if they are especially well known or impactful. In addition to this, I think that the number of crew members in listed films should be limited to the director(s), screenwriter(s) and 3 actors/actresses. Could the policy be worded like this:
  • If a list of films has a section for cast and crew, only the director(s), screenwriter(s) and three of the main cast should be listed for each film. Avoid listing the entire cast, as this increases the length of lists considerably. In "years in film" articles, the list of notable deaths should only include each person's first and last works, and no more than three notable works during a person's career. Eventhorizon51 (talk) 16:22, 27 April 2014 (UTC)
  1. The notable deaths section should follow the same rationale as recent deaths, i.e. listing one or two films (and no more) per individual.
  2. Awards section - Why is the Critics' Choice Awards listed here? Drop this and possibly the Screen Actors Guild Awards too. If anything, the main awards for the Cannes Film Festival should be listed.
  3. Speaking of main awards, the last six listed awards should all be dropped as being trivial. "Wow! Gravity. How many Best Original Score awards did it win?!"
  4. Films - studio and genre seem excessive. I'd recommend dropping them both, or at least the genre field.

That should trim it down to make it a bit easier on the load time. Lugnuts Dick Laurent is dead 17:02, 27 April 2014 (UTC)

I would oppose dropping the studio and genre columns. Those are basic, quick facts about films and do not lengthen the lists by a whole lot. What really needs to be trimmed is the number of cast members listed, as they make up quite a substantial portion of the list. Eventhorizon51 (talk) 23:56, 27 April 2014 (UTC)
For the genre, I'm guessing this has been the target for edit wars in the past (I've not checked, but it's a safe assumption). Maybe to ease the page-load time, I don't see any need to link to any of the terms in this column, so replacing 50 instances of [[drama film|drama]] with simply "drama" should do no harm. Lugnuts Dick Laurent is dead 07:28, 28 April 2014 (UTC)
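For illustration only (the genre name here is just an example), the unlinking described above would look like this in a table's genre cell in wikitext:

```wikitext
<!-- Before: piped link in the genre column -->
| [[drama film|drama]]
<!-- After: plain text, no link -->
| drama
```

The rendered text is identical in both cases; only the link (and its small page-load cost across dozens of rows) goes away.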
If you're only suggesting that the links in the genre section be dropped, I have no objections. But what about the number of cast members listed per film? I would highly recommend a policy being developed to limit those. Eventhorizon51 (talk) 23:57, 28 April 2014 (UTC)
Totally agree. How about this - only list the cast as you would per the infobox's starring field? This is limited to what's on the film poster/billing block. Lugnuts Dick Laurent is dead 06:52, 29 April 2014 (UTC)
Here's my .02: First, while I like the suggestion to make the number of actors mentioned a hard limit (like 3, per Eventhorizon51's original suggestion), that might be difficult in terms of an ensemble piece, like The Big Chill, so Lugnuts' later suggestion, about limiting it to the "infobox rule", would hopefully cut down the actors listed. Second, I also agree that you could cut the crew down to the director and the screenwriter. Third, I think you could drop the Film Critics Awards; other than that, I would keep the others, as they are the four main ones. There are others you could add, such as Cannes, Sundance, DGA, all of which have some importance in the industry, but you have to cut somewhere, and I think those 4 are the majors. Fourth, regarding genre, if edit wars are a concern, pick an industry standard to go by when declaring a genre (e.g. AFI), and limit genre selection that way. Fifth, I agree with using the same dates as the earliest release (we just went through that discussion regarding release dates in the infobox). Sixth, regarding notable deaths, I agree with the consensus here, although there I would limit the maximum number to 2 films; if they want more info, all they have to do is click on the page link for that person. However, I would change the format of the notable deaths to be able to sort it alphabetically, or by date. Onel5969 (talk) 12:08, 29 April 2014 (UTC)
Should we have a subsection on this page for Years in film articles? I feel that many policies apply to those articles only. Eventhorizon51 (talk) 22:46, 29 April 2014 (UTC)
Yes, it can be added to this part of the MOS once agreed. I think we have the outline of what we need to do already. Lugnuts Dick Laurent is dead 07:36, 30 April 2014 (UTC)


OK, here's a go at a first draft. Please feel free to change any of what I've done:

  • For years in film articles, such as 2013 in film, please follow these guidelines:
  1. Always go by the films' earliest release date, whether it be at a film festival, a world premiere, a public release, or the release in the country or countries that produced the film, excluding sneak previews or screenings.
  2. List only the director, screenwriter and the main cast, as per the guidance in the starring field of the film infobox.
  3. For the deaths section, only list one or two of the most important works attributed to the individual, as per their listing on the recent deaths page.
  4. Do not pipe a link to the genre, simply add the relevant text.

I think that starts to address the main issues of the size of these pages. Thanks. Lugnuts Dick Laurent is dead 10:45, 30 April 2014 (UTC)

I'm more or less ok with those. They would at least be an improvement on the current situation. Betty Logan (talk) 11:31, 30 April 2014 (UTC)
Thanks Betty. @Eventhorizon51: - do you have any changes or additions to make? Lugnuts Dick Laurent is dead 11:42, 30 April 2014 (UTC)
I'm okay with those above, although I would add: "The listing date should be selected as per WP:FILMRELEASE." Onel5969 (talk) 13:44, 30 April 2014 (UTC)
Thanks! Lugnuts Dick Laurent is dead 18:24, 30 April 2014 (UTC)
I added a policy about release dates, just to reinforce the release date policy as per WP:FILMRELEASE. Should there also be a policy for what films should be included in these lists? Or is it ok to include every film in each article's respective film category (eg. Category:2013 films)? Eventhorizon51 (talk) 23:22, 30 April 2014 (UTC)
Thanks. Yes, I'd have the scope cover any film in the relevant year category, and not just fill the list with redlinks. Lugnuts Dick Laurent is dead 07:22, 1 May 2014 (UTC)
Ok, I personally think the draft is ready to go in the MOS as a policy. If anyone else wants to add anything, feel free to do so. If not, then how do we establish a consensus so we can actually add this to the MOS? Eventhorizon51 (talk) 22:31, 1 May 2014 (UTC)
Thanks EH. I think we should be bold and publish it, with anything that needs a discussion raised here. I'll await feedback on the point below and go ahead with the change over the weekend, unless there's a serious objection. Thanks for your help too! Lugnuts Dick Laurent is dead 09:45, 2 May 2014 (UTC)
I would suggest that the release date should only be the film's first theatrical release date, not its premiere date. Also, regarding the deaths section, I feel that only 2 films wouldn't be enough to show notability; if anything, we should decide on a limit like 5 or 10 films. Dman41689 (talk) 08:13, 2 May 2014 (UTC)
Why do you think it should be between 5 and 10? Two or three cover most individuals, as per the standard on the recent deaths page. Lugnuts Dick Laurent is dead 09:44, 2 May 2014 (UTC)
Definitely publish it. We've already had the discussion elsewhere regarding the release date and reached a consensus that we should use the earliest date as per WP:FILMRELEASE (I, like Dman41689, was in favor of the actual release date, but that was not the decision, so we've made it, let's stick to it.) Regarding 2 vs. 5, I think you have to set a limit someplace. And while five might be appropriate for some, it would definitely be too many for others. Let's keep it simple. If the researcher wants more information, all they have to do is click on the pagelink. Onel5969 (talk) 12:50, 2 May 2014 (UTC)
I agree that the notable deaths should have more than just 2 films. I feel that 20 would be decent, though that would be a little ridiculous; 5 or 10 would be a better compromise. You have to understand that actors are known for more than 2 films, and I also have a very strong feeling that editors will fight over the 2 films that are put there; they already do that on the recent deaths pages. I also want to add (if you haven't already) that the highest-grossing films table should only include Rank, Film, Studio, and Worldwide gross (you don't have to include the studio if it's too much). I noticed that on a lot of the pages before 2000 the charts have actors and directors in them, which is pointless because they are listed below under the release date. Redsky89 (talk) 19:13, 2 May 2014 (UTC)
Since years in film articles only cover film related deaths, I actually agree that 2 films per person is too few. After all, those articles are supposed to cover more about film related deaths than Recent deaths. Nevertheless, I also think that 10 films is a bit excessive. I would suggest no more than 6 notable films per person. Remember, the notable deaths section should only list films for which each person is notable; if the person did not play a major role in a film's production, it should not be listed as one of their notable films. Perhaps we can begin each person with the films listed in recent deaths, and add as editors feel necessary.
Regarding release dates, I think the release dates for years in film articles should be every film's very first release date, regardless of whether it's a premiere, in theaters, or any other means. Eventhorizon51 (talk) 04:08, 3 May 2014 (UTC)
I agree with that. Wikipedia is not a consumer guide. We should go with the date it first plays to a public audience in some capacity. Betty Logan (talk) 04:34, 3 May 2014 (UTC)
Looking at the limit for films per deceased individual, if you look at Bob Hoskins recent death listing, only three films are listed. I don't see any evidence of edit-wars about that, or indeed, the need to replicate his filmography. Two or three notable examples per individual should suffice, with the reader being able to click on the link to find out more. Lugnuts Dick Laurent is dead 10:30, 3 May 2014 (UTC)
Right, I've been bold and added the basics to the MOS. Started to trim down the 2013 article accordingly. Lugnuts Dick Laurent is dead 17:51, 3 May 2014 (UTC)

When should a film be included?[edit]

I was wondering when a film should be included on these lists at all. Right now the lists seem to consist mainly of films that received releases in theaters in the United States. What about films that were released only in other countries, or films shown at film festivals but without a release in theaters so far? Also, I was asking over at Talk:2014 in film whether a film that gets a festival showing in one year and then a theatrical release in another year (e.g., Belle (2013 film)) should be listed on the list for the year where it was first shown at festivals even though it didn't get a real theatrical release that year. It would seem odd to me to list a film on a yearly list when it wasn't shown in theaters to a general audience until a later year, but it would also seem wrong to list it on the later year's list when it premiered in an earlier year. Calathan (talk) 19:01, 5 May 2014 (UTC)

WP:FILMYEAR should answer your second question. As for which films should be admitted to the list then in theory any released film that has an article should be accepted. If the list becomes too big then sub-lists can always be created. Betty Logan (talk) 00:49, 6 May 2014 (UTC)
WP:FILMYEAR doesn't answer my second question, and in fact the point of my comments was to try to help write that section (isn't that the point of this whole discussion)? Maybe I should have worded my comments more as statements than as questions. Basically, while I agree with most of what is currently in WP:FILMYEAR, I still felt it seemed odd to list a film in a different year than when it received its first theatrical release (i.e., release other than festivals and similar special showings). I'm also not really sure that the first date a film was shown is any more important or encyclopedic than when it was first released in theaters. To me listing the film under either date by itself just doesn't seem right. I would prefer both dates to be given in some way, but obviously that would involve major changes to the tables. While I like all the information currently in the tables, I think the theatrical release date is more important than things like genres, and if we could fit it in that would be an improvement . . . but I don't see any easy way to fit it in without removing other information. About listing all films from a year and splitting the lists if they become too big, in a previous discussion a couple years ago I suggested having separate lists by country. Now that I check, there actually are such lists (e.g., List of American films of 2014, List of Japanese films of 2014, List of Bollywood films of 2014). As I commented in that past discussion, I think having a yearly article for every film from every country wouldn't be very useful to most readers, since I think readers would be most concerned with films from their own country. However, given that there are lists for individual countries, it might be alright to list every film in the main list. 
I think it might make more sense though to just remove the list of films from the main yearly lists and instead link to each of the relevant lists for other countries for that year (obviously, that would be a big change to the main lists). Calathan (talk) 01:45, 6 May 2014 (UTC)
Films that are already listed in country-specific lists, like List of Japanese films of 2014 should not be listed in "Year in film" articles like 2014 in film. Not only is it an unnecessary duplication, the resulting list would be too long. These articles should also have more info on major national and international awards and on industry statistics, like the ones on film industry.--Cattus talk 19:18, 6 May 2014 (UTC)
I support separating films into lists by language or country of origin, but in that case I believe American or English-language films should be classified in a similar sublist, "List of American films of 2014" and so on. Why does Hollywood get to be the only one on a list with a title as authoritative as "2014 in film"? It implies that, say, Japanese films are not worthy of being considered the equal of Hollywood films and must be ghettoised into their own "Japanese" list. In my opinion, 2014 in film should either include all at once, or be a list of sublists. Otherwise, it is akin to renaming the article on "English literature" to merely "Literature", or renaming "American history" to merely "History", and assuming that most English-speakers would only be interested in Anglosphere topics. Sabre (talk) 04:46, 13 May 2014 (UTC)
I agree with your sentiments. Either all films are divided up into sub-lists or they should all be included on the main list. No single country or industry should be prioritised above another, since that would violate WP:NPOV. Betty Logan (talk) 05:24, 13 May 2014 (UTC)
In theory, years in film articles should include every film in the world released in their year. I think current years in film articles give undue weight to Hollywood simply because this is the English wikipedia and a substantial number of contributors here are from the United States. Ideally, every film in Category:2012 films should be included in 2012 in film, but Hollywood always receives the most attention because of the nationality of this wikipedia's contributors. Whether or not films should be split into lists by country is open to debate, but if they are all included in one main list, every film, regardless of origin, should be included. Eventhorizon51 (talk) 00:58, 14 May 2014 (UTC)
I think that is right, Eventhorizon. We shouldn't have a prejudice as to which films we include and that means we need to include them all. In practice, however, that is going to be challenging, since the category Category:2012 films has over 2000 film articles. We would either need to cut back to listing the bare minimum in these lists, i.e. the title only, create a separate article that hosts the listing of films, or allow the list to be broken into regional lists. Since we already have articles like List of Japanese films of 2012 and List of American films of 2012, I believe it would be more feasible to allow the list to be represented by the smaller articles. BOVINEBOY2008 12:56, 14 May 2014 (UTC)
A mix could work: a "bare minimum" approach on the main list, and add all the extra stuff to the sub-lists. Betty Logan (talk) 13:59, 14 May 2014 (UTC)
Wouldn't a link to the category be sufficient for the "bare minimum" list? BOVINEBOY2008 17:00, 14 May 2014 (UTC)
Well, if you've got complete sub-lists to link to then you don't really need to link to the categories. I'm just putting it forward as a suggestion for those editors who'd prefer to include all the films in a single article. I don't think it's necessary to list the films twice but it's there as an option. Betty Logan (talk) 18:02, 14 May 2014 (UTC)
Pretty much all main film producing countries have either lists by year like List of Japanese films of 2014, or lists by decade like List of Portuguese films of the 2010s that are usually divided by year. And in recent years, as has been said, there are over 2000 film articles per year (the most seems to be 2008 with 2,482). This is too much for a single article; if the lists are kept and include all films, the recent years need to be split into something like List of films released in January–March 2014, etc. Cattus talk 21:33, 17 May 2014 (UTC)

Highest-grossing films list[edit]

I thought this section would be a good place to discuss this topic since we're already on the subject of creating article policies. What guidelines should we implement for the highest-grossing films sections that lead these articles? How many films (and their box office grosses) should be included? Top 10? 20? 50? I noticed that countless articles (from 1979 and back) have a very arbitrary inclusion of films (some lists have between 20 and 30, while the 1979 one in particular has a whopping 40 films included). What should be the necessary cut-off point? I suggest that we keep these lists at just 10, which is what we already do for modern years (e.g. 2013 in film). ~ Jedi94 (Want to tell me something?) 01:45, 24 May 2014 (UTC)

I agree with keeping it at 10 films. I also believe it should be the top 10 for that year too, since re-release money doesn't come from that year's release. If you take 1977 in film for example, over half that gross for Star Wars came from reissues so it doesn't provide a fair comparison for that release window. Betty Logan (talk) 02:13, 24 May 2014 (UTC)
I have already implemented such changes on the articles from 1980-1999. I'll begin work shortly on the 1970s articles, keeping in mind your suggestion that the displayed box office grosses should be of that year only, and not include any subsequent and/or lifetime grosses. ~ Jedi94 (Want to tell me something?) 22:59, 26 May 2014 (UTC)
The lists of top grossing films on the pages of 1961 in film through 1979 in film are ALL my own edits. Deleting large amounts of content from these articles is NOT the least bit constructive. Those pages represent countless hours of my own research and editing, and it's literally being obliterated due to a few people's own preference for "neatness". That's not a reason that justifies deleting large amounts of content on Wikipedia, as there is no regulation or guideline that dictates a predetermined number. Providing this information in such detail in fact exposes readers to lesser known films (and hence Wikipedia articles) from those years that have been forgotten in the passage of time, and helps to shape readers' understanding of what the movie going trends were during those years. That information helps to shape one's understanding of the political and social zeitgeist of those respective time periods, which is critical in understanding the history of film, which reflects that social atmosphere. Not to mention, implementing large deletions of content because it looks "prettier" is contrary to the goal of this online encyclopedia, which is to inform. ~ Ldavid1985 15:31, 29 May 2014
Sticking to a top 10 is more than just an aesthetic issue; it increases the accuracy of the charts. In most cases our only reference point is Variety's annual published top tens, so by sticking to a top 10 we have a reference point for what should be included. The ordering may change a bit because we have replaced the studio gross with the exhibitor gross, but the key point is that the Variety lists at least tell us which films more or less make the cut. Take 1973 in film for instance: a couple of years back you removed The Devil in Miss Jones from the chart even though it was verifiably one of the highest-grossing films of the year, charting between Paper Moon and Serpico in Variety's rankings. I restored it because I was able to demonstrate using Variety's top 10 that it was among the highest-grossing films of the year. The basis for you removing it was simply because the exhibitor gross wasn't available, and by doing so you made the chart inaccurate. Just imagine, for example, if we only knew the studio gross and not the exhibitor gross of Star Wars, despite it being the highest-grossing film ever at that point; under that logic Star Wars would be removed completely from the chart. Once you get beyond the top 10 there is no guarantee that you have included all the films that should be there, since those rankings are actually based entirely on what information you have been able to find. Betty Logan (talk) 21:24, 29 May 2014 (UTC)
Betty hit the nail right on the head. It isn't just about aesthetics or consistency, accuracy is a strong focal point behind keeping the lists at the top 10.
Keep in mind, Ldavid1985, that just because you contributed immensely to these articles, no one owns any content on Wikipedia, and any editor is free to challenge and/or edit submitted article content, regardless of the effort the contributing editor invested. Most editors on Wikipedia have had their share of these experiences (I certainly have), but never should you let that possessive belief of personal sacrifice interfere with the quality of an article or with consensus. However, since these particular contributions carry some merit and undoubtedly took a lot of time and effort on your part, we are therefore here discussing the issue civilly, so that a more defined consensus can be reached. ~ Jedi94 (Want to tell me something?) 22:26, 29 May 2014 (UTC)
It should be a top 10 only and should only include the Rank, Title, Studio, and Worldwide Gross. Dman41689 (talk) 16:32, 2 June 2014 (UTC)

Films from countries that made them and TV show airdates[edit]

I want a new rule about films and TV shows and the countries that made them. Films should be listed under their release dates in the country that made them, not under the date in the country that first released the film, as is done on 2013 in film. The current practice would confuse every single reader who remembers the first release date in the country that made the film.

TV shows from various countries should likewise use only the airdates in their country of origin for their season orders, not earlier airdates from other countries, for the reasons I gave above. These changes must be agreed upon for the sake of readers from various countries. BattleshipMan (talk) 21:57, 12 December 2014 (UTC)

Lost films[edit]

Films with prints that still exist use the verb "is", but is there something that says that films that no longer have known prints, or "lost" films, should use "was"? JOJ Hutton 17:43, 5 June 2014 (UTC)

Please see this discussion and WP:TENSE. Thanks. Lugnuts Dick Laurent is dead 18:17, 5 June 2014 (UTC)

Lists of films by country[edit]

How should lists such as List of American films of 2013 and List of Japanese films of 2013 be sorted? Right now, some of these film lists are sorted by release date, whereas others are sorted alphabetically. I personally think we should add a policy to sort these lists by release date, as these articles are chronological in nature. Either way, shouldn't these lists at least be sorted the same way to reduce confusion and inconsistency? Would it be ok to add a guideline for how we should sort these lists? Eventhorizon51 (talk) 15:26, 24 June 2014 (UTC)

I think we should sort by the title because it's the first column in the table and because I don't think the release date sorting is higher priority than that. (It also makes it consistent with the year-film categorizing, but is more powerful because there is an option to sort by release date.) Open to hearing what others think, though. Erik (talk | contrib) (ping me) 15:45, 24 June 2014 (UTC)
Title isn't always the first column. The Japanese one mentioned above, along with many others, have release date as the first column. Eventhorizon51 (talk) 15:58, 24 June 2014 (UTC)
Fair enough, I did not see the second list. I still think it is a common convention to put the title in the first column and that is what we should implement across these list articles. If there are editors who think it should be by "Opening", you can ping them here to join this discussion. Erik (talk | contrib) (ping me) 16:08, 24 June 2014 (UTC)
I would be against adding a guideline of this nature. The purpose of the MOS is to encourage good editing practices, not to make editorial decisions, which this plainly is. Sorting by title or by date both have their merits but neither is fundamentally the correct or incorrect way, so if there is a dispute or editors wish to adopt a consistent format across all the year lists then the Film project talk page is the place to discuss this. One suggestion I will make though off the cuff is to make all tables sortable by date and title, and then readers aren't locked in to just one sorting system. Betty Logan (talk) 17:20, 24 June 2014 (UTC)
Betty, I think the question is which should be the default sort. I'm fine with having both title and date sortable. It's just a matter of what should be the initial sorting. It would help to establish this initial sorting across the same set of lists, but the users can re-sort the other way however they please. Eventhorizon51, is that what you're asking? Erik (talk | contrib) (ping me) 17:25, 24 June 2014 (UTC)
I know what he is saying, I just disagree that it's an issue. Both methods are fine, so it's basically down to editorial discretion as far as I can see, and making the tables fully sortable turns it into a moot point anyway. Betty Logan (talk) 18:02, 24 June 2014 (UTC)
Yes Erik, that is what I'm asking. I just checked the lists of American films, and the tables there are all sortable. However, this is not true for every film producing country. The format of these lists also differs between countries: some have film titles as the first column, and others have the release date first. Should we implement a policy to fix the format so it's the same for all countries? Eventhorizon51 (talk) 18:15, 24 June 2014 (UTC)
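For what it's worth, making a table sortable as suggested above is a one-attribute change in wikitext. A minimal sketch (the column names and entries here are hypothetical, not taken from any actual list):

```wikitext
{| class="wikitable sortable"
! Opening !! Title !! Director
|-
| January 17 || [[Example Film]] || Jane Doe
|-
| February 7 || [[Another Film]] || John Roe
|}
```

Adding the `sortable` class gives every column a sort button, so readers can reorder by title or by date themselves regardless of the default order chosen.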
One other thing: we should sort the films in those country lists by date rather than title, to make them more consistent and more comprehensive, like 2013 in film and 2014 in film. We should also set up lists of film release dates in certain countries, kind of like Lists of box office number-one films with section anchors for each year (for example, 2014 in film would link to Lists of box office number-one films#2014), rather than a template for that idea, so readers could easily go to the list of a certain country's films of a certain year, such as List of American films of 2013 and List of Canadian films of 2013. BattleshipMan (talk) 15:55, 20 October 2014 (UTC)

Additional instructions for the lead section[edit]

Somewhere between '06 and '08, links like [[2004 in film|2004]] were deprecated. I would suggest that we add wording to the lead section to make this clear to new (and, for that matter, experienced) editors. It was so long ago that I do not know where the relevant discussions took place, but I think it may have been in another wikiproject or even at WP:MOS itself. If anyone can dig them out of the archives, perhaps we can add a link to them in case any questions come up. Any other input on this will be appreciated. MarnetteD|Talk 15:55, 23 August 2014 (UTC)

Notification: RfC on Game of Thrones and chapter-to-episode statements[edit]

There's an RfC going on for which WP:FILMDIFF might be relevant.

RfC: Should the article state which chapters appear in the episode? is meant to determine whether Game of Thrones episode articles should have a statement like "This episode was based on [specific chapters] of [specific book]" in the body text. The first four respondents present the arguments for and against inclusion pretty thoroughly. This RfC is specifically about just one episode, but the outcome of this RfC is likely to affect all Game of Thrones episode articles. Right now, some of them have chapter-to-episode statements and some don't. They look and are placed like this: [1] Participation is greatly appreciated. Darkfrog24 (talk) 18:18, 1 October 2014 (UTC)

Audience response[edit]

I've elevated the "Audience response" section (which was under the "Critical response" section) as its own section. The "Critical response" section's introduction does not reference audiences at all because it is intended to be a guideline about how to report on critics responding to the film. The "Audience response" section's placement has led to a misunderstanding that CinemaScore must be a part of the critical response. This should not be a requirement. I would prefer for CinemaScore to be part of theatrical run and box office content, since box office grosses are a form of audience response to the film, and so is the CinemaScore grade. But right now, the section stands alone, so how to use CinemaScore can be on a case-by-case basis. Erik (talk | contrib) (ping me) 21:11, 3 November 2014 (UTC)

Flyer22, why are you under the assumption that "Critical response" includes audiences? If it was a generic "Reception" section, that would make sense, but it makes no sense to put audiences' reception of the film under a section meant to guide how to cover critics' reception of the film. Erik (talk | contrib) (ping me) 22:52, 3 November 2014 (UTC)
The reason that I have reverted is per what I stated at Talk:The Maze Runner (film), which is the talk page of an article that still has a serious Lkaliba (talk · contribs) problem. It's not being "under the assumption that 'Critical response' includes audiences"; it's about personal preference when it comes to reception layout.
I stated: As for CinemaScore, as noted here, I happen to prefer CinemaScore material being in the Critical response section, as a contrast to what the critics stated. WP:MOSFILM currently allows it in that section, and putting it there has become routine for Wikipedia film articles...more so than including it in the Box office section, especially since it's more about what the audience thought than about box office numbers and the like. Also, to me, it's better to include it in the Critical response section than to create an unnecessary Audience response heading. Per MOS:PARAGRAPHS, "Short paragraphs and single sentences generally do not warrant their own subheading." It irks me when people create a subheading for a sentence or for a very short paragraph.
I also stated: Regarding WP:MOSFILM, I'm the one who re-made the Audience response section a subsection of the Critical response section in January of this year (because, as noted at the guideline talk page, I didn't want editors to be tempted to create Audience response sections just for the sake of creating them; having that section formatted as part of the Critical response section at WP:MOSFILM makes that less likely to happen, in my opinion). I know that "CinemaScore's placement in MOS:FILM [does not mean it has] to be in the critical reception section." After all, the lead of WP:MOSFILM states, "There is no defined order of the sections; please see WikiProject Film's Good Articles and Featured Articles for examples of appropriate layouts. Since the page is a set of guidelines, it is subject to change depending on Wikipedia policies or participant consensus. For other guidelines, see Wikipedia:Manual of Style." I cited WP:MOSFILM allowing CinemaScore in the Critical response section to show that it's not wrong to place it there. It's my and others' personal preference that it goes there. It's your and others' personal preference that it does not go there. I don't think that we should change WP:MOSFILM to state that CinemaScore should not go in the Critical response section. In some cases, such as when a Box office section is all about numbers and has no commercial analysis detail, it might fit best in the Critical response section. In other cases, such as when the Box office section does have commercial analysis detail, it might fit best there. Flyer22 (talk) 22:57, 3 November 2014 (UTC)
The problem is we cannot leave "Audience response" under "Critical response" because it makes editors think that it has to be under that. If the only problem with moving it out is the concern that editors will create "Audience response" sections all of a sudden, I've added a disclaimer in this regard and have also stated that CinemaScore content should be placed where appropriate. Erik (talk | contrib) (ping me) 23:12, 3 November 2014 (UTC)
Thank you for this edit, Erik. I tweaked it here and here. Before your latest edit there, I was going to state the following: If we make clear that a standalone section is not needed, or that a subsection is not necessarily needed, for Audience response, then I won't mind if I'm reverted on my revert of you. And to be absolutely clear, CinemaScore had been listed as part of the Critical response section before I re-indicated that it is part of it in January of this year. Wikipedia film articles were already including it there, so I don't think that the heading level indication I added had much of an impact in that regard. Betty Logan's "19:53, 26 July 2013" edit separated the audience response material from the Critical response section. Flyer22 (talk) 23:31, 3 November 2014 (UTC)
I would suggest that making "Audience response" a separate section or subsection will create stub segments, first of all, but more significantly it will elevate the importance of transient audience response in a way that seems of undue weight. Certainly, a film's box office already indicates a film's popularity with audiences, so perhaps the CinemaScore figure would fit thematically in the "Box office" section.--Tenebrae (talk) 00:46, 4 November 2014 (UTC)
Tenebrae, yeah, I would obviously hate it if people simply look at the current layout for MOS:FILM, without reading its Audience response section, and think that an Audience response section should be created. But, per what I stated above, I also don't want the guideline indicating that CinemaScore should go in the Box office section. Similarly, Erik doesn't want it indicating that CinemaScore should go in the Critical reception section. Like I noted above, the lead of the guideline already states, "There is no defined order of the sections; please see WikiProject Film's Good Articles and Featured Articles for examples of appropriate layouts. Since the page is a set of guidelines, it is subject to change depending on Wikipedia policies or participant consensus. For other guidelines, see Wikipedia:Manual of Style." So we have that right there for readers to get the point, and Erik and I have clarified the Audience response matter in the guideline. Furthermore, the Release, Critical response and Box office sections are all separate sections in the guideline, and yet it is standard practice that these sections are presented together; I mean, for example, it's common that a Wikipedia film article will have a Release section that is divided into a Box office section and a Critical reception section. So if readers won't take the guideline literally in that regard (the Release, Critical response and Box office sections being separate sections in the guideline), we should think similarly regarding the Audience response section being separate in the guideline. Another option for the guideline is to combine the Audience response section with the Release section...without the Audience response subheading. This is because the Release section currently notes leeway by stating, "Coverage will vary by film, and editors can structure the content in a way that serves readers best... 
...Presentation of content about a film's release and reception can range from a simple 'Release' section to several sections with their own subsections within." The Box office section similarly states that its "information can be included under the Reception section, or if sufficient coverage exists, it is recommended that this information is placed in a 'Box office' or 'Theatrical run' section." Flyer22 (talk) 07:42, 4 November 2014 (UTC)

Comment I think this is mostly my fault, because it used to be a sub-section until I made it a stand-alone section last year; effectively Flyer was just part-reverting my edit, which she is entitled to do. My thinking at the time was that, like Erik, I didn't feel we should imply it is a type of critical reaction, so it didn't really belong as a sub-section. On the other hand, I take Flyer's point that editors who look at the MOS may draw on our chapter structure, especially since we don't explicitly state where such coverage should go. In some cases I think a standalone section may be warranted (we have a "black reaction" section at the Gone with the Wind article, after all) if we have substantial commentary, but on the other hand there isn't much point creating a section just to host a sentence about CinemaScore. If we agree that an "audience response" that doesn't merit its own section is a better thematic fit with the box-office section, maybe we could just merge the sections in the MOS and call it "Box-office and audience response" or something. Betty Logan (talk) 09:41, 4 November 2014 (UTC)

Betty, since the guideline included the Audience response section as a subsection before your aforementioned 2013 edit, I don't fault you in this matter (or Erik). It's not like your alteration led to creations of Audience response sections (that we know of, anyway). What do you think of what I stated in my "07:42, 4 November 2014 (UTC)" post above, about the fact that we have other sections in the guideline that are standalone but have been working out fine because editors have not been taking the design literally, and what I suggested as an alternative? Flyer22 (talk) 10:04, 4 November 2014 (UTC)
You mean recommending that the audience response be included in the "box-office/theatrical run" sections? Yeah, I definitely think it would be helpful if we did advise on where to place this content. At the moment we don't give much guidance as to where we stick this information. Betty Logan (talk) 10:02, 5 November 2014 (UTC)
Not exactly. I stated, "Another option for the guideline is to combine the Audience response section with the Release section...without the Audience response subheading. This is because the Release section currently notes leeway by stating, 'Coverage will vary by film, and editors can structure the content in a way that serves readers best... ...Presentation of content about a film's release and reception can range from a simple 'Release' section to several sections with their own subsections within.'" Then, I noted that the Box office section similarly states that its "information can be included under the Reception section, or if sufficient coverage exists, it is recommended that this information is placed in a 'Box office' or 'Theatrical run' section." I don't want the guideline indicating that we should put this material in the Box office section, as though we aren't open to putting it in the Critical reception section; it simply is not my preference to put it in the Box office section, and it's clearly not the preference of other Wikipedia film editors. I also reiterate that "the Release, Critical response and Box office sections are all separate sections in the guideline, and yet it is standard practice that these sections are presented together; I mean, for example, it's common that a Wikipedia film article will have a Release section that is divided into a Box office section and a Critical reception section. So if readers won't take the guideline literally in that regard (the Release, Critical response and Box office sections being separate sections in the guideline), we should think similarly regarding the Audience response section being separate in the guideline." In other words, we should simply leave the Audience response as a standalone section in the guideline, and see how that works out, similar to what you did before, especially since we have now clarified that a standalone section for it is not needed (usually not needed anyway). 
The African-American reaction section you mentioned regarding the Gone with the Wind (film) article is not a purely standalone section; it's a subsection of the Reception section, and to have a subsection like that is, of course, fine. Flyer22 (talk) 11:43, 5 November 2014 (UTC)
I don't think the location of the section in the MOS is as important as clarifying where the information should go in the article itself. I may have missed this, but where exactly do you think editors should put the content in the actual article, in the typical case i.e. basically a sentence or two about the Cinemascore poll? The "box-office" section? The "critical response" section? The general "release" section? If we are explicit about that then the location of the section in the MOS isn't really a big deal for me. Betty Logan (talk) 13:18, 6 November 2014 (UTC)
Betty Logan (pinging you via WP:Echo so that you don't overlook this), I just noticed your latest reply minutes ago; I apologize for replying three days later. As noted above, I prefer that the CinemaScore information go in the Critical reception section; Erik prefers that it goes in the Box office section. I told him above, and previously, "I don't think that we should change WP:MOSFILM to state that CinemaScore should not go in the Critical response section. In some cases, such as when a Box office section is all about numbers and has no commercial analysis detail, it might fit best in the Critical response section. In other cases, such as when the Box office section does have commercial analysis detail, it might fit best there." So we've compromised on the Audience response section in the guideline, and it currently begins by stating, "This content is not intended to be a standalone section, or necessarily a subsection, in a film article. Polls of the public carried out by a reliable source in an accredited manner, such as CinemaScore, may be used and placed in the appropriate release or reception-based section, depending on the available context." Flyer22 (talk) 12:12, 9 November 2014 (UTC)

Proposal to include selected audience response stats (contrary to current guideline)[edit]

This addresses the substance of the "audience response" guideline as opposed to the documentation of it:

Currently, user ratings from movie coverage sites are discouraged: "Do not include user ratings submitted to websites such as the Internet Movie Database or Rotten Tomatoes, as they are vulnerable to vote stacking and demographic skew." However, in the specific case of Rotten Tomatoes and Metacritic, it seems appropriate to include their user ratings along with their critic aggregate ratings, as they themselves do, taking into account these considerations:

  1. Both sites display critic and user ratings side by side and with equal visual weight; in other words, in practice, side-by-side ratings are their primary public offering.
  2. The difference between the two ratings is an interesting and useful comparison between the "pros" and the popular taste (one that can be correlated with box office stats to get a better impression of actual overall reception).
  3. Both sites publish user ratings regardless of, or perhaps taking into account or countering, the potential for vote stacking and demographic skew. We are trusting their brands to vet the critics included in the critics score; how can we not apply the same trust to their user ratings? Have we vetted the critic-approval criteria of those sites? Their critics include web-only critics and freelance critics whose individual reviews haven't necessarily been published anywhere of note, each with a much greater potential to skew a score since the group is smaller.
  4. Rotten Tomatoes and Metacritic are both primarily "pro reviewer" aggregate sites, whereas IMDb is a general movie info site, so the cases are different (on the former, there is always the check-and-balance of the critics rating vs. "the people").

The plain argument is that the side-by-side scores are engaging and useful in real life, and present an added dimension to critical reception that "pro critics" alone don't. The two typical scenarios are movies with a great difference between critics and public, and those where the opinions of the two are similar, from one end of the percentage scale to the other. Since the guideline seems to approve of Rotten Tomatoes and Metacritic, it makes sense to trust their brands fully and accept their full product: side-by-side critic and consumer ratings. --Tsavage (talk) 02:52, 23 November 2014 (UTC)

Like the guidelines say, user ratings are subject to vote stacking and demographic skew. Vote stacking means that the votes are uncontrolled and can be abused. In addition, the votes are not necessarily reflective of the people who went to see the movie. It tends to skew toward younger men in most instances. This can be seen in the demographic breakdown of IMDb's user ratings. This is why we use CinemaScore; they conduct bona fide polls of the opening weekend audience. A grade is reported, and the demographic breakdown is provided. Sometimes they state a second grade for a particular demographic. But user ratings online are not controlled like this, so they are not a good gauge for audience response. I think CinemaScore and box office performance (small drop, week-to-week) are solid enough indicators of audience response. Erik (talk | contrib) (ping me) 14:09, 23 November 2014 (UTC)
Metacritic and Rotten Tomatoes are useful specifically because their critical scores aggregate already-reliable sources. Their user scores don't do that. If there are concerns with the reliability of the critics they use, our response should be to become more cautious (e.g., using the "Top Critics" rating only). What they are doing with their user reviews isn't the same sort of thing at all, and the fact that the two scores are listed side by side means nothing. There are news sites that prominently display user comments alongside their articles, but that doesn't mean we can start citing the comments.--Trystan (talk) 15:54, 23 November 2014 (UTC)
Thank you for your clarification (Erik, Trystan). Please believe that I fully comprehend what you are saying. I came to this point from having my inclusion of the aforementioned user ratings reverted in a particular movie article. I added that information because I believe - as noted above - it gives an additional, valuable and interesting perspective. I will use CinemaScore for the time being to avoid more reversions, however...
I see the general point, in the quest for verifiability, reliability, NPOV, but that can at times be attempted with too narrow, kinda reactionary thinking. In this case, the way both services present themselves, Rotten Tomatoes' combination of critic and user ratings is a unique measure and product on its own (same for Metacritic), regardless of how the ratings are derived. In practice, you go to either site and scan both ratings. Some folks will favor the critics, some the audience, and others (myself included) take into account the spread as a distinct measure. A low critics rating with a significantly higher audience rating has proven quite a reliable indicator that, especially when taking into account the genre, the movie is a lot more watchable - "better" - than the critics have indicated, which drives action: buy, rent, download, watch. In some cases, both ratings are low, rather reliably indicating a bad bet. This is quite unlike IMDb, where there is no spread, only an unpredictable user rating. While CinemaScore supplies an audience rating, what is missing is the dynamic of user ratings being racked up against clearly present critic ratings.
My larger argument is that here is a case where we have to be bold and tackle unusual new online situations, such as when the "establishment" ("pro" critics) is pitted against the masses. This typically arises online, in more or less real-time, with potentially vast numbers of participants. Case in point, a story from the last couple of days - Kirk Cameron Begs Fans To Save ‘Saving Christmas’ Through Positive Reviews — But His Plan Backfires - where the star of a movie successfully urges his fans to stuff votes on Rotten Tomatoes and has the tables turned: critics vote is under 10%, initially low audience vote is run up to 90%+ by stuffers, then run back down to, currently, 40% by counter-stuffers, apparently triggered by a Reddit post. This seems to indicate the audience rating's relationship with the critics rating, as an offset (and also indicates the Net's tendency to be self-regulating, like here at WP).
As a side note: this is absolutely not the same as putting an article from a reliable publication on par with its user comments. Using the audience rating is equivalent to using the number of comments (or perhaps the number of positive or negative comments or other meta information made available), not to quoting individual ratings or comments.
There is nothing radical about my suggestion as far as encyclopedic considerations go; simply be bold and see things for what they are. Readers can use their own minds, in this case, they don't need WP editors' intellectual protection: by reporting both numbers, critics vs audience, as they appear on both sites, we are simply recording the public reality that is Rotten Tomatoes (and Metacritic) c. 2014. --Tsavage (talk) 18:52, 23 November 2014 (UTC)
Strongly disagree. "Critics pitted against the masses"? What, do you think critics get together in a room someplace and collude on how to "get" Kirk Cameron? That is paranoid craziness.
How much an audience likes a movie is reflected in a movie's box office. And we already have Cinemascore. Any other audience rating can be gamed by studios paying consumers to post good things. --Tenebrae (talk) 04:09, 25 November 2014 (UTC)
You're just repeating the existing guideline to me, not sure why. And you misunderstood my example: Members of the public tried to offset a terrible critics score on RT by stuffing the audience rating, and were countered by more members of the public counter-stuffing, netting a corrected situation. Self-regulating. Speaks to the value of crowd-driven results. Pretty much like Wikipedia itself.
My proposal is about NPOV, striving for balanced coverage, to present as complete a picture as possible of the subject at hand.
Here's just a small but representative sample of four paragraphs worth of critical response quoted in the article for Dracula Untold, a film that did $200m+ worldwide in its opening weeks:
  • "Much like the recent, widely reviled I, Frankenstein, this misconceived project mainly signals a need to go back to the drawing board."
  • "Neither the Dracula we need nor the one we deserve."
  • "The film's problems aren't limited to liberal cadging from comic books."
  • "This Vlad the Impaler has all the edge of Vlasic the pickle."
That's not expert semiotic analysis of film, it's WWE wrestling, it's hype and gratuitous alliteration churned out for a quick thrill, further emphasized by excerption in Wikipedia, yet because it is "professional critics," it gets a quite lengthy section. As for the views of the actual filmgoers, widely manifest all over the web in blogs, comments, review and rating sites: that's all conspicuously absent here, except for perhaps a single letter from anointed subscription service CinemaScore (see more about them below).
Meanwhile, on Rotten Tomatoes, it's Critics: 23% (108 reviews), Audience 61% (37,000+ ratings).
That picture is seen by millions (RT is Alexa #523 on the Web). Yet somehow, we want to surgically suppress part of that basic, iconic, public information, based on misapplied principles.
My point is not that any and all who manage to post online should be freely quotable in Wikipedia, about films or anything else. Course not. This is about avoiding bias, about not favoring one ratings score over another score that sits right beside it based on speculation that it "could be queer." In this case, it is irrelevant if the audience rating is "demographically skewed" (citation needed, says who?) or "vote stacked," neither of which is a proven charge (again, like arguments against the value of Wikipedia), when what's important is that, like the movie itself, for better or for worse, IT'S OUT THERE, it is the reality of Rotten Tomatoes, it's what a notable, highly popular public source of movie ratings presents. Why do we not recognize that? The audience response number should be in there, alongside Vlasic the pickle.
As for CinemaScore's credibility: The company charges studios $30,000-$50,000 annual subscription rate, is run by a father and son from their suburban Las Vegas home, and derives its ratings from pollsters in "about five theatres" around the country. Recently, subscriber Warner Bros, unhappy about a B+, got a "recount" that bumped it to A- in time for media calls. "The unprecedented redo irked rival studios and came at a critical juncture for CinemaScore, whose influential letter grades of moviegoer sentiment have been criticized for relying on outdated polling techniques and too limited a sample." - Hollywood Reporter, 10/18/2013 --Tsavage (talk) 18:07, 25 November 2014 (UTC)
I'm all for not including CinemaScore, and I would support a proposal not to. But this discussion isn't about that, and complaining about CinemaScore is off-topic.
As I said, and was not addressed above, a film's box office says all we need to know encyclopedically as to whether a film is popular. What could be more indicative than hard dollars and cents? Including easily manipulated audience ratings presents a subjective picture under the guise of statistical accuracy. Just because an audience survey is included in Rotten Tomatoes, which is not exactly the Quinnipiac Poll, that doesn't mean an encyclopedia has to include it.--Tenebrae (talk) 18:20, 25 November 2014 (UTC)
CinemaScore is central to what I'm addressing: it is the one acceptable source for audience response in the current guideline, and it is what I had to turn to to provide an audience rating when my inclusion of Rotten Tomatoes and Metacritic data was reverted in an article per this guideline. You vouch for it yourself in your comment just above: "And we already have Cinemascore. Any other audience rating can be gamed by studios." I'm not complaining about it, just pointing out, factually, that your, and the guideline's, faith in CinemaScore as an unimpeachable source seems seriously misguided.
On what do you base "a film's box office says all we need to know encyclopedically as to whether a film is popular," other than opinion? When we encyclopedically report on box office, it is just a number from an accepted-as-reliable source reporting financial information. Yeah, the popular presumption is that it represents bona fide ticket sales from everyday people, but does it really? Is there a way to stuff box office results (motivation: box office perception affects future sales)? For example, does box office include the value of comp tickets reimbursed by the studio? That's rhetorical. We are not here to vet the money trail, just to report it as one part of the picture. In the same way, audience reception, in the form of quotes and ratings, is certainly another, distinct dimension of movie coverage, not "exactly the same as box office," and it's open for inclusion if the sources meet WP's reliability standards.
Let's say the New York Times quoted a filmgoer to illustrate a point in an article. We could quote that quote, because we consider the NYT reliable, which extends to its film critics. In the same way, Rotten Tomatoes presents an audience rating alongside its critics rating. We have approved Rotten Tomatoes explicitly as a notable, credible site, so it follows that all of its prominently featured product is fair game for inclusion. --Tsavage (talk) 19:19, 25 November 2014 (UTC)
I'm not sure the definition of "vouch for" is clear. I don't "vouch for" CinemaScore. Indeed, I do the opposite: I say I don't like it myself and would support deleting it from our guidelines. It's incorrect to suggest that someone saying, "I'm a team player following the guidelines even though I may not agree with all of them" is vouching for one when he in fact says the opposite. But again, the use and viability of CinemaScore is a separate discussion. Consensus after debate was to include it in the guidelines, so unless consensus changes, use of CinemaScore is allowed.
There's nothing in Wikipedia guidelines that say just because we accept one thing from a website we have to accept all of it. In fact, the reliable-source guidelines say just the opposite: that the same source can be reliable for some things and not others.
We'll agree to disagree on box-office receipts as a measure of popularity. Speaking for myself, I believe that audience ticket-sales indicate Frozen is popular with audiences. --Tenebrae (talk) 00:59, 26 November 2014 (UTC)
Vouch for, endorse, put forward, you do that when you say "Any other audience rating [than CinemaScore] can be gamed," even if you're toeing the line and don't actually believe what you're saying. And please point me to the section on reliability you're referring to, unless you mean the overall "use common sense" guideline that applies to everything, which I try to do. And I don't agree to disagree with you, because I do believe box office is a measure of popularity, just not the only one, and not a suitable replacement for audience response.
Why is this audience response point so difficult to convey (or is it? Only three people/editors have responded)? A high-profile feature like the Rotten Tomatoes audience rating score is well-known, and common sense says most people who use it understand that it's a rough and ready public poll, not the PriceWaterhouse Academy Award treatment for user voting; caveat emptor is fully in force. It's part of the modern landscape. I use it all the time to decide on movies: first stop, compare the critics and audience percentages on Rotten, glance at a few reviews from both groups, then maybe double-check at Metacritic, and seldom, but once in a while, check in at Box Office Mojo as well if it's an older movie. This is what people do. It works quite well. Listening to critics alone is not at all as reliable. And we should be able to report that on Wikipedia, instead of supporting some old-school hierarchy where "pro critics" get to compare movie characters to popular pickles, because they're designated experts, yet audience ratings from popular sources are too unreliable to even be mentioned. 20th-century commercial print media is imploding; who knows who they're even aiming at these days with their reviews, who their film critics are targeting, what systemic bias is at work in their reviews. They certainly are not the only critical voice widely consulted, yet we are somehow trying to uphold that structure.... --Tsavage (talk) 04:39, 26 November 2014 (UTC)
Per WP:RS, "The reliability of a source depends on context. Each source must be carefully weighed to judge whether it is reliable for the statement being made in the Wikipedia article [emphasis added] and is an appropriate source for that content." The audience data on Rotten Tomatoes is all user-generated, so that part of RT is not usable. As for audience-member quotes, we don't use people's comments from forums or comment sections, and we don't use non-experts' comments from blogs. So I'm not sure why we'd use non-experts' comments from any other source.
RE: "caveat emptor is fully in force. ... I use it all the time to decide on movies" . That's another good point: WIkipedia is not a consumer guide. --Tenebrae (talk) 05:41, 26 November 2014 (UTC)
Consumer guide? Caveat emptor referred to using Rotten Tomatoes, and the reasonable assumption that readers are aware of the nature of open online polls. And how is recording a Rotten Tomatoes audience score any more likely to turn Wikipedia into a consumer guide than including critics' ratings and numerous quotes from their reviews?
With WP:RS, it seems like you are applying the wrong standards here. I'm not presenting the Rotten Tomatoes audience rating as a reliable source for audience reaction; I'm talking about it as a branded piece of film information from a recognized film information brand, used with attribution, like a quote.
The WP:RS case would apply to writing: "Meanwhile, unlike the 23% of critics, 61% of filmgoers rated the film 3.5/5 or better." That would require a reliable source, a statistically sound survey of all filmgoers...
What I'm concerned with here is writing: "The Rotten Tomatoes audience score was 61%, compared to their critics score of 23%."
Rotten Tomatoes doesn't make any claims of statistical accuracy for its audience score. It is what it is. It's their score. Why can't it be quoted as such? --Tsavage (talk) 07:14, 26 November 2014 (UTC)
We wouldn't include audience submitted ratings on Rotten Tomatoes any more than we would quote readers' letters submitted to the New York Times. They are subject to manipulation and demographic skew. Only audience statistics taken by professional pollsters should be included since they actually take a sample that is representative of a film's audience, rather than a possibly unrepresentative sub-section of the audience. Betty Logan (talk) 10:03, 26 November 2014 (UTC)
CinemaScore is different from user ratings because it is a bona fide poll. This means the appropriate sample is taken, where for a user rating on a website, a person has to go to that website and submit their grade, maybe even having to register on that website to be able to do so. I also almost never see side-by-side usage of both Rotten Tomatoes critic ratings and user ratings; it is the critics that are most frequently highlighted. As for quoting the critics directly, I find the Dracula Untold quotes very poor as well. Using only critics' aggregate scores does not mean these quotes are warranted. They go against WP:SLANG, and the best practice for using such quotes is to actually convey why a critic liked or disliked a film, not to regurgitate their oh-so-clever prose. I would support a guideline encouraging better quoting of critics, but I still oppose user ratings. I would love to see more polls like CinemaScore's; there is another such company called PostTrak (covered here), but its results are not being publicized for individual movies. Erik (talk | contrib) (ping me) 15:26, 26 November 2014 (UTC)
We seem to be going in circles. I get your point, and I've tried to demonstrate that over and over, but no one seems able to acknowledge that I get it, and address MY point. Once again, your point is:
  • "What does this number represent?"
  • "It's the Rotten Tomatoes Audience Score."
  • "So it's an audience score?"
  • "Yes."
  • "Well, then, we can't use it because Rotten Tomatoes is not a reliable source for audience scores, it can easily be gamed."
I am not arguing with that. I agree that RT's online audience scores can't be shown to be based on properly constructed statistical surveys. I am not proposing that we allow typical online user ratings to be used as accurate representations of the real world.
My point is that the Rotten Tomatoes Audience Score, presented together with the Tomatometer (critic) score, is a unique cultural artifact, a highly visible, recognizable, influential, therefore notable part of the contemporary movie review landscape, and as such, it should be usable in Wikipedia. In that context, it is not up to us to judge whether it is skewed or not, any more than it is to judge whether an individual critic's review is accurate or not before quoting him. Rotten Tomatoes, by virtue of its reach, visibility, and ownership (Warner Bros.), is a significant and bona fide publisher of film information, be it critic aggregate scores, or ratings derived from its millions of users. We are simply reporting on that. As long as it is clearly identified as such and presented with the corresponding critics score, the "Rotten Tomatoes Audience Score" should be acceptable based on notability. --Tsavage (talk) 20:45, 26 November 2014 (UTC)
I actually did respond that it's user-generated content and, at base, unusable for that reason. --Tenebrae (talk) 01:40, 28 November 2014 (UTC)
Why do you think the audience score should be paired with the critic score? Periodicals that report on the critic score rarely include the audience score with it. That means the audience score is not as noteworthy. Another thing is that movie websites are commercial in nature. IMDb has long used user ratings, and presumably other websites copied that feature as part of the competition. Wikipedia's goal is to reference content that has encyclopedic value. To that end, the content needs to have integrity. The consensus is that most editors do not have a problem with how Rotten Tomatoes or Metacritic aggregate their scores. The consensus against user ratings was developed because of the reasons stated above as applied to IMDb. I do not see reason to assume that these reasons do not apply to other websites' user ratings. These ratings are not qualified enough to be encyclopedic content. Erik (talk | contrib) (ping me) 22:34, 26 November 2014 (UTC)
The RT critics and audience score should appear together because they are presented together on the RT site. They are a package, intentionally, and in how people use them. The critics score is the one that's generally quoted, on its own, in the media, because it is a convenient bit of content to fill out a story, "what all the critics say." For encyclopedic purposes on Wikipedia, where we're trying to provide a comprehensive, verifiable, neutral overview, I think being able to report on what a major influence in the film world today, Rotten Tomatoes, has to say is important. It is for exactly the same reason as we use quotes from critics' reviews - to provide a richer context.
Here's what Flixster, the parent company, and Rotten Tomatoes have to say about the two ratings: "Flixster and Rotten Tomatoes combine critical reviews and audience ratings to provide an overview of a movie's quality." (Flixster/Rotten Tomatoes FAQ) That's the developers speaking. And what I'm talking about is reporting that "overview of a movie's quality," since it is at least as notable as any film critic's.
I'm not sure if you are getting my point. The Rotten Tomatoes Audience Score is a class of one; there is only the one. A McDonald's hamburger may not be a great hamburger by whatever measurement standard you choose, may not even rate being called a hamburger, but it's still a very popular "hamburger" in the real world that you can't just not mention when discussing hamburgers in Wikipedia. --Tsavage (talk) 02:38, 27 November 2014 (UTC)
BTW, I addressed the IMDb difference in the original proposal at the very top of this section: 4. Rotten Tomatoes and Metacritic are both primarily "pro reviewer" aggregate sites, whereas IMDb is a general movie info site, so the cases are different (on the former, there is always the check-and-balance of the critics rating vs "the people"). --Tsavage (talk) 06:49, 27 November 2014 (UTC)
Readers' comments on online newspaper articles are often paired with the article, but that does not mean we would reference reader comments. It is user-submitted content, which we do not use on Wikipedia. On top of that, the audience stat is generally regarded as insignificant by secondary sources, i.e. it is common for the likes of The Wall Street Journal, LA Times and The Hollywood Reporter to refer to the "tomatometer" stat, but as a rule they don't cite the audience figure. Betty Logan (talk) 07:40, 27 November 2014 (UTC)
WP:BB: "Wikipedia does not "enshrine" old practices: bold changes to its policies and guidelines are sometimes the best way to allow the encyclopedia to adapt and improve."
'"Reviewing the Movies: Audiences vs. Critics" New York Times August 14, 2013': "Like any critic, I’m occasionally lectured about how critics are out of touch with The People. Critics regularly pan works that become immensely popular, after all, and praise some that roundly displease audiences. This gap seems especially true (or at least measurable) in film. ... So I wondered: Can I quantify the gulf in tastes between critics and the general public? ... The Times’s interactive news team helped me match up film listings with Rotten Tomatoes data on critics’ scores versus audiences’ scores. (We used Rotten Tomatoes data because it has not only public A.P.I.’s, but also a gigantic user base, with thousands of user-submitted reviews for even relatively small films. That makes it a good way to take the “pulse” on popular views of a wide swath of films, since it’s impossible to conduct a randomized poll of viewers of every film ever made.) ..."
So we have the New York Times not only publishing Rotten Tomatoes audience scores (alongside critic scores), but actively tabulating and analyzing them in exactly the same way I've been saying people do every day - the Rotten Tomatoes critic-vs-public score is a cultural reality, a sign of the times, a prominent part of the movie landscape, and it is bizarre and not objective or neutral to ban this info from Wikipedia, because of a misapplied standard, sweeping all "user-driven" content into one no-fly bin. We're supposed to consider case by case, to arrive at the best coverage per topic and situation. --Tsavage (talk) 08:25, 27 November 2014 (UTC)
@Betty Logan: Just reread this thread and saw that you twice equated my proposal with quoting reader comments on articles at New York Times or other online newspapers. I hadn't directly responded...
The comparison of an audience rating score to article comments is apples to oranges, comparing two entirely different things. A more appropriate comparison would be if an online newspaper published articles staff-written by professional journalists, headline beside headline, column beside column, with user-submission-driven articles on the same subjects (articles like at, an entirely user-driven online newspaper, owned by a multi-billion-dollar parent company). That isn't the case, so far as I've seen, and if it came up, I think it would require consideration.
So much of the Net is user-driven, from open source software like Apache and Linux that run it, to major content destinations like Wikipedia itself, that we can't just lump every case of "user-driven" product into one unreliable category, when clearly it is not. New day, new rules, on a case by case basis, is my point. --Tsavage (talk) 17:33, 27 November 2014 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── At this point, I'd have to say you are the only person advocating this. It's obvious there's no consensus here for changing the guidelines, and a clear consensus is required. I would suggest two options. One, of course, is to say, "Well, I gave it my best shot. Maybe another time." And the other is to begin a formal RfC in which, perhaps, a wider array of editors will comment. You're free to continue talk-page discussion, of course. But without consensus, the guidelines can't change. --Tenebrae (talk) 01:46, 28 November 2014 (UTC)

Yes, I agree, slow going. We need at least a few more voices. So far, it's been three editors taking turns trying to convince me that I'm wrong by repeating the same user-driven point over and over, without addressing my concerns. Without real discussion, I wouldn't call it consensus, just a vote. Are there more editors lurking, or is this the normal participation level for this page? I see 463 members in the Project Film active participant list; I guess they're not around atm.
I wasn't familiar with RfC so checked it out. That's a possibility.
Putting more time into this, I think, means dealing with the larger context, the overall Reception guidelines, which I was trying to adjust in part with this proposal.
Reception coverage is significantly biased, not what I would call neutral, not comprehensive, and not all that accessible. I examined a number of film articles and that evaluation seems consistent. Key points:
  • Too much emphasis on box office figures without context: These numbers require some expertise to interpret, and don't convey much to a general reader. You have to be able to read the figures (number of screens, period, totals relative to other movies, etc). Also, these are dynamic figures especially in the first months of release, but they are not updated uniformly, so comparison between film articles can be practically meaningless. The usual barrage of figures is actually not reader-friendly without context.
  • Too much emphasis given to individual critics via arbitrary quotes: Quoting a sentence or two from a single review doesn't give an overview of all reviews, because the writing styles vary so much, and most tend to be hyped and bombastic. And, quoting a negative review for a predominantly positively reviewed film (and vice versa), creates more imbalance, because the sensational language gives undue emphasis to a minority view.
  • Unbalanced presentation of professional criticism vs audience response: The guideline is heavily biased towards "professional critics," allowing aggregate scores and review quotes, while entirely disallowing audience feedback (with the exception of CinemaScore), ignoring the notable extreme popularity and impact of particular user-driven sources (e.g. Rotten Tomatoes Audience Score, per my proposal above).
This biased coverage seems to stem from pre-Internet, print-era thinking and realities. Maybe 20, 30, 40 years ago, newspaper film reviewers had a lot of exclusive access and public reach, and therefore held a lot of sway in influencing audiences. Now, everything is available to everybody, one-to-many communication is available to all, and the way people evaluate movies is NOT just by reading professional critics' reviews, yet this is the imbalanced picture the Reception guidelines encourage and essentially impose. --Tsavage (talk) 18:55, 28 November 2014 (UTC)
One way to address a good part of this might be a Critical evaluation section, written/edited from a film studies perspective, derived from and sourced to critic reviews. We have the cheap version of that now: a convenient "aggregate score," a few zingy review quotes, and a jumble of figures is not a suitable replacement for what should be a summary of what critics thought and how the film was publicly received, put in plain English for general encyclopedia readers. --Tsavage (talk) 19:13, 28 November 2014 (UTC)
(edit conflict) It is irrelevant that Rotten Tomatoes presents the audience score next to the critic score. It is a commercial website that wants its users to participate more actively. Sources independent of Rotten Tomatoes that reference the website's content will reference the critic score far more than the audience score. That reflects that one score is far more noteworthy than the other. The article in The New York Times would be a great reliable source to cover in a Wikipedia article about user ratings of films, but this exception is not the rule. It is far more common that the critic scores are reported independently than the audience scores.
If a box office passage is not reader-friendly, it should be improved per WP:RAWDATA. Individual quotes from critics are also appropriate per WP:DUE: "The majority view should be explained in sufficient detail that the reader can understand how the minority view differs from it." As I've already stated, sensational language should be rewritten per WP:SLANG. The last point I've already responded to, that critic scores are far more commonly reported than audience scores in secondary sources, regardless of what the website wants to show as part of its business. Erik (talk | contrib) (ping me) 19:15, 28 November 2014 (UTC)
Concur with Erik, Betty, et al. This really is just going around in circles at this point. Changes as significant as these really should be addressed as an RfC. --Tenebrae (talk) 19:31, 28 November 2014 (UTC)
I may prepare an RfC. Right now, this uninspiring attempt at discussion has bored me off the topic, except for...
As one more contribution to this particular thread, here is the problem illustrated from yet another angle, this line taken from the Critical Response section of the Killing Them Softly article, where there is an attempt to cover the reality -- that critics and audience differed significantly -- without quoting the RT Audience Score by instead citing it, which creates even more of a mess than not being able to use it directly:
Rotten Tomatoes gives the film a rating of 75% based on reviews from 210 critics, with an average rating of 6.9 out of 10. While getting high ratings from critics it received heavy criticism from many people.[1]
  1. ^ "Killing Them Softly". Rotten Tomatoes. Flixster. Retrieved August 19, 2013. 
--Tsavage (talk) 00:43, 29 November 2014 (UTC)
Rotten Tomatoes audience score is regularly quoted by established mainstream media[edit]

Two more examples of the use of Rotten Tomatoes Audience Score by established mainstream media, in order to present a more complete and objective movie coverage when opinions of critics and consumers diverge (i.e. as proposed above):

  • USA Today - use of audience score in regular film coverage, as here for Gone Girl and Dracula Untold, to compare critics vs audience
  • Chronicle Herald (Halifax, NS) - Atlantic Canada's largest newspaper; use in movie reviews whenever a significant critic-audience difference exists

These are in addition to the above-mentioned NYT example, where data analysis and an entire article were based on it:

It seems likely that other examples are to be found.

Additionally, the argument was made that RT critics score was found in media more often than the audience score, the unproven implication being that those media didn't trust the audience score. However, at least equally plausible and likely is that media do not particularly like promoting their direct competition, and an online user-driven audience score or poll is something they could create themselves from their own users (making RT a competitor in directly measured audience reaction), whereas the aggregate score allows them to capture the thunder of all other competing reviewers without mentioning them or their publications. Furthermore, the audience score could undermine their own critic's review when there is a big spread either way (critic loves, audience hates, or vice versa). This is media business basics. Perhaps compare the somewhat similar situation of book bestseller lists, where major papers compile their own (USA Today, Toronto Star, etc), instead of, for example, licensing the popular New York Times lists.

Still considering/researching an RfC for this! --Tsavage (talk) 20:52, 1 December 2014 (UTC)

I think aggregate scores are reported widely enough to be ubiquitous in film articles (in this century, anyway). As for the occasional reporting of audience scores, I think there are two separate questions to ask. Does the occasional reporting of audience scores warrant ubiquitous inclusion in film articles? The other question is, can user ratings be included conditionally -- if they are reported in secondary sources? I personally think a lot of editors would balk at including user ratings at face value due to it being user-generated content that can be gamed (Transformers has been in IMDb's Top 250 before, FYI). I don't know if they would be okay with including ratings where they have been reported in periodicals. I recall seeing some leeway given to IMDb user ratings for the very top films because of the secondary sources. A conditional inclusion would be a better place to start rather than pushing to include the ratings everywhere. Erik (talk | contrib) (ping me) 21:48, 1 December 2014 (UTC)
Could we be approaching an actual two-way discussion? :)
1. Does the occasional reporting of audience scores warrant ubiquitous inclusion in film articles?
In this case, yes. I am citing in context uses in the mainstream media as hard examples of the audience score as a useful indicator of audience reaction. Industry-leading news outfits like USA Today and the New York Times, and others, have passed editorial judgement on the reliability of the audience score and seen fit to use it, and this counts towards reliability here at Wikipedia. We are applying sweeping conclusions about the unreliability of basically all "user-generated content," based on no specific evidence per case; I am showing that, in this case, world-class news organizations who make these judgement calls as a matter of daily business, have considered "demographic skewing" and "vote stacking," and specifically allowed Rotten Tomatoes audience scores to appear under their news brands: New York Times, USA Today, Chronicle Herald, not once, but on a regular basis. Popularity amongst newspapers and other media is not a precondition for reliability. NOTE: This throughout refers to the branded Rotten Tomatoes Audience Score, always attributed, which is distinct from their score used without in-text attribution; there is an argument for both uses but I am only proposing the former.
2. Can user ratings be included conditionally -- if they are reported in secondary sources?
Yes, but not the point. They can be quoted or referred to with in-text attribution: "USA Today noted that, at Rotten Tomatoes, the 68% audience score was notably higher than the critics' 23%." HOWEVER, the way is not to hunt down cases of secondary-source use as a kind of loophole to include the audience score. It would be a perversion of the process if we deliberately accepted the audience score in one article, simply because it was mentioned by a reliable source, but disallowed and reverted it in another, rather than addressing the underlying problem. I guess this is an inclusionist position: when cases appear where things are not clear-cut, err on the side of cautious inclusion. Otherwise, for one, we encourage citation farming, finding any old source to prop up something an editor wants to include (an annoying and erosive practice I've observed in fanboy articles about certain movies, video games, pop stars, etc., and quite likely more subtly active in other areas). I am trying to deal with the core issue, not wikilawyer for loopholes. --Tsavage (talk) 22:25, 2 December 2014 (UTC)
Still have to respectfully disagree: Audience rankings, whether from IMDb or Rotten Tomatoes, are user-generated and easily manipulated. And we already give audience-poll information, through CinemaScore, so the necessity of less-reliable ones is highly questionable. Additionally, we are WP:NOTNEWS, and whether USA Today quotes RT audience ratings isn't a deciding element. The New York Times citation here really doesn't address the issue: It's an article about the phenomenon of audience rankings and how they differ with critical consensus; it's not an article about a particular movie, quoting the RT audience rating as a viable metric. --Tenebrae (talk) 22:48, 2 December 2014 (UTC)
And may I say, the tenor of this discussion has been polite and thoughtful. There's been no cursing, no insults, and no incivility, unlike some discussions I've seen lately elsewhere. This is a wonderful example of how mature adults can respectfully disagree and rationally discuss. My compliments to Tsavage and all my other colleagues in this thread. --Tenebrae (talk) 22:51, 2 December 2014 (UTC)
Look, the whole old-school approach to "user-generated content" as a big bogeyman of unreliability needs to be dealt with more realistically, often on a per-case basis. It's a new world, with new assumptions. What does user-generated mean, anyhow? It is clear when you talk about text generated by anonymous or unverified authors, where you have no idea if whatever is being said is true, and no way to check on the source. But then, what about a poll? All polls by definition are user-generated, an aggregation of individual opinions. The issue here is not with who is doing the generating, but with the sampling methods. So you don't like IMDb or Rotten Tomatoes for their methods - register a free account, vote - whereas you do approve of, say, the almighty Quinnipiac Poll. But a quick look at the polling field shows that it's as much in upheaval in this new digital world as most other things. IOW, the strictly "scientific" underpinning is in question. One major factor is response rates: the academic gold standard was apparently that 70% of the target sample had to actually be reached; that dropped to 50%, and now commercial pollsters are down to 30%, 20%, even 10% response rates, in good part because people are hard to track down, what with cell phones and whatnot. So we're...adapting. In fact, in large part, we judge the quality of our most publicized polling, like election polls, not by method but by how accurate the predictions are, and those predictions are the result of interpretation, art with science. By the same measure, IMDb, offering a standalone rating on a general movie information site, is not very predictive; their scores don't mean much and haven't for years.
In contrast, Rotten Tomatoes, a dedicated review site, has a product that opposes critic and audience scores, and is currently, and has been for several years, notably effective in gauging audience reception, and it's transparent: we know what it takes to vote, and how many voted; it's all there, managed by a presumably professionally staffed Warner Bros. company. CinemaScore, on the other hand, is non-transparent, run by people with no stated polling qualifications other than being fans who wanted to see what other fans, not critics, thought, and they have been caught adjusting their scores after complaints from movie-biz clients who pay $30,000-$50,000 annually for the service, not to mention they sample opening-night audiences, a group with a vested interest in the movie being worth their investment. But we've found their scores acceptable. We're obviously exercising a lot of judgement on a per-case basis here, and I am making the case for Rotten Tomatoes, not IMDb nor other audience scores, and I am backing it up with evidence that media professionals like USA Today and the New York Times agree. --Tsavage (talk) 02:45, 3 December 2014 (UTC)
I'm not in favor of user ratings, per the well stated concerns above. I just want to point out that if you're making a case that because USA Today or NY Times has mentioned them before, I will also point out that IMDb has been cited by many professional outlets as well. We still don't accept them as a source because they are unreliable, regardless of whether another professional source chooses to exercise good judgment in their fact collection or not.  BIGNOLE  (Contact me) 03:48, 3 December 2014 (UTC)
The concerns you support are far from well-stated. They are repetition. Nothing that I've said has really been addressed. You're just repeating the same "user-generated, unreliable" mantra and pointing at what's been done in the past. IMDb was used because it was relevant and is no longer used nor relevant. Things change. Just since Wikipedia has been around, how much reliable science has been recorded here and then reversed in light of new findings? Things change. Repeating things over and over doesn't make them so. Indicate one shred of evidence other than your personal belief and opinion that Rotten Tomatoes Audience Score does not provide a useful gauge of audience reception. Or that CinemaScore is reliable (because professional organizations use it?). --Tsavage (talk) 04:37, 3 December 2014 (UTC)
The audience scores at Rotten Tomatoes do not require that you have actually seen the movie to vote. I can go right now and vote for Interstellar. I've never seen the movie, but I just gave it a 5-star rating (literally...I just did). That's a really reliable setup they got there. With CinemaScore, you get people coming out of the theater, so you know they at least saw it. Wonderful bit of news...I just decided that I don't really care for the movie and gave it another score of 2 stars. Hmmm, it seemed to accept both of them from the same computer. This is the problem that we're talking about, and this is why we don't use web-based poll systems like this. They are fraught with vote stacking and the impossibility of proving any sort of reliability in the votes themselves.  BIGNOLE  (Contact me) 05:05, 3 December 2014 (UTC)
You seem more out to win an argument than discuss points, meanwhile what you actually say has no direct relevance:
  • CinemaScore happens to be done in cinemas, it's presumed exiters were watching not texting, but most polls aren't done at the scene of the subject. Seat belt use polls aren't generally conducted at stoplights. Polls rating dish detergent aren't conducted at the kitchen sink. Argumentative red herring.
  • So you are one dishonest reviewer amongst 113,149. If we're going one by one, who else have you checked?
  • So you changed your vote: people change their minds, but that doesn't register as two votes, just a changed rating, the better to accommodate real people. Reload and see for yourself.
And sure, you gamed the system by voting for a movie you haven't seen. Now if you want to actually make a perceptible difference in the Audience Score, log into a few thousand of your extra fake accounts to actually impose YOUR difference on over 110,000 votes. Or expose the syndicate of Romanian botnet operators who control all the accounts and run up votes for all the studios (except nobody wanted to pay for Brad Pitt's Killing Them Softly).
That stuff aside, my point is not that all user-generated movie polls are valid, simply that, at this time and in recent years, Rotten Tomatoes Audience Score, attributed as such, and reported alongside the critics score, increases the balance of reception coverage, particularly in cases where critics and audience diverge, and that this specific source is both widely recognized and understood for what it is by the public (11m visitors a month), and also recognized as such by media such as USA Today and New York Times. I'm repeating myself, too. This is like arguing with hall monitors. Almost futile unless there's an uprising... --Tsavage (talk) 05:39, 3 December 2014 (UTC)
PS: I don't intend to be rude, merely...factual, but it just occurred to me from what you said, "Hmmm, it seemed to accept both of them from the same computer," that you perhaps don't clearly understand the difference between the anonymous, unlogged state and being logged in. Like on Wikipedia, unlogged, you are largely untrackable, since IP numbers are hugely variable, whereas logged in, you are completely tracked and identifiable by the hosting server, no matter where you're logged in from. At RT you have to be logged in to vote. My point being not to criticize your level of computer savvy, only to point out that if I am correct, that pretty much means you don't understand what you were talking about. --Tsavage (talk) 06:27, 3 December 2014 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── First, my point was that you cannot verify that any of those people actually saw the movie, or that the majority did or did not. They could be Chris Nolan fans, or Anne Hathaway fans, etc. This is a universal issue with online polls. True, my one score did not affect the overall rating (because there were already so many). Since you cannot verify the authenticity of the rating itself, it makes it unreliable. Just because someone else wants to report unreliable information does not mean that we should. I agree, you cannot prove that someone coming out of the theater actually watched the movie, but since they engage them in a dialogue when they poll, I'll just assume that they weed out any viewers who clearly didn't pay attention. Either way, it's a more accurate representation of audience opinion. There is no need to use RT's user ratings, when a more accurate and reliable rating from CinemaScore is available. It adds nothing beyond, "this is what people at RT voted". It's not a real rating. It isn't measured by any sort of real qualitative data. As for my voting, IPs are variable, but they don't change for a particular location (unless you specifically set up a dynamic address). IP addresses are assigned to specific devices (e.g., modems). Most homes do not set up dynamic IPs, they set up static ones. Businesses will set up dynamic IPs to reduce the workload. That said, if I log out of Wikipedia and edit, they can still tell it came from the same computer that edits under the screen name "Bignole" by cross referencing the IP address tagged with that name and the one editing. That's why anon editors are defaulted to their IP address and you can track them on here. This is how we track sock accounts on Wikipedia, by running tracers.
Regardless, because we're not here to discuss IP addresses, I was logged into RT when I made my votes, and I refreshed the screen to see if the overall numbers changed (which they did). Just FYI. I've said my piece, and it appears pretty clear that there is no consensus to use ratings like RT or MC.  BIGNOLE  (Contact me) 07:41, 3 December 2014 (UTC)

Wow. Everything you say is still so unfounded:
  • Since when is the reliability of a poll determined by proving that each responder is telling the truth? And is that by direct observation, you only poll people who you can observe actually interacting with the subject they're being asked about? Or do they put their hand on a Bible and swear to tell the whole truth and nothing but the truth? Polls are based on trusting people to answer truthfully, not on stalking them in cinemas. Seriously?
  • CinemaScore is totally suspect, studios pay BIG BUCKS to have a reasonable score to promote after opening weekend. Outed last year, Warner Bros complained about a B+ and they did a "recount" and upped it to an A- in time for media calls. CinemaScore's methodology and small sample size are being questioned - CinemaScore in Retreat as Studios Turn to PostTrak - Hollywood Reporter, 10/18/2013 I mean, a $30,000-$50,000 annual subscription means you get more than some guy polling at five cinemas to tell you your movie sucked. If you just watched and eavesdropped on exiting movie audiences, you could probably pick just as accurate A-B-C ratings. Point is, they are not the company to hold up as the shining standard for polls. AND (didn't I say this before) their sample is skewed: opening night moviegoers are clearly invested in wanting to like the movie, so basically, they're polling fans.
  • Your IP lecture is kinda condescending and STILL completely irrelevant and partly plain wrong: you seemed to think that being able to change your vote meant you could vote twice, and on the same machine, which was incorrect, you weren't voting twice. At RT, you LOG IN, you get one vote, you can change it if you like, no problem there, 10 or ONE MILLION people can log in from the same machine, same location, and it's 10 or ONE MILLION separate and distinct accounts, one vote per one account. No login, no vote, so, nothing whatsoever to do with IP addresses in any of that. (And, for home Net access, your ISP sets your IP address, so you're still wrong about that.)
Only three editors even participated much in this thread, and a couple including yourself dropped in, hardly a consensus number for a big film guideline (I hope not, that'd be sad). In any case, I gave up on "consensus" here a while ago, I'm just replying cause you did. I will eventually open an RfC or give up in disgust. One way or another, I'll still be smiling! :) --Tsavage (talk) 08:47, 3 December 2014 (UTC)
User ratings are essentially online polls, so the general way that online polls can be gamed or skewed can apply to user ratings. Here are some sources: Gulf News, TechCrunch, ABC News. Here are a couple of movie-related sources as well: FiveThirtyEight, Freakonomics, and Roger Ebert. Erik (talk | contrib) (ping me) 13:42, 3 December 2014 (UTC)
Impressive-looking effort on the sources, but they only support what I've been saying. Scroll up, I already cited perhaps the most colorful and current example of the type of articles you've linked to, coverage of polls that received a highly visible partisan push: Kirk Cameron's Attempt to Game Rotten Tomatoes Backfired Spectacularly - Gawker 25 Nov 2014.
These reports don't somehow prove that online polling is unreliable and worthless. On the contrary, they demonstrate:
  • the relevance of polls as part of the cultural record (why report on them if nobody paid attention to them?).
  • anomalies in polls being well reported; they're constantly scrutinized, under continual peer review, enhancing their overall reliability.
Two key points:
1. Most ongoing polls require an account to participate, which is a wholly different situation than anonymous, unmonitored voting that was common back at the dawn of the public Web. Members can be meticulously monitored and tracked, and standards maintained. Unlike the sock puppet hunts here at WP, filtering fake accounts can be way more efficiently executed in a members-only context. There are all kinds of measures a site can take to ensure that each account is "real," it's just a matter of balance between privacy and accessibility.
At Rotten Tomatoes, in addition to voting, there are member forums and an internal messaging system that lets you contact other members, so of course RT is monitoring all of this, maintaining community standards, which means accounts are far from unsupervised and uncontrolled. They are not likely to let their prominently displayed audience score spiral out of control.
2. Getting out the vote for an online poll isn't necessarily gaming the system, any more than it's gaming an election for one side to outmotivate the other in getting voters to the polls. Completely hacking a system, like breaking into a server and injecting false numbers, is one thing, but reflects no worse on a polling site than it does any other hacked enterprise. But getting valid accounts to vote is simply community action, not a flaw in the method. As IMDb's managing editor said in the Ebert article: "Our Top 250, as voted by users, is just that, a list of the Top 250 films as voted on by our users. ... We do get bouts of irrational exuberance for some titles. ... Our 'this too shall pass' approach has proved itself out..."
The overriding point here is that polling and the way public input impacts all aspects of life is changing dramatically - just google "problems with polling" for a start -- and I am trying to help WP keep up in the one small area of film reception.
(If you really want to be informed on this subject, you should probably be following stuff like this: Q/A: What the New York Times’ polling decision means - Pew Research Center, 28 Jul 2014 where they discuss and interview key people about the NYT and CBS deciding to use "online survey panels from YouGov as part of their election coverage ... conducted entirely online, among internet users ... selected using non-probability sampling methods. This is a very big deal in the survey world. Until now, no major news organization has put its brand on using surveys based on non-probability methods.")
To restate, my proposal here is about:
  • Improving the overall "reception" coverage in film articles, while maintaining practical, robust editorial guidelines.
  • Updating guidelines to reflect the real state of the world (i.e. in this case, some so-called user-generated content is valuable and encyclopedic).
  • Certifying Rotten Tomatoes Audience Score, with in-text attribution, used in conjunction with the RT critics aggregate score.
AGAIN, this is about the reliability and value of Rotten Tomatoes Audience Score, not a referendum on all film polls or user-generated content. We have shown that we can allow and disallow specific sources - e.g. IMDb, CinemaScore - so this proposal is not setting precedent that way. --Tsavage (talk) 00:46, 4 December 2014 (UTC)
And...Here's a recent article from one of the two major movie industry trade publications, that discusses the movie with the biggest gap between RT critics and audience scores, and then lists RT's Top 8 other movies with the biggest gaps: Andrew Breitbart Doc Nears Rotten Tomatoes Record - Hollywood Reporter, 10 Jun 2013. --Tsavage (talk) 02:21, 4 December 2014 (UTC)
And I would say, in addition, that though  BIGNOLE  demonstrated how easy it is to game the RT user-generated system with two votes, it is incredibly simple for someone with a modicum of computer skill to run a program that could vote 1,000 or 10,000 times. --Tenebrae (talk) 16:19, 3 December 2014 (UTC)
Holy Toledo, Tenebrae, didn't you read my reply? BIGNOLE demonstrated absolutely nothing, he made a mistake, he THOUGHT he was voting twice, but he was only changing his vote. In order to vote on Rotten Tomatoes, you have to log in, via their own registration or with your Facebook credentials, and then you can vote ONCE, only once, he only voted once. You can CHANGE your ONE VOTE, but you can't vote twice, changing it doesn't count as a second vote. In his example, BIGNOLE did not vote twice, he just thought he did. And there is no anonymous voting on RT, like there is anonymous editing on Wikipedia. If you don't believe me, see for yourself (and read BIGNOLE's reply, he shifts the topic off-point to IP addresses and completely fails to address my pointing out of his mistake).
As to your part 2, asserting that just about anyone could automate voting thousands of times, I don't think it would be too technically challenging, but without some sophistication to spread out the IPs and posting dates and times for a start, it would be quite easy to spot large scale stuffing, and reverse or otherwise offset if the system administrators chose to. But that's part of my reply to Erik's more meaty comment, probably coming later. And, you must have a modicum of computer skill, you're online, editing Wikipedia, using markup code and making judgements about online polling methods, how would you most easily put 10,000 fake votes on Rotten Tomatoes? --Tsavage (talk) 20:03, 3 December 2014 (UTC)
PS: AND, when BIGNOLE said at the end of his big block-of-words obfuscatory reply to me, "I was logged into RT when I made my votes, and I refreshed the screen to see if the overall numbers changed (which they did)," the total number of votes had changed only because other new votes had come in between his last page load and the refresh. That happens, it's a busy site. Try it again, change vote and reload, don't forget to clear cache just to be sure, changing a vote doesn't increment the vote counter. --Tsavage (talk) 20:19, 3 December 2014 (UTC)
 BIGNOLE  said, "Hmmm, it seemed to accept both of them from the same computer." The word "both" as opposed to "each" suggests two votes. Whether two went through or not, I was going by what I read. That said, automated voting is not difficult for a hacker. Hell, they do automated ticket buying the second a concert's box office opens, which is why the secondary market gets all the tickets and it's almost impossible to buy face-value tickets directly from the source. --Tenebrae (talk) 00:08, 4 December 2014 (UTC)
OK. I've just gone in to give a star rating to a movie I haven't seen (deliberately having chosen one with low critical and audience ratings already). After refreshing, the number of users rating did not go up. More troubling, I was able to rate a movie I haven't seen. That being the case, RT audience ratings simply aren't a statistically valid source. And when we're quoting a statistic, well, that's a problem. --Tenebrae (talk) 00:13, 4 December 2014 (UTC)
OMG. Try reloading a couple times, there's always some sorta latency, and clear cache. Bungling your own browser experiment doesn't change the fact that RT is one vote per movie per account (it only indicates that you don't have a clear understanding of the topic you're judging). Are you actually willing to believe that, rather than having a simple check to see if a user has already voted, RT allows each click on the rating stars to register as an all-new vote for the same user, and hopes that not too many of its millions of members discover that they can click 4 stars/5 stars or 1 star/2 stars back and forth as fast as they can to run the vote up or down? Is that how you think RT works? Rhetorical questions only.
Try admitting you're wrong and revising your opinion - it's liberating!
Think of RT Audience Score like this: Would we not list the duly recognized government of a country - a poll result - because it was also widely believed that their elections were rigged? Of course we would list it. We might also mention, via the reliable sources that informed us, that election rigging was suspected, but the government itself isn't "unreliable therefore unmentionable" because of how it was assembled. Why? Because the country is no more in the business of producing fairly elected governments, than RT is of producing statistically valid polls of the public at large (unlike CinemaScore, which sells itself as a representative poll).
RT's Audience number is editorial product, not a scientific poll result: RT maintains a community of registered users, it collects their ratings one vote per account (with absolute monitoring and termination control over those accounts, RT in effect hires and fires its raters), and it applies an editorial formula to that rating data to determine the audience score (3.5/5 stars and up is positive, below is negative, and the share of positive ratings as a percentage is what they label as their Audience Score - change the threshold, like, to 2.5/5 or 3.8/5, and the Score changes significantly, dramatically, independent of the ratings data).
The RT score is their branded product, clearly represented as "The percentage of Rotten Tomatoes users who have rated this movie 3.5 stars or higher," and it is useful and used by many reputable and expert sources (USA Today, New York Times, Hollywood Reporter, Variety, etc.) to illustrate differences in critic and audience reception. That's what I've been saying these many words. --Tsavage (talk) 16:53, 6 December 2014 (UTC)
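For concreteness, the published definition quoted above ("the percentage of users who have rated this movie 3.5 stars or higher") can be sketched in a few lines. This is a hypothetical illustration with invented sample data, not Rotten Tomatoes' actual code; it only shows how strongly the resulting Score depends on where the positive/negative threshold is drawn.

```python
# Hypothetical sketch of the Audience Score formula as described in the
# discussion: the percentage of user star ratings at or above a chosen
# "positive" threshold. Sample data and names are invented for illustration.

def audience_score(ratings, threshold=3.5):
    """Percentage of ratings >= threshold, rounded to the nearest integer."""
    positive = sum(1 for r in ratings if r >= threshold)
    return round(100 * positive / len(ratings))

# The same set of star ratings yields very different Scores as the cutoff moves:
sample = [5.0, 4.0, 3.5, 3.5, 3.0, 3.0, 2.5, 2.0]
print(audience_score(sample, 3.5))  # 50
print(audience_score(sample, 2.5))  # 88
print(audience_score(sample, 3.8))  # 25
```

Nothing here claims RT computes the number this way internally; it simply restates the published definition to show that the Score is an editorial construction layered on top of the raw ratings, not the raw ratings themselves.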
PS: To be polite and thorough, your comment about "automated voting" being easy for any "hacker" is not too relevant here, and your specific example of concert tickets is even less so. Automated anything is always possible, but with registered user accounts, you can easily ramp up your monitoring to detect problems and tighten up by kicking accounts. Everything online is literally constantly being hit by all sorts of intrusion attempts, for one, just look at any firewall log for anything on the Web, so you have to pick your case and examine the specifics. In your concert ticket example, that's big cash money. Last I heard, Ticketmaster owned two of the largest online secondary market sites, so while they were decrying the use of bots, they were profiting from resold tickets. And apparently, all available tickets are often not all available online, the artists themselves and other parties reserve blocks and then resell them. And so forth for the politics and inside jobs. On the technical side, it's quite a different problem to see if a user is "real" when they're attempting to buy an item via credit card, than to determine if over time a large number of user accounts are voting in organized blocks. You can't generalize. RT is in an effective position to control "fake" accounts on their site, and is no doubt already actively doing so in a way that maintains their Audience Score the way they want it to be. --Tsavage (talk) 19:04, 6 December 2014 (UTC)
I think this has been a polite discussion up to now, but uncalled-for remarks like "[b]ungling your own browser experiment" and "it only indicates that you don't have a clear understanding of the topic you're judging" when both I and  BIGNOLE  got the same results — apparently because of poor design in RT's audience-poll software — suggest we need to rethink this entire discussion. As for "Try admitting you're wrong and revising your opinion - it's liberating!" ... well, another person might say that could apply to you as well as to anyone. And posting walls of text is often seen by other editors as attempts at intimidation and bludgeoning, and is generally self-defeating since few editors are going to wade through mountains of convoluted arguments.
I made the point myself about sites like Ticketmaster being subject to bots and hacking, so we agree on that point. But it's false to compare highly regulated national elections that exist within a structure of civil and federal legal checks and balances with some poll website. --Tenebrae (talk) 16:31, 7 December 2014 (UTC)
Just to point out, the comparison to the government poll isn't accurate. What we're saying is that not only are the RT polls not reliable themselves (that doesn't mean RT isn't reliable), but it's segregated to the RT community. Thus, to compare it to a government poll would be like saying that because an opinion poll in 1 county showed favorable ratings (regardless of reliability of said poll), that somehow means that everyone in that state/country feels that way. Online polls are not only regular victims of vote stacking, but they are also victims of circumstance. The circumstance in this case being only the people that visit their site regularly vote in their polls (regardless of whether they've seen the film). We're not talking about a national poll with statistical normality. It would be like doing a questionnaire in Tallahassee, Florida about gun control and generalizing that out to the whole country. Too many problems with online polling.  BIGNOLE  (Contact me) 16:56, 7 December 2014 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── Yes, this is clearly going nowhere; I will stop contributing to this thread with this reply. I did stop replying quite a bit earlier, said I would consider an RfC, but replies kept coming. In any case, I would characterize most of this thread not as discussion, but as repetition of "user-generated, unreliable, can't use" when my proposal had little to do with user-generated content as such.

Tenebrae: It almost feels as if you've been setting me up for some sort of charge of improper behavior, first complimenting me on the politeness of this discussion, then using your own comment to pivot, calling some of my remarks "uncalled-for." What is inappropriate about the word "bungling," or with politely putting forth the opinion that you may not understand a subject about which you're commenting? Especially given the circumstance. You and BIGNOLE are trying to prove points based on inconsistent experiments with your own browsers. To do what you were trying to do properly:

  • Log in, pick an old movie where new votes are unlikely to come in at the same time, rate, refresh, a couple times if necessary, clearing cache for good measure, allowing several minutes if necessary, and the users voting total will go up.
  • Change your vote and repeat the reload procedure, the changed star rating will stick, but the users voting total won't increment.

Nobody said the system had to respond instantaneously. The delay could be caused by any number of things, their servers are handling hundreds of thousands of visits a day.

Additionally, you and BIGNOLE are claiming opposite things: you say the user total DIDN'T increment ("After refreshing, the number of users rating did not go up."), while BIGNOLE, to the best of my understanding, claims he was able to vote twice ("it seemed to accept both of them from the same computer ... I refreshed the screen to see if the overall numbers changed (which they did)"). It is one account, one vote. And sure, CinemaScore is an exit poll, that's just one type of poll, aside from that, it is not expected that people definitively prove the truth of their responses in any poll ("do you swear you saw the movie, I want to see you watching that movie before I register your opinion"), truthful response is simply a for-better-or-worse expectation in polling.

BIGNOLE: I appreciate that you are making the attempt to address my points, and I am replying in kind. The fact that a national election and RT's audience poll are vastly different in scope and method is not the point here, I was using the election to illustrate how we apply the WP editorial guidelines: no more are a country's election results held against "scientific" standards to determine whether we report on them, than should be the RT membership poll. RT doesn't advertise itself as representing anything but what it actually is, a poll of its members, one of the cornerstones of Wikipedia is that we present verifiable information and for the most part let people judge things for themselves. Insofar as:

  • the RT audience score is useful in covering an aspect of a film's reception (it is, as shown in numerous examples provided above),
  • and clearly and explicitly represents its scope (the ratings of its registered users),
  • and considering that it is not a direct poll result, it is a percentage derived by applying an editorial weighting formula (3.5/5 stars is positive);
  • and considering the fact that it comes from a company that we already consider reliable by virtue of its critics poll,
  • and considering that it is notable, referenced and quoted by the biggest North American trade and consumer news media (NYT, USA Today, Hollywood Reporter, Variety, see above),

we should be able to use it. That's all I've been saying. :) --Tsavage (talk) 19:56, 7 December 2014 (UTC)

"Reception" section[edit]

What happened to the section on the MOS named Reception? It's referenced in the above discussion several times, and on the MOS page three times – in Themes, Box office, and Historical and scientific accuracies – but it's not there now. And having the Critical response and Audience response sections stand alone, rather than as subsections under one section, doesn't look right. It could be called something else (I'm not sure what) but something should be done. --Musdan77 (talk) 04:44, 10 November 2014 (UTC)

Musdan77, see the #Audience response discussion above, if you haven't already. As has been discussed there and on this talk page before, the layout of the guideline is not meant to imply that the layout of a film article should be exactly like that. For example, the Release section in the guideline partly states, "Presentation of content about a film's release and reception can range from a simple 'Release' section to several sections with their own subsections within." The Audience response section begins by stating, "This content is not intended to be a standalone section, or necessarily a subsection, in a film article. Polls of the public carried out by a reliable source in an accredited manner, such as CinemaScore, may be used and placed in the appropriate release or reception-based section, depending on the available context." And the Box office section partly states, "This information can be included under the Reception section, or if sufficient coverage exists, it is recommended that this information is placed in a 'Box office' or 'Theatrical run' section."
Maybe we should be clearer in the guideline's lead about the individuality of a film's layout, which is why I bolded the "There is no defined order of the sections" part, which goes on to state, "please see WikiProject Film's Good Articles and Featured Articles for examples of appropriate layouts. Since the page is a set of guidelines, it is subject to change depending on Wikipedia policies or participant consensus. For other guidelines, see Wikipedia:Manual of Style." Of course, some things are consistent for film layouts, such as placing the Plot section as the first section. Flyer22 (talk) 10:54, 10 November 2014 (UTC)
Thanks for the reply, but it doesn't really address the main issue that I brought up. The removal of the Reception section caused inconsistency and likely confusion, because the references to it still remain. When a major change is made, the person making it should be aware that it can affect other areas (and also that the average editor doesn't follow a discussion being made). I didn't see the bold sentence because I didn't think to look for any changes made to the lead. --Musdan77 (talk) 19:28, 10 November 2014 (UTC)
Musdan77 (last time pinging you to this section via WP:Echo because I assume that you will check back here if you are interested in replies), can you point to an example of why you think not having a Reception heading in the guideline has caused inconsistency and likely confusion, whether for you or other editors? Also, what do you want to see as a solution? As seen, we do have reception sections in the guideline, but they are separated because of issues that have resulted from having them together as subsections; by that, I mean editors thinking that film articles have to use the exact layout as the guideline. Taking a look at the guideline from the table of contents, editors should be able to see that the guideline's layout is not meant to be copied exactly, given subsections such as Spoilers and Rotten Tomatoes Top Critics. As for the bolded part in the lead, as indicated by the diff-link in my "10:54, 10 November 2014 (UTC)" post above, I bolded that before replying to you in this section. Flyer22 (talk) 03:04, 11 November 2014 (UTC)
I understand that. No, there is not, currently, a section called "Reception". Maybe it's just me, and if no one else thinks there's a problem then I'll just let it go, but I'm afraid that there are others who might find it a bit confusing. --Musdan77 (talk) 23:26, 11 November 2014 (UTC)

Plot summary length[edit]

I reverted this edit by Erik (talk · contribs) which removed the lower-bound word count for plot summaries. I don't know if there is a reason for the change (i.e. he thought it was unnecessary to stipulate a lower-bound or there are instances where it is causing problems) but I've just come across an editor that hacked the Jaws synopsis down to about 300 words so I have reverted the alteration. There are probably atypical instances (such as lost silent films) where it is not possible to apply the lower-limit but I think it is better that we stipulate a range to give editors an approximate idea for the typical case. Betty Logan (talk) 18:15, 16 November 2014 (UTC)

I agree with BL's reasoning. It is fine that Erik was bold but considering the amount of debate that section has received over the years a WP:CONSENSUS about any changes to it is preferable. MarnetteD|Talk 19:45, 16 November 2014 (UTC)
It has just felt like nobody tries to find a middle ground. The vast majority of plot summaries are people trying to max out their limit. Obviously, they want to put in as much detail as they're allowed. The opposite is rarely ever the case. Erik (talk | contrib) (ping me) 20:53, 16 November 2014 (UTC)
For clarity: Erik made the edit in question (changing "between 400 and 700 words" to "under 700 words") soon after I made this edit to the Edge of Tomorrow (film) article. I took his edit as him responding to me, especially since I saw that he hadn't edited any other film articles at the time; I took it as him trying to state that I was simply trying to "max out," because I stated, "Restored 714 words plot summary that IP took a hacking to. There is no need for plot summary to be 505 words. We [c]an get it a little under 700, and call it a day, per WP:FILMPLOT." Erik is incorrect, if he thought I was simply trying to max out. I reverted the IP because that IP cut too much, plain and simple. As seen here, before my revert of the IP who cut too much, I reverted what looked like significant plot bloat. Then, when Sock made this edit after that, an edit that restored a little bit of material, I re-analyzed that article's plot section and realized that the aforementioned IP had cut too much from it. That's when I reverted the plot section to the state it was in before that IP's edit, and made followup cuts/tweaks here and here, cutting what I felt was validly cut by the IP.
As Betty knows, since I thanked her via WP:Echo for this edit, I also agree with Betty's revert to "between 400 and 700 words," which she got to by reverting Lugnuts, me and Erik (in that order). When I changed the text to "700 words or less" and Lugnuts changed it to "700 words or fewer", I don't think either of us were thinking about the aspect that Betty mentioned -- that, despite common sense telling anyone that, for example, a one-sentence paragraph is an insufficient plot summary, our changes to the guideline can be interpreted to mean that any length is sufficient as long as it's 700 words or fewer. As for trying to max out? As noted in my edit to the guideline following Erik's edit in this regard, I don't see anything wrong with it, as long as the detail is not trivial. And similar can be stated of a person overly "maxing in"; I hate it when editors needlessly cut the plot section to well under 700 words. Unless there is a good reason for the plot section to be, for example, 500 words, then cutting it that far down irks me, as there are usually details that are missing that could better service the plot section. Flyer22 (talk) 14:02, 17 November 2014 (UTC)

...To Save Us All from Satan's Power[edit]

What is wrong with an 800-word plot summary? I don't think "it's always been like that" is a valid argument (from trawling the archives... I don't see the building of a consensus on this issue). 40-min episodes of TV series (e.g. Sopranos S03E10) do not have these constraints... Adrianne Wadewitz here (19:38) posits this as a positive. Is there any real need for a 700 limit from the point of view of the reader/integrity of the encyclopedia? Stacie Croquet (talk) 10:26, 1 December 2014 (UTC)

Stacie Croquet, Wikipedia's policy is that descriptions of works should be summarized. It specifically says, "Wikipedia treats fiction in an encyclopedic manner, discussing the reception and significance of notable works in addition to a concise summary." In addition, Wikipedia's policy on primary and secondary sources says, "Wikipedia articles should be based on reliable, published secondary sources and, to a lesser extent, on tertiary sources and primary sources." Both of these mean that a concise summary describing the work should merely complement the content derived from secondary sources, such as a film's "reception and significance". In other words, the summary is for giving readers an understanding of the film to comprehend the rest of the article. If you think a film is long enough and/or complex enough to warrant additional detail, then you can build a consensus to do that. Otherwise, between 400 and 700 words is sufficient to convey what a film is about. Erik (talk | contrib) (ping me) 21:32, 1 December 2014 (UTC)
What you say holds equally, whether your final sentence is as written, or is modified to: Otherwise, between 400 and 800 words is sufficient to convey what a film is about. My personal view is that a tag mars the reader's experience more than a marginally overlong plot. I hope the original purpose of tagging was to prime the pump of improvement editing. A little loosening of the "standard allowance" might encourage content editors to make that effort to reduce plot lengths. Stacie Croquet (talk) 22:17, 1 December 2014 (UTC)
Have film articles or their Plot sections been tagged for improvement based on word limit with any regularity, is that the specific issue in this Satan's Power section? --Tsavage (talk) 00:10, 2 December 2014 (UTC)
Well, why not 600 words, or 900 words? Whichever word limit we set will be arbitrary to an extent, but it is important to have one because we've had 3000-word plot summaries before today. One side of A4 written in Times New Roman at point size 12 (arguably the most common font and size) is roughly 700 words, so we are basically suggesting to editors that they keep plot summaries to one side of A4. There may be exceptional cases where a film is atypically long or convoluted, but those should be handled on a case-by-case basis. Betty Logan (talk) 00:37, 2 December 2014 (UTC)
It's true: Everyone wants to write all they can about their favorite movie. There needs to be a limit, and the properly derived consensus is 400 to 700. If someone wants to raise the limit to 800, that's fine. We're all allowed to suggest that a guideline be changed. But go through the process — formalize it with an RfC so that it gets Project-wide attention. Otherwise, we respect the consensus that editors before us hammered out. --Tenebrae (talk) 03:57, 2 December 2014 (UTC)
Support keeping the plot summary length at 700 words max. Cullen328 Let's discuss it 04:27, 2 December 2014 (UTC)
Support keeping the plot summary length at 700 words max. And also, there appears to be an edit-war brewing at ...To Save Us All from Satan's Power — over a plot tag, no less.--Tenebrae (talk) 23:28, 7 December 2014 (UTC)
I Origins (2014): another one to needlessly mar and "appear to edit war" over. Stacie Croquet (talk) 13:17, 8 December 2014 (UTC)

Cast in Plot[edit]

I asked this at Wikipedia talk:How to write a plot summary, but with no response after two days, I'll try here:

'It says, "list the actors' names in parentheses after them, Character (Actor)." Shouldn't it say something like: "This should not be done if there is a Cast list."? Otherwise, it's redundant and goes against WP:REPEATLINK.' --Musdan77 (talk) 21:25, 5 December 2014 (UTC)

I don't think there is a solid consensus for this. Some editors like to include actor names in the plot summary, some don't. I think that there may even be a tendency to include actor names in the plot section but not link them, saving the linking for the cast section. Sometimes it is just the actors' surnames, presumably based on their first being mentioned in the lead section. (Anyone not mentioned there may not warrant having their names inserted in the plot summary.) That's my take, anyway. Erik (talk | contrib) (ping me) 21:28, 5 December 2014 (UTC)
I'm of the "This should not be done if there is a Cast list." opinion. Or...if the matter is covered in the Casting section. Flyer22 (talk) 21:31, 5 December 2014 (UTC)
I won't actively remove cast from the Plot section just because there's a Cast list, but if I'm trimming the plot down in any case that's one of the first things I'll remove. Conversely I won't generally oppose someone adding the cast back in unless the plot already has length issues. DonIago (talk) 21:44, 5 December 2014 (UTC)
I agree with Flyer22, although I understand the confusion of Musdan77, based on the guidelines for the plot section. If there is a cast list, it is pretty redundant to also include the actors in the plot. But like Doniago, I don't actively remove them if they are there. Perhaps this could be an instance where we could have an RfC, in order to reach a consensus? Onel5969 (talk) 22:08, 5 December 2014 (UTC)

Films from countries that made them and TV show airdates[edit]

I want a new rule about films and TV shows and the countries that made them. In the year-in-film articles, such as 2013 in film, films should be listed under their release dates in the country that made them, not under the date of first release in a different country. Listing them otherwise would confuse every single reader who remembers the release date in the country that made them.

TV shows should likewise list only the airdates from their country of origin in their season orders, not earlier airdates from different countries, for the reasons I gave above. These changes must be agreed upon for the sake of readers from various countries. BattleshipMan (talk) 22:01, 12 December 2014 (UTC)

We don't have jurisdiction over television articles, but films are listed by their global release date for two reasons: i) Wikipedia needs to adopt a WP:WORLDVIEW - the release date in just one particular country is largely inconsequential for the majority of our readers, so we simply go with the earliest; ii) listing films by the release date in their "country of production" would introduce problems for co-productions, e.g. Skyfall was a British-American co-production and was released on separate dates, so what would you advocate in such cases? Perhaps using a date-based ordering system isn't the smartest idea when films have different release dates in different countries, and arguably an alphabetic system would be better, but that's down to the editors working on those articles to decide. Betty Logan (talk) 22:47, 12 December 2014 (UTC)
Well, I'm not pleased that 2013 in film lists the earliest release dates, because it creates confusion among readers who remember films being released in the country that made and produced them. That's why we need to change the rules regarding the release dates of films listed by year. BattleshipMan (talk) 22:58, 12 December 2014 (UTC)
Except readers don't remember the release dates in the production country; they remember the release date in the country they live in (that is, if they remember the release date at all). Can you remember the British release dates for British films, or the French release dates for French films? If readers are confused by the date then their confusion can be cleared up with a single click anyway, so your argument is largely a red herring. You didn't address my second point, either, regarding the impracticality of your proposal in the case of films which are produced by more than one country. Betty Logan (talk) 23:50, 12 December 2014 (UTC)
American, British and French movies should be listed in the year-in-film articles under the release dates from their own country, not the date of first release in a different country that was never involved in them. As seen in 2013 in film, American movies like Olympus Has Fallen, The Last Stand, Elysium, A Good Day to Die Hard and such should have their American release dates. British films like About Time, The World's End and such should have their British release dates. French films in that year should have their country's release dates. Therefore, a consensus should be reached to list films in the year-in-film articles under the release dates of the country that made them. BattleshipMan (talk) 01:36, 13 December 2014 (UTC)
You have got to find a way for people to set up a consensus for films to be listed in the year-in-film articles under the release dates of the countries that made and produced them, not just under the earliest release in a different country that was never involved in the production of that film. BattleshipMan (talk) 18:03, 16 December 2014 (UTC)
I'm not gonna stop until a consensus has been reached to list films in the year-in-film articles under the release dates of the countries that made and produced them. BattleshipMan (talk) 05:32, 20 December 2014 (UTC)
There already is a consensus to list them by their first release date. You need to get one to go against this long-standing consensus. Good luck! Lugnuts Dick Laurent is dead 10:51, 20 December 2014 (UTC)