Moderation system

[Image: Comment moderation on a GitHub discussion]

On Internet websites that invite users to post comments, a moderation system is the method the webmaster chooses to sort contributions, separating those that are irrelevant, obscene, illegal, harmful, or insulting from useful or informative ones.

Various types of Internet sites permit user comments, such as Internet forums, blogs, and news sites powered by scripts such as phpBB, wiki software, or PHP-Nuke. Depending on the site's content and intended audience, the webmaster decides what kinds of user comments are appropriate, then delegates the responsibility of sifting through comments to moderators. Most often, webmasters attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site.

Social media sites may also employ content moderators to manually inspect or remove content flagged for hate speech or other objectionable material. In the case of Facebook, the company increased its number of content moderators from 4,500 to 7,500 in 2017 due to legal and other controversies. In Germany, Facebook is responsible for removing hate speech within 24 hours of its being posted.[1]

Supervisor moderation

Also known as unilateral moderation, this kind of moderation system is often seen on Internet forums. A group of people is chosen by the webmaster (usually on a long-term basis) to act as delegates, enforcing the community rules on the webmaster's behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and generally work to remove negative contributions throughout the community.

Commercial content moderation (CCM)

Commercial Content Moderation is a term coined by Sarah T. Roberts to describe the practice of "monitoring and vetting user-generated content (UGC) for social media platforms of all types, in order to ensure that the content complies with legal and regulatory exigencies, site/community guidelines, user agreements, and that it falls within norms of taste and acceptability for that site and its cultural context."[2]

While at one time this work may have been done by volunteers within the online community, for commercial websites it is largely achieved by outsourcing the task to specialized companies, often in low-wage areas such as India and the Philippines. Outsourcing of content moderation jobs grew as a result of the social media boom: with the overwhelming growth of users and UGC, companies needed many more employees to moderate the content. In the late 1980s and early 1990s, tech companies had begun to outsource jobs to foreign countries with educated workforces willing to work for lower wages.[3]

Employees work by viewing, assessing and deleting disturbing content, and may suffer psychological damage.[4][5][6][7][8][9] Secondary trauma may arise, with symptoms similar to PTSD.[10] Some large companies such as Facebook offer psychological support[10] and increasingly rely on the use of Artificial Intelligence (AI) to sort out the most graphic and inappropriate content, but critics claim that it is insufficient.[11][12]

Facebook has decided to create an oversight board to decide what content remains and what content is removed. The idea was proposed in late 2018. This "Supreme Court" at Facebook is intended to replace ad hoc decision-making.[12]

Distributed moderation

Distributed moderation comes in two types: user moderation and spontaneous moderation.

User moderation

User moderation allows any user to moderate any other user's contributions. On a large site with a sufficiently large active population, this usually works well, since relatively small numbers of troublemakers are screened out by the votes of the rest of the community. Strictly speaking, wikis such as Wikipedia are the ultimate in user moderation,[citation needed] but in the context of Internet forums, the definitive example of a user moderation system is Slashdot.

For example, each moderator is given a limited number of "mod points," each of which can be used to moderate an individual comment up or down by one point. Comments thus accumulate a score, which is additionally bounded to the range of −1 to 5 points. When viewing the site, a threshold can be chosen from the same scale, and only posts meeting or exceeding that threshold will be displayed. This system is further refined by the concept of karma: the ratings assigned to a user's previous contributions can bias the initial rating of new contributions they make.
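The scheme described above can be sketched in a few lines of Python. This is an illustrative model only, not Slashdot's actual implementation; the function names, karma thresholds, and starting scores are assumptions chosen to show the mechanics (bounded scores, karma-biased initial ratings, and threshold-based display).

```python
def clamp(score, low=-1, high=5):
    """Bound a comment score to the site's fixed range."""
    return max(low, min(high, score))

def initial_score(author_karma):
    """Good karma biases a new comment's starting score upward
    (the thresholds here are hypothetical)."""
    if author_karma > 25:
        return 2
    if author_karma < 0:
        return 0
    return 1

def moderate(score, direction):
    """Spend one mod point to move a comment up or down one point;
    the result stays within the bounded range."""
    return clamp(score + (1 if direction == "up" else -1))

def visible(comments, threshold):
    """Given (text, score) pairs, return only comments meeting or
    exceeding the reader's chosen threshold."""
    return [text for text, score in comments if score >= threshold]
```

Because `moderate` clamps its result, repeated downvotes cannot push a comment below −1, matching the bounded range described above.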

On sufficiently specialized websites, user moderation will often lead to groupthink, in which any opinion that is in disagreement with the website's established principles (no matter how sound or well-phrased) will very likely be "modded down" and censored, leading to the perpetuation of the groupthink mentality. This is often confused with trolling.[citation needed]

User moderation can also be characterized by reactive moderation. This type of moderation depends on users of a platform or site to report content that is inappropriate and breaches community standards. In this process, when users are faced with an image or video they deem unfit, they can click the report button. The complaint is filed and queued for moderators to look at.[13]
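The report-and-queue flow of reactive moderation can be sketched as follows. This is a minimal illustrative model, not any platform's real system; the class and method names are assumptions. Users file complaints, which are queued in order for moderators to review.

```python
from collections import deque

class ReportQueue:
    """Hypothetical first-in, first-out queue of user complaints."""

    def __init__(self):
        self._queue = deque()

    def report(self, content_id, reason):
        """A user clicks the report button; the complaint is filed."""
        self._queue.append({"content_id": content_id, "reason": reason})

    def next_complaint(self):
        """A moderator takes the oldest unreviewed complaint,
        or None if the queue is empty."""
        return self._queue.popleft() if self._queue else None
```

A first-in, first-out queue is the simplest design; real platforms typically prioritize complaints by severity or by how many users have reported the same item.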

Spontaneous moderation

Spontaneous moderation is what occurs when no official moderation scheme exists. Without any ability to moderate comments directly, users spontaneously moderate their peers by posting their own comments about others' comments. Because spontaneous moderation exists, no system that allows users to submit their own content can ever be completely without moderation.

References

  1. ^ "Artificial intelligence will create new kinds of work". The Economist. Retrieved 2017-09-02.
  2. ^ "Behind the Screen: Commercial Content Moderation (CCM)". Sarah T. Roberts | The Illusion of Volition. 2012-06-20. Retrieved 2017-02-03.
  3. ^ Elliott, Vittoria; Parmar, Tekendra. ""The darkness and despair of people will get to you"". Rest of World.
  4. ^ Stone, Brad (July 18, 2010). "Concern for Those Who Screen the Web for Barbarity".
  5. ^ Adrian Chen (23 October 2014). "The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed". WIRED. Archived from the original on 2015-09-13.
  6. ^ "The Internet's Invisible Sin-Eaters". The Awl. Archived from the original on 2015-09-08.
  7. ^ Department of Communications and Public Affairs, Western University (March 19, 2014). "Western News - Professor uncovers the Internet's hidden labour force". Western News.
  8. ^ "Invisible Data Janitors Mop Up Top Websites - Al Jazeera America".
  9. ^ "Should Facebook Block Offensive Videos Before They Post?". WIRED. 26 August 2015.
  10. ^ a b Olivia Solon (2017-05-04). "Facebook is hiring moderators. But is the job too gruesome to handle?". The Guardian. Retrieved 2018-09-13.
  11. ^ Olivia Solon (2017-05-25). "Underpaid and overburdened: the life of a Facebook moderator". The Guardian. Retrieved 2018-09-13.
  12. ^ a b Gross, Terry. "For Facebook Content Moderators, Traumatizing Material Is A Job Hazard".
  13. ^ Grimes-Viort, Blaise (December 7, 2010). "6 types of content moderation you need to know about". Social Media Today.
