Reputation system

A reputation system computes and publishes reputation scores for a set of objects (e.g. service providers, services, goods or entities) within a community or domain, based on a collection of opinions that other entities hold about the objects. The opinions are typically passed as ratings to a reputation center which uses a specific reputation algorithm to dynamically compute the reputation scores based on the received ratings.
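
As a minimal illustration of this flow, the sketch below implements a hypothetical reputation center that collects ratings and computes each object's score as the plain average of the ratings received. The class and method names are invented for this example; real systems use far more elaborate algorithms.

    from collections import defaultdict

    class ReputationCenter:
        """Minimal reputation center: collects ratings and averages them.

        Illustrative sketch only, not any real system's algorithm.
        """

        def __init__(self):
            # Map each rated object to the list of ratings it has received.
            self._ratings = defaultdict(list)

        def submit_rating(self, obj, rating):
            """Record one entity's opinion of `obj` (e.g. a score in [0, 5])."""
            self._ratings[obj].append(rating)

        def score(self, obj):
            """Dynamically recompute the reputation score from all ratings."""
            ratings = self._ratings[obj]
            return sum(ratings) / len(ratings) if ratings else None

    center = ReputationCenter()
    center.submit_rating("seller_a", 5)
    center.submit_rating("seller_a", 4)
    print(center.score("seller_a"))  # 4.5

Because the score is recomputed from the full set of ratings, it changes dynamically as new opinions arrive, which is the behaviour described below.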

Entities in a community use reputation scores for decision making, e.g. whether or not to buy a specific service or good. An object with a high reputation score will normally attract more business than an object with a low reputation score. It is therefore in the interest of objects to have a high reputation score.

Since the collective opinion in a community determines an object's reputation score, reputation systems represent a form of collaborative sanctioning and praising. A low score represents a collaborative sanctioning of an object that the community perceives as having or providing low quality. Similarly, a high score represents a collaborative praising of an object that the community perceives as having or providing high quality. Reputation scores change dynamically as a function of incoming ratings. A high score can quickly be lost if rating entities start providing negative ratings. Similarly, it is possible for an object with a low score to recover and regain a high score.

Reputation systems are related to recommender systems and collaborative filtering, but with the difference that reputation systems produce scores based on explicit ratings from the community, whereas recommender systems use some external set of entities and events (such as the purchase of books, movies, or music) to generate marketing recommendations to users. Reputation systems facilitate trust (Resnick et al. 2000; Jøsang, Ismail & Boyd 2007), often by making reputation more visible.

Reputation systems are often useful in large online communities in which users frequently have the opportunity to interact with users with whom they have no prior experience, or in communities where user-generated content is posted, as on YouTube or Flickr. In such situations, it is often helpful to base the decision of whether to interact with a user on the prior experiences of other users.

Reputation systems may also be coupled with an incentive system to reward good behavior and punish bad behavior. For instance, users with high reputation may be granted special privileges, whereas users with low or unestablished reputation may have limited privileges.

Types of reputation systems

A simple reputation system, employed by eBay, records a rating (positive, negative, or neutral) after each transaction between a pair of users. A user's reputation comprises the counts of positive and negative ratings in that user's transaction history.
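
A sketch of this counting scheme might look like the following; the class and method names are hypothetical, and eBay's actual implementation is of course more involved.

    from collections import Counter

    class FeedbackProfile:
        """eBay-style feedback tally: one rating per completed transaction.

        Illustrative sketch only; names and details are invented.
        """

        def __init__(self):
            self.counts = Counter()  # keys: "positive", "neutral", "negative"

        def record(self, rating):
            assert rating in ("positive", "neutral", "negative")
            self.counts[rating] += 1

        def feedback_score(self):
            # One common convention: positives minus negatives.
            return self.counts["positive"] - self.counts["negative"]

    profile = FeedbackProfile()
    for r in ["positive", "positive", "negative", "neutral"]:
        profile.record(r)
    print(profile.feedback_score())  # 1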

More sophisticated algorithms scale an individual entity's contribution to other entities' reputations by that entity's own reputation. PageRank is such a system, used for ranking web pages based on the link structure of the web. In PageRank, each web page's contribution to another page is proportional to its own PageRank score and inversely proportional to its number of outlinks.
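
This recurrence can be sketched as a simple power iteration. The damping factor of 0.85 and the tiny example graph below are assumptions made for illustration; Google's production system differs in many details.

    def pagerank(links, damping=0.85, iterations=50):
        """Iteratively compute PageRank for a dict {page: [outlinked pages]}.

        Each page divides its current score equally among its outlinks,
        so its contribution to a neighbour is proportional to its own
        score and inversely proportional to its number of outlinks.
        """
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                if not outlinks:  # dangling page: spread score everywhere
                    for p in pages:
                        new_rank[p] += damping * rank[page] / len(pages)
                else:
                    for target in outlinks:
                        new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    # Tiny example web: A and B link to each other; C links to A.
    print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))

In the example, A ends up with the highest score because both B and C link to it, and C, which nothing links to, ends up with the lowest.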

Reputation systems are also emerging that provide a unified, and in many cases objective, appraisal of the reputational impact of a particular news item, story, blog or online posting. These systems use complex algorithms first to capture the data in question, and then to rank and score the item according to whether it improves or degrades the reputation of the individual, company or brand concerned.

Online reputation systems

Howard Rheingold states that online reputation systems are 'computer-based technologies that make it possible to manipulate in new and powerful ways an old and essential human trait'. Rheingold suggests that these systems arose from the need of Internet users to gain trust in the individuals they transact with online. The innate human trait he notes is that social functions such as gossip 'keeps us up to date on who to trust, who other people trust, who is important, and who decides who is important'. Internet sites such as eBay and Amazon, he argues, seek to serve this consumer trait and are 'built around the contributions of millions of customers, enhanced by reputation systems that police the quality of the content and transactions exchanged through the site'.

Other examples of practical applications

  • Non-governmental organizations (NGOs): www.GreatNonProfits.org, GlobalGiving
  • Professional reputation of translators and translation outsourcers: BlueBoard at ProZ.com, HFS at Translatorscafe.com
  • All-purpose reputation systems: Yelp, Inc., Customer Lobby

Attacks on reputation systems

Reputation systems are in general vulnerable to attacks, and many types of attacks are possible (Jøsang & Golbeck 2009). A typical example is the so-called Sybil attack where an attacker subverts the reputation system by creating a large number of pseudonymous entities, and using them to gain a disproportionately large influence (Lazzari 2010). A reputation system's vulnerability to a Sybil attack depends on how cheaply Sybils can be generated, the degree to which the reputation system accepts input from entities that do not have a chain of trust linking them to a trusted entity, and whether the reputation system treats all entities identically. It is named after the subject of the book Sybil, a case study of a woman with multiple personality disorder.
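
To make the attack concrete, the toy simulation below shows how cheaply created pseudonymous raters can swamp a naive averaging system. All numbers are invented for illustration.

    def average_score(ratings):
        """Naive reputation: the plain average of all submitted ratings."""
        return sum(ratings) / len(ratings)

    # Ten honest raters give a low-quality seller a score of 2 out of 5.
    honest = [2] * 10
    print(average_score(honest))  # 2.0

    # The seller creates 40 Sybil identities that each rate them 5.
    sybils = [5] * 40
    print(average_score(honest + sybils))  # 4.4: the Sybils dominate

    # Making identities costly to create, or weighting ratings by a
    # chain of trust to a known entity, raises the price of this attack.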

References

  • Resnick, P.; Zeckhauser, R.; Friedman, E.; Kuwabara, K. (2000). "Reputation Systems". Communications of the ACM.
  • Jøsang, A.; Ismail, R.; Boyd, C. (2007). "A Survey of Trust and Reputation Systems for Online Service Provision". Decision Support Systems. 43 (2).
  • Quercia, D.; Hailes, S.; Capra, L. (2007). "Lightweight Distributed Trust Propagation". ICDM 2007.
  • Guha, R.; Kumar, R.; Raghavan, P.; Tomkins, A. (2004). "Propagation of Trust and Distrust". WWW 2004.
  • Cheng, A.; Friedman, E. (2005). "Sybilproof Reputation Mechanisms". SIGCOMM Workshop on Economics of Peer-to-Peer Systems.
  • Alhoori, H.; Alvarez, O.; Furuta, R.; Muñiz, M.; Urbina, E. (2009). "Supporting the Creation of Scholarly Bibliographies by Communities through Online Reputation Based Social Collaboration". ECDL 2009: 180–191.
  • Quercia, D.; Hailes, S. (2010). "Sybil Attacks Against Mobile Users: Friends and Foes to the Rescue". IEEE INFOCOM 2010.
  • Dellarocas, C. (2003). "The Digitization of Word-of-Mouth: Promise and Challenges of Online Reputation Mechanisms". Management Science. 49 (10): 1407–1424. doi:10.1287/mnsc.49.10.1407.17308.
  • Douceur, J. R. (2002). "The Sybil Attack". IPTPS 2002.
  • Rheingold, Howard (2002). Smart Mobs: The Next Social Revolution. Cambridge, Massachusetts: Perseus.
  • Adams, Ethan (October 28, 2010). The Reputation Management Online Guide (Third ed.). p. 251. ISBN 9780805864267.
