Bradley–Terry model


The Bradley–Terry model is a probability model that can predict the outcome of a paired comparison. Given a pair of individuals i and j drawn from some population, it estimates the probability that the pairwise comparison i > j turns out true, as

P(i > j) = p_i / (p_i + p_j),

where p_i is a positive real-valued score assigned to individual i. The comparison i > j can be read as "i is preferred to j", "i ranks higher than j", or "i beats j", depending on the application.

For example, p_i may represent the skill of a team in a sports tournament, estimated from the number of times i has won a match. P(i > j) then represents the probability that i will win a match against j.[1][2] Another example used to explain the model's purpose is that of scoring products in a certain category by quality. While it is hard for a person to draft a direct ranking of (many) brands of wine, it may be feasible to compare a sample of pairs of wines and say, for each pair, which one is better. The Bradley–Terry model can then be used to derive a full ranking.[2]
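In code, the model's win probability is simply a ratio of scores. A minimal Python sketch (the function name and the scores are illustrative, not from the model's literature):

```python
def win_probability(p_i: float, p_j: float) -> float:
    """Bradley–Terry probability that individual i beats individual j."""
    return p_i / (p_i + p_j)

# Hypothetical skill scores: a team rated 3.0 against a team rated 1.0
print(win_probability(3.0, 1.0))  # 0.75
```

Note that only the ratio of the scores matters: doubling both p_i and p_j leaves the probability unchanged.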

History and applications

The model is named after R. A. Bradley and M. E. Terry,[3] who presented it in 1952,[4] although it had already been studied by Zermelo in the 1920s.[1][5][6]

Real-world applications of the model include estimation of the influence of statistical journals, or ranking documents by relevance in machine-learned search engines.[7] In the latter application, the comparison i > j may reflect that document i is more relevant to the user's query than document j, so it should be displayed earlier in the results list. The individual scores p_i then express the relevance of each document, and can be estimated from the frequency with which users click particular "hits" when presented with a result list.[8]

Definition

The Bradley–Terry model can be parametrized in various ways. One way to do so is to pick a single parameter per individual, leading to a model of n parameters p_1, ..., p_n.[9] Another variant, in fact the version considered by Bradley and Terry,[2] uses exponential score functions p_i = e^(β_i), so that

P(i > j) = e^(β_i) / (e^(β_i) + e^(β_j)),

or, using the logit (and disallowing ties),[1]

logit P(i > j) = log [ P(i > j) / P(j > i) ] = β_i − β_j,

reducing the model to logistic regression on pairs of individuals.
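The equivalence between the exponential-score form and the logistic form can be checked numerically; a small sketch (the function names are ours, chosen for illustration):

```python
import math

def prob_from_scores(beta_i: float, beta_j: float) -> float:
    # Exponential score parametrization: p_i = exp(beta_i).
    return math.exp(beta_i) / (math.exp(beta_i) + math.exp(beta_j))

def prob_from_logit(beta_i: float, beta_j: float) -> float:
    # Logistic-regression form: sigmoid of the score difference.
    return 1.0 / (1.0 + math.exp(-(beta_i - beta_j)))

# The two parametrizations agree for any pair of beta values.
print(abs(prob_from_scores(0.4, -0.2) - prob_from_logit(0.4, -0.2)))
```

Algebraically this is just e^a / (e^a + e^b) = 1 / (1 + e^(−(a−b))), i.e., dividing numerator and denominator by e^a.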

Estimating the parameters

The following algorithm computes the parameters p_i of the basic version of the model from a sample of observations. Formally, it computes a maximum likelihood estimate, i.e., it maximizes the likelihood of the observed data. The algorithm dates back to the work of Zermelo.[1]

The observations required are the outcomes of previous comparisons, for example, pairs (i, j) where i beats j. Summarizing these outcomes as w_ij, the number of times i has beaten j, we obtain the log-likelihood of the parameter vector p = (p_1, ..., p_n) as[1]

ln L(p) = Σ_i Σ_j [ w_ij ln p_i − w_ij ln(p_i + p_j) ].

Denote the number of comparisons "won" by i as W_i = Σ_j w_ij. Starting from an arbitrary vector p, the algorithm iteratively performs the update

p_i' = W_i [ Σ_{j ≠ i} (w_ij + w_ji) / (p_i + p_j) ]^(−1)

for all i. After computing all of the new parameters, they should be renormalized,

p_i ← p_i' / Σ_j p_j'.

This estimation procedure improves the log-likelihood in every iteration, and eventually converges to the unique maximum, provided the comparison results cannot be split into two groups of individuals such that one group never loses to the other.

Worked example of iterated procedure

Suppose there are 4 teams, who have played a total of 22 games among themselves. Each team's wins are given in the rows of the table below, and the opponents are given as the columns. For example, Team A has beaten Team B two times and lost to Team B three times; not played Team C at all; and won once and lost four times against Team D.

Results
      A   B   C   D
  A   -   2   0   1
  B   3   -   5   0
  C   0   3   -   1
  D   4   0   3   -

We would like to compute the relative strengths of the teams; that is, we want to compute one parameter per team, with higher parameters indicating greater prowess. We initialize the 4 entries in the parameter vector p arbitrarily, for example, assigning the value 1 to each team: [1, 1, 1, 1].

Using the update rule stated above, the new, unnormalized parameters are

p_A' = W_A / [ (w_AB + w_BA)/(p_A + p_B) + (w_AC + w_CA)/(p_A + p_C) + (w_AD + w_DA)/(p_A + p_D) ] = 3 / (5/2 + 0/2 + 5/2) = 0.6,

and similarly p_B' = 8/6.5 ≈ 1.231, p_C' = 4/6 ≈ 0.667, and p_D' = 7/4.5 ≈ 1.556. Then, we normalize all the parameters by dividing them by their sum, ≈ 4.053, to get the estimated parameters p = [0.148, 0.304, 0.164, 0.384].

To get better estimates, we can repeat the process using the new p values. For example,

p_A' = 3 / [ (2 + 3)/(0.148 + 0.304) + (1 + 4)/(0.148 + 0.384) ] = 3 / (5/0.452 + 5/0.532) ≈ 0.147.

Repeating this for the remaining parameters and normalizing, we get p = [0.145, 0.280, 0.162, 0.413]. Repeating this process a total of 20 times shows rapid convergence to p = [0.139, 0.226, 0.143, 0.492]: Team D is the strongest, Team B is the second strongest, and Team A and Team C are nearly equal in strength, but weaker than Teams B and D. The Bradley–Terry model thus estimates the relative strengths of all 4 teams, even though not all pairs of teams have played each other.
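The iteration above is short enough to reproduce in full. The following Python sketch is our own implementation, written directly from the update and renormalization formulas in this article:

```python
# Win matrix from the table above: wins[i][j] = times team i beat team j,
# with teams indexed A=0, B=1, C=2, D=3.
wins = [
    [0, 2, 0, 1],
    [3, 0, 5, 0],
    [0, 3, 0, 1],
    [4, 0, 3, 0],
]

def bradley_terry(wins, iterations=20):
    """Iterative maximum-likelihood fit of the basic Bradley–Terry model."""
    n = len(wins)
    W = [sum(row) for row in wins]   # total wins per team
    p = [1.0] * n                    # arbitrary starting parameters
    for _ in range(iterations):
        # Zermelo's update for each parameter ...
        p_new = [
            W[i] / sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                       for j in range(n) if j != i)
            for i in range(n)
        ]
        # ... followed by renormalization.
        total = sum(p_new)
        p = [x / total for x in p_new]
    return p

print([round(x, 3) for x in bradley_terry(wins, iterations=1)])   # first pass
print([round(x, 3) for x in bradley_terry(wins, iterations=20)])  # converged
```

After one iteration this reproduces the hand-computed [0.148, 0.304, 0.164, 0.384]; after 20 iterations the parameters order the teams as D, then B, then C and A close together.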

References

  1. ^ a b c d e Hunter, David R. (2004). "MM algorithms for generalized Bradley–Terry models". The Annals of Statistics. 32 (1): 384–406. CiteSeerX 10.1.1.110.7878. doi:10.1214/aos/1079120141. JSTOR 3448514.
  2. ^ a b c Agresti, Alan (2014). Categorical Data Analysis. John Wiley & Sons. pp. 436–439.
  3. ^ E.E.M. van Berkum. "Bradley-Terry model". Encyclopedia of Mathematics. Retrieved 18 November 2014.
  4. ^ Bradley, Ralph Allan; Terry, Milton E. (1952). "Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons". Biometrika. 39 (3/4): 324–345. doi:10.2307/2334029. JSTOR 2334029.
  5. ^ Zermelo, Ernst (1929). "Die Berechnung der Turnier-Ergebnisse als ein Maximumproblem der Wahrscheinlichkeitsrechnung". Mathematische Zeitschrift. 29 (1): 436–460. doi:10.1007/BF01180541.
  6. ^ Heinz-Dieter Ebbinghaus (2007), Ernst Zermelo: An Approach to His Life and Work, pp. 268–269, ISBN 9783540495536
  7. ^ Szummer, Martin; Yilmaz, Emine (2011). Semi-supervised learning to rank with preference regularization (PDF). CIKM.
  8. ^ Radlinski, Filip; Joachims, Thorsten (2007). Active Exploration for Learning Rankings from Clickthrough Data (PDF). KDD '07 Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 570–579. doi:10.1145/1281192.1281254.
  9. ^ Fangzhao Wu; Jun Xu; Hang Li; Xin Jiang (2014). Ranking Optimization with Constraints. CIKM '14 Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. pp. 1049–1058. doi:10.1145/2661829.2661895.