Chessmetrics is a weighted average of past performance. A player's score is based on his or her percentage score against opponents, weighted by the opponents' ratings and by the time elapsed since each game. A 10% increase in percentage score is equivalent to an increase of 85 rating points.
The weighting of previous matches decreases linearly from 100% for just-finished matches to zero for matches played more than two years ago.
Performance rating adjustment after tournament:
- Performance Rating = Average Opponents' Rating + [(PctScore - 0.50) * 850]
Weighting of past tournaments (age in months):
- Weight = 100% * (24 - age) / 24
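The two formulas above can be sketched in a few lines of Python. This is an illustrative reconstruction based only on the formulas as stated here, not the actual Chessmetrics implementation; the function names are invented for clarity.

```python
# Sketch of the performance-rating formula and the linear
# tournament-age weighting described above (illustrative only).

def performance_rating(avg_opponent_rating: float, pct_score: float) -> float:
    """Performance Rating = Average Opponents' Rating + (PctScore - 0.50) * 850."""
    return avg_opponent_rating + (pct_score - 0.50) * 850

def tournament_weight(age_months: float) -> float:
    """Linear weight: 100% for a just-finished event, falling to 0% at 24 months."""
    return max(0.0, (24 - age_months) / 24)

# A 60% score (10% above an even score) against 2600-rated opposition:
print(performance_rating(2600, 0.60))  # 2685.0 -- the "+85 points per 10%" rule
print(tournament_weight(12))           # 0.5 -- a year-old event counts half
```

Note how the 850 multiplier encodes the stated equivalence: each 10% of score above 50% adds 85 rating points to the performance rating.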
In 2006 economists Charles C. Moul and John V. C. Nye used Chessmetrics to determine the "expected" results of games, and wrote:
Ratings in chess that make use of rigorous statistics to produce good estimates of relative player strength are now relatively common, but comparing ratings across different time periods is often complicated by idiosyncratic changes (cf. Elo, 1968 for the pioneering discussion). Sonas uses the same rating formula throughout our sample and updates this rating monthly instead of annually, as is more common. Moreover, retrospective grading allows him to establish rankings that are unbiased estimates of the "true" relative strengths of players.
Jeff Sonas, Chessmetrics' author, claims that Chessmetrics has advantages over Elo ratings:
- Elo ratings have a built-in bias against top players: a top player's rating will suffer even if he or she achieves, against a lower-rated player, exactly the percentage result that the two players' Elo ratings predict.
- Chessmetrics is more responsive to change than Elo ratings; for example, it tracks improvements in the playing strength of a rising star more promptly and accurately. Sonas regards this responsiveness as an important part of Chessmetrics's superior ability to predict results.
- Chessmetrics takes account of the empirically-measured difference in results when playing as White or Black, which the Elo system ignores.
Chessmetrics occasionally gives strange-looking results, particularly for players with long periods of inactivity. For instance, World Chess Champion Emanuel Lasker was inactive for much of 1912–1914, and as a consequence dropped from #1 to #12 in the world in the Chessmetrics rankings, just before his famous victory ahead of all the world's other leading players at the 1914 Saint Petersburg chess tournament.
Chessmetrics can only be used to compare the level of a player against their peers; it is not appropriate for comparing players of different eras. GM John Nunn has highlighted the absurdity of attempts to compare the objective playing strengths of players from different eras, using the example of Hugo Suechting, who was ranked 27th in the world and rated 2559 by Chessmetrics in 1911, after the elite tournament at Karlsbad. An analysis of Suechting's games from that period suggests that his level of play was at best 2100 by today's standards.
The original article on Chessmetrics was published in Chessbase in October 2002. Since then, Chessmetrics has become reasonably well known, due to numerous articles in Chessbase and The Week in Chess.
- Chessmetrics formulas page.
- Charles C. Moul and John V. C. Nye (May 2006). "Did the Soviets Collude? A Statistical Analysis of Championship Chess 1940–64". The Social Science Research Network. Retrieved 2008-07-08. The full article is freely available via links on the cited web page.
- Jeff Sonas (22 October 2002). "The Sonas Rating Formula – Better than Elo?". Retrieved 2008-07-08.
- Chessmetrics Player Profile: Emanuel Lasker, Chessmetrics web site.
- Nunn's criticism.
- For instance, the Chessmetrics articles on the comparison of top chess players throughout history at Chessbase: Part 1, Part 2, Part 3, Part 4.
- John L. Watson, Book Reviews #82, published at The Week in Chess.
- M. Guid and I. Bratko, "Computer Analysis of World Chess Champions", ICGA Journal, Vol. 29, No. 2 (June 2006), pp. 65–73.