# Pythagorean expectation


Pythagorean expectation is a formula invented by Bill James to estimate how many games a baseball team "should" have won based on the number of runs it scored and allowed. The difference between a team's actual and Pythagorean winning percentage can be used to gauge how lucky that team was. The name comes from the formula's resemblance to the Pythagorean theorem.[1]

The basic formula is:

$\mathrm{Win} = \frac{\text{runs scored}^2}{\text{runs scored}^2 + \text{runs allowed}^2} = \frac{1}{1+(\text{runs allowed}/\text{runs scored})^2}$

where Win is the winning ratio generated by the formula. The expected number of wins would be the expected winning ratio multiplied by the number of games played.
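The basic formula is straightforward to compute. The following sketch (function and variable names are illustrative, not from James' original write-up) reproduces the 2002 Yankees example worked out below:

```python
def pythagorean_win_pct(runs_scored: float, runs_allowed: float,
                        exponent: float = 2.0) -> float:
    """Expected winning percentage from runs scored and allowed."""
    return 1.0 / (1.0 + (runs_allowed / runs_scored) ** exponent)

def expected_wins(runs_scored: float, runs_allowed: float,
                  games: int = 162, exponent: float = 2.0) -> float:
    """Expected win total: winning ratio times games played."""
    return games * pythagorean_win_pct(runs_scored, runs_allowed, exponent)

# 2002 Yankees: 897 runs scored, 697 runs allowed
print(round(pythagorean_win_pct(897, 697), 4))  # 0.6235
print(round(expected_wins(897, 697), 1))        # 101.0
```

The `exponent` parameter makes it easy to substitute the alternative exponents discussed below.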

## Empirical origin

Empirically, this formula correlates fairly well with how baseball teams actually perform. However, statisticians have since found it to carry a fairly consistent error, typically about three games. For example, in 2002, the New York Yankees scored 897 runs and allowed 697 runs. According to James' original formula, the Yankees should have won 62.35% of their games.

$\mathrm{Win} = \frac{\text{897}^{2}}{\text{897}^{2} + \text{697}^{2}} = 0.623525865$

Based on a 162-game season, the Yankees should have won 101.01 games. The 2002 Yankees actually went 103-58.[2]

In an effort to correct this error, statisticians have searched extensively for the ideal exponent.

Among single-number exponents, 1.83 is the most accurate, and is the one used by Baseball-Reference.com, the premier website for baseball statistics across teams and time.[3] The updated formula therefore reads as follows:

$\mathrm{Win} = \frac{\text{runs scored}^{1.83}}{\text{runs scored}^{1.83} + \text{runs allowed}^{1.83}} = \frac{1}{1+(\text{runs allowed}/\text{runs scored})^{1.83}}$

The most widely known variable-exponent formula is the Pythagenport formula,[4] developed by Clay Davenport of Baseball Prospectus:

$\mathrm{Exponent} = 1.50 \cdot \log\left(\frac{R+RA}{G}\right) + 0.45$

He concluded that the exponent should be calculated for a given team from that team's runs scored (R), runs allowed (RA), and games (G). By allowing the exponent to vary by team rather than fixing it at a single value for every team in a season, Davenport was able to report a 3.9911 root-mean-square error, as opposed to a 4.126 root-mean-square error for an exponent of 2.[4]
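Davenport's exponent can be sketched as follows (the logarithm is taken as base 10 here, consistent with his published formula; the 2002 Yankees figures are used only as an illustration):

```python
import math

def pythagenport_exponent(runs_scored: float, runs_allowed: float,
                          games: int) -> float:
    """Davenport's Pythagenport exponent from total runs per game."""
    rpg = (runs_scored + runs_allowed) / games
    return 1.50 * math.log10(rpg) + 0.45

# 2002 Yankees: (897 + 697) / 162, about 9.84 combined runs per game
print(round(pythagenport_exponent(897, 697, 162), 3))  # 1.939
```

Note that at typical major-league scoring levels the exponent lands close to 2, which is why the simple squared formula already works well.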

Less well known but equally (if not more) effective is the Pythagenpat formula, developed by David Smyth.[5]

$\mathrm{Exponent} = \left(\frac{R+RA}{G}\right)^{0.287}$

Davenport expressed his support for this formula, saying:

After further review, I (Clay) have come to the conclusion that the so-called Smyth/Patriot method, aka Pythagenpat, is a better fit. In that, X = ((rs + ra)/g)^0.285, although there is some wiggle room for disagreement in the exponent. Anyway, that equation is simpler, more elegant, and gets the better answer over a wider range of runs scored than Pythagenport, including the mandatory value of 1 at 1 rpg.[6]

These formulas are only necessary when dealing with extreme situations in which the average number of runs scored per game is either very high or very low. For most situations, simply squaring each variable yields accurate results.

There are some systematic statistical deviations between actual winning percentage and expected winning percentage, which include bullpen quality and luck. In addition, the formula tends to regress toward the mean, as teams that win a lot of games tend to be underrepresented by the formula (meaning they "should" have won fewer games), and teams that lose a lot of games tend to be overrepresented (they "should" have won more).

## "Second-order" and "third-order" wins

In their Adjusted Standings Report,[7] Baseball Prospectus refers to different "orders" of wins for a team. The basic order of wins is simply the number of games they have won. However, because a team's record may not reflect its true talent due to luck, different measures of a team's talent were developed.

First-order wins, based on pure run differential, are the number of expected wins generated by the "pythagenport" formula (see above). In addition, to further filter out the distortions of luck, sabermetricians can also calculate a team's expected runs scored and allowed via a runs created-type equation (the most accurate at the team level being Base Runs). These formulas result in the team's expected number of runs given their offensive and defensive stats (total singles, doubles, walks, etc.), which helps to eliminate the luck factor of the order in which the team's hits and walks came within an inning. Using these stats, sabermetricians can calculate how many runs a team "should" have scored or allowed.

By plugging these expected runs scored and allowed into the pythagorean formula, one can generate second-order wins, the number of wins a team deserves based on the number of runs they should have scored and allowed given their component offensive and defensive statistics. Third-order wins are second-order wins that have been adjusted for strength of schedule (the quality of the opponent's pitching and hitting). Second- and third-order winning percentage has been shown to predict future actual team winning percentage better than both actual winning percentage and first-order winning percentage.

## Theoretical explanation

Initially the correlation between the formula and actual winning percentage was simply an experimental observation. In 2003, Hein Hundal provided an inexact derivation of the formula and showed that the Pythagorean exponent was approximately 2/(σπ) where σ was the standard deviation of runs scored by all teams divided by the average number of runs scored.[8] In 2006, Professor Steven J. Miller provided a statistical derivation of the formula[9] under some assumptions about baseball games: if runs for each team follow a Weibull distribution and the runs scored and allowed per game are statistically independent, then the formula gives the probability of winning.[9]
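The core of Miller's result can be checked numerically: if runs for and against follow independent Weibull distributions sharing a shape parameter γ, the win probability has exactly the Pythagorean form with γ as the exponent. The Monte Carlo sketch below (scale parameters 5.0 and 4.4 are arbitrary illustrative choices, and this simplified check omits the translation Miller applies to the Weibull distributions) compares the simulated win rate with the closed-form prediction:

```python
import random

def weibull_win_prob_sim(scale_for: float, scale_against: float,
                         shape: float, n: int = 200_000,
                         seed: int = 1) -> float:
    """Simulate n games with independent Weibull-distributed run totals
    and return the fraction won by the first team."""
    rng = random.Random(seed)
    wins = sum(
        rng.weibullvariate(scale_for, shape)
        > rng.weibullvariate(scale_against, shape)
        for _ in range(n)
    )
    return wins / n

shape = 1.83                  # plays the role of the Pythagorean exponent
a, b = 5.0, 4.4               # arbitrary Weibull scale parameters
predicted = a**shape / (a**shape + b**shape)
simulated = weibull_win_prob_sim(a, b, shape)
print(round(predicted, 3), round(simulated, 3))
```

The agreement follows because, for Weibull variables with a common shape γ, raising each to the power γ yields exponential variables, and the comparison of two exponentials gives the ratio a^γ/(a^γ + b^γ) directly.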

## Use in basketball

American sports executive Daryl Morey was the first to adapt James' Pythagorean expectation to professional basketball while a researcher at STATS, Inc. He found that using 13.91 as the exponent provided an acceptable model for predicting won-lost percentages:

$\mathrm{Win} = \frac{\text{points for}^{13.91}}{\text{points for}^{13.91} + \text{points against}^{13.91}}.$
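The much larger exponent reflects how basketball scores behave: point totals are high and game outcomes track scoring margin closely, so a small per-game edge translates into a large expected winning percentage. A brief sketch (the 102-100 scoring figures are illustrative, not from Morey's publication):

```python
def basketball_pythagorean(points_for: float, points_against: float,
                           exponent: float = 13.91) -> float:
    """Morey's basketball adaptation of the Pythagorean expectation."""
    return 1.0 / (1.0 + (points_against / points_for) ** exponent)

# Outscoring opponents by an average of 102-100 already implies
# winning well over half of one's games.
print(round(basketball_pythagorean(102, 100), 3))  # 0.568

# Hollinger's variant simply swaps in 16.5 as the exponent.
print(round(basketball_pythagorean(102, 100, exponent=16.5), 3))
```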

Daryl's "Modified Pythagorean Theorem" was first published in STATS Basketball Scoreboard, 1993-94.[10]

Noted basketball analyst Dean Oliver also applied James' Pythagorean theory to professional basketball. The result was similar.

Another noted basketball statistician, John Hollinger, uses a similar Pythagorean formula except with 16.5 as the exponent.

## Use in pro football

The formula has also been used in pro football by football stat website and publisher Football Outsiders, where it is known as Pythagorean projection. The 2011 edition of Football Outsiders Almanac[11] states, "From 1988 through 2004, 11 of 16 Super Bowls were won by the team that led the NFL in Pythagorean wins, while only seven were won by the team with the most actual victories. Super Bowl champions that led the league in Pythagorean wins but not actual wins include the 2004 Patriots, 2000 Ravens, 1999 Rams and 1997 Broncos."

Although Football Outsiders Almanac acknowledges that the formula had been less successful in picking Super Bowl participants from 2005–2008, it reasserted itself in 2009 and 2010. Furthermore, "[t]he Pythagorean projection is also still a valuable predictor of year-to-year improvement. Teams that win a minimum of one full game more than their Pythagorean projection tend to regress the following year; teams that win a minimum of one full game less than their Pythagorean projection tend to improve the following year, particularly if they were at or above .500 despite their underachieving. For example, the 2008 New Orleans Saints went 8-8 despite 9.5 Pythagorean wins, hinting at the improvement that came with the next year's championship season."