# No free lunch theorem

In mathematical folklore, the "no free lunch" (NFL) theorem (sometimes pluralized) of David Wolpert and William Macready appears in their 1997 paper "No Free Lunch Theorems for Optimization".[1] Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).[2] In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems".[3] The 1997 theorems of Wolpert and Macready are mathematically technical,[4] and some find them unintuitive. The folkloric NFL theorem is an easily stated and easily understood consequence of the theorems that Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them.

Various investigators have extended the work of Wolpert and Macready substantively. See No free lunch in search and optimization for treatment of the research area.

## Original NFL theorems

Wolpert and Macready give two NFL theorems that are closely related to the folkloric theorem. The first hypothesizes objective functions that do not change while optimization is in progress, and the second hypothesizes objective functions that may change.[1]

Theorem 1: For any pair of algorithms $a_1$ and $a_2$,
$\sum_f P(h_m^y | f, m, a_1) = \sum_f P(h_m^y | f, m, a_2),$
where $h_m^y$ denotes the ordered sequence of $m$ cost values observed during the run, and $P(h_m^y | f, m, a)$ is the conditional probability of obtaining that sequence by running the algorithm $a$ for $m$ steps on the objective function $f$.

The theorem can be equivalently formulated as follows:

Theorem 1: Given a finite set $V$ and a finite set $S$ of real numbers, assume that $f : V \to S$ is chosen at random according to the uniform distribution on the set $S^V$ of all possible functions from $V$ to $S$. Then, for the problem of optimizing $f$ over the set $V$, no algorithm performs better than blind search.

Here, blind search means that at each step of the algorithm, an element $v \in V$ is chosen uniformly at random from the elements of $V$ that have not been chosen previously.
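
To make the definition concrete, here is a minimal Python sketch of blind search under the assumptions above; the function is represented as a dict from $V$ to $S$, and the names `blind_search`, `f`, and `rng` are our illustration, not notation from Wolpert and Macready.

```python
import random

def blind_search(f, V, m, rng=None):
    """Blind search as defined above: at each step, choose an element of V
    uniformly at random among those not chosen previously, and record f there.

    f   : dict mapping elements of V to values in S
    V   : finite search space
    m   : number of distinct evaluations to make (m <= len(V))
    """
    rng = rng or random.Random()
    # Drawing m distinct points without replacement is equivalent to choosing
    # uniformly at each step among the not-yet-visited elements of V.
    visited = rng.sample(list(V), m)
    return [f[v] for v in visited]

# Example: draw one f : V -> S uniformly at random and run blind search on it.
rng = random.Random(42)
V, S = range(6), (0, 1, 2)
f = {v: rng.choice(S) for v in V}     # one function sampled uniformly from S^V
values = blind_search(f, V, m=4, rng=rng)
print("observed values:", values, "best so far:", max(values))
```

Since performance in the framework of Wolpert and Macready is a function of the observed value sequence, any performance statistic (such as the best value seen so far) can be computed from the returned list.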

In essence, this says that when all functions $f$ are equally likely, the probability of observing an arbitrary sequence of $m$ values in the course of optimization does not depend upon the algorithm. In the analytic framework of Wolpert and Macready, performance is a function of the sequence of observed values, so it follows easily that all algorithms have identically distributed performance when objective functions are drawn uniformly at random, and also that all algorithms have identical mean performance. But identical mean performance of all algorithms does not imply Theorem 1, and thus the folkloric theorem is not equivalent to the original theorem.
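
The averaging claim can be checked exhaustively on a toy instance. The sketch below (our illustration, not from the paper) enumerates every function $f : V \to S$ for a small $V$ and $S$, runs two deterministic non-revisiting algorithms, one non-adaptive and one adaptive, and confirms that the histogram of observed value sequences, which corresponds to $\sum_f P(h_m^y | f, m, a)$ up to normalization, is identical for both; the two algorithms are hypothetical examples chosen for the demonstration.

```python
from itertools import product
from collections import Counter

V = range(4)            # tiny search space
S = (0, 1)              # tiny value set; there are 2**4 = 16 functions in S^V

def fixed_scan(f, m):
    """Non-adaptive algorithm: evaluate points 0, 1, 2, ... in fixed order."""
    return tuple(f[v] for v in range(m))

def adaptive(f, m):
    """Adaptive algorithm: its next query depends on the values seen so far."""
    unvisited = list(V)
    history = [f[unvisited.pop(0)]]     # always start at point 0
    while len(history) < m:
        # If the last value matched the best so far, probe the far end of the
        # unvisited points; otherwise probe the near end. Never revisit a point.
        idx = -1 if history[-1] == max(history) else 0
        history.append(f[unvisited.pop(idx)])
    return tuple(history)

m = 3
dist = {}
for alg in (fixed_scan, adaptive):
    # Histogram of observed value sequences over ALL functions f : V -> S.
    dist[alg.__name__] = Counter(
        alg(dict(enumerate(vals)), m) for vals in product(S, repeat=len(V))
    )
assert dist["fixed_scan"] == dist["adaptive"]   # Theorem 1, for these two algorithms
print(dist["fixed_scan"])
```

Each of the $|S|^m = 8$ possible value sequences is produced by exactly $|S|^{|V|-m} = 2$ functions, regardless of which algorithm is run, which is what the assertion verifies.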

Theorem 2 establishes a similar, but "more subtle", NFL result for time-varying objective functions.[1]