Richardson's theorem

In mathematics, Richardson's theorem establishes a limit on the extent to which an algorithm can decide whether certain mathematical expressions are equal. It states that for a certain fairly natural class of expressions, it is undecidable whether a particular expression E satisfies the equation E = 0, and similarly undecidable whether the functions defined by expressions E and F are everywhere equal. It was proved in 1968 by computer scientist Daniel Richardson of the University of Bath.

Specifically, the class of expressions for which the theorem holds is that generated by rational numbers, the number π, the number log 2, the variable x, the operations of addition, subtraction, multiplication, composition, and the sin, exp, and abs functions.

For some classes of expressions (generated by primitives other than those in Richardson's theorem) there exist algorithms that can determine whether an expression is zero.[1]
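In practice, zero-testing for Richardson's class can only be approached heuristically: evaluating an expression at sample points can refute E = 0 definitively, but can never prove it. The sketch below (an illustration, not anything from Richardson's paper) shows such a one-sided numeric test; the tolerance and sampling interval are arbitrary choices.

```python
import math
import random

def probably_zero(f, samples=1000, tol=1e-9, lo=-10.0, hi=10.0):
    """Heuristic zero test: evaluate f at random points.

    Returns False as soon as a clearly nonzero value is found
    (a definite answer), and True if every sample is close to 0.
    The True answer is only evidence, never a proof -- by
    Richardson's theorem no complete decision procedure exists
    for the full class of expressions.
    """
    for _ in range(samples):
        x = random.uniform(lo, hi)
        if abs(f(x)) > tol:
            return False
    return True

# sin^2(x) + cos^2(x) - 1 is identically zero, so the test passes
assert probably_zero(lambda x: math.sin(x)**2 + math.cos(x)**2 - 1)
# sin(x) is not identically zero, and a random sample quickly shows it
assert not probably_zero(lambda x: math.sin(x))
```

The asymmetry in this sketch mirrors the theorem: "E ≠ 0" is semi-decidable (a witness point suffices), while "E = 0" is not decidable in general.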

Statement of the theorem

Richardson's theorem can be stated as follows.[2] Let E be a set of real functions such that if A(x), B(x) ∈ E, then A(x) ± B(x), A(x)·B(x), A(B(x)) ∈ E, and such that E contains the rational numbers as constant functions. Then for expressions A(x) in E:

• if log 2, π, e^x, sin x ∈ E, then the problem of deciding whether A(x) ≥ 0 for all x is unsolvable;
• if also |x| ∈ E, then deciding whether A(x) = 0 for all x is unsolvable (this follows from the first part, since A(x) ≥ 0 for all x exactly when |A(x)| − A(x) = 0 for all x).

If furthermore there is a function B(x) ∈ E without an antiderivative in E, then the integration problem (deciding whether a given expression in E has an antiderivative in E) is unsolvable. For example, $e^{ax^2}$ has an antiderivative in the elementary functions if and only if a = 0.
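The example can be made concrete in standard notation. For a = 0 the integrand is the constant 1, with elementary antiderivative x; for a ≠ 0 the antiderivative is expressed via the error function, which is known not to be elementary (this classical fact is due to Liouville):

```latex
\int e^{0 \cdot x^2}\,dx = x + C,
\qquad
\int e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}\,\operatorname{erf}(x) + C .
```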