# Least fixed point

Figure: The function f(x) = x² − 4 has two fixed points, shown as the intersections of its graph with the line y = x (blue); its least one is at x = 1/2 − √17/2.

In order theory, a branch of mathematics, the least fixed point (lfp or LFP, sometimes also smallest fixed point) of a function from a partially ordered set to itself is the fixed point which is less than each other fixed point, according to the set's order. A function need not have a least fixed point, and cannot have more than one.

For example, with the usual order on the real numbers, the least fixed point of the real function f(x) = x² is x = 0 (since the only other fixed point is 1 and 0 < 1). In contrast, f(x) = x + 1 has no fixed point at all, let alone a least one, and f(x) = x has infinitely many fixed points, but no least one.
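These examples can be checked directly from the definition: a fixed point of f is a value x with f(x) = x. A minimal sketch (the function names here are illustrative, not from the article):

```python
def is_fixed_point(f, x):
    """A fixed point of f is a value x with f(x) == x."""
    return f(x) == x

square = lambda x: x * x

# f(x) = x*x has exactly two fixed points, 0 and 1; 0 is the least.
assert is_fixed_point(square, 0)
assert is_fixed_point(square, 1)
assert not is_fixed_point(square, 2)

# f(x) = x + 1 has no fixed point at all (checked here on a finite range).
assert not any(is_fixed_point(lambda x: x + 1, x) for x in range(-100, 100))
```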

## Applications

Many fixed-point theorems yield algorithms for locating the least fixed point. Least fixed points often have desirable properties that arbitrary fixed points do not.
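One such algorithm, in the style of the Kleene fixed-point theorem, iterates f starting from the least element of the order until the value stops changing; for a monotone function on a finite poset this reaches the least fixed point. A sketch on the powerset lattice (the example function f below is chosen for illustration only):

```python
def least_fixed_point(f, bottom):
    """Iterate f from the least element until the value stabilizes.
    For a monotone f on a finite poset this yields the least fixed point."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# A monotone function on the powerset of {0, 1, 2, 3}, ordered by inclusion:
# f(S) adds 0, and the successor of every element of S up to 3.
f = lambda S: S | {0} | {n + 1 for n in S if n + 1 <= 3}

# Iterating from the bottom element (the empty set) reaches {0, 1, 2, 3}.
print(least_fixed_point(f, frozenset()))
```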

In mathematical logic and computer science, the least fixed point is related to making recursive definitions (see domain theory and/or denotational semantics for details).
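The connection to recursive definitions can be illustrated concretely: a definition like factorial is the least fixed point of a functional that maps one partial function to a slightly more defined one, starting from the everywhere-undefined function. A finite, illustrative sketch (partial functions are modeled here as dicts; this is an assumption of the example, not notation from domain theory):

```python
def step(g):
    """One application of the factorial functional to a partial function g:
    defines 0! = 1, and (n+1)! = (n+1) * n! wherever g already defines n!."""
    h = {0: 1}
    for n, v in g.items():
        h[n + 1] = (n + 1) * v
    return h

g = {}                 # bottom: the everywhere-undefined partial function
for _ in range(5):     # each iteration defines factorial on one more input
    g = step(g)
print(g[4])            # 24
```

Each iterate is a better-defined approximation of factorial; the least fixed point of the functional is the full factorial function.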

Immerman [1][2] and Vardi [3] independently showed the descriptive complexity result that the polynomial-time computable properties of linearly ordered structures are definable in FO(LFP), i.e. in first-order logic with a least fixed point operator. However, FO(LFP) is too weak to express all polynomial-time properties of unordered structures (for instance that a structure has even size).
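A standard example of a property expressible with a least fixed point operator but not in plain first-order logic is reachability: the transitive closure of an edge relation is the least fixed point of T ↦ edges ∪ (T ∘ edges). A sketch of that iteration over a finite relation:

```python
def transitive_closure(edges):
    """Least fixed point of T ↦ edges ∪ (T ∘ edges), iterated from ∅."""
    closure = set()
    while True:
        new = closure | edges | {
            (a, d) for (a, b) in closure for (c, d) in edges if b == c
        }
        if new == closure:
            return closure
        closure = new

print(transitive_closure({(1, 2), (2, 3)}))  # {(1, 2), (2, 3), (1, 3)}
```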

## Greatest fixed points

Greatest fixed points can also be determined, but they are less commonly used than least fixed points. In computer science, however, they give rise, dually to least fixed points, to corecursion and codata.
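On a finite lattice the duality is direct: the same iteration that reaches the least fixed point from the bottom element reaches the greatest fixed point from the top element. A sketch (the example function f below is illustrative):

```python
def fixed_point_from(f, start):
    """Iterate f until the value stabilizes. For a monotone f on a finite
    lattice, starting from the top element yields the greatest fixed point,
    and starting from the bottom element yields the least fixed point."""
    x = start
    while (y := f(x)) != x:
        x = y
    return x

# f(S) = S ∩ {0, 1} on the powerset of {0, 1, 2}, ordered by inclusion.
f = lambda S: S & frozenset({0, 1})

print(fixed_point_from(f, frozenset({0, 1, 2})))  # greatest: frozenset({0, 1})
print(fixed_point_from(f, frozenset()))           # least: frozenset()
```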