# μ-recursive function

In mathematical logic and computer science, the general recursive functions (often shortened to recursive functions) or μ-recursive functions are a class of partial functions from natural numbers to natural numbers that are "computable" in an intuitive sense. In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems supporting the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every μ-recursive function is a primitive recursive function; the most famous example is the Ackermann function.

Other equivalent classes of functions are the λ-recursive functions and the functions that can be computed by Markov algorithms.

The subset of all total recursive functions with values in {0,1} is known in computational complexity theory as the complexity class R.

## Definition

The μ-recursive functions (or partial μ-recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the μ operator.

The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined everywhere, since the successor never returns zero. The primitive recursive functions are a subset of the total recursive functions, which are in turn a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive but not primitive recursive.
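As a concrete illustration, the Ackermann function has a short direct definition in C. This is the standard two-argument (Péter) formulation, shown here only to make the example tangible; it is total and computable, yet grows too fast for any primitive recursive bound.

```c
#include <assert.h>

/* The Ackermann function: total and mu-recursive,
   but provably not primitive recursive. */
int ack(int m, int n) {
    if (m == 0) return n + 1;           /* base case */
    if (n == 0) return ack(m - 1, 1);   /* recurse on m */
    return ack(m - 1, ack(m, n - 1));   /* nested recursion */
}
```

Only tiny arguments are practical: `ack(3, 3)` is 61, but the values explode quickly beyond that, which hints at why no primitive recursive function can dominate it.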

In C, the primitive recursion operator ρ (defined below, shown here for the case of two parameters) can be expressed as:

```c
// rho(g, h) for k = 2: computes f(x0, x1, x2), where
//   f(0, x1, x2)   = g(x1, x2)
//   f(i+1, x1, x2) = h(i, f(i, x1, x2), x1, x2)
int rho(int g(int, int), int h(int, int, int, int),
        int x0, int x1, int x2) {
    int res = g(x1, x2);
    for (int i = 0; i < x0; i++)
        res = h(i, res, x1, x2);
    return res;
}
```

Initial or "basic" functions: (In the following the subscripting is per Kleene (1952) p. 219. For more about some of the various symbolisms found in the literature see Symbolism below.)

1. Constant function: For each natural number $n\,$ and every $k\,$ :
$f(x_{1},\ldots ,x_{k})=n\,$ .
Alternative definitions use compositions of the successor function and use a zero function, that always returns zero, in place of the constant function.
2. Successor function S:
$S(x){\stackrel {\mathrm {def} }{=}}x+1\,$
3. Projection function $P_{i}^{k}$ (also called the Identity function $I_{i}^{k}$ ): For all natural numbers $i,k\,$ such that $1\leq i\leq k$ :
$P_{i}^{k}(x_{1},\ldots ,x_{k}){\stackrel {\mathrm {def} }{=}}x_{i}\,.$

Operators:

1. Composition operator $\circ \,$ (also called the substitution operator): Given an m-ary function $h(x_{1},\ldots ,x_{m})\,$ and m k-ary functions $g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k})$ :
$h\circ (g_{1},\ldots ,g_{m}){\stackrel {\mathrm {def} }{=}}f\quad {\text{where}}\quad f(x_{1},\ldots ,x_{k})=h(g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k}))\,.$
2. Primitive recursion operator $\rho \,$ : Given the k-ary function $g(x_{1},\ldots ,x_{k})\,$ and the (k+2)-ary function $h(y,z,x_{1},\ldots ,x_{k})\,$ :
$${\begin{aligned}\rho (g,h)&{\stackrel {\mathrm {def} }{=}}f\quad {\text{where the }}(k+1){\text{-ary function }}f{\text{ is defined by}}\\f(0,x_{1},\ldots ,x_{k})&=g(x_{1},\ldots ,x_{k})\\f(y+1,x_{1},\ldots ,x_{k})&=h(y,f(y,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k})\,.\end{aligned}}$$
3. Minimisation operator $\mu \,$ : Given a (k+1)-ary total function $f(y,x_{1},\ldots ,x_{k})\,$ , the k-ary function $\mu (f)$ is defined by:
$${\begin{aligned}\mu (f)(x_{1},\ldots ,x_{k})=z\quad {\stackrel {\mathrm {def} }{\iff }}\quad f(z,x_{1},\ldots ,x_{k})&=0\quad {\text{and}}\\f(i,x_{1},\ldots ,x_{k})&>0\quad {\text{for}}\quad i=0,\ldots ,z-1.\end{aligned}}$$

Intuitively, minimisation searches, beginning from 0 and proceeding upwards, for the smallest argument that causes the function to return zero; if there is no such argument, the search never terminates, and the function is undefined for those arguments.
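The μ operator can be sketched in C for the case k = 1. The names `mu` and `ge_sqrt` are illustrative, not from the sources above; note that the loop genuinely fails to terminate when no zero exists, which is exactly how partiality arises.

```c
#include <assert.h>

/* Minimisation for k = 1: returns the least z with f(z, x) == 0.
   If no such z exists, the loop runs forever -- the function is
   then undefined at x, mirroring the partiality of mu-recursion. */
int mu(int (*f)(int, int), int x) {
    for (int z = 0; ; z++)
        if (f(z, x) == 0)
            return z;
}

/* Example: returns 0 once z*z >= x, so mu(ge_sqrt, x) computes
   the least z whose square is at least x (the ceiling of sqrt(x)). */
int ge_sqrt(int z, int x) {
    return z * z >= x ? 0 : 1;
}
```

For instance, `mu(ge_sqrt, 10)` returns 4, since 0, 1, 4 and 9 all fall short of 10 but 16 does not.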

The strong equality operator $\simeq$ can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that

$f(x_{1},\ldots ,x_{k})\simeq g(x_{1},\ldots ,x_{l})$ holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined.

## Equivalence with other models of computability

In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values).

## Normal form theorem

A normal form theorem due to Kleene says that for each k there are primitive recursive functions $U(y)\!$ and $T(y,e,x_{1},\ldots ,x_{k})\!$ such that for any μ-recursive function $f(x_{1},\ldots ,x_{k})\!$ with k free variables there is an e such that

$f(x_{1},\ldots ,x_{k})\simeq U(\mu y\,T(y,e,x_{1},\ldots ,x_{k}))$ .

The number e is called an index or Gödel number for the function f. A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function.

Minsky (1967) observes (as does Boolos-Burgess-Jeffrey (2002) pp. 94–95) that the U defined above is in essence the μ-recursive equivalent of the universal Turing machine:

> To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. To construct U directly would involve essentially the same amount of effort, and essentially the same ideas, as we have invested in constructing the universal Turing machine. (italics in original, Minsky (1967) p. 189)

## Symbolism

A number of different symbolisms are used in the literature. An advantage of using a symbolism is that a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form. In the following we will abbreviate the string of parameters x1, ..., xn as **x**:

• Constant function: Kleene uses " $C_{q}^{n}(\mathbf {x} )=q$ " and Boolos-Burgess-Jeffrey (2002) (B-B-J) use the abbreviation " $\operatorname {const} _{n}(\mathbf {x} )=n$ ":
e.g. $C_{13}^{7}(r,s,t,u,v,w,x)=13$
e.g. $\operatorname {const} _{13}(r,s,t,u,v,w,x)=13$
• Successor function: Kleene uses x' and S for "Successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows:
$S(a)=a+1{\stackrel {\mathrm {def} }{=}}a'$ , where $1{\stackrel {\mathrm {def} }{=}}0'$ , $2{\stackrel {\mathrm {def} }{=}}0''$ , etc.
• Identity function: Kleene (1952) uses " $U_{i}^{n}$ " to indicate the identity function that returns the variable $x_{i}$ ; B-B-J use the identity function $\operatorname {id} _{i}^{n}$ over the variables $x_{1}$ to $x_{n}$ :
$U_{i}^{n}(\mathbf {x} )=\operatorname {id} _{i}^{n}(\mathbf {x} )=x_{i}$
e.g. $U_{3}^{7}=\operatorname {id} _{3}^{7}(r,s,t,u,v,w,x)=t$
• Composition (Substitution) operator: Kleene uses a bold-face $\mathbf {S} _{n}^{m}$ (not to be confused with his S for "successor"!). The superscript "m" refers to the mth function "$f_{m}$", whereas the subscript "n" refers to the nth variable "$x_{n}$":
If we are given $h(\mathbf {x} )=g(f_{1}(\mathbf {x} ),\ldots ,f_{m}(\mathbf {x} ))$
$h(\mathbf {x} )=\mathbf {S} _{n}^{m}(g,f_{1},\ldots ,f_{m})$
In a similar manner, but without the sub- and superscripts, B-B-J write:
$h(\mathbf {x} )=\operatorname {Cn} [g,f_{1},\ldots ,f_{m}](\mathbf {x} )$
• Primitive Recursion: Kleene uses the symbol " $\mathbf {R} _{n}({\text{base step, induction step}})$ " where n indicates the number of variables; B-B-J use " $\operatorname {Pr} ({\text{base step, induction step}})(\mathbf {x} )$ ". Given:
• base step: $h(0,\mathbf {x} )=f(\mathbf {x} )$ , and
• induction step: $h(y+1,\mathbf {x} )=g(y,h(y,\mathbf {x} ),\mathbf {x} )$
Example: primitive recursion definition of a + b:
• base step: $f(0,a)=a=U_{1}^{1}(a)$
• induction step: $f(b',a)=(f(b,a))'=g(b,f(b,a),a)=g(b,c,a)=c'=S(U_{2}^{3}(b,c,a))$
$\mathbf {R} _{2}\{U_{1}^{1}(a),S[U_{2}^{3}(b,c,a)]\}$
$\operatorname {Pr} \{U_{1}^{1}(a),S[U_{2}^{3}(b,c,a)]\}$

Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice the reversal of the variables a and b). He starts from the three initial functions (1)–(3), from which the induction-step function (4) and the recursion (5) are built:

1. $S(a)=a'$
2. $U_{1}^{1}(a)=a$
3. $U_{2}^{3}(b,c,a)=c$
4. $g(b,c,a)=S(U_{2}^{3}(b,c,a))=c'$
5. base step: $h(0,a)=U_{1}^{1}(a)$
induction step: $h(b',a)=g(b,h(b,a),a)$

He arrives at:

$a+b=\mathbf {R} _{2}[U_{1}^{1},\mathbf {S} _{3}^{1}(S,U_{2}^{3})]$
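Kleene's derivation above can be checked mechanically. The following C sketch is an illustration under the obvious assumptions: the function names mirror the symbols of the derivation, and the recursion operator R2 is unrolled into a loop rather than implemented as a higher-order operator.

```c
#include <assert.h>

/* Initial functions, named after the symbols in the derivation. */
int S(int a)                 { return a + 1; }   /* successor */
int U11(int a)               { return a; }       /* U^1_1: identity */
int U23(int b, int c, int a) { return c; }       /* U^3_2: second of three */

/* g(b, c, a) = S(U^3_2(b, c, a)) = c' : the induction-step function. */
int g(int b, int c, int a) { return S(U23(b, c, a)); }

/* R2[U^1_1, g]: primitive recursion on b, realised as a loop.
   base step:      h(0, a)  = U11(a)
   induction step: h(b', a) = g(b, h(b, a), a) */
int add(int b, int a) {
    int c = U11(a);
    for (int i = 0; i < b; i++)
        c = g(i, c, a);
    return c;
}
```

Tracing `add(3, 4)`: the base step yields 4, and three applications of g give 5, 6, then 7, i.e. 3 + 4.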