Implicit function theorem

From Wikipedia, the free encyclopedia

In multivariable calculus, the implicit function theorem, also known (particularly in Italy) as Dini's theorem, is a tool that allows relations to be converted into functions of several real variables. It does this by representing the relation as the graph of a function. There may not be a single function whose graph is the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function.

The theorem states that if the equation R(x, y) = 0 satisfies some mild conditions on its partial derivatives, then one can in principle (though not necessarily with an analytic expression) express y in terms of x as f(x), at least on some disk. Then this implicit function f(x),[1]:204–206 implied by R(x, y) = 0, is such that geometrically the locus defined by R(x, y) = 0 will coincide locally (that is, in that disk) with the graph of f.

First example

The unit circle can be specified as the level curve f(x, y) = 1 of the function f(x, y) = x^2 + y^2. Around point A, y can be expressed as a function y(x), specifically g_1(x)=\sqrt{1-x^2}. No such function exists around point B.

If we define the function f(x,y)=x^2 + y^2, then the equation f(x, y) = 1 cuts out the unit circle as the level set {(x, y)| f(x, y) = 1}. There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely \pm\sqrt{1-x^2}.

However, it is possible to represent part of the circle as the graph of a function of one variable. If we let g_1(x) = \sqrt{1-x^2} for −1 < x < 1, then the graph of y = g_1(x) provides the upper half of the circle. Similarly, if g_2(x) = -\sqrt{1-x^2}, then the graph of y = g_2(x) gives the lower half of the circle.

The purpose of the implicit function theorem is to tell us the existence of functions like g_1(x) and g_2(x), even in situations where we cannot write down explicit formulas. It guarantees that g_1(x) and g_2(x) are differentiable, and it even works in situations where we do not have a formula for f(x, y).
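The two branches above can be checked numerically. The following Python sketch is illustrative and not part of the article; the function names f, g1, g2 mirror the article's f, g_1, g_2, and the sample points are arbitrary.

```python
import math

def f(x, y):
    # The function whose level set f(x, y) = 1 is the unit circle.
    return x**2 + y**2

def g1(x):
    # Upper semicircle branch, valid for -1 < x < 1.
    return math.sqrt(1 - x**2)

def g2(x):
    # Lower semicircle branch, valid for -1 < x < 1.
    return -math.sqrt(1 - x**2)

# Both branches satisfy the defining relation f(x, g(x)) = 1 on (-1, 1).
for x in [-0.9, -0.5, 0.0, 0.5, 0.9]:
    assert abs(f(x, g1(x)) - 1.0) < 1e-12
    assert abs(f(x, g2(x)) - 1.0) < 1e-12
```

Note that neither branch is defined past x = ±1, and at x = ±1 the two branches meet: exactly the points where, as shown later, the hypothesis of the theorem fails.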

Statement of the theorem

Let f : Rn+m → Rm be a continuously differentiable function. We think of Rn+m as the Cartesian product Rn × Rm, and we write a point of this product as (x, y) = (x1, ..., xn, y1, ..., ym). Starting from the given function f, our goal is to construct a function g : Rn → Rm whose graph (x, g(x)) is precisely the set of all (x, y) such that f(x, y) = 0.

As noted above, this may not always be possible. We will therefore fix a point (a, b) = (a1, ..., an, b1, ..., bm) which satisfies f(a, b) = 0, and we will ask for a g that works near the point (a, b). In other words, we want an open set U of Rn containing a, an open set V of Rm containing b, and a function g : U → V such that the graph of g satisfies the relation f = 0 on U × V. In symbols,

\{ (\mathbf{x}, g(\mathbf{x})) \mid \mathbf x \in U \} = \{ (\mathbf{x}, \mathbf{y})\in U \times V \mid f(\mathbf{x}, \mathbf{y}) = 0 \}.

To state the implicit function theorem, we need the Jacobian matrix of f, which is the matrix of the partial derivatives of f. Abbreviating (a1, ..., an, b1, ..., bm) to (a, b), the Jacobian matrix is

(Df)(\mathbf{a},\mathbf{b}) =  \left[\begin{matrix}
 \frac{\partial f_1}{\partial x_1}(\mathbf{a},\mathbf{b}) &
    \cdots & \frac{\partial f_1}{\partial x_n}(\mathbf{a},\mathbf{b})\\
 \vdots & \ddots & \vdots\\
 \frac{\partial f_m}{\partial x_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_m}{\partial x_n}(\mathbf{a},\mathbf{b})
\end{matrix}\right|\left.
\begin{matrix} 
 \frac{\partial f_1}{\partial y_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_1}{\partial y_m}(\mathbf{a},\mathbf{b})\\
 \vdots & \ddots & \vdots\\
\frac{\partial f_m}{\partial y_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_m}{\partial y_m}(\mathbf{a},\mathbf{b})\\
\end{matrix}\right] = [X|Y]

where X is the matrix of partial derivatives in the variables xi and Y is the matrix of partial derivatives in the variables yj. The implicit function theorem says that if Y is an invertible matrix, then there are U, V, and g as desired. Writing all the hypotheses together gives the following statement.

Let f : Rn+m → Rm be a continuously differentiable function, and let Rn+m have coordinates (x, y). Fix a point (a, b) = (a1, ..., an, b1, ..., bm) with f(a, b) = c, where c ∈ Rm. If the matrix [(∂fi/∂yj)(a, b)] is invertible, then there exist an open set U containing a, an open set V containing b, and a unique continuously differentiable function g : U → V such that
\{ (\mathbf{x}, g(\mathbf{x}))|\mathbf x \in U  \} = \{ (\mathbf{x}, \mathbf{y}) \in U \times V| f(\mathbf{x}, \mathbf{y}) = \mathbf{c} \}.

Regularity

It can be proven that whenever we have the additional hypothesis that f is k times continuously differentiable inside U × V, the same holds true for the explicit function g inside U, and

\frac{\partial  g}{\partial  x_j}(x)=-\left( \frac{\partial f}{\partial y}(x,g(x)) \right)^{-1}  \frac{\partial f}{\partial x_j}(x,g(x)) .

Similarly, if f is analytic inside U × V, then the same holds true for the explicit function g inside U.[2] This generalization is called the analytic implicit function theorem.
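The derivative formula above can be checked on a concrete case: for the unit circle treated in the next section, f(x, y) = x^2 + y^2 − 1 with explicit branch g_1(x) = √(1 − x^2), so ∂f/∂x = 2x and ∂f/∂y = 2y. The Python sketch below is illustrative only; the function names are not from the article.

```python
import math

def g1(x):
    # Explicit upper-semicircle branch of x^2 + y^2 - 1 = 0.
    return math.sqrt(1 - x**2)

def g1_prime_via_theorem(x):
    # dg/dx = -(df/dy)^{-1} * (df/dx), evaluated at (x, g1(x)),
    # with df/dx = 2x and df/dy = 2y.
    y = g1(x)
    return -(1.0 / (2.0 * y)) * (2.0 * x)

def g1_prime_direct(x):
    # Derivative obtained by differentiating the explicit formula.
    return -x / math.sqrt(1 - x**2)

# The theorem's formula agrees with direct differentiation.
for x in [-0.7, 0.0, 0.3, 0.8]:
    assert abs(g1_prime_via_theorem(x) - g1_prime_direct(x)) < 1e-12
```

Here the 1 × 1 "matrix" ∂f/∂y = 2y is inverted simply by taking a reciprocal, which is exactly why the formula breaks down when y = 0.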

The circle example

Let us go back to the example of the unit circle. In this case n = m = 1 and f(x,y) = x^2 + y^2 - 1. The matrix of partial derivatives is just a 1 × 2 matrix, given by

(Df)(a,b) = \left [ \frac{\partial f}{\partial x}(a,b)  \ \ \frac{\partial f}{\partial y}(a,b) \right ] = [2a \ \   2b]

Thus, here, the Y in the statement of the theorem is just the number 2b; the linear map defined by it is invertible if and only if b ≠ 0. By the implicit function theorem we see that we can locally write the circle in the form y = g(x) for all points where y ≠ 0. For (±1, 0) we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, but writing x as a function of y, that is, x = h(y); now the graph of the function will be \left(h(y), y\right), since where b = 0 we have a = ±1, and the conditions to locally express the function in this form are satisfied.

The implicit derivative of y with respect to x, and that of x with respect to y, can be found by totally differentiating the implicit function x^2+y^2-1 and equating to 0:

2x dx+2y dy = 0,

giving

dy/dx=-x/y

and

dx/dy=-y/x.
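The implicit derivative dy/dx = −x/y can be verified against a central finite difference on the upper semicircle. This Python sketch is an illustration, not part of the article; the step size h and the sample points are arbitrary choices.

```python
import math

def y_of_x(x):
    # Upper semicircle: y = sqrt(1 - x^2) for -1 < x < 1.
    return math.sqrt(1 - x**2)

def dydx_implicit(x, y):
    # Implicit derivative obtained from 2x dx + 2y dy = 0.
    return -x / y

# Compare against a central finite-difference approximation.
h = 1e-6
for x in [-0.6, 0.2, 0.5]:
    numeric = (y_of_x(x + h) - y_of_x(x - h)) / (2 * h)
    assert abs(numeric - dydx_implicit(x, y_of_x(x))) < 1e-8
```

The same check could be run on the lower semicircle; the sign of y flips, and −x/y flips with it, as the formula predicts.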

Application: change of coordinates

Suppose we have an m-dimensional space, parametrised by a set of coordinates (x_1,\ldots,x_m). We can introduce a new coordinate system (x'_1,\ldots,x'_m) by supplying m functions h_1,\ldots,h_m. These functions allow one to calculate the new coordinates (x'_1,\ldots,x'_m) of a point, given the point's old coordinates (x_1,\ldots,x_m), using x'_1=h_1(x_1,\ldots,x_m), \ldots, x'_m=h_m(x_1,\ldots,x_m). One might want to ask whether the reverse is possible: given the coordinates (x'_1,\ldots,x'_m), can we 'go back' and calculate the same point's original coordinates (x_1,\ldots,x_m)? The implicit function theorem provides an answer to this question. The (new and old) coordinates (x'_1,\ldots,x'_m, x_1,\ldots,x_m) are related by f = 0, with

f(x'_1,\ldots,x'_m,x_1,\ldots x_m)=(h_1(x_1,\ldots x_m)-x'_1,\ldots , h_m(x_1,\ldots, x_m)-x'_m).

Now the Jacobian matrix of f at a certain point (a, b) [ where a=(x'_1,\ldots,x'_m), b=(x_1,\ldots,x_m) ] is given by

(Df)(a,b)  = \left [\begin{matrix}
 -1 & \cdots & 0 \\
 \vdots & \ddots & \vdots \\
 0 & \cdots & -1 
\end{matrix}\left|
\begin{matrix} 
\frac{\partial h_1}{\partial x_1}(b) & \cdots & \frac{\partial h_1}{\partial x_m}(b)\\
\vdots & \ddots & \vdots\\
\frac{\partial h_m}{\partial x_1}(b) & \cdots & \frac{\partial h_m}{\partial x_m}(b)\\
\end{matrix} \right.\right] = [-1_m |J ].

where 1m denotes the m × m identity matrix and J is the m × m matrix of partial derivatives, evaluated at (a, b). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on a.) The implicit function theorem now states that we can locally express (x_1,\ldots,x_m) as a function of (x'_1,\ldots,x'_m) if J is invertible. Demanding that J be invertible is equivalent to det J ≠ 0, so we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian J is non-zero. This statement is also known as the inverse function theorem.

Example: polar coordinates

As a simple application of the above, consider the plane, parametrised by polar coordinates (R, θ). We can go to a new coordinate system (Cartesian coordinates) by defining the functions x(R, θ) = R cos(θ) and y(R, θ) = R sin(θ). This makes it possible, given any point (R, θ), to find corresponding Cartesian coordinates (x, y). When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have det J ≠ 0, with

J  =\begin{bmatrix}
 \frac{\partial x(R,\theta)}{\partial R} & \frac{\partial x(R,\theta)}{\partial \theta} \\
 \frac{\partial y(R,\theta)}{\partial R} & \frac{\partial y(R,\theta)}{\partial \theta} \\
\end{bmatrix}=
 \begin{bmatrix}
 \cos \theta & -R \sin \theta \\
 \sin \theta & R \cos \theta
\end{bmatrix}.

Since det J = R, conversion back to polar coordinates is possible if R ≠ 0, so it remains to check the case R = 0. It is easy to see that in the case R = 0, the coordinate transformation is not invertible: at the origin, the value of θ is not well-defined.
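The identity det J = R follows from cos²θ + sin²θ = 1 and can be confirmed numerically. The following Python sketch is an illustration only; the sample points are arbitrary.

```python
import math

def jacobian_det(R, theta):
    # det of [[cos θ, -R sin θ], [sin θ, R cos θ]]
    #   = R cos²θ - (-R sin²θ) = R.
    a = math.cos(theta)
    b = -R * math.sin(theta)
    c = math.sin(theta)
    d = R * math.cos(theta)
    return a * d - b * c

# det J equals R at every sample point ...
for R, theta in [(2.0, 0.3), (0.5, 2.0), (1.0, -1.2)]:
    assert abs(jacobian_det(R, theta) - R) < 1e-12

# ... and vanishes at R = 0, where the theorem gives no local inverse.
assert jacobian_det(0.0, 1.0) == 0.0
```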

Generalizations

Banach space version

Based on the inverse function theorem in Banach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings.

Let X, Y, Z be Banach spaces. Let the mapping f : X × Y → Z be continuously Fréchet differentiable. If (x_0,y_0)\in X\times Y, f(x_0,y_0)=0, and y\mapsto Df(x_0,y_0)(0,y) is a Banach space isomorphism from Y onto Z, then there exist neighbourhoods U of x0 and V of y0 and a Fréchet differentiable function g : U → V such that f(x, g(x)) = 0 and f(x, y) = 0 if and only if y = g(x), for all (x,y)\in U\times V.

Implicit functions from non-differentiable functions

Various forms of the implicit function theorem exist for the case when the function f is not differentiable. It is standard that it holds in one dimension.[3] The following more general form was proven by Kumagai[4] based on an observation by Jittorntrum.[5]

Consider a continuous function f : R^n \times R^m \to R^n such that f(x_0, y_0) = 0. If there exist open neighbourhoods A \subset R^n and B \subset R^m of x0 and y0, respectively, such that, for all y in B, f(\cdot, y) : A \to R^n is locally one-to-one, then there exist open neighbourhoods A_0 \subset R^n and B_0 \subset R^m of x0 and y0, such that, for all y \in B_0, the equation f(x, y) = 0 has a unique solution

x = g(y) \in A_0,

where g is a continuous function from B0 into A0.

Notes

  1. ^ Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (3rd ed.). McGraw-Hill. pp. 204–206.
  2. ^ Fritzsche, K.; Grauert, H. (2002). From Holomorphic Functions to Complex Manifolds. Springer-Verlag. p. 34.
  3. ^ Kudryavtsev, L. D. (1990). "Implicit function". In Hazewinkel, M. Encyclopedia of Mathematics. Dordrecht, The Netherlands: Kluwer. ISBN 1-55608-004-2.
  4. ^ Kumagai, S. (June 1980). "An implicit function theorem: Comment". Journal of Optimization Theory and Applications 31 (2): 285–288. doi:10.1007/BF00934117.
  5. ^ Jittorntrum, K. (1978). "An Implicit Function Theorem". Journal of Optimization Theory and Applications 25 (4): 575–577. doi:10.1007/BF00933522.
