# Symmetry of second derivatives

In mathematics, the symmetry of second derivatives (also called the equality of mixed partials) refers to the possibility of interchanging the order of taking partial derivatives of a function

${\displaystyle f\left(x_{1},\,x_{2},\,\ldots ,\,x_{n}\right)}$

of n variables without changing the result under certain conditions (see below). The symmetry is the assertion that the second-order partial derivatives satisfy the identity

${\displaystyle {\frac {\partial }{\partial x_{i}}}\left({\frac {\partial f}{\partial x_{j}}}\right)\ =\ {\frac {\partial }{\partial x_{j}}}\left({\frac {\partial f}{\partial x_{i}}}\right)}$

so that they form an n × n symmetric matrix, known as the function's Hessian matrix. This is sometimes known as Schwarz's theorem, Clairaut's theorem, or Young's theorem.[1][2]

In the context of partial differential equations it is called the Schwarz integrability condition.

## Formal expressions of symmetry

In symbols, the symmetry may be expressed as:

${\displaystyle {\frac {\partial }{\partial x}}\left({\frac {\partial f}{\partial y}}\right)\ =\ {\frac {\partial }{\partial y}}\left({\frac {\partial f}{\partial x}}\right)\qquad {\text{or}}\qquad {\frac {\partial ^{2}\!f}{\partial x\,\partial y}}\ =\ {\frac {\partial ^{2}\!f}{\partial y\,\partial x}}.}$

Another notation is:

${\displaystyle \partial _{x}\partial _{y}f=\partial _{y}\partial _{x}f\qquad {\text{or}}\qquad f_{yx}=f_{xy}.}$

In terms of composition of the differential operator Di, which takes the partial derivative with respect to xi, the symmetry reads:

${\displaystyle D_{i}\circ D_{j}=D_{j}\circ D_{i}}$.

From this relation it follows that the ring of differential operators with constant coefficients, generated by the Di, is commutative; but this is only true as operators over a domain of sufficiently differentiable functions. It is easy to check the symmetry as applied to monomials, so that one can take polynomials in the xi as a domain. In fact smooth functions are another valid domain.
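As a quick numerical illustration of this operator identity (not part of the classical discussion; the monomial, sample point and step size below are arbitrary choices), composing central difference quotients in either order recovers the same mixed partial, here 6x²y for f(x, y) = x³y²:

```python
# Illustrative sketch: for the monomial f(x, y) = x^3 y^2 both orders of
# differentiation give the mixed partial 6 x^2 y.
def f(x, y):
    return x**3 * y**2

def partial(g, axis, h=1e-5):
    """Central-difference partial derivative along axis 0 (x) or 1 (y)."""
    if axis == 0:
        return lambda x, y: (g(x + h, y) - g(x - h, y)) / (2 * h)
    return lambda x, y: (g(x, y + h) - g(x, y - h)) / (2 * h)

a, b = 1.5, 2.0
fxy = partial(partial(f, 1), 0)(a, b)   # d/dx (df/dy)
fyx = partial(partial(f, 0), 1)(a, b)   # d/dy (df/dx)
exact = 6 * a**2 * b                    # analytic mixed partial, here 27

assert abs(fxy - exact) < 1e-3
assert abs(fyx - exact) < 1e-3
```

The same check passes for any polynomial, consistent with taking polynomials in the xi as a domain on which the operators commute.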

## History

The result on the equality of mixed partial derivatives under certain conditions has a long history. The list of unsuccessful proposed proofs started with Euler's, published in 1740,[3] although already in 1721 Bernoulli had implicitly assumed the result with no formal justification.[4] Clairaut also published a proposed proof in 1740, with no other attempts until the end of the 18th century. Starting then, for a period of 70 years, a number of incomplete proofs were proposed. The proof of Lagrange (1797) was improved by Cauchy (1823), but assumed the existence and continuity of the partial derivatives ${\displaystyle {\tfrac {\partial ^{2}f}{\partial x^{2}}}}$ and ${\displaystyle {\tfrac {\partial ^{2}f}{\partial y^{2}}}}$.[5] Other attempts were made by P. Blanchet (1841), Duhamel (1856), Sturm (1857), Schlömilch (1862), and Bertrand (1864). Finally in 1867 Lindelöf systematically analyzed all the earlier flawed proofs and was able to exhibit a specific counterexample where mixed derivatives failed to be equal.[6][7]

Six years after that, Schwarz succeeded in giving the first rigorous proof.[8] Dini later contributed by finding more general conditions than those of Schwarz. Eventually, in 1883, Jordan found a clean and more general version that is still the proof given in most textbooks. Minor variants of earlier proofs were published by Laurent (1885), Peano (1889 and 1893), J. Edwards (1892), P. Haag (1893), J. K. Whittemore (1898), Vivanti (1899) and Pierpont (1905). Further progress was made in 1907–1909, when E. W. Hobson and W. H. Young found proofs with weaker conditions than those of Schwarz and Dini. In 1918, Carathéodory gave a different proof based on the Lebesgue integral.[7]

## Theorem of Schwarz

In mathematical analysis, Schwarz's theorem (or Clairaut's theorem on equality of mixed partials),[9] named after Alexis Clairaut and Hermann Schwarz, states the following: for a function ${\displaystyle f\colon \Omega \to \mathbb {R} }$ defined on a set ${\displaystyle \Omega \subset \mathbb {R} ^{n}}$, if ${\displaystyle \mathbf {p} \in \mathbb {R} ^{n}}$ is a point such that some neighborhood of ${\displaystyle \mathbf {p} }$ is contained in ${\displaystyle \Omega }$ and ${\displaystyle f}$ has continuous second partial derivatives on that neighborhood, then for all i and j in ${\displaystyle \{1,\,2,\,\ldots ,\,n\},}$

${\displaystyle {\frac {\partial ^{2}}{\partial x_{i}\,\partial x_{j}}}f(\mathbf {p} )={\frac {\partial ^{2}}{\partial x_{j}\,\partial x_{i}}}f(\mathbf {p} ).}$

In other words, the mixed partial derivatives of ${\displaystyle f}$ commute at the point ${\displaystyle \mathbf {p} }$.
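A hedged numerical sketch of the theorem's statement (the function, point and step size are illustrative choices, and the hand-computed mixed partials below are assumptions of this example, not part of the article): the finite-difference Hessian of a C² function of three variables matches the analytic mixed partials, which are the same in either order, so the matrix is symmetric.

```python
# Numerical Hessian of a C^2 function; its off-diagonal entries agree with
# the mixed partials computed by hand, illustrating Schwarz's theorem.
import math

def f(p):
    x, y, z = p
    return math.sin(x * y) + math.exp(y * z) + x * z**2

def hessian(f, p, h=1e-4):
    """Central-difference approximation of the Hessian matrix at p."""
    n = len(p)
    def shift(i, j, di, dj):
        q = list(p)
        q[i] += di * h
        q[j] += dj * h
        return f(q)
    return [[(shift(i, j, 1, 1) - shift(i, j, 1, -1)
              - shift(i, j, -1, 1) + shift(i, j, -1, -1)) / (4 * h * h)
             for j in range(n)] for i in range(n)]

x, y, z = p = (0.4, -0.2, 0.9)
H = hessian(f, p)

# Analytic mixed partials, worked out by hand (same value in either order):
assert abs(H[0][1] - (math.cos(x*y) - x*y*math.sin(x*y))) < 1e-5   # f_xy
assert abs(H[0][2] - 2*z) < 1e-5                                   # f_xz
assert abs(H[1][2] - (1 + y*z)*math.exp(y*z)) < 1e-5               # f_yz
assert all(abs(H[i][j] - H[j][i]) < 1e-6 for i in range(3) for j in range(3))
```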

One easy way to establish this theorem (in the case where ${\displaystyle n=2}$, ${\displaystyle i=1}$, and ${\displaystyle j=2}$, which readily entails the result in general) is by applying Green's theorem to the gradient of ${\displaystyle f.}$

An elementary proof for functions on open subsets of the plane is as follows (by a simple reduction, the general case for the theorem of Schwarz easily reduces to the planar case).[10] Let ${\displaystyle f(x,y)}$ be a differentiable function on an open rectangle Ω containing a point ${\displaystyle (a,b)}$ and suppose that ${\displaystyle df}$ is continuous with continuous ${\displaystyle \partial _{x}\partial _{y}f}$ and ${\displaystyle \partial _{y}\partial _{x}f}$ over Ω. Define

${\displaystyle {\begin{aligned}u\left(h,\,k\right)&=f\left(a+h,\,b+k\right)-f\left(a+h,\,b\right),\\v\left(h,\,k\right)&=f\left(a+h,\,b+k\right)-f\left(a,\,b+k\right),\\w\left(h,\,k\right)&=f\left(a+h,\,b+k\right)-f\left(a+h,\,b\right)-f\left(a,\,b+k\right)+f\left(a,\,b\right).\end{aligned}}}$

These functions are defined for ${\displaystyle \left|h\right|,\,\left|k\right|<\varepsilon }$, where ${\displaystyle \varepsilon >0}$ and ${\displaystyle \left[a-\varepsilon ,\,a+\varepsilon \right]\times \left[b-\varepsilon ,\,b+\varepsilon \right]}$ is contained in Ω.

By the mean value theorem, for fixed h and k non-zero, ${\displaystyle \theta ,\,\theta ^{\prime },\,\,\phi ,\,\,\phi ^{\prime }}$ can be found in the open interval ${\displaystyle (0,1)}$ with

${\displaystyle {\begin{aligned}w\left(h,\,k\right)&=u\left(h,\,k\right)-u\left(0,\,k\right)=h\,\partial _{x}u\left(\theta h,\,k\right)\\&=h\,\left[\partial _{x}f\left(a+\theta h,\,b+k\right)-\partial _{x}f\left(a+\theta h,\,b\right)\right]\\&=hk\,\partial _{y}\partial _{x}f\left(a+\theta h,\,b+\theta ^{\prime }k\right)\\w\left(h,\,k\right)&=v\left(h,\,k\right)-v\left(h,\,0\right)=k\,\partial _{y}v\left(h,\,\phi k\right)\\&=k\left[\partial _{y}f\left(a+h,\,b+\phi k\right)-\partial _{y}f\left(a,\,b+\phi k\right)\right]\\&=hk\,\partial _{x}\partial _{y}f\left(a+\phi ^{\prime }h,\,b+\phi k\right).\end{aligned}}}$

Since ${\displaystyle h,\,k\neq 0}$, the first equality below can be divided by ${\displaystyle hk}$:

${\displaystyle {\begin{aligned}hk\,\partial _{y}\partial _{x}f\left(a+\theta h,\,b+\theta ^{\prime }k\right)&=hk\,\partial _{x}\partial _{y}f\left(a+\phi ^{\prime }h,\,b+\phi k\right),\\\partial _{y}\partial _{x}f\left(a+\theta h,\,b+\theta ^{\prime }k\right)&=\partial _{x}\partial _{y}f\left(a+\phi ^{\prime }h,\,b+\phi k\right).\end{aligned}}}$

Letting ${\displaystyle h,\,k}$ tend to zero in the last equality, the continuity assumptions on ${\displaystyle \partial _{y}\partial _{x}f}$ and ${\displaystyle \partial _{x}\partial _{y}f}$ now imply that

${\displaystyle {\frac {\partial ^{2}}{\partial x\partial y}}f\left(a,\,b\right)={\frac {\partial ^{2}}{\partial y\partial x}}f\left(a,\,b\right).}$

This account is a straightforward classical method found in many textbooks, for example in Burkill, Apostol and Rudin.[10][11][12]

Although the derivation above is elementary, the approach can also be viewed from a more conceptual perspective so that the result becomes more apparent.[13][14][15][16][17] Indeed the difference operators ${\displaystyle \Delta _{x}^{t},\,\,\Delta _{y}^{t}}$ commute and ${\displaystyle \Delta _{x}^{t}f,\,\,\Delta _{y}^{t}f}$ tend to ${\displaystyle \partial _{x}f,\,\,\partial _{y}f}$ as ${\displaystyle t}$ tends to 0, with a similar statement for second order operators.[a] Here, for ${\displaystyle z}$ a vector in the plane and ${\displaystyle u}$ a directional vector, the difference operator is defined by

${\displaystyle \Delta _{u}^{t}f(z)={f(z+tu)-f(z) \over t}.}$
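The exact commutation of the difference operators can be checked directly, since both compositions expand to the same four function values; no smoothness is needed. A short Python sketch (the function, step and sample point are arbitrary illustrative choices):

```python
# The difference operators commute *exactly*, before any limit is taken:
# both compositions expand to the same four function values.
import math

def delta(u, t):
    """Difference operator along direction u = (u1, u2) with step t."""
    def op(g):
        return lambda x, y: (g(x + t * u[0], y + t * u[1]) - g(x, y)) / t
    return op

f = lambda x, y: math.exp(x) * math.cos(y)   # any function works here
dx = delta((1, 0), 0.1)                      # difference in the x-direction
dy = delta((0, 1), 0.1)                      # difference in the y-direction

a, b = 0.5, 1.2
assert abs(dx(dy(f))(a, b) - dy(dx(f))(a, b)) < 1e-12
```

The whole content of the theorem is therefore in the passage to the limit, where the convergence of the difference quotients to the partial derivatives is controlled by the mean value estimates below.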

By the fundamental theorem of calculus for ${\displaystyle C^{1}}$ functions ${\displaystyle f}$ on an open interval ${\displaystyle I}$ with ${\displaystyle [a,b]\subset I}$,

${\displaystyle \int _{a}^{b}f^{\prime }(x)\,dx=f(b)-f(a).}$

Hence

${\displaystyle |f(b)-f(a)|\leq (b-a)\,\sup _{c\in (a,b)}|f^{\prime }(c)|}$.

This is a generalized version of the mean value theorem. Recall from the elementary analysis of maxima and minima that if ${\displaystyle f}$ is continuous on ${\displaystyle [a,b]}$ and differentiable on ${\displaystyle (a,b)}$, then there is a point ${\displaystyle c}$ in ${\displaystyle (a,b)}$ such that

${\displaystyle {f(b)-f(a) \over b-a}=f^{\prime }(c).}$

For vector-valued functions with values in a finite-dimensional normed space ${\displaystyle V}$, there is no analogue of the equality above; indeed it fails. But since ${\displaystyle \inf f^{\prime }\leq f^{\prime }(c)\leq \sup f^{\prime }}$, the inequality above is a useful substitute. Moreover, using the pairing of ${\displaystyle V}$ with its dual, equipped with the dual norm, yields the following inequality:

${\displaystyle \|f(b)-f(a)\|\leq (b-a)\,\sup _{c\in (a,b)}\|f^{\prime }(c)\|}$.

These versions of the mean value theorem are discussed in Rudin, Hörmander and elsewhere.[19][20]

For ${\displaystyle f}$ a ${\displaystyle C^{2}}$ function on an open set in the plane, define ${\displaystyle D_{1}=\partial _{x}}$ and ${\displaystyle D_{2}=\partial _{y}}$. Furthermore for ${\displaystyle t\neq 0}$ set

${\displaystyle \Delta _{1}^{t}f(x,y)=[f(x+t,y)-f(x,y)]/t,\,\,\,\,\,\,\Delta _{2}^{t}f(x,y)=[f(x,y+t)-f(x,y)]/t}$.

Then for ${\displaystyle (x_{0},y_{0})}$ in the open set, the generalized mean value theorem can be applied twice:

${\displaystyle \left|\Delta _{1}^{t}\Delta _{2}^{t}f(x_{0},y_{0})-D_{1}D_{2}f(x_{0},y_{0})\right|\leq \sup _{0\leq s\leq 1}\left|\Delta _{1}^{t}D_{2}f(x_{0},y_{0}+ts)-D_{1}D_{2}f(x_{0},y_{0})\right|\leq \sup _{0\leq r,s\leq 1}\left|D_{1}D_{2}f(x_{0}+tr,y_{0}+ts)-D_{1}D_{2}f(x_{0},y_{0})\right|.}$

Thus ${\displaystyle \Delta _{1}^{t}\Delta _{2}^{t}f(x_{0},y_{0})}$ tends to ${\displaystyle D_{1}D_{2}f(x_{0},y_{0})}$ as ${\displaystyle t}$ tends to 0. The same argument shows that ${\displaystyle \Delta _{2}^{t}\Delta _{1}^{t}f(x_{0},y_{0})}$ tends to ${\displaystyle D_{2}D_{1}f(x_{0},y_{0})}$. Hence, since the difference operators commute, so do the partial differential operators ${\displaystyle D_{1}}$ and ${\displaystyle D_{2}}$, as claimed.[21][22][23][24][25]
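The convergence claim can be illustrated numerically; in this hedged sketch the function, sample point and steps are arbitrary choices, and the analytic mixed partial is worked out by hand for this example only:

```python
# As t -> 0, the iterated difference quotient Delta_1^t Delta_2^t f
# approaches the analytic mixed partial D1 D2 f.
import math

def f(x, y):
    return x * math.exp(y) + math.sin(x * y)

def mixed_quotient(f, x, y, t):
    """Delta_1^t Delta_2^t f, expanded into its four function values."""
    return (f(x + t, y + t) - f(x + t, y) - f(x, y + t) + f(x, y)) / (t * t)

# Analytic D1 D2 f = e^y + cos(xy) - xy sin(xy), computed by hand.
x0, y0 = 0.7, 0.3
exact = math.exp(y0) + math.cos(x0 * y0) - x0 * y0 * math.sin(x0 * y0)

errors = [abs(mixed_quotient(f, x0, y0, t) - exact) for t in (0.1, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]     # the error shrinks with t
assert errors[2] < 1e-2
```

Since the two compositions of difference operators are algebraically identical, the same sequence of quotients also converges to D2 D1 f, which is the numerical shadow of the argument above.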

Remark. By two applications of the classical mean value theorem,

${\displaystyle \Delta _{1}^{t}\Delta _{2}^{t}f(x_{0},y_{0})=D_{1}D_{2}f(x_{0}+t\theta ,y_{0}+t\theta ^{\prime })}$

for some ${\displaystyle \theta }$ and ${\displaystyle \theta ^{\prime }}$ in ${\displaystyle (0,1)}$. Thus the first elementary proof can be reinterpreted using difference operators. Conversely, instead of using the generalized mean value theorem in the second proof, the classical mean value theorem could be used.

## Proof of Clairaut's theorem using iterated integrals

The properties of repeated Riemann integrals of a continuous function F on a compact rectangle [a,b] × [c,d] are easily established.[26] The uniform continuity of F implies immediately that the functions ${\displaystyle g(x)=\int _{c}^{d}F(x,y)\,dy}$ and ${\displaystyle h(y)=\int _{a}^{b}F(x,y)\,dx}$ are continuous.[27] It follows that

${\displaystyle \int _{a}^{b}\int _{c}^{d}F(x,y)\,dy\,dx=\int _{c}^{d}\int _{a}^{b}F(x,y)\,dx\,dy}$;

moreover it is immediate that the iterated integral is positive if F is positive.[28] The equality above is a simple case of Fubini's theorem, involving no measure theory. Titchmarsh (1939) proves it in a straightforward way using Riemann approximating sums corresponding to subdivisions of a rectangle into smaller rectangles.
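Titchmarsh's Riemann-sum argument can be mimicked numerically. In this illustrative sketch (the integrand and rectangle are arbitrary choices), both iterated midpoint sums over a common grid contain exactly the same terms, and each converges to the corresponding iterated integral:

```python
# Equality of iterated integrals for a continuous function on a rectangle,
# approximated by plain midpoint Riemann sums (illustrative accuracy only).
import math

def F(x, y):
    return math.exp(-x * y) * math.cos(x + y)

def integrate(g, lo, hi, n=400):
    """Midpoint rule for a one-variable function on [lo, hi]."""
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

a, b, c, d = 0.0, 1.0, 0.0, 2.0
dy_first = integrate(lambda x: integrate(lambda y: F(x, y), c, d), a, b)
dx_first = integrate(lambda y: integrate(lambda x: F(x, y), a, b), c, d)

assert abs(dy_first - dx_first) < 1e-9
```

The two sums differ only in the order of summation over the same grid of sub-rectangles, which is precisely why the equality of iterated integrals needs no measure theory here.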

To prove Clairaut's theorem, assume f is a differentiable function on an open set U, for which the mixed second partial derivatives fyx and fxy exist and are continuous. Using the fundamental theorem of calculus twice,

${\displaystyle \int _{c}^{d}\int _{a}^{b}f_{yx}(x,y)\,dx\,dy=\int _{c}^{d}f_{y}(b,y)-f_{y}(a,y)\,dy=f(b,d)-f(a,d)-f(b,c)+f(a,c).}$

Similarly

${\displaystyle \int _{a}^{b}\int _{c}^{d}f_{xy}(x,y)\,dy\,dx=\int _{a}^{b}f_{x}(x,d)-f_{x}(x,c)\,dx=f(b,d)-f(a,d)-f(b,c)+f(a,c).}$

The two iterated integrals are therefore equal. On the other hand, since fxy(x,y) is continuous, the second iterated integral can be performed by first integrating over x and then over y. The iterated integral of fyx − fxy on [a,b] × [c,d] must then vanish. But if the iterated integral of a continuous function F vanishes for all rectangles, then F must be identically zero: otherwise F or −F would be strictly positive at some point, and hence, by continuity, on some rectangle, which is impossible. Hence fyx − fxy vanishes identically, so that fyx = fxy everywhere.[29][30][31][32][33]

## Sufficiency of twice-differentiability

A condition weaker than the continuity of the second partial derivatives (and implied by it) which suffices to ensure symmetry is that all the partial derivatives are themselves differentiable.[34] Another strengthening of the theorem, in which existence of the permuted mixed partial is asserted, was provided by Peano in a short 1890 note in Mathesis:

If ${\displaystyle f:E\to \mathbb {R} }$ is defined on an open set ${\displaystyle E\subset \mathbb {R} ^{2}}$; ${\displaystyle \partial _{1}f(x,\,y)}$ and ${\displaystyle \partial _{2,1}f(x,\,y)}$ exist everywhere on ${\displaystyle E}$; ${\displaystyle \partial _{2,1}f}$ is continuous at ${\displaystyle \left(x_{0},\,y_{0}\right)\in E}$, and if ${\displaystyle \partial _{2}f(x,\,y_{0})}$ exists in a neighborhood of ${\displaystyle x=x_{0}}$, then ${\displaystyle \partial _{1,2}f}$ exists at ${\displaystyle \left(x_{0},\,y_{0}\right)}$ and ${\displaystyle \partial _{1,2}f\left(x_{0},\,y_{0}\right)=\partial _{2,1}f\left(x_{0},\,y_{0}\right)}$.[35]

## Distribution theory formulation

The theory of distributions (generalized functions) eliminates analytic problems with the symmetry. The derivative of an integrable function can always be defined as a distribution, and symmetry of mixed partial derivatives always holds as an equality of distributions. The use of formal integration by parts to define differentiation of distributions puts the symmetry question back onto the test functions, which are smooth and certainly satisfy this symmetry. In more detail (where f is a distribution, written as an operator on test functions, and φ is a test function),

${\displaystyle \left(D_{1}D_{2}f\right)[\phi ]=-\left(D_{2}f\right)\left[D_{1}\phi \right]=f\left[D_{2}D_{1}\phi \right]=f\left[D_{1}D_{2}\phi \right]=-\left(D_{1}f\right)\left[D_{2}\phi \right]=\left(D_{2}D_{1}f\right)[\phi ].}$

Another approach uses the Fourier transform of a function: under such transforms, partial derivatives become multiplication operators, which commute much more obviously.[a]

## Requirement of continuity

The symmetry may be broken if the function fails to have differentiable partial derivatives, which is possible when the hypothesis of Clairaut's theorem is not satisfied (that is, when the second partial derivatives are not continuous).

The function f(x, y), as defined in equation (1), does not have symmetric second derivatives at the origin.

An example of non-symmetry is the function (due to Peano)[36][37]

${\displaystyle f(x,\,y)={\begin{cases}{\frac {xy\left(x^{2}-y^{2}\right)}{x^{2}+y^{2}}}&{\mbox{ for }}(x,\,y)\neq (0,\,0),\\0&{\mbox{ for }}(x,\,y)=(0,\,0).\end{cases}}}$

(1)

This can be visualized by the polar form ${\displaystyle f(r\cos(\theta ),r\sin(\theta ))={\frac {r^{2}\sin(4\theta )}{4}}}$; it is everywhere continuous, but its derivatives at (0, 0) cannot be computed algebraically. Rather, the limit of difference quotients shows that ${\displaystyle f_{x}(0,0)=f_{y}(0,0)=0}$, so the graph ${\displaystyle z=f(x,y)}$ has a horizontal tangent plane at (0, 0), and the partial derivatives ${\displaystyle f_{x},f_{y}}$ exist and are everywhere continuous. However, the second partial derivatives are not continuous at (0, 0), and the symmetry fails. In fact, along the x-axis the y-derivative is ${\displaystyle f_{y}(x,0)=x}$, and so:

${\displaystyle f_{yx}(0,0)=\lim _{\varepsilon \to 0}{\frac {f_{y}(\varepsilon ,0)-f_{y}(0,0)}{\varepsilon }}=1.}$

In contrast, along the y-axis the x-derivative ${\displaystyle f_{x}(0,y)=-y}$, and so ${\displaystyle f_{xy}(0,0)=-1}$. That is, ${\displaystyle f_{yx}\neq f_{xy}}$ at (0, 0), although the mixed partial derivatives do exist, and at every other point the symmetry does hold.
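The computation above can be reproduced numerically; this sketch evaluates the mixed difference quotients of Peano's function at the origin (step sizes are arbitrary small choices) and recovers the two distinct values +1 and −1:

```python
# Peano's counterexample: the two mixed partials at the origin differ.
def f(x, y):
    if (x, y) == (0.0, 0.0):
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

def f_x(x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def f_y(x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

eps = 1e-4
f_yx = (f_y(eps, 0.0) - f_y(0.0, 0.0)) / eps   # x-derivative of f_y at 0
f_xy = (f_x(0.0, eps) - f_x(0.0, 0.0)) / eps   # y-derivative of f_x at 0

assert abs(f_yx - 1.0) < 1e-3    # f_yx(0, 0) = +1
assert abs(f_xy + 1.0) < 1e-3    # f_xy(0, 0) = -1
```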

The above function, written in a cylindrical coordinate system, can be expressed as

${\displaystyle f(r,\,\theta )={\frac {r^{2}\sin {4\theta }}{4}},}$

showing that the function oscillates four times when traveling once around an arbitrarily small loop containing the origin. Intuitively, therefore, the local behavior of the function at (0, 0) cannot be described as a quadratic form, and the Hessian matrix thus fails to be symmetric.

In general, limiting operations need not commute. Given two variables near (0, 0), consider the two limiting processes on

${\displaystyle f(h,\,k)-f(h,\,0)-f(0,\,k)+f(0,\,0)}$

corresponding to making h → 0 first, and to making k → 0 first. Looking at the first-order terms, it can matter which limit is applied first. This leads to the construction of pathological examples in which second derivatives are non-symmetric. This kind of example belongs to the theory of real analysis, where the pointwise values of functions matter. When viewed as a distribution, the second partial derivative's values can be changed at an arbitrary set of points as long as this set has Lebesgue measure 0. Since in the example the Hessian is symmetric everywhere except (0, 0), there is no contradiction with the fact that the Hessian, viewed as a Schwartz distribution, is symmetric.

## In Lie theory

Consider the first-order differential operators Di to be infinitesimal operators on Euclidean space. That is, Di in a sense generates the one-parameter group of translations parallel to the xi-axis. These groups commute with each other, and therefore the infinitesimal generators do also; the Lie bracket

[Di, Dj] = 0

reflects this property. In other words, the Lie derivative of one coordinate vector field with respect to another is zero.

## Application to differential forms

The Clairaut-Schwarz theorem is the key fact needed to prove that for every ${\displaystyle C^{\infty }}$ (or at least twice differentiable) differential form ${\displaystyle \omega \in \Omega ^{k}(M)}$, the second exterior derivative vanishes: ${\displaystyle d^{2}\omega :=d(d\omega )=0}$. This implies that every differentiable exact form (i.e., a form ${\displaystyle \alpha }$ such that ${\displaystyle \alpha =d\omega }$ for some form ${\displaystyle \omega }$) is closed (i.e., ${\displaystyle d\alpha =0}$), since ${\displaystyle d\alpha =d(d\omega )=0}$.[38]

In the middle of the 18th century, the theory of differential forms was first studied in the simplest case of 1-forms in the plane, i.e. ${\displaystyle A\,dx+B\,dy}$, where ${\displaystyle A}$ and ${\displaystyle B}$ are functions in the plane. The study of 1-forms and the differentials of functions began with Clairaut's papers in 1739 and 1740. At that stage his investigations were interpreted as ways of solving ordinary differential equations. Formally Clairaut showed that a 1-form ${\displaystyle \omega =A\,dx+B\,dy}$ on an open rectangle is closed, i.e. ${\displaystyle d\omega =0}$, if and only if ${\displaystyle \omega }$ has the form ${\displaystyle df}$ for some function ${\displaystyle f}$ on the rectangle. The solution for ${\displaystyle f}$ can then be written as the integral formula

${\displaystyle f(x,y)=\int _{x_{0}}^{x}A(t,y)\,dt+\int _{y_{0}}^{y}B(x_{0},s)\,ds;}$

while if ${\displaystyle \omega =df}$, the closed property ${\displaystyle d\omega =0}$ is the identity ${\displaystyle \partial _{x}\partial _{y}f=\partial _{y}\partial _{x}f}$. (In modern language this is one version of the Poincaré lemma.)[39]
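Clairaut's construction can be checked numerically. In this hedged sketch, the 1-form A dx + B dy with A = 2xy and B = x² + 3y² is an illustrative choice satisfying ∂A/∂y = ∂B/∂x; integrating A in x at the target height y and B in y along x = x₀ (one standard form of the construction) produces a function whose partial derivatives recover A and B:

```python
# Building a potential f for a closed 1-form A dx + B dy and checking df = w.
A = lambda x, y: 2 * x * y            # dA/dy = 2x
B = lambda x, y: x * x + 3 * y * y    # dB/dx = 2x, so the form is closed

def integrate(g, lo, hi, n=2000):
    """Midpoint rule on [lo, hi]."""
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

def f(x, y, x0=0.0, y0=0.0):
    # Integrate A along the segment from (x0, y) to (x, y),
    # plus B along the segment from (x0, y0) to (x0, y).
    return (integrate(lambda t: A(t, y), x0, x)
            + integrate(lambda s: B(x0, s), y0, y))

# f should agree with x^2 y + y^3 up to a constant; check df = (A, B).
h = 1e-5
x1, y1 = 0.8, 0.5
fx = (f(x1 + h, y1) - f(x1 - h, y1)) / (2 * h)
fy = (f(x1, y1 + h) - f(x1, y1 - h)) / (2 * h)
assert abs(fx - A(x1, y1)) < 1e-3
assert abs(fy - B(x1, y1)) < 1e-3
```

The closedness condition ∂A/∂y = ∂B/∂x is exactly the symmetry identity ∂x∂y f = ∂y∂x f for the recovered potential.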

## Notes

1. ^ a b These can also be rephrased in terms of the action of operators on Schwartz functions on the plane. Under Fourier transform, the difference and differential operators are just multiplication operators.[18]
## References

1. ^ "Young's Theorem" (PDF). University of California Berkeley. Archived from the original (PDF) on 2006-05-18. Retrieved 2015-01-02.
2. ^ Allen 1964, pp. 300–305.
3. ^
4. ^ Sandifer 2007, pp. 142–147, footnote: Comm. Acad. Sci. Imp. Petropol. 7 (1734/1735) 1740, 174–189, 180–183; Opera Omnia, 1.22, 34–56.
5. ^
6. ^
7. ^ a b
8. ^
9. ^ James 1966, p. [page needed].
10. ^ a b Burkill 1962, pp. 154–155
11. ^
12. ^
13. ^ Hörmander 2015, pp. 7, 11. This condensed account is possibly the shortest.
14. ^ Dieudonné 1960, pp. 179–180.
15. ^ Godement 1998b, pp. 287–289.
16. ^ Lang 1969, pp. 108–111.
17. ^ Cartan 1971, pp. 64–67.
18. ^ Hörmander 2015, Chapter VII.
19. ^ Hörmander 2015, p. 6.
20. ^ Rudin 1976, p. [page needed].
21. ^ Hörmander 2015, p. 11.
22. ^
23. ^
24. ^
25. ^
26. ^ Titchmarsh 1939, p. [page needed].
27. ^ Titchmarsh 1939, pp. 23–25.
28. ^ Titchmarsh 1939, pp. 49–50.
29. ^ Spivak 1965, p. 61.
30. ^
31. ^
32. ^ Axler 2020, pp. 142–143.
33. ^ Marshall, Donald E., Theorems of Fubini and Clairaut (PDF), University of Washington
34. ^ Hubbard & Hubbard 2015, pp. 732–733.
35. ^ Rudin 1976, pp. 235–236.
36. ^ Hobson 1921, pp. 403–404.
37. ^ Apostol 1974, pp. 358–359.
38. ^
39. ^