# Closed and exact differential forms


In mathematics, especially vector calculus and differential topology, a closed form is a differential form α whose exterior derivative is zero (dα = 0), and an exact form is a differential form that is the exterior derivative of another differential form β. Thus, an exact form is in the image of d, and a closed form is in the kernel of d.

For an exact form α, α = dβ for some differential form β of degree one less than that of α. The form β is called a "potential form" or "primitive" for α. Since d² = 0, β is not unique, but can be modified by the addition of the differential of a form of degree two less than that of α.

Because d² = 0, any exact form is automatically closed. The question of whether every closed form is exact depends on the topology of the domain of interest. On a contractible domain, every closed form is exact by the Poincaré lemma. More general questions of this kind on an arbitrary differentiable manifold are the subject of de Rham cohomology, which allows one to obtain purely topological information using differential methods.

## Examples

For more details on this topic, see Winding number.

(Figure: vector field corresponding to "dθ".)

The simplest example of a form that is closed but not exact is the 1-form "dθ" (quotes because it is not the derivative of a globally defined function), defined on the punctured plane ${\displaystyle \mathbf {R} ^{2}\setminus \{0\},}$ and locally given as the derivative of the argument. The argument is locally but not globally defined: a loop around the origin increases (or decreases, depending on direction) the argument by 2π, which corresponds to the integral:

${\displaystyle \oint _{S^{1}}d\theta =2\pi ,}$

and for general paths is known as the winding number. The differential of the argument is however globally defined (except at the origin), since differentiation only requires local data and different values of the argument differ by a constant, so the derivatives of different local definitions are equal; this line of thought is generalized in the notion of covering spaces.

Explicitly, the form is given as:

${\displaystyle d\theta ={\frac {1}{x^{2}+y^{2}}}\left(-y\,dx+x\,dy\right),}$

which is not defined at the origin. This can be computed from a formula for the argument, most simply via arctan(y/x) (y/x is the slope of the line passing through (x, y), and arctan converts slope to angle), recognizing 1/(x² + y²) as corresponding to the derivative of arctan, which is 1/(x² + 1) (the two agree on the line y = 1). While the differential is correctly computed by symbolically differentiating this expression, the formula arctan(y/x) is only valid on the half-plane x > 0, and properly one must use a correct formula for the argument.

This form generates the de Rham cohomology group ${\displaystyle H_{dR}^{1}(\mathbf {R} ^{2}\setminus \{0\})\cong \mathbf {R} ,}$ meaning that any closed form ${\displaystyle \omega }$ is the sum of an exact form ${\displaystyle df}$ and a multiple of ${\displaystyle d\theta :}$ ${\displaystyle \omega =df+k\cdot d\theta ,}$ where ${\displaystyle \textstyle {k={\frac {1}{2\pi }}\oint _{S^{1}}\omega }}$ accounts for a non-trivial contour integral around the origin. This integral is the only obstruction to a closed form on the punctured plane (which is locally the derivative of a potential function) being the derivative of a globally defined function.
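The winding-number behaviour of "dθ" can be checked numerically. The following sketch (the helper `integrate_dtheta` and the midpoint-rule discretization are choices made here, not standard library features) approximates the line integral of (−y dx + x dy)/(x² + y²) along parametrized loops:

```python
import math

def integrate_dtheta(path, n=100000):
    """Approximate the line integral of (-y dx + x dy)/(x^2 + y^2)
    along a closed parametrized path, using the midpoint rule."""
    total = 0.0
    for i in range(n):
        t0, t1 = i / n, (i + 1) / n
        x0, y0 = path(t0)
        x1, y1 = path(t1)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        r2 = xm * xm + ym * ym
        total += (-ym * (x1 - x0) + xm * (y1 - y0)) / r2
    return total

# Unit circle traversed once counterclockwise: the integral is 2*pi.
circle = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
print(integrate_dtheta(circle))   # ≈ 2*pi

# A loop not enclosing the origin: the integral is 0 (the form is
# exact on a neighbourhood of this loop).
offset = lambda t: (3 + math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
print(integrate_dtheta(offset))   # ≈ 0
```

The two results are 2π times the winding numbers 1 and 0 of the respective loops about the origin.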

## Examples in low dimensions

Differential forms in R² and R³ were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, and 2-forms are functions times the basic area element dx∧dy, so that it is the 1-forms

${\displaystyle \alpha =f(x,y)\,dx+g(x,y)\,dy}$

that are of real interest. The formula for the exterior derivative d here is

${\displaystyle d\alpha =(g_{x}-f_{y})\,dx\wedge dy\,}$

where the subscripts denote partial derivatives. Therefore the condition for ${\displaystyle \alpha }$ to be closed is

${\displaystyle f_{y}=g_{x}.\,}$

In this case if h(x,y) is a function then

${\displaystyle dh=h_{x}\,dx+h_{y}\,dy.\,}$

The implication from 'exact' to 'closed' is then a consequence of the symmetry of second derivatives, with respect to x and y.

The gradient theorem asserts that a 1-form is exact if and only if the line integral of the form depends only on the endpoints of the curve, or equivalently, if the integral around any smooth closed curve is zero.
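Both facts can be illustrated numerically for a concrete exact form (a sketch; the potential h(x, y) = x²y + sin(y), the finite-difference step, and the unit-circle loop are illustrative choices made here):

```python
import math

# Take h(x, y) = x^2 * y + sin(y); then dh = f dx + g dy with
# f = 2xy and g = x^2 + cos(y).
f = lambda x, y: 2 * x * y
g = lambda x, y: x * x + math.cos(y)

# Closedness f_y = g_x (here both equal 2x), checked by central differences.
eps = 1e-6
x0, y0 = 0.7, -1.3
f_y = (f(x0, y0 + eps) - f(x0, y0 - eps)) / (2 * eps)
g_x = (g(x0 + eps, y0) - g(x0 - eps, y0)) / (2 * eps)

# Gradient theorem: the integral of an exact form around a closed loop
# vanishes; here the loop is the unit circle, midpoint rule.
n = 100000
total = 0.0
for i in range(n):
    t0, t1 = i / n, (i + 1) / n
    a0 = (math.cos(2 * math.pi * t0), math.sin(2 * math.pi * t0))
    a1 = (math.cos(2 * math.pi * t1), math.sin(2 * math.pi * t1))
    xm, ym = (a0[0] + a1[0]) / 2, (a0[1] + a1[1]) / 2
    total += f(xm, ym) * (a1[0] - a0[0]) + g(xm, ym) * (a1[1] - a0[1])

print(f_y, g_x)   # both ≈ 1.4 (= 2 * x0)
print(total)      # ≈ 0
```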

### Vector field analogies

On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, k-forms correspond to k-vector fields (by duality via the metric), so there is a notion of a vector field corresponding to a closed or exact form.

In 3 dimensions, an exact vector field (thought of as a 1-form) is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form (function), called the scalar potential. A closed vector field (thought of as a 1-form) is one whose derivative (curl) vanishes, and is called an irrotational vector field.

Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow (sometimes solenoidal vector field).

The concepts of conservative and incompressible vector fields generalize to n dimensions, because gradient and divergence generalize to n dimensions; curl is defined only in three dimensions, thus the concept of irrotational vector field does not generalize in this way.
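The identities behind these analogies, curl∘grad = 0 (exact 1-forms are closed) and div∘curl = 0 (exact 2-forms are closed), can be checked with a finite-difference sketch; the sample potentials f and A, the step size, and the sample point are arbitrary choices made here:

```python
import math

def grad(f, p, h=1e-5):
    """Central-difference gradient of a scalar field f at point p in R^3."""
    return [(f(*[p[j] + (h if j == i else 0.0) for j in range(3)])
           - f(*[p[j] - (h if j == i else 0.0) for j in range(3)])) / (2 * h)
            for i in range(3)]

def _jac(F, p, i, j, h=1e-5):
    """dF_i/dx_j by central differences, for a vector field F."""
    q1, q2 = list(p), list(p)
    q1[j] += h
    q2[j] -= h
    return (F(*q1)[i] - F(*q2)[i]) / (2 * h)

def curl(F, p):
    return [_jac(F, p, 2, 1) - _jac(F, p, 1, 2),
            _jac(F, p, 0, 2) - _jac(F, p, 2, 0),
            _jac(F, p, 1, 0) - _jac(F, p, 0, 1)]

def div(F, p):
    return _jac(F, p, 0, 0) + _jac(F, p, 1, 1) + _jac(F, p, 2, 2)

f = lambda x, y, z: x * y * z + math.sin(x)          # arbitrary scalar potential
A = lambda x, y, z: [y * z, x * x, math.cos(x * y)]  # arbitrary vector potential

p0 = [0.4, -0.8, 1.1]
print(curl(lambda x, y, z: grad(f, [x, y, z]), p0))  # ≈ [0, 0, 0]
print(div(lambda x, y, z: curl(A, [x, y, z]), p0))   # ≈ 0
```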

## Poincaré lemma

The Poincaré lemma states that if B is an open ball in Rⁿ, then any smooth closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n.[1]

Translating if necessary, it can be assumed that the ball B has centre 0. Let αs be the flow on Rⁿ defined by αs(x) = eˢx. For s ≤ 0 it carries B into itself and induces an action on functions and differential forms. The derivative of the flow is the vector field X defined on functions f by Xf = d(αs f)/ds|s=0: it is the radial vector field r ∂/∂r = ∑ xᵢ ∂/∂xᵢ. Often it is convenient to write the flow multiplicatively as a function of t = eˢ, setting βt = αs, so that βt(x) = tx. Only for 0 < t ≤ 1 does βt carry B into itself. The derivative of the flow on forms defines the Lie derivative with respect to X, given by LX ω = d(αs ω)/ds|s=0. In particular

${\displaystyle \displaystyle {{d \over ds}\alpha _{s}\omega =\alpha _{s}L_{X}\omega ,}}$

so by the chain rule

${\displaystyle \displaystyle {{d \over dt}\beta _{t}\omega =t^{-1}\beta _{t}L_{X}\omega .}}$

Since αs commutes with the exterior derivative d, so does the Lie derivative LX. Moreover if ιX denotes interior multiplication or contraction by the vector field X, then by Cartan's formula

${\displaystyle \displaystyle {d\iota _{X}+\iota _{X}d=L_{X}.}}$

Now define

${\displaystyle \displaystyle {h\omega =\int _{0}^{1}\beta _{t}\omega \,{dt \over t}.}}$

Then h commutes with LX, since LX commutes with αs and βt. Furthermore h commutes with d. It also has the important property that

${\displaystyle \displaystyle {h\circ L_{X}={\rm {id}}=L_{X}\circ h,}}$

by the fundamental theorem of calculus since

${\displaystyle \displaystyle {hL_{X}\omega =\int _{0}^{1}\beta _{t}L_{X}\omega \,{dt \over t}=\int _{0}^{1}{d \over dt}(\beta _{t}\omega )\,dt=[\beta _{t}\omega ]_{0}^{1}=\omega .}}$

Now set k = h ∘ ιX. Then

${\displaystyle \displaystyle {dk+kd=dh\iota _{X}+h\iota _{X}d=h(d\iota _{X}+\iota _{X}d)=hL_{X}={\rm {id}}.}}$

Thus

${\displaystyle \displaystyle {dk+kd={\rm {id}}.}}$

(In the language of homological algebra, k is a "contracting homotopy".)

It now follows that if ω is closed, so that dω = 0, then d(kω) = (dk + kd)ω = ω, so that ω is exact and the Poincaré lemma is proved.

The same method applies to any open set in Rⁿ that is star-shaped about 0, i.e. any open set containing 0 and invariant under βt for 0 < t < 1.
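For 1-forms the homotopy operator can be made completely explicit: with X = x∂/∂x + y∂/∂y on R² and ω = p dx + q dy, unwinding k = h∘ιX gives (kω)(x, y) = ∫₀¹ [x p(tx, ty) + y q(tx, ty)] dt, and for closed ω the identity dk + kd = id says that d(kω) = ω. A numerical sketch (the sample closed form, the quadrature, and the step sizes are choices made here):

```python
import math

# Closed sample form: ω = dh for h = x^2 y + y, so p = 2xy, q = x^2 + 1.
p = lambda x, y: 2 * x * y
q = lambda x, y: x * x + 1

def k_omega(x, y, n=2000):
    """Homotopy integral (kω)(x, y) = ∫_0^1 [x p(tx, ty) + y q(tx, ty)] dt,
    approximated by the midpoint rule."""
    s = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        s += x * p(t * x, t * y) + y * q(t * x, t * y)
    return s / n

# d(kω) should reproduce ω; check by central differences at a sample point.
x0, y0, h = 1.2, -0.7, 1e-5
Fx = (k_omega(x0 + h, y0) - k_omega(x0 - h, y0)) / (2 * h)
Fy = (k_omega(x0, y0 + h) - k_omega(x0, y0 - h)) / (2 * h)
print(Fx - p(x0, y0), Fy - q(x0, y0))  # both ≈ 0
```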

Example. In two dimensions the Poincaré lemma can be proved directly for closed 1-forms and 2-forms as follows.[2]

If ω = p dx + q dy is a closed 1-form on (a,b) × (c,d), then py = qx. If ω = df then p = fx and q = fy. Set

${\displaystyle \displaystyle {g(x,y)=\int _{a}^{x}p(t,y)\,dt,}}$

so that gx = p. Then h = f − g must satisfy hx = 0 and hy = q − gy. The right hand side here is independent of x, since its partial derivative with respect to x is qx − gyx = qx − py = 0. So

${\displaystyle \displaystyle {h(x,y)=\int _{c}^{y}q(a,s)\,ds-g(a,y)=\int _{c}^{y}q(a,s)\,ds,}}$

and hence

${\displaystyle \displaystyle {f(x,y)=\int _{a}^{x}p(t,y)\,dt+\int _{c}^{y}q(a,s)\,ds.}}$
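This final formula can be tested numerically (a sketch; the particular closed form, the Simpson-rule quadrature, and the sample point are illustrative choices made here):

```python
import math

# Closed 1-form ω = p dx + q dy on a rectangle with corner (a, c) = (0, 0):
# p = y cos(xy), q = x cos(xy), so p_y = q_x = cos(xy) - xy sin(xy).
p = lambda x, y: y * math.cos(x * y)
q = lambda x, y: x * math.cos(x * y)
a, c = 0.0, 0.0

def simpson(fun, lo, hi, n=1000):
    """Composite Simpson rule (n must be even)."""
    step = (hi - lo) / n
    s = fun(lo) + fun(hi)
    s += sum((4 if i % 2 else 2) * fun(lo + i * step) for i in range(1, n))
    return s * step / 3

def f(x, y):
    """Potential from the formula: ∫_a^x p(t, y) dt + ∫_c^y q(a, s) ds."""
    return simpson(lambda t: p(t, y), a, x) + simpson(lambda s: q(a, s), c, y)

# Verify df = ω by central differences at a sample point.
x0, y0, h = 0.8, 0.5, 1e-5
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
print(fx - p(x0, y0), fy - q(x0, y0))  # both ≈ 0
```

Here the recovered potential is f(x, y) = sin(xy), up to quadrature error.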

Similarly, if Ω = r dx∧dy, then Ω = d(a dx + b dy) with bx − ay = r. Thus a solution is given by a = 0 and

${\displaystyle \displaystyle {b(x,y)=\int _{a}^{x}r(t,y)\,dt.}}$

## Formulation as cohomology

When the difference of two closed forms is an exact form, they are said to be cohomologous to each other. That is, if ζ and η are closed forms, and one can find some β such that

${\displaystyle \zeta -\eta =d\beta \,}$

then one says that ζ and η are cohomologous to each other. Exact forms are sometimes said to be cohomologous to zero. The set of all forms cohomologous to a given form (and thus to each other) is called a de Rham cohomology class; the general study of such classes is known as cohomology. It makes no real sense to ask whether a 0-form (smooth function) is exact, since d increases degree by 1; but the clues from topology suggest that only the zero function should be called "exact". The cohomology classes are identified with locally constant functions.

Using contracting homotopies similar to the one used in the proof of the Poincaré lemma, it can be shown that de Rham cohomology is homotopy-invariant. Non-contractible spaces in general have non-trivial de Rham cohomology. For instance, on the circle S¹, parametrized by t in [0, 1], the closed 1-form dt is not exact.
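The non-exactness of dt on S¹ can be seen in a short numerical sketch (the periodic test function is an arbitrary choice made here): the differential of any genuine function on the circle integrates to zero around the circle, while dt integrates to 1.

```python
import math

# On S^1, parametrized by t in [0, 1], a 0-form is a periodic function
# f(t + 1) = f(t).  Its differential df integrates to f(1) - f(0) = 0
# around the circle, while the closed 1-form dt integrates to 1, so dt
# cannot equal df for any function f on the circle.
n = 100000
f = lambda t: math.sin(2 * math.pi * t) + math.cos(6 * math.pi * t)  # periodic

int_df = sum(f((i + 1) / n) - f(i / n) for i in range(n))  # telescopes to 0
int_dt = sum(1 / n for _ in range(n))                      # Riemann sum of dt

print(int_df, int_dt)  # ≈ 0 and ≈ 1
```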

## Application in electrodynamics

In electrodynamics, the case of the magnetic field ${\displaystyle {\vec {B}}(\mathbf {r} )}$ produced by a stationary electrical current is important. There one deals with the vector potential ${\displaystyle {\vec {A}}(\mathbf {r} )}$ of this field. This case corresponds to two-forms (k = 2), and the defining region is the whole of ${\displaystyle \mathbb {R} ^{3}\,.}$ The current-density vector is ${\displaystyle {\vec {j}}\,.}$ It corresponds to the current two-form

${\displaystyle \mathbf {I} :=j_{1}(x,y,z)\,{\rm {d}}x_{2}\wedge {\rm {d}}x_{3}+j_{2}(x,y,z)\,{\rm {d}}x_{3}\wedge {\rm {d}}x_{1}+j_{3}(x,y,z)\,{\rm {d}}x_{1}\wedge {\rm {d}}x_{2}.}$

For the magnetic field ${\displaystyle {\vec {B}}}$ one has analogous results: it corresponds to the induction two-form ${\displaystyle \Phi _{B}:=B_{1}{\rm {d}}x_{2}\wedge {\rm {d}}x_{3}+\cdots ,}$ and can be derived from the vector potential ${\displaystyle {\vec {A}}}$, or the corresponding one-form ${\displaystyle \mathbf {A} }$,

${\displaystyle {\vec {B}}={\rm {curl\,\,}}{\vec {A}}=\left\{{\frac {\partial A_{3}}{\partial x_{2}}}-{\frac {\partial A_{2}}{\partial x_{3}}},{\frac {\partial A_{1}}{\partial x_{3}}}-{\frac {\partial A_{3}}{\partial x_{1}}},{\frac {\partial A_{2}}{\partial x_{1}}}-{\frac {\partial A_{1}}{\partial x_{2}}}\right\},{\text{ or }}\Phi _{B}={\rm {d}}\mathbf {A} .}$

Thereby the vector potential ${\displaystyle {\vec {A}}}$ corresponds to the potential one-form

${\displaystyle \mathbf {A} :=A_{1}\,{\rm {d}}x_{1}+A_{2}\,{\rm {d}}x_{2}+A_{3}\,{\rm {d}}x_{3}.}$

The closedness of the magnetic-induction two-form corresponds to the property of the magnetic field that it is source-free:   ${\displaystyle {\rm {div\,\,}}{\vec {B}}\equiv 0,}$ i.e. there are no magnetic monopoles.

In a special gauge, ${\displaystyle {\rm {div\,\,}}{\vec {A}}{\stackrel {!}{=}}0}$, this implies for i = 1, 2, 3

${\displaystyle A_{i}({\vec {r}})=\int {\frac {\mu _{0}j_{i}({\vec {r}}^{\,'})\,\,dx_{1}'dx_{2}'dx_{3}'}{4\pi |{\vec {r}}-{\vec {r}}^{\,'}|}}\,.}$

(Here ${\displaystyle \mu _{0}}$ is a constant, the magnetic vacuum permeability.)

This equation is remarkable because it corresponds completely to the well-known formula for the electric field ${\displaystyle {\vec {E}}}$, namely for the electrostatic Coulomb potential ${\displaystyle \,\phi (x_{1},x_{2},x_{3})}$ of a charge density ${\displaystyle \rho (x_{1},x_{2},x_{3})}$. At this point one can already guess that

• ${\displaystyle {\vec {E}}}$ and ${\displaystyle {\vec {B}},}$
• ${\displaystyle \rho }$ and ${\displaystyle {\vec {j}},}$
• ${\displaystyle \,\phi }$ and ${\displaystyle {\vec {A}}}$

can be unified into quantities with six and four nontrivial components, respectively, which is the basis of the relativistic invariance of the Maxwell equations.

If the condition of stationarity is dropped, then on the left-hand side of the above equation for ${\displaystyle A_{i}\,,}$ the time t must be added as a fourth variable to the three space coordinates, while on the right-hand side the so-called "retarded time" ${\displaystyle t':=t-{\frac {|{\vec {r}}-{\vec {r}}^{\,'}|}{c}}\,}$ must be used in ${\displaystyle j_{i}'\,,}$ i.e. it is added to the argument of the current density. Finally, as before, one integrates over the three primed space coordinates. (As usual, c is the vacuum velocity of light.)

## Notes

1. ^ Warner 1983, pp. 155–156
2. ^ Napier & Ramachandran 2011, pp. 443–444

## References

• Flanders, Harley (1989), Differential forms with applications to the physical sciences, New York: Dover Publications, ISBN 978-0-486-66169-8.
• Warner, Frank W. (1983), Foundations of differentiable manifolds and Lie groups, Graduate Texts in Mathematics, 94, Springer, ISBN 0-387-90894-3
• Napier, Terrence; Ramachandran, Mohan (2011), An introduction to Riemann surfaces, Birkhäuser, ISBN 978-0-8176-4693-6