# Closed and exact differential forms

In mathematics, especially vector calculus and differential topology, a closed form is a differential form α whose exterior derivative is zero (dα = 0), and an exact form is a differential form α that is the exterior derivative of another differential form β; that is, α = dβ. Thus, an exact form is in the image of d, and a closed form is in the kernel of d.

For an exact form α, α = dβ for some differential form β of degree one less than that of α. The form β is called a "potential form" or "primitive" for α. Since the exterior derivative of a closed form is zero, β is not unique, but can be modified by the addition of any closed form of degree one less than that of α.

Because $d^{2}=0$, any exact form is necessarily closed. The question of whether every closed form is exact depends on the topology of the domain of interest. On a contractible domain, every closed form is exact by the Poincaré lemma. More general questions of this kind on an arbitrary differentiable manifold are the subject of de Rham cohomology, which allows one to obtain purely topological information using differential methods.

## Examples

A simple example of a form which is closed but not exact is the 1-form $d\theta$ [note 1] given by the derivative of the argument function on the punctured plane $\mathbf {R} ^{2}\setminus \{0\}$ . Since $\theta$ is not actually a function (see the next paragraph), $d\theta$ is not an exact form. Still, $d\theta$ has vanishing derivative and is therefore closed.

Note that the argument $\theta$ is only defined up to an integer multiple of $2\pi$ since a single point $p$ can be assigned different arguments $r$ , $r+2\pi$ , etc. We can assign arguments in a locally consistent manner around $p$ , but not in a globally consistent manner. This is because if we trace a loop from $p$ counterclockwise around the origin and back to $p$ , the argument increases by $2\pi$ . Generally, the argument $\theta$ changes by

$\oint _{S^{1}}d\theta$ over a counter-clockwise oriented loop $S^{1}$ .

Even though the argument $\theta$ is not technically a function, the different local definitions of $\theta$ at a point $p$ differ from one another by constants. Since the derivative at $p$ only uses local data, and since functions that differ by a constant have the same derivative, the argument has a globally well-defined derivative "$d\theta$ ".[note 2]

The upshot is that $d\theta$ is a one-form on $\mathbf {R} ^{2}\setminus \{0\}$ that is not actually the derivative of any well-defined function $\theta$ . We say that $d\theta$ is not exact. Explicitly, $d\theta$ is given as:

$d\theta ={\frac {-y\,dx+x\,dy}{x^{2}+y^{2}}}$ ,

which by inspection has derivative zero. Because $d\theta$ has vanishing derivative, we say that it is closed.

This form generates the de Rham cohomology group $H_{dR}^{1}(\mathbf {R} ^{2}\setminus \{0\})\cong \mathbf {R} ,$ meaning that any closed form $\omega$ is the sum of an exact form $df$ and a multiple of $d\theta$: $\omega =df+k\,d\theta ,$ where $\textstyle {k={\frac {1}{2\pi }}\oint _{S^{1}}\omega }$ measures the contour integral around the origin. This integral is the only obstruction to a closed form on the punctured plane (which is locally the derivative of a potential function) being the derivative of a globally defined function.
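Both facts, closedness and the non-zero contour integral, can be checked numerically. The following Python sketch (the sample point, step size, and discretization are illustrative choices) verifies that the mixed partials of $d\theta$ agree and that its integral around the unit circle is $2\pi$:

```python
import math

def dtheta(x, y):
    """Components (P, Q) of dtheta = P dx + Q dy on the punctured plane."""
    r2 = x * x + y * y
    return -y / r2, x / r2

# Closedness: Q_x - P_y should vanish; check by central differences.
h = 1e-6
x0, y0 = 0.7, -0.4
Qx = (dtheta(x0 + h, y0)[1] - dtheta(x0 - h, y0)[1]) / (2 * h)
Py = (dtheta(x0, y0 + h)[0] - dtheta(x0, y0 - h)[0]) / (2 * h)
print(Qx - Py)  # ~0

# Non-exactness: the integral around the unit circle is 2*pi, not 0.
n = 10_000
total = 0.0
for k in range(n):
    t = 2 * math.pi * (k + 0.5) / n
    x, y = math.cos(t), math.sin(t)
    dx = -math.sin(t) * (2 * math.pi / n)
    dy = math.cos(t) * (2 * math.pi / n)
    P, Q = dtheta(x, y)
    total += P * dx + Q * dy
print(total)  # ~2*pi
```

On the unit circle the integrand reduces to $dt$, so the loop integral is $2\pi$ up to floating-point error; an exact form would give 0.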

## Examples in low dimensions

Differential forms in $\mathbf {R} ^{2}$ and $\mathbf {R} ^{3}$ were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, and 2-forms are functions times the basic area element $dx\wedge dy$, so that it is the 1-forms

$\alpha =f(x,y)\,dx+g(x,y)\,dy$ that are of real interest. The formula for the exterior derivative d here is

$d\alpha =(g_{x}-f_{y})\,dx\wedge dy$ where the subscripts denote partial derivatives. Therefore the condition for $\alpha$ to be closed is

$f_{y}=g_{x}.$ In this case if h(x, y) is a function then

$dh=h_{x}\,dx+h_{y}\,dy.$ The implication from 'exact' to 'closed' is then a consequence of the symmetry of second derivatives, with respect to x and y.

The gradient theorem asserts that a 1-form is exact if and only if the line integral of the form depends only on the endpoints of the curve, or equivalently, if the integral around any smooth closed curve is zero.
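This path-independence is easy to see numerically. The sketch below (the potential $f(x,y)=x^{2}y$ and the two paths from (0, 0) to (1, 1) are illustrative choices) integrates the exact form $df=2xy\,dx+x^{2}\,dy$ along two different curves and obtains $f(1,1)-f(0,0)=1$ both times:

```python
# Exact 1-form df for f(x, y) = x**2 * y:  df = 2xy dx + x**2 dy.
def form(x, y):
    return 2 * x * y, x * x

def line_integral(path, n=20_000):
    """Integrate the 1-form along a path parametrized on [0, 1]."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n                  # midpoint of the k-th segment
        x0, y0 = path(k / n)
        x1, y1 = path((k + 1) / n)
        P, Q = form(*path(t))
        total += P * (x1 - x0) + Q * (y1 - y0)
    return total

straight = lambda t: (t, t)        # straight line from (0, 0) to (1, 1)
parabola = lambda t: (t, t * t)    # parabolic arc from (0, 0) to (1, 1)
print(line_integral(straight))  # ~1.0
print(line_integral(parabola))  # ~1.0
```

Any other smooth path between the same endpoints gives the same value, and a closed loop gives 0, as the gradient theorem asserts.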

### Vector field analogies

On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, k-forms correspond to k-vector fields (by duality via the metric), so there is a notion of a vector field corresponding to a closed or exact form.

In 3 dimensions, an exact vector field (thought of as a 1-form) is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form (smooth scalar field), called the scalar potential. A closed vector field (thought of as a 1-form) is one whose derivative (curl) vanishes, and is called an irrotational vector field.

Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow (sometimes solenoidal vector field). The term incompressible is used because a non-zero divergence corresponds to the presence of sources and sinks in analogy with a fluid.

The concepts of conservative and incompressible vector fields generalize to n dimensions, because gradient and divergence generalize to n dimensions; curl is defined only in three dimensions, thus the concept of irrotational vector field does not generalize in this way.
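The identities behind these analogies, $\operatorname {curl} (\operatorname {grad} f)=0$ and $\operatorname {div} (\operatorname {curl} {\vec {F}})=0$, are the vector-calculus faces of $d^{2}=0$. A finite-difference sketch (the sample fields, evaluation point, and step size are illustrative choices):

```python
h = 1e-4  # finite-difference step

def partial(F, i, j, p):
    """Central-difference partial of component i of F w.r.t. coordinate j at p."""
    q1, q0 = list(p), list(p)
    q1[j] += h
    q0[j] -= h
    return (F(*q1)[i] - F(*q0)[i]) / (2 * h)

def grad(f):
    g = lambda *q: [f(*q)]
    return lambda *q: [partial(g, 0, j, q) for j in range(3)]

def curl(F):
    return lambda *q: [partial(F, 2, 1, q) - partial(F, 1, 2, q),
                       partial(F, 0, 2, q) - partial(F, 2, 0, q),
                       partial(F, 1, 0, q) - partial(F, 0, 1, q)]

def div(F):
    return lambda *q: sum(partial(F, i, i, q) for i in range(3))

f = lambda x, y, z: x * y * z + z ** 2          # sample scalar potential
F = lambda x, y, z: (y * z, x * z ** 2, x * y)  # sample vector field

p = (0.3, 0.8, -0.5)
print(curl(grad(f))(*p))  # ~[0, 0, 0]
print(div(curl(F))(*p))   # ~0
```

The residuals are tiny because central-difference operators commute exactly, mirroring the symmetry of second derivatives that makes $d^{2}=0$ hold.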

## Poincaré lemma

The Poincaré lemma states that if B is an open ball in $\mathbf {R} ^{n}$, any smooth closed p-form ω defined on B is exact, for any integer p with $1\leq p\leq n$.

Translating if necessary, it can be assumed that the ball B has centre 0. Let $\alpha _{s}$ be the flow on $\mathbf {R} ^{n}$ defined by $\alpha _{s}x=e^{-s}x$. For s ≥ 0 it carries B into itself and induces an action on functions and differential forms. The derivative of the flow is the vector field X defined on functions f by $Xf=\left.{\tfrac {d}{ds}}(\alpha _{s}f)\right|_{s=0}$: it is the radial vector field $-r\,{\tfrac {\partial }{\partial r}}=-\sum x_{i}\,{\tfrac {\partial }{\partial x_{i}}}$. The derivative of the flow on forms defines the Lie derivative with respect to X given by $L_{X}\omega =\left.{\tfrac {d}{ds}}(\alpha _{s}\omega )\right|_{s=0}$. In particular

$\displaystyle {{d \over ds}\alpha _{s}\omega =\alpha _{s}L_{X}\omega ,}$ Now define

$\displaystyle {h\omega =-\int _{0}^{\infty }\alpha _{t}\omega \,dt.}$ By the fundamental theorem of calculus we have that

$\displaystyle {hL_{X}\omega =-\int _{0}^{\infty }\alpha _{t}L_{X}\omega \,dt=-\int _{0}^{\infty }{d \over dt}(\alpha _{t}\omega )\,dt=-[\alpha _{t}\omega ]_{0}^{\infty }=\omega ,}$ since $\alpha _{0}$ is the identity and $\alpha _{t}\omega \to 0$ as $t\to \infty$ for any form of positive degree. With $\iota _{X}$ denoting interior multiplication or contraction by the vector field X, Cartan's formula gives:

$\displaystyle {L_{X}=d\iota _{X}+\iota _{X}d.}$ Using the fact that d commutes with h (as αs does with d) we get:

$\displaystyle {hL_{X}\omega =h(d\iota _{X}+\iota _{X}d)\omega =d(h\iota _{X}\omega )+h\iota _{X}d\omega =\omega }$ .

It now follows that if ω is closed, i.e. dω = 0, then d(h ιXω) = ω, so that ω is exact and the Poincaré lemma is proved.

(In the language of homological algebra, hιX is a "contracting homotopy".)

The same method applies to any open set in $\mathbf {R} ^{n}$ that is star-shaped about 0, i.e. any open set containing 0 and invariant under $\alpha _{t}$ for $t\geq 0$.
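For a closed 1-form $\omega =P\,dx+Q\,dy$ in the plane, substituting $s=e^{-t}$ in the definition of $h$ turns the primitive $h\iota _{X}\omega$ into the familiar star-shaped formula $\textstyle f(x,y)=\int _{0}^{1}{\bigl (}x\,P(sx,sy)+y\,Q(sx,sy){\bigr )}\,ds$. A numerical sketch (the sample form $P=y\cos(xy)$, $Q=x\cos(xy)$ and the evaluation point are illustrative choices) checks that $df=\omega$:

```python
import math

# Sample closed 1-form omega = P dx + Q dy on R^2;
# P_y = Q_x holds since both equal cos(xy) - xy*sin(xy).
P = lambda x, y: y * math.cos(x * y)
Q = lambda x, y: x * math.cos(x * y)

def primitive(x, y, n=20_000):
    """f(x, y) = int_0^1 (x*P(sx, sy) + y*Q(sx, sy)) ds, by the midpoint rule."""
    total = 0.0
    for k in range(n):
        s = (k + 0.5) / n
        total += x * P(s * x, s * y) + y * Q(s * x, s * y)
    return total / n

# Verify df = omega at a sample point by central differences.
eps = 1e-4
x0, y0 = 0.6, 1.1
fx = (primitive(x0 + eps, y0) - primitive(x0 - eps, y0)) / (2 * eps)
fy = (primitive(x0, y0 + eps) - primitive(x0, y0 - eps)) / (2 * eps)
print(fx - P(x0, y0))  # ~0
print(fy - Q(x0, y0))  # ~0
```

For this particular sample form the recovered primitive agrees with $\sin(xy)$, the exact antiderivative.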

Example: In two dimensions the Poincaré lemma can be proved directly for closed 1-forms and 2-forms as follows.

If ω = p dx + q dy is a closed 1-form on (a, b) × (c, d), then py = qx. If ω = df then p = fx and q = fy. Set

$\displaystyle {g(x,y)=\int _{a}^{x}p(t,y)\,dt,}$ so that gx = p. Then h = fg must satisfy hx = 0 and hy = qgy. The right hand side here is independent of x since its partial derivative with respect to x is 0. So

$\displaystyle {h(x,y)=\int _{c}^{y}q(a,s)\,ds-g(a,y)=\int _{c}^{y}q(a,s)\,ds,}$ and hence

$\displaystyle {f(x,y)=\int _{a}^{x}p(t,y)\,dt+\int _{c}^{y}q(a,s)\,ds.}$ Similarly, if $\Omega =r\,dx\wedge dy$ then $\Omega =d(a\,dx+b\,dy)$ with $b_{x}-a_{y}=r$. Thus a solution is given by $a=0$ and

$\displaystyle {b(x,y)=\int _{a}^{x}r(t,y)\,dt.}$

## Formulation as cohomology

When the difference of two closed forms is an exact form, they are said to be cohomologous to each other. That is, if ζ and η are closed forms, and one can find some β such that

$\zeta -\eta =d\beta$ then one says that ζ and η are cohomologous to each other. Exact forms are sometimes said to be cohomologous to zero. The set of all forms cohomologous to a given form (and thus to each other) is called a de Rham cohomology class; the general study of such classes is known as cohomology. It makes no real sense to ask whether a 0-form (smooth function) is exact, since d increases degree by 1; but the clues from topology suggest that only the zero function should be called "exact". The cohomology classes are identified with locally constant functions.

Using contracting homotopies similar to the one used in the proof of the Poincaré lemma, it can be shown that de Rham cohomology is homotopy-invariant. In general, non-contractible differentiable manifolds have non-trivial de Rham cohomology. For instance, on the circle $S^{1}$, parametrized by t in [0, 1], the closed 1-form dt is not exact, since its integral around the circle is 1 rather than 0.

## Application in electrodynamics

In electrodynamics, the case of the magnetic field ${\vec {B}}(\mathbf {r} )$ produced by a stationary electrical current is important. There one deals with the vector potential ${\vec {A}}(\mathbf {r} )$ of this field. This case corresponds to k = 2, and the domain of definition is all of $\mathbb {R} ^{3}\,.$ The current-density vector is ${\vec {j}}\,.$ It corresponds to the current two-form

$\mathbf {I} :=j_{1}(x_{1},x_{2},x_{3})\,{\rm {d}}x_{2}\wedge {\rm {d}}x_{3}+j_{2}(x_{1},x_{2},x_{3})\,{\rm {d}}x_{3}\wedge {\rm {d}}x_{1}+j_{3}(x_{1},x_{2},x_{3})\,{\rm {d}}x_{1}\wedge {\rm {d}}x_{2}.$ For the magnetic field ${\vec {B}}$ one has analogous results: it corresponds to the induction two-form $\Phi _{B}:=B_{1}{\rm {d}}x_{2}\wedge {\rm {d}}x_{3}+\cdots ,$ and can be derived from the vector potential ${\vec {A}}$ , or the corresponding one-form $\mathbf {A}$ ,

${\vec {B}}={\rm {curl\,\,}}{\vec {A}}=\left\{{\frac {\partial A_{3}}{\partial x_{2}}}-{\frac {\partial A_{2}}{\partial x_{3}}},{\frac {\partial A_{1}}{\partial x_{3}}}-{\frac {\partial A_{3}}{\partial x_{1}}},{\frac {\partial A_{2}}{\partial x_{1}}}-{\frac {\partial A_{1}}{\partial x_{2}}}\right\},{\text{ or }}\Phi _{B}={\rm {d}}\mathbf {A} .$ The vector potential ${\vec {A}}$ thus corresponds to the potential one-form

$\mathbf {A} :=A_{1}\,{\rm {d}}x_{1}+A_{2}\,{\rm {d}}x_{2}+A_{3}\,{\rm {d}}x_{3}.$ The closedness of the magnetic-induction two-form corresponds to the property of the magnetic field that it is source-free:   ${\rm {div\,\,}}{\vec {B}}\equiv 0,$ i.e. that there are no magnetic monopoles.

In a special gauge, the Coulomb gauge $\operatorname {div} {\vec {A}}{~{\stackrel {!}{=}}~}0$, this implies for i = 1, 2, 3

$A_{i}({\vec {r}})=\int {\frac {\mu _{0}j_{i}({\vec {r}}^{\,'})\,\,dx_{1}'dx_{2}'dx_{3}'}{4\pi |{\vec {r}}-{\vec {r}}^{\,'}|}}\,.$ (Here $\mu _{0}$ is a constant, the magnetic vacuum permeability.)

This equation is remarkable, because it corresponds completely to a well-known formula for the electrical field ${\vec {E}}$ , namely for the electrostatic Coulomb potential $\,\phi (x_{1},x_{2},x_{3})$ of a charge density $\rho (x_{1},x_{2},x_{3})$ . At this point one can already guess that

• ${\vec {E}}$ and ${\vec {B}}$,
• $\rho$ and ${\vec {j}}$,
• $\,\phi$ and ${\vec {A}}$

can be unified into quantities with six and four nontrivial components, respectively, which is the basis of the relativistic invariance of the Maxwell equations.

If the condition of stationarity is dropped, then in the equations for $A_{i}\,,$ the time t must be added on the left-hand side as a fourth variable alongside the three space coordinates, while on the right-hand side the so-called "retarded time" $t':=t-{\frac {|{\vec {r}}-{\vec {r}}^{\,'}|}{c}}$ must be used in $j_{i}\,,$ i.e. it is inserted into the argument of the current density. As before, one then integrates over the three primed space coordinates. (As usual, c is the vacuum velocity of light.)