Differential operator

From Wikipedia, the free encyclopedia

In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).

This article considers mainly linear operators, which are the most common type. However, non-linear differential operators, such as the Schwarzian derivative, also exist.

Notations

The most common differential operator is the action of taking the derivative itself. Common notations for taking the first derivative with respect to a variable x include:

\frac{d}{dx}, \quad D, \quad D_x, \quad \text{and} \quad \partial_x.

When taking higher, nth-order derivatives, the operator may also be written:

\frac{d^n}{dx^n}, \quad D^n, \quad \text{or} \quad D_x^n.

The derivative of a function f of an argument x is sometimes given as either of the following:

[f(x)]'
f'(x).
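The operator viewpoint can be made concrete in a computer algebra system. A minimal sketch using SymPy (assumed available), where `diff` plays the role of D, accepting an expression and returning another expression:

```python
# Minimal sketch with SymPy (assumed available): diff acts as the
# operator D, taking an expression and returning another expression.
import sympy as sp

x = sp.symbols('x')

first = sp.diff(sp.sin(x), x)   # D applied to sin(x) -> cos(x)
third = sp.diff(x**5, x, 3)     # D^3 applied to x^5  -> 60*x**2

print(first, third)
```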

The creation and use of the D notation is credited to Oliver Heaviside, who considered differential operators of the form

\sum_{k=0}^n c_k D^k

in his study of differential equations.

One of the most frequently seen differential operators is the Laplacian operator, defined by

\Delta=\nabla^{2}=\sum_{k=1}^n {\partial^2\over \partial x_k^2}.
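As an illustrative check (a sketch using SymPy, assumed available), the Laplacian in three variables can be computed directly as the sum of second partials:

```python
# Sketch (SymPy, assumed available): the Laplacian as a sum of
# second partial derivatives.
import sympy as sp

x, y, z = sp.symbols('x y z')
u = x**2 + y**2 + z**2

laplacian_u = sum(sp.diff(u, v, 2) for v in (x, y, z))
print(laplacian_u)  # 6
```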

Another differential operator is the Θ operator, or theta operator, defined by[1]

\Theta = z {d \over dz}.

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:

\Theta (z^k) = k z^k,\quad k=0,1,2,\dots

In n variables the homogeneity operator is given by

\Theta = \sum_{k=1}^n x_k \frac{\partial}{\partial x_k}.

As in one variable, the eigenspaces of Θ are the spaces of homogeneous polynomials.
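Both eigenvalue relations can be verified symbolically. A sketch with SymPy (assumed available), checking Θ(z^k) = k z^k in one variable and Euler's identity Θp = (deg p)·p for a homogeneous polynomial in three:

```python
# Sketch (SymPy, assumed available): the theta/homogeneity operator
# acts on homogeneous expressions by multiplication by the degree.
import sympy as sp

z, k = sp.symbols('z k', positive=True)

# One variable: Theta(z^k) = z * d/dz (z^k) = k z^k
theta_zk = z * sp.diff(z**k, z)
assert sp.simplify(theta_zk - k*z**k) == 0

# Three variables: Theta(p) = 3 p for a homogeneous cubic p
x1, x2, x3 = sp.symbols('x1 x2 x3')
p = x1**2*x2 + x2*x3**2
theta_p = sum(v*sp.diff(p, v) for v in (x1, x2, x3))
assert sp.expand(theta_p - 3*p) == 0
```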

The result of applying a differential operator to the function on its left, the result of applying it to the function on its right, and the difference between the two are denoted by arrows as follows:

f \overleftarrow{\partial_x} g = g \partial_x f
f \overrightarrow{\partial_x} g = f \partial_x g
f \overleftrightarrow{\partial_x} g = f \partial_x g - g \partial_x f.

Such a bidirectional-arrow notation is frequently used for describing the probability current in quantum mechanics.
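A quick check of the antisymmetric combination (a sketch using SymPy, assumed available): for a plane wave ψ = e^{ikx} with real k, one gets ψ* ∂↔ ψ = 2ik.

```python
# Sketch (SymPy, assumed available): the bidirectional derivative
# applied to a plane wave, as in the quantum probability current.
import sympy as sp

x, k = sp.symbols('x k', real=True)
psi = sp.exp(sp.I*k*x)
psi_c = sp.exp(-sp.I*k*x)       # complex conjugate, written explicitly

bidir = psi_c*sp.diff(psi, x) - psi*sp.diff(psi_c, x)
assert sp.simplify(bidir - 2*sp.I*k) == 0
```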

Del

The differential operator del, also called the nabla operator, is an important vector differential operator. It appears frequently in physics, for example in the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as:

\nabla = \mathbf{\hat{x}} {\partial \over \partial x}  + \mathbf{\hat{y}} {\partial \over \partial y} + \mathbf{\hat{z}} {\partial \over \partial z}.

Del is used to calculate the gradient, curl, divergence, and Laplacian of various objects.
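These operations are available directly in SymPy's vector module (a sketch, assuming SymPy is installed); the example also confirms the classical identity that the curl of a gradient vanishes:

```python
# Sketch (SymPy, assumed available): gradient, divergence, and curl,
# the standard operations built from del.
import sympy as sp
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

N = CoordSys3D('N')             # Cartesian frame with unit vectors i, j, k

f = N.x**2 * N.y
grad_f = gradient(f)            # 2*x*y i + x**2 j

v = N.x*N.i + N.y*N.j + N.z*N.k
div_v = divergence(v)           # 1 + 1 + 1 = 3

assert div_v == 3
assert curl(grad_f) == Vector.zero   # curl of a gradient is zero
```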

Adjoint of an operator

Given a linear differential operator T

Tu = \sum_{k=0}^n a_k(x) D^k u

the adjoint of this operator is defined as the operator T^* such that

\langle Tu,v \rangle = \langle u, T^*v \rangle

where the notation \langle\cdot,\cdot\rangle is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product.

Formal adjoint in one variable

On the space of square-integrable functions on a real interval (a, b), the scalar product is defined by

\langle f, g \rangle = \int_a^b f(x) \, \overline{g(x)} \,dx ,

where the line over g(x) denotes the complex conjugate of g(x). If one moreover adds the condition that f or g vanishes as x \to a and x \to b, one can also define the adjoint of T by

T^*u = \sum_{k=0}^n (-1)^k D^k [\overline{a_k(x)}u].

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When T^* is defined according to this formula, it is called the formal adjoint of T.
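The defining relation ⟨Tu, v⟩ = ⟨u, T*v⟩ can be spot-checked by symbolic integration. A sketch with SymPy (assumed available) for T = D on [0, 1], whose formal adjoint is −D, using real test functions that vanish at both endpoints:

```python
# Sketch (SymPy, assumed available): <Tu, v> = <u, T*v> for T = D,
# T* = -D, with u, v vanishing at the endpoints so that the boundary
# terms in the integration by parts drop out.
import sympy as sp

x = sp.symbols('x')
u = x*(1 - x)                   # u(0) = u(1) = 0
v = x**2*(1 - x)                # v(0) = v(1) = 0

lhs = sp.integrate(sp.diff(u, x) * v, (x, 0, 1))     # <Tu, v>
rhs = sp.integrate(u * (-sp.diff(v, x)), (x, 0, 1))  # <u, T*v>
assert lhs == rhs
```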

A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.

Several variables

If Ω is a domain in R^n, and P a differential operator on Ω, then the adjoint of P is defined in L^2(Ω) by duality in the analogous manner:

\langle f, P^* g\rangle_{L^2(\Omega)} = \langle P f, g\rangle_{L^2(\Omega)}

for all smooth L^2 functions f, g. Since smooth functions are dense in L^2, this defines the adjoint on a dense subset of L^2: P^* is a densely defined operator.

Example

The Sturm–Liouville operator is a well-known example of a formally self-adjoint operator. This second-order linear differential operator L can be written in the form

Lu = -(pu')' + qu = -(pu'' + p'u') + qu = -pu'' - p'u' + qu = (-p)D^2u + (-p')Du + (q)u.

This property can be proven using the formal adjoint definition above, taking the coefficients p and q to be real so that the conjugates can be dropped.

\begin{align}
L^*u & {} = (-1)^2 D^2 [(-p)u] + (-1)^1 D [(-p')u] + (-1)^0 (qu) \\
 & {} = -D^2(pu) + D(p'u)+qu \\
 & {} = -(pu)''+(p'u)'+qu \\
 & {} = -p''u-2p'u'-pu''+p''u+p'u'+qu \\
 & {} = -p'u'-pu''+qu \\
 & {} = -(pu')'+qu \\
 & {} = Lu
\end{align}

This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
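The calculation above can also be carried out mechanically. A sketch with SymPy (assumed available), again with p and q real so the conjugates in the formal-adjoint formula can be dropped:

```python
# Sketch (SymPy, assumed available): verify L* u = L u for
# L u = -(p u')' + q u, using the formal adjoint
# L* u = D^2[(-p)u] - D[(-p')u] + q u  (real coefficients).
import sympy as sp

x = sp.symbols('x')
p, q, u = (sp.Function(n)(x) for n in ('p', 'q', 'u'))

Lu = -sp.diff(p*sp.diff(u, x), x) + q*u
Lstar_u = sp.diff(-p*u, x, 2) - sp.diff(-sp.diff(p, x)*u, x) + q*u

assert sp.simplify(sp.expand(Lu - Lstar_u)) == 0
```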

Properties of differential operators

Differentiation is linear, i.e.,

D(f+g) = (Df) + (Dg)
D(af) = a(Df)

where f and g are functions, and a is a constant.

Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule

(D_1 \circ D_2)(f) = D_1(D_2(f)).

Some care is then required: first, any function coefficients in the operator D_2 must be differentiable as many times as the application of D_1 requires. To get a ring of such operators, we must assume derivatives of all orders of the coefficients used. Second, this ring will not be commutative: an operator gD is not in general the same as Dg. For example, we have the relation, basic in quantum mechanics:

Dx - xD = 1.
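The relation can be checked by applying both sides to an arbitrary function; a sketch with SymPy (assumed available):

```python
# Sketch (SymPy, assumed available): (Dx - xD) f = D(x f) - x Df = f,
# which is the operator identity Dx - xD = 1.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

lhs = sp.diff(x*f, x) - x*sp.diff(f, x)
assert sp.simplify(lhs - f) == 0
```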

The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.

The differential operators also obey the shift theorem.

Several variables

The same constructions can be carried out with partial derivatives, differentiation with respect to different variables giving rise to operators that commute (see symmetry of second derivatives).
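A sketch with SymPy (assumed available) confirming that partial derivatives with respect to different variables commute on a smooth function:

```python
# Sketch (SymPy, assumed available): symmetry of second derivatives,
# i.e. d/dx and d/dy commute as operators on smooth functions.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x*y) + x**3*y**2

assert sp.simplify(sp.diff(f, x, y) - sp.diff(f, y, x)) == 0
```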

Coordinate-independent description

In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle J^k(E). In other words, there exists a linear mapping of vector bundles

i_P: J^k(E) \rightarrow F

such that

P = i_P\circ j^k

where j^k: Γ(E) → Γ(J^k(E)) is the prolongation that associates to any section of E its k-jet.

This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s at x. In particular this implies that P(s)(x) is determined by the germ of s at x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem, showing that the converse is also true: any (linear) local operator is differential.

Relation to commutative algebra

An equivalent, but purely algebraic description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator, if for any k + 1 smooth functions f_0,\ldots,f_k \in C^\infty(M) we have

[f_k,[f_{k-1},[\cdots[f_0,P]\cdots]]]=0.

Here the bracket [f,P]:\Gamma(E)\rightarrow \Gamma(F) is defined as the commutator

[f,P](s)=P(f\cdot s)-f\cdot P(s).

This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
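The commutator criterion is easy to test in a small sketch (SymPy, assumed available): for the first-order operator P = d/dx, a single bracket [f_0, P] is multiplication by f_0', an order-0 operator, so the double bracket vanishes.

```python
# Sketch (SymPy, assumed available): for P = d/dx (order 1),
# [f0, P] acts as multiplication by f0', so [f1, [f0, P]] = 0.
import sympy as sp

x = sp.symbols('x')
f0, f1, s = (sp.Function(n)(x) for n in ('f0', 'f1', 's'))

def P(w):                       # first-order operator P = d/dx
    return sp.diff(w, x)

def bracket(f, Op):             # [f, Op](w) = Op(f*w) - f*Op(w)
    return lambda w: Op(f*w) - f*Op(w)

inner = bracket(f0, P)          # acts as w -> f0' * w  (order 0)
outer = bracket(f1, inner)      # should annihilate everything

assert sp.simplify(outer(s)) == 0
```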

Examples

In complex analysis, the Wirtinger derivatives are the differential operators

\frac{\partial}{\partial z} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right), \quad \frac{\partial}{\partial\bar{z}} = \frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right).

This approach is also used to study functions of several complex variables and functions of a motor variable.
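A sketch with SymPy (assumed available): for a holomorphic function such as f = (x + iy)^2 = z^2, the ∂/∂z̄ operator gives zero (the Cauchy–Riemann equations) while ∂/∂z returns the complex derivative 2z:

```python
# Sketch (SymPy, assumed available): Wirtinger derivatives applied
# to the holomorphic function f = (x + i y)^2 = z^2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y
f = z**2

d_zbar = (sp.diff(f, x) + sp.I*sp.diff(f, y)) / 2
d_z    = (sp.diff(f, x) - sp.I*sp.diff(f, y)) / 2

assert sp.expand(d_zbar) == 0        # Cauchy-Riemann: df/dzbar = 0
assert sp.expand(d_z - 2*z) == 0     # df/dz = 2z
```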

History

The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800.[2]

References

  1. E. W. Weisstein. "Theta Operator". Retrieved 2009-06-12.
  2. James Gasser (editor), A Boole Anthology: Recent and classical studies in the logic of George Boole (2000), p. 169; Google Books.
