Bramble–Hilbert lemma

In mathematics, particularly numerical analysis, the Bramble–Hilbert lemma, named after James H. Bramble and Stephen Hilbert, bounds the error of an approximation of a function $u$ by a polynomial of order at most $m-1$ in terms of derivatives of $u$ of order $m$. Both the error of the approximation and the derivatives of $u$ are measured by $L^p$ norms on a bounded domain in $\mathbb{R}^n$. This is similar to classical numerical analysis, where, for example, the error of linear interpolation of $u$ can be bounded using the second derivative of $u$. However, the Bramble–Hilbert lemma applies in any number of dimensions, not just one dimension, and the approximation error and the derivatives of $u$ are measured by more general norms involving averages, not just the maximum norm.

Additional assumptions on the domain are needed for the Bramble–Hilbert lemma to hold. Essentially, the boundary of the domain must be "reasonable". For example, domains that have a spike or a slit with zero angle at the tip are excluded. Lipschitz domains are reasonable enough; this class includes convex domains and domains with continuously differentiable boundary.

The main use of the Bramble–Hilbert lemma is to prove bounds on the error of interpolation of a function $u$ by an operator that preserves polynomials of order up to $m-1$, in terms of the derivatives of $u$ of order $m$. This is an essential step in error estimates for the finite element method. The Bramble–Hilbert lemma is applied there on the domain consisting of one element (or, in some superconvergence results, a small number of elements).
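
For orientation (an illustration added here, not a statement taken from the cited sources), a typical element-level estimate obtained in this way has the following form: if $T$ is an element of diameter $h$ and $I_h$ is an interpolation operator that reproduces every polynomial in $P_{m-1}$ and is suitably bounded, then

$$|u-I_h u|_{W_p^k(T)}\le C\,h^{m-k}\,|u|_{W_p^m(T)},\qquad k=0,\ldots,m,$$

so the interpolation error decreases at the rate $h^{m-k}$ as the mesh is refined; precise hypotheses on $I_h$ are given by Brenner and Scott.[3]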

The one-dimensional case

Before stating the lemma in full generality, it is useful to look at some simple special cases. In one dimension and for a function $u$ that has $m$ derivatives on the interval $(a,b)$, the lemma reduces to

$$\inf_{v\in P_{m-1}}\bigl\|u^{(k)}-v^{(k)}\bigr\|_{L^p(a,b)}\le C(m)\,(b-a)^{m-k}\,\bigl\|u^{(m)}\bigr\|_{L^p(a,b)},$$

where $P_{m-1}$ is the space of all polynomials of degree at most $m-1$ and $u^{(k)}$ indicates the $k$th derivative of a function $u$.

In the case when $p=\infty$, $m=2$, $k=0$, and $u$ is twice differentiable, this means that there exists a polynomial $v$ of degree one such that for all $x\in(a,b)$,

$$|u(x)-v(x)|\le C\,(b-a)^2\sup_{(a,b)}|u''|.$$

This inequality also follows from the well-known error estimate for linear interpolation by choosing $v$ as the linear interpolant of $u$.
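
As a numerical illustration (added here; the test function, the interval, and the classical constant $C=1/8$ for interpolation at the endpoints are choices for this example only, not part of the article's statement), the following Python snippet checks the special case above with $v$ taken as the linear interpolant of $u$ at $a$ and $b$:

```python
import numpy as np

a, b = 0.0, 0.7                      # interval (a, b); an arbitrary choice for this example
u   = np.sin                         # twice-differentiable test function
d2u = lambda x: -np.sin(x)           # its second derivative

x = np.linspace(a, b, 10001)
v = u(a) + (u(b) - u(a)) * (x - a) / (b - a)     # linear interpolant of u at a and b

lhs = np.max(np.abs(u(x) - v))                   # sup |u - v| over (a, b)
rhs = (b - a) ** 2 / 8 * np.max(np.abs(d2u(x)))  # classical bound with C = 1/8
print(lhs, rhs, lhs <= rhs)                      # roughly 0.021, 0.039, True
```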

Statement of the lemma

Suppose $\Omega$ is a bounded domain in $\mathbb{R}^n$, $n\ge 1$, with boundary $\partial\Omega$ and diameter $d$. $W_p^k(\Omega)$ is the Sobolev space of all functions $u$ on $\Omega$ with weak derivatives $D^\alpha u$ of order $|\alpha|$ up to $k$ in $L^p(\Omega)$. Here, $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_n)$ is a multiindex, $|\alpha|=\alpha_1+\alpha_2+\cdots+\alpha_n$, and $D^\alpha$ denotes the derivative taken $\alpha_1$ times with respect to $x_1$, $\alpha_2$ times with respect to $x_2$, and so on. The Sobolev seminorm on $W_p^m(\Omega)$ consists of the $L^p$ norms of the highest order derivatives,

$$|u|_{W_p^m(\Omega)}=\Bigl(\sum_{|\alpha|=m}\|D^\alpha u\|_{L^p(\Omega)}^p\Bigr)^{1/p}\quad\text{for }1\le p<\infty,$$

and

$$|u|_{W_\infty^m(\Omega)}=\max_{|\alpha|=m}\|D^\alpha u\|_{L^\infty(\Omega)}.$$

$P_{m-1}$ is the space of all polynomials of order up to $m-1$ on $\mathbb{R}^n$. Note that $D^\alpha v=0$ for all $v\in P_{m-1}$ and all $\alpha$ with $|\alpha|=m$, so $|u+v|_{W_p^m(\Omega)}$ has the same value for any $v\in P_{m-1}$.

Lemma (Bramble and Hilbert) Under additional assumptions on the domain $\Omega$, specified below, there exists a constant $C=C(m,\Omega)$ independent of $p$ and $u$ such that for any $u\in W_p^m(\Omega)$ there exists a polynomial $v\in P_{m-1}$ such that for all $k=0,\ldots,m$,

$$|u-v|_{W_p^k(\Omega)}\le C\,d^{m-k}\,|u|_{W_p^m(\Omega)}.$$

The original result

The lemma was proved by Bramble and Hilbert[1] under the assumption that $\Omega$ satisfies the strong cone property; that is, there exists a finite open covering $\{O_i\}$ of $\partial\Omega$ and corresponding cones $\{C_i\}$ with vertices at the origin such that $x+C_i$ is contained in $\Omega$ for any $x\in\Omega\cap O_i$.

The statement of the lemma here is a simple rewriting of the right-hand inequality stated in Theorem 1 in [1]. The actual statement in [1] is that the norm of the factor space $W_p^m(\Omega)/P_{m-1}$ is equivalent to the $W_p^m(\Omega)$ seminorm. The $W_p^m(\Omega)$ norm used there is not the usual one; its terms are scaled with $d$ so that the right-hand inequality in the equivalence of the seminorms comes out exactly as in the statement here.

In the original result, the choice of the polynomial $v$ is not specified, and the value of the constant $C$ and its dependence on the domain $\Omega$ cannot be determined from the proof.

A constructive form

An alternative result was given by Dupont and Scott[2] under the assumption that the domain $\Omega$ is star-shaped; that is, there exists a ball $B$ such that for any $x\in\Omega$, the closed convex hull of $\{x\}\cup B$ is a subset of $\Omega$. Suppose that $\rho_{\max}$ is the supremum of the diameters of such balls. The ratio $\gamma=d/\rho_{\max}$ is called the chunkiness of $\Omega$.

Then the lemma holds with the constant $C=C(m,n,\gamma)$, that is, the constant depends on the domain $\Omega$ only through its chunkiness $\gamma$ and the dimension of the space $n$. In addition, $v$ can be chosen as $v=Q^m u$, where $Q^m u$ is the averaged Taylor polynomial, defined as

$$Q^m u=\int_B T_y^m u(x)\,\psi(y)\,dy,$$

where

$$T_y^m u(x)=\sum_{k=0}^{m-1}\sum_{|\alpha|=k}\frac{1}{\alpha!}\,D^\alpha u(y)\,(x-y)^\alpha$$

is the Taylor polynomial of degree at most $m-1$ of $u$ centered at $y$ evaluated at $x$, and $\psi\ge 0$ is a function that has derivatives of all orders, equals zero outside of $B$, and such that

$$\int_B\psi\,dx=1.$$

Such a function $\psi$ always exists.
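
As an illustration (added here, not part of the original presentation), the following Python sketch evaluates a one-dimensional averaged Taylor polynomial by numerical quadrature; in one dimension the inner sum over multiindices collapses to a single term. The bump function, the ball, and the test function below are arbitrary choices for the example. Since $T_y^m u=u$ whenever $u$ is a polynomial of degree less than $m$, the computed $Q^m u$ should reproduce such polynomials up to quadrature error.

```python
import math
import numpy as np

def bump(y, center, radius):
    """Smooth weight psi: infinitely differentiable, zero outside B = (center - radius, center + radius)."""
    t = (y - center) / radius
    out = np.zeros_like(y)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

def averaged_taylor(derivs, x, m, center, radius, n_quad=4001):
    """Approximate Q^m u(x) = integral over B of T_y^m u(x) psi(y) dy; derivs = [u, u', ..., u^(m-1)]."""
    y = np.linspace(center - radius, center + radius, n_quad)
    dy = y[1] - y[0]
    psi = bump(y, center, radius)
    psi /= psi.sum() * dy                     # normalize so that the integral of psi over B is 1
    # Taylor polynomial of degree m-1 of u centered at each quadrature node y, evaluated at x
    T = sum(derivs[k](y) * (x - y) ** k / math.factorial(k) for k in range(m))
    return (T * psi).sum() * dy               # quadrature approximation of the defining integral

# Q^m reproduces polynomials of degree < m: with u(x) = 1 + 2x + 3x^2 and m = 3,
# Q^3 u(0.9) agrees with u(0.9) = 5.23 up to quadrature error.
u   = lambda x: 1 + 2 * x + 3 * x ** 2
du  = lambda x: 2 + 6 * x
d2u = lambda x: np.full_like(np.asarray(x, dtype=float), 6.0)

print(averaged_taylor([u, du, d2u], 0.9, m=3, center=0.5, radius=0.25))
print(u(0.9))
```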

For more details and a tutorial treatment, see the monograph by Brenner and Scott.[3] The result can be extended to the case when the domain $\Omega$ is a union of a finite number of star-shaped domains, which is slightly more general than the strong cone property, and to other polynomial spaces than the space of all polynomials up to a given degree.[2]

Bound on linear functionals

This result follows immediately from the above lemma, and it is sometimes also called the Bramble–Hilbert lemma, for example by Ciarlet.[4] It is essentially Theorem 2 from [1].

Lemma Suppose that $\ell$ is a continuous linear functional on $W_p^m(\Omega)$ and $\|\ell\|_{W_p^m(\Omega)'}$ its dual norm. Suppose that $\ell(v)=0$ for all $v\in P_{m-1}$. Then there exists a constant $C$ such that

$$|\ell(u)|\le C\,\|\ell\|_{W_p^m(\Omega)'}\,|u|_{W_p^m(\Omega)}.$$
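
A sketch of how the bound follows from the lemma above: because $\ell$ vanishes on $P_{m-1}$, for the polynomial $v$ provided by the lemma one has $\ell(u)=\ell(u-v)$, hence $|\ell(u)|\le\|\ell\|_{W_p^m(\Omega)'}\,\|u-v\|_{W_p^m(\Omega)}$, and summing the seminorm bounds of the lemma over $k=0,\ldots,m$ shows that $\|u-v\|_{W_p^m(\Omega)}$ is at most a constant (depending on $\Omega$) times $|u|_{W_p^m(\Omega)}$.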

References

  1. J. H. Bramble and S. R. Hilbert. Estimation of linear functionals on Sobolev spaces with application to Fourier transforms and spline interpolation. SIAM J. Numer. Anal., 7:112–124, 1970.
  2. Todd Dupont and Ridgway Scott. Polynomial approximation of functions in Sobolev spaces. Math. Comp., 34(150):441–463, 1980.
  3. Susanne C. Brenner and L. Ridgway Scott. The mathematical theory of finite element methods, volume 15 of Texts in Applied Mathematics. Springer-Verlag, New York, second edition, 2002. ISBN 0-387-95451-1.
  4. Philippe G. Ciarlet. The finite element method for elliptic problems, volume 40 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2002. Reprint of the 1978 original [North-Holland, Amsterdam]. ISBN 0-89871-514-8.