
Operator norm: Difference between revisions

From Wikipedia, the free encyclopedia

Revision as of 11:31, 15 December 2003

In functional analysis, a linear transformation L between normed vector spaces is said to be bounded, or to be a bounded linear operator, if the ratio of the norms of L(v) and v is bounded above, over all non-zero vectors v.
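
Explicitly, boundedness asks for a single constant that controls the ratio for every vector:

||L(v)|| ≤ M·||v||

for all v in V and some fixed M ≥ 0; the smallest such M will be the operator norm defined below.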

It is simple to prove that this condition on L is equivalent to continuity, for the topologies induced by the norms; by linearity, continuity at the single point 0 already implies boundedness.

In the case of a matrix A acting as a linear transformation, from R^m to R^n, or from C^m to C^n, one can prove directly that A must be bounded. In fact the function

f(v) = ||A(v)||

is continuous as a function of v, for any norm ||.||; and the set of v with ||v|| = 1 is compact, being a closed, bounded subset of a finite-dimensional space. The matrix norm of A is by definition the supremum of f over this set. In this case the supremum is attained somewhere, again by the compactness of the domain.
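
As a concrete check (an illustrative sketch, not part of the article, assuming Python with numpy and the Euclidean norm on both domain and range): sampling unit vectors gives a lower estimate of the supremum of f, which can be compared with the exact operator norm, here the largest singular value of A.

import numpy as np

# Numerical sketch (assumed setup): estimate the operator norm of A,
# with the Euclidean norm on both sides, by evaluating f(v) = ||A v||
# over many sampled unit vectors, and compare with the exact value.
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

vs = rng.normal(size=(100000, 2))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)  # normalise so ||v|| = 1
f = np.linalg.norm(vs @ A.T, axis=1)             # f(v) = ||A v|| for each sample

print("sampled supremum of f: ", f.max())        # approaches the exact value
print("largest singular value:", np.linalg.norm(A, 2))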

In general the operator norm of a bounded linear transformation L from V to W, where V and W are both normed real vector spaces (or both normed complex vector spaces), is defined as the supremum of ||L(v)|| taken over all v in V of norm 1. This definition uses the property ||c·v|| = |c|·||v||, where c is a scalar, to restrict attention to v with ||v|| = 1. Geometrically, we need (for real scalars) to look at only one vector on each ray out from the origin 0.
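
In symbols, and consistently with the matrix case above,

||L|| = sup { ||L(v)|| : v in V, ||v|| = 1 } = sup { ||L(v)|| / ||v|| : v in V, v ≠ 0 }.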

Note that there are two different norms here: that on V and that on W. Even if V = W we might wish to take distinct norms on it. In fact, given two norms ||.|| and |||.||| on V, the identity operator, considered as a map from V with norm ||.|| to V with norm |||.|||, will have an operator norm only if we can say

|||v||| ≤ C·||v||

for some absolute constant C, for all v. When V is finite-dimensional, we can be sure of this: for example, in the case of two dimensions, the conditions ||v|| = 1 and |||v||| = 1 may define a rectangle and an ellipse respectively, centred at 0. Whatever their proportions and orientations, we can magnify the rectangle so that the ellipse fits inside the enlarged rectangle, and vice versa.
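
For instance (a minimal numerical sketch, assuming Python with numpy; the maximum norm, whose unit ball is a square, stands in for ||.||, and the Euclidean norm for |||.|||), the sampled ratios |||v|||/||v|| stay below the fixed constant sqrt(2):

import numpy as np

# Illustration (assumed setup): in R^2 take ||.|| to be the maximum norm
# (unit ball a square) and |||.||| the Euclidean norm (unit ball a disc),
# and estimate the best constant C with |||v||| <= C ||v|| over random
# directions.
rng = np.random.default_rng(1)
vs = rng.normal(size=(100000, 2))

ratios = np.linalg.norm(vs, 2, axis=1) / np.linalg.norm(vs, np.inf, axis=1)
print("largest sampled ratio:", ratios.max())  # tends to sqrt(2) = 1.41421...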

This is, however, a phenomenon of finite dimensions: there, all norms turn out to be equivalent, in the sense that they stay within constant multiples of each other, and from a topological point of view they give the same open sets. All of this fails for infinite-dimensional spaces. This can be seen, for example, by considering the differentiation operator D, as applied to trigonometric polynomials. We can take the root mean square as norm: since D(e^{inx}) = in·e^{inx}, the norms of D restricted to the finite-dimensional subspaces of the Hilbert space grow beyond any bound. Therefore an operator as fundamental as D can fail to have an operator norm.
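
To see the growth numerically (an illustrative sketch, assuming Python with numpy, and using the real functions sin(nx) in place of e^{inx}): the ratio of root-mean-square norms ||D f_n|| / ||f_n|| equals n, so no single constant can bound it.

import numpy as np

# Illustration (assumed setup): for f_n(x) = sin(n x) the differentiation
# operator gives D f_n = n cos(n x), so the ratio of root-mean-square norms
# ||D f_n|| / ||f_n|| is exactly n.
x = np.linspace(0.0, 2.0 * np.pi, 200001)

def rms(values):
    return np.sqrt(np.mean(values ** 2))

for n in [1, 10, 100]:
    f = np.sin(n * x)
    Df = n * np.cos(n * x)        # exact derivative of sin(n x)
    print(n, rms(Df) / rms(f))    # prints n, up to sampling error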

A basic theorem applies the Baire category theorem to show that a linear operator between Banach spaces which is defined everywhere and has a closed graph must be bounded (this is the closed graph theorem). That is, in the example just given, D cannot be defined on all square-integrable Fourier series; and indeed we know that such series can represent continuous but nowhere differentiable functions. The intuition is that if L magnifies the norms of some vectors by as large a factor as we choose, we should be able to condense singularities, that is, choose a vector v that sums up others for which it would be contradictory for ||L(v)|| to be finite, showing that the domain of L cannot be the whole of V.