Theorems and definitions in linear algebra

From Wikipedia, the free encyclopedia

This article collects the main theorems and definitions in linear algebra.


Vector spaces[edit]

A vector space (or linear space) V over a field F consists of a set on which two operations (called addition and scalar multiplication, respectively) are defined so that for each pair of elements x, y in V there is a unique element x + y in V, and for each element a in F and each element x in V there is a unique element ax in V, such that the following conditions hold.

  • (VS 1) For all x, y in V, x+y=y+x (commutativity of addition).
  • (VS 2) For all x, y, z in V, (x+y)+z=x+(y+z) (associativity of addition).
  • (VS 3) There exists an element in V denoted by 0 such that x+0=x for each x in V.
  • (VS 4) For each element x in V there exists an element y in V such that x+y=0.
  • (VS 5) For each element x in V, 1x=x.
  • (VS 6) For each pair of element a,b in F and each element x in V, (ab)x=a(bx).
  • (VS 7) For each element a in F and each pair of elements x,y in V, a(x+y)=ax+ay.
  • (VS 8) For each pair of elements a,b in F and each element x in V, (a+b)x=ax+bx.

Subspaces[edit]

A subspace W of a vector space V over a field F is a subset of V that is itself a vector space under the operations of V. Equivalently, W is a nonempty subset of V that is closed under addition and scalar multiplication; that is, for all x, y in W and any c in F, cx+y is in W.
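
The closure condition above can be checked numerically for a concrete subspace. Below is a minimal sketch in Python/NumPy (the subspace W = {(a, b, 0)} of R³ and the sample vectors are made-up examples, not from the source):

```python
import numpy as np

# Take W = {(a, b, 0)} as a subspace of V = R^3 and verify the closure
# condition cx + y in W for sample vectors. This illustrates, not proves,
# the criterion.
def in_W(v):
    return bool(np.isclose(v[2], 0.0))   # membership test for this particular W

x = np.array([1.0, 2.0, 0.0])            # x in W
y = np.array([-3.0, 0.5, 0.0])           # y in W
c = 4.2                                  # arbitrary scalar in F = R

closed = in_W(c * x + y)
print(closed)
```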

Linear combinations[edit]

Linear combination

Systems of linear equations[edit]

Linear dependence[edit]

Linear independence[edit]

Bases[edit]

Dimension[edit]

Linear transformations and matrices[edit]

Change of coordinate matrix
Clique
Coordinate vector relative to a basis
Dimension theorem
Dominance relation
Identity matrix
Identity transformation
Incidence matrix
Inverse of a linear transformation
Inverse of a matrix
Invertible linear transformation
Isomorphic vector spaces
Isomorphism
Kronecker delta
Left-multiplication transformation
Linear operator
Linear transformation
Matrix representing a linear transformation
Nullity of a linear transformation
Null space
Ordered basis
Product of matrices
Projection on a subspace
Projection on the x-axis
Range
Rank of a linear transformation
Reflection about the x-axis
Rotation
Similar matrices
Standard ordered basis for F_n
Standard representation of a vector space with respect to a basis
Zero transformation

Related notions: coefficient of a differential equation, differentiability of a complex function, vector space of functions, differential operator, auxiliary polynomial, power of a complex number, exponential function.

{\color{Blue}~2.1} N(T) and R(T) are subspaces[edit]

Let V and W be vector spaces and T: V→W be linear. Then N(T) and R(T) are subspaces of V and W, respectively.

{\color{Blue}~2.2} R(T)= span of T(basis in V)[edit]

Let V and W be vector spaces, and let T: V→W be linear. If \beta=\{v_1,v_2,\ldots,v_n\} is a basis for V, then

\mathrm{R}(T)=\mathrm{span}(T(\beta))=\mathrm{span}(\{T(v_1),T(v_2),\ldots,T(v_n)\}).

{\color{Blue}~2.3} Dimension theorem[edit]

Let V and W be vector spaces, and let T: V → W be linear. If V is finite-dimensional, then

\mathrm{nullity}(T)+\mathrm{rank}(T)=\dim(V).
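
The dimension theorem can be illustrated numerically for T = L_A. A minimal sketch in Python/NumPy (the matrix is a made-up example whose third row is the sum of the first two, so rank(A) = 2):

```python
import numpy as np

# Dimension theorem for T = L_A : R^5 -> R^3.
A = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 0., 1.],
              [1., 3., 1., 1., 4.]])    # row 3 = row 1 + row 2, so rank 2

rank = np.linalg.matrix_rank(A)
# Count the null-space dimension from the singular values: each vanishing
# singular value, plus each "missing" one, contributes a null-space direction.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10)) + (A.shape[1] - len(s))
print(rank + nullity == A.shape[1])     # nullity(T) + rank(T) = dim(V)
```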

{\color{Blue}~2.4} one-to-one ⇔ N(T) = {0}[edit]

Let V and W be vector spaces, and let T: V→W be linear. Then T is one-to-one if and only if N(T)={0}.

{\color{Blue}~2.5} one-to-one ⇔ onto ⇔ rank(T) = dim(V)[edit]

Let V and W be vector spaces of equal (finite) dimension, and let T: V→W be linear. Then the following are equivalent.

(a) T is one-to-one.
(b) T is onto.
(c) rank(T) = dim(V).

{\color{Blue}~2.6} a basis determines exactly one T with T(v_i)=w_i[edit]

Let V and W be vector spaces over F, and suppose that \{v_1, v_2,\ldots,v_n\} is a basis for V. For w_1, w_2,\ldots,w_n in W, there exists exactly one linear transformation T: V→W such that \mathrm{T}(v_i)=w_i for i=1,2,\ldots,n.
Corollary. Let V and W be vector spaces, and suppose that V has a finite basis {v_1,v_2,\ldots,v_n}. If U, T: V→W are linear and U(v_i)=T(v_i) for i=1,2,\ldots,n, then U=T.

{\color{Blue}~2.7} \mathcal{L}(V,W) is a vector space[edit]

Let V and W be vector spaces over a field F, and let T, U: V→W be linear.

(a) For all a∈F, a\mathrm{T}+\mathrm{U} is linear.
(b) Using the operations of addition and scalar multiplication in the preceding definition, the collection of all linear transformations from V to W is a vector space over F.

{\color{Blue}~2.8} linearity of matrix representation of linear transformation[edit]

Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively, and let T, U: V→W be linear transformations. Then

(a)[T+U]_\beta^\gamma=[T]_\beta^\gamma+[U]_\beta^\gamma and
(b)[aT]_\beta^\gamma=a[T]_\beta^\gamma for all scalars a.

{\color{Blue}~2.9} composition law of linear operators[edit]

Let V, W, and Z be vector spaces over the same field F, and let T: V→W and U: W→Z be linear. Then UT: V→Z is linear.

{\color{Blue}~2.10} law of linear operator[edit]

Let V be a vector space. Let T, U_1, U_2 ∈ \mathcal{L}(V). Then
(a) T(U1+U2)=TU1+TU2 and (U1+U2)T=U1T+U2T
(b) T(U1U2)=(TU1)U2
(c) TI=IT=T
(d) a(U1U2)=(aU1)U2=U1(aU2) for all scalars a.

{\color{Blue}~2.11} [UT]αγ=[U]βγ[T]αβ[edit]

Let V, W, and Z be finite-dimensional vector spaces with ordered bases α, β, and γ, respectively. Let T: V→W and U: W→Z be linear transformations. Then

[UT]_\alpha^\gamma=[U]_\beta^\gamma[T]_\alpha^\beta.

Corollary. Let V be a finite-dimensional vector space with an ordered basis β. Let T,U∈\mathcal{L}(V). Then [UT]β=[U]β[T]β.

{\color{Blue}~2.12} law of matrix[edit]

Let A be an m×n matrix, B and C be n×p matrices, and D and E be q×m matrices. Then

(a) A(B+C)=AB+AC and (D+E)A=DA+EA.
(b) a(AB)=(aA)B=A(aB) for any scalar a.
(c) I_mA=A=AI_n.
(d) If V is an n-dimensional vector space with an ordered basis β, then [I_V]_β=I_n.
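
These matrix laws are easy to spot-check numerically. A minimal sketch in Python/NumPy (the random matrices and the scalar are made-up test data):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 2, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((n, p))
a = 2.5

dist = np.allclose(A @ (B + C), A @ B + A @ C)    # (a) distributivity
scal = np.allclose(a * (A @ B), (a * A) @ B)      # (b) scalars move freely
iden = np.allclose(np.eye(m) @ A, A) and np.allclose(A @ np.eye(n), A)  # (c)
print(dist, scal, iden)
```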

Corollary. Let A be an m×n matrix, B_1,B_2,\ldots,B_k be n×p matrices, C_1,C_2,\ldots,C_k be q×m matrices, and a_1,a_2,\ldots,a_k be scalars. Then

A\Bigg(\sum_{i=1}^k a_iB_i\Bigg)=\sum_{i=1}^k a_iAB_i

and

\Bigg(\sum_{i=1}^k a_iC_i\Bigg)A=\sum_{i=1}^k a_iC_iA.

{\color{Blue}~2.13} law of column multiplication[edit]

Let A be an m×n matrix and B be an n×p matrix. For each j (1\le j\le p) let u_j and v_j denote the jth columns of AB and B, respectively. Then
(a) u_j=Av_j
(b) v_j=Be_j, where e_j is the jth standard vector of F^p.
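
Both column rules can be verified directly. A minimal sketch in Python/NumPy (the matrices and the column index j are made-up examples):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)          # 2x3
B = np.arange(12.0).reshape(3, 4)         # 3x4
j = 2
u_j = (A @ B)[:, j]                       # jth column of AB
v_j = B[:, j]                             # jth column of B
e_j = np.eye(4)[:, j]                     # jth standard basis vector of F^4
ok_a = np.allclose(u_j, A @ v_j)          # (a) u_j = A v_j
ok_b = np.allclose(v_j, B @ e_j)          # (b) v_j = B e_j
print(ok_a, ok_b)
```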

{\color{Blue}~2.14} [T(u)]γ=[T]βγ[u]β[edit]

Let V and W be finite-dimensional vector spaces having ordered bases β and γ, respectively, and let T: V→W be linear. Then, for each u ∈ V, we have

[T(u)]_\gamma=[T]_\beta^\gamma[u]_\beta.

{\color{Blue}~2.15} laws of LA[edit]

Let A be an m×n matrix with entries from F. Then the left-multiplication transformation LA: Fn→Fm is linear. Furthermore, if B is any other m×n matrix (with entries from F) and β and γ are the standard ordered bases for Fn and Fm, respectively, then we have the following properties.
(a) [L_A]_\beta^\gamma=A.
(b) LA=LB if and only if A=B.
(c) LA+B=LA+LB and LaA=aLA for all a∈F.
(d) If T:Fn→Fm is linear, then there exists a unique m×n matrix C such that T=LC. In fact, \mathrm{C}=[T]_\beta^\gamma.
(e) If E is an n×p matrix, then L_{AE}=L_AL_E.
(f) If m=n, then L_{I_n}=I_{F^n}.

{\color{Blue}~2.16} A(BC)=(AB)C[edit]

Let A,B, and C be matrices such that A(BC) is defined. Then A(BC)=(AB)C; that is, matrix multiplication is associative.

{\color{Blue}~2.17} T-1is linear[edit]

Let V and W be vector spaces, and let T:V→W be linear and invertible. Then T−1: W →V is linear.

{\color{Blue}~2.18} [T-1]γβ=([T]βγ)-1[edit]

Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively. Let T:V→W be linear. Then T is invertible if and only if [T]_\beta^\gamma is invertible. Furthermore, [T^{-1}]_\gamma^\beta=([T]_\beta^\gamma)^{-1}

Lemma. Let T be an invertible linear transformation from V to W. Then V is finite-dimensional if and only if W is finite-dimensional. In this case, dim(V)=dim(W).

Corollary 1. Let V be a finite-dimensional vector space with an ordered basis β, and let T:V→V be linear. Then T is invertible if and only if [T]β is invertible. Furthermore, [T−1]β=([T]β)−1.

Corollary 2. Let A be an n×n matrix. Then A is invertible if and only if LA is invertible. Furthermore, (LA)−1=LA−1.

{\color{Blue}~2.19} V is isomorphic to W ⇔ dim(V)=dim(W)[edit]

Let V and W be finite-dimensional vector spaces (over the same field). Then V is isomorphic to W if and only if dim(V)=dim(W).

Corollary. Let V be a vector space over F. Then V is isomorphic to Fn if and only if dim(V)=n.

{\color{Blue}~2.20} \mathcal{L}(V,W) is isomorphic to Mm×n(F)[edit]

Let V and W be finite-dimensional vector spaces over F of dimensions n and m, respectively, and let β and γ be ordered bases for V and W, respectively. Then the function ~\Phi: \mathcal{L}(V,W)→Mm×n(F), defined by ~\Phi(T)=[T]_\beta^\gamma for T∈\mathcal{L}(V,W), is an isomorphism.

Corollary. Let V and W be finite-dimensional vector spaces of dimension n and m, respectively. Then \mathcal{L}(V,W) is finite-dimensional of dimension mn.

{\color{Blue}~2.21} Φβ is an isomorphism[edit]

For any finite-dimensional vector space V with ordered basis β, Φβ is an isomorphism.

{\color{Blue}~2.22} properties of the change of coordinate matrix[edit]

Let β and β' be two ordered bases for a finite-dimensional vector space V, and let Q=[I_V]_{\beta'}^\beta. Then
(a) Q is invertible.
(b) For any v\in V, ~[v]_\beta=Q[v]_{\beta'}.

{\color{Blue}~2.23} [T]β'=Q-1[T]βQ[edit]

Let T be a linear operator on a finite-dimensional vector space V, and let β and β' be two ordered bases for V. Suppose that Q is the change of coordinate matrix that changes β'-coordinates into β-coordinates. Then

~[T]_{\beta'}=Q^{-1}[T]_\beta Q.

Corollary. Let A∈Mn×n(F), and let γ be an ordered basis for Fn. Then [LA]γ=Q−1AQ, where Q is the n×n matrix whose jth column is the jth vector of γ.
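
A quick numerical check of the change-of-basis formula: since [T]β' = Q⁻¹[T]βQ, the two matrices are similar and must share trace and determinant. A minimal sketch in Python/NumPy (A and the basis encoded by Q are made-up examples):

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])
# Assumed ordered basis gamma for R^2; Q's jth column is the jth vector of gamma.
Q = np.array([[1., 1.],
              [1., 2.]])
A_gamma = np.linalg.inv(Q) @ A @ Q       # matrix of L_A relative to gamma
# Similar matrices share trace and determinant.
same_trace = np.isclose(np.trace(A_gamma), np.trace(A))
same_det = np.isclose(np.linalg.det(A_gamma), np.linalg.det(A))
print(same_trace, same_det)
```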

{\color{Blue}~2.24}[edit]

{\color{Blue}~2.25}[edit]

{\color{Blue}~2.26}[edit]

{\color{Blue}~2.27} solutions of homogeneous linear ODEs have derivatives of all orders[edit]

Any solution to a homogeneous linear differential equation with constant coefficients has derivatives of all orders; that is, if x is a solution to such an equation, then x^{(k)} exists for every positive integer k.

{\color{Blue}~2.28} {solutions}= N(p(D))[edit]

The set of all solutions to a homogeneous linear differential equation with constant coefficients coincides with the null space of p(D), where p(t) is the auxiliary polynomial of the equation.

Corollary. The set of all solutions to a homogeneous linear differential equation with constant coefficients is a subspace of \mathrm{C}^\infty.

{\color{Blue}~2.29} derivative of exponential function[edit]

For any exponential function f(t)=e^{ct}, f'(t)=ce^{ct}.

{\color{Blue}~2.30} \{e^{-a_0t}\} is a basis of N(D+a_0I)[edit]

The solution space of the differential equation

y'+a_0y=0

is of dimension 1 and has \{e^{-a_0t}\} as a basis.

Corollary. For any complex number c, the null space of the differential operator D-cI has {e^{ct}} as a basis.

{\color{Blue}~2.31} e^{ct} is a solution[edit]

Let p(t) be the auxiliary polynomial for a homogeneous linear differential equation with constant coefficients. For any complex number c, if c is a zero of p(t), then e^{ct} is a solution to the differential equation.

{\color{Blue}~2.32} dim(N(p(D)))=n[edit]

For any differential operator p(D) of order n, the null space of p(D) is an n-dimensional subspace of C^\infty.

Lemma 1. The differential operator D-cI: C^\infty → C^\infty is onto for any complex number c.

Lemma 2. Let V be a vector space, and suppose that T and U are linear operators on V such that U is onto and the null spaces of T and U are finite-dimensional. Then the null space of TU is finite-dimensional, and

dim(N(TU))=dim(N(T))+dim(N(U)).

Corollary. The solution space of any nth-order homogeneous linear differential equation with constant coefficients is an n-dimensional subspace of C^\infty.

{\color{Blue}~2.33} \{e^{c_it}\} is linearly independent for distinct c_i[edit]

Given n distinct complex numbers c_1, c_2,\ldots,c_n, the set of exponential functions \{e^{c_1t},e^{c_2t},\ldots,e^{c_nt}\} is linearly independent.
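
One way to see this concretely: the Wronskian of e^{c_1 t},…,e^{c_n t} at t = 0 is the Vandermonde determinant of the c_i, which is nonzero exactly when the c_i are distinct. A minimal sketch in Python/NumPy (the particular values of c are made-up examples):

```python
import numpy as np

# Wronskian matrix at t = 0: the k-th derivative of e^{c_i t} at 0 is c_i^k,
# so row k is (c_1^k, ..., c_n^k) -- a Vandermonde matrix.
c = np.array([1.0, 2.0, -0.5, 3.0])      # distinct exponents
W0 = np.vander(c, increasing=True).T
det = np.linalg.det(W0)
print(abs(det) > 1e-9)                   # nonzero Wronskian => independence
```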

Corollary. For any nth-order homogeneous linear differential equation with constant coefficients, if the auxiliary polynomial has n distinct zeros c_1, c_2, \ldots, c_n, then \{e^{c_1t},e^{c_2t},\ldots,e^{c_nt}\} is a basis for the solution space of the differential equation.

Lemma. For a given complex number c and positive integer n, suppose that (t-c)^n is the auxiliary polynomial of a homogeneous linear differential equation with constant coefficients. Then the set

\beta=\{e^{ct},te^{ct},\ldots,t^{n-1}e^{ct}\}

is a basis for the solution space of the equation.

{\color{Blue}~2.34} general solution of homogeneous linear differential equation[edit]

Given a homogeneous linear differential equation with constant coefficients and auxiliary polynomial

(t-c_1)^{n_1}(t-c_2)^{n_2}\cdots(t-c_k)^{n_k},

where n_1, n_2,\ldots,n_k are positive integers and c_1, c_2, \ldots, c_k are distinct complex numbers, the following set is a basis for the solution space of the equation:

\{e^{c_1t}, te^{c_1t},\ldots,t^{n_1-1}e^{c_1t},\ldots,e^{c_kt},te^{c_kt},\ldots,t^{n_k-1}e^{c_kt}\}.

Elementary matrix operations and systems of linear equations[edit]

Elementary matrix operations[edit]

1. Any two rows of a matrix can be interchanged.
2. Any row can be multiplied by a nonzero scalar.
3. Any row can be replaced by the sum of that row and a scalar multiple of another row.

Elementary matrix[edit]

Rank of a matrix[edit]

The rank of a matrix A is the number of pivot columns in the reduced row echelon form of A.
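
In practice the rank is usually computed numerically rather than by row reduction. A minimal sketch in Python/NumPy (the matrix is a made-up example whose second row is twice the first):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # dependent on the first row
              [1., 0., 1.]])
print(np.linalg.matrix_rank(A))   # two pivot columns survive row reduction
```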

Matrix inverses[edit]

System of linear equations[edit]

Determinants[edit]

If

A = \begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix}

is a 2×2 matrix with entries from a field F, then we define the determinant of A, denoted det(A) or |A|, to be the scalar ad-bc.
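
The definition agrees with a library determinant routine. A minimal sketch in Python/NumPy (the entries a, b, c, d are made-up values):

```python
import numpy as np

a, b, c, d = 3.0, 1.0, 4.0, 2.0
A = np.array([[a, b],
              [c, d]])
manual = a * d - b * c                    # definition: det(A) = ad - bc
print(np.isclose(np.linalg.det(A), manual))
```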


*Theorem 1: linear function for a single row.
*Theorem 2: nonzero determinant ⇔ invertible matrix

Theorem 1: The function det: M2×2(F) → F is a linear function of each row of a 2×2 matrix when the other row is held fixed. That is, if u,v, and w are in F² and k is a scalar, then

\det\begin{pmatrix}
u + kv\\
w\\
\end{pmatrix}
=\det\begin{pmatrix}
u\\
w\\
\end{pmatrix}
+ k\det\begin{pmatrix}
v\\
w\\
\end{pmatrix}

and

\det\begin{pmatrix}
w\\
u + kv\\
\end{pmatrix}
=\det\begin{pmatrix}
w\\
u\\
\end{pmatrix}
+ k\det\begin{pmatrix}
w\\
v\\
\end{pmatrix}

Theorem 2: Let A \in M2×2(F). Then the determinant of A is nonzero if and only if A is invertible. Moreover, if A is invertible, then

A^{-1}=\frac{1}{\det(A)}\begin{pmatrix}
A_{22}&-A_{12}\\
-A_{21}&A_{11}\\
\end{pmatrix}
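
The adjugate formula above (swap the diagonal entries, negate the off-diagonal ones, divide by the determinant) can be checked against a library inverse. A minimal sketch in Python/NumPy (the matrix is a made-up invertible example):

```python
import numpy as np

A = np.array([[3., 1.],
              [4., 2.]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # det = 2, nonzero
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])          # swap diagonal, negate off-diagonal
A_inv = adj / det
print(np.allclose(A_inv, np.linalg.inv(A)))
```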

Diagonalization[edit]

Characteristic polynomial of a linear operator/matrix

{\color{Blue}~5.1} diagonalizable⇔basis of eigenvectors[edit]

A linear operator T on a finite-dimensional vector space V is diagonalizable if and only if there exists an ordered basis β for V consisting of eigenvectors of T. Furthermore, if T is diagonalizable, \beta=\{v_1,v_2,\ldots,v_n\} is an ordered basis of eigenvectors of T, and D = [T]_β, then D is a diagonal matrix and D_{jj} is the eigenvalue corresponding to v_j for 1\le j \le n.
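
Collecting a basis of eigenvectors as the columns of P makes P⁻¹AP diagonal. A minimal sketch in Python/NumPy (the matrix is a made-up example with distinct eigenvalues 5 and 2, hence diagonalizable):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
evals, evecs = np.linalg.eig(A)          # columns of evecs are eigenvectors
P, D = evecs, np.diag(evals)
# A basis of eigenvectors diagonalizes A: P^{-1} A P = D.
print(np.allclose(np.linalg.inv(P) @ A @ P, D))
```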

{\color{Blue}~5.2} eigenvalue⇔det(A−λIn)=0[edit]

Let A∈Mn×n(F). Then a scalar λ is an eigenvalue of A if and only if det(A−λIn)=0.

{\color{Blue}~5.3} characteristic polynomial[edit]

Let A∈Mn×n(F).
(a) The characteristic polynomial of A is a polynomial of degree n with leading coefficient (-1)^n.
(b) A has at most n distinct eigenvalues.

{\color{Blue}~5.4} υ to λ⇔υ∈N(T-λI)[edit]

Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T.
A vector υ∈V is an eigenvector of T corresponding to λ if and only if υ≠0 and υ∈N(T-λI).

{\color{Blue}~5.5} vi to λi⇔vi is linearly independent[edit]

Let T be a linear operator on a vector space V, and let \lambda_1,\lambda_2,\ldots,\lambda_k be distinct eigenvalues of T. If v_1,v_2,\ldots,v_k are eigenvectors of T such that \lambda_i corresponds to v_i (1\le i\le k), then \{v_1,v_2,\ldots,v_k\} is linearly independent.

{\color{Blue}~5.6} characteristic polynomial splits[edit]

The characteristic polynomial of any diagonalizable linear operator splits.

{\color{Blue}~5.7} 1 ≤ dim(Eλ) ≤ m[edit]

Let T be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of T having multiplicity m. Then 1 \le\dim(E_{\lambda})\le m.

{\color{Blue}~5.8} S = S1 ∪ S2 ∪ ... ∪ Sk is linearly independent[edit]

Let T be a linear operator on a vector space V, and let \lambda_1,\lambda_2,\ldots,\lambda_k, be distinct eigenvalues of T. For each i=1,2,\ldots,k, let S_i be a finite linearly independent subset of the eigenspace E_{\lambda_i}. Then S=S_1\cup S_2 \cup\cdots\cup S_k is a linearly independent subset of V.

{\color{Blue}~5.9} criteria for T to be diagonalizable[edit]

Let T be a linear operator on a finite-dimensional vector space V such that the characteristic polynomial of T splits. Let \lambda_1,\lambda_2,\ldots,\lambda_k be the distinct eigenvalues of T. Then
(a) T is diagonalizable if and only if the multiplicity of \lambda_i is equal to \dim(E_{\lambda_i}) for all i.
(b) If T is diagonalizable and \beta_i is an ordered basis for E_{\lambda_i} for each i, then \beta=\beta_1\cup \beta_2\cup\cdots\cup\beta_k is an ordered basis for V consisting of eigenvectors of T.

Test for diagonalization

Inner product spaces[edit]

Inner product, standard inner product on Fn, conjugate transpose, adjoint, Frobenius inner product, complex/real inner product space, norm, length, conjugate linear, orthogonal, perpendicular, unit vector, orthonormal, normalization.

{\color{Blue}~6.1} properties of inner products[edit]

Let V be an inner product space. Then for x,y,z\in V and c \in F, the following statements are true.
(a) \langle x,y+z\rangle=\langle x,y\rangle+\langle x,z\rangle.
(b) \langle x,cy\rangle=\bar{c}\langle x,y\rangle.
(c) \langle x,\mathit{0}\rangle=\langle\mathit{0},x\rangle=0.
(d) \langle x,x\rangle=0 if and only if x=\mathit{0}.
(e) If \langle x,y\rangle=\langle x,z\rangle for all x\in V, then y=z.

{\color{Blue}~6.2} law of norm[edit]

Let V be an inner product space over F. Then for all x,y\in V and c\in F, the following statements are true.
(a) \|cx\|=|c|\cdot\|x\|.
(b) \|x\|=0 if and only if x=0. In any case, \|x\|\ge0.
(c) (Cauchy-Schwarz Inequality) |\langle x,y\rangle|\le\|x\|\cdot\|y\|.
(d)(Triangle Inequality)\|x+y\|\le\|x\|+\|y\|.
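
Both inequalities hold for the standard inner product on Rⁿ and can be spot-checked numerically. A minimal sketch in Python/NumPy (the random vectors are made-up test data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
cs = abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y)       # Cauchy-Schwarz
tri = np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)  # triangle
print(cs, tri)
```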

orthonormal basis, Gram–Schmidt process, Fourier coefficients, orthogonal complement, orthogonal projection

{\color{Blue}~6.3} span of orthogonal subset[edit]

Let V be an inner product space and S=\{v_1,v_2,\ldots,v_k\} be an orthogonal subset of V consisting of nonzero vectors. If y∈span(S), then

y=\sum_{i=1}^k{\langle y,v_i \rangle \over \|v_i\|^2}v_i

{\color{Blue}~6.4} Gram-Schmidt process[edit]

Let V be an inner product space and S=\{w_1,w_2,\ldots,w_n\} be a linearly independent subset of V. Define S'=\{v_1,v_2,\ldots,v_n\}, where v_1=w_1 and

v_k=w_k-\sum_{j=1}^{k-1}{\langle w_k, v_j\rangle\over\|v_j\|^2}v_j \quad \text{for } 2\le k\le n.

Then S' is an orthogonal set of nonzero vectors such that span(S')=span(S).
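
The recursion above translates directly into code. A minimal sketch of the Gram-Schmidt process in Python/NumPy for the real inner product (the three input vectors are made-up, linearly independent examples):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a linearly independent list (no normalization),
    following v_k = w_k - sum_j <w_k, v_j>/||v_j||^2 * v_j."""
    ortho = []
    for w in vectors:
        v = w - sum((w @ v_j) / (v_j @ v_j) * v_j for v_j in ortho)
        ortho.append(v)
    return ortho

S = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])]
Sp = gram_schmidt(S)
# pairwise orthogonality of the output
orth = all(abs(Sp[i] @ Sp[j]) < 1e-10 for i in range(3) for j in range(i + 1, 3))
print(orth)
```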

{\color{Blue}~6.5} orthonormal basis[edit]

Let V be a nonzero finite-dimensional inner product space. Then V has an orthonormal basis β. Furthermore, if β =\{v_1,v_2,\ldots,v_n\} and x∈V, then

x=\sum_{i=1}^n\langle x,v_i\rangle v_i.

Corollary. Let V be a finite-dimensional inner product space with an orthonormal basis β =\{v_1,v_2,\ldots,v_n\}. Let T be a linear operator on V, and let A=[T]β. Then for any i and j, A_{ij}=\langle T(v_j), v_i\rangle.
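
The expansion x = Σ⟨x, v_i⟩v_i can be verified for any orthonormal basis. A minimal sketch in Python/NumPy (an orthonormal basis is manufactured here, as an assumption, from the QR factorization of a random matrix):

```python
import numpy as np

# Build an orthonormal basis of R^3 via QR of a random invertible matrix.
rng = np.random.default_rng(2)
Qm, _ = np.linalg.qr(rng.standard_normal((3, 3)))
basis = [Qm[:, i] for i in range(3)]

x = np.array([1.0, -2.0, 0.5])
expansion = sum(np.dot(x, v) * v for v in basis)   # x = sum <x, v_i> v_i
print(np.allclose(expansion, x))
```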

{\color{Blue}~6.6} orthogonal decomposition via an orthonormal basis of W[edit]

Let W be a finite-dimensional subspace of an inner product space V, and let y∈V. Then there exist unique vectors u∈W and z∈W^\perp such that y=u+z. Furthermore, if \{v_1,v_2,\ldots,v_k\} is an orthonormal basis for W, then

u=\sum_{i=1}^k\langle y,v_i\rangle v_i.

Corollary. In the notation of Theorem 6.6, the vector u is the unique vector in W that is "closest" to y; that is, for any x∈W, \|y-x\|\ge\|y-u\|, and this inequality is an equality if and only if x=u.
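
The decomposition y = u + z and the "closest vector" property can be illustrated concretely. A minimal sketch in Python/NumPy (W = the xy-plane in R³ with its standard orthonormal basis, and y a made-up vector):

```python
import numpy as np

# W = span{v1, v2} with v1, v2 orthonormal; project y onto W.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
y = np.array([2.0, -1.0, 3.0])

u = np.dot(y, v1) * v1 + np.dot(y, v2) * v2   # u = sum <y, v_i> v_i
z = y - u                                      # z lies in W-perp
in_perp = np.isclose(np.dot(z, v1), 0) and np.isclose(np.dot(z, v2), 0)
# u is closer to y than any other point of W, e.g. x = u + v1:
closer = np.linalg.norm(y - (u + v1)) >= np.linalg.norm(y - u)
print(in_perp, closer)
```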

{\color{Blue}~6.7} properties of orthonormal set[edit]

Suppose that S=\{v_1,v_2,\ldots,v_k\} is an orthonormal set in an n-dimensional inner product space V. Then
(a) S can be extended to an orthonormal basis \{v_1, v_2, \ldots,v_k,v_{k+1},\ldots,v_n\} for V.
(b) If W=span(S), then S_1=\{v_{k+1},v_{k+2},\ldots,v_n\} is an orthonormal basis for W^\perp (using the preceding notation).
(c) If W is any subspace of V, then dim(V)=dim(W)+dim(W^\perp).

Least squares approximation, Minimal solutions to systems of linear equations

{\color{Blue}~6.8} representation of a linear functional by an inner product[edit]

Let V be a finite-dimensional inner product space over F, and let g:V→F be a linear transformation. Then there exists a unique vector y∈ V such that \rm{g}(x)=\langle x, y\rangle for all x∈ V.

{\color{Blue}~6.9} definition of T*[edit]

Let V be a finite-dimensional inner product space, and let T be a linear operator on V. Then there exists a unique function T*:V→V such that \langle\rm{T}(x),y\rangle=\langle x, \rm{T}^*(y)\rangle for all x,y ∈ V. Furthermore, T* is linear.

{\color{Blue}~6.10} [T*]β=[T]*β[edit]

Let V be a finite-dimensional inner product space, and let β be an orthonormal basis for V. If T is a linear operator on V, then

[T^*]_\beta=[T]^*_\beta.

{\color{Blue}~6.11} properties of T*[edit]

Let V be an inner product space, and let T and U be linear operators on V. Then
(a) (T+U)*=T*+U*;
(b) (cT)*=\bar c T* for any c∈ F;
(c) (TU)*=U*T*;
(d) T**=T;
(e) I*=I.

Corollary. Let A and B be n×n matrices. Then
(a) (A+B)*=A*+B*;
(b) (cA)*=\bar c A* for any c∈ F;
(c) (AB)*=B*A*;
(d) A**=A;
(e) I*=I.

{\color{Blue}~6.12} Least squares approximation[edit]

Let A ∈ Mm×n(F) and y∈Fm. Then there exists x_0 ∈ Fn such that (A*A)x_0=A*y and \|Ax_0-y\|\le\|Ax-y\| for all x∈ Fn.

Lemma 1. Let A ∈ Mm×n(F), x∈Fn, and y∈Fm. Then

\langle Ax, y\rangle _m =\langle x, A*y\rangle _n

Lemma 2. Let A ∈ Mm×n(F). Then rank(A*A)=rank(A).

Corollary (of Lemma 2). If A is an m×n matrix such that rank(A)=n, then A*A is invertible.
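
Solving the normal equations A*Ax₀ = A*y gives the same answer as a library least-squares solver. A minimal sketch in Python/NumPy (the overdetermined system, 4 equations in 2 unknowns, is a made-up example with rank(A) = 2, so A*A is invertible):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.],
              [1., 4.]])
y = np.array([6., 5., 7., 10.])

x0 = np.linalg.solve(A.T @ A, A.T @ y)        # normal equations (A real, so A* = A^T)
x_np, *_ = np.linalg.lstsq(A, y, rcond=None)  # library least-squares solution
print(np.allclose(x0, x_np))
```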

{\color{Blue}~6.13} Minimal solutions to systems of linear equations[edit]

Let A ∈ Mm×n(F) and b∈ Fm. Suppose that Ax=b is consistent. Then the following statements are true.
(a) There exists exactly one minimal solution s of Ax=b, and s∈R(LA*).
(b) The vector s is the only solution to Ax=b that lies in R(LA*); that is, if u satisfies (AA*)u=b, then s=A*u.
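
The recipe in (b), solve (AA*)u = b and set s = A*u, matches the minimal-norm solution returned by a pseudoinverse. A minimal sketch in Python/NumPy (the underdetermined consistent system is a made-up example):

```python
import numpy as np

A = np.array([[1., 2., 2.]])              # 1x3 matrix, rank 1; Ax = b is consistent
b = np.array([3.])

s = np.linalg.pinv(A) @ b                 # minimal-norm solution of Ax = b
u = np.linalg.solve(A @ A.T, b)           # solve (A A*) u = b  (A real, so A* = A^T)
matches = np.allclose(s, A.T @ u)         # theorem: s = A* u
consistent = np.allclose(A @ s, b)
print(consistent, matches)
```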

Canonical forms[edit]

References[edit]