Rank–nullity theorem


The rank–nullity theorem is a fundamental theorem in linear algebra which relates the dimensions of a linear map's kernel and image to the dimension of its domain.

Stating the theorem

Let $V$, $W$ be vector spaces, of which $V$ must be finite-dimensional. Let $T \colon V \to W$ be a linear map. Then it holds true that[1]

$$\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim V,$$

where

$$\operatorname{rank}(T) := \dim \operatorname{im}(T) \qquad \text{and} \qquad \operatorname{nullity}(T) := \dim \ker(T).$$
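
For instance (an illustrative example, not drawn from the cited source), consider the projection $T \colon \mathbb{R}^3 \to \mathbb{R}^2$, $T(x, y, z) = (x, y)$: its image is all of $\mathbb{R}^2$ and its kernel is the $z$-axis, so $\operatorname{rank}(T) + \operatorname{nullity}(T) = 2 + 1 = 3 = \dim \mathbb{R}^3$.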

Via the splitting lemma, this theorem can be refined into a statement about an isomorphism of spaces, not just of dimensions.

Matrices

Since $\operatorname{Mat}_{m \times n}(\mathbb{F}) \cong \operatorname{Hom}(\mathbb{F}^n, \mathbb{F}^m)$,[2] matrices immediately come to mind when discussing linear maps. In the case of an $m \times n$ matrix, the dimension of the domain is $n$, the number of columns of the matrix. Thus the rank–nullity theorem for a given matrix $M \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ immediately becomes

$$\operatorname{rank}(M) + \operatorname{nullity}(M) = n.$$
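
As a quick numerical sanity check (a minimal sketch; the example matrix and the NumPy/SciPy routines below are our own illustrative choices, not part of the cited material), one can compute the rank and the nullity independently and confirm that they sum to the number of columns:

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])         # rank 2, since row3 = 2*row2 - row1

    rank = np.linalg.matrix_rank(A)      # dim im(A), estimated via the SVD
    nullity = null_space(A).shape[1]     # dim ker(A), computed independently
    assert rank + nullity == A.shape[1]  # 2 + 1 == 3, the number of columns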

Proofs

Here we provide two proofs. The first[3] operates in the general case, using linear maps. The second proof[4] looks at the homogeneous system $Ax = 0$ for an $m \times n$ matrix $A$ of rank $r$ and shows explicitly that there exists a set of $n - r$ linearly independent solutions that span the kernel of $A$.

The theorem holds for any linear map with a finite-dimensional domain but a potentially infinite-dimensional codomain. This means that matrices cover only a subset of the linear maps for which the theorem is true, which makes the first proof the more general one. Of course, when the codomain is also finite-dimensional, the two proofs are equivalent.

First proof

Let $V$, $W$ be vector spaces over some field $\mathbb{F}$, and let $T$ be defined as in the statement of the theorem with $\dim V = n$.

As $\ker T \subseteq V$ is a subspace, there exists a basis for it. Suppose $\dim \ker T = k$ and let

$$\mathcal{K} := \{v_1, \ldots, v_k\} \subseteq \ker T$$

be such a basis.

We may now, by the Steinitz exchange lemma, extend $\mathcal{K}$ with $n - k$ linearly independent vectors $w_1, \ldots, w_{n-k}$ to form a full basis of $V$.

Let

$$\mathcal{S} := \{w_1, \ldots, w_{n-k}\} \subseteq V \setminus \ker T$$

such that

$$\mathcal{B} := \mathcal{K} \cup \mathcal{S} = \{v_1, \ldots, v_k, w_1, \ldots, w_{n-k}\} \subseteq V$$

is a basis for $V$.

From this, we know that

$$\operatorname{im} T = \operatorname{span} T(\mathcal{B}) = \operatorname{span} \{T(v_1), \ldots, T(v_k), T(w_1), \ldots, T(w_{n-k})\} = \operatorname{span} \{T(w_1), \ldots, T(w_{n-k})\} = \operatorname{span} T(\mathcal{S}),$$

where the middle step uses that $T(v_i) = 0_W$ for every $v_i \in \ker T$.

We now claim that $T(\mathcal{S})$ is a basis for $\operatorname{im} T$. By the above, $T(\mathcal{S})$ is generating; it remains to be shown that it is also linearly independent to conclude that it is a basis.

Let

$$\sum_{j=1}^{n-k} \alpha_j T(w_j) = 0_W$$

for some $\alpha_j \in \mathbb{F}$. Thus, owing to the linearity of $T$, it follows that

$$T\left(\sum_{j=1}^{n-k} \alpha_j w_j\right) = 0_W \implies \sum_{j=1}^{n-k} \alpha_j w_j \in \ker T = \operatorname{span} \mathcal{K}.$$

Unless all $\alpha_j$ are equal to zero, this expresses a nontrivial linear dependence among the elements of $\mathcal{B}$, a contradiction to $\mathcal{B}$ being a basis. This shows that $T(\mathcal{S})$ is linearly independent, and more specifically that it is a basis for $\operatorname{im} T$.

To summarise, we have $\mathcal{K}$, a basis for $\ker T$, and $T(\mathcal{S})$, a basis for $\operatorname{im} T$.

Finally we may state that

$$\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim \operatorname{im} T + \dim \ker T = |T(\mathcal{S})| + |\mathcal{K}| = (n - k) + k = n = \dim V.$$

This concludes our proof.
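
The bookkeeping of this proof can be traced computationally. The following sketch (the example matrix and the use of SymPy's `nullspace`, `hstack` and `rank` are our own illustrative choices, not part of the cited proof) finds a kernel basis, extends it to a basis of the domain, and checks that the images of the extension vectors form a basis of the image:

    from sympy import Matrix

    T = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])     # an example map F^4 -> F^3 of rank 2

    K = T.nullspace()              # a basis of ker T (here k = 2)
    k, n = len(K), T.cols

    # Extend the kernel basis to a basis of F^4, exchanging in standard
    # basis vectors whenever they enlarge the span (Steinitz exchange):
    basis = list(K)
    for j in range(n):
        e = Matrix.eye(n).col(j)
        if Matrix.hstack(*basis, e).rank() > len(basis):
            basis.append(e)

    S = basis[k:]                  # the n - k extension vectors
    images = Matrix.hstack(*[T * w for w in S])
    assert images.rank() == n - k == T.rank()   # T(S) is a basis of im T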

Second proof

Let $A$ be an $m \times n$ matrix with $r$ linearly independent columns (i.e. $\operatorname{rank}(A) = r$). We will show that:

  1. There exists a set of $n - r$ linearly independent solutions to the homogeneous system $Ax = 0$.
  2. Every other solution is a linear combination of these $n - r$ solutions.

To do this, we will produce an $n \times (n - r)$ matrix $X$ whose columns form a basis of the null space of $A$.

Without loss of generality, assume that the first $r$ columns of $A$ are linearly independent. So, we can write

$$A = [A_1 \;\; A_2],$$

where $A_1 \in \operatorname{Mat}_{m \times r}(\mathbb{F})$ consists of $r$ linearly independent column vectors and $A_2 \in \operatorname{Mat}_{m \times (n-r)}(\mathbb{F})$, each of whose $n - r$ columns is a linear combination of the columns of $A_1$.

This means that $A_2 = A_1 B$ for some $B \in \operatorname{Mat}_{r \times (n-r)}(\mathbb{F})$ (see rank factorization) and, hence,

$$A = [A_1 \;\; A_1 B].$$

Let

$$X = \begin{bmatrix} -B \\ I_{n-r} \end{bmatrix},$$

where $I_{n-r}$ is the $(n-r) \times (n-r)$ identity matrix. We note that the $n \times (n-r)$ matrix $X$ satisfies

$$A X = [A_1 \;\; A_1 B] \begin{bmatrix} -B \\ I_{n-r} \end{bmatrix} = -A_1 B + A_1 B = 0_{m \times (n-r)}.$$

Therefore, each of the $n - r$ columns of $X$ is a particular solution of $Ax = 0$.

Furthermore, the columns of $X$ are linearly independent because $Xu = 0_{\mathbb{F}^n}$ implies $u = 0_{\mathbb{F}^{n-r}}$ for any $u \in \mathbb{F}^{n-r}$:

$$Xu = 0_{\mathbb{F}^n} \implies \begin{bmatrix} -Bu \\ u \end{bmatrix} = 0_{\mathbb{F}^n} \implies u = 0_{\mathbb{F}^{n-r}}.$$

Therefore, the column vectors of $X$ constitute a set of $n - r$ linearly independent solutions for $Ax = 0$.

We next prove that any solution of $Ax = 0$ must be a linear combination of the columns of $X$.

For this, let

$$u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \in \mathbb{F}^n,$$

with $u_1 \in \mathbb{F}^r$ and $u_2 \in \mathbb{F}^{n-r}$, be any vector such that $Au = 0$. Note that since the columns of $A_1$ are linearly independent, $A_1 x = 0$ implies $x = 0$.

Therefore,

$$Au = 0 \implies A_1 u_1 + A_1 B u_2 = 0 \implies A_1 (u_1 + B u_2) = 0 \implies u_1 + B u_2 = 0 \implies u_1 = -B u_2,$$

and hence

$$u = \begin{bmatrix} -B u_2 \\ u_2 \end{bmatrix} = X u_2.$$
This proves that any vector $u$ that is a solution of $Ax = 0$ must be a linear combination of the $n - r$ special solutions given by the columns of $X$. And we have already seen that the columns of $X$ are linearly independent. Hence, the columns of $X$ constitute a basis for the null space of $A$. Therefore, the nullity of $A$ is $n - r$. Since $r$ equals the rank of $A$, it follows that $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$. This concludes our proof.
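
The construction above is easy to instantiate numerically. The following sketch (the choices of $A_1$ and $B$, and the NumPy calls, are our own illustration, not part of the cited proof) builds $A = [A_1 \;\; A_1 B]$, forms $X$, and checks both claims:

    import numpy as np

    A1 = np.array([[1., 0.],
                   [0., 1.],
                   [1., 1.]])      # m x r, with r = 2 independent columns
    B  = np.array([[1., 2.],
                   [3., 4.]])      # r x (n - r)
    A  = np.hstack([A1, A1 @ B])   # A = [A1, A1 B], here 3 x 4 of rank 2

    X = np.vstack([-B, np.eye(B.shape[1])])   # X = [[-B], [I_{n-r}]]

    assert np.allclose(A @ X, 0)                    # AX = -A1 B + A1 B = 0
    assert np.linalg.matrix_rank(X) == B.shape[1]   # columns of X independent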

Reformulations and generalizations

This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.

In more modern language, the theorem can also be phrased as follows: if

$$0 \to U \to V \to R \to 0$$

is a short exact sequence of vector spaces, then

$$\dim(U) + \dim(R) = \dim(V).$$

Here $R$ plays the role of $\operatorname{im} T$ and $U$ is $\ker T$, i.e.

$$0 \to \ker T \hookrightarrow V \xrightarrow{T} \operatorname{im} T \to 0.$$

In the finite-dimensional case, this formulation is susceptible to a generalization: if

$$0 \to V_1 \to V_2 \to \cdots \to V_r \to 0$$

is an exact sequence of finite-dimensional vector spaces, then

$$\sum_{i=1}^{r} (-1)^{i} \dim V_{i} = 0.$$[5]
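
As an illustrative check (our own remark, not from the cited source), taking $r = 3$ with $V_1 = U$, $V_2 = V$, $V_3 = R$ gives $-\dim U + \dim V - \dim R = 0$, which is exactly the short-exact-sequence statement above.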

The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map $T \in \operatorname{Hom}(V, W)$, where $V$ and $W$ are finite-dimensional, is defined by

$$\operatorname{index} T = \dim \ker T - \dim \operatorname{coker} T.$$

Intuitively, $\dim \ker T$ is the number of independent solutions $x$ of the equation $Tx = 0$, and $\dim \operatorname{coker} T$ is the number of independent restrictions that have to be put on $y$ to make the equation $Tx = y$ solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement

$$\operatorname{index} T = \dim V - \dim W.$$

We see that we can easily read off the index of the linear map $T$ from the involved spaces, without any need to analyze $T$ in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.
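
This identity, too, is easy to confirm numerically. In the sketch below (the example matrix and the NumPy/SciPy routines are our own illustrative choices), the kernel and cokernel dimensions are computed separately and their difference matches $\dim V - \dim W$:

    import numpy as np
    from scipy.linalg import null_space

    T = np.array([[1., 0., 2., 0., 1.],
                  [0., 1., 1., 1., 0.],
                  [1., 1., 3., 1., 1.]])   # a rank-2 map from R^5 (V) to R^3 (W)

    dim_ker = null_space(T).shape[1]                   # solutions of Tx = 0
    dim_coker = T.shape[0] - np.linalg.matrix_rank(T)  # restrictions on y
    assert dim_ker - dim_coker == T.shape[1] - T.shape[0]  # index = dim V - dim W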

Notes

  1. ^ Friedberg; Insel; Spence. Linear Algebra. Pearson. p. 70. ISBN 9780321998897. 
  2. ^ Friedberg; Insel; Spence. Linear Algebra. Pearson. pp. 103–104. ISBN 9780321998897. 
  3. ^ Friedberg; Insel; Spence. Linear Algebra. Pearson. p. 70. ISBN 9780321998897. 
  4. ^ Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388 
  5. ^ Zaman, Ragib. "Dimensions of vector spaces in an exact sequence". Mathematics Stack Exchange. Retrieved 27 October 2015. 
