
User:SirMeowMeow/sandbox/Matrices


Definition

For some natural choice of rows $m$ and columns $n$, a matrix of size $m \times n$ over a field $\mathbb{F}$ is a collection of elements indexed by pairs $(i, j)$ with $1 \leq i \leq m$ and $1 \leq j \leq n$.[1][a] Unless specified, the elements of a matrix are assumed to be scalars, but they may also be elements from a ring, or something more general.

The set of all matrices with elements from $\mathbb{F}$ and with indices from $\{1, \dots, m\} \times \{1, \dots, n\}$ is denoted $\mathbb{F}^{m \times n}$ or $M_{m \times n}(\mathbb{F})$.[2]

Notation

Example

Let $A$ be an $m \times n$ matrix whose elements are from $\mathbb{F}$. Any individual entry may be referenced as $a_{ij}$ for the $i$-th row and the $j$-th column.
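For instance (values illustrative), a $2 \times 3$ matrix and one of its entries:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 2 \\ 4 & 3 & 5 \end{pmatrix}, \qquad a_{23} = 5$$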

Rows

The $i$-th row vector in a matrix is defined as the subset of elements which share the same $i$-th row index, ordered by the $j$-th column index:

$$A_i = \begin{pmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{pmatrix}$$
The column index can be omitted for brevity by simply noting $A_i$ for the $i$-th row.

Columns

Let $\operatorname{col}$ be a function which maps a matrix to a partition of its elements under the equivalence relation of sharing a column index, with each cell ordered by its row index. The $j$-th column vector of $A$ may then be written:

$$\mathbf{a}_j = \begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix}$$

Addition of Matrices

Let $A, B$ be matrices from $\mathbb{F}^{m \times n}$. Then the sum of matrices is defined as entry-wise field addition:

$$(A + B)_{ij} = a_{ij} + b_{ij}$$

Scaling of Matrices

Let $A$ be a matrix from $\mathbb{F}^{m \times n}$, and let $c \in \mathbb{F}$. The scalar multiplication of matrices is defined:

$$(cA)_{ij} = c \, a_{ij}$$
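A worked example over $\mathbb{R}$ (values illustrative) of both entry-wise addition and scaling:

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 4 & 4 \end{pmatrix}, \qquad 2 \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 2 & 4 \\ 6 & 8 \end{pmatrix}$$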

Transposition

As a matrix is a collection of double-indexed scalars $a_{ij}$, the transposition is a function of the form $\mathbb{F}^{m \times n} \to \mathbb{F}^{n \times m}$, defined as a mapping which swaps the positions of indices:

$$(A^{\mathsf{T}})_{ij} = a_{ji}$$

Observations

The transposition of a product is equal to the product of the transpositions, but in reverse order:

$$(AB)^{\mathsf{T}} = B^{\mathsf{T}} A^{\mathsf{T}}$$
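A minimal NumPy sketch (matrix values are illustrative) checking both the index swap and the reversal rule:

  import numpy as np

  A = np.array([[1, 2], [3, 4], [5, 6]])   # 3x2
  B = np.array([[7, 8, 9], [0, 1, 2]])     # 2x3

  # Transposition swaps the indices: (A^T)[i, j] == A[j, i].
  assert A.T[0, 1] == A[1, 0]

  # The transpose of a product is the product of transposes, reversed.
  assert np.array_equal((A @ B).T, B.T @ A.T)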

Matrix-Vector Product

Let $\mathbb{F}$ be a field.

Let $A \in \mathbb{F}^{m \times n}$ be a matrix, let $x \in \mathbb{F}^n$, and let $b \in \mathbb{F}^m$.

A matrix-vector product is a mapping $\mathbb{F}^n \to \mathbb{F}^m$, written $Ax = b$, such that:

$$b_i = \sum_{j=1}^{n} a_{ij} x_j$$
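A short NumPy sketch of the entry-wise definition (values illustrative), using the built-in @ product as a reference:

  import numpy as np

  A = np.array([[1, 2, 3],
                [4, 5, 6]])       # 2x3 matrix
  x = np.array([1, 0, 2])         # vector in F^3

  # Entry-wise definition: b_i = sum_j a_ij * x_j.
  b = np.array([sum(A[i, j] * x[j] for j in range(A.shape[1]))
                for i in range(A.shape[0])])

  assert np.array_equal(b, A @ x)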

Column Perspective

Let $A = \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{pmatrix}$, and let $x \in \mathbb{F}^n$. Then the matrix-vector product can be defined as the linear combination of pairing scalar coefficients in $x$ to column vectors in $A$:

$$Ax = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n$$
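The same product computed as a weighted sum of columns, in a brief NumPy sketch (values illustrative):

  import numpy as np

  A = np.array([[1, 2, 3],
                [4, 5, 6]])
  x = np.array([1, 0, 2])

  # Ax as a linear combination of the columns of A, weighted by x.
  combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))
  assert np.array_equal(combo, A @ x)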

Row Perspective

Equivalently, each entry of the product is the dot product of the corresponding row of $A$ with $x$:

$$(Ax)_i = A_i \cdot x$$

Product of Matrices

Let $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{n \times p}$ be matrices. Then the product $AB \in \mathbb{F}^{m \times p}$ is defined:

$$(AB)_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$$

for all natural pairs $(i, k)$.
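A sketch of the entry-wise definition in NumPy (values illustrative), checked against the built-in @ operator:

  import numpy as np

  A = np.array([[1, 2], [3, 4], [5, 6]])   # 3x2
  B = np.array([[7, 8, 9], [0, 1, 2]])     # 2x3

  # Entry-wise definition: (AB)[i, k] = sum_j A[i, j] * B[j, k].
  C = np.zeros((A.shape[0], B.shape[1]), dtype=A.dtype)
  for i in range(A.shape[0]):
      for k in range(B.shape[1]):
          C[i, k] = sum(A[i, j] * B[j, k] for j in range(A.shape[1]))

  assert np.array_equal(C, A @ B)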

Column Perspective

For the product $AB$, the $j$-th column of the matrix $AB$ is defined by the application of $A$ on the $j$-th column of $B$: the column $\mathbf{b}_j$ maps to $A\mathbf{b}_j$.
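A one-loop NumPy check of this column-by-column view (values illustrative):

  import numpy as np

  A = np.array([[1, 2], [3, 4]])
  B = np.array([[5, 6], [7, 8]])

  # The j-th column of AB is A applied to the j-th column of B.
  for j in range(B.shape[1]):
      assert np.array_equal((A @ B)[:, j], A @ B[:, j])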

Rank and Image

The rank of a matrix is the number of independent column vectors. The image of a matrix is the span of its columns.

An injective matrix is any full column-rank matrix.

A surjective matrix is any full row-rank matrix.
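A small NumPy illustration (values illustrative) of rank as the count of independent columns:

  import numpy as np

  A = np.array([[1, 2, 4],
                [2, 4, 8],
                [3, 6, 12]])   # every column is a multiple of the first

  print(np.linalg.matrix_rank(A))   # 1: only one independent column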

Kernel and Nullity

The kernel of a matrix $A \in \mathbb{F}^{m \times n}$ is the set of vectors which map to $\mathbf{0}$:

$$\ker(A) = \{ x \in \mathbb{F}^n : Ax = \mathbf{0} \}$$

The nullity is the dimension of the kernel.
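One way to compute a kernel basis numerically is via the SVD, since rows of $V^{\mathsf{T}}$ for zero singular values span the kernel; a NumPy sketch (values illustrative):

  import numpy as np

  A = np.array([[1., 2., 3.],
                [4., 5., 6.]])   # 2x3, rank 2

  U, s, Vt = np.linalg.svd(A)
  rank = np.sum(s > 1e-10)
  kernel = Vt[rank:].T             # null-space basis vectors as columns

  assert np.allclose(A @ kernel, 0)
  print(A.shape[1] - rank)         # nullity = 1, by rank-nullity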

Identity Matrix

For any matrix there also exist matrices which act as the unique right and left identity elements under the product of maps.

Any matrix which fulfills this condition is known as the identity matrix, denoted $I$ or $I_n$ with a subscript for some dimension $n$. All identity matrices are square matrices whose values are defined for any index $(i, j)$:

$$(I_n)_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$

An example of an identity matrix of $3 \times 3$ dimensions:

$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Inverse Matrix

A matrix $A$ is invertible if there exists a matrix $A^{-1}$ such that:

$$AA^{-1} = A^{-1}A = I$$

  • An invertible matrix may also be known as a non-singular matrix, or as a linear isomorphism or bijection.
  • The set of all invertible matrices of size $n \times n$ over $\mathbb{F}$ is known as the general linear group $\mathrm{GL}_n(\mathbb{F})$.
  • All invertible matrices are full-rank square matrices, and thus the kernel is trivial.
  • For endomorphisms of finite-dimensional vector spaces, surjection, injection, and bijection are all equivalent conditions.
  • The determinant of an invertible matrix is non-zero.
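A quick NumPy check of these properties (values illustrative):

  import numpy as np

  A = np.array([[2., 1.],
                [1., 1.]])         # det = 1, so invertible

  A_inv = np.linalg.inv(A)
  assert np.allclose(A @ A_inv, np.eye(2))
  assert np.allclose(A_inv @ A, np.eye(2))
  print(np.linalg.det(A))          # non-zero determinant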

Left Inverse

Although only square matrices are strictly invertible, an injective matrix will have a left-inverse $L$, satisfying $LA = I$, by definition.
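For real matrices, one standard left-inverse of a full column-rank matrix is $(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}$; a minimal NumPy sketch (values illustrative):

  import numpy as np

  A = np.array([[1., 0.],
                [0., 1.],
                [1., 1.]])        # 3x2, full column rank, hence injective

  # L = (A^T A)^{-1} A^T satisfies L A = I.
  L = np.linalg.inv(A.T @ A) @ A.T
  assert np.allclose(L @ A, np.eye(2))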

Right Inverse

Dually, a surjective (full row-rank) matrix has a right-inverse $R$ satisfying $AR = I$.

Orthonormal Matrix

An orthonormal matrix is an invertible matrix which preserves norms between vector spaces. It is also the matrix whose transpose is the multiplicative inverse: $Q^{-1} = Q^{\mathsf{T}}$, so that $Q^{\mathsf{T}}Q = QQ^{\mathsf{T}} = I$.

The set of all orthogonal matrices over $\mathbb{R}^{n \times n}$ forms the orthogonal group $O(n)$. The subset of $O(n)$ which has only a determinant of $1$ is known as the special orthogonal group $SO(n)$, and all matrices from this group are rotation matrices.

Observations

  • Let $x \in \mathbb{R}^n$. If $Q$ is orthonormal then $\lVert Qx \rVert = \lVert x \rVert$.
  • The determinant of $Q$ is either $1$ or $-1$.
  • If $Q$ is orthonormal then so is its transpose.

Example

A rotation of the plane by an angle $\theta$ is orthonormal:

$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad R_\theta^{\mathsf{T}} R_\theta = I_2$$
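A numeric check of this example in NumPy (the angle is illustrative):

  import numpy as np

  theta = 0.7
  Q = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])   # rotation in SO(2)

  assert np.allclose(Q.T @ Q, np.eye(2))            # Q^T is the inverse
  x = np.array([3., 4.])
  assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))  # norm kept
  print(np.linalg.det(Q))                            # 1.0 for a rotation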

Gram-Schmidt Process

Given the columns of a full-rank matrix, the Gram-Schmidt process can generate an orthonormal basis spanning the same space.
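A sketch of the classical process in NumPy (the matrix is illustrative; this is one of several equivalent formulations):

  import numpy as np

  def gram_schmidt(A):
      # Classical Gram-Schmidt on the columns of a full column-rank matrix.
      # Returns Q with orthonormal columns spanning the same space as A's.
      Q = np.zeros_like(A, dtype=float)
      for j in range(A.shape[1]):
          v = A[:, j].astype(float)
          for i in range(j):                      # remove components along
              v -= (Q[:, i] @ A[:, j]) * Q[:, i]  # earlier orthonormal vectors
          Q[:, j] = v / np.linalg.norm(v)         # normalize
      return Q

  A = np.array([[1., 1.], [0., 1.], [1., 0.]])
  Q = gram_schmidt(A)
  assert np.allclose(Q.T @ Q, np.eye(2))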

Trace of a Square Matrix

Let $A \in \mathbb{F}^{n \times n}$. The trace of $A$ is the sum of its diagonal entries:

$$\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}$$

The trace of a sum of matrices is the sum of their individual traces, and the trace of a product is invariant under cyclic permutation: $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.
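A quick NumPy check of both identities (values illustrative):

  import numpy as np

  A = np.array([[1, 2], [3, 4]])
  B = np.array([[5, 6], [7, 8]])

  assert np.trace(A + B) == np.trace(A) + np.trace(B)   # additivity
  assert np.trace(A @ B) == np.trace(B @ A)             # cyclic property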

Matrix Decompositions

Rank Factorization (CR)

Rank factorization, or the column-row (CR) form of a matrix, means to decompose $A = CR$, where $C$ represents independent columns from $A$, and $R$ represents independent rows from $A$.

This factorization is motivated mostly by pedagogy and demonstrates basic properties of matrix multiplication.
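A sketch of one way to build the factorization, using SymPy's reduced row echelon form to locate pivot columns (the matrix is illustrative):

  import numpy as np
  from sympy import Matrix

  A = np.array([[1, 2, 1],
                [2, 4, 3],
                [3, 6, 4]])               # rank 2: column 2 = 2 * column 1

  rref, pivots = Matrix(A).rref()
  C = A[:, list(pivots)]                  # independent columns of A
  R = np.array(rref.tolist(), dtype=float)[:len(pivots), :]  # nonzero rref rows

  assert np.allclose(C @ R, A)            # A = CR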

Orthonormal (QR)

Any matrix with linearly independent columns may be decomposed as $A = QR$, where $Q$ has orthonormal columns (obtainable by the Gram-Schmidt process) and $R$ is upper triangular.
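A minimal NumPy sketch (the matrix is illustrative; np.linalg.qr returns the reduced factorization by default):

  import numpy as np

  A = np.array([[1., 1.], [0., 1.], [1., 0.]])
  Q, R = np.linalg.qr(A)          # Q: orthonormal columns, R: upper triangular

  assert np.allclose(Q.T @ Q, np.eye(2))
  assert np.allclose(Q @ R, A)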

Singular Value Decomposition (SVD)

Any real or complex matrix of size $m \times n$ may be decomposed into the triple product $A = U \Sigma V^{*}$,[b] where $U$ is $m \times m$ and orthonormal, $\Sigma$ is an $m \times n$ diagonal matrix with non-negative real entries, and $V$ is $n \times n$ and orthonormal.

For $U$ we have a relationship between rows and "concepts." For $V$ we have a relationship between columns and concepts. For $\Sigma$ we have a matrix whose diagonal entries, the singular values, represent the weight of each concept.
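A short NumPy sketch of the real case (values illustrative):

  import numpy as np

  A = np.array([[3., 1., 1.],
                [1., 3., 1.]])               # 2x3 real matrix

  U, s, Vt = np.linalg.svd(A)                # s holds the singular values
  Sigma = np.zeros(A.shape)
  Sigma[:len(s), :len(s)] = np.diag(s)

  assert np.allclose(U @ Sigma @ Vt, A)      # A = U Sigma V^T
  assert np.allclose(U.T @ U, np.eye(2))     # U orthonormal
  assert np.allclose(Vt @ Vt.T, np.eye(3))   # V orthonormal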

Notes

  1. ^ Equivalently, a matrix may be defined as a function $\{1, \dots, m\} \times \{1, \dots, n\} \to \mathbb{F}$.
  2. ^ Or $A = U \Sigma V^{\mathsf{T}}$ for real matrices.

Citations

Sources

Textbook
  • Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
  • Roman, Steven (2005). Advanced Linear Algebra (2nd ed.). Springer. ISBN 0-387-24766-1.
  • Süli, Endre; Mayers, David (2011) [2003]. An Introduction to Numerical Analysis. Cambridge University Press. ISBN 978-0-521-00794-8.
  • Trefethen, Lloyd Nicholas; Bau III, David (1997). Numerical Linear Algebra. SIAM. ISBN 978-0-898713-61-9.

Web