Cramer's rule

Cramer's rule is a theorem in linear algebra, which gives the solution of a system of linear equations in terms of determinants. It is named after Gabriel Cramer (1704–1752), who published the rule in his 1750 Introduction à l'analyse des lignes courbes algébriques, although Colin Maclaurin also published the method in his 1748 Treatise of Algebra (and probably knew of the method as early as 1729).[1]

Computationally, it is inefficient for large matrices and thus is not used in practical applications that may involve many equations. However, as no pivoting is needed, it can be more efficient than Gaussian elimination for small matrices, particularly when SIMD operations are used.

Cramer's rule is of theoretical importance in that it gives an explicit expression for the solution of the system.

Elementary formulation

The system of equations is represented in matrix multiplication form as:

Ax = c

where the square matrix A is invertible and the vector x is the column vector of the variables: x = (x_i).

The theorem then states that:

x_i = \frac{\det(A_i)}{\det(A)} \qquad (1)

where A_i is the matrix formed by replacing the ith column of A by the column vector c. For simplicity, a single symbol like D is sometimes used to represent det(A), and the notation D_i is used to represent det(A_i). Thus, Equation (1) can be compactly written as

x_i = \frac{D_i}{D}.
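To make Equation (1) concrete, the following short Python sketch (an illustration added here, not part of the original article) forms each A_i by replacing a column of A with c and takes ratios of determinants; it assumes NumPy is available and that det(A) is not numerically zero.

    import numpy as np

    def cramer_solve(A, c):
        """Solve A x = c by Cramer's rule: x_i = det(A_i) / det(A),
        where A_i is A with its i-th column replaced by c."""
        A = np.asarray(A, dtype=float)
        c = np.asarray(c, dtype=float)
        det_A = np.linalg.det(A)
        if abs(det_A) < 1e-12:
            raise ValueError("det(A) is numerically zero; Cramer's rule does not apply")
        x = np.empty(len(c))
        for i in range(len(c)):
            A_i = A.copy()
            A_i[:, i] = c            # replace the i-th column of A by c
            x[i] = np.linalg.det(A_i) / det_A
        return x

    # Example: 2x + y = 5, x - y = 1 has solution x = 2, y = 1
    print(cramer_solve([[2, 1], [1, -1]], [5, 1]))    # [2. 1.]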

Abstract formulation

Let R be a commutative ring, A an n×n matrix with coefficients in R. Then

\operatorname{Adj}(A)\, A = A\, \operatorname{Adj}(A) = \det(A)\, I

where Adj(A) denotes the adjugate of A, det(A) is the determinant, and I is the identity matrix. If det(A) is invertible in R, then the inverse matrix of A is

A^{-1} = \det(A)^{-1} \operatorname{Adj}(A).

If R is a field (such as the field of real numbers), then this gives a formula for the inverse of A, provided det(A) ≠ 0. The formula is, however, of limited practical value for large matrices, as there are other more efficient ways of generating the inverse matrix, such as by Gauss-Jordan elimination.
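As an added numerical check of the adjugate identity (an illustration, not part of the original article), the sketch below builds Adj(A) from cofactors with NumPy and verifies that Adj(A)·A = det(A)·I and that A^{-1} = det(A)^{-1} Adj(A) for a small matrix.

    import numpy as np

    def adjugate(A):
        """Adjugate (classical adjoint): transpose of the cofactor matrix."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        cof = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return cof.T

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])                     # det(A) = 5
    print(adjugate(A) @ A)                         # 5 * identity matrix
    print(np.allclose(np.linalg.inv(A), adjugate(A) / np.linalg.det(A)))  # True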

Example

Consider the linear system

ax + by = e
cx + dy = f

which in matrix format is

\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} e \\ f \end{pmatrix}.

Then, x and y can be found with Cramer's rule as:

x = \frac{\begin{vmatrix} e & b \\ f & d \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{ed - bf}{ad - bc}

and

y = \frac{\begin{vmatrix} a & e \\ c & f \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{af - ec}{ad - bc}.

The rules for 3×3 matrices are similar. Given

ax + by + cz = j
dx + ey + fz = k
gx + hy + iz = l

which in matrix format is

\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} j \\ k \\ l \end{pmatrix},

x, y and z can be found as follows:

x = \frac{\begin{vmatrix} j & b & c \\ k & e & f \\ l & h & i \end{vmatrix}}{\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}}, \qquad
y = \frac{\begin{vmatrix} a & j & c \\ d & k & f \\ g & l & i \end{vmatrix}}{\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}}, \qquad
z = \frac{\begin{vmatrix} a & b & j \\ d & e & k \\ g & h & l \end{vmatrix}}{\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}}.
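For a concrete instance of the 2×2 formulas (a worked example added here for illustration), take a = 2, b = 3, e = 7 and c = 1, d = 1, f = 3, i.e. the system 2x + 3y = 7, x + y = 3. Then

x = \frac{ed - bf}{ad - bc} = \frac{7\cdot 1 - 3\cdot 3}{2\cdot 1 - 3\cdot 1} = \frac{-2}{-1} = 2, \qquad
y = \frac{af - ec}{ad - bc} = \frac{2\cdot 3 - 7\cdot 1}{2\cdot 1 - 3\cdot 1} = \frac{-1}{-1} = 1,

which indeed satisfies both equations.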

Applications to differential geometry

Cramer's rule is also extremely useful for solving problems in differential geometry. Consider the two equations F(x, y, u, v) = 0 and G(x, y, u, v) = 0. When u and v are independent variables, we can define x = X(u, v) and y = Y(u, v).

Finding an equation for ∂x/∂u is a trivial application of Cramer's rule.

First, calculate the first derivatives of F, G, x, and y:

dF = \frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy + \frac{\partial F}{\partial u} du + \frac{\partial F}{\partial v} dv = 0
dG = \frac{\partial G}{\partial x} dx + \frac{\partial G}{\partial y} dy + \frac{\partial G}{\partial u} du + \frac{\partial G}{\partial v} dv = 0
dx = \frac{\partial x}{\partial u} du + \frac{\partial x}{\partial v} dv
dy = \frac{\partial y}{\partial u} du + \frac{\partial y}{\partial v} dv

Substituting dx, dy into dF and dG, we have:

dF = \left(\frac{\partial F}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial u} + \frac{\partial F}{\partial u}\right) du + \left(\frac{\partial F}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial v} + \frac{\partial F}{\partial v}\right) dv = 0
dG = \left(\frac{\partial G}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial u} + \frac{\partial G}{\partial u}\right) du + \left(\frac{\partial G}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial v} + \frac{\partial G}{\partial v}\right) dv = 0

Since u, v are both independent, the coefficients of du, dv must be zero. So we can write out equations for the coefficients:

\frac{\partial F}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial u} = -\frac{\partial F}{\partial u}
\frac{\partial G}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial u} = -\frac{\partial G}{\partial u}
\frac{\partial F}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial F}{\partial y} \frac{\partial y}{\partial v} = -\frac{\partial F}{\partial v}
\frac{\partial G}{\partial x} \frac{\partial x}{\partial v} + \frac{\partial G}{\partial y} \frac{\partial y}{\partial v} = -\frac{\partial G}{\partial v}

Now, by Cramer's rule, we see that:

\frac{\partial x}{\partial u} = \frac{\begin{vmatrix} -\frac{\partial F}{\partial u} & \frac{\partial F}{\partial y} \\ -\frac{\partial G}{\partial u} & \frac{\partial G}{\partial y} \end{vmatrix}}{\begin{vmatrix} \frac{\partial F}{\partial x} & \frac{\partial F}{\partial y} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial y} \end{vmatrix}}

This is now a formula in terms of two Jacobians:

\frac{\partial x}{\partial u} = -\frac{\dfrac{\partial (F, G)}{\partial (u, y)}}{\dfrac{\partial (F, G)}{\partial (x, y)}}

Similar formulae can be derived for ∂x/∂v, ∂y/∂u, and ∂y/∂v.
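As an added sanity check of the Jacobian formula (not part of the original article), the sketch below uses SymPy with an assumed concrete example, F = u² + v² − x and G = uv − y, for which the explicit solution x = u² + v² is known, so ∂x/∂u should come out as 2u.

    import sympy as sp

    u, v, x, y = sp.symbols('u v x y')

    # Assumed example: F = 0 and G = 0 define x = u**2 + v**2 and y = u*v.
    F = u**2 + v**2 - x
    G = u*v - y

    def jacobian(P, Q, a, b):
        """Jacobian determinant d(P, Q)/d(a, b)."""
        return sp.Matrix([[sp.diff(P, a), sp.diff(P, b)],
                          [sp.diff(Q, a), sp.diff(Q, b)]]).det()

    # Formula from above: dx/du = - [d(F,G)/d(u,y)] / [d(F,G)/d(x,y)]
    dxdu = -jacobian(F, G, u, y) / jacobian(F, G, x, y)
    print(sp.simplify(dxdu))                                  # 2*u
    print(sp.simplify(dxdu - sp.diff(u**2 + v**2, u)) == 0)   # True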

Applications to algebra

Cramer's rule can be used to prove the Cayley–Hamilton theorem of linear algebra, as well as Nakayama's lemma, which is fundamental in commutative ring theory.

Relevance to eigenvalues

Cramer's rule can be used to formulate the characteristic equation in eigenvalue problems. The eigenvalue equation Ax = λx for an eigenvector x is often expressed as

(A - \lambda I)\, x = 0.

This can be viewed as a linear system of equations in which the coefficient matrix is the expression in the parentheses, the vector of unknowns is x, and the right-hand side is the zero vector. According to Cramer's rule, each component of x is expressed as a quotient of determinants. The numerator will always be zero, since here c = 0. Hence, to find the non-trivial solutions (not all zeros), the denominator must also equal zero; that is, the determinant of the coefficient matrix must be zero. Then the solutions λ are given by:

\det(A - \lambda I) = 0,

which is the familiar characteristic equation.
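As an added illustration (not part of the original article), the characteristic equation det(A − λI) = 0 can be formed and solved symbolically; the sketch below assumes SymPy.

    import sympy as sp

    lam = sp.symbols('lambda')
    A = sp.Matrix([[2, 1],
                   [1, 2]])

    # Characteristic equation: det(A - lambda*I) = 0
    char_poly = sp.expand((A - lam * sp.eye(2)).det())
    print(char_poly)                           # lambda**2 - 4*lambda + 3
    print(sp.solve(sp.Eq(char_poly, 0), lam))  # eigenvalues: [1, 3]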

Applications to integer programming

Cramer's rule can be used to prove that an integer programming problem whose constraint matrix is totally unimodular and whose right-hand side is integer has integer basic solutions: each basic solution solves a square system Bx = b in which B is a nonsingular submatrix of the constraint matrix, so by Cramer's rule every component is a quotient of integer determinants whose denominator det(B) is ±1, and is therefore an integer. This makes the integer program substantially easier to solve.
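A small (assumed) numerical illustration of this argument: the sketch below takes a nonsingular basis B drawn from a totally unimodular (consecutive-ones) constraint matrix and an integer right-hand side, and checks via Cramer's rule, in exact arithmetic with SymPy, that det(B) = ±1 and that every component of the basic solution is an integer.

    import sympy as sp

    # A nonsingular basis submatrix of a totally unimodular (consecutive-ones) matrix.
    B = sp.Matrix([[1, 1, 0],
                   [0, 1, 1],
                   [0, 0, 1]])
    b = sp.Matrix([4, 7, 2])      # integer right-hand side

    print(B.det())                # 1 (a nonsingular basis of a TU matrix has det +-1)

    # Cramer's rule: x_i = det(B_i) / det(B). Each det(B_i) is an integer,
    # so with det(B) = +-1 every component of the basic solution is an integer.
    x = []
    for i in range(3):
        # B_i: replace the i-th column of B by b
        B_i = sp.Matrix.hstack(*[b if j == i else B[:, j] for j in range(3)])
        x.append(B_i.det() / B.det())
    print(x)                      # [-1, 5, 2] -- all integers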

References

  1. ^ Carl B. Boyer, A History of Mathematics, 2nd edition (Wiley, 1968), p. 431.