Portal:Linear algebra
Introduction
Linear algebra is the branch of mathematics concerning linear equations such as
:a1x1 + ... + anxn = b,
linear maps such as
:(x1, ..., xn) ↦ a1x1 + ... + anxn,
and their representations through matrices and vector spaces.
Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis may be basically viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and engineering areas, because it allows modeling many natural phenomena, and efficiently computing with such models. For nonlinear systems, which cannot be modeled with linear algebra, linear algebra is often used as a first-order approximation.
Selected general articles
- In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context. One important context is numerical linear algebra, and another is algorithms for solving ordinary and partial differential equations by discrete approximation.
In numerical linear algebra the principal concern is instabilities caused by proximity to singularities of various kinds, such as very small or nearly colliding eigenvalues. In numerical algorithms for differential equations, on the other hand, the concern is the growth of round-off errors and/or small fluctuations in the initial data, which might cause a large deviation of the final answer from the exact solution. Read more...
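The instability caused by near-singularity can be sketched numerically. The snippet below (a minimal illustration, assuming NumPy is available) uses the Hilbert matrix, a standard example of an ill-conditioned system: its huge condition number bounds how much of the input's precision can survive into the computed solution.

```python
import numpy as np

# The 8x8 Hilbert matrix H[i][j] = 1/(i+j+1) is notoriously ill-conditioned.
n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

x_true = np.ones(n)        # choose the exact solution in advance
b = H @ x_true             # build a consistent right-hand side

cond = np.linalg.cond(H)   # condition number, on the order of 10^10
x = np.linalg.solve(H, b)  # solve in floating point

# The relative error can approach cond * machine-epsilon, far above
# the ~1e-16 accuracy one might naively expect from double precision.
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Even though `solve` uses a backward-stable algorithm, the conditioning of the problem itself limits the attainable accuracy.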
A vector space (also called a linear space) is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms, listed below.
Euclidean vectors are an example of a vector space. They represent physical quantities such as forces: any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vectors in vector spaces do not necessarily have to be arrow-like objects as they appear in the mentioned examples: vectors are regarded as abstract mathematical objects with particular properties, which in some cases can be visualized as arrows. Read more...
MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and proprietary programming language developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran and Python.
Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. Read more...
- In linear algebra, the quotient of a vector space V by a subspace N is a vector space obtained by "collapsing" N to zero. The space obtained is called a quotient space and is denoted V/N (read V mod N or V by N). Read more...
- The geometric algebra (GA) of a vector space is an algebra over a field, noted for its multiplication operation called the geometric product on a space of elements called multivectors, which is a superset of both the scalars F and the vector space V. Mathematically, a geometric algebra may be defined as the Clifford algebra of a vector space with a quadratic form. Clifford's contribution was to define a new product, the geometric product, that united the Grassmann and Hamilton algebras into a single structure. Adding the dual of the Grassmann exterior product (the "meet") allows the use of the Grassmann–Cayley algebra, and a conformal version of the latter together with a conformal Clifford algebra yields a conformal geometric algebra (CGA) providing a framework for classical geometries. In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations.
The scalars and vectors have their usual interpretation, and make up distinct subspaces of a GA. Bivectors provide a more natural representation of pseudovector quantities in vector algebra such as oriented area, oriented angle of rotation, torque, angular momentum, electromagnetic field and the Poynting vector. A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace of V and orthogonal projections onto that subspace. Rotations and reflections are represented as elements of the algebra. Unlike vector algebra, a GA naturally accommodates any number of dimensions and any quadratic form such as in relativity. Read more...
A linear system in three variables determines a collection of planes. The intersection point is the solution.
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same set of variables. For example,
:3x + 2y − z = 1
:2x − 2y + 4z = −2
:−x + (1/2)y − z = 0
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
:x = 1, y = −2, z = −2,
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
In mathematics, the theory of linear systems is the basis and a fundamental part of linear algebra, a subject which is used in most parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Read more...
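As a minimal sketch (assuming NumPy is available), the three-equation example above can be written as A x = b and solved directly:

```python
import numpy as np

# Coefficient matrix and right-hand side of the 3x3 system quoted above.
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

x = np.linalg.solve(A, b)   # the unique solution (x, y, z) = (1, -2, -2)
```

Substituting the result back (`A @ x`) reproduces `b`, confirming all three equations hold simultaneously.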
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra (or more generally, a module in abstract algebra). In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term "scalar" itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and must be distinguished from inner product of two vectors (where the product is a scalar). Read more...
The vector projection of a vector a on (or onto) a nonzero vector b (also known as the vector component or vector resolution of a in the direction of b) is the orthogonal projection of a onto a straight line parallel to b. It is a vector parallel to b, defined as
:proj_b a = ((a · b) / (b · b)) b. Read more...
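The formula above translates directly into code; this sketch (assuming NumPy, with the function name `project` chosen here for illustration) also checks that the residual a − proj_b a is orthogonal to b:

```python
import numpy as np

def project(a, b):
    """Orthogonal projection of vector a onto the line spanned by b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (np.dot(a, b) / np.dot(b, b)) * b

p = project([3.0, 4.0], [1.0, 0.0])   # component of (3, 4) along the x-axis
r = np.array([3.0, 4.0]) - p          # residual, orthogonal to b
```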
In linear algebra, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis. The first usage of the concept of a vector space with an inner product is due to Peano, in 1898.
An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space. A complete space with an inner product is called a Hilbert space. An (incomplete) space with an inner product is called a pre-Hilbert space, since its completion with respect to the norm induced by the inner product is a Hilbert space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. Read more...
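To make the "length and angle from an inner product" point concrete, here is a small sketch (assuming NumPy) using the standard dot product on R²:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

# Length: the norm induced by the inner product, ||v|| = sqrt(<v, v>).
length_v = np.sqrt(np.dot(v, v))                  # sqrt(2)

# Angle: cos(theta) = <u, v> / (||u|| ||v||).
cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.arccos(cos_angle)                      # pi/4 radians
```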
In the theory of vector spaces, a set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others; if no vector in the set can be written in this way, then the vectors are said to be linearly independent. These concepts are central to the definition of dimension.
A vector space can be of finite or infinite dimension depending on the number of linearly independent basis vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining a basis for a vector space. Read more...
- In linear algebra, the linear span (also called the linear hull or just span) of a set of vectors in a vector space is the intersection of all linear subspaces which each contain every vector in that set. The linear span of a set of vectors is therefore a vector space. Spans can be generalized to matroids and modules.
For expressing that a vector space V is a span of a set S, one commonly uses the following phrases: S spans V; V is spanned by S; S is a spanning set of V; S is a generating set of V. Read more...
- Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations take advantage of special floating-point hardware such as vector registers or SIMD instructions.
It originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This Fortran library is known as the reference implementation (sometimes confusingly referred to as the BLAS library); it is not optimized for speed but is in the public domain. Read more...
- Suppose V and W are vector spaces over the field K. The Cartesian product V × W can be given the structure of a vector space over K (Halmos 1974, §18) by defining the operations componentwise:
- (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)
- α (v, w) = (α v, α w)
- In mathematics and vector algebra, the cross product or vector product (occasionally directed area product, to emphasize its geometric significance) is a binary operation on two vectors in three-dimensional space R3, and is denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product a × b is a vector that is perpendicular to both a and b, and thus normal to the plane containing them. It has many applications in mathematics, physics, engineering, and computer programming. It should not be confused with the dot product (projection product).
If two vectors have the same direction (or have the exact opposite direction from one another, i.e. are not linearly independent) or if either one has zero length, then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths. The cross product is anticommutative (i.e., a × b = −(b × a)) and is distributive over addition (i.e., a × (b + c) = a × b + a × c). The space R3 together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Read more...
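The perpendicularity, anticommutativity, and area properties can be checked in a few lines (a sketch assuming NumPy):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

c = np.cross(a, b)            # perpendicular to both a and b

# Anticommutativity: b x a = -(a x b).
anti = np.cross(b, a)

# Magnitude equals the parallelogram area; for perpendicular unit
# vectors that is simply 1 * 1 = 1.
area = np.linalg.norm(c)
```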
- In mathematics, a linear map (also called a linear mapping, linear transformation or, in some contexts, linear function) is a mapping V → W between two modules (including vector spaces) that preserves (in the sense defined below) the operations of addition and scalar multiplication.
An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map, while in analytic geometry it does not. Read more...
- The following tables provide a comparison of numerical analysis software. Read more...
- In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that
:AB = BA = In,
where In denotes the n-by-n identity matrix. Read more...
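The defining property AB = BA = I can be verified numerically (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])    # det(A) = 1, so A is invertible

B = np.linalg.inv(A)

# B plays the role of the inverse in the definition: AB = BA = I.
I2 = np.eye(2)
```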
In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors u and v, denoted by u ∧ v, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of u ∧ v can be interpreted as the area of the parallelogram with sides u and v, which in three dimensions can also be computed using the cross product of the two vectors. Like the cross product, the exterior product is anticommutative, meaning that u ∧ v = −(v ∧ u) for all vectors u and v, but, unlike the cross product, the exterior product is associative. One way to visualize a bivector is as a family of parallelograms all lying in the same plane, having the same area, and with the same orientation—a choice of clockwise or counterclockwise.
When regarded in this manner, the exterior product of two vectors is called a 2-blade. More generally, the exterior product of any number k of vectors can be defined and is sometimes called a k-blade. It lives in a space known as the kth exterior power. The magnitude of the resulting k-blade is the volume of the k-dimensional parallelotope whose edges are the given vectors, just as the magnitude of the scalar triple product of vectors in three dimensions gives the volume of the parallelepiped generated by those vectors. Read more...
- In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows or columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. Read more...
Scalars are real numbers used in linear algebra, as opposed to vectors. This image shows a Euclidean vector. Its coordinates x and y are scalars, as is its length, but v is not a scalar.
A scalar is an element of a field which is used to define a vector space. A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector.
In linear algebra, real numbers or other elements of a field are called scalars and relate to vectors in a vector space through the operation of scalar multiplication, in which a vector can be multiplied by a number to produce another vector. More generally, a vector space may be defined by using any field instead of real numbers, such as complex numbers. Then the scalars of that vector space will be the elements of the associated field. Read more...
- In mathematics, matrix multiplication or matrix product is a binary operation that produces a matrix from two matrices with entries in a field, or, more generally, in a ring or even a semiring. The matrix product is designed for representing the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering. In more detail, if A is an n × m matrix and B is an m × p matrix, their matrix product AB is an n × p matrix, in which the m entries across a row of A are multiplied with the m entries down a column of B and summed to produce an entry of AB. When two linear maps are represented by matrices, then the matrix product represents the composition of the two maps.
The definition of matrix product requires that the entries belong to a semiring, and does not require multiplication of elements of the semiring to be commutative. In many applications, the matrix elements belong to a field, although the tropical semiring is also a common choice for graph shortest path problems. Even in the case of matrices over fields, the product is not commutative in general, although it is associative and is distributive over matrix addition. The identity matrices (which are the square matrices whose entries are zero outside of the main diagonal and 1 on the main diagonal) are identity elements of the matrix product. It follows that the n × n matrices over a ring form a ring, which is noncommutative except if n = 1 and the ground ring is commutative. Read more...
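A short sketch (assuming NumPy) illustrates both the shape rule (n × m times m × p gives n × p) and the failure of commutativity:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)    # a 2 x 3 matrix
B = np.arange(12.0).reshape(3, 4)   # a 3 x 4 matrix

C = A @ B                           # row-by-column products: 2 x 4

# Even for square matrices the product is generally not commutative.
P = np.array([[0.0, 1.0], [0.0, 0.0]])
Q = np.array([[0.0, 0.0], [1.0, 0.0]])
```

Here `P @ Q` and `Q @ P` are different matrices, so matrix multiplication gives a noncommutative ring, as the text states for n ≥ 2.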
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix, producing another matrix denoted AT (also written A′, Atr, tA or At). It is achieved by any one of the following equivalent actions:
- reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain AT,
- write the rows of A as the columns of AT,
- write the columns of A as the rows of AT.
Formally, the i-th row, j-th column element of AT is the j-th row, i-th column element of A: (AT)ij = Aji. Read more...
A sparse matrix obtained when solving a finite element problem in two dimensions. The non-zero elements are shown in black.
In numerical analysis and computer science, a sparse matrix or sparse array is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is called the sparsity of the matrix (which is equal to 1 minus the density of the matrix).
Conceptually, sparsity corresponds to systems that are loosely coupled. Consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls had springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory, which have a low density of significant data or connections. Read more...
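A minimal dictionary-of-keys sketch (illustrative only; the class name and interface are invented here, and real code would normally use a library such as scipy.sparse) shows how storing only nonzero entries makes the sparsity of a loosely coupled system cheap to exploit:

```python
class DOKMatrix:
    """Toy dictionary-of-keys sparse matrix: stores only nonzero entries."""

    def __init__(self, shape):
        self.shape = shape
        self.data = {}                 # (row, col) -> nonzero value

    def __setitem__(self, key, value):
        if value != 0:
            self.data[key] = value
        else:
            self.data.pop(key, None)   # storing a zero removes the entry

    def __getitem__(self, key):
        return self.data.get(key, 0)   # absent entries read as zero

    def sparsity(self):
        m, n = self.shape
        return 1 - len(self.data) / (m * n)

# A 1000 x 1000 diagonal system stores only 1000 of 1,000,000 entries.
M = DOKMatrix((1000, 1000))
for i in range(1000):
    M[i, i] = 2
```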
In mathematics, a set B of elements (vectors) in a vector space V is called a basis if every element of V may be written in a unique way as a (finite) linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates on B of the vector. The elements of a basis are called basis vectors.
Equivalently B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In more general terms, a basis is a linearly independent spanning set. Read more...
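A sketch of the equivalent characterisation (assuming NumPy): the columns form a basis of R³ exactly when the matrix they form has full rank, and the coordinates of a vector on that basis are found by solving a linear system:

```python
import numpy as np

# Three candidate basis vectors of R^3, as columns of B.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

full_rank = np.linalg.matrix_rank(B) == 3   # independent, hence a basis

# Coordinates c of v on this basis satisfy B @ c = v.
v = np.array([2.0, 3.0, 1.0])
c = np.linalg.solve(B, v)
```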
In mathematics, orthogonality is the generalization of the notion of perpendicularity to the linear algebra of bilinear forms. Two elements u and v of a vector space with bilinear form B are orthogonal when B(u, v) = 0. Depending on the bilinear form, the vector space may contain nonzero self-orthogonal vectors. In the case of function spaces, families of orthogonal functions are used to form a basis.
By extension, orthogonality is also used to refer to the separation of specific features of a system. The term also has specialized meanings in other fields including art and chemistry. Read more...
- The following tables provide a comparison of linear algebra software libraries, either specialized or general purpose libraries with significant linear algebra coverage. Read more...
- In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping Rn to Rm and x is a column vector with n entries, then
:T(x) = Ax
for some m × n matrix A, called the transformation matrix of T. Read more...
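A concrete instance (sketched with NumPy): a 90-degree rotation of the plane is the linear map x ↦ Ax with A the standard rotation matrix.

```python
import numpy as np

# Transformation matrix of a rotation by theta = pi/2.
theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])   # the first standard basis vector e1
y = A @ x                  # applying T rotates e1 onto e2
```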
- In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called inner product (or rarely projection product); see also inner product space.
Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle of two vectors is the quotient of their dot product by the product of their lengths). Read more...
- In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics.
Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article. Read more...
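The ax + by form translates directly to vectors (a small sketch assuming NumPy):

```python
import numpy as np

# A linear combination a*x + b*y of two vectors with scalar weights a, b.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
a, b = 3.0, -2.0

combo = a * x + b * y
```

Since x and y here are the standard basis vectors, the combination simply reads off the weights as coordinates.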
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or—as here—simply a vector) is a geometric object that has magnitude (or length) and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by AB written with an arrow over it.
A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier". It was first used by 18th-century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Read more...
- In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.
This notion can be made more precise for an n by m matrix M by partitioning n into a collection rowgroups, and then partitioning m into a collection colgroups. The original matrix is then considered as the "total" of these groups, in the sense that the (i, j) entry of the original matrix corresponds in a 1-to-1 way with some (s, t) offset entry of some block (x, y), where x ∈ rowgroups and y ∈ colgroups. Read more...
- In vector algebra, a branch of mathematics, the triple product is a product of three 3-dimensional vectors, usually Euclidean vectors. The name "triple product" is used for two different products, the scalar-valued scalar triple product and, less often, the vector-valued vector triple product. Read more...
- In mathematics, the tensor product V ⊗ W of two vector spaces V and W (over the same field) is itself a vector space, together with an operation of bilinear composition, denoted by ⊗, from ordered pairs in the Cartesian product V × W into V ⊗ W, in a way that generalizes the outer product. The tensor product of V and W is the vector space generated by the symbols v ⊗ w, with v ∈ V and w ∈ W, in which the relations of bilinearity are imposed for the product operation ⊗, and no other relations are assumed to hold. The tensor product space is thus the "freest" (or most general) such vector space, in the sense of having the fewest constraints.
The tensor product of (finite-dimensional) vector spaces has dimension equal to the product of the dimensions of the two factors:
:dim(V ⊗ W) = dim V × dim W.
In particular, this distinguishes the tensor product from the direct sum vector space, whose dimension is the sum of the dimensions of the two summands:
:dim(V ⊕ W) = dim V + dim W. Read more...
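For coordinate vectors, the Kronecker product realises the tensor product, and the dimension count above can be checked directly (a sketch assuming NumPy):

```python
import numpy as np

v = np.ones(3)                  # a vector in V, with dim V = 3
w = np.ones(4)                  # a vector in W, with dim W = 4

t = np.kron(v, w)               # lives in V ⊗ W: dimension 3 * 4 = 12
d = np.concatenate([v, w])      # lives in V ⊕ W: dimension 3 + 4 = 7
```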
The row vectors of a matrix. The row space of this matrix is the vector space generated by linear combinations of the row vectors.
In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
Let K be a field. The column space of an m × n matrix with components from K is a linear subspace of the m-space Km. The dimension of the column space is called the rank of the matrix and is at most min(m, n). A definition for matrices over a ring K is also possible. Read more...
- In mathematics, the seven-dimensional cross product is a bilinear operation on vectors in seven-dimensional Euclidean space. It assigns to any two vectors a, b in R7 a vector a × b also in R7. Like the cross product in three dimensions, the seven-dimensional product is anticommutative and a × b is orthogonal both to a and to b. Unlike in three dimensions, it does not satisfy the Jacobi identity, and while the three-dimensional cross product is unique up to a sign, there are many seven-dimensional cross products. The seven-dimensional cross product has the same relationship to the octonions as the three-dimensional product does to the quaternions.
The seven-dimensional cross product is one way of generalising the cross product to other than three dimensions, and it is the only other non-trivial bilinear product of two vectors that is vector-valued, anticommutative and orthogonal. In other dimensions there are vector-valued products of three or more vectors that satisfy these conditions, and binary products with bivector results. Read more...
- In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
The rank is commonly denoted rank(A) or rk(A); sometimes the parentheses are not written, as in rank A. Read more...
In mathematics, a bivector or 2-vector is a quantity in exterior algebra or geometric algebra that extends the idea of scalars and vectors. If a scalar is considered an order zero quantity, and a vector is an order one quantity, then a bivector can be thought of as being of order two. Bivectors have applications in many areas of mathematics and physics. They are related to complex numbers in two dimensions and to both pseudovectors and quaternions in three dimensions. They can be used to generate rotations in any number of dimensions, and are a useful tool for classifying such rotations. They also are used in physics, tying together a number of otherwise unrelated quantities.
Bivectors are generated by the exterior product on vectors: given two vectors a and b, their exterior product a ∧ b is a bivector, as is the sum of any bivectors. Not all bivectors can be generated as a single exterior product. More precisely, a bivector that can be expressed as an exterior product is called simple; in up to three dimensions all bivectors are simple, but in higher dimensions this is not the case. The exterior product of two vectors is anticommutative and alternating, so b ∧ a is the negation of the bivector a ∧ b, producing the opposite orientation, and a ∧ a is the zero bivector. Read more...
- In linear algebra, the outer product of two coordinate vectors is a matrix. If the two vectors have dimensions n and m, then their outer product is an n × m matrix. If the first vector is taken as a column vector, then the outer product is the matrix of columns proportional to this vector, where the proportionality of each column is a component of the second vector.
The outer product introduces tensor algebra, since the outer product of two vectors u and v is their tensor product u ⊗ v, which is the matrix A given by A = uvT. More generally, the outer product is an instance of Kronecker products. Read more...
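The n × m shape and the uvT structure are easy to verify (a sketch assuming NumPy):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])    # dimension n = 3
v = np.array([10.0, 20.0])       # dimension m = 2

A = np.outer(u, v)               # the 3 x 2 matrix u v^T

# Each column of A is a multiple of u, scaled by a component of v.
```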
In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process is a method for orthonormalising a set of vectors in an inner product space, most commonly the Euclidean space Rn equipped with the standard inner product. The Gram–Schmidt process takes a finite, linearly independent set S = {v1, ..., vk} for k ≤ n and generates an orthogonal set S′ = {u1, ..., uk} that spans the same k-dimensional subspace of Rn as S.
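The process itself is short enough to sketch in full (classical Gram–Schmidt, assuming NumPy; the modified variant is usually preferred for numerical work):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalise a linearly independent list."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float)
        for u in basis:
            w = w - np.dot(w, u) * u          # subtract the component along u
        basis.append(w / np.linalg.norm(w))   # normalise what remains
    return basis

Q = gram_schmidt([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0]])
```

The resulting vectors are unit length, mutually orthogonal, and span the same plane as the inputs.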
The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon Laplace had been familiar with it before Gram and Schmidt. In the theory of Lie group decompositions it is generalized by the Iwasawa decomposition. Read more...
- In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation
:T(v) = λv,
where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v. Read more...
- Let V be a vector space over a field F and let X be any set. The functions X → V can be given the structure of a vector space over F where the operations are defined pointwise, that is, for any f, g : X → V, any x in X, and any c in F, define
:(f + g)(x) = f(x) + g(x) and (cf)(x) = c f(x).
When the domain X has additional structure, one might consider instead the subset (or subspace) of all such functions which respect that structure. For example, if X is also a vector space over F, the set of linear maps X → V forms a vector space over F with pointwise operations (often denoted Hom(X, V)). One such space is the dual space of V: the set of linear functionals V → F with addition and scalar multiplication defined pointwise. Read more...
- In mathematics, any vector space V has a corresponding dual vector space (or just dual space for short) consisting of all linear functionals on V, together with the vector space structure of pointwise addition and scalar multiplication by constants.
The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the algebraic dual space. When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space. Read more... - In mathematics, and more specifically in linear algebra and functional analysis, the kernel (also known as null space or nullspace) of a linear map L : V → W between two vector spaces V and W, is the set of all elements v of V for which L(v) = 0, where 0 denotes the zero vector in W. That is, in set-builder notation,
: ker(L) = {v ∈ V : L(v) = 0}. Read more...
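A kernel basis can be computed numerically from the singular value decomposition, as sketched below with NumPy (the function name and tolerance are illustrative):

```python
import numpy as np

def kernel(A, tol=1e-12):
    """Return an orthonormal basis (as columns) for the kernel of A,
    taken from right singular vectors with numerically zero singular value."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    # Rows of vt beyond the rank span ker(A); transpose to column form.
    return vt[rank:].T

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1, so ker(A) is 1-dimensional
K = kernel(A)
```
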
- In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand-sides of the equations. It is named after Gabriel Cramer (1704–1752), who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748 (and possibly knew of it as early as 1729).
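The column-replacement recipe above can be sketched in Python with NumPy (the function name is illustrative):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule (fine for tiny systems;
    prefer np.linalg.solve in practice)."""
    det_A = np.linalg.det(A)
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = cramer_solve(A, b)        # solves 2x + y = 5, x + 3y = 10
```
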
Cramer's rule implemented in a naïve way is computationally inefficient for systems of more than two or three equations. In the case of n equations in n unknowns, it requires computation of n + 1 determinants, while Gaussian elimination produces the result with the same computational complexity as the computation of a single determinant. Cramer's rule can also be numerically unstable even for 2×2 systems. However, it has recently been shown that Cramer's rule can be implemented in O(n3) time, which is comparable to more common methods of solving systems of linear equations, such as Gaussian elimination (consistently requiring 2.5 times as many arithmetic operations for all matrix sizes), while exhibiting comparable numeric stability in most cases. Read more... - In multilinear algebra, a multivector, sometimes called Clifford number, is an element of the exterior algebra Λ(V) of a vector space V. This algebra is graded, associative and alternating, and consists of linear combinations of simple k-vectors (also known as decomposable k-vectors or k-blades) of the form
: v1 ∧ v2 ∧ ⋯ ∧ vk,
where v1, ..., vk are in V.
"Multivector" may mean either homogeneous elements (all terms of the linear combination have the same grade or degree k, that is are the product of the same number k of vectors), which are referred to as k-vectors or p-vectors, or may allow sums of terms in different degrees. Read more...
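As an illustration, a simple 2-vector u ∧ v of a real vector space can be represented by the antisymmetric matrix of its components u_i v_j − u_j v_i (a sketch with NumPy, not a full exterior-algebra implementation):

```python
import numpy as np

def wedge(u, v):
    """The simple 2-vector (2-blade) u ∧ v, represented as the
    antisymmetric matrix with entries u_i v_j - u_j v_i."""
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])
B = wedge(u, v)
```

This representation exhibits the alternating property: wedge(u, u) is zero, and swapping the arguments flips the sign.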
The row vectors of a matrix. The row space of this matrix is the vector space generated by linear combinations of the row vectors.
In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
Let F be a field. The column space of an m × n matrix with components from F is a linear subspace of the m-space Fm. The dimension of the column space is called the rank of the matrix and is at most min(m, n). A definition for matrices over a ring is also possible. Read more...
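The rank and an orthonormal basis for the column space can be computed numerically, for example with NumPy (a sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])    # m = 2, n = 3

# The rank (dimension of the column space) is at most min(m, n) = 2.
r = np.linalg.matrix_rank(A)

# Orthonormal basis for the column space from the reduced QR decomposition.
Q, _ = np.linalg.qr(A)
basis = Q[:, :r]
```
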
- An early electromechanical programmable computer, the Z3, included floating-point arithmetic (replica on display at Deutsches Museum in Munich).
In computing, floating-point arithmetic (FP) is arithmetic using a formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems that involve very small and very large real numbers and that require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the following form:
: significand × base^exponent,
where significand is an integer (i.e., in Z), base is an integer greater than or equal to two, and exponent is also an integer.
For example:
: 1.2345 = 12345 × 10^−4.
The term floating point refers to the fact that a number's radix point (decimal point, or, more commonly in computers, binary point) can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated as the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. Read more...
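A short Python illustration of this representation and of floating-point rounding, using the standard-library math module:

```python
import math

x = 0.1
# Decompose x as significand * 2**exponent with 0.5 <= significand < 1.
significand, exponent = math.frexp(x)   # gives (0.8, -3)

# The decimal 0.1 has no exact binary representation, so rounding shows up:
print(0.1 + 0.2 == 0.3)               # False
print(math.isclose(0.1 + 0.2, 0.3))   # True
```
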
Did you know...
- ... that Vera Faddeeva's 1950 book Computational methods of linear algebra was one of the first publications in that field of mathematics?
Selected images
In three-dimensional Euclidean space, planes represent solutions of linear equations, and their intersections represent the common solutions.
Associated Wikimedia
The following Wikimedia Foundation sister projects provide more on this subject:
Wikibooks
Books
Commons
Media
Wikinews
News
Wikiquote
Quotations
Wikisource
Texts
Wikiversity
Learning resources
Wiktionary
Definitions
Wikidata
Database
- What are portals?
- List of portals
