Einstein notation

From Wikipedia, the free encyclopedia
Revision as of 04:08, 12 May 2010

In mathematics, especially in applications of linear algebra to physics, the Einstein notation or Einstein summation convention is a notational convention useful when dealing with coordinate formulae. It was introduced by Albert Einstein in 1916.[1]

According to this convention, when an index variable appears twice in a single term, once in an upper (superscript) and once in a lower (subscript) position, it implies that we are summing over all of its possible values. In typical applications, the index values are 1, 2, 3 (representing the three dimensions of physical Euclidean space), or 0, 1, 2, 3 or 1, 2, 3, 4 (representing the four dimensions of space-time, or Minkowski space), but they can have any range, even (in some applications) an infinite set. Thus in three dimensions

:<math> y = c_i x^i \,</math>

actually means

:<math> y = \sum_{i=1}^3 c_i x_i = c_1 x_1 + c_2 x_2 + c_3 x_3.</math>

The upper indices are not exponents, but instead different axes. Thus, for example, <math>x^2</math> should be read as "x-two", not "x squared", and corresponds to the traditional y-axis.
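The implied sum can be checked numerically. NumPy's einsum function implements exactly this convention: a repeated index letter is summed over. A minimal sketch, with made-up coefficient and coordinate values:

```python
import numpy as np

# Hypothetical coefficients c_i and coordinates x^i for the three axes.
c = np.array([2.0, 3.0, 5.0])
x = np.array([1.0, 4.0, 6.0])

# The explicit sum y = sum_{i=1}^{3} c_i x_i ...
y_explicit = sum(c[i] * x[i] for i in range(3))

# ... which Einstein notation abbreviates as y = c_i x^i:
# in einsum, the repeated index letter 'i' triggers the summation.
y_einstein = np.einsum('i,i->', c, x)

# Both give 2*1 + 3*4 + 5*6 = 44.
```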

Abstract index notation is a way of presenting the summation convention so that it is made clear that it is independent of coordinates.

In general relativity, the Greek alphabet and the Roman alphabet are used to distinguish whether summing over 1, 2, 3 or 0, 1, 2, 3 (usually Roman, i, j, ... for 1, 2, 3 and Greek, μ, ν, ... for 0, 1, 2, 3). As in sign conventions, the convention used in practice varies: Roman and Greek may be reversed.

When there is a fixed basis, one can work with only subscripts, but in general one must distinguish between superscripts and subscripts; see below.

In some fields, Einstein notation is referred to simply as index notation, or indicial notation. The use of the implied summation of repeated indices is also referred to as the Einstein Sum Convention.

Introduction

The basic idea of Einstein notation is that a covector and a vector can form a scalar:

:<math> y = c_1 x_1 + c_2 x_2 + c_3 x_3 + \cdots + c_n x_n \,</math>

This is typically written as an explicit sum:

:<math> y = \sum_{i=1}^n c_i x_i</math>

A scalar is invariant under transformations of basis. When the basis is changed, the components of a vector change by a linear transformation described by a matrix, while the covector changes by the inverse matrix. This is designed to guarantee that the linear function associated with the covector, the sum above, is the same no matter what the basis is. Since it is only this sum which is invariant under changes of basis, not the individual terms in the sum, this led Einstein to propose the convention that repeated indices imply the sum:

:<math> y = c_i x^i \,</math>

In Einstein notation, covector indices are subscripts and vector indices are superscripts. The position of the index has a specific meaning. It is important, of course, not to confuse a superscript with an exponent: all the relations with superscripts and subscripts are linear; they involve no power higher than the first. Here, the superscripted i on the symbol x represents an integer-valued index running from 1 to n.

The virtue of Einstein notation is that it represents the invariant quantities with a simple notation.
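The invariance can be demonstrated numerically. In the sketch below, the vector components, covector components, and change-of-basis matrix are all made-up values chosen only so that the matrix is invertible:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # vector components x^i in the old basis
c = np.array([1.0, 0.0, 2.0])   # covector components c_i in the old basis

# An invertible change-of-basis matrix (illustrative).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Vector components transform by the matrix, covector components by
# its inverse, so the paired scalar c_i x^i is unchanged.
x_new = A @ x
c_new = c @ np.linalg.inv(A)

scalar_old = c @ x          # 1*1 + 0*2 + 2*3 = 7.0
scalar_new = c_new @ x_new  # also 7.0, up to rounding
```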

Vector representations

First, we can use Einstein notation in linear algebra to distinguish easily between vectors and covectors: upper indices are used to label components (coordinates) of vectors, while lower indices are used to label components of covectors. However, vectors themselves (not their components) have lower indices, and covectors have upper indices.[2] Given a vector space V and its dual space <math>V^*</math>, one indexes vectors (elements of V) with subscripts, as in <math>v_1, v_2</math>, and covectors with superscripts, as in <math>w^1, w^2</math>. However, the coordinates of vectors and covectors follow the opposite convention: if <math>e_1, \dots, e_n</math> are a basis for V and <math>e^1, \dots, e^n</math> are the dual basis for <math>V^*</math>, then vectors are expressed as:

:<math> v = v^i e_i \,</math>

and covectors are expressed as

:<math> w = w_i e^i \,</math>

This is because a component of a vector (one of its coordinates, in some basis) is the value of a covector: the coefficient of <math>e_i</math> is the value of the corresponding covector in the dual basis: <math>v^i = e^i(v)</math>. Note that <math>e^i</math> is a covector, but <math>v^i</math> is a scalar. In other words, since basis vectors are given lower indices and coordinates are labeled with upper indices, summation notation suggests pairing them (in the obvious way) to express the vector.

In terms of covariance and contravariance of vectors, lower indices represent 'components' of covariant vectors (covectors), while upper indices represent components of contravariant vectors (vectors): they transform covariantly (resp., contravariantly) with respect to change of basis.

A particularly confusing notation is to use the same letter both for a (co)vector and its components, as in:

:<math> v = v^i e_i \,</math>

Here <math>v^i</math> does not mean "the covector v", but rather, "the components of the vector v".

Mnemonics

  • "Upper indices go up to down; lower indices go left to right"
  • You can stack vectors (column matrices) side-by-side:

:<math>\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}.</math>

Hence the lower index indicates which column you are in.

  • You can stack covectors (row matrices) top-to-bottom:

:<math>\begin{bmatrix} w^1 \\ w^2 \\ \vdots \\ w^n \end{bmatrix}.</math>

Hence the upper index indicates which row you are in.

Superscripts and subscripts vs. only subscripts

In the presence of a non-degenerate form (an isomorphism <math>V \to V^*</math>, for instance a Riemannian metric or Minkowski metric), one can raise and lower indices.

A basis gives such a form (via the dual basis), hence when working on <math>\mathbb{R}^n</math> with a fixed basis, one can work with just subscripts.

However, if one changes coordinates, the way that coefficients change depends on the variance of the object, and one cannot ignore the distinction; see covariance and contravariance of vectors.

Common operations in this notation

In Einstein notation, the usual element reference <math>A_{mn}</math> for the m-th row and n-th column of a matrix A becomes <math>A^m{}_n</math>. We can then write the following operations in Einstein notation as follows.

Inner product

Given a row vector <math>u</math> and a column vector <math>v</math> of the same size, we can take the inner product <math>u_i v^i</math>, which is a scalar: it is evaluating the covector on the vector.
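In NumPy's einsum, the inner product is the pattern where the single index is repeated and no index survives. The vectors below are arbitrary example values:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # covector components u_i (row vector)
v = np.array([4.0, 5.0, 6.0])   # vector components v^i (column vector)

# u_i v^i : the repeated index i is summed, leaving a scalar.
inner = np.einsum('i,i->', u, v)

# 1*4 + 2*5 + 3*6 = 32.
```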

Multiplication of a vector by a matrix

Given a matrix <math>A^i{}_j</math> and a (column) vector <math>v^j</math>, the coefficients of the product <math>Av</math> are given by <math>(Av)^i = A^i{}_j v^j</math>.

Similarly, the product of a (row) vector <math>u</math> with the matrix, <math>uA</math>, is equivalent to <math>(uA)_j = u_i A^i{}_j</math>.
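Both products can be sketched with einsum; which index letter repeats determines which axis is summed. The matrix and vectors are made-up values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([1.0, 1.0])   # column vector v^j
u = np.array([1.0, 2.0])   # row vector u_i

# (A v)^i = A^i_j v^j : the column index j is summed away.
Av = np.einsum('ij,j->i', A, v)   # [3.0, 7.0]

# (u A)_j = u_i A^i_j : the row index i is summed away.
uA = np.einsum('i,ij->j', u, A)   # [7.0, 10.0]
```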

Matrix multiplication

We can represent matrix multiplication as:

:<math> C^i{}_k = A^i{}_j \, B^j{}_k </math>

This expression is equivalent to the more conventional (and less compact) notation:

:<math> C_{ik} = \sum_{j=1}^N A_{ij} B_{jk} </math>
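The same contraction over the shared index j is what einsum performs below; the two matrices are arbitrary example values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# C^i_k = A^i_j B^j_k : the shared index j is summed away.
C = np.einsum('ij,jk->ik', A, B)

# Identical to the conventional matrix product A @ B.
```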

Trace

Given a square matrix <math>A^i{}_j</math>, summing over a common index <math>A^i{}_i</math> yields the trace.
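Repeating the index on a single matrix is expressed in einsum by repeating the letter on one operand, which sums the diagonal. A small example with made-up entries:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# A^i_i : the repeated index on the same operand sums the diagonal.
tr = np.einsum('ii->', A)

# 1 + 4 = 5, the trace of A.
```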

Outer product

The outer product of the column vector u by the row vector v yields an M × N matrix A:

:<math> A = u \, v \,</math>

In Einstein notation, we have:

:<math> A^i{}_j = u^i \, v_j \,</math>

Since i and j represent two different indices, and in this case over two different ranges M and N respectively, the indices are not eliminated by the multiplication. Both indices survive the multiplication to become the two indices of the newly-created matrix A.
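In einsum the surviving indices are exactly those written after the arrow, so the outer product keeps both. The vectors below are made-up, with M = 3 and N = 2:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # column vector u^i, M = 3
v = np.array([10.0, 20.0])      # row vector v_j,   N = 2

# A^i_j = u^i v_j : no index repeats, so nothing is summed;
# both i and j survive as the two indices of the 3x2 result.
A = np.einsum('i,j->ij', u, v)
```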

Coefficients on tensors and related

Given a tensor field and a basis (of linearly independent vector fields), the coefficients of the tensor field in a basis can be computed by evaluating on a suitable combination of the basis and dual basis, and inherits the correct indexing. We list notable examples.

Throughout, let <math>e_i</math> be a basis of vector fields (a moving frame).

  • metric tensor

:<math> g_{ij} = g(e_i, e_j) \,</math>

which follows from the formula

:<math> g = g_{ij} \, e^i \otimes e^j. </math>

This also applies for some operations that are not tensorial, for instance:

  • Christoffel symbols

:<math> \nabla_{e_i} e_j = \Gamma^k_{ij} \, e_k \,</math>

where <math>\nabla</math> is the covariant derivative. Equivalently, <math>\Gamma^k_{ij} = e^k(\nabla_{e_i} e_j)</math>.

  • commutator coefficients

:<math> [e_i, e_j] = c_{ij}{}^k \, e_k \,</math>

where <math>[\cdot\,,\cdot]</math> is the Lie bracket. Equivalently, <math>c_{ij}{}^k = e^k([e_i, e_j])</math>.

Vector dot product

In mechanics and engineering, vectors in 3D space are often described in relation to orthogonal unit vectors i, j and k.

If the basis vectors i, j, and k are instead expressed as e_1, e_2, and e_3, a vector can be expressed in terms of a summation:

:<math> \mathbf{u} = u^1 e_1 + u^2 e_2 + u^3 e_3 = \sum_{i=1}^3 u^i e_i </math>

In Einstein notation, the summation symbol is omitted since the index i is repeated once as an upper index and once as a lower index, and we simply write

:<math> \mathbf{u} = u^i e_i \,</math>

Using e_1, e_2, and e_3 instead of i, j, and k, together with Einstein notation, we obtain a concise algebraic presentation of vector and tensor equations. For example,

:<math> \mathbf{u} \cdot \mathbf{v} = (u^i e_i) \cdot (v^j e_j) = u^i v^j (e_i \cdot e_j) </math>

Since

:<math> e_i \cdot e_j = \delta_{ij} \,</math>

where <math>\delta_{ij}</math> is the Kronecker delta, which is equal to 1 when i = j, and 0 otherwise, we find

:<math> \mathbf{u} \cdot \mathbf{v} = u^i v^j \delta_{ij} \,</math>

One can use <math>\delta_{ij}</math> to lower indices of the vectors; namely, <math>u_i = \delta_{ij} u^j</math> and <math>v_i = \delta_{ij} v^j</math>. Then

:<math> \mathbf{u} \cdot \mathbf{v} = u^i v_i = u_i v^i \,</math>

Note that, despite <math>u_i = u^i</math> for any fixed <math>i</math>, it is incorrect to write

:<math> \mathbf{u} \cdot \mathbf{v} = u^i v^i \,</math>

since on the right hand side the index is repeated both times as an upper index and so there is no summation over according to the Einstein convention. Rather, one should explicitly write the summation:

:<math> \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^3 u^i v^i. </math>
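The derivation through the Kronecker delta can be sketched as a triple einsum, with the identity matrix standing in for <math>\delta_{ij}</math>; the vectors are made-up example values:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
delta = np.eye(3)   # Kronecker delta: 1 on the diagonal, 0 elsewhere

# u . v = u^i v^j (e_i . e_j) = u^i v^j delta_ij
dot = np.einsum('i,j,ij->', u, v, delta)

# Same as the ordinary dot product: 4 + 10 + 18 = 32.
```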

Vector cross product

For the cross product,

:<math> \mathbf{u} \times \mathbf{v} = \varepsilon^i{}_{jk} \, u^j v^k \, e_i </math>

where <math>\varepsilon^i{}_{jk} = \delta^{il} \varepsilon_{ljk}</math> and <math>\varepsilon_{ijk} = e_i \cdot (e_j \times e_k)</math>, with the Levi-Civita symbol <math>\varepsilon_{ijk}</math> defined by:

:<math> \varepsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3) \\ -1 & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3) \\ 0 & \text{otherwise.} \end{cases} </math>

One then recovers

:<math> \mathbf{u} \times \mathbf{v} = \varepsilon^i{}_{jk} \, u^j v^k \, e_i </math>

from

:<math> e_j \times e_k = \varepsilon^i{}_{jk} \, e_i. </math>

In other words, if <math>\mathbf{w} = \mathbf{u} \times \mathbf{v}</math>, then <math>w^i = \varepsilon^i{}_{jk} \, u^j v^k</math>, so that <math>\mathbf{w} = w^i e_i</math>.
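The component formula <math>w^i = \varepsilon^i{}_{jk} u^j v^k</math> can be checked by building the Levi-Civita symbol as a 3×3×3 array (0-based indices here) and contracting; the test vectors e_1 and e_2 are chosen so that the expected result is e_3:

```python
import numpy as np

# Build the Levi-Civita symbol eps[i, j, k] with 0-based indices.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations of (0, 1, 2)
    eps[i, k, j] = -1.0   # swapping the last two gives an odd permutation

u = np.array([1.0, 0.0, 0.0])   # e_1
v = np.array([0.0, 1.0, 0.0])   # e_2

# w^i = eps^i_jk u^j v^k : both j and k are summed away.
w = np.einsum('ijk,j,k->i', eps, u, v)

# e_1 x e_2 = e_3, i.e. [0, 0, 1].
```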

Abstract definitions

In the traditional usage, one has in mind a vector space V  with finite dimension n, and a specific basis of V. We can write the basis vectors as e_1, e_2, ..., e_n. Then if v is a vector in V, it has coordinates <math>v^1, \dots, v^n</math> relative to this basis.

The basic rule is:

:<math> v = v^i e_i. \,</math>

In this expression, it was assumed that the term on the right side was to be summed as i  goes from 1 to n, because the index i does not appear on both sides of the expression. (Or, using Einstein's convention, because the index i appears twice.)

An index that is summed over is a summation index; here, it is the index i. It is also known as a dummy index since the result does not depend on it; thus we could also write, for example:

:<math> v = v^j e_j. \,</math>

An index that is not summed over is a free index and should be found in each term of the equation or formula. Compare dummy indices and free indices with free variables and bound variables.

The value of the Einstein convention is that it applies to other vector spaces built from V  using the tensor product and duality. For example, <math>V \otimes V</math>, the tensor product of V  with itself, has a basis consisting of tensors of the form <math>e_{ij} = e_i \otimes e_j</math>. Any tensor T in <math>V \otimes V</math> can be written as:

:<math> \mathbf{T} = T^{ij} e_{ij}. \,</math>

V*, the dual of V, has a basis e^1, e^2, ..., e^n which obeys the rule

:<math> e^i(e_j) = \delta^i_j. \,</math>

Here δ is the Kronecker delta, so <math>\delta^i_j</math> is 1 if i = j  and 0 otherwise.

As

:<math> \operatorname{Hom}(V, W) = V^* \otimes W \,</math>

the row-column coordinates on a matrix correspond to the upper-lower indices on the tensor product.

Examples

Einstein summation is clarified with the help of a few simple examples. Consider four-dimensional spacetime, where indices run from 0 to 3:

:<math> \mathbf{a}^\mu \mathbf{b}_\mu = \mathbf{a}^0 \mathbf{b}_0 + \mathbf{a}^1 \mathbf{b}_1 + \mathbf{a}^2 \mathbf{b}_2 + \mathbf{a}^3 \mathbf{b}_3 </math>

The above example is one of contraction, a common tensor operation. The tensor <math>T^{\mu\nu}{}_\mu</math> becomes a new tensor by summing over the first upper index and the lower index. Typically the resulting tensor is renamed with the contracted indices removed:

:<math> T^{\mu\nu}{}_\mu = S^\nu \,</math>

For a familiar example, consider the dot product of two vectors a and b. The dot product is defined simply as summation over the indices of a and b:

:<math> \mathbf{a} \cdot \mathbf{b} = a_i b^i \,</math>

which is our familiar formula for the vector dot product. Remember it is sometimes necessary to change the components of a in order to lower its index; however, this is not necessary in Euclidean space, or any space with a metric equal to its inverse metric (e.g., flat spacetime).
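Lowering an index with the metric and then contracting can be sketched numerically. The Minkowski metric with signature (+, -, -, -) is one conventional choice, and the four-vector components below are made-up:

```python
import numpy as np

# Minkowski metric eta_{mu nu}, signature (+, -, -, -): a convention choice.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

a = np.array([2.0, 1.0, 0.0, 0.0])   # contravariant components a^mu
b = np.array([3.0, 1.0, 0.0, 0.0])   # contravariant components b^mu

# Lower an index with the metric: b_mu = eta_{mu nu} b^nu
b_lower = np.einsum('mn,n->m', eta, b)   # [3.0, -1.0, 0.0, 0.0]

# Contract: a^mu b_mu = 2*3 + 1*(-1) = 5.
s = np.einsum('m,m->', a, b_lower)
```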

See also

Notes

  1. ^ Einstein, Albert (1916). "The Foundation of the General Theory of Relativity" (PDF). Annalen der Physik. Retrieved 2006-09-03.
  2. ^ This applies only for numerical indices. The situation is the opposite for abstract indices. Then, vectors themselves carry upper abstract indices and covectors carry lower abstract indices. Elements of a basis of vectors may carry a lower numerical index and an upper abstract index.

References

External links