Summation

In mathematics, summation is the addition of a sequence of numbers, called addends or summands; the result is their sum or total. Besides numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.

Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.

The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.

Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100. Otherwise, summation is denoted by using Σ notation, where $\sum$ is an enlarged capital Greek letter sigma. For example, the sum of the first n natural numbers can be denoted as

$$\sum_{i=1}^{n} i.$$

For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,[a]

$$\sum_{i=1}^{100} i = \frac{100 \cdot 101}{2} = 5050.$$

Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article.

Notation

Capital-sigma notation

The summation symbol

Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, $\sum$, an enlarged form of the upright capital Greek letter sigma. This is defined as

$$\sum_{i=m}^{n} a_i = a_m + a_{m+1} + a_{m+2} + \cdots + a_{n-1} + a_n$$

where i is the index of summation; $a_i$ is an indexed variable representing each term of the sum; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by one for each successive term, stopping when i = n.[b]

This is read as "sum of $a_i$, from i = m to n".

Here is an example showing the summation of squares:

$$\sum_{i=3}^{6} i^2 = 3^2 + 4^2 + 5^2 + 6^2 = 86.$$
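
The Σ notation maps directly onto a loop or a built-in reduction in most programming languages; here is a minimal Python sketch of the sum of squares above (purely illustrative):

```python
# Sum of squares from i = 3 to i = 6, written as an explicit loop
# and with Python's built-in sum(); both mirror the Sigma notation.
total = 0
for i in range(3, 7):  # the upper bound 6 is inclusive in Sigma notation
    total += i ** 2

assert total == 86
assert sum(i ** 2 for i in range(3, 7)) == 86
```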

In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as $i$,[c] $j$, $k$, and $n$; the latter is also often used for the upper bound of a summation.

Alternatively, index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to n.[1] For example, one might write that:

$$\sum a_i^2 = \sum_{i=1}^{n} a_i^2.$$

Generalizations of this notation are often used, in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:

$$\sum_{0 \le k < 100} f(k)$$

is an alternative notation for $\sum_{k=0}^{99} f(k)$, the sum of $f(k)$ over all (integers) $k$ in the specified range. Similarly,

$$\sum_{x \in S} f(x)$$

is the sum of $f(x)$ over all elements $x$ in the set $S$, and

$$\sum_{d \mid n} \mu(d)$$

is the sum of $\mu(d)$ over all positive integers $d$ dividing $n$.[d]
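
Such condition-based sums translate naturally into filtered reductions; a small Python sketch (the function f, the set S, and the divisor count are illustrative placeholders, not taken from the article):

```python
# Sum of f(k) over all integers k with 0 <= k < 100 (f is a placeholder here).
f = lambda k: k * k
total_range = sum(f(k) for k in range(100))

# Sum of f(x) over all elements x of a finite set S.
S = {2, 3, 5, 7}
total_set = sum(f(x) for x in S)          # 4 + 9 + 25 + 49 = 87

# Sum over all positive divisors d of n = 12 (here simply counting them).
n = 12
assert sum(1 for d in range(1, n + 1) if n % d == 0) == 6   # 1, 2, 3, 4, 6, 12
```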

There are also ways to generalize the use of many sigma signs. For example,

$$\sum_{i,j}$$

is the same as

$$\sum_i \sum_j.$$

A similar notation is used for the product of a sequence, where $\prod$, an enlarged form of the Greek capital letter pi, is used instead of $\sum$.

Special cases

It is possible to sum fewer than 2 numbers:

  • If the summation has one summand $x$, then the evaluated sum is $x$.
  • If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.

These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if $n = m$ in the definition above, then there is only one term in the sum; if $n = m - 1$, then there is none.
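
For instance, Python's built-in sum follows the same conventions, which gives a quick sanity check:

```python
# The empty sum is the additive identity, and a one-term sum is the term itself.
assert sum([]) == 0       # empty sum
assert sum([42]) == 42    # single summand
```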

Formal definition

Summation may be defined recursively as follows:

$\sum_{i=a}^{b} g(i) = 0$, for $b < a$;
$\sum_{i=a}^{b} g(i) = g(b) + \sum_{i=a}^{b-1} g(i)$, for $b \geq a$.
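
The recursive definition can be transcribed almost literally into code; a minimal Python sketch (the function name summation is mine):

```python
def summation(g, a, b):
    """Recursively compute g(a) + g(a+1) + ... + g(b); empty when b < a."""
    if b < a:
        return 0
    return g(b) + summation(g, a, b - 1)

assert summation(lambda i: i, 1, 100) == 5050
```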

Measure theory notation

In the notation of measure and integration theory, a sum can be expressed as a definite integral,

$$\sum_{k=a}^{b} f(k) = \int_{[a,b]} f \, d\mu,$$

where $[a,b]$ is the subset of the integers from $a$ to $b$, and where $\mu$ is the counting measure over the integers.

Calculus of finite differences

Given a function f that is defined over the integers in the interval [m, n], the following equation holds:

$$f(n) - f(m) = \sum_{i=m}^{n-1} \left(f(i+1) - f(i)\right).$$

This is known as a telescoping series and is the analogue of the fundamental theorem of calculus in calculus of finite differences, which states that:

$$f(n) - f(m) = \int_{m}^{n} f'(x) \, dx,$$

where

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$

is the derivative of f.

An example of an application of the above equation is the following:

$$n^k = \sum_{i=0}^{n-1} \left((i+1)^k - i^k\right).$$

Using the binomial theorem, this may be rewritten as:

$$n^k = \sum_{i=0}^{n-1} \left(\sum_{j=0}^{k-1} \binom{k}{j} i^j\right).$$

The above formula is more commonly used for inverting the difference operator $\Delta$, defined by:

$$\Delta(f)(n) = f(n+1) - f(n),$$

where f is a function defined on the nonnegative integers. Thus, given such a function f, the problem is to compute the antidifference of f, a function $F = \Delta^{-1} f$ such that $\Delta F = f$. That is, $F(n+1) - F(n) = f(n)$. This function is defined up to the addition of a constant, and may be chosen as[2]

$$F(n) = \sum_{i=0}^{n-1} f(i).$$

There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where $f(n) = n^k$ and, by linearity, for every polynomial function of n.
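
Under the convention just given, the antidifference is simply a running total of f; a minimal Python sketch (the names antidifference and F are mine):

```python
def antidifference(f, n):
    """F(n) = f(0) + f(1) + ... + f(n-1), so that F(n+1) - F(n) = f(n)."""
    return sum(f(i) for i in range(n))

f = lambda n: n ** 2                     # example: f(n) = n^2
F = [antidifference(f, n) for n in range(10)]
assert all(F[n + 1] - F[n] == f(n) for n in range(9))
```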

Approximation by definite integrals

Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function f:

$$\int_{s-1}^{t} f(x) \, dx \le \sum_{i=s}^{t} f(i) \le \int_{s}^{t+1} f(x) \, dx$$

and for any decreasing function f:

$$\int_{s}^{t+1} f(x) \, dx \le \sum_{i=s}^{t} f(i) \le \int_{s-1}^{t} f(x) \, dx.$$

For more general approximations, see the Euler–Maclaurin formula.

For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance

$$\frac{b-a}{n} \sum_{i=0}^{n-1} f\left(a + i \, \frac{b-a}{n}\right) \approx \int_{a}^{b} f(x) \, dx,$$

since the right-hand side is by definition the limit for $n \to \infty$ of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
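
A rough numerical illustration of this approximation in Python, using f(x) = x² on [0, 1] so that the exact integral is 1/3 (the choice of function and n is arbitrary):

```python
# Left Riemann sum of f(x) = x^2 over [0, 1] with n subintervals,
# compared against the exact integral 1/3.
f = lambda x: x ** 2
a, b, n = 0.0, 1.0, 1000
riemann = (b - a) / n * sum(f(a + i * (b - a) / n) for i in range(n))
print(riemann, abs(riemann - 1 / 3))   # the error shrinks as n grows
```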

Identities

The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series.

General identities

$\sum_{n=s}^{t} C \cdot f(n) = C \cdot \sum_{n=s}^{t} f(n)$ (distributivity)[3]
$\sum_{n=s}^{t} f(n) \pm \sum_{n=s}^{t} g(n) = \sum_{n=s}^{t} \left(f(n) \pm g(n)\right)$ (commutativity and associativity)[3]
$\sum_{n=s}^{t} f(n) = \sum_{n=s+p}^{t+p} f(n-p)$ (index shift)
$\sum_{n \in B} f(n) = \sum_{m \in A} f(\sigma(m))$, for a bijection σ from a finite set A onto a set B (index change); this generalizes the preceding formula.
$\sum_{n=s}^{t} f(n) = \sum_{n=s}^{j} f(n) + \sum_{n=j+1}^{t} f(n)$ (splitting a sum, using associativity)
$\sum_{n=s}^{t} f(n) = \sum_{n=0}^{t} f(n) - \sum_{n=0}^{s-1} f(n)$ (a variant of the preceding formula)
$\sum_{n=s}^{t} f(n) = \sum_{n=0}^{t-s} f(t-n)$ (the sum from the first term up to the last is equal to the sum from the last down to the first)
$\sum_{n=0}^{t} f(n) = \sum_{n=0}^{t} f(t-n)$ (a particular case of the formula above)
$\sum_{i=k_0}^{k_1} \sum_{j=l_0}^{l_1} a_{i,j} = \sum_{j=l_0}^{l_1} \sum_{i=k_0}^{k_1} a_{i,j}$ (commutativity and associativity, again)
$\sum_{k \le j \le i \le n} a_{i,j} = \sum_{i=k}^{n} \sum_{j=k}^{i} a_{i,j} = \sum_{j=k}^{n} \sum_{i=j}^{n} a_{i,j} = \sum_{j=0}^{n-k} \sum_{i=k}^{n-j} a_{i+j,i}$ (another application of commutativity and associativity)
$\sum_{n=2s}^{2t+1} f(n) = \sum_{n=s}^{t} f(2n) + \sum_{n=s}^{t} f(2n+1)$ (splitting a sum into its odd and even parts, for even indexes)
$\sum_{n=2s+1}^{2t} f(n) = \sum_{n=s+1}^{t} f(2n) + \sum_{n=s+1}^{t} f(2n-1)$ (splitting a sum into its odd and even parts, for odd indexes)
$\left(\sum_{i=k_0}^{k_1} a_i\right) \left(\sum_{j=l_0}^{l_1} c_j\right) = \sum_{i=k_0}^{k_1} \sum_{j=l_0}^{l_1} a_i c_j$ (distributivity)
$\sum_{i=s}^{m} \sum_{j=t}^{n} a_i c_j = \left(\sum_{i=s}^{m} a_i\right) \left(\sum_{j=t}^{n} c_j\right)$ (distributivity allows factorization)
$\sum_{n=s}^{t} \log_b f(n) = \log_b \prod_{n=s}^{t} f(n)$ (the logarithm of a product is the sum of the logarithms of the factors)
$C^{\left[\sum_{n=s}^{t} f(n)\right]} = \prod_{n=s}^{t} C^{f(n)}$ (the exponential of a sum is the product of the exponentials of the summands)
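
A quick numerical spot-check of a few of the identities above (index shift, reversal, and the even/odd split), with an arbitrary test function; a minimal Python sketch:

```python
f = lambda n: n ** 3 - 2 * n + 1   # arbitrary test function
s, t, p = 2, 9, 5

# Index shift
assert sum(f(n) for n in range(s, t + 1)) == sum(f(n - p) for n in range(s + p, t + p + 1))

# Reversal: the sum from first to last equals the sum from last to first
assert sum(f(n) for n in range(s, t + 1)) == sum(f(t - n) for n in range(0, t - s + 1))

# Splitting into even and odd parts (even starting index, odd ending index)
lhs = sum(f(n) for n in range(2 * s, 2 * t + 2))
rhs = sum(f(2 * n) for n in range(s, t + 1)) + sum(f(2 * n + 1) for n in range(s, t + 1))
assert lhs == rhs
```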

Powers and logarithm of arithmetic progressions

$\sum_{i=1}^{n} c = nc$ for every c that does not depend on i
$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ (Sum of the simplest arithmetic progression, consisting of the first n natural numbers.)[2]: 52 
$\sum_{i=1}^{n} (2i - 1) = n^2$ (Sum of first odd natural numbers)
$\sum_{i=1}^{n} 2i = n(n+1)$ (Sum of first even natural numbers)
$\sum_{i=1}^{n} \log i = \log n!$ (A sum of logarithms is the logarithm of the product)
$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$ (Sum of the first squares, see square pyramidal number.) [2]: 52 
$\sum_{i=1}^{n} i^3 = \left(\sum_{i=1}^{n} i\right)^2 = \left(\frac{n(n+1)}{2}\right)^2$ (Nicomachus's theorem) [2]: 52 

More generally, one has Faulhaber's formula for $p > 1$:

$$\sum_{k=1}^{n} k^p = \frac{n^{p+1}}{p+1} + \frac{1}{2} n^p + \sum_{k=2}^{p} \binom{p}{k} \frac{B_k}{p-k+1} n^{p-k+1},$$

where $B_k$ denotes a Bernoulli number, and $\binom{p}{k}$ is a binomial coefficient.
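
Faulhaber's formula is easy to verify numerically for small exponents; a rough Python sketch, limited to 2 ≤ p ≤ 6 by the hard-coded Bernoulli numbers and using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

# First few Bernoulli numbers B_2..B_6 (the n^p/2 term plays the role of B_1).
B = {2: Fraction(1, 6), 3: Fraction(0), 4: Fraction(-1, 30), 5: Fraction(0), 6: Fraction(1, 42)}

def faulhaber(n, p):
    """Closed form for 1^p + 2^p + ... + n^p per the formula above (2 <= p <= 6 here)."""
    total = Fraction(n ** (p + 1), p + 1) + Fraction(n ** p, 2)
    total += sum(comb(p, k) * B[k] / (p - k + 1) * n ** (p - k + 1) for k in range(2, p + 1))
    return total

for p in range(2, 7):
    for n in range(1, 20):
        assert faulhaber(n, p) == sum(k ** p for k in range(1, n + 1))
```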

Summation index in exponents

In the following summations, a is assumed to be different from 1.

$\sum_{i=m}^{n-1} a^i = \frac{a^m - a^n}{1 - a}$ (sum of a geometric progression)
$\sum_{i=0}^{n-1} \frac{1}{2^i} = 2 - \frac{1}{2^{n-1}}$ (special case for a = 1/2)
$\sum_{i=0}^{n-1} i a^i = \frac{a - n a^n + (n-1) a^{n+1}}{(1 - a)^2}$ (a times the derivative with respect to a of the geometric progression)
$\sum_{i=0}^{n-1} (b + id) a^i = \frac{b - (b + nd) a^n}{1 - a} + \frac{d a (1 - a^n)}{(1 - a)^2}$ (sum of an arithmetico–geometric sequence)
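
An exact spot-check of the geometric and arithmetico-geometric sums with rational arithmetic; a minimal Python sketch (the particular values of a, b, d, m, n are arbitrary):

```python
from fractions import Fraction

a, b, d, m, n = Fraction(3, 5), 7, 2, 1, 12

# Geometric progression: sum of a^i for i = m .. n-1
assert sum(a ** i for i in range(m, n)) == (a ** m - a ** n) / (1 - a)

# Arithmetico-geometric sequence: sum of (b + i*d) * a^i for i = 0 .. n-1
lhs = sum((b + i * d) * a ** i for i in range(n))
rhs = (b - (b + n * d) * a ** n) / (1 - a) + d * a * (1 - a ** n) / (1 - a) ** 2
assert lhs == rhs
```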

Binomial coefficients and factorials

There exist very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.

Involving the binomial theorem

$(a + b)^n = \sum_{i=0}^{n} \binom{n}{i} a^{n-i} b^i$, the binomial theorem
$\sum_{i=0}^{n} \binom{n}{i} = 2^n$, the special case where a = b = 1
$\sum_{i=0}^{n} \binom{n}{i} p^i (1-p)^{n-i} = 1$, the special case where p = a = 1 − b, which, for $0 \le p \le 1$, expresses the sum of the binomial distribution
$\sum_{i=0}^{n} i \binom{n}{i} = n \, 2^{n-1}$, the value at a = b = 1 of the derivative with respect to a of the binomial theorem
$\sum_{i=0}^{n} \frac{\binom{n}{i}}{i+1} = \frac{2^{n+1} - 1}{n+1}$, the value at a = b = 1 of the antiderivative with respect to a of the binomial theorem
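
These identities are easy to spot-check with math.comb; a minimal Python sketch (exact arithmetic via Fraction for the last one):

```python
from math import comb
from fractions import Fraction

n = 10
assert sum(comb(n, i) for i in range(n + 1)) == 2 ** n
assert sum(i * comb(n, i) for i in range(n + 1)) == n * 2 ** (n - 1)
assert sum(Fraction(comb(n, i), i + 1) for i in range(n + 1)) == Fraction(2 ** (n + 1) - 1, n + 1)
```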

Involving permutation numbers

In the following summations, ${}_{n}P_{k}$ is the number of k-permutations of n.

$\sum_{k=0}^{n} {}_{n}P_{k} = \lfloor n! \cdot e \rfloor$, where $n \ge 1$ and $\lfloor x \rfloor$ denotes the floor function.

Others

Harmonic numbers

$\sum_{i=1}^{n} \frac{1}{i} = H_n$ (the nth harmonic number)
$\sum_{i=1}^{n} \frac{1}{i^k} = H_n^k$ (a generalized harmonic number)
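
Harmonic numbers can be tabulated exactly with rational arithmetic; a minimal Python sketch (the function name harmonic is mine):

```python
from fractions import Fraction

def harmonic(n, k=1):
    """Generalized harmonic number: sum of 1/i^k for i = 1 .. n."""
    return sum(Fraction(1, i ** k) for i in range(1, n + 1))

assert harmonic(4) == Fraction(25, 12)        # H_4 = 1 + 1/2 + 1/3 + 1/4
assert harmonic(3, k=2) == Fraction(49, 36)   # 1 + 1/4 + 1/9
```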

Growth rates

The following are useful approximations (using theta notation):

$\sum_{i=1}^{n} i^c \in \Theta(n^{c+1})$ for real c greater than −1
$\sum_{i=1}^{n} \frac{1}{i} \in \Theta(\log n)$ (See Harmonic number)
$\sum_{i=1}^{n} c^i \in \Theta(c^n)$ for real c greater than 1
$\sum_{i=1}^{n} \log(i)^c \in \Theta(n \cdot \log(n)^c)$ for non-negative real c
$\sum_{i=1}^{n} \log(i)^c \cdot i^d \in \Theta(n^{d+1} \cdot \log(n)^c)$ for non-negative real c, d
$\sum_{i=1}^{n} \log(i)^c \cdot i^d \cdot b^i \in \Theta(n^d \cdot \log(n)^c \cdot b^n)$ for non-negative real b > 1, c, d

History

  • In 1772, usage of Σ and Σn is attested by Lagrange.[7][9]
  • In 1823, the capital letter S is attested as a summation symbol for series. This usage was apparently widespread.[7]
  • In 1829, the summation symbol Σ is attested by Fourier and C. G. J. Jacobi.[7] Fourier's use includes lower and upper bounds, for example:[10][11]

$$\sum_{i=1}^{\infty} e^{-i^2 t} \ldots$$

Notes

  1. ^ For details, see Triangular number.
  2. ^ For a detailed exposition on summation notation, and arithmetic with sums, see Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). "Chapter 2: Sums". Concrete Mathematics: A Foundation for Computer Science (PDF) (2nd ed.). Addison-Wesley Professional. ISBN 978-0201558029.[permanent dead link]
  3. ^ in contexts where there is no possibility of confusion with the imaginary unit $i$
  4. ^ Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet ($i$ through $q$) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see $x$ instead of $k$ in the above formulae involving $k$.

References

  1. ^ "Summation Notation". www.columbia.edu. Retrieved 2020-08-16.
  2. ^ a b c d Handbook of Discrete and Combinatorial Mathematics, Kenneth H. Rosen, John G. Michaels, CRC Press, 1999, ISBN 0-8493-0149-1.
  3. ^ a b "Calculus I - Summation Notation". tutorial.math.lamar.edu. Retrieved 2020-08-16.
  4. ^ Burton, David M. (2011). The History of Mathematics: An Introduction (7th ed.). McGraw-Hill. p. 414. ISBN 978-0-07-338315-6.
  5. ^ Leibniz, Gottfried Wilhelm (1899). Gerhardt, Karl Immanuel (ed.). Der Briefwechsel von Gottfried Wilhelm Leibniz mit Mathematikern. Erster Band. Berlin: Mayer & Müller. p. 154.
  6. ^ a b Cajori (1929), pp. 181-182.
  7. ^ a b c d Cajori (1929), p. 61.
  8. ^ Euler, Leonhard (1755). Institutiones Calculi differentialis (in Latin). Petropolis. p. 27.
  9. ^ Lagrange, Joseph-Louis (1867–1892). Oeuvres de Lagrange. Tome 3 (in French). Paris. p. 451.{{cite book}}: CS1 maint: location missing publisher (link)
  10. ^ Mémoires de l'Académie royale des sciences de l'Institut de France pour l'année 1825, tome VIII (in French). Paris: Didot. 1829. pp. 581-622.
  11. ^ Fourier, Jean-Baptiste Joseph (1888–1890). Oeuvres de Fourier. Tome 2 (in French). Paris: Gauthier-Villars. p. 149.

Bibliography

  • Cajori, Florian (1929). A History of Mathematical Notations, Vol. II: Notations Mainly in Higher Mathematics. Open Court Publishing.

External links

  • Media related to Summation at Wikimedia Commons