Summation

From Wikipedia, the free encyclopedia
This article is about finite summation. For infinite summation, see Series (mathematics).
Terminology of calculation results:

  • Addition (+): augend + addend = sum (the terms being added are also called summands)
  • Subtraction (−): minuend − subtrahend = difference
  • Multiplication (×): multiplier × multiplicand = product (both terms are also called factors)
  • Division (÷): dividend ÷ divisor = quotient
  • Modulo (mod): dividend mod divisor = remainder
  • Exponentiation: base^exponent = power
  • nth root (√): the degree-th root of the radicand = root
  • Logarithm (log): log_base(antilogarithm) = logarithm

Summation is the operation of adding a sequence of numbers; the result is their sum or total. If numbers are added sequentially from left to right, any intermediate result is a partial sum, prefix sum, or running total of the summation. The numbers to be summed (called addends, or sometimes summands) may be integers, rational numbers, real numbers, or complex numbers. Besides numbers, other types of values can be added as well: vectors, matrices, polynomials and, in general, elements of any additive group (or even monoid). For finite sequences of such elements, summation always produces a well-defined sum (possibly by virtue of the convention for empty sums).

The summation of an infinite sequence of values is called a series. A value of such a series may often be defined by means of a limit (although sometimes the value may be infinite, and often no value results at all). Another notion involving limits of finite sums is integration. The term summation has a special meaning related to extrapolation in the context of divergent series.

The summation of the sequence [1, 2, 4, 2] is an expression whose value is the sum of the members of the sequence. In the example, 1 + 2 + 4 + 2 = 9. Since addition is associative, the value does not depend on how the additions are grouped; for instance, (1 + 2) + (4 + 2) and 1 + ((2 + 4) + 2) both have the value 9, so parentheses are usually omitted in repeated additions. Addition is also commutative, so permuting the terms of a finite sequence does not change its sum (for infinite summations this property may fail; see absolute convergence for conditions under which it still holds).

There is no special notation for the summation of such explicit sequences, as the corresponding repeated addition expression will do. There is only a slight difficulty if the sequence has fewer than two elements: the summation of a sequence of one term involves no plus sign (it is indistinguishable from the term itself) and the summation of the empty sequence cannot even be written down (but one can write its value "0" in its place). If, however, the terms of the sequence are given by a regular pattern, possibly of variable length, then a summation operator may be useful or even essential. For the summation of the sequence of consecutive integers from 1 to 100 one could use an addition expression involving an ellipsis to indicate the missing terms: 1 + 2 + 3 + 4 + ... + 99 + 100. In this case the reader easily guesses the pattern; however, for more complicated patterns, one needs to be precise about the rule used to find successive terms, which can be achieved by using the summation operator "Σ". Using this sigma notation the above summation is written as:

\sum_{i \mathop =1}^{100}i.

The value of this summation is 5050. It can be found without performing 99 additions, since it can be shown (for instance by mathematical induction) that

\sum_{ i \mathop =1}^ni = \frac{n(n+1)}{2}

for all natural numbers n (see Triangular number). More generally, formulae exist for many summations of terms following a regular pattern.
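The closed form above is easy to verify numerically; here is a minimal Python sketch (the function name `triangular` is illustrative, not standard):

```python
def triangular(n):
    """Closed form for 1 + 2 + ... + n, i.e. the nth triangular number."""
    return n * (n + 1) // 2

# The Gauss example from the text: the integers from 1 to 100 sum to 5050.
assert triangular(100) == sum(range(1, 101)) == 5050
```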

The term "indefinite summation" refers to the search for an inverse image of a given infinite sequence s of values for the forward difference operator, in other words for a sequence, called antidifference of s, whose finite differences are given by s. By contrast, summation as discussed in this article is called "definite summation".

When it is necessary to clarify that numbers are added with their signs, the term algebraic sum[1] is used. For example, in electric circuit theory Kirchhoff's circuit laws consider the algebraic sum of currents in a network of conductors meeting at a point, assigning opposite signs to currents flowing in and out of the node.

Notation

Capital-sigma notation

Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, ∑, an enlarged form of the upright capital Greek letter sigma. This is defined as:

\sum_{i \mathop =m}^n a_i = a_m + a_{m+1} + a_{m+2} +\cdots+ a_{n-1} + a_n.

Here, i represents the index of summation; a_i is an indexed variable representing each successive term in the series; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by 1 for each successive term, stopping when i = n.[2]

Here is an example showing the summation of squares (each term raised to the power 2):

\sum_{i \mathop =3}^6 i^2 = 3^2+4^2+5^2+6^2 = 86.
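Sigma notation translates directly into a loop over the index range; in Python the same sum can be written as a generator expression (note that `range` excludes its upper endpoint, so the upper bound n = 6 becomes `range(3, 7)`):

```python
# Sum of i**2 for i running from the lower bound m = 3 to the upper bound n = 6.
total = sum(i**2 for i in range(3, 7))
assert total == 86  # 9 + 16 + 25 + 36
```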

Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in:

\sum a_i^2 = \sum_{ i \mathop =1}^n a_i^2.

One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:

\sum_{0\le k< 100} f(k)

is the sum of f(k) over all integers k in the specified range,

\sum_{x \mathop \in S} f(x)

is the sum of f(x) over all elements x in the set S, and

\sum_{d|n}\;\mu(d)

is the sum of \mu(d) over all positive integers d dividing n.[3]
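The divisor-sum notation can be sketched in Python. The helper `mobius` below is a naive trial-division implementation of the Möbius function, written here only for illustration; a classical fact (not stated above) is that this particular divisor sum equals 1 for n = 1 and 0 for every n > 1:

```python
def mobius(d):
    """Naive Moebius function: 0 if d has a squared prime factor,
    else (-1) raised to the number of prime factors of d."""
    result, p = 1, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if d > 1:                     # leftover prime factor
        result = -result
    return result

def mobius_divisor_sum(n):
    """Sum of mu(d) over all positive divisors d of n."""
    return sum(mobius(d) for d in range(1, n + 1) if n % d == 0)
```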

There are also ways to generalize the use of many sigma signs. For example,

\sum_{\ell,\ell'}

is the same as

\sum_\ell\sum_{\ell'}.

Similar notation is used to denote the product of a sequence, which is analogous to summation but uses multiplication instead of addition (and gives 1 for an empty sequence instead of 0). The same basic structure is used, with \prod, an enlarged form of the Greek capital letter Pi, replacing the \sum.

Special cases

It is possible to sum fewer than 2 numbers:

  • If the summation has one summand x, then the evaluated sum is x.
  • If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.

These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if n=m in the definition above, then there is only one term in the sum; if n=m-1, then there is none.
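Both degenerate cases are built into Python's standard functions, which makes for a quick check (the empty product, mentioned later in this article, is included for comparison):

```python
import math

assert sum([7]) == 7        # one summand: the sum is the term itself
assert sum([]) == 0         # the empty sum is the additive identity
assert math.prod([]) == 1   # the empty product is the multiplicative identity
```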

Formal definition

Summation may be defined recursively as follows:

\sum_{i=a}^a g(i)=g(a) \,
\sum_{i=a}^b g(i)=g(b)+\sum_{i=a}^{b-1} g(i), for b > a.
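The recursive definition carries over to code almost verbatim; a minimal sketch (assuming b ≥ a, and ignoring recursion-depth limits for large ranges):

```python
def summation(g, a, b):
    """Recursively computed sum of g(i) for i = a .. b, assuming b >= a."""
    if b == a:                        # base case: a single term
        return g(a)
    return g(b) + summation(g, a, b - 1)   # peel off the last term

assert summation(lambda i: i, 1, 100) == 5050
```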

Measure theory notation

In the notation of measure and integration theory, a sum can be expressed as a definite integral,

\sum_{k \mathop =a}^b f(k) = \int_{[a,b]} f\,d\mu

where [a, b] is the subset of the integers from a to b, and where \mu is the counting measure.

Fundamental theorem of discrete calculus

Indefinite sums can be used to calculate definite sums with the formula:[4]

\sum_{k=a}^b f(k)=\Delta^{-1}f(b+1)-\Delta^{-1}f(a)
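A small worked instance of this formula: for f(k) = k, one antidifference is F(k) = k(k − 1)/2, since F(k + 1) − F(k) = k. The names `F` and `definite_sum` below are illustrative only:

```python
def F(k):
    """An antidifference of f(k) = k, since F(k+1) - F(k) = k."""
    return k * (k - 1) // 2

def definite_sum(a, b):
    """Sum of k for k = a .. b via the discrete fundamental theorem."""
    return F(b + 1) - F(a)

assert definite_sum(3, 10) == sum(range(3, 11))
```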

Approximation by definite integrals

Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function f:

\int_{s=a-1}^{b} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a}^{b+1} f(s)\ ds.

and for any decreasing function f:

\int_{s=a}^{b+1} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a-1}^{b} f(s)\ ds.

For more general approximations, see the Euler–Maclaurin formula.

For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance

\frac{b-a}{n}\sum_{i=0}^{n-1} f\left(a+i\frac{b-a}n\right) \approx \int_a^b f(x)\ dx,

since the right hand side is by definition the limit for n\to\infty of the left hand side. However for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
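The left-hand side of the approximation above is an ordinary left Riemann sum; a sketch for a smooth function where the exact integral is known (∫₀^π sin x dx = 2):

```python
import math

def riemann_sum(f, a, b, n):
    """Left Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

approx = riemann_sum(math.sin, 0.0, math.pi, 1000)
assert abs(approx - 2.0) < 1e-2   # close to the exact integral, 2
```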

Identities

The formulae below involve finite sums; for infinite summations, or finite summations of expressions involving trigonometric functions or other transcendental functions, see the list of mathematical series.

General manipulations

\sum_{n=s}^t C\cdot f(n) = C\cdot \sum_{n=s}^t f(n), where C is a constant
\sum_{n=s}^t f(n) + \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) + g(n)\right] \;
\sum_{n=s}^t f(n) - \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) - g(n)\right] \;
\sum_{n=s}^t f(n) = \sum_{n=s+p}^{t+p} f(n-p) \;
\sum_{n\in B} f(n) = \sum_{m\in A} f(\sigma(m)), for a bijection σ from a finite set A onto a finite set B; this generalizes the preceding formula.
\sum_{n=s}^j f(n) + \sum_{n=j+1}^t f(n) = \sum_{n=s}^t f(n) \;
\sum_{i=k_0}^{k_1}\sum_{j=l_0}^{l_1} a_{i,j} = \sum_{j=l_0}^{l_1}\sum_{i=k_0}^{k_1} a_{i,j} \;
\sum_{n=0}^t f(2n) + \sum_{n=0}^t f(2n+1) = \sum_{n=0}^{2t+1} f(n) \;
\sum_{n=0}^t \sum_{i=0}^{z-1} f(z\cdot n+i) = \sum_{n=0}^{z\cdot t+z-1} f(n) \;
\sum_{n=s}^t \ln f(n) = \ln \prod_{n=s}^t f(n) \;
c^{\left[\sum_{n=s}^t f(n) \right]} = \prod_{n=s}^t c^{f(n)} \;
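Two of the manipulation rules above, the index shift and the splitting of a range at j, can be spot-checked for an arbitrary summand (the quadratic f below is an arbitrary choice):

```python
f = lambda n: n * n - 3 * n + 1   # arbitrary summand for the check
s, t, p, j = 2, 9, 4, 5           # arbitrary bounds with s <= j < t

# Index shift: sum_{n=s}^{t} f(n) = sum_{n=s+p}^{t+p} f(n-p).
assert (sum(f(n) for n in range(s, t + 1))
        == sum(f(n - p) for n in range(s + p, t + p + 1)))

# Splitting the range at j: sum_{s}^{j} + sum_{j+1}^{t} = sum_{s}^{t}.
assert (sum(f(n) for n in range(s, j + 1)) + sum(f(n) for n in range(j + 1, t + 1))
        == sum(f(n) for n in range(s, t + 1)))
```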

Some summations of polynomial expressions

\sum_{i=m}^n 1 = n+1-m \,
\sum_{i=1}^n \frac{1}{i} = H_n (See Harmonic number)
\sum_{i=1}^n \frac{1}{i^k} = H^k_n (See Generalized harmonic number)
\sum_{i=m}^n i = \frac{n(n+1)}{2} - \frac{m(m-1)}{2} = \frac{(n+1-m)(n+m)}{2} (see arithmetic series)
\sum_{i=0}^n i = \sum_{i=1}^n i = \frac{n(n+1)}{2} (Special case of the arithmetic series)
\sum_{i=0}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6} (see square pyramidal number)
\sum_{i=0}^n i^3 = \left(\frac{n(n+1)}{2}\right)^2 = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4} = \left[\sum_{i=1}^n i\right]^2 \,
\sum_{i=0}^n i^4 = \frac{n(n+1)(2n+1)(3n^2+3n-1)}{30} = \frac{n^5}{5} + \frac{n^4}{2} + \frac{n^3}{3} - \frac{n}{30} \,
\sum_{i=0}^n i^p = \frac{(n+1)^{p+1}}{p+1} + \sum_{k=1}^p\frac{B_k}{p-k+1}{p\choose k}(n+1)^{p-k+1} where B_k denotes a Bernoulli number (see Faulhaber's formula)
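The power-sum formulas for p = 1 through 4 can all be verified against direct summation:

```python
n = 20
assert sum(i for i in range(n + 1)) == n * (n + 1) // 2
assert sum(i**2 for i in range(n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(i**3 for i in range(n + 1)) == (n * (n + 1) // 2) ** 2
assert sum(i**4 for i in range(n + 1)) == n * (n + 1) * (2 * n + 1) * (3 * n**2 + 3 * n - 1) // 30
```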


The following formulae are manipulations of \sum_{i=0}^n i^3 = \left(\sum_{i=0}^n i\right)^2 generalized to begin a series at any natural number value (i.e., m \in \mathbb{N} ):

\left(\sum_{i=m}^n i\right)^2 = \sum_{i=m}^n ( i^3 - im(m-1) ) \,
\sum_{i=m}^n i^3 = \left(\sum_{i=m}^n i\right)^2 + m(m-1)\sum_{i=m}^n i \,

Some summations involving exponential terms

In the summations below, a is a constant not equal to 1.

\sum_{i=m}^{n-1} a^i = \frac{a^m-a^n}{1-a} (m < n; see geometric series)
\sum_{i=0}^{n-1} a^i = \frac{1-a^n}{1-a} (geometric series starting at a_1=1)
\sum_{i=0}^{n-1} i a^i = \frac{a-na^n+(n-1)a^{n+1}}{(1-a)^2} \,
\sum_{i=0}^{n-1} i 2^i = 2+(n-2)2^{n} (special case when a = 2)
\sum_{i=0}^{n-1} \frac{i}{2^i} = 2-\frac{n+1}{2^{n-1}} (special case when a = 1/2)
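The geometric-series formulas can be checked with exact rational arithmetic (using `fractions.Fraction` to avoid floating-point noise):

```python
from fractions import Fraction

n = 12
a = Fraction(3)   # any constant a != 1; exact arithmetic keeps the check precise

# Geometric series starting at 1, and its weighted variant.
assert sum(a**i for i in range(n)) == (1 - a**n) / (1 - a)
assert sum(i * a**i for i in range(n)) == (a - n * a**n + (n - 1) * a**(n + 1)) / (1 - a)**2

# Special case a = 2.
assert sum(i * 2**i for i in range(n)) == 2 + (n - 2) * 2**n
```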

Some summations involving binomial coefficients and factorials

There are a great many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.

\sum_{i=0}^n {n \choose i} = 2^n \,
\sum_{i=1}^{n} i{n \choose i} = n2^{n-1} \,
\sum_{i=0}^{n} i!\cdot{n \choose i} = \sum_{i=0}^{n} {}_{n}P_{i} = \lfloor n!\cdot e \rfloor \,
\sum_{i=0}^{n-1} {i \choose k} = {n \choose k+1} \,
\sum_{i=0}^n {n \choose i}a^{(n-i)} b^i=(a + b)^n, the binomial theorem
\sum_{i=0}^n i\cdot i! = (n+1)! - 1 \,
\sum_{i=1}^n {}_{i+k}P_{k+1} = \sum_{i=1}^n \prod_{j=0}^k (i+j) = \frac{(n+k+1)!}{(n-1)!(k+2)} \,
\sum_{i=0}^n {m+i-1 \choose i} = {m+n \choose n} \,
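Several of these identities can be spot-checked with the standard library's `math.comb` and `math.factorial` (available in Python 3.8+):

```python
from math import comb, factorial

n, k = 10, 3
assert sum(comb(n, i) for i in range(n + 1)) == 2**n              # row sum
assert sum(i * comb(n, i) for i in range(1, n + 1)) == n * 2**(n - 1)
assert sum(comb(i, k) for i in range(n)) == comb(n, k + 1)        # hockey-stick
assert sum(i * factorial(i) for i in range(n + 1)) == factorial(n + 1) - 1
```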

Growth rates

The following are useful approximations (using theta notation):

\sum_{i=1}^n i^c \in \Theta(n^{c+1}) for real c greater than −1
\sum_{i=1}^n \frac{1}{i} \in \Theta(\log n) (See Harmonic number)
\sum_{i=1}^n c^i \in \Theta(c^n) for real c greater than 1
\sum_{i=1}^n \log(i)^c \in \Theta(n \cdot \log(n)^{c}) for non-negative real c
\sum_{i=1}^n \log(i)^c \cdot i^d \in \Theta(n^{d+1} \cdot \log(n)^{c}) for non-negative real c, d
\sum_{i=1}^n \log(i)^c \cdot i^d \cdot b^i \in \Theta (n^d \cdot \log(n)^c \cdot b^n) for real b > 1 and non-negative real c, d

Notes

  1. Oxford English Dictionary, 2nd ed.: algebraic (esp. of a sum): taken with consideration of the sign (plus or minus) of each term.
  2. For a detailed exposition of summation notation and arithmetic with sums, see Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). "Chapter 2: Sums". Concrete Mathematics: A Foundation for Computer Science (2nd ed.). Addison-Wesley. ISBN 978-0201558029.
  3. Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet (i through q) to denote integers if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see x instead of k in the above formulae involving k. See also typographical conventions in mathematical formulae.
  4. Rosen, Kenneth H.; Michaels, John G. (1999). Handbook of Discrete and Combinatorial Mathematics. CRC Press. ISBN 0-8493-0149-1.
