# (ε, δ)-definition of limit

*Whenever a point $x$ is within $\delta$ units of $c$, $f(x)$ is within $\varepsilon$ units of $L$.*

In calculus, the (ε, δ)-definition of limit ("epsilon-delta definition of limit") is a formalization of the notion of limit. It was first given by Bernard Bolzano in 1817, followed by a less precise form by Augustin-Louis Cauchy. The definitive modern statement was ultimately provided by Karl Weierstrass.[1][2]

## History

Isaac Newton was aware, in the context of the derivative concept, that the limit of the ratio of evanescent quantities was not itself a ratio, as when he wrote:

Those ultimate ratios ... are not actually ratios of ultimate quantities, but limits ... which they can approach so closely that their difference is less than any given quantity...

Occasionally Newton explained limits in terms similar to the epsilon-delta definition.[3] Augustin-Louis Cauchy gave a definition of limit in terms of a more primitive notion he called a variable quantity. He never gave an epsilon-delta definition of limit,[citation needed] though some of his proofs contain indications of the epsilon-delta method. Whether or not his foundational approach can be considered a harbinger of Weierstrass's is a subject of scholarly dispute: Grabiner feels that it is, while Schubring (2005) disagrees.[dubious][1]

## Informal statement

Let f be a function. To say that

$\lim_{x \to c}f(x) = L \,$

means that f(x) can be made as close as desired to L by making the independent variable x close enough, but not equal, to the value c.

How close is "close enough to c" depends on how close one wants to make f(x) to L. It also depends, of course, on which function f is and on which number c is. Therefore let the positive number ε (epsilon) be how close one wishes to make f(x) to L; strictly, one wants the distance to be less than ε. Further, if the positive number δ is how close one will make x to c, and if the distance from x to c is less than δ (but not zero), then the distance from f(x) to L will be less than ε. Thus δ depends on ε. The limit statement means that no matter how small ε is made, δ can be chosen small enough.

The letters ε and δ can be understood as "error" and "distance", and in fact Cauchy used ε as an abbreviation for "error" in some of his work.[1] In these terms, the error (ε) in the measurement of the value at the limit can be made as small as desired by reducing the distance (δ) to the limit point.

This definition also works for functions of more than one input value. In those cases, δ can be understood as the radius of a circle, sphere, or higher-dimensional analogue in the domain of the function, centered at the point where the existence of a limit is being proven, such that every point inside it (other than the center) produces a function value within ε of the limit value L.
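As an illustration of the multi-variable reading, the Python sketch below uses the simple two-input function f(x, y) = x + y (an assumption chosen for this example) and samples points inside a punctured disk of radius δ around (1, 2), checking numerically that the values stay within ε of the limit 3. A passing run is a sanity check, not a proof.

```python
import math

# Illustrative two-input function (an assumption for this sketch):
# f(x, y) = x + y, which has limit L = 3 as (x, y) -> (1, 2).
def f(x, y):
    return x + y

c = (1.0, 2.0)   # the point where the limit is taken
L = 3.0          # the claimed limit value

# For this f, delta = eps / 2 works, since
# |x + y - 3| <= |x - 1| + |y - 2| < 2 * delta.
for eps in [1.0, 0.1, 0.001]:
    delta = eps / 2
    # Sample points strictly inside the disk of radius delta, excluding c.
    for k in range(1, 100):
        r = delta * k / 100              # 0 < r < delta
        theta = 0.1 * k
        x = c[0] + r * math.cos(theta)
        y = c[1] + r * math.sin(theta)
        assert abs(f(x, y) - L) < eps    # within eps of the limit
```

The triangle-inequality bound in the comment is what justifies δ = ε/2 for this particular function; a different f would require its own relation between δ and ε.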

## Precise statement

The $(\varepsilon, \delta)$ definition of the limit of a function is as follows:[4]

Let $f(x)$ be a function defined on an open interval containing $c$ (except possibly at $c$) and let $L$ be a real number. Then we may make the statement

$\lim_{x \to c} f(x) = L \,$

if and only if:

For every $\varepsilon > 0$ there exists a $\delta > 0$ such that, whenever $x$ is within $\delta$ units of $c$ (but not equal to $c$), $f(x)$ is within $\varepsilon$ units of $L$.

or, symbolically,

$\forall \varepsilon > 0\ \exists \ \delta > 0 : \forall x\ (0 < |x - c | < \delta \ \Rightarrow \ |f(x) - L| < \varepsilon).$

All that the statement

$0 < | x - c | < \delta$

means is that $x$ is within $\delta$ units of $c$ but is not $c$ itself, since it states that the magnitude of the difference between $x$ and $c$ is greater than $0$ and less than $\delta$. In this sense, the hypothesis of the implication restricts attention to $x$ values lying in a punctured $\delta$-neighborhood of $c$. In exactly the same manner, the conclusion, that

$|f(x) - L| < \varepsilon,$

in plain English says that $f(x)$ must remain within $\varepsilon$ units of $L$. In other words, for the limit to exist, no matter how small a tolerance $\varepsilon$ is demanded around $L$, one must be able to pick a window of $x$ values around $c$ small enough that the value of the function $f(x)$ stays within that tolerance.
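The quantifier structure lends itself to a numerical spot-check. In the Python sketch below, the function name `check_limit` and the sampling scheme are illustrative assumptions; a passing check is evidence that a candidate rule ε ↦ δ(ε) works, not a proof.

```python
def check_limit(f, c, L, delta_of_eps, eps_values=(1.0, 0.1, 0.01)):
    """Spot-check the definition: for each eps, every sampled x with
    0 < |x - c| < delta must satisfy |f(x) - L| < eps."""
    for eps in eps_values:
        delta = delta_of_eps(eps)
        for k in range(1, 200):
            h = delta * k / 200          # 0 < h < delta
            for x in (c - h, c + h):     # points on both sides of c
                if abs(f(x) - L) >= eps:
                    return False
    return True

# Example: f(x) = x**2 has limit 9 at c = 3. Since
# |x**2 - 9| = |x - 3| * |x + 3| < 7 * |x - 3| when |x - 3| < 1,
# the rule delta = min(1, eps / 7) suffices.
print(check_limit(lambda x: x ** 2, 3.0, 9.0, lambda eps: min(1.0, eps / 7)))
# prints True
```

A function with a jump, such as a step function at c with a mismatched L, would fail the check for small enough ε, reflecting that no δ can satisfy the implication there.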

## Worked example

Let us prove the statement that

$\lim_{x \to 5} (3x - 3) = 12.$

This limit is easy to believe from the graph of the function, which makes it a good first example of an epsilon-delta proof. According to the formal definition above, a limit statement is correct if and only if confining $x$ to $\delta$ units of $c$ will inevitably confine $f(x)$ to $\varepsilon$ units of $L$. In this specific case, this means that the statement is true if and only if confining $x$ to $\delta$ units of 5 will inevitably confine

$3x - 3$

to $\varepsilon$ units of 12. The key is to determine how $\delta$ and $\varepsilon$ must be related so that the implication holds. Mathematically, we want to show that

$0 < | x - 5 | < \delta \ \Rightarrow \ | (3x - 3) - 12 | < \varepsilon .$

Simplifying the right-hand side, $|(3x - 3) - 12| = |3x - 15| = 3\,|x - 5|$, and dividing through by 3 yields

$| x - 5 | < \varepsilon / 3 ,$

which is strikingly similar in form to the left-hand side. To complete the proof, we are free to choose any $\delta$ for which the implication holds. A quick look at the expression

$0 < | x - 5 | < \delta \ \Rightarrow \ | x - 5 | < \varepsilon / 3$

suggests choosing

$\delta = \varepsilon / 3 .$

Substituting this back into the above yields

$0 < | x - 5 | < \varepsilon / 3 \ \Rightarrow \ | x - 5 | < \varepsilon / 3 ,$

which is clearly true, since the hypothesis already contains the conclusion; this completes the proof. Though it may seem routine, the key to the proof lies in the ability to choose a bound on $x$ and then conclude a corresponding bound on $f(x)$. Here the two bounds were related by a factor of 3, which in retrospect is entirely due to the slope of 3 of the line

$y = 3x - 3 .$
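The proof above can be spot-checked numerically. This short Python sketch (a sanity check under the proof's own choice δ = ε/3, not a replacement for the proof) samples x values in the punctured δ-window around 5 and confirms that f(x) = 3x − 3 stays within ε of 12:

```python
def f(x):
    return 3 * x - 3

c, L = 5.0, 12.0
for eps in [2.0, 0.5, 0.001]:
    delta = eps / 3                      # the choice made in the proof
    for k in range(1, 1000):
        h = delta * k / 1000             # 0 < h < delta
        for x in (c - h, c + h):         # x != c, on both sides of c
            assert abs(f(x) - L) < eps   # the conclusion of the implication
```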

## Continuity

A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c:

$\lim_{x\to c} f(x) = f(c).$

If the condition 0 < |x − c| is left out of the definition of limit, then requiring f(x) to have a limit at c would be the same as requiring f(x) to be continuous at c.

f is said to be continuous on an interval I if it is continuous at every point c of I.
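As a concrete sketch (the function and the δ rule are assumptions chosen for illustration): f(x) = x² is continuous at c = 2, and since |x² − 4| = |x − 2| · |x + 2|, restricting |x − 2| < 1 gives |x + 2| < 5, so δ = min(1, ε/5) works. The sample below includes x = c itself, since the condition 0 < |x − c| is dropped for continuity.

```python
def f(x):
    return x * x

c = 2.0
for eps in [1.0, 0.1, 1e-4]:
    delta = min(1.0, eps / 5)            # works since |x + 2| < 5 near c
    for k in range(0, 500):              # k = 0 includes x = c itself
        h = delta * k / 500              # 0 <= h < delta
        for x in (c - h, c + h):
            assert abs(f(x) - f(c)) < eps
```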

## Comparison with infinitesimal definition

Keisler proved that a hyperreal definition of limit reduces the quantifier complexity by two quantifiers.[5] Namely, $f(x)$ converges to a limit L as $x$ tends to a if and only if for every infinitesimal e, the value $f(x+e)$ is infinitely close to L; see microcontinuity for a related definition of continuity, essentially due to Cauchy. Infinitesimal calculus textbooks based on Robinson's approach provide definitions of continuity, derivative, and integral at standard points in terms of infinitesimals. Once notions such as continuity have been thoroughly explained via the approach using microcontinuity, the epsilon-delta approach is presented as well. Hrbacek argues that the definitions of continuity, derivative, and integration in Robinson-style non-standard analysis must be grounded in the ε-δ method in order to cover also non-standard values of the input.[6] Błaszczyk et al. argue that microcontinuity is useful in developing a transparent definition of uniform continuity, and characterize the criticism by Hrbacek as a "dubious lament".[7] Hrbacek proposes an alternative nonstandard analysis, which unlike Robinson's has many "levels" of infinitesimals, so that limits at one level can be defined in terms of infinitesimals at the next level.[8]

## Notes

1. ^ a b c Grabiner, Judith V. (March 1983), "Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus", The American Mathematical Monthly (Mathematical Association of America) 90 (3): 185–194, doi:10.2307/2975545, JSTOR 2975545, archived from the original on 2009-05-03, retrieved 2009-05-01
2. ^ . Accessed 2009-05-01.
3. ^ Pourciau, B. (2001), "Newton and the Notion of Limit", Historia Mathematica 28 (1)
4. ^ Exner, George (2000). Inside calculus. Springer. p. 2. ISBN 978-0-387-98932-7.
5. ^ Keisler, H. Jerome (2008), "Quantifiers in limits", Andrzej Mostowski and foundational studies, IOS, Amsterdam, pp. 151–170
6. ^ Hrbacek, K. (2007), "Stratified Analysis?", in Van Den Berg, I.; Neves, V., The Strength of Nonstandard Analysis, Springer
7. ^ Błaszczyk, Piotr; Katz, Mikhail; Sherry, David (2012), "Ten misconceptions from the history of analysis and their debunking", Foundations of Science, arXiv:1202.4153, doi:10.1007/s10699-012-9285-8
8. ^ Hrbacek, K. (2009). "Relative set theory: Internal view". Journal of Logic and Analysis 1.

## Bibliography

• Schubring, Gert (2005), Conflicts Between Generalization, Rigor, and Intuition: Number Concepts Underlying the Development of Analysis in 17th–19th Century France and Germany, Springer, ISBN 0-387-22836-5