# (ε, δ)-definition of limit

*Whenever a point x is within δ units of c, f(x) is within ε units of L.*

In calculus, the (ε, δ)-definition of limit ("epsilon–delta definition of limit") is a formalization of the notion of limit. The concept is due to Augustin-Louis Cauchy, who never gave an (${\displaystyle \varepsilon ,\delta }$) definition of limit in his Cours d'Analyse, but occasionally used ${\displaystyle \varepsilon ,\delta }$ arguments in proofs. It was first given as a formal definition by Bernard Bolzano in 1817, and the definitive modern statement was ultimately provided by Karl Weierstrass.[1][2] It makes rigorous the following informal notion: the dependent expression f(x) approaches the value L as the variable x approaches the value c if f(x) can be made as close as desired to L by taking x sufficiently close to c.

## History

Although the Greeks examined limiting processes, such as the Babylonian method, they probably had no concept similar to the modern limit.[3] The need for the concept of a limit arose in the 1600s when Pierre de Fermat attempted to find the slope of the tangent line at a point ${\displaystyle x}$ of a function such as ${\displaystyle f(x)=x^{2}}$. Using a non-zero, but almost zero quantity, ${\displaystyle E}$, Fermat performed the following calculation:

${\displaystyle {\begin{aligned}{\text{slope}}&={\frac {f(x+E)-f(x)}{E}}\\&={\frac {(x+E)^{2}-x^{2}}{E}}\\&={\frac {x^{2}+2xE+E^{2}-x^{2}}{E}}\\&={\frac {2xE+E^{2}}{E}}=2x+E=2x.\end{aligned}}}$

The key to the above calculation is that since ${\displaystyle E}$ is non-zero one can divide ${\displaystyle f(x+E)-f(x)}$ by ${\displaystyle E}$, but since ${\displaystyle E}$ is close to 0, ${\displaystyle 2x+E}$ is essentially ${\displaystyle 2x}$.[4] Quantities such as ${\displaystyle E}$ are called infinitesimals. The problem with this calculation is that mathematicians of the era were unable to rigorously define a quantity with the properties of ${\displaystyle E}$,[5] although it was common practice to "neglect" higher-power infinitesimals, and this seemed to yield correct results.
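
Fermat's bookkeeping can be reenacted with modern "dual numbers", in which a formal symbol E satisfies E·E = 0 exactly. This is a present-day reconstruction, not Fermat's own method (he simply discarded higher powers of E), and the class below is purely illustrative:

```python
# A modern sketch of Fermat's computation using dual numbers a + b*E with
# E^2 = 0, so higher powers of E vanish automatically rather than being
# "neglected" by hand.

class Dual:
    """Number of the form a + b*E, where E^2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 E)(a2 + b2 E) = a1*a2 + (a1*b2 + b1*a2) E, since E^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def slope_of_square(x):
    """Fermat-style slope of f(x) = x^2: the E-coefficient of f(x + E)."""
    E = Dual(0.0, 1.0)
    fx_plus_E = (x + E) * (x + E)   # x^2 + 2xE; the E^2 term vanishes
    return fx_plus_E.b              # coefficient of E, i.e. 2x

print(slope_of_square(3.0))  # 6.0, matching 2x at x = 3
```

The hyperreal and surreal systems mentioned below give a far more general rigorous footing; dual numbers capture only the "neglect E²" step of this one calculation.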

This problem reappeared later in the 1600s at the center of the development of calculus because calculations such as Fermat's are important to the calculation of derivatives. Isaac Newton first developed calculus via an infinitesimal quantity called a fluxion. He developed them in reference to the idea of an "infinitely small moment in time..."[6] However, Newton later rejected fluxions in favor of a theory of ratios that is close to the modern ${\displaystyle \varepsilon {\text{–}}\delta }$ definition of the limit.[6] Moreover, Newton was aware that the limit of the ratio of vanishing quantities was not itself a ratio, as he wrote:

Those ultimate ratios ... are not actually ratios of ultimate quantities, but limits ... which they can approach so closely that their difference is less than any given quantity...

Additionally, Newton occasionally explained limits in terms similar to the epsilon–delta definition.[7] Gottfried Wilhelm Leibniz developed an infinitesimal of his own and tried to provide it with a rigorous footing, but it was still greeted with unease by some mathematicians and philosophers.[8]

Augustin-Louis Cauchy gave a definition of limit in terms of a more primitive notion he called a variable quantity. He never gave an epsilon–delta definition of limit (Grabiner 1981). Some of Cauchy's proofs contain indications of the epsilon–delta method. Whether or not his foundational approach can be considered a harbinger of Weierstrass's is a subject of scholarly dispute. Grabiner feels that it is, while Schubring (2005) disagrees.[1] Nakane concludes that Cauchy and Weierstrass gave the same name to different notions of limit.[9]

Eventually, Weierstrass and Bolzano are credited with providing a rigorous footing for calculus in the form of the modern ${\displaystyle \varepsilon {\text{–}}\delta }$ definition of the limit.[1][10] The need for reference to an infinitesimal ${\displaystyle E}$ was then removed,[11] and Fermat's computation turned into the computation of the following limit:

${\displaystyle \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.}$

This is not to say that the limiting definition was free of problems as, although it removed the need for infinitesimals, it did require the construction of the real numbers by Richard Dedekind.[12] This is also not to say that infinitesimals have no place in modern mathematics as later mathematicians were able to rigorously create infinitesimal quantities as part of the hyperreal number or surreal number systems. Moreover, it is possible to rigorously develop calculus with these quantities and they have other mathematical uses.[13]

## Informal statement

A viable intuitive or provisional definition is that a "function f approaches the limit L near a (symbolically, ${\displaystyle \lim _{x\to a}f(x)=L\,}$) if we can make f(x) as close as we like to L by requiring that x be sufficiently close to, but unequal to, a."[14]

When we say that two things are close (such as f(x) and L or x and a) we mean that the distance between them is small. When f(x), L, x, and a are real numbers, the distance between two numbers is the absolute value of the difference of the two. Thus, when we say f(x) is close to L we mean ${\displaystyle |f(x)-L|}$ is small. When we say that x and a are close, we mean that ${\displaystyle |x-a|}$ is small.[15]

When we say that we can make f(x) as close as we like to L, we mean that for all non-zero distances, ${\displaystyle \varepsilon }$, we can make the distance between f(x) and L smaller than ${\displaystyle \varepsilon }$.[15]

When we say that we can make f(x) as close as we like to L by requiring that x be sufficiently close to, but unequal to, a, we mean that for every non-zero distance ${\displaystyle \varepsilon }$, there is some non-zero distance ${\displaystyle \delta }$ such that if the distance between x and a is less than ${\displaystyle \delta }$ then the distance between f(x) and L is smaller than ${\displaystyle \varepsilon }$.[15]

The essential point is that the definition describes a challenge-response exchange: given f, a, and L, one is presented with an arbitrary challenge ${\displaystyle \varepsilon >0}$ and must answer with a ${\displaystyle \delta >0}$ such that ${\displaystyle 0<|x-a|<\delta }$ implies ${\displaystyle |f(x)-L|<\varepsilon }$. If one can provide an answer for every challenge, one has proven that the limit exists.
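
This challenge-response reading can be illustrated numerically. The sketch below uses a simple function not taken from the article, f(x) = 2x + 1 with a = 1 and L = 3; the responder's rule δ = ε/2 is specific to this f, not a general recipe:

```python
import random

def f(x):
    return 2 * x + 1

def respond(epsilon):
    """Answer a challenge epsilon > 0 with a delta > 0 that works for f at a = 1."""
    return epsilon / 2

def check(epsilon, a=1.0, L=3.0, trials=10_000):
    """Spot-check that 0 < |x - a| < delta implies |f(x) - L| < epsilon."""
    delta = respond(epsilon)
    for _ in range(trials):
        x = a + delta * (2 * random.random() - 1)
        if abs(x - a) == 0 or abs(x - a) >= delta:
            continue  # the definition only constrains 0 < |x - a| < delta
        if abs(f(x) - L) >= epsilon:
            return False
    return True

print(all(check(eps) for eps in (1.0, 0.1, 1e-6)))  # True
```

Random sampling can only spot-check the implication, of course; the actual proof is the algebraic observation that |f(x) − 3| = 2|x − 1| < 2δ = ε.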

## Precise statement and related statements

### Precise statement for real valued functions

The ${\displaystyle (\varepsilon ,\delta )}$ definition of the limit of a function is as follows:[15]

Let ${\displaystyle f}$ be a real-valued function defined on a subset ${\displaystyle D}$ of the real numbers. Let ${\displaystyle c}$ be a limit point of ${\displaystyle D}$ and let ${\displaystyle L}$ be a real number. We say that

${\displaystyle \lim _{x\to c}f(x)=L}$

if for every ${\displaystyle \varepsilon >0}$ there exists a ${\displaystyle \delta >0}$ such that, for all ${\displaystyle x\in D}$, if ${\displaystyle 0<|x-c|<\delta }$, then ${\displaystyle |f(x)-L|<\varepsilon }$.

Symbolically:

${\displaystyle \lim _{x\to c}f(x)=L\iff (\forall \varepsilon >0,\,\exists \ \delta >0,\,\forall x\in D,\,0<|x-c|<\delta \ \Rightarrow \ |f(x)-L|<\varepsilon )}$

If ${\displaystyle D=[a,b]}$ or ${\displaystyle D=\mathbb {R} }$, then the condition that ${\displaystyle c}$ is a limit point can be replaced with the simpler condition that c belongs to D, since closed real intervals and the entire real line are perfect sets.

### Precise statement for functions between metric spaces

The definition can be generalized to functions that map between metric spaces. These spaces come with a function, called a metric, that takes two points in the space and returns a real number that represents the distance between the two points.[16] The generalized definition is as follows:[17]

Suppose ${\displaystyle f}$ is defined on a subset ${\displaystyle D}$ of a metric space ${\displaystyle X}$ with a metric ${\displaystyle d_{X}(x,y)}$ and maps into a metric space ${\displaystyle Y}$ with a metric ${\displaystyle d_{Y}(x,y)}$. Let ${\displaystyle c}$ be a limit point of ${\displaystyle D}$ and let ${\displaystyle L}$ be a point of ${\displaystyle Y}$.

We say that

${\displaystyle \lim _{x\to c}f(x)=L}$

if for every ${\displaystyle \varepsilon >0}$ there exists a ${\displaystyle \delta >0}$ such that, for all ${\displaystyle x\in D}$, if ${\displaystyle 0<d_{X}(x,c)<\delta }$, then ${\displaystyle d_{Y}(f(x),L)<\varepsilon }$.

Since ${\displaystyle d(x,y)=|x-y|}$ is a metric on the real numbers, one can show that this definition generalizes the first definition for real functions.[18]
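
The generalized definition can be sketched in code by passing the metrics in as functions. As an illustration (the example is not from the article), take f(t) = (cos t, sin t) mapping the real line into the plane with the Euclidean metric; as t → 0, f(t) → (1, 0), and δ = ε works because the chord length 2·sin(|t|/2) is at most |t|:

```python
import math
import random

def d_R(x, y):                 # metric on the domain: the real line
    return abs(x - y)

def d_R2(p, q):                # metric on the codomain: the Euclidean plane
    return math.hypot(p[0] - q[0], p[1] - q[1])

def f(t):
    return (math.cos(t), math.sin(t))

def check_limit(f, c, L, d_X, d_Y, delta_of_eps, trials=10_000):
    """Spot-check the metric-space definition for a given epsilon -> delta rule."""
    for epsilon in (1.0, 0.1, 1e-4):
        delta = delta_of_eps(epsilon)
        for _ in range(trials):
            x = c + delta * (2 * random.random() - 1)   # domain here is R
            if d_X(x, c) == 0 or d_X(x, c) >= delta:
                continue                                # need 0 < d_X(x, c) < delta
            if d_Y(f(x), L) >= epsilon:
                return False
    return True

print(check_limit(f, 0.0, (1.0, 0.0), d_R, d_R2, lambda eps: eps))  # True
```

Replacing both metrics with `d_R` recovers the real-valued definition of the previous section, mirroring the remark above.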

### Negation of the precise statement

The negation of the definition is as follows:[19]

Suppose ${\displaystyle f}$ is defined on a subset ${\displaystyle D}$ of a metric space ${\displaystyle X}$ with a metric ${\displaystyle d_{X}(x,y)}$ and maps into a metric space ${\displaystyle Y}$ with a metric ${\displaystyle d_{Y}(x,y)}$. Let ${\displaystyle c}$ be a limit point of ${\displaystyle D}$ and let ${\displaystyle L}$ be a point of ${\displaystyle Y}$.

We say that

${\displaystyle \lim _{x\to c}f(x)\neq L}$

if there exists an ${\displaystyle \varepsilon >0}$ such that for all ${\displaystyle \delta >0}$ there is an ${\displaystyle x\in D}$ such that ${\displaystyle 0<d_{X}(x,c)<\delta }$ and ${\displaystyle d_{Y}(f(x),L)>\varepsilon }$.

We say that ${\displaystyle \lim _{x\to c}f(x)}$ does not exist if for all ${\displaystyle L\in Y}$, ${\displaystyle \lim _{x\to c}f(x)\neq L}$.

For the negation of a real valued function defined on the real numbers, simply set ${\displaystyle d_{Y}(x,y)=d_{X}(x,y)=|x-y|}$.
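
As a concrete use of the negation (the example is not from the article), the step function below has no limit at c = 0. For any proposed L, the challenge ε = 1/2 succeeds: every δ-interval around 0 contains points where f is 1 and points where f is −1, and these two values cannot both lie within 1/2 of the same L.

```python
def f(x):
    return 1.0 if x > 0 else -1.0

def witness_x(L, delta):
    """Return x with 0 < |x - 0| < delta and |f(x) - L| > 1/2."""
    x = delta / 2 if L <= 0 else -delta / 2   # pick the side of 0 far from L
    assert 0 < abs(x) < delta and abs(f(x) - L) > 0.5
    return x

# For every candidate L and a range of deltas, a witness exists:
for L in (-1.0, 0.0, 0.5, 1.0, 12.0):
    for delta in (1.0, 1e-3, 1e-9):
        witness_x(L, delta)
print("every L fails the epsilon = 1/2 challenge at c = 0")
```

Exhibiting such a witness for every δ, for every candidate L, is exactly what the quantifiers ∃ε ∀δ ∃x in the negation demand.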

### Precise statement for limits at infinity

The precise statement for limits at infinity is as follows:[16]

Suppose ${\displaystyle f}$ is defined on a subset ${\displaystyle D}$ of a metric space ${\displaystyle X}$ with a metric ${\displaystyle d_{X}(x,y)}$ and maps into a metric space ${\displaystyle Y}$ with a metric ${\displaystyle d_{Y}(x,y)}$. Let ${\displaystyle L\in Y}$.

We say that

${\displaystyle \lim _{x\to \infty }f(x)=L}$

if for every ${\displaystyle \varepsilon >0}$ there is a real number ${\displaystyle N>0}$ such that some ${\displaystyle x\in D}$ satisfies ${\displaystyle d_{X}(x,0)>N}$ (so the condition below is not vacuous), and for all ${\displaystyle x\in D}$, if ${\displaystyle d_{X}(x,0)>N}$, then ${\displaystyle d_{Y}(f(x),L)<\varepsilon }$. Here ${\displaystyle 0}$ denotes a fixed reference point of ${\displaystyle X}$; when ${\displaystyle X=\mathbb {R} }$ with the usual metric, ${\displaystyle d_{X}(x,0)=|x|}$.
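
A numeric illustration on the real line with d(x, y) = |x − y| (the example is not from the article): f(x) = 1/x tends to L = 0 as x → ∞, and for each ε the threshold N = 1/ε works, since x > N implies 1/x < ε.

```python
def f(x):
    return 1.0 / x

def check_limit_at_infinity(epsilon, L=0.0, samples=1000):
    """Spot-check f(x) -> L as x -> infinity using the threshold N = 1/epsilon."""
    N = 1.0 / epsilon
    # Sample points strictly beyond N (here d(x, 0) = |x| = x for x > 0).
    return all(abs(f(N * (1 + k)) - L) < epsilon for k in range(1, samples))

print(all(check_limit_at_infinity(eps) for eps in (1.0, 0.01, 1e-6)))  # True
```
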

## Worked examples

### Example 1

We will show that

${\displaystyle \lim _{x\to 0}x\sin {\left({\frac {1}{x}}\right)}=0}$.

We let ${\displaystyle \varepsilon >0}$ be given. We need to find a ${\displaystyle \delta >0}$ such that ${\displaystyle 0<|x-0|<\delta }$ implies ${\displaystyle \left|x\sin \left({\frac {1}{x}}\right)-0\right|<\varepsilon }$. (The condition ${\displaystyle x\neq 0}$ matters here, since the function is undefined at 0.)

Since sine is bounded above by 1 and below by −1,

${\displaystyle {\begin{aligned}\left|x\sin {\left({\frac {1}{x}}\right)}-0\right|&=\left|x\sin {\left({\frac {1}{x}}\right)}\right|\\&=|x|\left|\sin {\left({\frac {1}{x}}\right)}\right|\\&\leq |x|.\end{aligned}}}$

Thus, if we take ${\displaystyle \delta =\varepsilon }$, then ${\displaystyle 0<|x-0|<\delta }$ implies ${\displaystyle \left|x\sin {\left({\frac {1}{x}}\right)}-0\right|\leq |x|<\delta =\varepsilon }$, which completes the proof.
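
The choice δ = ε can be spot-checked numerically; sampled points with 0 < |x| < δ indeed satisfy |x·sin(1/x)| < ε:

```python
import math
import random

def f(x):
    return x * math.sin(1.0 / x)

def spot_check(epsilon, trials=10_000):
    """Randomly verify that delta = epsilon works for this limit at 0."""
    delta = epsilon
    for _ in range(trials):
        x = delta * (2 * random.random() - 1)
        if x == 0 or abs(x) >= delta:
            continue                      # the definition requires 0 < |x| < delta
        if abs(f(x) - 0.0) >= epsilon:
            return False
    return True

print(all(spot_check(eps) for eps in (1.0, 0.1, 1e-5)))  # True
```
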

### Example 2

Let us prove the statement that

${\displaystyle \lim _{x\to a}x^{2}=a^{2}}$

for any real number ${\displaystyle a}$.

Let ${\displaystyle \varepsilon >0}$ be given. We will find a ${\displaystyle \delta >0}$ such that ${\displaystyle |x-a|<\delta }$ implies ${\displaystyle |x^{2}-a^{2}|<\varepsilon }$.

We start by factoring:

${\displaystyle |x^{2}-a^{2}|=|(x-a)(x+a)|=|x-a||x+a|.}$

We recognize that ${\displaystyle |x-a|}$ is the term bounded by ${\displaystyle \delta }$, so we can impose the provisional bound ${\displaystyle |x-a|<1}$ and later choose a ${\displaystyle \delta }$ no larger than 1.[20]

So we suppose ${\displaystyle |x-a|<1}$. Since ${\displaystyle |x|-|y|\leq |x-y|}$ holds in general for real numbers ${\displaystyle x}$ and ${\displaystyle y}$, we have

${\displaystyle |x|-|a|\leq |x-a|<1.}$

Thus,

${\displaystyle |x|<1+|a|.}$

Then, by the triangle inequality,

${\displaystyle |x+a|\leq |x|+|a|<2|a|+1.}$

Thus, if we further suppose that

${\displaystyle |x-a|<{\frac {\varepsilon }{2|a|+1}}}$

then

${\displaystyle |x^{2}-a^{2}|<\varepsilon .}$

In summary, we set

${\displaystyle \delta =\min {\left(1,{\frac {\varepsilon }{2|a|+1}}\right)}.}$

So, if ${\displaystyle |x-a|<\delta }$, then

${\displaystyle {\begin{aligned}|x^{2}-a^{2}|&=|x-a||x+a|\\&<{\frac {\varepsilon }{2|a|+1}}(|x+a|)\\&<{\frac {\varepsilon }{2|a|+1}}(2|a|+1)\\&=\varepsilon .\end{aligned}}}$

Thus, we have found a ${\displaystyle \delta }$ such that ${\displaystyle |x-a|<\delta }$ implies ${\displaystyle |x^{2}-a^{2}|<\varepsilon }$. Thus, we have shown that

${\displaystyle \lim _{x\to a}x^{2}=a^{2}}$

for any real number ${\displaystyle a}$.
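
The choice δ = min(1, ε/(2|a| + 1)) derived above can be spot-checked numerically for several values of a:

```python
import random

def spot_check(a, epsilon, trials=10_000):
    """Randomly verify that 0 < |x - a| < delta implies |x^2 - a^2| < epsilon."""
    delta = min(1.0, epsilon / (2 * abs(a) + 1))
    for _ in range(trials):
        x = a + delta * (2 * random.random() - 1)
        if abs(x - a) == 0 or abs(x - a) >= delta:
            continue                  # the definition requires 0 < |x - a| < delta
        if abs(x * x - a * a) >= epsilon:
            return False
    return True

print(all(spot_check(a, eps)
          for a in (-10.0, 0.0, 3.0)
          for eps in (1.0, 0.01)))  # True
```

Note how the cap of 1 in the min keeps |x + a| under control exactly as in the proof, while the second argument scales δ down as ε shrinks or |a| grows.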

### Example 3

Let us prove the statement that

${\displaystyle \lim _{x\to 5}(3x-3)=12.}$

This limit is easy to read off a graph of the function, which makes it a good first example for introducing the proof technique. According to the formal definition above, a limit statement is correct if and only if confining ${\displaystyle x}$ to ${\displaystyle \delta }$ units of ${\displaystyle c}$ will inevitably confine ${\displaystyle f(x)}$ to ${\displaystyle \varepsilon }$ units of ${\displaystyle L}$. In this specific case, this means that the statement is true if and only if confining ${\displaystyle x}$ to ${\displaystyle \delta }$ units of 5 will inevitably confine

${\displaystyle 3x-3}$

to ${\displaystyle \varepsilon }$ units of 12. The overall key to showing this implication is to demonstrate how ${\displaystyle \delta }$ and ${\displaystyle \varepsilon }$ must be related to each other such that the implication holds. Mathematically, we want to show that

${\displaystyle 0<|x-5|<\delta \ \Rightarrow \ |(3x-3)-12|<\varepsilon .}$

Simplifying, factoring, and dividing by 3 on the right-hand side of the implication yields

${\displaystyle |x-5|<\varepsilon /3,}$

which immediately gives the required result if we choose

${\displaystyle \delta =\varepsilon /3.}$

Thus the proof is completed. The key to the proof lies in the ability to choose bounds on ${\displaystyle x}$ and then conclude corresponding bounds on ${\displaystyle f(x)}$; here the two were related by a factor of 3, reflecting the slope of 3 of the line

${\displaystyle y=3x-3.}$
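
The choice δ = ε/3 can be spot-checked numerically:

```python
import random

def spot_check(epsilon, trials=10_000):
    """Randomly verify that 0 < |x - 5| < epsilon/3 implies |(3x - 3) - 12| < epsilon."""
    delta = epsilon / 3
    for _ in range(trials):
        x = 5.0 + delta * (2 * random.random() - 1)
        if abs(x - 5.0) == 0 or abs(x - 5.0) >= delta:
            continue                  # the definition requires 0 < |x - 5| < delta
        if abs((3 * x - 3) - 12.0) >= epsilon:
            return False
    return True

print(all(spot_check(eps) for eps in (3.0, 0.3, 3e-7)))  # True
```
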

## Continuity

A function f is said to be continuous at c if it is defined at c and its value at c equals the limit of f as x approaches c:

${\displaystyle \lim _{x\to c}f(x)=f(c).}$

The ${\displaystyle (\varepsilon ,\delta )}$ definition for a continuous function can be obtained from the definition of a limit by replacing ${\displaystyle 0<|x-c|<\delta }$ with ${\displaystyle |x-c|<\delta }$ to ensure that f is defined at c and equals the limit.

f is said to be continuous on an interval I if it is continuous at every point c of I.
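
A sketch of the continuity variant (the example is not from the article): cos is continuous at every c, and since |cos x − cos c| ≤ |x − c|, the choice δ = ε works. Note that x = c is now allowed, unlike for limits.

```python
import math
import random

def is_continuous_at(f, c, delta_of_eps, trials=10_000):
    """Spot-check the (epsilon, delta) continuity condition at c."""
    for epsilon in (1.0, 0.1, 1e-5):
        delta = delta_of_eps(epsilon)
        for _ in range(trials):
            x = c + delta * (2 * random.random() - 1)
            if abs(x - c) >= delta:
                continue                 # only |x - c| < delta is required here
            if abs(f(x) - f(c)) >= epsilon:
                return False
    return True

print(is_continuous_at(math.cos, 0.7, lambda eps: eps))  # True
```

The only change from the limit check is that points with x = c are no longer skipped, mirroring the replacement of 0 < |x − c| < δ by |x − c| < δ in the definition above.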

## Comparison with infinitesimal definition

Keisler proved that a hyperreal definition of limit reduces the quantifier complexity by two quantifiers.[21] Namely, ${\displaystyle f(x)}$ converges to a limit L as ${\displaystyle x}$ tends to a if and only if for every infinitesimal e, the value ${\displaystyle f(x+e)}$ is infinitely close to L; see microcontinuity for a related definition of continuity, essentially due to Cauchy. Infinitesimal calculus textbooks based on Robinson's approach provide definitions of continuity, derivative, and integral at standard points in terms of infinitesimals. Once notions such as continuity have been thoroughly explained via the approach using microcontinuity, the epsilon–delta approach is presented as well. Karel Hrbáček argues that the definitions of continuity, derivative, and integration in Robinson-style non-standard analysis must be grounded in the ε–δ method in order to cover also non-standard values of the input.[22] Błaszczyk et al. argue that microcontinuity is useful in developing a transparent definition of uniform continuity, and characterize the criticism by Hrbáček as a "dubious lament".[23] Hrbáček proposes an alternative non-standard analysis, which (unlike Robinson's) has many "levels" of infinitesimals, so that limits at one level can be defined in terms of infinitesimals at the next level.[24]

## References

1. ^ a b c Grabiner, Judith V. (March 1983), "Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus" (PDF), The American Mathematical Monthly, 90 (3): 185–194, doi:10.2307/2975545, JSTOR 2975545, archived (PDF) from the original on 2009-05-04, retrieved 2009-05-01
2. ^
3. ^ Stillwell, John (1989). Mathematics and its history. New York: Springer-Verlag. pp. 38–39. ISBN 978-1-4899-0007-4.
4. ^ Stillwell, John (1989). Mathematics and its history. New York: Springer-Verlag. p. 104. ISBN 978-1-4899-0007-4.
5. ^ Stillwell, John (1989). Mathematics and its history. New York: Springer-Verlag. p. 106. ISBN 978-1-4899-0007-4.
6. ^ a b Buckley, Benjamin Lee (2012). The continuity debate : Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and infinitesimals. p. 31. ISBN 9780983700487.
7. ^ Pourciau, B. (2001), "Newton and the Notion of Limit", Historia Mathematica, 28 (1): 18–30, doi:10.1006/hmat.2000.2301
8. ^ Buckley, Benjamin Lee (2012). The continuity debate : Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and infinitesimals. p. 32. ISBN 9780983700487.
9. ^ Nakane, Michiyo (2014), "Did Weierstrass's differential calculus have a limit-avoiding character? His definition of a limit in ε–δ style", BSHM Bulletin, 29 (1): 51–59.
10. ^
11. ^ Buckley, Benjamin Lee (2012). The continuity debate : Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and infinitesimals. p. 33. ISBN 9780983700487.
12. ^ Buckley, Benjamin Lee (2012). The continuity debate : Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and infinitesimals. pp. 32–35. ISBN 9780983700487.
13. ^ Tao, Terence (2008). Structure and randomness : pages from year one of a mathematical blog. Providence, R.I.: American Mathematical Society. pp. 95–110. ISBN 978-0-8218-4695-7.
14. ^ Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 90. ISBN 978-0914098911.
15. ^ a b c d Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 96. ISBN 978-0914098911.
16. ^ a b Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math. p. 30. ISBN 978-0070542358.
17. ^ Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math. p. 83. ISBN 978-0070542358.
18. ^ Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math. p. 84. ISBN 978-0070542358.
19. ^ Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 97. ISBN 978-0914098911.
20. ^ Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 95. ISBN 978-0914098911.
21. ^ Keisler, H. Jerome (2008), "Quantifiers in limits" (PDF), Andrzej Mostowski and foundational studies, IOS, Amsterdam, pp. 151–170
22. ^ Hrbacek, K. (2007), "Stratified Analysis?", in Van Den Berg, I.; Neves, V. (eds.), The Strength of Nonstandard Analysis, Springer
23. ^ Błaszczyk, Piotr; Katz, Mikhail; Sherry, David (2012), "Ten misconceptions from the history of analysis and their debunking", Foundations of Science, 18: 43–74, arXiv:1202.4153, doi:10.1007/s10699-012-9285-8
24. ^ Hrbacek, K. (2009). "Relative set theory: Internal view". Journal of Logic and Analysis. 1.