Infinitesimal

Revision as of 12:48, 16 August 2006

In mathematics, an infinitesimal, or infinitely small number, is a number that is smaller in absolute value than any positive real number. A number x is an infinitesimal if and only if |nx| is less than 1 for every positive integer n, no matter how large n is taken. In that case, for nonzero x, 1/x is larger in absolute value than any positive real number. Nonzero infinitesimals are not real numbers, so the familiar operations on real numbers do not automatically carry over to them.
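Restated in symbols (a direct transcription of the characterization above, with n ranging over the positive integers):

\[
x \text{ is infinitesimal} \iff |nx| < 1 \text{ for every positive integer } n \iff |x| < \tfrac{1}{n} \text{ for every positive integer } n .
\]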

When used as an adjective in the vernacular, "infinitesimal" means extremely small.

Modern mathematicians have changed the definition of the infinitely small to the one given in the introductory paragraph; none of this machinery existed when infinitesimals were first used. Essentially, a small number (not necessarily positive) could be added to another number and then thrown away, a non-reversible operation.

History of the infinitesimal

The first mathematician to make use of infinitesimals was Archimedes, although he did not believe in the existence of physical infinitesimals. See the article on how Archimedes used infinitesimals. The Archimedean property is the property of an ordered algebraic structure of having no nonzero infinitesimals.

In India from the 12th century until the 16th century, infinitesimals were discovered for use with differential calculus by Indian mathematician Bhaskara and various Keralese mathematicians.

When Newton and Leibniz developed the calculus, they made use of infinitesimals. A typical argument might go:

To find the derivative f'(x) of the function f(x) = x², let dx be an infinitesimal. Then,

    f'(x) = (f(x + dx) - f(x)) / dx
          = ((x + dx)² - x²) / dx
          = (x² + 2x·dx + dx² - x²) / dx
          = 2x + dx
          = 2x,

since dx is infinitesimally small.

This argument, while intuitively appealing, and producing the correct result, is not mathematically rigorous. The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst: or a discourse addressed to an infidel mathematician. The fundamental problem is that dx is first treated as non-zero (because we divide by it), but later discarded as if it were zero.

It was not until the second half of the nineteenth century that the calculus was given a formal mathematical foundation by Karl Weierstrass and others using the notion of a limit. In the 20th century, it was found that infinitesimals could after all be treated rigorously. Neither formulation is right or wrong, and both give the same results if used correctly.
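For comparison, in the limit-based formulation the same derivative is obtained without any infinitesimal quantity; for f(x) = x² the standard computation is:

\[
f'(x) \;=\; \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h} \;=\; \lim_{h \to 0} (2x + h) \;=\; 2x .
\]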

Modern uses of infinitesimals

Infinitesimals are legitimate quantities in the non-standard analysis of Abraham Robinson, which makes use of hyperreal numbers. In this theory, the above computation of the derivative of f(x) = x² can be justified with a minor modification: we have to talk about the standard part of the difference quotient, and the standard part of 2x + dx is 2x.
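Concretely, writing st(·) for the standard part, the earlier computation for f(x) = x² becomes:

\[
f'(x) \;=\; \operatorname{st}\!\left( \frac{(x + dx)^2 - x^2}{dx} \right) \;=\; \operatorname{st}(2x + dx) \;=\; 2x .
\]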

Alternatively, we can have synthetic differential geometry or smooth infinitesimal analysis, with its roots in category theory. This approach departs dramatically from the classical logic used in conventional mathematics by denying the law of excluded middle: not (a ≠ b) does not have to mean a = b. A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x² = 0 is true, but x ≠ 0 can also be true at the same time. With an infinitesimal such as this, algebraic proofs using infinitesimals are quite rigorous, including the one given above.
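As an illustration only, nilsquare behaviour can be mimicked on a computer with dual numbers, pairs a + b·eps in which eps² is defined to be 0. This is ordinary forward-mode automatic differentiation rather than smooth infinitesimal analysis itself, and the names Dual and derivative below are illustrative, not taken from any particular library. A minimal Python sketch:

# Dual numbers a + b*eps with eps**2 == 0, mimicking a nilsquare infinitesimal.
class Dual:
    def __init__(self, real, eps=0.0):
        self.real = real   # standard ("real") part
        self.eps = eps     # coefficient of the nilsquare infinitesimal

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __sub__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real - other.real, self.eps - other.eps)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 == 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__


def derivative(f, x):
    # Evaluate f at x + eps and read off the eps-coefficient, which is f'(x).
    return f(Dual(x, 1.0)).eps


if __name__ == "__main__":
    f = lambda x: x * x          # f(x) = x**2
    print(derivative(f, 3.0))    # prints 6.0, i.e. f'(3) = 2*3

Running the example prints 6.0, matching f'(3) = 2·3 for f(x) = x².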

See also