Talk:Machine epsilon

Definition

This page defines the machine epsilon as "the smallest floating point number ε such that 1 + ε > 1". What I have seen more commonly is the definition as the largest floating point number ε such that 1 + ε = 1. In fact the equation provided gives the latter definition. Granted, the two definitions lead to numbers adjacent on the floating point number line, but I would like to see this article either switch to the other definition or else discuss the presence of two definitions in use. Any thoughts? --Jlenthe 01:01, 9 October 2006 (UTC)

Hi, a quick Google check turned up, as expected, some confirmations of the "smallest thingy making a difference" definition, not "biggest making no difference". IIRC that's also what I learned two and a half decades ago. 212.82.251.209 20:48, 3 December 2006 (UTC)
Yes; if it didn't make a difference, it wouldn't be an "epsilon". --Quuxplusone 01:05, 19 December 2006 (UTC)

The definition now appears to have changed away from both of those in the first comment of this section, to "an upper bound on the relative error due to rounding". That definition, assuming rounding to nearest, gives us half the value that is generally used, so for example 2^(-24) for IEEE binary32 instead of 2^(-23). The larger value appears as FLT_EPSILON in float.h for C/C++. Then in the table below, the standard definition is used in the last three columns for binary32 and binary64, in contradiction with the header. (As if to make the result match FLT_EPSILON and DBL_EPSILON??) 86.24.142.189 (talk) 17:21, 5 January 2013 (UTC)
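
For concreteness, here is a small C++ sketch printing the two values under discussion (FLT_EPSILON from float.h is the gap from 1.0f to the next representable float; half of that gap is the worst-case relative rounding error near 1 under round-to-nearest):

#include <cfloat>
#include <iostream>

int main()
{
        std::cout << FLT_EPSILON << std::endl;      // 1.19209e-07 = 2^(-23), the float.h value
        std::cout << FLT_EPSILON / 2 << std::endl;  // 5.96046e-08 = 2^(-24), the rounding bound
}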

Code or definition wrong

If you use the code with

float machEps = 1.00001f;

you get smaller numbers that still satisfy 1+eps>1; what is computed instead is the relative difference between two floating point numbers. --Mathemaduenn 10:20, 10 October 2006 (UTC)

I don't see any code in the current article with
float machEps = 1.0001f;
Therefore, I guess this has been resolved, and I'm removing the {{contradict}} tag now. --Quuxplusone 01:05, 19 December 2006 (UTC)
No, the point Mathemaduenn was trying to make was that if you start with machEps = 1.00001f rather than machEps = 1.0f then you end up with a smaller machine epsilon.
However, the fact that we know (from calculating 2^(-23)) the correct answer should be 1.19209E-07 suggests that the algorithm in "Approximation using C" is wrong. As far as I can tell, Mathemaduenn is claiming that the algorithm in "Approximation using C" produces the relative difference rather than the machine epsilon. This confusion of definitions is in fact covered in this article under "Other definitions". -- Tom Fitzhenry 13:43, 7 October 2009 (UTC) —Preceding unsigned comment added by 130.88.199.107 (talk)

Calculation wrong

This is wrong: "the difference between these numbers is 0.00000000000000000000001, or 2^(-23)". 0.00000000000000000000001 is 10^(-23), but that is probably not the right number. —Preceding unsigned comment added by 193.78.112.2 (talk) 06:43, 19 October 2007 (UTC)

It's a binary fraction, not a decimal fraction. I agree it's confusing, but I can't think of a better way to explain it. -- BenRG 19:52, 19 October 2007 (UTC)
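
To spell it out: reading the digit string in base 2 rather than base 10 gives

0.00000000000000000000001 (base 2) = 2^(-23) ≈ 1.1920929e-07

which is exactly the spacing of binary32 floats just above 1.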

References

The page lists 'David Goldberg (March 1991). "What Every Computer Scientist Should Know About Floating-Point Arithmetic".' as one of its references. That paper defines machine epsilon as the largest possible relative error when a real number is approximated by the floating point number closest to it ("..the largest of the bounds in (2) above.."). The formula given is as mentioned by Jlenthe above. So either that paper should be removed from the list of references, or the definition provided in it should also be mentioned. —Preceding unsigned comment added by Gautamsewani (talk · contribs) 16:11, 18 August 2008 (UTC)

C++ style cast

"we can simply take the difference of 1.0 and double(long long (1.0) + 1)."

It's problematic because double(long long(1.0) + 1) evaluates to 2. It should be a reinterpret_cast, but I don't think that would improve the readability. bungalo (talk) 09:32, 5 October 2009 (UTC)
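
For what it's worth, here is a sketch of the bit-level computation that sentence seems to be after, done with memcpy rather than a cast (a reinterpret_cast of the value itself is ill-formed, and pointer-based punning runs afoul of aliasing rules):

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
        double one = 1.0;
        std::uint64_t bits;
        std::memcpy(&bits, &one, sizeof bits);  // view the bits of 1.0 as an integer
        ++bits;                                 // next representable double above 1.0
        double next;
        std::memcpy(&next, &bits, sizeof next);
        std::cout << next - one << std::endl;   // 2.22045e-16, i.e. 2^(-52)
}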

About to Make Major Changes

As of October 20, 2009, the information on this page is wrong. Unless anyone has some strong thoughts to the contrary, in a few days I will make some major changes to put it right. Jfgrcar (talk) 03:54, 21 October 2009 (UTC)

I corrected this section today, October 25, 2009. I did not alter the examples, which are still wrong. —Preceding unsigned comment added by Jfgrcar (talk · contribs) 00:15, 26 October 2009 (UTC)

"How to determine machine epsilon"

The section "How to determine machine epsilon" contains strange implementations that try to approximate machine epsilon.

In standard C we can nowadays simply use the DBL_EPSILON constant from float.h. And more generally, you can use the nextafter family of functions from math.h; for example, "nextafter(1.0, 2.0) - 1.0" should evaluate to DBL_EPSILON if I'm not mistaken. (By the way, the C standard even gives an example showing what this constant should be if you use IEEE floating point numbers: DBL_EPSILON = 2.2204460492503131E-16 = 0X1P-52.)
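
A quick sketch of that check, in C++ for concreteness (the same facilities are exposed through <cmath> and <cfloat>):

#include <cfloat>
#include <cmath>
#include <iostream>

int main()
{
        // distance from 1.0 to the next representable double
        double eps = std::nextafter(1.0, 2.0) - 1.0;
        std::cout << (eps == DBL_EPSILON) << std::endl;  // prints 1 on IEEE 754 systems
        std::cout << eps << std::endl;                   // 2.22045e-16 = 2^(-52)
}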

In Java, we have methods like java.lang.Math.nextAfter and java.lang.Math.ulp. Again, no need to use approximations and iterations. — Miym (talk) 07:22, 26 October 2009 (UTC)

I think in Fortran you can call epsilon(one) (I know this because this line caused an error when I was trying to convert Fortran to C with f2c). 78.240.11.120 (talk) 13:49, 25 February 2012 (UTC)

"...do not provide methods to change the rounding mode..."

Section "Values for standard hardware floating point arithmetics" claims that "while the standard allows several methods of rounding, programming languages and systems vendors do not provide methods to change the rounding mode from the default: round-to-nearest with the tie-breaking scheme round-to-even."

This claim seems to be incorrect. First, "programming languages" do provide such methods: the C standard provides the functions fesetround and fegetround and macros such as FE_DOWNWARD and FE_UPWARD in fenv.h. Second, "system vendors" do implement these: I just tried in a standard GNU/Linux environment and they seem to work (mostly) as expected. — Miym (talk) 07:37, 26 October 2009 (UTC)
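
A minimal sketch of that experiment, in C++ via <cfenv> (the compiler has to be told to honour runtime rounding-mode changes, e.g. -frounding-math for GCC):

#include <cfenv>
#include <cmath>
#include <iostream>

int main()
{
        volatile double x = 1.0;
        volatile double y = std::ldexp(1.0, -60);  // 2^(-60), far below machine epsilon

        std::fesetround(FE_TONEAREST);             // the usual default
        std::cout << (x + y == 1.0) << std::endl;  // 1: the tiny addend rounds away

        std::fesetround(FE_UPWARD);
        std::cout << (x + y == 1.0) << std::endl;  // 0: the sum rounds up to 1 + 2^(-52)
}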

Machine epsilon

Why is the table at the article's beginning listing the machine epsilon as pow(2, -53), when the calculation (correctly) arrives at the conclusion that it is indeed pow(2, -52) (for double precision, i.e. p = 52)? —Preceding unsigned comment added by 109.90.227.146 (talk) 21:14, 28 September 2010 (UTC)

Conversion of "Approximation using Java" to Matlab code

I don't know if this is worth putting on the page, but I've just converted the Java estimation code for the double type to Matlab. If it's worth adding, here it is, to save other people converting it.

function calculateMachineEpsilonDouble()
    machEps = double(1.0);
    notDone = true;
    while notDone
        machEps = machEps/2.0;
        notDone = (double(1.0 + (machEps/2.0)) ~= 1.0);
    end
    fprintf('Machine Epsilon = %s\n', num2str(machEps));
end

137.108.145.10 (talk) 15:29, 3 February 2011 (UTC)

In MATLAB there is an eps() function that will give you back the machine epsilon. Gicmo (talk) 22:41, 25 January 2012 (UTC)

Approximation using C#

I ported the C version into a C# version. I don't know if this would be valuable or worth adding to the article, so I'm adding it here and letting someone else make the judgement call.

static void Main(string[] args)
{
    float machEps = 1.0f;
    do
    {
        Console.WriteLine(machEps.ToString("f10") + "\t" + (1.0f + machEps).ToString("f15"));
        machEps /= 2.0f;
    }while((float)(1.0f + (machEps / 2.0f)) != 1.0f);
    Console.WriteLine("Calculated Machine Epsilon: " + machEps.ToString("f15"));
}

Approximation using Prolog

This Prolog code approximates the machine epsilon.

epsilon(X):-
	Y is (1.0 + X),
	Y = 1.0,
	write(X).
epsilon(X):-
	Y is X/2,
	epsilon(Y).

An example execution in SWI-Prolog:

1 ?- epsilon(1.0).
1.1102230246251565e-16
true .

--201.209.40.226 (talk) 04:23, 30 July 2011 (UTC)

In practice

"The following are encountered in practice?" Whose practice? If you ask for the machine epsilon of a double in any programming language I can think of that has a specific function for it (eps functions in Matlab and Octave, finfo in numpy, std::numeric_limits<double>::eps() in C++), then you get , not . Yes, I realise that there are other definitions you can use, but I find it very weird to say that "in practice" you find these numbers, whereas actual people who have to deal with epsilon deal with the smallest step you can take in the mantissa for a given exponent. JordiGH (talk) 01:08, 1 September 2011 (UTC)[reply]

inconsistency

The article states for double "64-bit doubles give 2.220446e-16, which is 2^(-52) as expected." but the table at the top lists 2^(-53). Brianbjparker (talk) 17:23, 8 March 2012 (UTC)

confusing definition

The definition of machine epsilon given uses a definition of precision p that excludes the implicit bit, so e.g. for double it uses p = 52 rather than the usual definition p = 53. This is very confusing -- the definition and table should be changed to use the standard definition of p including the implicit bit. Brianbjparker (talk) 17:39, 8 March 2012 (UTC)

I changed the definition of p throughout to refer to IEEE 754 precision p -- i.e. including the implicit bit. Brianbjparker (talk) 23:21, 8 March 2012 (UTC)

Simpler Python

The Python example uses numpy, which is not always installed. Wouldn't the following example, using the standard sys module, be more relevant, even if restricted to floats?

    In [1]: import sys
    In [2]: sys.float_info.epsilon
    Out[2]: 2.220446049250313e-16

Frédéric Grosshans (talk) 19:49, 15 March 2012 (UTC)

C++ sample

The programs given here all use an approximation; note that when calculating with 'double' you are actually dealing with 80-bit numbers (not 64-bit). The 64 bits are only the storage format; what the processor does is calculate with 80-bit (or more) precision. The simplest way to calculate machine epsilon is as follows:

for double:

#include <iostream>
#include <stdint.h>
#include <iomanip>

int main()
{
        union
        {
                double f;
                uint64_t i;
        } u1, u2, u3;

        u1.i = 0x3ff0000000000000ull;
        // one (exponent set to 1023 and mantissa to zero;
        // one bit of the mantissa is implicit)

        u2.i = 0x3ff0000000000001ull;
        // one and a little more

        u3.f = u2.f - u1.f;

        std::cout << "machine epsilon: " << std::setprecision(18) << u3.f << std::endl;
}

The above program gives:

/home/tomek/roboczy/test$ g++ -O2 -o test test.cpp && ./test
machine epsilon: 2.22044604925031308e-16

And a sample for 32-bit (float):

#include <iostream>
#include <stdint.h>
#include <iomanip>

int main()
{
        union
        {
                float f;
                uint32_t i;
        } u1, u2, u3;

        u1.i = 0x3F800000;
        u2.i = 0x3F800001;
        u3.f = u2.f - u1.f;

        std::cout << "machine epsilon: " << std::setprecision(18) << u3.f << std::endl;
}

And it gives:

/home/tomek/roboczy/test$ g++ -O2 -o test test.cpp && ./test
machine epsilon: 1.1920928955078125e-07

Of course, the above *decimal* values are only approximations, so I don't see the sense in printing them directly. Also, C++ users can use std::numeric_limits to get those constants:

       std::cout << std::numeric_limits<float>::epsilon() << std::endl;
       std::cout << std::numeric_limits<double>::epsilon() << std::endl;
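
(On an IEEE 754 machine these print 1.19209e-07 and 2.22045e-16 at the default output precision.)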


Mathematical proof

And next, a sample of how to calculate it in a more 'mathematical' manner. Let's talk about double:

        S EEEEEEEEEEE FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
        0 1        11 12                                                63

The value V represented by the word may be determined as follows:

* If E=2047 and F is nonzero, then V=NaN ("Not a number")
* If E=2047 and F is zero and S is 1, then V=-Infinity
* If E=2047 and F is zero and S is 0, then V=Infinity
* If 0<E<2047 then V=(-1)**S * 2**(E-1023) * (1.F) where "1.F" is intended
  to represent the binary number created by prefixing F with an implicit
  leading 1 and a binary point.
* If E=0 and F is nonzero, then V=(-1)**S * 2**(-1022) * (0.F) These are
  "unnormalized" values.
* If E=0 and F is zero and S is 1, then V=-0
* If E=0 and F is zero and S is 0, then V=0

So if you want to set a double to 1.0, you have to set the exponent to 1023 and the mantissa to zero (one bit is implicit), e.g.

0 01111111111 0000000000000000000000000000000000000000000000000000

If you want a 'little' more than one, you have to change the last bit to one:

0 01111111111 0000000000000000000000000000000000000000000000000001

The last bit above has value 2^(-52) (not 2^(-53)), and the machine epsilon can be calculated in this way: (1 + 2^(-52)) - 1 = 2^(-52).

I have changed the values in the first table (binary32, binary64); the rest I don't have time to test.

--Tsowa (talk) 13:14, 3 November 2012 (UTC)