
Floating-point arithmetic

From Wikipedia, the free encyclopedia

Revision as of 16:31, 26 May 2006

A floating-point number is a digital representation for a number in a certain subset of the rational numbers, and is often used to approximate an arbitrary real number on a computer. In particular, it represents an integer or fixed-point number (the significand or, informally, the mantissa) multiplied by a base (usually 2 in computers) to some integer power (the exponent). When the base is 2, it is the binary analogue of scientific notation (in base 10).

A floating-point calculation is an arithmetic operation on floating-point numbers. This often involves some approximation or rounding because the result of an operation may not be exactly representable—floating-point numbers are of limited precision and can therefore only represent a finite set of values, and if a result is not exactly one of those values then a choice of which value to use has to be made, and the result will then be inexact.

A floating-point number a can be represented by two numbers m and e, such that a = m × b^e. In any such system we pick a base b (called the base of numeration, also the radix) and a precision p (how many digits to store). m (which is called the significand or, informally, mantissa) is either a p-digit or (p+1)-digit number (in the IEEE floating-point standard, there is usually an implicit binary 1 to the left of the binary point and p digits to the right) of the form ±d.ddd...ddd (each digit being a digit in the base b). If the leading digit of m is non-zero then the number is said to be normalized. Some descriptions use a separate sign bit (s, which represents −1 or +1) and require m to be positive. e is called the exponent.
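
In C, the standard library function frexp performs exactly this kind of decomposition for b = 2, though it normalizes m into [0.5, 1) rather than the ±d.ddd... form described above. A minimal sketch:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double a = 6.75;
        int e;
        double m = frexp(a, &e);   /* a == m * 2^e, with 0.5 <= |m| < 1 */
        printf("%g = %g * 2^%d\n", a, m, e);   /* 6.75 = 0.84375 * 2^3 */
        return 0;
    }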

This scheme allows a large range of magnitudes to be represented within a given size of field, which is not possible in a fixed-point notation.

As an example, a floating-point number with four decimal digits (b = 10, p = 4) and an exponent range of ±4 could be used to represent 43210, 4.321, or 0.0004321, but would not have enough precision to represent 432.123 and 43212.3 (which would have to be rounded to 432.1 and 43210). Of course, in practice, the number of digits is usually larger than four.
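
The rounding in this example can be imitated with printf's %.4g conversion, which keeps four significant decimal digits (only a display-level stand-in for true b = 10, p = 4 arithmetic):

    #include <stdio.h>

    int main(void) {
        printf("%.4g\n", 4.321);     /* 4.321     (fits exactly)        */
        printf("%.4g\n", 432.123);   /* 432.1     (rounded)             */
        printf("%.4g\n", 43212.3);   /* 4.321e+04, i.e. 43210 (rounded) */
        return 0;
    }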

In addition, floating-point representations often include the special values +∞, −∞ (positive and negative infinity), and NaN ('Not a Number'). Infinities are used when results are too large to be represented, and NaNs indicate an invalid operation or undefined result.
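
These special values behave like ordinary operands; a small C sketch (isinf and isnan are standard in <math.h> since C99):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double big = 1e308;
        double pos_inf = big * 10.0;     /* too large for a double -> +infinity */
        double nan_val = sqrt(-1.0);     /* invalid operation -> NaN            */
        printf("%g %g %g\n", pos_inf, -pos_inf, nan_val);   /* inf -inf nan */
        printf("isinf: %d  isnan: %d\n", isinf(pos_inf), isnan(nan_val));
        return 0;
    }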

Usage in computing

While in the examples above the numbers are represented in the decimal system (that is, the base of numeration b = 10), computers usually work in the binary system, which means that b = 2. In computers, floating-point numbers are sized by the number of bits used to store them. This size is usually 32 bits or 64 bits, often called "single-precision" and "double-precision". A few machines offer larger sizes; Intel FPUs such as the Intel 8087 (and its descendants integrated into the x86 architecture) offer 80-bit floating-point numbers for intermediate results, and several systems offer 128-bit floating-point, generally implemented in software. The calculator at http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html can be used to compute the floating-point representation of a decimal number.
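
In C these sizes typically correspond to float, double, and (on x86) long double, though the mapping is implementation-defined; a quick check:

    #include <stdio.h>

    int main(void) {
        /* Typical x86 output: 32, 64, and 96 or 128 (the 80-bit x87
           format is usually stored with padding). */
        printf("float:       %zu bits\n", sizeof(float) * 8);
        printf("double:      %zu bits\n", sizeof(double) * 8);
        printf("long double: %zu bits\n", sizeof(long double) * 8);
        return 0;
    }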

Problems with floating-point

Floating-point numbers usually behave very similarly to the real numbers they are used to approximate. However, this can easily lead programmers into over-confidently ignoring the need for numerical analysis. There are many cases where floating-point numbers do not model real numbers well, even in simple cases such as representing the decimal fraction 0.1, which cannot be exactly represented in any binary floating-point format. For this reason, financial software tends not to use a binary floating-point number representation. See: http://www2.hursley.ibm.com/decimal/
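
The 0.1 problem is easy to observe: printing 17 significant digits is enough to expose the value actually stored (IEEE 754 double precision assumed):

    #include <stdio.h>

    int main(void) {
        double tenth = 0.1;
        printf("%.17g\n", tenth);      /* 0.10000000000000001 */

        double sum = 0.0;
        for (int i = 0; i < 10; i++)   /* ten copies of 0.1 */
            sum += tenth;
        printf("%.17g\n", sum);        /* 0.99999999999999989, not 1 */
        return 0;
    }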

Errors in floating-point computation can include (several of these are demonstrated in the sketch after this list):

  • Rounding
    • Non-representable numbers: for example, the literal 0.1 cannot be represented exactly by a binary floating-point number
    • Rounding of arithmetic operations: for example 2/3 might yield 0.6666667
    • Truncation of the fractional part in conversions to integer: for example (int)(.6/.2) may yield 2 rather than 3
  • Absorption: 1×10^15 + 1 = 1×10^15
  • Cancellation: subtraction between nearly equivalent operands
  • Overflow, which usually yields an infinity
  • Underflow (often defined as an inexact tiny result outside the range of the normal numbers for a format), which yields zero, a subnormal number, or the smallest normal number
  • Invalid operations (such as an attempt to calculate the square root of a negative number). Invalid operations yield a result of NaN (not a number).
  • Rounding errors: unlike the fixed-point counterpart, the application of dither in a floating-point environment is nearly impossible. See the external references for more information about the difficulty of applying dither and about rounding error problems in floating-point systems.
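
Several of these effects can be triggered deliberately. A minimal C sketch (IEEE 754 single and double precision assumed; printed values are typical, not guaranteed):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Absorption: in single precision (~7 significant digits),
           adding 1 to 1e15 changes nothing. */
        float big = 1e15f;
        float absorbed = big + 1.0f;     /* rounds back to big */
        printf("absorption:   %d\n", absorbed == big);         /* 1 */

        /* Cancellation: subtracting nearly equal operands leaves
           mostly rounding error; the mathematically exact ratio is 1. */
        double x = 1.0 + 1e-15, y = 1.0;
        printf("cancellation: %g\n", (x - y) / 1e-15);         /* ~1.11 */

        double huge = 1e308, tiny = 1e-308;
        printf("overflow:     %g\n", huge * 10.0);             /* inf */
        printf("underflow:    %g\n", tiny / 1e10);             /* subnormal, ~1e-318 */
        printf("invalid:      %g\n", sqrt(-1.0));              /* nan */
        return 0;
    }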

Floating point representation is more likely to be appropriate when proportional accuracy over a range of scales is needed. When fixed accuracy is required, fixed point is usually a better choice.

Properties of floating point arithmetic

Arithmetic using the floating point number system has two important properties that differ from those of arithmetic using real numbers.

Floating point arithmetic is not associative. This means that in general, for floating point numbers x, y, and z:

    (x + y) + z ≠ x + (y + z)

Floating point arithmetic is also not distributive. This means that in general:

    x × (y + z) ≠ (x × y) + (x × z)

In short, the order in which operations are carried out can change the output of a floating point calculation. This is important in numerical analysis since two mathematically equivalent formulas may not produce the same numerical output, and one may be substantially more accurate than the other.

For example, with most floating-point implementations, (1e100 - 1e100) + 1.0 will give the result 1.0, whereas (1e100 + 1.0) - 1e100 gives 0.0.
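
The same experiment is easy to run; a C sketch (IEEE 754 double precision assumed; standard C forbids the compiler from reassociating floating-point expressions, so the two orderings really are evaluated differently):

    #include <stdio.h>

    int main(void) {
        double huge = 1e100;
        printf("(1e100 - 1e100) + 1.0 = %g\n", (huge - huge) + 1.0);  /* 1 */
        printf("(1e100 + 1.0) - 1e100 = %g\n", (huge + 1.0) - huge);  /* 0 */
        return 0;
    }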

The reason for this has to do with the range-versus-precision trade-off inherent to floating point formats. By the nature of floating point representation, the larger a value is, the less precise it is in absolute terms. In the example above, the significand of 1e100 would be a 1 followed by a decimal point and then a string of zeros; however, those zeros do not represent the tenths, hundredths, and thousandths of the value, they represent the digits in the 10^99, 10^98, 10^97, ... places. To store the exact result of the intermediate operation (1e100 + 1.0), the significand would need 100 decimal digits (or 101, depending on whether the leading 1 is implicit in the format): enough to reach all the way from the most significant digit, in the 10^100 place, down to the least significant digit, the 1 that was added, which lives in the units place.
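
The growing gap between adjacent representable values can be measured with the standard nextafter function; a C sketch for doubles:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Distance from x to the next larger representable double */
        printf("gap near 1.0:   %g\n", nextafter(1.0,   INFINITY) - 1.0);    /* ~2.2e-16 */
        printf("gap near 1e15:  %g\n", nextafter(1e15,  INFINITY) - 1e15);   /* 0.125    */
        printf("gap near 1e100: %g\n", nextafter(1e100, INFINITY) - 1e100);  /* ~1.9e84  */
        return 0;
    }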

IEEE standard

The IEEE has standardized the computer representation for binary floating-point numbers in IEEE 754. This standard is followed by almost all modern machines. Notable exceptions include IBM mainframes, which have both hexadecimal and IEEE 754 data types, and Cray vector machines: the T90 series had an IEEE version, but the SV1 still uses the Cray floating-point format.

As of 2006, the IEEE 754 standard is under revision. See: IEEE 754r

Examples

  • The value of pi, π = 3.1415926..._10 in decimal, is equivalent to binary 11.001001000011111..._2. When represented in a computer that allocates 17 bits for the significand, it becomes 0.11001001000011111 × 2^2. Hence the floating-point representation would start with the bits 011001001000011111 and end with the bits 10 (which represent the exponent 2 in the binary system). The first zero indicates a positive number; the ending 10_2 = 2_10. (For contrast, the sketch after this list inspects the corresponding IEEE single-precision encoding.)
  • The value of -0.375_10 = -0.011_2, or -0.11 × 2^-1. In two's complement notation, −1 is represented as 11111111 (assuming 8 bits are used for the exponent). In floating-point notation, the number would start with a 1 for the sign bit, followed by 110000..., and then end with 11111111: 1110...011111111 (where ... are zeros).
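
The bits of an actual IEEE 754 single-precision value can be inspected by copying them into an integer (a C sketch; the field layout shown is the IEEE one: sign, then exponent, then significand):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float f = 3.14159265358979f;        /* pi rounded to single precision */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);     /* reinterpret the 32 bits */
        printf("pi as IEEE single: 0x%08x\n", (unsigned) bits);   /* 0x40490fdb */
        printf("sign %u, biased exponent %u, stored significand 0x%06x\n",
               (unsigned) (bits >> 31),            /* 0                      */
               (unsigned) ((bits >> 23) & 0xff),   /* 128, i.e. 1 + bias 127 */
               (unsigned) (bits & 0x7fffff));      /* 0x490fdb               */
        return 0;
    }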

Hidden bit

When using binary (b = 2), one bit, called the hidden bit or the implied bit, can be omitted if all numbers are required to be normalized. The leading digit (most significant bit) of the significand of a normalized binary floating-point number is always non-zero; in particular it is always 1. This means that this bit does not need to be stored explicitly, since for a normalized number it can be understood to be 1.
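
The hidden bit can be re-attached by hand when decoding; a C sketch for IEEE single precision (normal numbers only, where the leading 1 is implied):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float f = 6.5f;                      /* 6.5 = 1.625 * 2^2 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);

        uint32_t frac = bits & 0x7fffff;                    /* 23 stored bits  */
        int      exp  = (int) ((bits >> 23) & 0xff) - 127;  /* remove the bias */

        double m = 1.0 + frac / 8388608.0;   /* re-attach hidden 1; 8388608 = 2^23 */
        printf("%g = %g * 2^%d\n", f, m, exp);   /* 6.5 = 1.625 * 2^2 */
        return 0;
    }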

The IEEE 754 standard exploits this fact. Requiring all numbers to be normalized means that 0 cannot be represented; typically some special representation of zero is chosen. In the IEEE standard this special code also encompasses denormal numbers, which allow for gradual underflow. The normalized numbers are also known as the normal numbers.

Note

Although the examples in this article use a consistent system of floating-point notation, the notation is different from the IEEE standard. For example, in IEEE 754, the exponent is between the sign bit and the significand, not at the end of the number. Also the IEEE exponent uses a biased integer instead of a two's complement number. The reader should note that the examples serve the purpose of illustrating how floating-point numbers could be represented, but the actual bits shown in the article are different from those in an IEEE 754-compliant representation. The placement of the bits in the IEEE standard enables two floating-point numbers to be compared bitwise (sans sign bit) to yield a result without interpreting the actual values. The arbitrary system used in this article cannot do the same.
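
That ordering property can be verified directly for positive values; a C sketch (positive, finite IEEE singles only; negative numbers would need extra handling because IEEE uses a sign-magnitude layout rather than two's complement):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    static uint32_t float_bits(float f) {   /* bit pattern of a float */
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return u;
    }

    int main(void) {
        float a = 1.5f, b = 2.25f;
        /* For positive IEEE floats, integer order matches numeric order */
        printf("%d %d\n", a < b, float_bits(a) < float_bits(b));   /* 1 1 */
        return 0;
    }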
