Arithmetic underflow
The term arithmetic underflow (also floating point underflow, or just underflow) is a condition in a computer program where the result of a calculation is a number of smaller absolute value than the computer can actually represent in memory on its central processing unit (CPU).
Arithmetic underflow can occur when the true result of a floating point operation is smaller in magnitude (that is, closer to zero) than the smallest value representable as a normal floating point number in the target datatype.[1] Underflow can in part be regarded as negative overflow of the exponent of the floating point value. For example, if the exponent part can represent values from −128 to 127, then a result whose exponent would need to be smaller than −128 may cause underflow.
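As a minimal sketch, assuming IEEE 754 binary64 doubles and a C implementation exposing them through <float.h>, dividing the smallest positive normal value by a power of two produces a result that is too small to be a normal number:

```c
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    double smallest_normal = DBL_MIN;      /* smallest positive normal double, about 2.2e-308 */
    double below = smallest_normal / 16.0; /* true result is smaller in magnitude than DBL_MIN: underflow */

    printf("DBL_MIN      = %g\n", smallest_normal);
    printf("DBL_MIN / 16 = %g\n", below);
    printf("subnormal?     %s\n", fpclassify(below) == FP_SUBNORMAL ? "yes" : "no");
    return 0;
}
```

On an IEEE 754 system the quotient is still representable, but only as a subnormal number rather than a normal one (see the underflow gap below).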
Storing values that are too low in an integer variable (e.g., attempting to store −1 in an unsigned integer) is properly referred to as integer overflow, or more broadly, integer wraparound, and is a separate issue; the term underflow normally refers to floating point numbers only. In most floating-point designs it is not possible to store a too-low value, as they are usually signed and have a negative infinity value.
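The distinction can be sketched in C, assuming a 32-bit unsigned int and IEEE 754 doubles (both typical, neither guaranteed by the language):

```c
#include <stdio.h>

int main(void) {
    unsigned int u = 0u;
    u = u - 1u;                      /* integer wraparound: the value wraps to UINT_MAX */
    printf("0u - 1u = %u\n", u);     /* prints 4294967295 with a 32-bit unsigned int */

    double d = -1.0e308 * 10.0;      /* a floating point result that is too low (too negative) */
    printf("-1e308 * 10 = %g\n", d); /* prints -inf on IEEE 754 systems; this is overflow, not underflow */
    return 0;
}
```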
Underflow gap
The interval between −fminN and fminN, where fminN is the smallest positive normal floating point value, is called the underflow gap. This is because the size of this interval is many orders of magnitude larger than the distance between adjacent normal floating point values just outside the gap. For instance, if the floating point datatype can represent 20 bits, the underflow gap is 2²¹ times larger than the absolute distance between adjacent floating point values just outside the gap.[2]
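For IEEE 754 binary64 the corresponding ratio can be computed directly. The sketch below, assuming <math.h> provides nextafter, compares the width of the gap with the spacing of values just above it:

```c
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    double fminN   = DBL_MIN;                        /* smallest positive normal double */
    double spacing = nextafter(fminN, 1.0) - fminN;  /* distance to the next representable value above it */

    printf("width of the gap from 0 to DBL_MIN: %g\n", fminN);
    printf("spacing just outside the gap:       %g\n", spacing);
    printf("ratio:                              %g\n", fminN / spacing); /* about 2^52 for binary64 */
    return 0;
}
```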
In older designs, the underflow gap had just one usable value, zero. When an underflow occurred, the true result was replaced by zero (either directly by the hardware, or by system software handling the primary underflow condition). This replacement is called "flush to zero".
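Flush to zero can still be observed on some hardware. The following sketch is x86-specific and assumes SSE arithmetic with the <xmmintrin.h> intrinsics; once the FTZ bit is set, a result that falls into the underflow gap is replaced by zero:

```c
#include <stdio.h>
#include <float.h>
#include <xmmintrin.h>

int main(void) {
    volatile double x = DBL_MIN;       /* volatile keeps the divisions from being evaluated at compile time */

    volatile double before = x / 2.0;  /* default mode: a small nonzero result */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    volatile double after = x / 2.0;   /* flush-to-zero mode: the result is replaced by 0 */

    printf("default mode:  %g\n", before);
    printf("flush to zero: %g\n", after);
    return 0;
}
```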
The 1984 edition of IEEE 754 introduced subnormal numbers. The subnormal numbers (including zero) fill the underflow gap with values where the absolute distance between adjacent values is the same as for adjacent values just outside the underflow gap. This enables "gradual underflow", where a nearest subnormal value is used, just as a nearest normal value is used when possible. Even when using gradual underflow, the nearest value may be zero.[3]
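The effect of gradual underflow can be sketched in C, assuming IEEE 754 binary64 with subnormals enabled (the default on most platforms): repeatedly halving the smallest normal value passes through 52 subnormal magnitudes before the nearest representable value finally becomes zero.

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    double x = DBL_MIN;   /* smallest positive normal double, 2^-1022 */
    int halvings = 0;

    /* With gradual underflow, halving keeps producing nonzero subnormal values
       until even the smallest subnormal (2^-1074) is halved, at which point
       the nearest representable value is zero. */
    while (x != 0.0) {
        x /= 2.0;
        ++halvings;
    }
    printf("halvings from DBL_MIN to zero: %d\n", halvings); /* 53 here; it would be 1 under flush to zero */
    return 0;
}
```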
The absolute distance between adjacent floating point values just outside the gap is called the machine epsilon, typically characterized by the largest value whose sum with the value 1 will result in the answer with value 1 in that floating point scheme.[4] This can be written as fl(1 + ε) = 1, where fl() is a function which converts the real value into the floating point representation. While the machine epsilon is not to be confused with the underflow level (assuming subnormal numbers), it is closely related. The machine epsilon is dependent on the number of bits which make up the significand, whereas the underflow level depends on the number of digits which make up the exponent field. In most floating point systems, the underflow level is smaller than the machine epsilon.
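A rough C illustration, assuming IEEE 754 binary64 arithmetic evaluated at double precision and round-to-nearest, compares the epsilon constant reported by <float.h> (defined as the spacing at 1.0, which differs from the characterization above by a factor of two) with the underflow level:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* DBL_EPSILON is the spacing between 1.0 and the next larger double (2^-52). */
    printf("DBL_EPSILON = %g\n", DBL_EPSILON);

    volatile double a = 1.0 + DBL_EPSILON;        /* distinguishable from 1.0 */
    volatile double b = 1.0 + DBL_EPSILON / 2.0;  /* rounds back to exactly 1.0 */
    printf("1 + eps   > 1 ? %d\n", a > 1.0);
    printf("1 + eps/2 == 1 ? %d\n", b == 1.0);

    /* The underflow level is many orders of magnitude smaller than the machine epsilon. */
    printf("DBL_MIN     = %g\n", DBL_MIN);
    return 0;
}
```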
Handling of underflow
The occurrence of an underflow may set a ("sticky") status bit, raise an exception, generate an interrupt at the hardware level, or cause some combination of these effects.
As specified in IEEE 754, the underflow condition is only signaled if there is also a loss of precision, typically determined by the final result being inexact. However, if the user is trapping on underflow, the trap may be taken regardless of any loss of precision. The default handling in IEEE 754 for underflow (as well as for other exceptions) is to record in a floating point status flag that underflow has occurred. This is specified at the application-programming level, but it is often also interpreted as how the condition should be handled at the hardware level.
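The sticky status flag behaviour can be observed from C through <fenv.h>, assuming an IEEE 754 floating point environment. The sketch below clears the flags, forces a result that is both below the normal range and inexact, and then tests the underflow flag:

```c
#include <stdio.h>
#include <fenv.h>
#include <float.h>

/* Formally required when inspecting the floating point environment;
   some compilers accept the code without honouring the pragma. */
#pragma STDC FENV_ACCESS ON

int main(void) {
    feclearexcept(FE_ALL_EXCEPT);

    volatile double x = DBL_MIN;  /* volatile keeps the operation from being folded at compile time */
    volatile double y = x / 3.0;  /* true result is below DBL_MIN and inexact */

    printf("y = %g\n", y);
    printf("underflow flag: %d\n", fetestexcept(FE_UNDERFLOW) != 0);
    printf("inexact flag:   %d\n", fetestexcept(FE_INEXACT) != 0);
    return 0;
}
```

Dividing DBL_MIN by three makes the result both tiny and inexact, so both flags are expected to be set; a tiny but exact result (such as DBL_MIN / 2) would, under the default handling described above, not signal underflow.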
See also
- Denormal number
- Floating-point arithmetic
- IEEE 754
- Integer overflow
- Logarithmic number system
- Machine epsilon
- Normal number (computing)
References
- ^ Coonen, Jerome T (1980). "An implementation guide to a proposed standard for floating-point arithmetic". Computer. 13 (1): 68–79. doi:10.1109/mc.1980.1653344. S2CID 206445847.
- ^ Sun Microsystems (2005). Numerical Computation Guide. Oracle. Retrieved 21 April 2018.
- ^ Demmel, James (1984). "Underflow and the Reliability of Numerical Software". SIAM Journal on Scientific and Statistical Computing. 5 (4): 887–919. doi:10.1137/0905062.
- ^ Heath, Michael T. (2002). Scientific Computing (Second ed.). New York: McGraw-Hill. p. 20. ISBN 0-07-239910-4.