Normal number (computing)
In computing, a normal number is a non-zero number in a floating-point representation that falls within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
The magnitude of the smallest normal number in a format is given by b^emin, where b is the base (radix) of the format (usually 2 or 10) and emin depends on the size and layout of the format.
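A minimal sketch of this relation for the common binary64 ("double") format, assuming Python's float is binary64 on the host platform; the parameters b = 2 and emin = −1022 are those of binary64:

```python
import sys

# IEEE 754 binary64 parameters (assumed for this sketch):
b = 2          # base (radix)
emin = -1022   # minimum exponent of the format

smallest_normal = b ** emin   # b^emin, per the formula above

print(smallest_normal)                         # 2.2250738585072014e-308
print(smallest_normal == sys.float_info.min)   # True where float is binary64
```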
Similarly, the magnitude of the largest normal number in a format is given by
- b^emax × (b − b^(1−p)),
where p is the precision of the format in digits and emax is (−emin) + 1.
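The same kind of check can be done for the largest binary64 normal number, again under the assumption that Python's float is binary64 (p = 53, emax = 1023):

```python
import sys

# IEEE 754 binary64 parameters (assumed for this sketch):
b = 2             # base (radix)
p = 53            # precision in digits
emin = -1022
emax = -emin + 1  # 1023, per the relation above

largest_normal = b ** emax * (b - b ** (1 - p))

print(largest_normal)                        # 1.7976931348623157e+308
print(largest_normal == sys.float_info.max)  # True where float is binary64
```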
For example, in the smallest decimal format (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96.
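Those bounds follow from the formulas above with the decimal32 parameters (b = 10, p = 7, emin = −95, emax = 96); a quick exact-arithmetic check, written here with Python's fractions module to avoid binary rounding:

```python
from fractions import Fraction

# decimal32 parameters (the smallest IEEE 754 decimal format):
b = Fraction(10)
p = 7
emin = -95
emax = -emin + 1   # 96

smallest_normal = b ** emin                      # 10^-95
largest_normal = b ** emax * (b - b ** (1 - p))  # 9.999999 * 10^96

print(smallest_normal == Fraction(1, 10 ** 95))        # True
print(largest_normal == 9999999 * Fraction(10) ** 90)  # True
```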
Non-zero numbers smaller in magnitude than the smallest normal number are called denormal (or subnormal) numbers. Zero is neither normal nor subnormal.
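Assuming once more that Python's float is binary64, the region between zero and the smallest normal number can be probed directly: halving the smallest normal number yields a subnormal value rather than zero (gradual underflow).

```python
import math
import sys

# Smallest positive normal number (2^-1022 for binary64, assumed here).
smallest_normal = sys.float_info.min

# Halving it does not underflow to zero; the result is subnormal.
subnormal = smallest_normal / 2
print(0.0 < subnormal < smallest_normal)   # True

# The smallest positive subnormal in binary64 is 2^-1074 (ldexp scales exactly).
print(math.ldexp(1.0, -1074))              # 5e-324
```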