Sign bit

From Wikipedia, the free encyclopedia


In computer science, the sign bit is a bit in a signed number representation that indicates the sign of a number. Only signed numeric data types have a sign bit, and it usually occupies the leftmost position, where the most significant bit of an unsigned number would reside. Floating-point numbers in IEEE format are always signed, with the sign bit in the leftmost position. Typically, a sign bit of 1 means the number is negative (in the case of two's complement integers) or non-positive (for ones' complement integers, sign-and-magnitude integers, and floating-point numbers), while 0 indicates a non-negative number.
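As a minimal illustration of the sign bit's position, the following Python sketch (function names are chosen here for illustration) extracts the leftmost bit of an 8-bit two's complement integer and of an IEEE 754 double:

```python
import struct

def sign_bit_int8(x: int) -> int:
    """Return the sign bit (the most significant of 8 bits)
    of a value interpreted as two's complement."""
    return (x & 0xFF) >> 7

def sign_bit_double(x: float) -> int:
    """Return the sign bit of an IEEE 754 double: the leftmost
    of its 64 bits."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return bits >> 63

print(sign_bit_int8(-1))      # -1 is 0xFF in 8 bits -> sign bit 1
print(sign_bit_int8(5))       # 0x05 -> sign bit 0
print(sign_bit_double(-0.0))  # IEEE 754 has a signed zero -> sign bit 1
```

Note that `-0.0` has its sign bit set even though it compares equal to zero, which is why the sign bit in floating-point formats indicates "non-positive" rather than strictly "negative".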

In the two's complement representation, the sign bit has the weight −2^(w−1), where w is the number of bits. In the ones' complement representation, the most negative value is 1 − 2^(w−1), but there are two representations of zero, one for each value of the sign bit. In a sign-and-magnitude representation of numbers, the value of the sign bit determines whether the numerical value is positive or negative (Bryant 2003, pp. 52–54).
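The three representations can be compared with a short Python sketch (function names are illustrative) that interprets the same w-bit pattern under each convention:

```python
def twos_complement_value(bits: int, w: int) -> int:
    """Two's complement: the sign bit carries weight -2**(w-1),
    so subtract 2**w when it is set."""
    sign = (bits >> (w - 1)) & 1
    return bits - (sign << w)

def ones_complement_value(bits: int, w: int) -> int:
    """Ones' complement: negative values are bitwise complements,
    so subtract 2**w - 1 when the sign bit is set."""
    sign = (bits >> (w - 1)) & 1
    return bits - sign * ((1 << w) - 1)

def sign_magnitude_value(bits: int, w: int) -> int:
    """Sign-and-magnitude: the sign bit only flips the sign
    of the remaining w-1 magnitude bits."""
    sign = (bits >> (w - 1)) & 1
    magnitude = bits & ((1 << (w - 1)) - 1)
    return -magnitude if sign else magnitude

# The all-ones 8-bit pattern means three different things:
print(twos_complement_value(0b11111111, 8))  # -1
print(ones_complement_value(0b11111111, 8))  # 0 (the "negative zero")
print(sign_magnitude_value(0b11111111, 8))   # -127
```

The pattern `0b10000000` evaluates to −128 = −2^(w−1) in two's complement, matching the stated weight of the sign bit, while in ones' complement the most negative 8-bit value is 1 − 2^7 = −127.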

When an 8-bit value is added to a 16-bit value using signed arithmetic, the processor propagates the sign bit through the high-order half of the 16-bit register holding the 8-bit value – a process called sign extension or sign propagation.[1] Sign extension is used whenever a smaller signed data type needs to be converted into a larger signed data type while retaining its original numerical value (Bryant 2003, pp. 61–62).
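Sign extension can be sketched in Python as follows (the function name is illustrative); the sign bit of the narrow value is copied into every new high-order bit position:

```python
def sign_extend(value: int, from_bits: int, to_bits: int) -> int:
    """Widen a from_bits-wide two's complement pattern to to_bits
    by replicating its sign bit into the new high-order bits."""
    sign = (value >> (from_bits - 1)) & 1
    if sign:
        # Fill the upper (to_bits - from_bits) bits with ones.
        value |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return value & ((1 << to_bits) - 1)

# 8-bit -1 (0xFF) widened to 16 bits stays -1 (0xFFFF);
# positive values are simply zero-filled.
print(hex(sign_extend(0xFF, 8, 16)))  # 0xffff
print(hex(sign_extend(0x7F, 8, 16)))  # 0x7f
```

Because 0xFFFF interpreted as 16-bit two's complement is still −1, the widened value keeps its original numerical meaning, which is exactly the property sign extension is meant to preserve.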

References

Bryant, Randal; O'Hallaron, David (2003). "2". Computer Systems: a Programmer's Perspective. Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-034074-X.