Sign bit

From Wikipedia, the free encyclopedia

In computer science, the sign bit is a bit in a computer numbering format that indicates the sign of a number. In the IEEE 754 floating-point format, the sign bit is the leftmost bit (the most significant bit). Typically, a sign bit of 1 indicates a negative number (for two's complement integers) or a non-positive number (for ones' complement integers, sign-magnitude integers, and floating-point numbers), while 0 indicates a non-negative number.
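
The sign bit's position can be demonstrated directly. The following minimal C sketch (an illustration for this article, not taken from the cited sources) reads bit 31 of a 32-bit two's complement integer and of an IEEE 754 single-precision float:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Sign bit of a 32-bit two's complement integer: bit 31. */
    static unsigned int_sign_bit(int32_t x) {
        return ((uint32_t)x >> 31) & 1u;
    }

    /* Sign bit of an IEEE 754 single-precision float: also bit 31.
       memcpy is the well-defined way to view a float's bit pattern in C. */
    static unsigned float_sign_bit(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return (bits >> 31) & 1u;
    }

    int main(void) {
        printf("%u %u\n", int_sign_bit(-5), int_sign_bit(5));           /* prints: 1 0 */
        printf("%u %u\n", float_sign_bit(-0.0f), float_sign_bit(2.5f)); /* prints: 1 0 */
        return 0;
    }

Note that float_sign_bit(-0.0f) returns 1: IEEE 754 has a negative zero, which is why a set sign bit makes a floating-point value non-positive rather than strictly negative.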

In the two's complement representation, the sign bit carries the weight −(2^(w−1)), where w is the width of the bit vector; a set sign bit therefore contributes the most negative representable value. In the ones' complement representation, the sign bit plays the same role, except that its weight is −(2^(w−1) − 1). In the sign-magnitude representation of bit vectors, the sign bit simply determines whether the value of the given vector is positive or negative.[1]
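
The differing weights can be made concrete by decoding one bit pattern under all three representations. In this C sketch (illustrative only; the helper names are invented for the example), each function extracts the sign bit and the remaining magnitude bits, then applies the representation's rule:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the low w bits of 'bits' (1 < w <= 32) under each representation. */

    static int64_t twos_complement(uint32_t bits, int w) {
        int64_t sign = (bits >> (w - 1)) & 1;              /* sign bit */
        int64_t magnitude = bits & ((1u << (w - 1)) - 1);  /* remaining bits */
        return magnitude - sign * ((int64_t)1 << (w - 1)); /* weight -2^(w-1) */
    }

    static int64_t ones_complement(uint32_t bits, int w) {
        int64_t sign = (bits >> (w - 1)) & 1;
        int64_t magnitude = bits & ((1u << (w - 1)) - 1);
        return magnitude - sign * (((int64_t)1 << (w - 1)) - 1); /* weight -(2^(w-1) - 1) */
    }

    static int64_t sign_magnitude(uint32_t bits, int w) {
        int64_t sign = (bits >> (w - 1)) & 1;
        int64_t magnitude = bits & ((1u << (w - 1)) - 1);
        return sign ? -magnitude : magnitude;              /* sign bit only flips the sign */
    }

    int main(void) {
        uint32_t pattern = 0xFF; /* all eight bits set, w = 8 */
        printf("%lld\n", (long long)twos_complement(pattern, 8)); /* -1   */
        printf("%lld\n", (long long)ones_complement(pattern, 8)); /* -0 (= 0) */
        printf("%lld\n", (long long)sign_magnitude(pattern, 8));  /* -127 */
        return 0;
    }

The same 8-bit pattern 0xFF thus decodes to −1, −0 (which equals 0), and −127 under the three representations.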

When an 8-bit value is added to a 16-bit value using signed arithmetic, the microprocessor propagates the sign bit through the high-order half of the 16-bit register holding the 8-bit value – a process called sign extension or sign propagation.[2] Filling the new high-order bits with copies of the sign bit preserves the value of a two's complement number when it is widened.
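
For example, the effect of sign extension can be written out by hand, as in this C sketch (for illustration; in practice the hardware, or C's integer conversions, perform it automatically):

    #include <stdint.h>
    #include <stdio.h>

    /* Manually sign-extend an 8-bit two's complement value to 16 bits:
       if the sign bit (bit 7) is set, fill bits 8-15 with ones. */
    static int16_t sign_extend_8_to_16(uint8_t b) {
        uint16_t wide = b;
        if (b & 0x80)        /* sign bit set? */
            wide |= 0xFF00;  /* propagate it through the high-order byte */
        return (int16_t)wide;
    }

    int main(void) {
        printf("%d\n", sign_extend_8_to_16(0xF6)); /* -10: 0xF6 -> 0xFFF6 */
        printf("%d\n", sign_extend_8_to_16(0x0A)); /*  10: 0x0A -> 0x000A */
        return 0;
    }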

References

  1. ^ Bryant, Randal; O'Hallaron, David (2003). "2". Computer Systems: a Programmer's Perspective. Upper Saddle River, New Jersey: Prentice Hall. pp. 52–54. ISBN 0-13-034074-X.
  2. ^ http://www.adrc.net/data-dictionary/s1.htm