The paragraph on alternating signs is not correct. Just consider decimal 9, whose binary representation 1001 is also its unique NAF form: its two non-zero digits have the same sign.
It is, however, correct that the Booth algorithm determines a signed-digit representation in which the signs of the non-zero digits alternate.
BUT the Booth algorithm does not generate a NAF representation! Take as an example a binary string with an isolated 1, say ......, which the Booth algorithm converts into ......
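Since the example above is elided, here is a small sketch using an assumed stand-in input (decimal 2, the simplest isolated 1): radix-2 Booth recoding of it produces adjacent non-zero digits, which violates the NAF property. The helper names `booth_recode` and `is_naf` are mine, not from any article.

```python
def booth_recode(x, m):
    """Radix-2 Booth recoding of a non-negative m-bit x.
    Digit i is x_{i-1} - x_i (with x_{-1} = x_m = 0); least significant first."""
    bits = [0] + [(x >> i) & 1 for i in range(m)] + [0]
    return [bits[i] - bits[i + 1] for i in range(m + 1)]

def is_naf(digits):
    """True if no two adjacent digits are both non-zero."""
    return all(a == 0 or b == 0 for a, b in zip(digits, digits[1:]))

# Isolated 1: decimal 2 = (010)_2, an assumed stand-in for the elided example.
d = booth_recode(2, 3)                       # [0, -1, 1, 0], least significant first
assert sum(di << i for i, di in enumerate(d)) == 2   # value is preserved...
print(d, is_naf(d))                          # ...but the digits are NOT in NAF
```

The recoded digits alternate in sign, as the post says, yet positions 1 and 2 are both non-zero, so the result is not a non-adjacent form.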
The original algorithm for converting a binary number into its equivalent and unique NAF form was given by Reitwiesner in 1960, though it is usually described as the following right-to-left algorithm:
Input: x = (x_{m−1} … x_1 x_0) in 2's complement
for i = 0 to m do
    if x is odd then
        z_i ← 2 − (x mod 4)
        x ← x − z_i
    else
        z_i ← 0
    x ← x / 2
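The right-to-left conversion can be sketched in a few lines of Python. This is only an illustration for non-negative inputs; the 2's-complement bookkeeping for negative values is omitted, and the function name `naf` is mine.

```python
def naf(x):
    """Right-to-left NAF conversion (Reitwiesner-style) for non-negative x.
    Returns digits in {-1, 0, 1}, least significant first."""
    digits = []
    while x > 0:
        if x & 1:
            d = 2 - (x & 3)     # +1 if x = 1 (mod 4), -1 if x = 3 (mod 4)
            x -= d              # makes x divisible by 4, forcing the next digit to 0
        else:
            d = 0
        digits.append(d)
        x >>= 1
    return digits

print(naf(9))                   # [1, 0, 0, 1]: 1001 is already in NAF
print(naf(7))                   # [-1, 0, 0, 1]: 7 = 8 - 1 = (1 0 0 -1)_NAF
```

Choosing the digit as 2 − (x mod 4) whenever x is odd is exactly what guarantees non-adjacency: after subtracting it, x is a multiple of 4, so the next digit produced is always 0.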
Encoding the NAF of an m-bit number using m+1 bits
The article currently states that "[b]ecause every non-zero value has to be adjacent to two 0's, the NAF representation can be implemented such that it only takes a maximum of m + 1 bits for a value that would normally be represented in binary with m bits." Can someone provide more details on this "implementation"? 18.104.22.168 (talk) 19:39, 27 April 2013 (UTC)
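The length bound itself is easy to check by brute force; the sketch below (my own, not the article's "implementation") verifies it for an assumed m = 10, and shows the worst case 2^m − 1, whose NAF is (1 0 … 0 −1) with exactly m + 1 digits.

```python
def naf_len(x):
    """Number of NAF digits of non-negative x (0 for x = 0)."""
    n = 0
    while x > 0:
        if x & 1:
            x -= 2 - (x & 3)    # after this, x = 0 (mod 4)
        x >>= 1
        n += 1
    return n

m = 10                          # assumed example width
assert all(naf_len(x) <= m + 1 for x in range(1 << m))
print(naf_len((1 << m) - 1))    # 11: 2^m - 1 = (1 0 ... 0 -1)_NAF hits m + 1
```

This only demonstrates the bound, of course; the "implementation" the article hints at would be how to store those m + 1 signed digits compactly, which remains the open question of this post.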
NAF example in Obtaining NAF
While the example correctly labels the binary and NAF digits, allowing for one extra NAF digit, the pattern is not striking because z_{m−2} is dropped. Also, how should the zeroes be shown?
Input: E = (e_{m−1} e_{m−2} … e_3 e_2 e_1 e_0)_2   Output: Z = (z_m z_{m−1} z_{m−2} … 0 z_2 0 z_0)_NAF
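One way to make the labeling concrete is to print every position z_i explicitly, zeroes included. A sketch with an assumed example value 29 = (11101)_2, whose NAF is (1 0 0 −1 0 1) — i.e. m = 5 input bits and m + 1 = 6 output digits:

```python
def naf(x):
    """Right-to-left NAF digits of non-negative x, least significant first."""
    digits = []
    while x > 0:
        d = (2 - (x & 3)) if x & 1 else 0
        x = (x - d) >> 1
        digits.append(d)
    return digits

x = 0b11101                     # 29, an assumed example value (m = 5 bits)
z = naf(x)                      # [1, 0, -1, 0, 0, 1] least significant first
for i in reversed(range(len(z))):
    print(f"z_{i} = {z[i]:2d}")  # every position shown, zeroes and all
```

Listing each z_i on its own labeled line, rather than eliding the zero positions, would answer the "how to show the zeroes" question directly.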