Data Structure and Architecture I
Signed and Unsigned Numbers
Representation of Negative Integer Numbers [Signed
Numbers]
Four methods used:
Signed – Magnitude: The convention by which one bit in the representation of a
number indicates the numeric sign, while the rest indicate the magnitude.
One’s Complement: a system in which negative numbers are represented by the
arithmetic negative of the value, obtained by inverting all the bits in the binary
representation of the number (swapping 0s for 1s and vice versa).
Two’s Complement: equivalent to taking the one’s complement and then adding one.
Biased or Excess representation: uses a pre-specified number K as a biasing
value. A value is represented by the unsigned number that is K greater than the
intended value.
Sign-Magnitude
• N bits can represent values from −(2^(N−1) − 1) to +(2^(N−1) − 1).
– e.g., in 8 bits:
• +25 = 00011001₂, msb = 0, indicating a positive number
• −25 = 10011001₂, msb = 1, indicating a negative number
For example, in 8-bit signed magnitude, positive 3 is 00000011 and negative 3 is
10000011.
Computers perform arithmetic operations on signed magnitude numbers in much the same way as
humans do: ignore the signs of the operands while performing the calculation, then apply the appropriate sign after the
calculation is complete.
Binary addition is as easy as it gets. You need to know only four rules:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10 (sum 0, carry 1)
The simplicity of this system makes it possible for digital circuits to carry out arithmetic operations.
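These four rules are exactly what a hardware adder implements, one bit position at a time. As a rough Python sketch (the function names and 8-bit width are my own, not from the slides):

```python
def full_adder(a: int, b: int, carry_in: int = 0) -> tuple[int, int]:
    """One bit position of binary addition: returns (sum_bit, carry_out).
    Encodes the four rules above, e.g. 1 + 1 = 10 (sum 0, carry 1)."""
    total = a + b + carry_in
    return total & 1, total >> 1

def ripple_add(x: int, y: int, n: int = 8) -> int:
    """Add two n-bit unsigned values starting from the rightmost bit,
    propagating the carry leftward, just as the rules above describe."""
    result, carry = 0, 0
    for i in range(n):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(75, 46))   # 121
```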
Example:
Using signed magnitude binary arithmetic, find the sum of 75 and 46.
First, convert 75 and 46 to binary, and arrange as a sum, but separate the (positive) sign bits from the magnitude bits.
Just as in decimal arithmetic, we find the sum starting with the rightmost bit and
work left.
In the second bit, we have a carry, so we note it above the third bit.
Example:
Using signed magnitude binary arithmetic, find the sum of 75 and 46.
Once we have worked our way through all eight bits, we are done.
In this example, we were careful to pick two values whose sum would fit
into seven bits. If that is not the case, we have a problem.
Example:
Using signed magnitude binary arithmetic, find the sum of 107 and 46.
We see that the carry from the seventh bit overflows and is discarded,
giving us the erroneous result: 107 + 46 = 25.
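The magnitude addition (and its seven-bit overflow) can be sketched in Python; the function name and 8-bit width are illustrative assumptions:

```python
def sm_add_positive(a: int, b: int, n: int = 8) -> int:
    """Add two positive n-bit signed-magnitude numbers. The sign bit is 0,
    so we add the (n-1)-bit magnitudes; a carry out of the magnitude
    field is discarded, which is exactly the overflow described above."""
    magnitude_mask = (1 << (n - 1)) - 1   # low 7 bits hold the magnitude
    return (a + b) & magnitude_mask       # carry out of bit 7 is lost

print(sm_add_positive(75, 46))    # 121: fits in 7 bits, correct
print(sm_add_positive(107, 46))   # 25: the erroneous result above
```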
Example: Using signed magnitude binary arithmetic, find the sum of … and −25.
• Because the signs are the same, all we do is add the magnitudes
and supply the negative sign when we are done.
Example: Using signed magnitude binary arithmetic, find the sum of … and −25.
• The result takes the sign of the operand with the larger magnitude.
– Note the “borrows” from the second and sixth bits.
Signed magnitude representation is easy for people to understand, but it requires complicated computer
hardware.
Another disadvantage of signed magnitude is that it allows two different representations for zero: positive
zero and negative zero.
For these reasons (among others) computer systems employ complement systems for numeric value
representation.
One’s Complement
• N bits can represent values from −(2^(N−1) − 1) to +(2^(N−1) − 1). The negative of a value is
obtained by inverting (complementing) each bit [‘0’ to ‘1’ and ‘1’ to ‘0’].
• The procedure works both ways, from positive to negative and from negative to positive.
• The msb is the sign bit [‘0’ for positive, ‘1’ for negative].
• Two representations for zero:
– +0 = 00000000₂
– −0 = 11111111₂
• In 1’s complement, a number and its negative add up to 2^N − 1.
• For an 8-bit representation, the largest number is +127₁₀ and the smallest
number is −127₁₀.
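The inversion rule and the 2^N − 1 identity are easy to check in Python (a small sketch; the function name is my own):

```python
def ones_complement(x: int, n: int = 8) -> int:
    """Negate an n-bit value in one's complement by inverting every bit."""
    return x ^ ((1 << n) - 1)

neg25 = ones_complement(25)     # 00011001 -> 11100110
print(f"{neg25:08b}")           # 11100110
print(25 + neg25)               # 255 = 2**8 - 1, as stated above
```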
One’s Complement
With one’s complement addition, the carry bit is “carried around” and
added back into the sum (the end-around carry).
Example: Using one’s complement binary arithmetic, find the sum of 48 and −19.
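The end-around carry can be modelled directly in Python (an illustrative sketch; the encoding helper and function names are assumptions):

```python
def oc_encode(x: int, n: int = 8) -> int:
    """Encode a signed integer in n-bit one's complement."""
    mask = (1 << n) - 1
    return x & mask if x >= 0 else (-x) ^ mask

def oc_add(a: int, b: int, n: int = 8) -> int:
    """Add two n-bit one's-complement values; a carry out of the msb
    is 'carried around' and added back into the sum."""
    mask = (1 << n) - 1
    total = a + b
    if total > mask:
        total = (total & mask) + 1   # the end-around carry
    return total & mask

result = oc_add(oc_encode(48), oc_encode(-19))
print(f"{result:08b}", result)       # 00011101 29
```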
One’s Complement
Although the end-around carry adds some complexity, one’s complement is simpler to implement than
signed magnitude.
But it still has the disadvantage of having two different representations for zero: positive zero and negative
zero.
Two’s Complement
To form the negative of a number, find the one’s complement of the number and then add 1.
Example: −3 in 8 bits: 3 = 00000011₂; invert: 11111100₂; add 1: 11111101₂.
Two’s Complement
With two’s complement arithmetic, all we do is add our two binary numbers; just discard any carries emitted from
the high-order (sign) bit.
Two’s Complement
When we use any finite number of bits to represent a number, we always run the risk of the result of our
calculations becoming too large to store in that number of bits: an overflow.
Two’s Complement
Example:
Using two’s complement binary arithmetic, find the sum of 107 and 46.
We see that the nonzero carry from the seventh bit overflows into the sign bit,
giving us the erroneous result: 107 + 46 = −103.
Rule for detecting signed two’s complement overflow: when the “carry in” and the
“carry out” of the sign bit differ, overflow has occurred.
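Both the discarded carry and the overflow rule can be sketched in Python (an illustrative model; names and the 8-bit width are my own):

```python
def twos_complement_add(a: int, b: int, n: int = 8) -> tuple[int, bool]:
    """Add two n-bit two's-complement values, discarding the carry out of
    the msb, and flag overflow when the carry into and the carry out of
    the sign bit differ (the rule stated above)."""
    mask = (1 << n) - 1
    a &= mask
    b &= mask
    total = a + b
    carry_out = total >> n                           # carry out of the sign bit
    low_sum = (a & (mask >> 1)) + (b & (mask >> 1))  # add the low n-1 bits
    carry_in = low_sum >> (n - 1)                    # carry into the sign bit
    return total & mask, carry_in != carry_out

print(twos_complement_add(107, 46))   # (153, True): 10011001 = -103, overflow
print(twos_complement_add(75, 46))    # (121, False): correct sum, no overflow
```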
Two’s Complement: Alternative Way
• Starting from the rightmost bit, copy bits up through the first ‘1’, then invert all the remaining bits to the left.
Using the same number of bits, unsigned integers can express values twice as large as signed numbers can, because no bit is reserved for the sign.
Signed-number overflow means nothing in the context of unsigned numbers, which set a carry flag instead of an
overflow flag.
If a carry out of the leftmost bit occurs with an unsigned number, overflow has occurred.
Discussion
Perform the following 6-bit additions assuming the values are:
unsigned, 1’s complement, 2’s complement, and excess (biased).
Report any overflow.
Benefits
1) Signed-Magnitude?
2) One’s Complement?
3) Two’s Complement?
Normalization
• Normalization shifts the significand (adjusting the exponent to match) until there is a single nonzero digit to the left of the radix point.
• It also ensures that each number has only one possible bit
pattern to represent it.
Floating Point Numbers
• Computer representation of a floating-point number consists of three fixed-size fields: the sign, the exponent, and the significand (mantissa).
Floating-Point Representation
Floating-point numbers allow an arbitrary number of decimal places to the right of the decimal point.
For example: 0.5 × 0.25 = 0.125
Examples of Floating Point Numbers
• How would 763.5 be represented as a single-precision IEEE floating-point number?
763.5 = 1011111011.1₂ = 1.0111110111 × 2⁹
• Hence the sign bit = 0 (because it is a positive number), and the characteristic (biased exponent) is e = 9 + 127 = 136 = 10001000₂.
• The fraction field holds the bits after the leading 1, padded to 23 bits: 01111101110000000000000.
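We can check this encoding against Python's `struct` module, which packs floats in IEEE-754 format (a quick verification sketch; the variable names are my own):

```python
import struct

# Pack 763.5 as a big-endian IEEE-754 single and split out the bit fields.
bits = int.from_bytes(struct.pack('>f', 763.5), 'big')
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF    # biased exponent (characteristic)
fraction = bits & 0x7FFFFF        # 23-bit fraction field

print(sign)                # 0
print(exponent)            # 136  (= 9 + 127)
print(f"{fraction:023b}")  # 01111101110000000000000
```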
Examples of Floating Point Numbers
• What single-precision number does the following 32-bit word represent?
1 10000001 01000000000000000000000₂
ANSWER:
Bit pattern ⇒ 1 10000001 01000000000000000000000
Sign = 1, so the number is negative. Biased exponent = 10000001₂ = 129, so e = 129 − 127 = 2. Significand = 1.01₂.
Value = −1.01₂ × 2² = −101₂ = −5.0
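The same decoding can be confirmed with Python's `struct` module (a quick check; the variable names are mine):

```python
import struct

# Decode the pattern 1 10000001 01000000000000000000000 (0xC0A00000).
word = 0b1_10000001_01000000000000000000000
value = struct.unpack('>f', word.to_bytes(4, 'big'))[0]
print(value)   # -5.0
```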
Exercise: Suppose that an IEEE-754 32-bit floating-point pattern is 1 01111110 100 0000 … What value does it represent?
Floating Point Numbers
• When discussing floating-point numbers, it is important to understand the
terms range, precision, and accuracy.
• The range of a numeric format is the difference between the largest and
smallest values that it can express.
• Accuracy refers to how closely a numeric representation approximates a true
value.
• The precision of a number indicates how much information we have about a
value.
• Most of the time, greater precision leads to better accuracy, but this is not always
true.
– For example, 3.1333 is a value of pi that is accurate to two digits, but has 5
digits of precision.
Floating-point addition and subtraction are done using methods analogous to those we use in decimal scientific notation.
The first thing we do is express both operands with the same exponent: shift the significand of the smaller number to the right until its exponent matches that of the larger number.
Then we add the significands, preserving the common exponent in the sum.
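A toy model of this align-then-add step (integer significands; the names are illustrative, and note that bits shifted off the right are lost, which is exactly the precision loss real hardware faces):

```python
def fp_add(f1: int, e1: int, f2: int, e2: int) -> tuple[int, int]:
    """Add f1*2**e1 + f2*2**e2: align the smaller exponent to the larger,
    then add the significands, keeping the common exponent."""
    if e1 < e2:
        f1, e1, f2, e2 = f2, e2, f1, e1
    f2 >>= (e1 - e2)          # shift the smaller operand right; low bits drop
    return f1 + f2, e1

print(fp_add(3, 1, 4, 0))     # (5, 1): 3*2 + 4*1 = 10 = 5 * 2**1
```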
Floating Point Division
• The quotient of one number divided by another has a mantissa equal to the quotient of the
dividend’s mantissa divided by the divisor’s mantissa. The exponent of the quotient equals the
exponent of the dividend minus the exponent of the divisor. Any required normalization is
performed after the division.
(f1 × 2^e1) / (f2 × 2^e2) = f3 × 2^e3, where f3 = f1/f2 and e3 = e1 − e2.
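The division rule, and the analogous multiplication rule, fit in a two-line sketch each (function names are my own):

```python
def fp_div(f1: float, e1: int, f2: float, e2: int) -> tuple[float, int]:
    """(f1 * 2**e1) / (f2 * 2**e2): divide significands, subtract exponents."""
    return f1 / f2, e1 - e2

def fp_mul(f1: float, e1: int, f2: float, e2: int) -> tuple[float, int]:
    """(f1 * 2**e1) * (f2 * 2**e2): multiply significands, add exponents."""
    return f1 * f2, e1 + e2

print(fp_div(6.0, 5, 3.0, 2))   # (2.0, 3): 192 / 12 = 16 = 2.0 * 2**3
```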
Ms.Diana Rwegasira
Floating Point Multiplication
• The product of two numbers has a mantissa equal to the product of the operands’ mantissas, and an exponent equal to the sum of the operands’ exponents: (f1 × 2^e1) × (f2 × 2^e2) = (f1 × f2) × 2^(e1 + e2). Any required normalization is performed after the multiplication.
Booth’s Algorithm
• Booth’s algorithm is a multiplication algorithm for multiplying
two signed binary integers represented in 2’s complement.
Booth’s Algorithm
• If M is the multiplicand, A the accumulator register, Q the multiplier, and Q-1 the
extra bit appended on the right of Q, then Booth’s algorithm may be executed as
shown below:
• 1. Initialization:
a. Load A and Q-1 with zeros.
b. Load M with the multiplicand (the first operand).
c. Load Q with the multiplier (the second operand).
• 2. Repeat n times:
a. If Q0 = 0 and Q-1 = 0, add 0 to A; else
b. if Q0 = 0 and Q-1 = 1, add M to A; else
c. if Q0 = 1 and Q-1 = 0, subtract M from A; else
d. if Q0 = 1 and Q-1 = 1, add 0 to A.
e. Shift A·Q·Q-1 one bit right using sign extension.
• The result is in the AQ register pair.
Example of Booth’s Algorithm: 7 × 3 = 21
Exercises:
• Find 3 × 9 using Booth’s algorithm.
• Find 3 × (−6) using Booth’s algorithm.
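The register-level steps above can be sketched in Python (an illustrative model with assumed names, masking to n bits to mimic fixed-width hardware registers):

```python
def booth_multiply(multiplicand: int, multiplier: int, n: int = 8) -> int:
    """Booth's algorithm on n-bit two's-complement registers, following
    the steps above: inspect the Q0/Q-1 pair, add or subtract M, then
    arithmetic right shift the combined A.Q.Q-1 register."""
    mask = (1 << n) - 1
    M = multiplicand & mask
    A, Q, Q_1 = 0, multiplier & mask, 0      # step 1: initialization

    for _ in range(n):                       # step 2: repeat n times
        q0 = Q & 1
        if q0 == 1 and Q_1 == 0:             # pair 10: A = A - M
            A = (A - M) & mask
        elif q0 == 0 and Q_1 == 1:           # pair 01: A = A + M
            A = (A + M) & mask
                                             # pairs 00 and 11: add nothing
        sign = A >> (n - 1)                  # sign bit, for sign extension
        combined = (A << (n + 1)) | (Q << 1) | Q_1
        combined = (combined >> 1) | (sign << (2 * n))
        A = (combined >> (n + 1)) & mask
        Q = (combined >> 1) & mask
        Q_1 = combined & 1

    product = (A << n) | Q                   # result lives in the AQ pair
    if product & (1 << (2 * n - 1)):         # reinterpret as a signed value
        product -= 1 << (2 * n)
    return product

print(booth_multiply(7, 3))    # 21
print(booth_multiply(3, -6))   # -18
```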
Error Detection and Correction
• It is physically impossible for any data recording or transmission medium to be 100% perfect 100% of the
time over its entire expected useful life.
• Thus, error detection and correction is critical to accurate data transmission, storage, and retrieval.
• Check digits, appended to the end of a long number, can provide some protection against data
input errors.
– The last characters of UPC barcodes and ISBNs are check digits.
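As a concrete illustration of the check-digit idea, here is the UPC-A scheme sketched in Python (the function name is my own; digits in odd positions are weighted 3, even positions 1, and the check digit makes the total a multiple of 10):

```python
def upc_check_digit(digits11: str) -> int:
    """Compute the UPC-A check digit for the first 11 digits of a barcode."""
    odd  = sum(int(d) for d in digits11[0::2])  # 1st, 3rd, ..., 11th digit
    even = sum(int(d) for d in digits11[1::2])  # 2nd, 4th, ..., 10th digit
    return (10 - (3 * odd + even) % 10) % 10

# A commonly cited sample barcode, 036000291452, whose last digit is 2:
print(upc_check_digit("03600029145"))   # 2
```

A single mistyped digit changes the weighted sum, so the check digit no longer matches, which is how data-input errors are caught.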
Error Detection and Correction
• Longer data streams require more economical and sophisticated error detection mechanisms.
• Cyclic redundancy checking (CRC) codes provide error detection for large blocks of data.
• In systematic error detection a group of error control bits is appended to the end of the block of
transmitted data.
– This group of bits is called a syndrome.
Conclusion
• Computers store data in the form of bits, bytes, and words using the binary
numbering system.
• Hexadecimal numbers are formed using four-bit groups called nibbles (or
nybbles).
• Floating-point numbers are usually coded using the IEEE 754 floating-point
standard.