Data Structure - Signed and Unsigned Numbers


CS 151: Computer Organization and Architecture I

Data Structure - Signed and Unsigned Numbers
Representation of Negative Integer Numbers [Signed Numbers]

Four methods are used (a short code sketch follows the list):
• Signed-Magnitude: one bit in the representation of a number indicates the numeric sign, while the rest indicate the magnitude.
• One's Complement: negative numbers are represented by inverting all the bits in the binary representation of the corresponding positive value (swapping 0s for 1s and vice versa).
• Two's Complement: equivalent to taking the one's complement and then adding one.
• Biased or Excess representation: uses a pre-specified number K as a biasing value. A value is represented by the unsigned number that is K greater than the intended value.
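To make the four conventions concrete, here is a minimal Python sketch (ours, not from the slides; the helper names are invented for illustration) printing the 8-bit encoding of -25 under each scheme:

    # Minimal sketch: the four 8-bit encodings of -25.
    N = 8
    magnitude = 25                                   # |-25|

    sign_magnitude  = (1 << (N - 1)) | magnitude     # sign bit + magnitude
    ones_complement = ~magnitude & 0xFF              # invert every bit of +25
    twos_complement = (ones_complement + 1) & 0xFF   # one's complement, then add 1
    excess_128      = -25 + (1 << (N - 1))           # add the bias K = 2^(N-1) = 128

    for name, bits in [("sign-magnitude", sign_magnitude),
                       ("one's complement", ones_complement),
                       ("two's complement", twos_complement),
                       ("excess-128", excess_128)]:
        print(f"{name:17s} {bits:08b}")

Running it prints 10011001, 11100110, 11100111, and 01100111 respectively.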
Sign-Magnitude (cont.)
• Sign-Magnitude
  – N bits can represent values from -(2^(N-1) - 1) to +(2^(N-1) - 1).
  – e.g.
    • +25 = 00011001₂, msb = 0 indicating a positive number
    • -25 = 10011001₂, msb = 1 indicating a negative number
• Two representations for zero:
  – +0 = 00000000₂
  – -0 = 10000000₂
• For an 8-bit representation, the largest number is +127₁₀ and the smallest is -127₁₀.

Signed Integer Representation
• For example, in 8-bit signed magnitude, positive 3 is 00000011 and negative 3 is 10000011.
• Computers perform arithmetic operations on signed magnitude numbers in much the same way as humans carry out pencil-and-paper arithmetic.
• Humans often ignore the signs of the operands while performing a calculation, applying the appropriate sign after the calculation is complete.

Signed Integer Representation
• Binary addition is as easy as it gets. You need to know only four rules:
    0 + 0 = 0    0 + 1 = 1
    1 + 0 = 1    1 + 1 = 10
• The simplicity of this system makes it possible for digital circuits to carry out arithmetic operations.
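These four rules are exactly what a hardware full adder implements. A minimal Python sketch (ours, not from the slides) chains full adders into a ripple-carry adder:

    def full_adder(a, b, carry_in):
        # One bit position, applying the four rules (1 + 1 = 10 produces a carry).
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def ripple_add(x, y, n=8):
        # Add two n-bit numbers rightmost bit first, propagating carries left.
        result, carry = 0, 0
        for i in range(n):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result, carry            # carry out of the leftmost bit

    print(ripple_add(0b01001011, 0b00101110))   # 75 + 46 -> (121, 0)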

Signed Integer Representation
• Example: Using signed magnitude binary arithmetic, find the sum of 75 and 46.
• First, convert 75 and 46 to binary, and arrange them as a sum, but separate the (positive) sign bits from the magnitude bits:

      0  1001011   (+75)
    + 0  0101110   (+46)

• Just as in decimal arithmetic, we find the sum starting with the rightmost bit and work left.
• In the second bit, we have a carry, so we note it above the third bit.
• The third and fourth bits also give us carries.
• Once we have worked our way through all eight bits, we are done:

      0  1111001   (+121)

• In this example, we were careful to pick two values whose sum would fit into seven bits. If that is not the case, we have a problem.

Signed Integer Representation
• Example: Using signed magnitude binary arithmetic, find the sum of 107 and 46.

      0  1101011   (+107)
    + 0  0101110   (+46)

• We see that the carry out of the seventh magnitude bit overflows and is discarded, giving us the erroneous result: 107 + 46 = 25.
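A quick check of this failure mode (a sketch, ours): adding the 7-bit magnitudes and masking off the lost carry reproduces the bogus 25.

    magnitude_sum = (107 + 46) & 0b1111111   # keep only 7 magnitude bits
    lost_carry    = (107 + 46) >> 7          # the discarded carry
    print(magnitude_sum, lost_carry)         # 25 1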

Signed Integer Representation
• The signs in signed magnitude representation work just like the signs in pencil-and-paper arithmetic.
• Example: Using signed magnitude binary arithmetic, find the sum of -46 and -25.
• Because the signs are the same, all we do is add the magnitudes (46 + 25 = 71) and supply the negative sign when we are done: 1 1000111 = -71.

Signed Integer Representation
• Mixed-sign addition (or subtraction) is done the same way.
• Example: Using signed magnitude binary arithmetic, find the sum of 46 and -25: subtracting the magnitudes, 0101110 - 0011001 = 0010101, i.e. +21.
• The sign of the result gets the sign of the number that is larger in magnitude.
  – Note the "borrows" from the second and sixth bits.

Signed Integer Representation
• Signed magnitude representation is easy for people to understand, but it requires complicated computer hardware.
• Another disadvantage of signed magnitude is that it allows two different representations for zero: positive zero and negative zero.
• For these reasons (among others) computer systems employ complement systems for numeric value representation.
One's Complement
• N bits can represent values from -(2^(N-1) - 1) to +(2^(N-1) - 1). The negative of a value is obtained by inverting or complementing each bit ('0' to '1' and '1' to '0').
• The procedure goes both ways: from positive to negative, or from negative to positive.
• The msb is the sign bit ('0' for positive, '1' for negative).
• Two representations for zero:
  – +0 = 00000000₂
  – -0 = 11111111₂
• In 1's complement, a number and its negative add up to 2^N - 1.
• For an 8-bit representation, the largest number is +127₁₀ and the smallest is -127₁₀.

One's Complement
• With one's complement addition, the carry out of the leftmost bit is "carried around" and added to the sum.
• Example: Using one's complement binary arithmetic, find the sum of 48 and -19.
  We note that 19 in binary is 00010011, so -19 in one's complement is 11101100.
  Then 00110000 + 11101100 = 1 00011100; carrying the 1 around gives 00011101 = 29₁₀.
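The end-around carry takes only a few lines of Python to model (ours, not the slides'):

    def ones_complement_add(x, y, n=8):
        # n-bit one's complement addition with the end-around carry.
        mask = (1 << n) - 1
        total = x + y
        if total > mask:                   # carry out of the leftmost bit:
            total = (total & mask) + 1     # carry it around and add it in
        return total & mask

    neg19 = ~19 & 0xFF                     # one's complement of 19 -> 11101100
    print(f"{ones_complement_add(48, neg19):08b}")   # 00011101 = 29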

One's Complement
• Although the "end carry around" adds some complexity, one's complement is simpler to implement than signed magnitude.
• But it still has the disadvantage of having two different representations for zero: positive zero and negative zero.
• Two's complement solves this problem.
• Two's complement is the radix complement of the binary numbering system.
Two's Complement
• An N-bit number can represent values from -2^(N-1) to +(2^(N-1) - 1).
• The negative of a value is obtained by adding '1' to the one's complement negative: perform the one's complement first, then add '1'.
• The procedure goes both ways: from positive to negative, or from negative to positive.
• The msb is the sign bit ('0' for positive, '1' for negative).
• One representation for zero:
  – +0 = 00000000₂
  – -0 = 00000000₂
• In 2's complement, a number and its negative add up to 2^N.
• For an 8-bit representation, the largest number is +127₁₀ and the smallest is -128₁₀.

Two's Complement
• To express a value in two's complement:
  – If the number is positive, just convert it to binary and you're done.
  – If the number is negative, find the one's complement of the number and then add 1.
• Example:
  – In 8-bit binary, positive 3 is: 00000011
  – Negative 3 in one's complement is: 11111100
  – Adding 1 gives us -3 in two's complement form: 11111101.
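A small Python helper (ours) that follows this recipe:

    def to_twos_complement(value, n=8):
        # Positive: plain binary. Negative: one's complement, then add 1.
        mask = (1 << n) - 1
        if value >= 0:
            return value & mask
        ones = ~(-value) & mask            # one's complement of the magnitude
        return (ones + 1) & mask           # then add 1

    print(f"{to_twos_complement(3):08b}")    # 00000011
    print(f"{to_twos_complement(-3):08b}")   # 11111101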

Two's Complement
• With two's complement arithmetic, all we do is add our two binary numbers; we just discard any carry emitted from the high-order bit.

Two's Complement
• When we use any finite number of bits to represent a number, we always run the risk of the result of our calculations becoming too large to be stored in the computer.
• While we can't always prevent overflow, we can always detect it.
• In complement arithmetic, an overflow condition is easy to detect.

Two's Complement - Try
• Example: Using two's complement binary arithmetic, find the sum of 107 and 46.
• We see that the nonzero carry from the seventh bit overflows into the sign bit, giving us the erroneous result: 107 + 46 = -103.
• Rule for detecting signed two's complement overflow: when the "carry in" and the "carry out" of the sign bit differ, overflow has occurred.
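The rule can be checked mechanically (a sketch, ours; the carry into the sign bit is recovered from the identity sum_bit = a xor b xor carry_in):

    def twos_add(x, y, n=8):
        # n-bit two's complement addition with overflow detection.
        mask = (1 << n) - 1
        xm, ym = x & mask, y & mask
        raw = xm + ym
        result = raw & mask
        carry_in  = ((xm ^ ym ^ result) >> (n - 1)) & 1  # carry into the sign bit
        carry_out = (raw >> n) & 1                       # carry out of the sign bit
        overflow  = carry_in != carry_out
        signed = result - (1 << n) if result >> (n - 1) else result
        return signed, overflow

    print(twos_add(107, 46))    # (-103, True)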
Two's Complement - Alternative way
• Example: complement +25 and -25
    +25             = 00011001₂
    1's complement  = 11100110₂
    add 1           → 11100111₂ = -25
• Alternative way of finding the 2's complement:
  – Bring down all the trailing zeros
  – Bring down the first one (1) from the right
  – Complement all remaining bits to the left
  Applied to 00011001: bring down the rightmost 1 and complement the rest, giving 11100111.
• What about -25 = ?
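The shortcut translates directly into a sketch over a bit string (ours):

    def shortcut_twos_complement(bits):
        # Copy bits from the right through the first 1; complement the rest.
        i = bits.rfind('1')
        if i == -1:
            return bits                    # all zeros: nothing to do
        flipped = ''.join('1' if b == '0' else '0' for b in bits[:i])
        return flipped + bits[i:]

    print(shortcut_twos_complement('00011001'))   # 11100111, i.e. -25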

Signed, One's and Two's Complement Summary
• Signed and unsigned numbers are both useful.
  – For example, memory addresses are always unsigned.
• Using the same number of bits, unsigned integers can express a maximum value about twice as large as signed numbers can.
• Trouble arises if an unsigned value "wraps around."
  – In four bits: 1111 + 1 = 0000.
• Good programmers stay alert for this kind of problem.

Signed, One's and Two's Complement Summary
• Overflow and carry are tricky ideas.
• Signed-number overflow means nothing in the context of unsigned numbers, which set a carry flag instead of an overflow flag.
• If a carry out of the leftmost bit occurs with an unsigned number, overflow has occurred.
• Carry and overflow occur independently of each other.
Excess (Biased) Representation
• This representation allows operations on the biased numbers to be the same as for unsigned integers, but actually represents both positive and negative values.
Excess (Biased) Representation
• The msb is a sign bit: '0' for a negative number, '1' for a positive number.
• All numbers are represented by
  – first adding them to the bias [bias b = 2^(n-1)]
  – and then encoding this sum as an ordinary unsigned number
  – i.e. storing [K + bias b] for a value K
• With n bits, this can represent values from -b to +(2^n - 1 - b).
• Since the bias is commonly chosen to be 2^(n-1), this yields a range of -2^(n-1) to +(2^(n-1) - 1).
  – For example: convert 12 to its excess-128 (biased) representation, using 8 bits:
    12 = 0000 1100₂
    Bias = 2^(8-1) = 2^7 = 128 = 1000 0000₂
    Adding: 0000 1100₂ + 1000 0000₂ = 1000 1100₂
  – Likewise -12 is -12 + 128 = 116 = 0111 0100₂, which is the 2's complement pattern of -12 with its sign bit inverted.
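Encoding in excess-128 is a single addition, as this sketch (ours) shows:

    BIAS = 1 << 7                          # 2^(n-1) = 128 for n = 8 bits

    def to_excess_128(value):
        # The stored pattern is the ordinary unsigned number value + bias.
        assert -BIAS <= value <= BIAS - 1
        return value + BIAS

    print(f"{to_excess_128(12):08b}")      # 10001100
    print(f"{to_excess_128(-12):08b}")     # 01110100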
Discussion
Perform the following 6-bit additions assuming the values are
unsigned, 1's complement, 2's complement, and excess (biased):

(a) 100 111 + 111 110

(b) 010 110 + 100 001

Report any overflow.
Benefits

1) Signed-Magnitude?

2) One's Complement?

3) Two's Complement?

4) Biased or Excess representation?
Normalization
• The signed magnitude, 1's complement, and 2's complement representations as such are not useful in scientific or business applications that deal with real number values over a wide range.
• Floating-point representation solves this problem.
• A floating point number is represented as shown:

    [ s | Exponent (e) | Significand or Mantissa ]
Normalization
• Normalization ensures the maximum accuracy of a number for a given number of bits.
• It also ensures that each number has only one possible bit pattern to represent it.
Floating Point Numbers
• Computer representation of a floating-point number consists of three fixed-size fields, arranged as shown above: the sign, the exponent, and the significand.
• The one-bit sign field is the sign of the stored value.
• The size of the exponent field determines the range of values that can be represented.
• The size of the significand determines the precision of the representation.
Floating Point Numbers
• The IEEE-754 single precision floating point standard uses an 8-bit exponent and a 23-bit significand.
• The IEEE-754 double precision standard uses an 11-bit exponent and a 52-bit significand.
Floating Point Numbers
• The significand of a floating-point number is always preceded by an implied binary point.
• Thus, the significand always contains a fractional binary value.
• The exponent indicates the power of 2 by which the significand is multiplied.
IEEE 754 Formats
[Figure: the single- and double-precision field layouts; not reproduced]
Representation of Floating Point Numbers
• The fraction represents a number less than 1, but the significand of the floating point number is 1 plus the fraction part.
  – i.e. if 'e' is the biased exponent (the value of the exponent field) and 'f' is the value of the fraction field, the number is represented as

    1.f × 2^(e-127)

• Obtaining the floating point representation of a number requires generating each of the components: the sign bit, the characteristic (i.e. the biased representation of the exponent), and the mantissa.
Representation of Floating Point Numbers
• The sign bit is 1 for a negative value and 0 for a positive value.
• The mantissa is obtained by converting the decimal number into a binary fraction.
• Once the binary fraction is obtained, the significand is generated by shifting the radix point to the proper position. The number of places the radix point is shifted determines the exponent.
• The excess or biased form of the exponent gives the characteristic.

Floating-Point Representation
• Floating-point numbers allow an arbitrary number of decimal places to the right of the decimal point.
  – For example: 0.5 × 0.25 = 0.125
• They are often expressed in scientific notation.
  – For example:
    0.125 = 1.25 × 10^-1
    5,000,000 = 5.0 × 10^6
Examples of Floating Point Numbers
• Example: Represent the number 0.125 as a single-precision binary number.
  Soln.: Changing into binary, 0.125 = 0.001₂.
  The number is positive, so the sign bit is 0. Normalizing: 0.001₂ = 1.0 × 2^-3.
  1.0 is the significand, and from 1.f × 2^(e-127):
    e - 127 = -3, so e = -3 + 127 = 124 is the characteristic (i.e. the biased exponent).
  Since 124 = 0111 1100₂ and the fraction bits are all zero, the stored pattern is
    0 01111100 00000000000000000000000.
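Conversions like this can be verified with Python's struct module, which packs a float into its IEEE-754 single-precision bytes (a sketch, ours):

    import struct

    def float_to_bits(x):
        # The 32 IEEE-754 single-precision bits of x, as a string.
        (n,) = struct.unpack(">I", struct.pack(">f", x))
        return f"{n:032b}"

    print(float_to_bits(0.125))   # 00111110000000000000000000000000

float_to_bits(763.5) likewise reproduces the pattern derived on the next slide.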
Examples of Floating Point Numbers
• How would 763.5 be represented as a single precision IEEE floating point number?
  763.5 = 1011111011.1₂ = 1.0111110111 × 2^9
• Hence the sign bit = 0, because it is a positive number, and the characteristic e = 9 + 127 = 136 = 1000 1000₂.
• The 23-bit mantissa is 011 1110 1110 0000 0000 0000.
• The corresponding IEEE single precision representation is
  0 10001000 01111101110000000000000.
Examples of Floating Point Numbers
• What single-precision number does the following 32-bit word represent?
  1 1000 0001 010 0000 0000 0000 0000 0000₂
  Soln.: msb = 1 indicates a negative number.
  e = 1000 0001₂ = 129 = 127 + 2, hence 2^(e-127) = 2^(129-127) = 2^2.
  Fraction part: converting .01₂ into decimal gives 0.25, making the significand 1.25.
  Hence the number is -1.25 × 2^2 = -5.
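Decoding can be sketched the same way by pulling the three fields apart (ours; normalized values only):

    def bits_to_float(bits):
        # Decode a 32-bit IEEE-754 single-precision pattern (normalized only).
        n = int(bits, 2)
        sign = -1 if n >> 31 else 1
        exponent = (n >> 23) & 0xFF              # the characteristic
        fraction = n & 0x7FFFFF                  # 23 fraction bits
        significand = 1 + fraction / (1 << 23)   # implied leading 1
        return sign * significand * 2 ** (exponent - 127)

    print(bits_to_float("11000000101000000000000000000000"))   # -5.0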
Floating-Point Representation - To do
• Convert -5.0₁₀ to IEEE-754 32-bit floating-point representation (single precision).
• ANSWER: Bit pattern ⇒ 1 10000001 01000000000000000000000

Floating-Point Representation - To do
• Suppose that an IEEE-754 32-bit floating-point pattern is 1 01111110 100 0000 0000 0000 0000 0000.
• Sign bit S = 1 ⇒ negative number
• Exponent = 0111 1110₂ = 126₁₀ (biased)
• Fraction is 1.1₂ (with an implicit leading 1) = 1 + 2^-1 = 1.5₁₀
• The number is -1.5 × 2^(126-127) = -0.75₁₀
To do
• What single-precision number does the following 32-bit word represent?
  0 1000 0010 101 0000 0000 0000 0000 0000
• Express -26.625 as a single-precision binary number.
Floating Point Numbers
• When discussing floating-point numbers, it is important to understand the terms range, precision, and accuracy.
• The range of a numeric format is the difference between the largest and smallest values it can express.
• Accuracy refers to how closely a numeric representation approximates a true value.
• The precision of a number indicates how much information we have about a value.
• Most of the time, greater precision leads to better accuracy, but this is not always true.
  – For example, 3.1333 is a value of pi that is accurate to two digits but has 5 digits of precision.
Floating-Point Representation: Addition & Subtraction
• Floating-point addition and subtraction are done using methods analogous to pencil-and-paper calculation.
• The first thing we do is express both operands with the same exponent (shift the significand of the smaller number to the right until its exponent matches the larger exponent).
• Then we add the significands, preserving the exponent in the sum.
• Then we round the significand to the appropriate number of bits.
• If the exponent requires adjustment, we do so at the end of the calculation.

Floating-Point Representation
• Example: Find the sum of 0.5₁₀ and 0.4375₁₀ in binary using floating-point arithmetic.
  0.5 = 1.0 × 2^-1 and 0.4375 = 1.11 × 2^-2.
  Aligning exponents: 0.4375 = 0.111 × 2^-1.
  Adding significands: 1.0 + 0.111 = 1.111, so the sum is 1.111 × 2^-1 = 0.9375₁₀.
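A toy version of the alignment procedure on (significand, exponent) pairs (ours; real hardware operates on fixed-width bit fields):

    def fp_add(s1, e1, s2, e2):
        # Align exponents, add significands, renormalize into [1, 2).
        if e1 < e2:
            s1, e1, s2, e2 = s2, e2, s1, e1
        s2 /= 2 ** (e1 - e2)                 # shift the smaller number right
        s, e = s1 + s2, e1
        while s >= 2:
            s, e = s / 2, e + 1
        return s, e

    print(fp_add(1.0, -1, 1.75, -2))         # (1.875, -1) -> 0.9375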

Floating-Point Representation: Multiplication & Division
• Floating-point multiplication is also carried out analogously to pencil-and-paper multiplication.
• We multiply the two significands and add their exponents.
• If the exponent requires adjustment, we do so at the end of the calculation.
FP Arithmetic +/-
• To add or subtract two floating point numbers, their binary points must be aligned. This is the same as making their exponents equal.
• For example: 0.125 = 0.1₂ × 2^-2 and 1.5 = 0.11₂ × 2^+1.
• Since the numbers will be normalized initially, the exponents can only be adjusted by shifting right; otherwise, significant bits from the fraction will be lost.
• For each bit position that a mantissa is shifted to the right, the exponent of the number is increased by 1.
• All intermediate results should be kept in double-length storage.
FP Arithmetic ×/÷
• Check for zero
• Add/subtract the exponents
• Multiply/divide the significands (watch the sign)
• Normalize
• i.e. the product of two floating point numbers is computed by multiplying the two mantissas and using the sum of the two exponents as the exponent of the product. The product is then normalized and the exponent adjusted accordingly:

    (f1 × 2^e1) × (f2 × 2^e2) = f3 × 2^e3,  where f3 = f1 × f2 and e3 = e1 + e2

• The quotient of one number divided by another has a mantissa equal to the quotient of the dividend's mantissa divided by the divisor's mantissa. The exponent of the quotient equals the exponent of the dividend minus the exponent of the divisor. Any required normalization is performed after the division:

    (f1 × 2^e1) / (f2 × 2^e2) = f3 × 2^e3,  where f3 = f1 / f2 and e3 = e1 - e2
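The product rule in miniature (a sketch, ours):

    def fp_mul(f1, e1, f2, e2):
        # Multiply mantissas, add exponents, then renormalize into [1, 2).
        f3, e3 = f1 * f2, e1 + e2
        while f3 >= 2:
            f3, e3 = f3 / 2, e3 + 1
        return f3, e3

    print(fp_mul(1.5, 1, 1.5, 2))   # (1.125, 4): 3.0 * 6.0 = 1.125 * 2^4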
Floating Point Multiplication
[Figure not reproduced]

Floating Point Division
[Figure not reproduced]
Booth's Algorithm
• Booth's algorithm is a multiplication algorithm that multiplies two signed binary integers in 2's complement representation.
• It speeds up the multiplication process and is very efficient.
Booth's Algorithm
• If M is the multiplicand, A the accumulator register, Q the multiplier, and Q₋₁ the extra bit appended on the right of Q, then Booth's algorithm may be executed as shown below (see the sketch after this list):
1. Initialization:
   a. Load A and Q₋₁ with zeros.
   b. Load M with the multiplicand (the first operand).
   c. Load Q with the multiplier (the second operand).
2. Repeat n times:
   a. If Q₀ = 0 and Q₋₁ = 0, add 0 to A; else
   b. if Q₀ = 0 and Q₋₁ = 1, add M to A; else
   c. if Q₀ = 1 and Q₋₁ = 0, subtract M from A; else
   d. if Q₀ = 1 and Q₋₁ = 1, add 0 to A.
   e. Shift A Q Q₋₁ one bit right using sign extension.
• The result is in the AQ register pair.
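A direct transcription of these steps into Python (ours; registers are modeled as n-bit masked integers):

    def booth_multiply(multiplicand, multiplier, n=8):
        mask = (1 << n) - 1
        M = multiplicand & mask
        A, Q, Q_1 = 0, multiplier & mask, 0          # 1. initialization
        for _ in range(n):                           # 2. repeat n times
            pair = (Q & 1, Q_1)
            if pair == (0, 1):
                A = (A + M) & mask                   # 01: add M to A
            elif pair == (1, 0):
                A = (A - M) & mask                   # 10: subtract M from A
            # 00 and 11: add nothing
            Q_1 = Q & 1                              # shift A Q Q-1 right...
            Q = (Q >> 1) | ((A & 1) << (n - 1))
            A = (A >> 1) | (A & (1 << (n - 1)))      # ...with sign extension
        product = (A << n) | Q                       # result in the AQ pair
        if product >> (2 * n - 1):                   # reinterpret as signed
            product -= 1 << (2 * n)
        return product

    print(booth_multiply(7, 3))     # 21
    print(booth_multiply(3, -6))    # -18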
Example of Booth's Algorithm: 7 × 3
[Worked example table not reproduced; booth_multiply(7, 3) above returns 21.]

Exercises:
• Find 3 × 9
• Find 3 × (-6)
Error Detection and Correction
• It is physically impossible for any data recording or transmission medium to be 100% perfect 100% of the time over its entire expected useful life.
• Thus, error detection and correction is critical to accurate data transmission, storage, and retrieval.
• Check digits, appended to the end of a long number, can provide some protection against data input errors.
  – The last characters of UPC barcodes and ISBNs are check digits.
Error Detection and Correction
• Longer data streams require more economical and sophisticated error detection mechanisms.
• Cyclic redundancy checking (CRC) codes provide error detection for large blocks of data.
• Checksums and CRCs are examples of systematic error detection.
• In systematic error detection, a group of error control bits is appended to the end of the block of transmitted data.
  – This group of bits is called a syndrome.
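For illustration only (the slides give no code), a toy CRC computed by modulo-2 long division; the generator 10011 is an arbitrary example choice, not one mandated by any standard:

    def crc_remainder(data_bits, generator="10011"):
        # Modulo-2 (XOR) long division; the remainder is the appended control bits.
        n = len(generator) - 1
        bits = [int(b) for b in data_bits + "0" * n]   # room for the check bits
        gen = [int(b) for b in generator]
        for i in range(len(data_bits)):
            if bits[i]:
                for j, g in enumerate(gen):
                    bits[i + j] ^= g
        return "".join(str(b) for b in bits[-n:])

    data = "11010011"
    print(data + crc_remainder(data))   # data followed by its CRC bits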
Conclusion

• Computers store data in the form of bits, bytes, and words using the binary numbering system.

• Hexadecimal numbers are formed using four-bit groups called nibbles (or nybbles).

• Signed integers can be stored in one's complement, two's complement, or signed magnitude representation.

• Floating-point numbers are usually coded using the IEEE 754 floating-point standard.
