COA UNIT-III PPTs Dr.G.Bhaskar ECE


22CS2112: COMPUTER ORGANIZATION

AND ARCHITECTURE
UNIT-III

By
Dr. G. Bhaskar
Associate Professor
Dept. of ECE
Email Id:[email protected]
UNIT 3
• Data Representation: Data types, Complements,
• Fixed Point Representation,
• Floating Point Representation.
• Computer Arithmetic: Addition and Subtraction,
• Multiplication Algorithms,
• Division Algorithms,
• Floating-Point Arithmetic Operations.
• Decimal Arithmetic Unit, Decimal Arithmetic Operations.
Computer data types
• Computer programs or applications may use different types of data depending on the
problem or requirement.

• Given below are the different types of data that a computer uses:


• Numeric data – Integer and Real numbers
• Non-numeric data – Character data, address data, logical data
• Let's study each of these along with its sub-categories.

• Numeric data
• It can be of the following two types:
• Integers
• Real Numbers
• Real numbers can be represented as:
• Fixed point representation
• Floating point representation
• Character data
• A sequence of characters is called character data.
• A character may be alphabetic (A-Z or a-z), numeric (0-9), a
special character (+, #, *, @, etc.), or a combination of these.
A character is represented by a group of bits.

• When multiple characters are combined, they form meaningful
data. A character is commonly represented in the standard ASCII
format; another popular format is EBCDIC, used in large computer
systems.

• Example of character data
• Rajneesh1#
• 229/3, xyZ
• Mission Milap – X/10
• Logical data
• Logical data is used by computer systems to take logical decisions.
• Logical data differs from numeric or alphanumeric data: those are
associated with numbers or characters, whereas logical data takes
one of only two values, true (T) or false (F).

• Examples of logical data appear in the construction of truth
tables for logic gates.
• Logical data can also be a statement consisting of numeric or
character data combined with relational operators (>, <, =, etc.).

• Character set
• Character sets in computers can be of the following types:
• Alphabetic characters- It consists of alphabet characters A-Z or a-z.
• Numeric characters- It consists of digits from 0 to 9.
• Special characters- Special symbols are +, *, /, -, ., <, >, =, @, %, #,
etc.
Fixed point representation
• In computers, fixed-point representation is a real data
type for numbers. Fixed point representation can
convert data into binary form, and then the data is
processed, stored, and used by the computer. It has a
fixed number of bits for the integral and fractional
parts. For example, if the given fixed-point representation
is IIIII.FFF, the smallest nonzero value we can store is
00000.001 and the largest is 99999.999.
• There are three parts of the fixed-point number
representation: Sign bit, Integral part, and Fractional
part.
• Sign bit: The fixed-point representation in binary uses a
sign bit. A negative number has sign bit 1, while a
positive number has sign bit 0.
• Integral part: The integral part can have different lengths
in different formats. It depends on the register's size; for
an 8-bit register, the integral part may be 4 bits.
• Fractional part: The fractional part can likewise have
different lengths. For an 8-bit register with a sign bit and
a 4-bit integral part, the fractional part is 3 bits.
How to write numbers in Fixed-point
notation?
• Now that we have learned about fixed-point number
representation, let's see how to represent it.
• The number considered is 4.5
• Step 1: We will convert the number 4.5 to binary
form. 4.5 = 100.1
• Step 2: Place the bits into the fixed-point format: with a sign
bit, a 4-bit integral part, and a 3-bit fractional part, 4.5 is
stored as 0 0100 100.
• Fixed Point Notation of 4.5: 0 0100 100 (sign | integral | fractional)
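A minimal C sketch of this idea (illustrative, not from the slides), assuming an 8-bit signed fixed-point format with 3 fractional bits, i.e. a scale factor of 2^3 = 8:

#include <stdio.h>
#include <stdint.h>

#define FRAC_BITS 3   /* 8-bit fixed point with 3 fractional bits: value = raw / 8.0 */

int main(void) {
    double value = 4.5;
    int8_t raw = (int8_t)(value * (1 << FRAC_BITS));   /* 4.5 * 8 = 36 = 00100100 in binary */
    double back = raw / (double)(1 << FRAC_BITS);      /* convert back to check */
    printf("raw = %d, recovered value = %f\n", raw, back);
    return 0;
}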
Floating Point
Representations
Floating-point
arithmetic

❑ We often encounter floating point in programming.


– Floating point greatly simplifies working with very large (e.g., 2^70) and
very small (e.g., 2^-17) numbers.
❑ We’ll focus on the IEEE 754 standard for floating-point arithmetic.
– How FP numbers are represented
– Limitations of FP numbers
– FP addition and multiplication
Floating-point
representation
❑ IEEE numbers are stored using a kind of scientific notation.

 mantissa × 2^exponent

❑ We can represent floating-point numbers with three binary


fields: a sign bit s, an exponent field e, and a fraction field f.

s e f

❑ The IEEE 754 standard defines several different precisions.


— Single precision numbers include an 8-bit exponent field
and a 23-bit fraction, for a total of 32 bits.
— Double precision numbers have an 11-bit exponent field
and a 52-bit fraction, for a total of 64 bits.
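As an illustration (not part of the original slides), here is a C sketch that pulls the three fields out of a single-precision value using exactly this layout:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float x = -6.25f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);      /* reinterpret the 32 bits of the float */

    uint32_t s = bits >> 31;             /* 1-bit sign */
    uint32_t e = (bits >> 23) & 0xFF;    /* 8-bit biased exponent */
    uint32_t f = bits & 0x7FFFFF;        /* 23-bit fraction */

    printf("s = %u, e = %u (actual exponent %d), f = 0x%06X\n",
           s, e, (int)e - 127, f);
    return 0;
}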
Sign
s e f

❑ The sign bit is 0 for positive numbers and 1 for negative


numbers.
❑ But unlike integers, IEEE values are stored in signed magnitude
format.
Mantissa
s e f
❑ There are many ways to write a number in scientific notation, but
there is always a unique normalized representation, with exactly one
non-zero digit to the left of the point.
0.232 × 10^3 = 23.2 × 10^1 = 2.32 × 10^2 = …

01001 = 1.001 × 2^3 = …
❑ What’s the normalized representation of 00101101.101 ?
00101101.101
= 1.01101101 × 2^5

❑ What’s the normalized representation of 0.0001101001110 ?


0.0001101001110
= 1.101001110 × 2^-4
Mantissa
s e f
❑ There are many ways to write a number in scientific notation, but
there is always a unique normalized representation, with exactly one
non-zero digit to the left of the point.
0.232 × 10^3 = 23.2 × 10^1 = 2.32 × 10^2 = …

01001 = 1.001 × 2^3 = …
❑ The field f contains a binary fraction.
❑ The actual mantissa of the floating-point value is (1 + f).
– In other words, there is an implicit 1 to the left of the binary
point.
– For example, if f is 01101…, the mantissa would be 1.01101…
❑ A side effect is that we get a little more precision: there are 24 bits in
the mantissa, but we only need to store 23 of them.
❑ But, what about value 0?
Exponent
s e f
❑ There are special cases that require reserved encodings:
– Infinities (overflow)
– NaN (invalid operations such as 0/0)
❑ For example:
– Single-precision: 8 bits in e → 256 codes; 11111111 reserved for
special cases → 255 codes; one code (00000000) for zero → 254
codes; need both positive and negative exponents → half
positives (127), and half negatives (127)
– Double-precision: 11 bits in e → 2048 codes; 111…1 reserved for
special cases → 2047 codes; one code for zero → 2046 codes;
need both positive and negative exponents → half positives
(1023), and half negatives (1023)
Exponent
s e f

❑ The e field represents the exponent as a biased number.


– It contains the actual exponent plus 127 for single precision, or
the actual exponent plus 1023 in double precision.
– This converts all single-precision exponents from -126 to +127
into unsigned numbers from 1 to 254, and all double-precision
exponents from -1022 to +1023 into unsigned numbers from 1 to
2046.
❑ Two examples with single-precision numbers are shown below.
– If the exponent is 4, the e field will be 4 + 127 = 131 (binary 10000011).
– If e contains 01011101 (93 decimal), the actual exponent is 93 - 127 = -34.
❑ Storing a biased exponent means we can compare IEEE values as if
they were signed integers.
Mapping Between e and Actual Exponent

e           Actual exponent
0000 0000   Reserved
0000 0001   1 - 127 = -126
0000 0010   2 - 127 = -125
…           …
0111 1111   127 - 127 = 0
…           …
1111 1110   254 - 127 = 127
1111 1111   Reserved
Converting an IEEE 754 number
to decimal
s e f

❑ The decimal value of an IEEE number is given by the formula:

(1 - 2s) × (1 + f) × 2^(e - bias)

❑ Here, the s, f and e fields are assumed to be in decimal.


– (1 - 2s) is 1 or -1, depending on whether the sign bit is 0
or 1.
– We add an implicit 1 to the fraction field f, as mentioned
earlier.
– Again, the bias is either 127 or 1023, for single or double
precision.
Example IEEE-decimal conversion
❑ Let’s find the decimal value of the following IEEE number.

1 01111100 11000000000000000000000

❑ First convert each individual field to decimal.


– The sign bit s is 1.
– The e field contains 01111100 = 124 (decimal).
– The mantissa is 0.11000… = 0.75 (decimal).
❑ Then just plug these decimal values of s, e and f into our formula.

(1 - 2s) × (1 + f) × 2^(e - bias)

❑ This gives us (1 - 2) × (1 + 0.75) × 2^(124-127) = -1.75 × 2^-3 = -0.21875.
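A small C check of this conversion (illustrative, not from the slides): build the 32-bit pattern, let the hardware interpret it, and also apply the formula directly.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    /* 1 | 01111100 | 11000000000000000000000  (s = 1, e = 124, f = 0.75) */
    uint32_t bits = (1u << 31) | (124u << 23) | 0x600000u;
    float x;
    memcpy(&x, &bits, sizeof x);                                  /* hardware interpretation */
    double formula = (1 - 2*1) * (1 + 0.75) * pow(2, 124 - 127);  /* formula from the slide  */
    printf("hardware: %f, formula: %f\n", x, formula);            /* both print -0.218750    */
    return 0;
}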


Converting a decimal number
to IEEE 754
❑ What is the single-precision representation of 347.625?

1. First convert the number to binary: 347.625 = 101011011.101 (binary).


2. Normalize the number by shifting the binary point until there is
a single 1 to the left:

101011011.101 × 2^0 = 1.01011011101 × 2^8

3. The bits to the right of the binary point comprise the fractional
field f.
4. The number of times you shifted gives the exponent. The field e
should contain: exponent + 127.
5. Sign bit: 0 if positive, 1 if negative.
Example
❑ What is the single-precision representation of 639.6875

639.6875 = 1001111111.1011 (binary)
= 1.0011111111011 × 2^9

s=0
e = 9 + 127 = 136 = 10001000
f = 0011111111011

The single-precision representation is:


0 10001000 00111111110110000000000
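To double-check such an encoding, here is an illustrative C sketch (not from the slides) that prints the raw bit pattern of the float:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float x = 639.6875f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    for (int i = 31; i >= 0; i--) {              /* print the bits from sign to fraction */
        putchar((bits >> i) & 1 ? '1' : '0');
        if (i == 31 || i == 23) putchar(' ');    /* separate the s, e, f fields */
    }
    putchar('\n');   /* expected: 0 10001000 00111111110110000000000 */
    return 0;
}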
Examples: Compare FP numbers

1. 0 0111 1111 110…0   vs   0 1000 0000 110…0   ( <, > ? )
   +1.11₂ × 2^(127-127) = 1.75          +1.11₂ × 2^(128-127) = 11.1₂ = 3.5

   Exponent fields: 0111 1111 < 1000 0000
   Directly comparing the exponent fields as unsigned values gives the result.

2. 1 0111 1111 110…0   vs   1 1000 0000 110…0
   -f × 2^(0111 1111)                    -f × 2^(1000 0000)
   For the exponents: 0111 1111 < 1000 0000
   So -f × 2^(0111 1111) > -f × 2^(1000 0000)
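An illustrative C check (not from the slides) that positive IEEE values order the same way as their raw bit patterns:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static uint32_t as_bits(float x) {
    uint32_t b;
    memcpy(&b, &x, sizeof b);
    return b;
}

int main(void) {
    float a = 1.75f, b = 3.5f;   /* the two positive values from example 1 */
    /* For positive floats, comparing the bit patterns as unsigned integers
       gives the same answer as comparing the values themselves. */
    printf("a < b as floats: %d, as bit patterns: %d\n",
           a < b, as_bits(a) < as_bits(b));   /* both print 1 */
    return 0;
}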
Special Values (single-precision)

E          F      Meaning          Notes
00000000   0…0    0                +0.0 and -0.0
00000000   X…X    Valid number     Unnormalized = (-1)^S × 2^-126 × (0.F)
11111111   0…0    Infinity
11111111   X…X    Not a Number


E           Real exponent   F        Value
0000 0000   Reserved        000…0    0
                            xxx…x    Unnormalized: (-1)^S × 2^-126 × (0.F)
0000 0001   -126
0000 0010   -125                     Normalized: (-1)^S × 2^(e-127) × (1.F)
…           …
0111 1111   0
…           …
1111 1110   127
1111 1111   Reserved        000…0    Infinity
                            xxx…x    NaN
Range of numbers
❑ Normalized (positive range; the negative range is symmetric)

smallest: 0 00000001 00000000000000000000000 = +2^-126 × (1 + 0) = 2^-126

largest:  0 11111110 11111111111111111111111 = +2^127 × (2 - 2^-23)

❑ Unnormalized
smallest: 0 00000000 00000000000000000000001 = +2^-126 × 2^-23 = 2^-149

largest:  0 00000000 11111111111111111111111 = +2^-126 × (1 - 2^-23)

(Number line: 0, then unnormalized values from 2^-149 up to 2^-126 × (1 - 2^-23), then normalized values from 2^-126 up to 2^127 × (2 - 2^-23). Results beyond the largest value give positive overflow; nonzero results below the smallest unnormalized value give positive underflow.)
In comparison
❑ The smallest and largest possible 32-bit integers in two’s
complement are only -2^31 and 2^31 - 1.
❑ How can we represent so many more values in the IEEE 754
format, even though we use the same number of bits as regular
integers?

What is the next representable FP number after 2^-126?


It is +2^-126 × (1 + 2^-23), which differs from the smallest
normalized number by only 2^-149.


Finiteness
❑ There are only finitely many IEEE numbers.
❑ With 32 bits, there are 2^32, or about 4 billion, different bit patterns.
– These can represent 4 billion integers or 4 billion reals.
– But there are an infinite number of reals, and the IEEE format
can only represent some of the ones from about -2^128 to +2^128.
– The format represents the same number of values between 2^n and
2^(n+1) as between 2^(n+1) and 2^(n+2) (e.g., between 2 and 4, 4 and 8,
or 8 and 16), so representable values get farther apart as magnitudes grow.
❑ Thus, floating-point arithmetic has “issues”
– Small roundoff errors can accumulate with multiplications or
exponentiations, resulting in big errors.
– Rounding errors can invalidate many basic arithmetic
principles such as the associative law, (x + y) + z = x + (y + z).
❑ The IEEE 754 standard guarantees that all machines will produce
the same results—but those results may not be mathematically
accurate!
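A tiny C illustration (not from the slides) of the associativity problem with single precision:

#include <stdio.h>

int main(void) {
    float x = 1.0e8f, y = -1.0e8f, z = 1.0f;
    /* Mathematically both expressions equal 1.0, but rounding makes them differ. */
    printf("(x + y) + z = %f\n", (x + y) + z);   /* prints 1.000000 */
    printf("x + (y + z) = %f\n", x + (y + z));   /* prints 0.000000 */
    return 0;
}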
Limits of the IEEE
representation
❑ Even some integers cannot be represented in the IEEE
format.
int x = 33554431;
float y = 33554431;
printf( "%d\n", x );
printf( "%f\n", y );

33554431
33554432.000000

❑ Some simple decimal numbers cannot be represented exactly


in binary to begin with.

0.10 (decimal) = 0.0001100110011... (binary)
❑ During the Gulf War in 1991, a U.S. Patriot missile failed to intercept
an Iraqi Scud missile, and 28 Americans were killed.
❑ A later study determined that the problem was caused by the
inaccuracy of the binary representation of 0.10.
– The Patriot incremented a counter once every 0.10 seconds.
– It multiplied the counter value by 0.10 to compute the actual
time.
❑ However, the (24-bit) binary representation of 0.10 actually
corresponds to 0.099999904632568359375, which is off by
0.000000095367431640625.
❑ This doesn’t seem like much, but after 100 hours the time ends up
being off by 0.34 seconds—enough time for a Scud to travel 500
meters!
❑ Professor Skeel wrote a short article about this.
Roundoff Error and the Patriot Missile. SIAM News, 25(4):11, July 1992.
Floating-point addition
example
❑ To get a feel for floating-point operations, we’ll do an addition
example.
– To keep it simple, we’ll use base 10 scientific notation.
– Assume the mantissa has four digits, and the exponent
has one digit.
❑ An example for the addition:

99.99 + 0.161 = 100.151


❑ As normalized numbers, the operands would be written as:

9.999 × 10^1 and 1.610 × 10^-1

Steps 1-2: the actual addition
1. Equalize the exponents.
The operand with the smaller exponent should be rewritten by
increasing its exponent and shifting the point leftwards.

1.610 × 10^-1 = 0.01610 × 10^1

With four significant digits, this gets rounded to: 0.016

This can result in a loss of least significant digits—the rightmost 1 in


this case. But rewriting the number with the larger exponent could
result in loss of the most significant digits, which is much worse.

2. Add the mantissas.

  9.999 × 10^1
+ 0.016 × 10^1
 10.015 × 10^1
Steps 3-5: representing the result
3. Normalize the result if necessary.

10.015 × 10^1 = 1.0015 × 10^2

This step may cause the point to shift either left or right, and the
exponent to either increase or decrease.

4. Round the number if needed.

1.0015 × 10^2 gets rounded to 1.002 × 10^2

5. Repeat Step 3 if the result is no longer normalized.


We don’t need this in our example, but it’s possible for rounding to
add digits—for example, rounding 9.9995 yields 10.000.

Our result is 1.002 × 10^2, or 100.2. The correct answer is 100.151, so we have


the right answer to four significant digits, but there’s a small error already.

Example
❑ Calculate 0 1000 0001 110…0 plus 0 1000 0010 00110…0
(both are single-precision IEEE 754 representations)

1. 1st number: 1.11₂ × 2^(129-127); 2nd number: 1.0011₂ × 2^(130-127)


2. Compare the e fields: 1000 0001 < 1000 0010
3. Align exponents to 1000 0010; the 1st number becomes:
0.111₂ × 2^3
4. Add the mantissas:
  1.0011
+ 0.1110
 10.0001
5. So the sum is: 10.0001₂ × 2^3 = 1.00001₂ × 2^4
So the IEEE 754 format is: 0 1000 0011 000010…0
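For reference (not part of the original slides), the same addition checked with hardware floats: the operands are 1.75 × 2^2 = 7.0 and 1.1875 × 2^3 = 9.5, and their sum 16.5 should encode as 0 10000011 00001000…0.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float a = 7.0f, b = 9.5f;   /* 0 10000001 110...0 and 0 10000010 00110...0 */
    float sum = a + b;          /* 16.5 */
    uint32_t bits;
    memcpy(&bits, &sum, sizeof bits);
    printf("sum = %f, bits = ", sum);
    for (int i = 31; i >= 0; i--) {
        putchar((bits >> i) & 1 ? '1' : '0');
        if (i == 31 || i == 23) putchar(' ');
    }
    putchar('\n');   /* expected: 0 10000011 00001000000000000000000 */
    return 0;
}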

Multiplication
❑ To multiply two floating-point values, first multiply their magnitudes
and add their exponents.

   9.999 × 10^1
×  1.610 × 10^-1
  16.098 × 10^0

❑ You can then round and normalize the result, yielding 1.610 × 10^1.
❑ The sign of the product is the exclusive-or of the signs of the
operands.
– If two numbers have the same sign, their product is positive.
– If two numbers have different signs, the product is negative.

00=0 01=1 10=1 11=0

❑ This is one of the main advantages of using signed magnitude.
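As a small illustration in C (not from the slides), the sign of a float product really is the XOR of the operands' sign bits:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static uint32_t sign_bit(float x) {
    uint32_t b;
    memcpy(&b, &x, sizeof b);
    return b >> 31;             /* top bit of the IEEE 754 encoding */
}

int main(void) {
    float a = -2.5f, b = 4.0f;
    printf("XOR of signs: %u, sign of product: %u\n",
           sign_bit(a) ^ sign_bit(b), sign_bit(a * b));   /* both print 1 */
    return 0;
}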

The history of floating-point
computation
❑ In the past, each machine had its own implementation of
floating-point arithmetic hardware and/or software.
– It was impossible to write portable programs that would
produce the same results on different systems.
❑ It wasn’t until 1985 that the IEEE 754 standard was adopted.
– Having a standard at least ensures that all compliant
machines will produce the same outputs for the same
program.

Floating-point hardware
❑ When floating point was introduced in microprocessors, there
weren't enough transistors on the chip to implement it.
– You had to buy a floating point co-processor (e.g., the
Intel 8087)
❑ As a result, many ISA’s use separate registers for floating
point.
❑ Modern transistor budgets enable floating point to be on chip.
– Intel’s 486 was the first x86 with built-in floating point
(1989)
❑ Even the newest ISA’s have separate register files for floating
point.
– Makes sense from a floor-planning perspective.

DIVISION IN BINARY
DIVISION HARDWARE
DIVISION FLOW CHART
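The original division hardware and flow-chart figures are not reproduced here. As an illustrative substitute (an assumption, not the slide's own material), the sketch below models the restoring division algorithm that such flow charts typically describe, for unsigned 8-bit operands; the helper name restoring_divide is made up for this example.

#include <stdio.h>
#include <stdint.h>

/* Restoring division: repeatedly shift the (A, Q) register pair left, subtract
 * the divisor from A, and restore A (clearing the quotient bit) if it went negative. */
void restoring_divide(uint8_t dividend, uint8_t divisor,
                      uint8_t *quotient, uint8_t *remainder) {
    int16_t A = 0;                 /* accumulator / partial remainder */
    uint8_t Q = dividend;          /* quotient register, initially holds the dividend */
    for (int i = 0; i < 8; i++) {
        A = (A << 1) | (Q >> 7);   /* shift A,Q left as one double-length register */
        Q <<= 1;
        A -= divisor;              /* trial subtraction */
        if (A < 0)
            A += divisor;          /* restore: the divisor did not fit */
        else
            Q |= 1;                /* the divisor fit: set this quotient bit */
    }
    *quotient = Q;
    *remainder = (uint8_t)A;
}

int main(void) {
    uint8_t q, r;
    restoring_divide(100, 7, &q, &r);
    printf("100 / 7 = %u remainder %u\n", q, r);   /* prints 14 remainder 2 */
    return 0;
}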
Floating point operations
• Addition: X + Y = (adjusted Xm + Ym) × 2^Ye, where Xe ≤ Ye
• Subtraction: X - Y = (adjusted Xm - Ym) × 2^Ye, where Xe ≤ Ye
• Multiplication: X × Y = (adjusted Xm × Ym) × 2^(Xe + Ye)
• Division: X / Y = (adjusted Xm / Ym) × 2^(Xe - Ye)
Algorithm FP Addition/Subtraction
• Let X and Y be the FP numbers involved in
addition/subtraction; assume Ye ≥ Xe.
• Basic steps (a small software sketch follows this list):
• Compute Ye - Xe, a fixed-point subtraction.
• Shift the mantissa Xm by (Ye - Xe) steps to the right
to form Xm × 2^-(Ye - Xe); if instead Xe were the larger
exponent, the mantissa Ym would be the one adjusted.
• Compute Xm × 2^-(Ye - Xe) ± Ym.
• Determine the sign of the result.
• Normalize the resulting value, if necessary.
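A minimal C model of these steps (illustrative only: it uses small integer mantissas, ignores rounding, signs and the IEEE encoding, and the struct and function names are made up for this sketch):

#include <stdio.h>

/* Toy floating-point value: value = m * 2^e, with m a non-negative integer mantissa. */
struct fp { unsigned m; int e; };

/* Add two toy FP numbers following the steps above. */
struct fp fp_add(struct fp x, struct fp y) {
    if (x.e > y.e) { struct fp t = x; x = y; y = t; }   /* ensure Xe <= Ye        */
    int d = y.e - x.e;                                  /* step 1: Ye - Xe        */
    unsigned xm = x.m >> d;                             /* step 2: align Xm       */
    struct fp r = { xm + y.m, y.e };                    /* step 3: add mantissas  */
    while (r.m != 0 && (r.m & 1u) == 0) {               /* step 5: "normalize" by */
        r.m >>= 1;                                      /* dropping trailing zero */
        r.e += 1;                                       /* bits of the mantissa   */
    }
    return r;
}

int main(void) {
    struct fp a = { 4, 1 };   /* 4 * 2^1 = 8  */
    struct fp b = { 5, 3 };   /* 5 * 2^3 = 40 */
    struct fp s = fp_add(a, b);
    printf("result: %u * 2^%d\n", s.m, s.e);   /* prints 3 * 2^4, i.e. 48 */
    return 0;
}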
Multiplication and Division
Results in FP arithmetic
• FP arithmetic results will have to be produced in normalised form.
• Adjusting the bias of the resulting exponent is required. The biased
representation of the exponent causes a problem when the exponents
are added in a multiplication or subtracted in a division, resulting
in a double-biased or wrongly biased exponent. This must be corrected,
an extra step for the FP arithmetic hardware (see the small example
after this list).
• When the result is zero, the resulting mantissa is all zeros but
the exponent is not. A special step is needed to force the exponent
bits to zero as well.
• Overflow is to be detected when the result is too large to be
represented in the FP format.
• Underflow is to be detected when the result is too small to be
represented in the FP format. Overflow and underflow are
automatically detected by hardware; however, in such cases the
mantissa may sometimes remain in denormalized form.
• Handling the guard bits (extra bits carried during the computation)
becomes an issue when the result is to be rounded rather than truncated.
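For the exponent-bias correction specifically, a one-line illustration in C (assuming the single-precision bias of 127):

#include <stdio.h>

int main(void) {
    int e1 = 3 + 127, e2 = 5 + 127;   /* biased exponents of the two operands */
    int e_prod = e1 + e2 - 127;       /* subtract one bias, otherwise the result is double-biased */
    printf("biased exponent of product: %d (actual exponent %d)\n",
           e_prod, e_prod - 127);     /* prints 135 (actual exponent 8) */
    return 0;
}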
DECIMAL ADDER
What is a BCD Adder ?
• A decimal adder and a BCD adder are the same circuit.
• A BCD adder, also known as a Binary-Coded Decimal
adder, is a digital circuit that performs addition
operations on Binary-Coded Decimal numbers. BCD is a
numerical representation that uses a four-bit binary
code to represent each decimal digit from 0 to 9. BCD
encoding allows for direct conversion between binary
and decimal representations, making it useful for
arithmetic operations on decimal numbers.
• The purpose of a BCD adder is to add two BCD
numbers together and produce a BCD result. It follows
specific rules to ensure accurate decimal results. The
BCD adder circuit typically consists of multiple stages,
each representing one decimal digit, and utilizes binary
addition circuits combined with BCD-specific rules.
Working of Decimal Adder
• We take a 4-bit Binary-Adder, which takes addend and
augend bits as an input with an input carry 'Carry in'.
• The Binary-Adder produces five outputs, i.e., Z8, Z4, Z2, Z1,
and an output carry K.
• With the help of the output carry K and Z8, Z4, Z2, Z1
outputs, the logical circuit is designed to identify the Cout
• Cout = K + Z8*Z4 + Z8*Z2
• The Z8, Z4, Z2, and Z1 outputs of the binary adder are
passed into the 2nd 4-bit binary adder as an Augend.
• The addend input of the 2nd 4-bit binary adder is designed so
that its 1st and 4th bits are 0 and its 2nd and 3rd bits are equal
to Cout. When Cout is 0, the addend is 0000, which leaves the
result of the 1st 4-bit binary adder unchanged. But when Cout is 1,
the addend is 0110, i.e., 6, which is added to the augend to give a
valid BCD digit.
• Example: 1001+1000
• First, add both numbers using a 4-bit binary adder with the
input carry set to 0.
• The binary adder produces the result 0001 with output carry K = 1.
• Then find the Cout value to identify whether the produced BCD is
valid or invalid, using the expression Cout = K + Z8.Z4 + Z8.Z2.
K=1
Z8 = 0
Z4 = 0
Z2 = 0
Cout = 1+0*0+0*0
Cout = 1+0+0
Cout = 1
• The value of Cout is 1, which indicates that the produced BCD code
is invalid. Therefore, add the output of the 1st 4-bit binary adder
to 0110.
= 0001 + 0110
= 0111
• The result, including the carry output, is:
BCD = Cout Z8 Z4 Z2 Z1 = 1 0111 (decimal 17)
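An illustrative C model of this one-digit BCD adder (not from the slides; the function name bcd_add_digit is made up), using the same correction rule:

#include <stdio.h>

/* Add two BCD digits plus a carry-in; returns the corrected BCD digit
 * and writes the decimal carry-out through *cout. */
unsigned bcd_add_digit(unsigned a, unsigned b, unsigned cin, unsigned *cout) {
    unsigned z = a + b + cin;           /* plain 4-bit binary addition (Z and K) */
    unsigned k = (z >> 4) & 1;          /* output carry K of the binary adder    */
    unsigned z8 = (z >> 3) & 1, z4 = (z >> 2) & 1, z2 = (z >> 1) & 1;
    *cout = k | (z8 & z4) | (z8 & z2);  /* Cout = K + Z8*Z4 + Z8*Z2              */
    if (*cout) z += 6;                  /* add 0110 to correct the digit         */
    return z & 0xF;                     /* keep only the 4-bit BCD digit         */
}

int main(void) {
    unsigned carry;
    unsigned digit = bcd_add_digit(0x9, 0x8, 0, &carry);
    printf("9 + 8 -> carry %u, digit %u\n", carry, digit);   /* carry 1, digit 7 */
    return 0;
}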
Algorithm for Decimal Adder
• BCD Addition of Given Decimal Number
• BCD addition of a given decimal number involves performing
addition operations on the individual BCD digits of the number.
• Step 1: Convert the decimal number into BCD representation:
• Take each decimal digit and convert it into its BCD equivalent, which
is a four-bit binary code.
• For example, the decimal number 456 would be represented as
0100 0101 0110 in BCD.
• Step 2: Align the BCD numbers for addition:
Ensure that the BCD numbers to be added have the same number
of digits.
If necessary, pad the shorter BCD number with leading zeros to
match the length of the longer BCD number.
• Step 3: Perform binary addition on the BCD digits:
• Start from the rightmost digit and add the corresponding BCD digits
of the two numbers.
• If the sum of the BCD digits is less than or equal to 9 (0000 to 1001
in binary), it represents a valid BCD digit.
• If the sum is greater than 9 (1010 to 1111 in binary), it indicates a
carry-out, and a correction is needed.
• Step 4: Handle carry-out and correction:
• When a carry-out occurs, add 0110 (6) to correct the digit and
propagate the carry to the next higher-order digit's BCD addition.
Continue the process until all digits have been processed.
• Step 5: Obtain the final BCD result:
• Once all the BCD digits have been processed, the resulting BCD
digits represent the decimal sum of the original BCD numbers.
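A compact C sketch of the full multi-digit procedure (illustrative only; each decimal digit is packed into four bits of an unsigned int, and the helper name bcd_add is made up):

#include <stdio.h>

/* Add two packed-BCD numbers (one decimal digit per 4 bits). */
unsigned bcd_add(unsigned a, unsigned b) {
    unsigned result = 0, carry = 0;
    for (int shift = 0; shift < 32; shift += 4) {           /* step 3: digit by digit, right to left */
        unsigned da = (a >> shift) & 0xF, db = (b >> shift) & 0xF;
        unsigned z = da + db + carry;
        if (z > 9) { z += 6; carry = 1; } else carry = 0;   /* step 4: correct and carry out */
        result |= (z & 0xF) << shift;
    }
    return result;
}

int main(void) {
    /* 456 + 289 = 745: 0x456 and 0x289 are the packed-BCD encodings. */
    printf("sum = %03X\n", bcd_add(0x456, 0x289));   /* prints sum = 745 */
    return 0;
}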
Decimal Arithmetic
Operation
Decimal Arithmetic Operation

• Decimal numbers in BCD are stored in computer registers in
groups of 4 bits.
• Each 4-bit group must be treated as a unit when performing a
decimal microoperation.
• The following are the decimal arithmetic microoperation symbols:
Decimal Arithmetic Operation
Decimal Arithmetic Operation

• Incrementing or decrementing a register is similar for binary
and BCD, except that a binary counter goes through 16 states
(0000 to 1111), while the BCD counter goes through 10 states
(0000 to 1001 and back to 0000).
• A decimal shift right or left is preceded by the letter d to
indicate a shift over the four bits that hold each decimal digit.
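A tiny C illustration of the BCD-counter behaviour described above (not from the slides; the helper name bcd_increment is made up):

#include <stdio.h>

/* Increment a single BCD digit: wrap from 1001 (9) back to 0000 with a carry. */
unsigned bcd_increment(unsigned digit, unsigned *carry) {
    if (digit == 9) { *carry = 1; return 0; }
    *carry = 0;
    return digit + 1;
}

int main(void) {
    unsigned d = 0, carry;
    for (int i = 0; i < 12; i++) {   /* count past 9 to show the wrap-around */
        printf("%u ", d);
        d = bcd_increment(d, &carry);
        if (carry) printf("(carry) ");
    }
    printf("\n");   /* prints: 0 1 2 3 4 5 6 7 8 9 (carry) 0 1 */
    return 0;
}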
Addition and Subtraction

• Decimal data can be added in 3 different ways:


Addition and Subtraction
• The parallel method uses a decimal arithmetic
unit composed of as many BCD adders as there are
digits in the number.
• In the digit-serial, bit-parallel method, the digits are
applied to a single BCD adder serially, while the
bits of each coded digit are transferred in parallel.
• In the serial method, the bits are shifted one at a
time through a single full adder. The binary sum formed
after four shifts must be corrected into a valid BCD
digit.
• If it is ≥ 1010, the binary sum is corrected by
adding 0110, which also generates a carry for the next
pair of digits.
Addition and Subtraction

• The parallel method is fast but requires a large number of
BCD adders.
• The digit-serial, bit-parallel method requires only one BCD
adder, which is shared by all the digits, so it is slower than
the parallel method.
• The serial method requires the minimum amount of equipment,
only one full adder, but it is very slow.
