COMPUTER ORGANIZATION AND ARCHITECTURE
UNIT-III
By
Dr. G. Bhaskar
Associate Professor
Dept. of ECE
Email Id:[email protected]
UNIT 3
• Data Representation: Data types, Complements,
• Fixed Point Representation,
• Floating Point Representation.
• Computer Arithmetic: Addition and Subtraction,
• Multiplication Algorithms,
• Division Algorithms,
• Floating-point Arithmetic operations.
• Decimal Arithmetic unit, Decimal Arithmetic operations.
Computer data types
• Computer programs or applications may use different types of data, based on the problem or requirement.
• Numeric data
• It can be of the following two types:
• Integers
• Real Numbers
• Real numbers can be represented as:
• Fixed point representation
• Floating point representation
• Character data
• A sequence of characters is called character data.
• A character may be alphabetic (A-Z or a-z), numeric (0-9), a special character (+, #, *, @, etc.), or a combination of these. A character is represented by a group of bits.
• When multiple characters are combined, they form meaningful data. Characters are commonly represented in the standard ASCII format; another popular format is EBCDIC, used in large computer systems.
• Example of character data
• Rajneesh1#
• 229/3, xyZ
• Mission Milap – X/10
• Logical data
• Logical data is used by computer systems to take logical decisions.
• Logical data differs from numeric or alphanumeric data in that numeric and alphanumeric data are associated with numbers or characters, whereas logical data is denoted by one of two values: true (T) or false (F).
• You can see examples of logical data in the construction of truth tables for logic gates.
• Logical data can also be a statement consisting of numeric or character data with relational symbols (>, <, =, etc.).
• Character set
• Character sets in computers can be of the following types:
• Alphabetic characters - the letters A-Z or a-z.
• Numeric characters - the digits 0 to 9.
• Special characters - symbols such as +, *, /, -, ., <, >, =, @, %, #, etc.
Fixed point representation
• In computers, fixed-point representation is a real data type for numbers. The value is converted into binary form and is then processed, stored, and used by the computer with a fixed number of bits for the integral and fractional parts. For example, if the given fixed-point format is IIIII.FFF (five integer digits and three fraction digits), we can store a minimum value of 00000.001 and a maximum value of 99999.999.
• There are three parts of the fixed-point number
representation: Sign bit, Integral part, and Fractional
part.
• Sign bit:- The fixed-point number representation
in binary uses a sign bit. The negative number has
a sign bit 1, while a positive number has a bit 0.
• Integral Part:- The length of the integral part depends on the register size; for an 8-bit register (one sign bit and a 3-bit fractional part), the integral part is 4 bits.
• Fractional part:- The length of the fractional part likewise depends on the register size; for an 8-bit register, the fractional part is 3 bits.
How to write numbers in Fixed-point
notation?
• Now that we have learned about fixed-point number
representation, let's see how to represent it.
• The number considered is 4.5
• Step 1: We will convert the number 4.5 to binary
form. 4.5 = 100.1
• Step 2: Place the bits into the fixed-point fields of the 8-bit format above: sign = 0, integral part = 0100, fractional part = 100, giving 0 0100 100 (as sketched in the code below).
• Fixed Point Notation of 4.5
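• The same encoding can be done in software by scaling the value by 2^3 (the number of fractional bits). Below is a minimal C sketch assuming the 8-bit layout described above (1 sign bit, 4 integral bits, 3 fractional bits); the names fixed_encode/fixed_decode are illustrative, not standard library calls.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Encode a real value into the 8-bit fixed-point layout used above:
   1 sign bit, 4 integral bits, 3 fractional bits (scale factor 2^3 = 8). */
uint8_t fixed_encode(double value) {
    uint8_t sign = (value < 0) ? 1 : 0;
    uint8_t magnitude = (uint8_t)lround(fabs(value) * 8.0);  /* shift left by 3 bits */
    return (uint8_t)((sign << 7) | (magnitude & 0x7F));
}

double fixed_decode(uint8_t bits) {
    double magnitude = (bits & 0x7F) / 8.0;                  /* undo the scaling */
    return (bits >> 7) ? -magnitude : magnitude;
}

int main(void) {
    uint8_t enc = fixed_encode(4.5);        /* 4.5 = 100.1 in binary -> 0 0100 100 */
    printf("encoded: 0x%02X, decoded: %.3f\n", enc, fixed_decode(enc));
    return 0;
}

Running this prints encoded: 0x24 (binary 0 0100 100) and decoded: 4.500, matching the hand-worked result.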
Floating Point Representations
Floating-point arithmetic
❑ A floating-point value is stored as mantissa * 2^exponent, in three fields: s (sign), e (exponent), f (fraction).
❑ Example: 01001 = 1.001 * 2^3 = …
❑ What's the normalized representation of 00101101.101?
00101101.101 = 1.01101101 * 2^5
❑ The field f contains a binary fraction.
❑ The actual mantissa of the floating-point value is (1 + f).
– In other words, there is an implicit 1 to the left of the binary
point.
– For example, if f is 01101…, the mantissa would be 1.01101…
❑ A side effect is that we get a little more precision: there are 24 bits in
the mantissa, but we only need to store 23 of them.
❑ But, what about value 0?
Exponent
s e f
❑ There are special cases that require encodings
– Infinities (e.g., from overflow, or from dividing a nonzero number by zero)
– NaN, "not a number" (e.g., from 0/0 or other invalid operations)
❑ For example:
– Single-precision: 8 bits in e → 256 codes; 11111111 reserved for
special cases → 255 codes; one code (00000000) for zero → 254
codes; need both positive and negative exponents → half
positives (127), and half negatives (127)
– Double-precision: 11 bits in e → 2048 codes; 111…1 reserved for
special cases → 2047 codes; one code for zero → 2046 codes;
need both positive and negative exponents → half positives
(1023), and half negatives (1023)
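❑ As an illustration of how these reserved exponent codes appear in practice, here is a minimal C sketch that inspects a float's bit pattern (classify is an illustrative helper; the layout assumed is the standard 1/8/23 single-precision format).

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Classify a single-precision value by its 8-bit exponent field:
   e = 11111111 -> infinity or NaN, e = 00000000 -> zero or denormalized,
   anything else -> a normal value with true exponent e - 127. */
void classify(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);          /* view the float's bit pattern */
    uint32_t e = (bits >> 23) & 0xFF;        /* 8-bit exponent field */
    uint32_t f = bits & 0x7FFFFF;            /* 23-bit fraction field */
    if (e == 0xFF)
        printf("%g: %s\n", (double)x, f ? "NaN" : "infinity");
    else if (e == 0)
        printf("%g: %s\n", (double)x, f ? "denormalized" : "zero");
    else
        printf("%g: normal, true exponent %d\n", (double)x, (int)e - 127);
}

int main(void) {
    classify(639.6875f);      /* normal value */
    classify(0.0f);           /* zero: e = 0, f = 0 */
    classify(1.0e-45f);       /* rounds to the smallest denormal */
    classify(1.0f / 0.0f);    /* division by zero yields infinity */
    return 0;
}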
Exponent
s | e | f
❑ The value represented is (1 - 2s) * (1 + f) * 2^(e - bias), with bias = 127 for single precision.
❑ Example: 1 01111100 11000000000000000000000
Here s = 1, e = 01111100 = 124, and f = .11, so the value is
(1 - 2) * (1.11 in binary) * 2^(124 - 127) = -1.75 * 2^-3 = -0.21875.
❑ To encode a value:
1. Convert the magnitude to binary.
2. Normalize it so that a single 1 lies to the left of the binary point, e.g.
101011011.101 * 2^0 = 1.01011011101 * 2^8.
3. The bits to the right of the binary point comprise the fractional field f.
4. The number of times you shifted gives the exponent. The field e should contain: exponent + 127.
5. Sign bit: 0 if positive, 1 if negative.
Example
❑ What is the single-precision representation of 639.6875?
639.6875 = 1001111111.1011 (binary)
= 1.0011111111011 * 2^9
s = 0
e = 9 + 127 = 136 = 10001000
f = 0011111111011 (followed by ten 0s to fill the 23-bit field)
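❑ A quick way to check this result is to print the bit fields of 639.6875f directly. A minimal C sketch (extracting the fields via memcpy, one common portable approach):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float x = 639.6875f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);          /* raw single-precision bit pattern */
    uint32_t s = bits >> 31;                 /* 1-bit sign */
    uint32_t e = (bits >> 23) & 0xFF;        /* 8-bit exponent */
    uint32_t f = bits & 0x7FFFFF;            /* 23-bit fraction */
    printf("s = %u\n", s);                   /* 0 */
    printf("e = %u\n", e);                   /* 136 = 10001000 = 9 + 127 */
    printf("f = 0x%06X\n", f);               /* 0x1FEC00 = 0011111111011 then ten 0s */
    return 0;
}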
Unnormalized (denormalized) values
❑ When e = 00000000 and f = X…X (not all zeros), the value is unnormalized:
number = (-1)^S * 2^-126 * (0.F), i.e. there is no implicit leading 1.
❑ The smallest positive unnormalized value is
0 00000000 00000000000000000000001 = 2^-126 * 2^-23 = 2^-149.
❑ The positive range therefore runs from 2^-149 (smallest unnormalized), through 2^-126 * (1 - 2^-23) (largest unnormalized) and 2^-126 (smallest normalized), up to (2 - 2^-23) * 2^127 (largest normalized).
❑ Results smaller than 2^-149 cause positive underflow; results larger than (2 - 2^-23) * 2^127 cause positive overflow.
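❑ These limits can be checked from <float.h>. A short C sketch (ldexpf(1.0f, -149) reproduces the smallest denormal without relying on the C11-only FLT_TRUE_MIN macro):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    printf("smallest denormal : %g  (2^-149)\n", (double)ldexpf(1.0f, -149));
    printf("smallest normal   : %g  (FLT_MIN = 2^-126)\n", (double)FLT_MIN);
    printf("largest normal    : %g  (FLT_MAX = (2 - 2^-23) * 2^127)\n", (double)FLT_MAX);
    return 0;
}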
In comparison
❑ The smallest and largest possible 32-bit integers in two's complement are only -2^31 and 2^31 - 1.
❑ How can we represent so many more values in the IEEE 754
format, even though we use the same number of bits as regular
integers?
(Representable values become more widely spaced as the magnitude grows, e.g. with gaps of 2, 4, 8, 16, …: floating point covers a far wider range, but only a sparse subset of it.)
❑ Thus, floating-point arithmetic has “issues”
– Small roundoff errors can accumulate with multiplications or
exponentiations, resulting in big errors.
– Rounding errors can invalidate many basic arithmetic principles, such as the associative law, (x + y) + z = x + (y + z); see the example after this list.
❑ The IEEE 754 standard guarantees that all machines will produce
the same results—but those results may not be mathematically
accurate!
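❑ A small C example of the associativity failure mentioned above (the values 1e20, -1e20, and 1 are chosen only for illustration):

#include <stdio.h>

int main(void) {
    float x = 1.0e20f, y = -1.0e20f, z = 1.0f;
    /* Adding z to y first loses z entirely, because 1 is far below
       the spacing between consecutive floats near 1e20. */
    printf("(x + y) + z = %f\n", (x + y) + z);   /* prints 1.000000 */
    printf("x + (y + z) = %f\n", x + (y + z));   /* prints 0.000000 */
    return 0;
}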
Limits of the IEEE representation
❑ Even some integers cannot be represented in the IEEE
format.
int x = 33554431;
float y = 33554431;
printf( "%d\n", x );
printf( "%f\n", y );
Output:
33554431
33554432.000000
❑ 33554431 = 2^25 - 1 needs 25 significant bits, but a single-precision mantissa holds only 24, so the value is rounded to 2^25 = 33554432.
❑ The decimal fraction 0.10 has no exact binary representation:
0.10 (decimal) = 0.0001100110011... (binary), with the pattern 0011 repeating forever.
❑ During the Gulf War in 1991, a U.S. Patriot missile failed to intercept
an Iraqi Scud missile, and 28 Americans were killed.
❑ A later study determined that the problem was caused by the
inaccuracy of the binary representation of 0.10.
– The Patriot incremented a counter once every 0.10 seconds.
– It multiplied the counter value by 0.10 to compute the actual
time.
❑ However, the (24-bit) binary representation of 0.10 actually
corresponds to 0.099999904632568359375, which is off by
0.000000095367431640625.
❑ This doesn’t seem like much, but after 100 hours the time ends up
being off by 0.34 seconds—enough time for a Scud to travel 500
meters!
❑ Professor Skeel wrote a short article about this.
Roundoff Error and the Patriot Missile. SIAM News, 25(4):11, July 1992.
Floating-point addition example
❑ To get a feel for floating-point operations, we’ll do an addition
example.
– To keep it simple, we’ll use base 10 scientific notation.
– Assume the mantissa has four digits, and the exponent
has one digit.
❑ An example for the addition:
Steps 1-2: the actual addition
1. Equalize the exponents.
The operand with the smaller exponent should be rewritten by increasing its exponent and shifting the point leftwards.
2. Add the mantissas.
    9.999 * 10^1
+   0.016 * 10^1      (1.610 * 10^-1 rewritten with exponent 1)
= 10.015 * 10^1
Steps 3-5: representing the result
3. Normalize the result if necessary.
This step may cause the point to shift either left or right, and the exponent to either increase or decrease.
Here 10.015 * 10^1 normalizes to 1.0015 * 10^2, which rounds to 1.002 * 10^2 with a four-digit mantissa (see the sketch below).
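A minimal C sketch of these steps using the slide's base-10, four-digit-mantissa format (the dec4 type, its field layout, and the half-up rounding are illustrative choices, and only positive operands are handled):

#include <stdio.h>

/* value = (mant / 1000) * 10^exp, i.e. mant holds the four mantissa digits d.ddd */
typedef struct { int mant; int exp; } dec4;

dec4 dec4_add(dec4 a, dec4 b) {
    /* Step 1: equalize exponents by shifting the smaller operand's point left. */
    while (a.exp < b.exp) { a.mant /= 10; a.exp++; }
    while (b.exp < a.exp) { b.mant /= 10; b.exp++; }
    /* Step 2: add the aligned mantissas. */
    dec4 r = { a.mant + b.mant, a.exp };
    /* Steps 3-4: normalize back to one leading digit, rounding half up. */
    while (r.mant >= 10000) { r.mant = (r.mant + 5) / 10; r.exp++; }
    return r;
}

int main(void) {
    dec4 x = { 9999, 1 };    /* 9.999 * 10^1 */
    dec4 y = { 1610, -1 };   /* 1.610 * 10^-1 */
    dec4 s = dec4_add(x, y);
    printf("%d.%03d * 10^%d\n", s.mant / 1000, s.mant % 1000, s.exp);  /* 1.002 * 10^2 */
    return 0;
}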
Example
❑ Calculate 0 1000 0001 110…0 plus 0 1000 0010 00110..0, where both operands are given in single-precision IEEE 754 representation.
Multiplication
❑ To multiply two floating-point values, first multiply their magnitudes
and add their exponents.
    9.999 * 10^1
*   1.610 * 10^-1
= 16.098 * 10^0
❑ You can then round and normalize the result, yielding 1.610 * 10^1.
❑ The sign of the product is the exclusive-or of the signs of the
operands.
– If two numbers have the same sign, their product is positive.
– If two numbers have different signs, the product is negative.
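❑ The sign rule can be checked directly on IEEE 754 bit patterns. A small C sketch (sign_bit is an illustrative helper, not a library function):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Return the sign bit (bit 31) of a single-precision value. */
static uint32_t sign_bit(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    return bits >> 31;
}

int main(void) {
    float a = 9.999e1f, b = -1.610e-1f;
    /* The product's sign bit equals the XOR of the operands' sign bits. */
    printf("sign(a)=%u sign(b)=%u sign(a*b)=%u xor=%u\n",
           sign_bit(a), sign_bit(b), sign_bit(a * b),
           sign_bit(a) ^ sign_bit(b));
    return 0;
}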
The history of floating-point computation
❑ In the past, each machine had its own implementation of
floating-point arithmetic hardware and/or software.
– It was impossible to write portable programs that would
produce the same results on different systems.
❑ It wasn’t until 1985 that the IEEE 754 standard was adopted.
– Having a standard at least ensures that all compliant
machines will produce the same outputs for the same
program.
Floating-point hardware
❑ When floating point was introduced in microprocessors, there weren't enough transistors on chip to implement it.
– You had to buy a floating-point co-processor (e.g., the Intel 8087).
❑ As a result, many ISA’s use separate registers for floating
point.
❑ Modern transistor budgets enable floating point to be on chip.
– Intel’s 486 was the first x86 with built-in floating point
(1989)
❑ Even the newest ISA’s have separate register files for floating
point.
– Makes sense from a floor-planning perspective.
DIVISION IN BINARY
DIVISION HARDWARE
DIVISION FLOW CHART
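The division hardware and flow chart referred to above implement a shift / trial-subtract / restore loop. A minimal C sketch of restoring division for unsigned 8-bit operands (the A and Q register names follow the usual textbook convention; restoring_divide is an illustrative function name):

#include <stdio.h>
#include <stdint.h>

/* Restoring division: shift the (A,Q) register pair left, try subtracting
   the divisor from A, restore A if the result is negative, otherwise set
   the new quotient bit in Q. After 8 iterations Q holds the quotient and
   A the remainder. */
void restoring_divide(uint8_t dividend, uint8_t divisor,
                      uint8_t *quotient, uint8_t *remainder) {
    uint16_t a = 0;            /* A register (partial remainder) */
    uint8_t  q = dividend;     /* Q register initially holds the dividend */
    for (int i = 0; i < 8; i++) {
        a = (uint16_t)((a << 1) | (q >> 7));   /* shift A,Q left one bit */
        q = (uint8_t)(q << 1);
        int trial = (int)a - (int)divisor;     /* trial subtraction */
        if (trial < 0) {
            /* restore: keep the old A, quotient bit stays 0 */
        } else {
            a = (uint16_t)trial;
            q |= 1;                            /* quotient bit becomes 1 */
        }
    }
    *quotient = q;
    *remainder = (uint8_t)a;
}

int main(void) {
    uint8_t q, r;
    restoring_divide(100, 7, &q, &r);
    printf("100 / 7 = %u remainder %u\n", q, r);   /* 14 remainder 2 */
    return 0;
}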
Floating point operations
• Addition: X + Y = (adjusted Xm + Ym) * 2^Ye, where Xe ≤ Ye
• Subtraction: X - Y = (adjusted Xm - Ym) * 2^Ye, where Xe ≤ Ye
• Multiplication: X * Y = (adjusted Xm * Ym) * 2^(Xe + Ye)
• Division: X / Y = (adjusted Xm / Ym) * 2^(Xe - Ye)
Algorithm FP Addition/Subtraction
• Let X and Y be the FP numbers involved in
addition/subtraction, where Ye > Xe.
• Basic steps:
• Compute Ye - Xe, a fixed-point subtraction.
• Shift the mantissa Xm right by (Ye - Xe) places so that both operands share the exponent Ye; if Xe were not the smaller exponent, the mantissa Ym would be adjusted instead.
• Add or subtract the aligned mantissas.
• Determine the sign of the result.
• Normalize the resulting value, if necessary (a software sketch follows below).
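• A minimal software sketch of these steps using the standard C library calls frexpf/ldexpf to expose the mantissa and exponent (note frexpf returns a mantissa in [0.5, 1) rather than the IEEE 1.f form, but the alignment and normalization steps are the same in spirit; fp_add_sketch is an illustrative name):

#include <stdio.h>
#include <math.h>

float fp_add_sketch(float x, float y) {
    int xe, ye;
    float xm = frexpf(x, &xe);       /* x = xm * 2^xe, 0.5 <= |xm| < 1 */
    float ym = frexpf(y, &ye);
    if (xe < ye) {                   /* shift the mantissa of the smaller operand */
        xm = ldexpf(xm, xe - ye);    /* xm * 2^(xe - ye) */
        xe = ye;
    } else {
        ym = ldexpf(ym, ye - xe);
        ye = xe;
    }
    return ldexpf(xm + ym, xe);      /* add mantissas, reattach the shared exponent */
}

int main(void) {
    printf("%f\n", fp_add_sketch(99.99f, 0.161f));   /* about 100.151 */
    return 0;
}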
Multiplication and Division Results in FP arithmetic
• FP arithmetic results will have to be produced in normalised form.
• Adjusting the bias of the resulting exponent is required. Because exponents are stored in biased form, adding them for multiplication (or subtracting them for division) produces a double-biased or wrongly biased exponent, which must be corrected. This is an extra step to be taken care of by FP arithmetic hardware (see the sketch below).
• When the result is zero, the resulting mantissa is all zeros but the exponent is not. A special step is needed to make the exponent bits zero as well.
• Overflow – is to be detected when the result is too large to be
represented in the FP format.
• Underflow – is to be detected when the result is too small to be
represented in the FP format. Overflow and underflow are
automatically detected by hardware, however, sometimes the
mantissa in such occurrence may remain in denormalised form.
• Handling the guard bits (extra bits carried during the computation) becomes an issue when the result is to be rounded rather than truncated.
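• A small C sketch of the double-bias problem for single precision: adding two biased exponent fields counts the bias (127) twice, so one bias must be subtracted for multiplication (and added back for division). The exp_field helper is illustrative.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Extract the 8-bit biased exponent field of a single-precision value. */
static uint32_t exp_field(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    return (bits >> 23) & 0xFF;
}

int main(void) {
    float a = 8.0f, b = 4.0f;                            /* true exponents 3 and 2 */
    uint32_t ea = exp_field(a), eb = exp_field(b);       /* 130 and 129 */
    printf("biased sum           : %u\n", ea + eb);          /* 259: double biased */
    printf("corrected (mul)      : %u\n", ea + eb - 127);    /* 132 = 127 + 5 */
    printf("exponent field of a*b: %u\n", exp_field(a * b)); /* 132, since 8*4 = 2^5 */
    return 0;
}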
DECIMAL ADDER
What is a BCD Adder ?
• A decimal adder and a BCD adder are the same thing.
• A BCD adder, also known as a Binary-Coded Decimal
adder, is a digital circuit that performs addition
operations on Binary-Coded Decimal numbers. BCD is a
numerical representation that uses a four-bit binary
code to represent each decimal digit from 0 to 9. BCD
encoding allows for direct conversion between binary
and decimal representations, making it useful for
arithmetic operations on decimal numbers.
• The purpose of a BCD adder is to add two BCD
numbers together and produce a BCD result. It follows
specific rules to ensure accurate decimal results. The
BCD adder circuit typically consists of multiple stages,
each representing one decimal digit, and utilizes binary
addition circuits combined with BCD-specific rules.
Working of Decimal Adder
• We take a 4-bit Binary-Adder, which takes addend and
augend bits as an input with an input carry 'Carry in'.
• The Binary-Adder produces five outputs, i.e., Z8, Z4, Z2, Z1,
and an output carry K.
• With the help of the output carry K and Z8, Z4, Z2, Z1
outputs, the logical circuit is designed to identify the Cout
• Cout = K + Z8*Z4 + Z8*Z2
• The Z8, Z4, Z2, and Z1 outputs of the binary adder are
passed into the 2nd 4-bit binary adder as an Augend.
• The addend input of the 2nd 4-bit binary adder is designed so that its 1st and 4th bits are 0 and its 2nd and 3rd bits are equal to Cout. When Cout is 0, the addend is 0000, which produces the same result as the 1st 4-bit binary adder. But when Cout is 1, the addend is 0110 (decimal 6), which is added to the augend to obtain a valid BCD digit.
• Example: 1001 + 1000
• First, add both numbers using a 4-bit binary adder with the input carry set to 0.
• The binary adder produces the sum 0001 with output carry K = 1.
• Then, find the Cout value to determine whether the produced code is a valid BCD digit, using the expression Cout = K + Z8.Z4 + Z8.Z2.
K=1
Z8 = 0
Z4 = 0
Z2 = 0
Cout = 1+0*0+0*0
Cout = 1+0+0
Cout = 1
• The value of Cout is 1, which indicates that the produced code is not a valid BCD digit. Therefore, add 0110 to the output of the 1st 4-bit binary adder:
= 0001 + 0110
= 0111
• The BCD result, including the carry output, is:
BCD = Cout followed by the corrected sum = 1 0111 (i.e. decimal 17)
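• A minimal C sketch of this one-digit BCD adder, reproducing the 1001 + 1000 example above (bcd_add_digit is an illustrative helper name, not a library function):

#include <stdio.h>

/* One-digit BCD addition: a 4-bit binary add, the Cout = K + Z8.Z4 + Z8.Z2
   check, and a +0110 correction when the result is not a valid BCD digit. */
unsigned bcd_add_digit(unsigned a, unsigned b, unsigned cin, unsigned *cout) {
    unsigned z  = a + b + cin;           /* first 4-bit binary adder */
    unsigned k  = (z >> 4) & 1;          /* carry K out of the binary adder */
    unsigned z8 = (z >> 3) & 1, z4 = (z >> 2) & 1, z2 = (z >> 1) & 1;
    *cout = k | (z8 & z4) | (z8 & z2);   /* Cout = K + Z8.Z4 + Z8.Z2 */
    if (*cout)
        z += 6;                          /* second adder adds 0110 */
    return z & 0xF;                      /* corrected BCD sum digit */
}

int main(void) {
    unsigned cout;
    unsigned sum = bcd_add_digit(0x9, 0x8, 0, &cout);   /* 1001 + 1000 */
    printf("carry = %u, sum = %X  (BCD 1 0111 = decimal 17)\n", cout, sum);
    return 0;
}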
Algorithm for Decimal Adder
• BCD Addition of Given Decimal Number
• BCD addition of a given decimal number involves performing
addition operations on the individual BCD digits of the number.
• Step 1: Convert the decimal number into BCD representation:
• Take each decimal digit and convert it into its BCD equivalent, which
is a four-bit binary code.
• For example, the decimal number 456 would be represented as
0100 0101 0110 in BCD.
• Step 2: Align the BCD numbers for addition:
Ensure that the BCD numbers to be added have the same number
of digits.
If necessary, pad the shorter BCD number with leading zeros to
match the length of the longer BCD number.
• Step 3: Perform binary addition on the BCD digits:
• Start from the rightmost digit and add the corresponding BCD digits
of the two numbers.
• If the sum of the BCD digits is less than or equal to 9 (0000 to 1001
in binary), it represents a valid BCD digit.
• If the sum is greater than 9 (1010 to 1111 in binary), it indicates a
carry-out, and a correction is needed.
• Step 4: Handle carry-out and correction:
• When a digit sum exceeds 9, add 0110 (decimal 6) to that digit sum to obtain the correct BCD digit, and propagate the resulting carry to the next higher-order digit's addition. Continue until all digits have been processed.
• Step 5: Obtain the final BCD result:
• Once all the BCD digits have been processed, the resulting BCD
numbers represent the decimal sum of the original BCD numbers.
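• A minimal C sketch of these steps for packed BCD operands, where each digit occupies four bits (e.g., 0x456 stands for decimal 456). The packing convention and the bcd_add name are illustrative.

#include <stdio.h>

/* Add two packed-BCD numbers digit by digit, right to left, applying the
   add-6 correction whenever a digit sum exceeds 9 (handles up to 8 digits). */
unsigned bcd_add(unsigned a, unsigned b) {
    unsigned result = 0, carry = 0;
    for (int shift = 0; shift < 32; shift += 4) {
        unsigned da = (a >> shift) & 0xF;       /* current digit of a */
        unsigned db = (b >> shift) & 0xF;       /* current digit of b */
        unsigned s  = da + db + carry;          /* binary sum of the digits */
        carry = (s > 9);                        /* invalid BCD digit -> carry out */
        if (carry)
            s += 6;                             /* correction step */
        result |= (s & 0xF) << shift;
    }
    return result;
}

int main(void) {
    printf("%X\n", bcd_add(0x456, 0x789));      /* prints 1245, i.e. 456 + 789 */
    return 0;
}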
Decimal Arithmetic Operation
• Incrementing or decrementing a register works the same way for binary and BCD, except for the number of states: a binary counter goes through 16 states, from 0000 to 1111.
• The BCD counter goes through only 10 states, from 0000 to 1001, and then back to 0000 (see the sketch after this list).
• A decimal shift right or left is preceded by the letter d to indicate a shift over the four bits that hold each decimal digit.
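• A minimal C sketch of a one-digit BCD counter that counts 0000 through 1001 and wraps back to 0000, raising a carry into the next decade (bcd_increment is an illustrative helper name):

#include <stdio.h>

/* Increment a single BCD digit; after 1001 (9) the counter rolls over to
   0000 and signals a carry into the next decimal position. */
unsigned bcd_increment(unsigned digit, unsigned *carry) {
    digit = (digit + 1) & 0xF;
    *carry = (digit == 10);
    if (*carry)
        digit = 0;
    return digit;
}

int main(void) {
    unsigned d = 0, c;
    for (int i = 0; i < 12; i++) {      /* watch the wrap from 9 back to 0 */
        printf("%u ", d);
        d = bcd_increment(d, &c);
    }
    printf("\n");                       /* prints: 0 1 2 3 4 5 6 7 8 9 0 1 */
    return 0;
}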
Addition and Subtraction