21CS403Notes 4

The document outlines the teaching plan for a lecture on floating point operations, parallelism, and computer arithmetic for a B.Tech course in Computer Organization and Architecture. It details the objectives, intended learning outcomes, teaching methodologies, and key concepts such as floating-point arithmetic, parallel processing, and sub word parallelism. Additionally, it includes sample questions and a student summary section to reinforce learning outcomes.


GMR Institute of Technology

GMRIT/ADM/F-44
Rajam, AP REV.: 00
(An Autonomous Institution Affiliated to JNTUGV, AP)

Cohesive Teaching – Learning Practices (CTLP)

Class 4th Sem. – B. Tech Department: CSE/AIML/AIDS


Course Computer Organization and Architecture Course Code 21CS403
Prepared by Dr. K. Srividya, Ms. Santhoshini Sahu, Mrs. A Vineela, Mrs. A Bhavani, Mr. B. M.
Sreenivasa Rao
Lecture Topic Floating point operations, Parallelism and computer arithmetic, sub word
parallelism
Course Outcome (s) CO3 Program Outcome (s) PO1, PO12
Duration 50 Min Lecture 20-22 Unit II
Pre-requisite (s) Micro operations

1. Objective

❖ Understand floating point arithmetic operations.


❖ Understand sub word parallelism.

2. Intended Learning Outcomes (ILOs)

At the end of this session the students will be able to:

1. Summarize the arithmetic operations for floating point representation.


2. Summarize sub word parallelism.

3. 2D Mapping of ILOs with Knowledge Dimension and Cognitive Learning Levels of RBT

Cognitive Learning Levels


Knowledge
Remember Understand Apply Analyse Evaluate Create
Dimension
Factual
Conceptual A,B, C
Procedural
Meta Cognitive

4. Teaching Methodology

❖ Power Point Presentation, Chalk Talk, visual presentation

5. Evocation

6. Deliverables

Lecture -20: Floating point operations


Floating-Point Operations:
Many high-level programming languages have a facility for specifying floating-point numbers. The
most common way is to specify them by a real declaration statement as opposed to fixed-point
numbers, which are specified by an integer declaration statement. Any computer that has a compiler
for such high-level programming language must have a provision for handling floating-point
arithmetic operations. The operations are quite often included in the internal hardware. If no hardware
is available for the operations, the compiler must be designed with a package of floating-point
software subroutines. Although the hardware method is more expensive, it is so much more efficient
than the software method that floating-point hardware is included in most computers and is omitted
only in very small ones.
Basic Considerations
A floating-point number in computer registers consists of two parts: a mantissa m and an exponent e.
The two parts represent the number obtained by multiplying m by a radix r raised to the power e; thus

m × r^e

The mantissa may be a fraction or an integer. The location of the radix point and the value of the
radix r are assumed and are not included in the registers. For example, assume a fraction
representation and a radix of 10.
The decimal number 537.25 is represented in a register with m = 53725 and e = 3 and is interpreted
to represent the floating-point number

+.53725 × 10^+3

A floating-point number is normalized if the most significant digit of the mantissa is nonzero. In this
way the mantissa contains the maximum possible number of significant digits. A zero cannot be
normalized because it does not have a nonzero digit; it is represented in floating-point by all 0's in
both the mantissa and the exponent.
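The decimal example above can be sketched in a few lines of Python. The helper names (`normalize`, `value`) are illustrative, not from any standard library, and a fractional mantissa with radix 10 is assumed:

```python
def normalize(m, e, r=10):
    """Shift a fractional mantissa until its most significant digit is nonzero."""
    if m == 0:
        return 0.0, 0              # zero cannot be normalized
    while abs(m) < 1 / r:          # leading digit of the fraction is zero
        m *= r
        e -= 1
    while abs(m) >= 1.0:           # radix point has drifted right
        m /= r
        e += 1
    return m, e

def value(m, e, r=10):
    """Interpret the (mantissa, exponent) pair as m * r**e."""
    return m * r ** e

# 537.25 stored as the fraction .53725 with exponent 3
m, e = normalize(0.53725, 3)
print(value(m, e))                 # approximately 537.25
```

Because Python floats are binary, the decimal arithmetic here is only approximate; the sketch is meant to show the roles of m, e, and r, not to model register hardware.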
Addition and Subtraction:
During addition or subtraction, the two floating-point operands are in AC and BR. The sum or
difference is formed in the AC. The algorithm can be divided into four consecutive parts:
1. Check for zeros.
2. Align the mantissas.
3. Add or subtract the mantissas.
4. Normalize the result.
A floating-point number that is zero cannot be normalized. If this number is used during the
computation, the result may also be zero. Instead of checking for zeros during the normalization
process we check for zeros at the beginning and terminate the process if necessary. The alignment of
the mantissas must be carried out prior to their operation. After the mantissas are added or subtracted,
the result may be unnormalized. The normalization procedure ensures that the result is normalized
prior to its transfer to memory.
The flowchart for adding or subtracting two floating-point binary numbers is shown in Figure 1.

Figure 1: Addition and subtraction of floating-point numbers
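The four steps above can be sketched in Python for decimal (mantissa, exponent) pairs. This is a hedged illustration of the algorithm's control flow, not of the register-level hardware; the names `fp_add`, `ma`, `ea`, and so on are invented for the sketch:

```python
def fp_add(a, b, r=10):
    """Add two floating-point numbers given as (fractional mantissa, exponent) pairs."""
    (ma, ea), (mb, eb) = a, b
    # 1. Check for zeros: a zero operand cannot be aligned or normalized.
    if ma == 0:
        return b
    if mb == 0:
        return a
    # 2. Align the mantissas: shift the operand with the smaller exponent right.
    while ea < eb:
        ma /= r; ea += 1
    while eb < ea:
        mb /= r; eb += 1
    # 3. Add the mantissas (subtraction is addition of a negated mantissa).
    m, e = ma + mb, ea
    # 4. Normalize the result.
    if m == 0:
        return 0.0, 0
    while abs(m) >= 1.0:      # fraction overflow: shift right
        m /= r; e += 1
    while abs(m) < 1 / r:     # leading zeros after cancellation: shift left
        m *= r; e -= 1
    return m, e

# 537.25 + 5.0 = 542.25
m, e = fp_add((0.53725, 3), (0.5, 1))
```

Subtraction needs no separate routine in this sketch: negating the mantissa of the second operand turns the addition into a subtraction, exactly as the flowchart's shared add/subtract path suggests.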

Multiplication:
The multiplication of two floating-point numbers requires that we multiply the mantissas and add the
exponents. No comparison of exponents or alignment of mantissas is necessary. The multiplication
of the mantissas is performed in the same way as in fixed-point to provide a double-precision product.
The double-precision answer is used in fixed-point numbers to increase the accuracy of the product.
In floating-point, the range of a single-precision mantissa combined with the exponent is usually
accurate enough so that only single-precision numbers are maintained. Thus the most significant half
of the mantissa product, taken together with the exponent, forms the single-precision
floating-point product. The multiplication algorithm can be subdivided into four parts:
1. Check for zeros.
2. Add the exponents.
3. Multiply the mantissas.
4. Normalize the product.
The flowchart for floating-point multiplication is shown in Figure 2.

Figure 2: Multiplication of floating-point numbers
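The same decimal (mantissa, exponent) sketch extends to multiplication. The name `fp_mul` is illustrative, and normalized fractional mantissas in [0.1, 1) are assumed, so at most one normalizing shift is ever needed:

```python
def fp_mul(a, b, r=10):
    """Multiply two (fractional mantissa, exponent) pairs."""
    (ma, ea), (mb, eb) = a, b
    # 1. Check for zeros: anything times zero is zero.
    if ma == 0 or mb == 0:
        return 0.0, 0
    # 2. Add the exponents.
    e = ea + eb
    # 3. Multiply the mantissas.
    m = ma * mb
    # 4. Normalize: two fractions in [0.1, 1) give a product in [0.01, 1),
    #    so at most one left shift is required.
    if abs(m) < 1 / r:
        m *= r
        e -= 1
    return m, e

# 5.0 * 5.0 = 25.0 -> (0.25, 2)
print(fp_mul((0.5, 1), (0.5, 1)))
```

Note that no exponent comparison or mantissa alignment appears here, matching the text: those steps belong only to addition and subtraction.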

Division:
Floating-point division requires that the exponents be subtracted and the mantissas divided. The
mantissa division is done as in fixed-point except that the dividend has a single-precision mantissa
that is placed in the AC. Remember that the mantissa dividend is a fraction and not an integer. For
integer representation, a single-precision dividend must be placed in register Q and register A must
be cleared. The zeros in A are to the left of the binary point and have no significance. In fraction
representation, a single-precision dividend is placed in register A and register Q is cleared. The zeros
in Q are to the right of the binary point and have no significance. The division of two normalized
floating-point numbers will always result in a normalized quotient, provided that the dividend is
aligned before the division. Therefore, unlike the other operations, the quotient obtained after
the division does not require normalization. The division algorithm can be subdivided into five parts:
1. Check for zeros.
2. Initialize registers and evaluate the sign.
3. Align the dividend.
4. Subtract the exponents.
5. Divide the mantissas.
The flowchart for floating-point division is shown in Figure 3.
Figure 3: Division of floating-point numbers
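Division fits the same decimal (mantissa, exponent) sketch. Note the dividend-alignment step, which is what guarantees the quotient fraction comes out already normalized; the names are again illustrative:

```python
def fp_div(a, b, r=10):
    """Divide two (fractional mantissa, exponent) pairs: a / b."""
    (ma, ea), (mb, eb) = a, b
    # 1. Check for zeros: a zero divisor is the divide-overflow case.
    if mb == 0:
        raise ZeroDivisionError("divide overflow")
    if ma == 0:
        return 0.0, 0
    # 2./3. Align the dividend: if |ma| >= |mb| the quotient fraction would
    #       reach 1.0 or more, so shift the dividend right one digit first.
    if abs(ma) >= abs(mb):
        ma /= r
        ea += 1
    # 4. Subtract the exponents.
    e = ea - eb
    # 5. Divide the mantissas; with both operands normalized and the dividend
    #    aligned, the quotient needs no further normalization.
    m = ma / mb
    return m, e

# 5.0 / 2.5 = 2.0
m, e = fp_div((0.5, 1), (0.25, 1))
```

The alignment step plays the same role as clearing register Q in the fraction representation described above: it keeps the quotient a proper fraction.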

Lecture -21: Parallelism and computer arithmetic:

Parallel Processing:
Parallel processing is a class of techniques that enables a computer system to carry out several
data-processing tasks simultaneously in order to increase its computational speed and achieve faster
execution times. For instance, while one instruction is being processed in the ALU of the CPU, the
next instruction can be read from memory. The primary purpose of parallel processing is to enhance
the computer's processing capability and increase its throughput. Parallel processing can be
achieved by providing a multiplicity of functional units that perform identical or different operations
simultaneously, with the data distributed among these units.
The following diagram shows one possible way of separating the execution unit into eight functional
units operating in parallel.
The operation performed in each functional unit is indicated in each block of the diagram:

The adder and integer multiplier perform arithmetic operations on integer numbers. The
floating-point operations are separated into three circuits operating in parallel. The logic, shift, and
increment operations can be performed concurrently on different data. All units are independent of
each other, so one number can be shifted while another is being incremented.

Lecture -22: Sub word parallelism

A subword is a lower-precision unit of data contained within a word. In subword parallelism, multiple
subwords are packed into a word, and the processor then operates on whole words. With appropriate
subword boundaries, this technique results in parallel processing of the subwords. Since the same
instruction is applied to all subwords within the word, this is a form of SIMD (Single Instruction,
Multiple Data) processing.
It is possible to apply subword parallelism to noncontiguous subwords of different sizes within a
word. In practice, however, the implementation is simplest when the subwords are the same size and
contiguous within a word, and the data-parallel programs that benefit from subword parallelism tend
to process data of the same size.
For example, if the word size is 64 bits, the subword sizes can be 8, 16, and 32 bits. An instruction
can then operate on eight 8-bit subwords, four 16-bit subwords, two 32-bit subwords, or one 64-bit
word in parallel.
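A minimal sketch of this idea in Python, treating a 64-bit word as eight 8-bit lanes. Real SIMD hardware simply cuts the carry chain between lanes; here that is emulated by masking each lane (the function name `packed_add8` is invented for the illustration):

```python
def packed_add8(x, y):
    """Add two 64-bit words lane-wise as eight independent 8-bit subwords."""
    result = 0
    for lane in range(8):
        a = (x >> (8 * lane)) & 0xFF     # extract the lane from each word
        b = (y >> (8 * lane)) & 0xFF
        s = (a + b) & 0xFF               # wrap within the 8-bit subword:
        result |= s << (8 * lane)        # no carry leaks into the next lane
    return result

# Eight 8-bit additions performed by "one" word-wide operation
print(hex(packed_add8(0x0102030405060708, 0x1010101010101010)))
# -> 0x1112131415161718
```

Each loop iteration stands in for what SIMD hardware does across all lanes at once; the point of subword parallelism is that the hardware pays for only a single word-wide add.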
Subword parallelism is an efficient and flexible solution for media processing, because these
algorithms exhibit a great deal of data parallelism on lower-precision data. The basic components of
multimedia objects are usually simple integers with 8, 12, or 16 bits of precision. Subword
parallelism is also useful for computations unrelated to multimedia that exhibit data parallelism on
lower-precision data.
One key advantage of subword parallelism is that it lets general-purpose processors exploit wider
word sizes even when they are not processing high-precision data: the processor can extract more
subword parallelism from lower-precision data instead of wasting much of its word-oriented
datapaths and registers. Subword parallelism is an efficient organization for media processing,
whether on a general-purpose microprocessor or a specially designed media processor. Graphics and
audio applications can take advantage of performing simultaneous operations on short vectors.
Keywords

❖ Fixed point representation.


❖ Floating point representation

7. Sample Questions

Remember:

1. Define overflow.
2. Draw flowchart of floating point division.

Understand:

1. How are floating-point operations performed in a computer?


2. How is the divide-overflow problem handled in integer and floating-point division?

8. Stimulating Question (s)


1. -

9. Mind Map
11. Student Summary

At the end of this session, the facilitator (Teacher) shall randomly pick-up few students to
summarize the deliverables.

12. Reading Materials

13. Scope for Mini Project

NIL
