
GOVERNMENT ARTS COLLEGE, CHIDAMBARAM
(Affiliated to Annamalai University)

COMPUTER SYSTEM ARCHITECTURE

Study Material

For

III-CS
BSc-Computer Science

Prepared By

Prof. P. Sankar, M.C.A., M.Phil., (Ph.D.)
Faculty of Computer Science & Application

PG DEPARTMENT OF COMPUTER SCIENCE

2024
COMPUTER SYSTEM ARCHITECTURE

Unit – I: (12 Hours)
Digital Logic Circuits: Combinational Circuits – Flip-Flops. Data Representation: Data Types
– Complements – Fixed and Floating Point Representation – Other Binary Codes – Error
Detection Codes.
Unit – II: (12 Hours)
Register Transfer and Microoperations: Register Transfer Language – Register Transfer –
Arithmetic Microoperations – Logic Microoperations – Shift Microoperations – Arithmetic
Logic Shift Unit. Basic Computer Organization and Design: Instruction Codes – Computer
Registers – Computer Instructions – Timing and Control – Instruction Cycle – Memory
Reference Instructions – Input/Output and Interrupt.
Unit – III: (12 Hours)
Central Processing Unit : General Register Organization – Stack Organization – Instruction
Formats – Addressing Modes – Data Transfer and Manipulation – Program Control –
Reduced Instruction Set Computer.
Unit – IV: (12 Hours)
Computer Arithmetic: Addition and Subtraction - Multiplication Algorithms – Division
Algorithms. Pipeline and Vector Processing: Parallel processing – Pipelining – Arithmetic
pipeline – Instruction pipeline – Vector Processing – Array Processor.
Unit – V: (12 Hours)
Memory Organization: Memory Hierarchy – Main Memory – Auxiliary Memory –
Associative Memory – Cache Memory – Virtual Memory – Memory Management Hardware.
Multiprocessors: Characteristics of Multiprocessors – Interconnection Structures –
Interprocessor Arbitration
UNIT-IV
COMPUTER ARITHMETIC:

Computer arithmetic refers to the set of rules and methods used by computers to perform
mathematical operations, such as addition, subtraction, multiplication, and division. This
includes:

1. Binary arithmetic: The use of binary numbers (0s and 1s) to represent and perform
arithmetic operations.

2. Fixed-point arithmetic: A method of representing numbers using a fixed number of digits after the radix point (see the sketch after this list).

3. Floating-point arithmetic: A method of representing numbers using a fixed number of significant digits (the mantissa) and a scaling factor (the exponent).

4. Integer arithmetic: The use of integers (whole numbers) to perform arithmetic operations.

5. Modular arithmetic: A system of arithmetic where numbers "wrap around" after reaching a
certain value (modulus).

6. Trigonometric and transcendental functions: Approximation methods for calculating trigonometric and transcendental functions such as sin, cos, and exp.
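
As a minimal illustration of two of these ideas, the Python sketch below stores a fixed-point value as a scaled integer and shows modular wrap-around. The scaling factor and modulus used here are arbitrary choices for the example, not part of any standard.

    # Fixed-point: store 3.75 as an integer scaled by 2^8 (8 fractional bits).
    SCALE = 1 << 8                  # 256; an arbitrary precision choice
    a = int(3.75 * SCALE)           # 960  -> represents 3.75
    b = int(1.25 * SCALE)           # 320  -> represents 1.25

    total = a + b                   # addition works directly on scaled values
    print(total / SCALE)            # 5.0

    product = (a * b) // SCALE      # multiplication needs one rescaling step
    print(product / SCALE)          # 4.6875 (= 3.75 * 1.25)

    # Modular arithmetic: values wrap around at the modulus, just as an
    # 8-bit register wraps at 256.
    MODULUS = 256
    print((250 + 10) % MODULUS)     # 4, not 260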

Computer arithmetic is crucial for various applications, including:

1. Scientific simulations

2. Cryptography

3. Graphics rendering

4. Machine learning

5. Financial calculations
What Is Parallel Processing?
Parallel processing is a computing technique in which multiple streams of
calculations or data processing tasks run simultaneously across several
central processing units (CPUs).

Parallel processing uses two or more processors or CPUs simultaneously to handle
different components of a single activity. A system can cut a program's execution
time by dividing the task's many parts among several processors. Multi-core
processors, frequently found in modern computers, and any system with more than
one CPU are capable of performing parallel processing.

When processing is done in parallel, a big job is broken down into several
smaller jobs better suited to the number, size, and type of the available
processing units. After the task is divided, each processor works on its part
independently, using software to coordinate with the others and report on the
progress of its task.

Once all the program parts have been processed, the results are combined into
the fully processed output. This holds whether or not the number of tasks
matched the number of processors, and whether they finished simultaneously or
one after the other.

There are two types of parallel processes: fine-grained and coarse-grained.
In fine-grained parallelism, tasks communicate with one another many times per
second to deliver results in real time or very close to it. The slower pace of
coarse-grained parallel processes results from their infrequent communication.

A parallel processing system can process data simultaneously to complete tasks
more quickly. For instance, the system can fetch the next instruction from
memory while the CPU's arithmetic logic unit (ALU) processes the current
instruction. The main goal of parallel processing is to boost a computer's
processing power and increase throughput, that is, the volume of work that can
be done in a given time. A parallel processing system can be built from many
functional units that carry out similar or dissimilar activities concurrently.
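
A minimal sketch of coarse-grained parallelism using Python's standard
multiprocessing module: a large job (summing squares) is divided into chunks
that worker processes handle independently, and the partial results are
combined at the end. The worker count and chunking scheme are arbitrary
choices for the example.

    from multiprocessing import Pool

    def sum_of_squares(chunk):
        # Each worker processes its own slice of the data; there is no
        # communication until the results are gathered at the end.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]   # divide the task four ways
        with Pool(processes=4) as pool:
            partial_sums = pool.map(sum_of_squares, chunks)
        print(sum(partial_sums))                  # combine the partial results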

Parallel processing is useful for:

1. Scientific simulations

2. Data analytics

3. Machine learning

4. Image and video processing

5. Cryptography

Types of Parallel Processing


There are several types of parallel processing, such as SISD, SIMD, MISD,
MIMD, and MPP, of which SIMD is probably the most popular. Single instruction,
multiple data (SIMD) is a parallel processing type in which a computer has two
or more processing elements that all follow the same instruction stream but
operate on distinct data elements.

1. SISD (Single Instruction, Single Data): One instruction is executed at a
time, and it operates on a single data element. This is the traditional von
Neumann architecture.

2. SIMD (Single Instruction, Multiple Data): One instruction is executed
simultaneously on multiple data elements. This is commonly used in vector
processing, graphics processing units (GPUs), and digital signal processing
(see the sketch after this list).

3. MISD (Multiple Instruction, Single Data): Multiple instructions are executed
simultaneously, but they all operate on the same data element. This
architecture is less common, but can be found in some specialized systems.

4. MIMD (Multiple Instruction, Multiple Data): Multiple instructions are
executed simultaneously, and each instruction operates on a different data
element. This is the most flexible and powerful architecture, used in
multi-core processors, clusters, and grids.

5. MPP (Massively Parallel Processing): This is not part of Flynn's original
taxonomy, but it refers to a system with a large number of processing units,
often in a SIMD or MIMD configuration, working together to solve complex
problems.
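
As a rough illustration of the SIMD idea, the sketch below applies one
operation to a whole array of data elements at once. It assumes NumPy is
installed (NumPy's vectorized operations are typically backed by SIMD hardware
instructions); the array contents are arbitrary examples.

    import numpy as np

    a = np.array([1.0, 2.0, 3.0, 4.0])
    b = np.array([10.0, 20.0, 30.0, 40.0])

    # SISD flavour: one instruction, one data element per step.
    sisd_result = [float(x + y) for x, y in zip(a, b)]

    # SIMD flavour: one vectorized operation over all elements at once.
    simd_result = a + b

    print(sisd_result)    # [11.0, 22.0, 33.0, 44.0]
    print(simd_result)    # [11. 22. 33. 44.]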

PIPELINING:

Pipelining is a technique for breaking down a sequential process into various sub-operations
and executing each sub-operation in its own dedicated segment that runs in parallel with all
other segments.

The most significant feature of the pipeline technique is that several
computations can be in progress in different segments at the same time.

By associating a register with every segment in the pipeline, the process of computation can
be made overlapping. The registers provide separation and isolation among every segment,
allowing each to work on different data at the same time.

The structure of a pipeline organisation can be illustrated as an input
register for each segment followed by a combinational circuit. To better
understand this organisation, consider an example of a combined multiplication
and addition operation.
A stream of numbers is used to perform the combined multiplication and
addition operation:

Ai * Bi + Ci    for i = 1, 2, 3, …, 7

The operation to be done on the numbers is broken down into sub-operations,
each of which is implemented in a segment of the pipeline. The sub-operations
performed in each segment are:

Segment 1: Input Ai and Bi
R1 ← Ai, R2 ← Bi

Segment 2: Multiply, and input Ci
R3 ← R1 * R2, R4 ← Ci

Segment 3: Add Ci to the product
R5 ← R3 + R4
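
The clocked behaviour of these three segments can be imitated in software. The
short Python sketch below is a minimal simulation, not from the textbook: each
loop iteration plays the role of one clock pulse, the variables R1 to R5 stand
for the pipeline registers, and the operand values are arbitrary.

    A = [1, 2, 3, 4, 5, 6, 7]
    B = [7, 6, 5, 4, 3, 2, 1]
    C = [10, 10, 10, 10, 10, 10, 10]

    R1 = R2 = R3 = R4 = R5 = None   # registers are empty before the first pulse
    results = []

    for tick in range(len(A) + 2):  # two extra pulses drain the pipeline
        # Segment 3: add Ci to the product latched two pulses earlier.
        if R3 is not None:
            R5 = R3 + R4
            results.append(R5)
        # Segment 2: multiply, and input Ci (operands latched one pulse earlier).
        if R1 is not None:
            R3, R4 = R1 * R2, C[tick - 1]
        else:
            R3 = R4 = None
        # Segment 1: input Ai and Bi.
        if tick < len(A):
            R1, R2 = A[tick], B[tick]
        else:
            R1 = R2 = None

    print(results)   # Ai*Bi + Ci for every i; three operand sets overlap in flight
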
The output of each segment's combinational circuit is latched into the input
register of the next segment; the register R3, for example, serves as one of
the input registers for the combinational adder circuit. The pipeline
organisation, in general, is applicable to two areas of computer design:

1. Instruction Pipeline – An instruction pipeline receives sequential
instructions from memory while earlier instructions are being executed in
other segments. Pipeline processing can occur in both the data stream and the
instruction stream.

2. Arithmetic Pipeline – An arithmetic pipeline divides a given arithmetic
problem into subproblems that can be executed in different pipeline segments.
It is used for multiplication, floating-point operations, and a variety of
other calculations.

VECTOR PROCESSING:

Vector processing is a computing method that can process numerous data
elements at once. It operates on every element of an entire vector in one
operation, in parallel, to avoid the overhead of a processing loop. However,
the simultaneous operations must be independent of one another for vector
processing to be effective.

In traditional scalar processing, instructions operate on a single data
element at a time. In contrast, vector processing operates on a vector (a
one-dimensional array) of data elements, performing the same operation on all
elements in parallel.
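
As a minimal sketch of this contrast, the code below performs a classic vector
operation (SAXPY: y = a*x + y), first element by element in scalar style and
then as one whole-vector expression. It assumes NumPy is installed; the vector
length and values are arbitrary examples.

    import numpy as np

    alpha = 2.0
    x = np.arange(8, dtype=np.float64)    # [0, 1, ..., 7]
    y = np.ones(8, dtype=np.float64)

    # Scalar processing: one element per operation, with loop overhead.
    for i in range(len(x)):
        y[i] = alpha * x[i] + y[i]

    # Vector processing: the same operation applied to all elements at once;
    # the elementwise computations are independent, so they can run in parallel.
    y2 = alpha * x + np.ones(8)

    print(y)     # [ 1.  3.  5.  7.  9. 11. 13. 15.]
    print(y2)    # identical result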

Vector processing is particularly useful for:

1. Scientific simulations (e.g., weather forecasting, fluid dynamics)
2. Signal processing (e.g., image, audio, and video processing)
3. Machine learning (e.g., neural networks, deep learning)
4. Data analytics (e.g., data mining, statistical analysis)
5. Graphics rendering (e.g., 3D graphics, game engines)

Benefits of vector processing include:

1. Improved performance (increased throughput)
2. Reduced processing time
3. Increased parallelism
4. Efficient use of processing resources
