
Chapter 6: Functional Organization

6.1 Implementation of Simple Data Paths


In computer architecture, a data path is a collection of functional units, such as registers, ALUs
(Arithmetic Logic Units), and multiplexers, interconnected to facilitate the movement and
manipulation of data within a computer system. The implementation of simple data paths involves
designing and organizing these components to perform basic operations.
6.1.1 Components of Simple Data Paths

1. Registers:

 Registers are small, fast storage locations within the CPU used for temporary data storage.
 They store operands, intermediate results, and control information during program
execution.
 Common registers include the program counter (PC), the instruction register (IR), and
general-purpose registers.

2. ALU (Arithmetic Logic Unit):

 The ALU performs arithmetic and logic operations on data.


 Arithmetic operations include addition, subtraction, multiplication, and division.
 Logic operations involve bitwise operations like AND, OR, and NOT.

3. Multiplexers:

 Multiplexers (muxes) select one of several input data sources and route the chosen input
to the output.
 They are crucial for controlling the flow of data within the data path.

4. Buses (Interconnect):

 Buses connect the components above, allowing data to flow from one part of the CPU
to another.
 They include data buses, which carry operands and results, and address buses, which
carry memory addresses.
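The components above can be sketched in code. The following Python model is purely illustrative (the register names, opcodes, and mux widths are invented for this example): registers become a small dictionary, the ALU a function over two operands, and a multiplexer a selector over its inputs.

```python
# A minimal, hypothetical model of simple data path components.

def alu(op, a, b):
    """Perform an arithmetic or logic operation on two operands."""
    ops = {
        "ADD": lambda: a + b,
        "SUB": lambda: a - b,
        "AND": lambda: a & b,
        "OR":  lambda: a | b,
    }
    return ops[op]()

def mux(select, inputs):
    """Route one of several inputs to the output based on a select signal."""
    return inputs[select]

# Registers: PC, IR, and general-purpose registers R0-R3.
registers = {"PC": 0, "IR": None, "R0": 5, "R1": 3, "R2": 0, "R3": 0}

# Select R0 and R1 as ALU operands via multiplexers, add them into R2.
a = mux(0, [registers["R0"], registers["R1"]])
b = mux(1, [registers["R0"], registers["R1"]])
registers["R2"] = alu("ADD", a, b)
print(registers["R2"])  # 8
```

In real hardware each of these pieces is a combinational or sequential circuit; the code only mirrors their roles in the data path.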

Compiled By: B.G



6.1.2 Data Path Operations


1. Instruction Fetch:
 Involves retrieving an instruction from memory and placing it in the instruction register
(IR).
 The program counter (PC) is updated to point to the next instruction.
2. Data Transfer:
 Movement of data between registers and memory.
 Involves loading data into registers or storing data from registers into memory.
3. Arithmetic Operations:
 ALU performs arithmetic calculations on data stored in registers.
 Results are stored back in registers.
4. Logic Operations:
 ALU executes logical operations on binary data.
 Useful for decision-making and data manipulation.
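The four operations above can be combined into a toy fetch-decode-execute loop. The instruction format below (opcode, destination, two sources) is a simplification invented for this sketch, not a real ISA:

```python
# A toy fetch-decode-execute loop over a hypothetical instruction memory.
# Assumed instruction format: (opcode, dest, src1, src2).

memory = [
    ("LOAD", "R0", 7, None),     # data transfer: load immediate 7 into R0
    ("LOAD", "R1", 2, None),
    ("ADD",  "R2", "R0", "R1"),  # arithmetic: R2 = R0 + R1
    ("AND",  "R3", "R0", "R1"),  # logic: R3 = R0 & R1
]
regs = {"PC": 0, "IR": None}

while regs["PC"] < len(memory):
    regs["IR"] = memory[regs["PC"]]       # instruction fetch into IR
    regs["PC"] += 1                       # PC now points at the next instruction
    op, dest, s1, s2 = regs["IR"]         # decode
    if op == "LOAD":
        regs[dest] = s1                   # data transfer
    elif op == "ADD":
        regs[dest] = regs[s1] + regs[s2]  # arithmetic operation
    elif op == "AND":
        regs[dest] = regs[s1] & regs[s2]  # logic operation

print(regs["R2"], regs["R3"])  # 9 2
```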
6.1.3 Design Considerations
1. Clocking Mechanism:
 Synchronization of operations through a clock signal to ensure proper sequencing.
 The clock signal determines when each operation should occur.
2. Control Signals:
 Control signals generated by the control unit dictate the operation of various
components.
 Signals activate specific functions like reading from or writing to registers.
3. Bus Structure:
 The organization of data buses (e.g., data bus, address bus) influences data transfer
speed and efficiency.
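The clocking and control-signal points above can be illustrated with a sketch of an edge-triggered register: inputs are driven while the clock is stable, and the register latches its pending value on the clock edge only when a write-enable control signal is asserted. The class and signal names are hypothetical:

```python
# Sketch of edge-triggered register updates gated by a control signal:
# on each clock tick, a register latches its input only if write-enable is set.

class Register:
    def __init__(self, value=0):
        self.value = value
        self.next_value = value

    def drive(self, value, write_enable):
        """Combinational inputs settle before the clock edge."""
        if write_enable:
            self.next_value = value

    def tick(self):
        """The clock edge: latch the pending value."""
        self.value = self.next_value

r = Register(0)
r.drive(42, write_enable=False)
r.tick()
print(r.value)  # 0 (the write was not enabled)
r.drive(42, write_enable=True)
r.tick()
print(r.value)  # 42
```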
In summary, the implementation of simple data paths involves careful design and interconnection
of registers, ALUs, multiplexers, and buses to facilitate the flow of data and the execution of basic
operations within a computer system. This forms the foundation for more advanced concepts in
computer architecture, such as pipelining and parallel processing.


6.2 Control Unit: Hardwired Realization vs. Microprogrammed Realization


The control unit is a crucial component in a computer's architecture responsible for generating
control signals that coordinate the activities of other hardware components within the CPU. There
are two primary approaches to implementing the control unit: hardwired realization and
microprogrammed realization.
6.2.1 Hardwired Realization
In hardwired control units, the control signals are generated using combinational logic circuits.
The design is fixed, and the control signals are directly derived from the instruction opcode or
other inputs. It involves a direct mapping of the instruction set to the control signals without an
intermediate step.
Key Characteristics:
1. Dedicated Logic Circuits:
 Control signals are directly generated using dedicated combinational logic circuits.
 The design is specific and tailored to the instruction set architecture.
2. Efficiency:
 Hardwired control units are generally fast and efficient because there is no additional
layer of interpretation.
3. Fixed Functionality:
 The control unit's functionality is fixed and rigid. Any changes or updates require
hardware modifications.
4. Complexity:
 Complex instruction sets may lead to complex and extensive hardwired control units.
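As a rough illustration, suppose a machine with a 2-bit opcode (an encoding invented only for this sketch: 00 = LOAD, 01 = STORE, 10 = ADD, 11 = BRANCH). In a hardwired unit, each control signal is a fixed combinational function of the opcode bits:

```python
# Hypothetical 2-bit opcode: 00=LOAD, 01=STORE, 10=ADD, 11=BRANCH.
# In a hardwired control unit, each control signal is a fixed boolean
# function of the opcode bits, realized directly in combinational logic.

def hardwired_control(op1, op0):
    return {
        "mem_read":   op1 == 0 and op0 == 0,  # only LOAD reads memory
        "mem_write":  op1 == 0 and op0 == 1,  # only STORE writes memory
        "alu_enable": op1 == 1 and op0 == 0,  # only ADD uses the ALU
        "pc_load":    op1 == 1 and op0 == 1,  # only BRANCH reloads the PC
    }

print(hardwired_control(1, 0))  # ADD: only alu_enable is True
```

Extending the instruction set means changing these boolean functions, i.e. modifying the hardware, which is exactly the rigidity described above.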
6.2.2 Microprogrammed Realization
Microprogrammed control units use a control memory that stores microinstructions.
Microinstructions are sets of control signals that dictate the operations of the CPU. This design
offers a more flexible and modular approach to control unit implementation.
Key Characteristics:
1. Control Memory:
 Microinstructions are stored in a control memory, often implemented as a read-only
memory (ROM).
 Each microinstruction corresponds to a specific control signal configuration.

2. Flexibility:
 Offers greater flexibility as changes to the control unit can be made by updating the
microinstructions without altering the hardware.
3. Modularity:
 Microprogrammed control units are modular, allowing for easier modification and
maintenance.
4. Simpler Hardware:
 The hardware is simplified compared to hardwired control units, making it easier to
design and implement.
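A microprogrammed unit can be sketched as a lookup table standing in for the control ROM. The opcodes, signal names, and microsteps below are invented for illustration; the point is that changing behavior means editing table entries, not redesigning logic:

```python
# A hypothetical control memory (ROM): each opcode indexes a sequence of
# microinstructions, and each microinstruction is a set of active signals.
CONTROL_ROM = {
    "LOAD": [
        {"mem_read", "mar_load"},  # microstep 1: drive the address, read memory
        {"reg_write"},             # microstep 2: latch the value into a register
    ],
    "ADD": [
        {"alu_enable"},            # microstep 1: compute the sum
        {"reg_write"},             # microstep 2: write the result back
    ],
}

def control_signals(opcode):
    """Sequence the stored microinstructions for one opcode."""
    for microinstruction in CONTROL_ROM[opcode]:
        yield microinstruction

print(list(control_signals("ADD")))
```

Adding a new instruction here only requires a new ROM entry, which is the flexibility and modularity described above.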
6.2.3 Comparison
1. Adaptability:
 Hardwired control units are less adaptable to changes in instruction sets, requiring
significant hardware modifications.
 Microprogrammed control units are more adaptable, allowing for easier updates and
modifications.
2. Complexity vs. Flexibility:
 Hardwired units may become complex for larger instruction sets.
 Microprogrammed units trade some simplicity for flexibility, enabling easier
management of complex instruction sets.
3. Speed:
 Hardwired control units are generally faster due to the direct implementation of control
signals.
 Microprogrammed units may introduce additional cycles for accessing the control
memory, potentially slowing down the execution.
In summary, the choice between hardwired and microprogrammed realization depends on factors
such as instruction set complexity, flexibility requirements, and the need for adaptability.
Hardwired control units are efficient but less flexible, while microprogrammed control units offer
more flexibility at the cost of some speed. The decision often revolves around the specific design
goals and trade-offs relevant to the targeted computer architecture.


6.3 Instruction Pipelining


Instruction pipelining is a technique used in computer architecture to improve the overall
performance of a processor by allowing the simultaneous execution of multiple instructions. The
idea is to break down the instruction execution into a series of stages, with each stage handled by
a separate segment of the processor. This way, multiple instructions can be in different stages of
execution concurrently.
6.3.1 Basic Concepts
1. Pipeline Stages:
 Instructions pass through a series of stages in the pipeline, such as instruction fetch,
instruction decode, execute, memory access, and write back.
 Each stage performs a specific operation on the instruction.
2. Parallel Execution:
 Multiple instructions can be in different stages of the pipeline simultaneously, leading to
parallel execution.
 This increases the overall throughput of the processor.
3. Pipeline Registers:
 Between each stage, there are registers that temporarily hold the intermediate results of
an instruction.
 These registers facilitate the smooth flow of data between pipeline stages.
4. Clocking:
 Pipelining relies on a clocking mechanism to synchronize the movement of instructions
through the pipeline.
 Each pipeline stage operates on an instruction during a clock cycle.
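The concepts above can be made concrete with a small timing sketch for a classic five-stage pipeline (IF, ID, EX, MEM, WB). With no hazards, instruction i simply occupies stage s at clock cycle i + s:

```python
# Sketch of a 5-stage pipeline: which instruction occupies each stage per cycle.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_timeline(instructions):
    """Return, for each clock cycle, the instruction in each stage (or None)."""
    cycles = len(instructions) + len(STAGES) - 1
    timeline = []
    for cycle in range(cycles):
        snapshot = {}
        for s, stage in enumerate(STAGES):
            i = cycle - s  # instruction i entered IF at cycle i
            snapshot[stage] = instructions[i] if 0 <= i < len(instructions) else None
        timeline.append(snapshot)
    return timeline

t = pipeline_timeline(["i0", "i1", "i2"])
print(t[2])  # {'IF': 'i2', 'ID': 'i1', 'EX': 'i0', 'MEM': None, 'WB': None}
```

By cycle 2, three instructions are in flight at once, which is precisely the parallel execution described above.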
6.3.2 Pipeline Hazards
1. Data Hazards:
 Read After Write (RAW): A subsequent instruction needs a result that a previous
instruction has not yet produced.
 Write After Read (WAR): A later instruction writes a register before an earlier instruction
has read it.
 Write After Write (WAW): Two instructions write the same destination, and the writes
could complete in the wrong order.
2. Structural Hazards:
 Arise when multiple instructions require the same hardware resource simultaneously.
 Examples include conflicts for the use of the ALU or memory.

3. Control Hazards:
 Occur due to changes in the control flow, such as branches or jumps.
 Resolving control hazards may require flushing the pipeline and restarting the execution.
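Data-hazard detection can be sketched as a simple check between an earlier and a later instruction: a RAW hazard exists when the later instruction reads a register the earlier one writes. The tuple instruction format is an assumption made for this example:

```python
# Detect RAW hazards between two instructions, the basic check a pipeline
# uses to decide when it must stall or forward a result.
# Assumed instruction format: (dest_register, source_registers).

def has_raw_hazard(earlier, later):
    dest, _ = earlier
    _, sources = later
    return dest in sources

add = ("R1", ["R2", "R3"])  # R1 = R2 + R3
sub = ("R4", ["R1", "R5"])  # R4 = R1 - R5 -> reads R1 before it is written back
orr = ("R6", ["R2", "R7"])  # independent of the ADD

print(has_raw_hazard(add, sub))  # True
print(has_raw_hazard(add, orr))  # False
```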
6.3.3 Advantages
1. Increased Throughput:
 Pipelining allows the processor to handle multiple instructions concurrently, improving
overall throughput.
2. Resource Utilization:
 Different segments of the pipeline can work on different instructions simultaneously,
leading to better resource utilization.
3. Reduced Average Time per Instruction:
 Pipelining reduces the average time per completed instruction, since the stages operate
concurrently; the latency of any single instruction is not shortened.
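The throughput gain can be quantified with the standard ideal-pipeline timing model: n instructions on a k-stage pipeline take k + n - 1 cycles instead of the n × k cycles of an unpipelined machine, so the speedup approaches k for large n:

```python
# Ideal pipeline timing: an unpipelined machine takes n*k cycles for
# n instructions and k stages; a hazard-free pipeline takes k + n - 1.

def speedup(n, k):
    return (n * k) / (k + n - 1)

# With 5 stages and 1000 instructions the speedup approaches the
# stage count k, which is the theoretical upper limit.
print(round(speedup(1000, 5), 2))  # 4.98
```

Real pipelines fall short of this bound because hazards insert stalls, as discussed below.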
6.3.4 Challenges
1. Pipeline Hazards:
 Hazards may introduce stalls or delays in the pipeline, impacting performance gains.
2. Complex Control:
 Managing hazards and ensuring proper synchronization require complex control
mechanisms.
3. Increased Power Consumption:
 Pipelining may lead to increased power consumption due to the concurrent operation of
multiple pipeline stages.
6.3.5 Types of Pipelining
1. Instruction Pipelining:
 Each stage of the pipeline corresponds to a phase in the instruction execution cycle.
2. Superscalar Pipelining:
 Multiple pipelines operate independently, allowing for the execution of more than one
instruction per clock cycle.
3. VLIW (Very Long Instruction Word):
 Instructions are scheduled at compile-time for simultaneous execution.


6.4 Introduction to Instruction-Level Parallelism (ILP)


Instruction-Level Parallelism (ILP) is a concept in computer architecture that involves executing
multiple instructions in parallel to improve overall processor performance. It aims to exploit
parallelism within a single stream of instructions to enhance throughput and reduce the overall
execution time of a program. ILP is crucial for achieving high performance in modern processors.
6.4.1 Overview
1. Parallel Execution:
 ILP focuses on finding opportunities to execute multiple instructions simultaneously
within a single instruction stream.
2. Dependency Analysis:
 Analyzing dependencies among instructions to identify those that can be executed
concurrently without violating the program's semantics.
3. Types of Dependencies:
 Data Dependencies (Read After Write - RAW, Write After Read - WAR, Write After
Write - WAW): Dependencies based on data flow between instructions.
 Control Dependencies: Dependencies based on control flow, such as branches.
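The three data-dependence types can be sketched as a small classifier over an (earlier, later) instruction pair, again using an invented (destination, sources) tuple format:

```python
# Classify data dependencies between an earlier and a later instruction.
# Assumed instruction format: (dest_register, source_registers).

def dependencies(earlier, later):
    e_dest, e_src = earlier
    l_dest, l_src = later
    found = set()
    if e_dest in l_src:
        found.add("RAW")  # later reads what earlier writes
    if l_dest in e_src:
        found.add("WAR")  # later writes what earlier reads
    if e_dest == l_dest:
        found.add("WAW")  # both write the same register
    return found

i1 = ("R1", ["R2", "R3"])  # R1 = R2 + R3
i2 = ("R2", ["R1", "R4"])  # R2 = R1 * R4
print(sorted(dependencies(i1, i2)))  # ['RAW', 'WAR']
```

Only RAW is a true data-flow dependence; WAR and WAW are name conflicts that register renaming (below) can remove.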
6.4.2 Techniques for Exploiting ILP
1. Superscalar Processors:
 Superscalar architecture involves having multiple execution units within a processor that
can operate concurrently.
 Instructions are dispatched to different units simultaneously, allowing for parallel
execution.
2. Out-of-Order Execution:
 In traditional in-order execution, instructions are executed in the order they appear in the
program.
 Out-of-order execution allows the processor to execute instructions that are not dependent
on the completion of previous instructions, improving utilization of execution units.


3. Speculative Execution:
 Predicting the outcome of branch instructions and executing instructions speculatively
based on the prediction.
 If the prediction is correct, the results are kept; otherwise, the speculatively executed
instructions are discarded.
4. Scoreboarding:
 A hardware-based technique that tracks the status of instructions in the pipeline to
identify and resolve dependencies.
5. Register Renaming:
 Technique to eliminate register data hazards by using additional physical registers to hold
temporary values.
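Register renaming in particular lends itself to a short sketch: each write to an architectural register is assigned a fresh physical register, so WAR and WAW hazards (which are mere name conflicts) disappear while true RAW dependencies are preserved. The scheme below is a simplification invented for illustration; real hardware recycles a finite physical register file:

```python
# Sketch of register renaming: every write to an architectural register
# allocates a fresh physical register, eliminating WAR and WAW hazards.

def rename(instructions):
    """instructions: list of (dest, sources) over architectural registers."""
    mapping = {}  # architectural register -> current physical register
    next_phys = 0
    renamed = []
    for dest, sources in instructions:
        # Sources read the current mapping (or keep their name if unmapped).
        new_sources = [mapping.get(s, s) for s in sources]
        mapping[dest] = f"P{next_phys}"  # fresh physical register per write
        next_phys += 1
        renamed.append((mapping[dest], new_sources))
    return renamed

prog = [
    ("R1", ["R2", "R3"]),  # R1 = R2 + R3
    ("R2", ["R1", "R4"]),  # R2 = R1 * R4  (WAR: writes R2, which i1 reads)
    ("R1", ["R2", "R5"]),  # R1 = R2 - R5  (WAW: writes R1 again)
]
print(rename(prog))
```

After renaming, the two writes to R1 target distinct physical registers (P0 and P2), so they can complete in any order; only the genuine RAW chains remain.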
6.4.3 Advantages of ILP
1. Increased Throughput:
 ILP enables the concurrent execution of multiple instructions, leading to higher overall
throughput.
2. Resource Utilization:
 Efficient use of processor resources, as idle units can be utilized for executing
independent instructions.
3. Reduced Execution Time:
 Parallel execution reduces the overall execution time of a program.
6.4.4 Challenges and Considerations
1. Dependency Handling:
 Efficient handling of dependencies is critical to avoiding hazards and ensuring correct
program execution.
2. Complexity:
 ILP introduces complexity to processor design and control mechanisms, making it
challenging to implement.
3. Power Consumption:
 Increased parallelism may result in higher power consumption, especially in superscalar
and out-of-order execution architectures.


4. Compiler Support:
 Exploiting ILP often requires collaboration with compilers to schedule and optimize
instructions for parallel execution.
6.4.5 Future Trends
1. Vectorization:
 Exploiting parallelism through vector operations, where a single instruction operates on
multiple data elements simultaneously.
2. SIMD and GPU Architectures:
 Single Instruction, Multiple Data (SIMD) architectures and Graphics Processing Units
(GPUs) are designed to handle parallelism for specific types of applications.
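SIMD semantics can be illustrated in plain Python: a single vector "instruction" applies one operation to every lane of fixed-width registers, modeled here as lists (the lane count and function name are assumptions for the sketch):

```python
# SIMD semantics sketched in plain Python: one "instruction" applies the
# same operation to every lane of fixed-width vector registers.

def simd_add(va, vb):
    """Element-wise add across all lanes, as a single vector instruction."""
    assert len(va) == len(vb), "vector registers must have the same width"
    return [a + b for a, b in zip(va, vb)]

v0 = [1, 2, 3, 4]  # a 4-lane vector register
v1 = [10, 20, 30, 40]
print(simd_add(v0, v1))  # [11, 22, 33, 44]
```

On real SIMD hardware all four lane additions happen in one instruction, rather than in a Python loop.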
