COA Chapter6
Chapter 6
Functional Organization
1. Registers:
Registers are small, fast storage locations within the CPU used for temporary data storage.
They store operands, intermediate results, and control information during program
execution.
Common registers include the program counter (PC), instruction register (IR), and general-
purpose registers.
2. Arithmetic Logic Unit (ALU):
The ALU performs arithmetic and logical operations (such as addition, subtraction, AND,
and OR) on operands supplied by the registers.
3. Multiplexers:
Multiplexers (mux) are used to select one of several input data sources and direct it to the
output.
They are crucial for controlling the flow of data within the data path.
4. Data Paths:
Data paths connect various components, allowing data to flow from one part of the CPU
to another.
They include buses (data buses, address buses) that transfer data between components.
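The role of a multiplexer in the data path can be sketched in a few lines of Python (a minimal model for illustration, not tied to any particular CPU; the signal and value names are invented):

```python
def mux(select, *inputs):
    """Model an n-to-1 multiplexer.

    The control unit drives `select`, steering one of several
    data sources to the single output line.
    """
    return inputs[select]

# Example: the control unit chooses the ALU result (select=1)
# over an immediate operand (select=0).
immediate, alu_result = 7, 42
print(mux(1, immediate, alu_result))  # -> 42
```

In real hardware the same selection happens combinationally; the model only shows the data-steering behaviour.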
2. Flexibility:
Offers greater flexibility as changes to the control unit can be made by updating the
microinstructions without altering the hardware.
3. Modularity:
Microprogrammed control units are modular, allowing for easier modification and
maintenance.
4. Simpler Hardware:
The hardware is simplified compared to hardwired control units, making it easier to
design and implement.
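This flexibility can be illustrated with a toy control store: changing the control unit's behaviour means editing a table of microinstructions rather than rewiring logic. The opcodes and control-signal names below are illustrative, not taken from any real ISA:

```python
# Toy control store: each opcode maps to a sequence of
# microinstructions, and each microinstruction is the set of
# control signals asserted in that step.
CONTROL_STORE = {
    "LOAD": [{"mem_read"}, {"reg_write"}],
    "ADD":  [{"alu_add"}, {"reg_write"}],
}

def run_microprogram(opcode):
    """Yield the control signals for each step of an instruction."""
    for microinstruction in CONTROL_STORE[opcode]:
        yield microinstruction

print(list(run_microprogram("ADD")))  # [{'alu_add'}, {'reg_write'}]
```

Supporting a new instruction amounts to adding one more entry to `CONTROL_STORE`; a hardwired design would need new decoding logic instead.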
6.2.3 Comparison
1. Adaptability:
Hardwired control units are less adaptable to changes in instruction sets, requiring
significant hardware modifications.
Microprogrammed control units are more adaptable, allowing for easier updates and
modifications.
2. Complexity vs. Flexibility:
Hardwired units may become complex for larger instruction sets.
Microprogrammed units trade some speed for flexibility, enabling easier
management of complex instruction sets.
3. Speed:
Hardwired control units are generally faster due to the direct implementation of control
signals.
Microprogrammed units may introduce additional cycles for accessing the control
memory, potentially slowing down the execution.
In summary, the choice between hardwired and microprogrammed realization depends on factors
such as instruction set complexity, flexibility requirements, and the need for adaptability.
Hardwired control units are efficient but less flexible, while microprogrammed control units offer
more flexibility at the cost of some speed. The decision often revolves around the specific design
goals and trade-offs relevant to the targeted computer architecture.
3. Control Hazards:
Occur due to changes in the control flow, such as branches or jumps.
Resolving control hazards may require flushing the pipeline and restarting the execution.
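The flush described above can be sketched as follows (a simplified model: in a real pipeline only the stages holding instructions younger than the branch are squashed, and fetch then restarts at the branch target):

```python
def flush_on_branch(pipeline, branch_index, branch_taken):
    """Squash wrong-path instructions fetched after a taken branch.

    `pipeline` lists in-flight instructions, oldest first; everything
    fetched after the branch is on the wrong path and is discarded.
    """
    if not branch_taken:
        return pipeline                  # prediction of not-taken was right
    return pipeline[:branch_index + 1]   # keep the branch and older work

inflight = ["LOAD", "BEQ", "ADD", "SUB"]   # ADD/SUB fetched speculatively
print(flush_on_branch(inflight, 1, True))  # ['LOAD', 'BEQ']
```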
6.3.3 Advantages
1. Increased Throughput:
Pipelining allows the processor to handle multiple instructions concurrently, improving
overall throughput.
2. Resource Utilization:
Different segments of the pipeline can work on different instructions simultaneously,
leading to better resource utilization.
3. Reduced Cycle Time:
Pipelining allows a shorter clock cycle, since each stage performs only a fraction of an
instruction's total work, and the stages operate concurrently.
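The throughput gain can be quantified with the standard pipeline timing model: with k stages of stage time t, n instructions finish in (k + n - 1) * t instead of n * k * t. A quick check with illustrative numbers:

```python
def unpipelined_time(n, k, t):
    """Time for n instructions with no overlap: each takes k stages of t."""
    return n * k * t

def pipelined_time(n, k, t):
    """Time on a k-stage pipeline: k*t to fill, then one result per t."""
    return (k + n - 1) * t

# 100 instructions, 5 stages, 1 ns per stage (hypothetical values).
print(unpipelined_time(100, 5, 1))  # 500 ns
print(pipelined_time(100, 5, 1))    # 104 ns -> nearly 5x speedup
```

As n grows, the speedup approaches k, the number of stages; hazards and stalls reduce this ideal figure in practice.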
6.3.4 Challenges
1. Pipeline Hazards:
Hazards may introduce stalls or delays in the pipeline, impacting performance gains.
2. Complex Control:
Managing hazards and ensuring proper synchronization require complex control
mechanisms.
3. Increased Power Consumption:
Pipelining may lead to increased power consumption due to the concurrent operation of
multiple pipeline stages.
6.3.5 Types of Pipelining
1. Instruction Pipelining:
Each stage of the pipeline corresponds to a phase in the instruction execution cycle
(e.g., fetch, decode, execute, memory access, write-back).
2. Superscalar Pipelining:
Multiple pipelines operate independently, allowing for the execution of more than one
instruction per clock cycle.
3. VLIW (Very Long Instruction Word):
The compiler schedules independent operations into a single long instruction word at
compile time, so they can be issued simultaneously.
3. Speculative Execution:
Predicting the outcome of branch instructions and executing instructions speculatively
based on the prediction.
If the prediction is correct, the results are kept; otherwise, the speculatively executed
instructions are discarded.
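The keep-or-discard decision can be sketched as follows (a deliberately simplified model; real processors also repair predictor state and refetch from the correct path after a squash):

```python
def execute_branch(prediction, actual, speculative_work):
    """Model speculative execution past a branch.

    Instructions on the predicted path run early; their results are
    committed only if the prediction matches the real outcome.
    """
    if prediction == actual:
        return speculative_work, "commit"   # prediction correct
    return [], "squash"                     # wrong path: discard work

results, action = execute_branch(prediction=True, actual=False,
                                 speculative_work=["r1=5", "r2=9"])
print(action, results)  # squash []
```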
4. Scoreboarding:
A hardware-based technique that tracks the status of instructions in the pipeline to
identify and resolve dependencies.
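A much-simplified scoreboard can be modelled as a set of registers with pending writes: an instruction issues only when none of its operands conflict with in-flight work (real scoreboarding, as in the CDC 6600, also tracks functional-unit status):

```python
class Scoreboard:
    """Track pending register writes to detect data dependencies."""

    def __init__(self):
        self.busy = set()   # registers awaiting a result

    def can_issue(self, sources, dest):
        # Stall on RAW (a source is busy) or WAW (dest is busy) hazards.
        return not (set(sources) & self.busy) and dest not in self.busy

    def issue(self, dest):
        self.busy.add(dest)

    def writeback(self, dest):
        self.busy.discard(dest)

sb = Scoreboard()
sb.issue("r1")                      # e.g. a LOAD into r1 is in flight
print(sb.can_issue(["r1"], "r3"))   # False: RAW hazard on r1
sb.writeback("r1")                  # LOAD completes
print(sb.can_issue(["r1"], "r3"))   # True: safe to issue now
```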
5. Register Renaming:
Technique to eliminate register data hazards by using additional physical registers to hold
temporary values.
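The renaming idea can be sketched directly: every write gets a fresh physical register, and later reads are redirected to the most recent mapping, which removes WAR and WAW hazards (register names here are illustrative; real renamers also recycle physical registers):

```python
def rename(instructions):
    """Rename architectural destination registers to fresh physical ones.

    Each instruction is (dest, [sources]). Because every write targets
    a new physical register, two writes to the same architectural
    register no longer conflict.
    """
    mapping, next_free, out = {}, 0, []
    for dest, srcs in instructions:
        srcs = [mapping.get(s, s) for s in srcs]  # read the latest name
        phys = f"p{next_free}"
        next_free += 1
        mapping[dest] = phys                      # fresh name per write
        out.append((phys, srcs))
    return out

# r1 is written twice; after renaming the writes target distinct
# physical registers and can complete in any order.
prog = [("r1", ["r2"]), ("r3", ["r1"]), ("r1", ["r4"])]
print(rename(prog))  # [('p0', ['r2']), ('p1', ['p0']), ('p2', ['r4'])]
```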
6.4.3 Advantages of ILP
1. Increased Throughput:
ILP enables the concurrent execution of multiple instructions, leading to higher overall
throughput.
2. Resource Utilization:
Efficient use of processor resources, as idle units can be utilized for executing
independent instructions.
3. Reduced Execution Time:
Parallel execution reduces the overall execution time of a program.
6.4.4 Challenges and Considerations
1. Dependency Handling:
Efficient handling of dependencies is critical to avoiding hazards and ensuring correct
program execution.
2. Complexity:
ILP introduces complexity to processor design and control mechanisms, making it
challenging to implement.
3. Power Consumption:
Increased parallelism may result in higher power consumption, especially in superscalar
and out-of-order execution architectures.
4. Compiler Support:
Exploiting ILP often requires collaboration with compilers to schedule and optimize
instructions for parallel execution.
6.4.5 Future Trends
1. Vectorization:
Exploiting parallelism through vector operations, where a single instruction operates on
multiple data elements simultaneously.
2. SIMD and GPU Architectures:
Single Instruction, Multiple Data (SIMD) architectures and Graphics Processing Units
(GPUs) are designed to handle parallelism for specific types of applications.
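The SIMD idea can be sketched in plain Python: one operation is applied across many data elements. In hardware, a single vector instruction performs all the lane operations at once; here an ordinary loop stands in for the parallel lanes:

```python
def vector_add(a, b):
    """SIMD-style elementwise add: one operation, many data elements.

    A real SIMD unit would execute all lane additions in a single
    instruction over vector registers; the comprehension below models
    the lanes sequentially.
    """
    return [x + y for x, y in zip(a, b)]

print(vector_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```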