RISC, CISC & Pipeline Notes

The document provides an overview of two CPU architecture designs: Reduced Instruction Set Computer (RISC) and Complex Instruction Set Computer (CISC), highlighting their key features, advantages, and disadvantages. RISC focuses on a simplified instruction set for faster execution, while CISC utilizes complex instructions to reduce program size. Additionally, it discusses pipelining as a technique to improve CPU performance by overlapping instruction execution stages.


The Assam Kaziranga University

Computer Science and Engineering


Computer Organization & Architecture

Reduced Instruction Set Computer (RISC)

RISC (Reduced Instruction Set Computer) is a CPU design strategy focused on simplifying the
instructions executed by a computer. The idea is to use a small set of simple instructions that
can execute in a single clock cycle, allowing for faster and more efficient processing. Here are
some key features and principles of RISC architecture:

1. Simplified Instruction Set


RISC processors have a limited set of instructions, each designed to perform a small, specific
operation.
By keeping instructions simple, RISC processors avoid multi-step operations and aim to complete
each instruction in a single clock cycle.

2. Single-Cycle Execution
One of the main principles of RISC is that each instruction is executed in a single clock cycle,
which increases efficiency and speed.

3. Load and Store Architecture


RISC uses a load/store model, meaning it separates memory access from arithmetic
operations.
Only the LOAD and STORE instructions access memory; all other operations work directly on
CPU registers, which reduces the number of memory accesses and improves speed.
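To make the load/store model concrete, here is a minimal Python sketch of such a machine (the instruction names, register names, and memory layout are invented for illustration, not taken from any real ISA): only LOAD and STORE touch memory, so the statement a = b + c becomes a four-instruction sequence.

# Minimal sketch of a load/store machine (hypothetical instruction set):
# only LOAD and STORE access memory; ADD works purely on registers.
memory = {"a": 0, "b": 7, "c": 5}   # named memory locations
regs = {}                           # CPU register file

def run(program):
    for op, *args in program:
        if op == "LOAD":            # LOAD rd, addr : memory -> register
            rd, addr = args
            regs[rd] = memory[addr]
        elif op == "STORE":         # STORE rs, addr : register -> memory
            rs, addr = args
            memory[addr] = regs[rs]
        elif op == "ADD":           # ADD rd, rs, rt : registers only
            rd, rs, rt = args
            regs[rd] = regs[rs] + regs[rt]

# a = b + c compiled for a load/store machine:
run([
    ("LOAD",  "r1", "b"),
    ("LOAD",  "r2", "c"),
    ("ADD",   "r3", "r1", "r2"),
    ("STORE", "r3", "a"),
])
print(memory["a"])   # 12

Note that the ADD never touches memory; it only reads and writes registers.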

4. Large Number of Registers


RISC architectures typically include a larger number of general-purpose registers than CISC
designs, allowing more data to be kept within the CPU itself.
This minimizes the need to access slower memory, which further enhances processing speed.

5. Pipelining
Pipelining is heavily utilized in RISC architecture. It allows multiple instructions to be processed
simultaneously, with each part of the CPU handling a different stage of execution for a different
instruction.
This leads to more efficient use of the CPU and faster overall instruction execution.

6. Fixed-Length Instructions
RISC instructions are usually of fixed length, which simplifies decoding and aligns better with
pipelining.


This uniformity in instruction length helps streamline the instruction cycle and contributes to the
processor's overall efficiency.
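To see why a fixed length simplifies decoding, the sketch below assumes a hypothetical 32-bit format, loosely modelled on a MIPS-style R-type layout (a 6-bit opcode followed by three 5-bit register fields). Because every field sits at a known bit position, decoding reduces to a few shifts and masks.

# Hypothetical fixed 32-bit instruction format (assumed for illustration):
#   bits 31-26: opcode   bits 25-21: rs   bits 20-16: rt   bits 15-11: rd
def encode(opcode, rs, rt, rd):
    return (opcode << 26) | (rs << 21) | (rt << 16) | (rd << 11)

def decode(word):
    opcode = (word >> 26) & 0x3F    # every field has a fixed position and width
    rs     = (word >> 21) & 0x1F
    rt     = (word >> 16) & 0x1F
    rd     = (word >> 11) & 0x1F
    return opcode, rs, rt, rd

word = encode(0x20, 1, 2, 3)        # e.g. "ADD r3, r1, r2" in this made-up format
print(decode(word))                 # (32, 1, 2, 3)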

7. Optimized Compiler Design


RISC relies on compilers to translate high-level code into optimized machine code, ensuring
that the code can be executed efficiently by the processor.
Compilers play a crucial role in converting complex operations into simpler RISC instructions,
thereby offloading some of the complexity from the CPU.

8. Advantages of RISC
High Speed: Simplified instructions and single-cycle execution make RISC processors faster.
Efficiency in Power and Area: Reduced instructions and simpler circuitry can make RISC
processors more power-efficient.
Ease of Pipelining: Consistent instruction length and simplicity allow for efficient pipelining.
Scalability: RISC design is easily scalable, making it suitable for a wide range of applications
from embedded systems to high-performance computing.

9. Disadvantages of RISC
Dependency on Compiler: RISC systems rely on sophisticated compilers to manage code
optimization and handle complex tasks.
Limited Instruction Set: Complex operations may require multiple simple instructions, which
could lead to longer programs.
Memory Use: Because certain operations expand into several simple instructions, program code
can occupy more memory.

10. Examples of RISC Processors


Some well-known RISC architectures include ARM, MIPS, and SPARC. These processors are
widely used in mobile devices, gaming consoles, and embedded systems due to their power
efficiency and performance.

RISC architecture represents a design philosophy focused on speed, simplicity, and efficiency,
playing a vital role in modern computing across various applications, especially where high
performance and energy efficiency are critical.


Complex Instruction Set Computer (CISC)

CISC (Complex Instruction Set Computer) is a CPU architecture design approach focused on
implementing a broad set of instructions, where each instruction can perform complex
operations. The goal is to minimize the number of instructions per program, even if individual
instructions take more cycles to execute. Here are the key characteristics and principles of
CISC architecture:

1. Extensive Instruction Set


CISC processors feature a large and diverse set of instructions, including complex commands
that can perform multiple tasks in a single instruction.
Instructions in CISC may vary in length and complexity, allowing higher-level operations to be
expressed in fewer machine instructions.

2. Multi-Cycle Instructions
CISC instructions are generally designed to accomplish tasks that would take multiple steps in
RISC architecture.
These instructions often take multiple clock cycles to execute but can reduce the overall
number of instructions needed for a task.

3. Memory-to-Memory Operations
CISC architecture allows instructions to directly access memory without needing to load data
into registers first.
This can simplify programming since operations can be performed directly on memory
locations, reducing the need for intermediate instructions.

4. Fewer Registers
Since CISC processors can work directly with memory, they generally require fewer
general-purpose registers than RISC.
This design allows for more complex operations at the cost of increased memory access time,
but CISC compensates for this with instructions designed to handle memory directly.

5. Microcode Control Unit


CISC processors often use microcode to control complex instructions. Microcode is a layer of
instructions that translates high-level CISC instructions into a sequence of simpler steps
executed by the processor.
This allows the CPU to handle complex instructions without significant hardware complexity,
improving flexibility.
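A rough Python sketch of the idea (the instruction name ADD_MEM_MEM and the micro-operation names are hypothetical): a single memory-to-memory add is looked up in a microcode table and expanded into a sequence of simpler register-level steps that the hardware executes one after another.

# Sketch of microcoded control: one complex CISC-style instruction expands
# into a sequence of simpler micro-operations (all names are invented).
MICROCODE = {
    # ADD dst, src : memory-to-memory add, realized as register-level steps
    "ADD_MEM_MEM": [
        ("LOAD_TEMP1_FROM", "src"),   # fetch source operand from memory
        ("LOAD_TEMP2_FROM", "dst"),   # fetch destination operand from memory
        ("ADD_TEMPS",),               # temp2 <- temp1 + temp2
        ("STORE_TEMP2_TO", "dst"),    # write result back to memory
    ],
}

def execute(instr, operands, memory):
    temp1 = temp2 = 0
    for micro_op in MICROCODE[instr]:
        if micro_op[0] == "LOAD_TEMP1_FROM":
            temp1 = memory[operands[micro_op[1]]]
        elif micro_op[0] == "LOAD_TEMP2_FROM":
            temp2 = memory[operands[micro_op[1]]]
        elif micro_op[0] == "ADD_TEMPS":
            temp2 = temp1 + temp2
        elif micro_op[0] == "STORE_TEMP2_TO":
            memory[operands[micro_op[1]]] = temp2

mem = {100: 4, 200: 6}
execute("ADD_MEM_MEM", {"src": 100, "dst": 200}, mem)
print(mem[200])   # 10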


6. Variable-Length Instructions
CISC instructions often have variable lengths, meaning that they are not all a fixed number of
bits. This allows more flexibility in the types of instructions and operations supported.
While this can make instruction decoding more complex, it allows for a rich set of instructions
that can handle a wide range of tasks.
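The sketch below illustrates that decoding cost (the opcode-to-length table is an invented encoding, not x86): with variable-length instructions the decoder must inspect each opcode just to find out where the next instruction begins.

# Sketch of variable-length instruction splitting (hypothetical encoding):
# the opcode itself determines how many bytes the instruction occupies.
LENGTHS = {0x01: 1, 0x02: 3, 0x03: 5}    # opcode -> total length in bytes

def split_instructions(byte_stream):
    instructions, pc = [], 0
    while pc < len(byte_stream):
        length = LENGTHS[byte_stream[pc]]          # must decode opcode first
        instructions.append(byte_stream[pc:pc + length])
        pc += length                               # only now is the next start known
    return instructions

stream = bytes([0x02, 0x10, 0x20, 0x01, 0x03, 0x01, 0x02, 0x03, 0x04])
for instr in split_instructions(stream):
    print(instr.hex())   # 021020, 01, 0301020304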

7. Focus on Reducing Program Size


CISC is designed to reduce the number of instructions per program, which in turn minimizes
the amount of memory required for storing the program.
Complex instructions mean that fewer instructions need to be written, which can reduce overall
program size.

8. Advantages of CISC
Reduced Code Size: Complex instructions allow tasks to be completed with fewer instructions,
making the code more compact.
Easier Compiler Design: Since CISC instructions closely match high-level language constructs,
compilers have to perform fewer optimizations, simplifying compiler design.
Efficient Use of Memory: With reduced program size, less memory is needed to store
instructions, which can be advantageous in certain systems.

9. Disadvantages of CISC
Slower Instruction Execution: Due to the complexity of instructions, CISC processors may
require multiple clock cycles to execute each instruction, leading to slower execution.
Complexity in Decoding: Variable-length and complex instructions make the instruction
decoding process more complicated.
Increased Power Consumption: The complexity of CISC instructions often translates to
increased power usage, making it less suitable for power-sensitive applications.

10. Examples of CISC Processors


Common examples of CISC architecture include the Intel x86 family and IBM System/360
mainframes. x86 processors, widely used in personal computers, are known for their CISC
design, which supports a vast and varied set of instructions.

CISC architecture provides flexibility and reduces the complexity of program code by
implementing powerful, multi-step instructions. It has traditionally been used in systems where
minimizing code size is important, although the trade-offs in power and speed have led to
increased interest in RISC approaches for many applications.


Pipeline

Pipelining is a technique in computer architecture that allows for the overlapping execution of
multiple instructions by breaking down the instruction execution process into separate stages.
Each stage of the pipeline processes a different part of an instruction, enabling multiple
instructions to be in different stages of execution simultaneously. This significantly improves
the throughput and overall performance of the CPU. Here are the key aspects and stages
involved in pipelining:

1. Definition of Pipelining
Pipelining is a method where multiple instructions are processed in a CPU by breaking down
instruction execution into a sequence of stages. Each stage completes part of the instruction
(e.g., fetch, decode, execute).
With pipelining, once one stage completes its task for a particular instruction, it passes that
instruction to the next stage and begins work on a new instruction.

2. Stages in a Pipeline
A typical instruction pipeline includes the following stages:
Fetch (IF): The instruction is fetched from memory.
Decode (ID): The fetched instruction is decoded to understand what operation is to be
performed.
Execute (EX): The CPU executes the operation, often involving ALU (Arithmetic Logic Unit)
operations.
Memory Access (MEM): If required, data is accessed from or written to memory.
Write-back (WB): The result is written back to a register, completing the instruction cycle.
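The following Python sketch prints an idealized schedule for these five stages (assuming one cycle per stage and no hazards or stalls): it shows which stage each of four instructions occupies in every clock cycle, making the overlap visible.

# Idealized 5-stage pipeline schedule (one cycle per stage, no stalls assumed).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def schedule(num_instructions):
    total_cycles = len(STAGES) + num_instructions - 1
    for cycle in range(total_cycles):
        row = []
        for i in range(num_instructions):
            stage_index = cycle - i            # instruction i enters IF at cycle i
            if 0 <= stage_index < len(STAGES):
                row.append(f"I{i + 1}:{STAGES[stage_index]}")
        print(f"cycle {cycle + 1}: " + "  ".join(row))

schedule(4)
# cycle 1: I1:IF
# cycle 2: I1:ID  I2:IF
# cycle 3: I1:EX  I2:ID  I3:IF
# ... until I4 leaves WB in cycle 8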

3. Pipeline Throughput and Latency


Throughput: This is the number of instructions the pipeline can process per unit of time.
Pipelining increases throughput by allowing multiple instructions to be in different stages
simultaneously.
Latency: This is the time it takes for a single instruction to pass through the entire pipeline.
While pipelining does not reduce latency for individual instructions, it improves overall
throughput.
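As a quick calculation under the usual idealized assumptions (one cycle per stage, no stalls): a k-stage pipeline completes n instructions in k + n - 1 cycles instead of the n * k cycles a non-pipelined design would need, while the latency of any single instruction remains k cycles.

# Idealized pipeline timing (one cycle per stage, no stalls assumed).
k, n = 5, 100                       # 5 stages, 100 instructions
pipelined     = k + n - 1           # fill the pipeline once, then finish
                                    # one instruction per cycle
non_pipelined = k * n
print(pipelined, non_pipelined)             # 104 vs 500 cycles
print(round(non_pipelined / pipelined, 2))  # speedup of about 4.81

As n grows, the speedup n * k / (k + n - 1) approaches k, the number of stages, which is why throughput improves even though per-instruction latency does not.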

4. Pipeline Hazards


Structural Hazards: Occur when hardware resources are insufficient to support all instructions
in the pipeline simultaneously. For example, two instructions might need access to memory at
the same time.
Data Hazards: Arise when instructions in the pipeline depend on the results of previous
instructions that have not yet completed. This can lead to incorrect results if not handled
properly.
Control Hazards: Occur when the pipeline must deal with branches and jumps, leading to
uncertainty about the next instruction to fetch. For example, branch instructions may change
the flow of the program, causing delays.
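Of these, data hazards are the easiest to detect mechanically. The small Python sketch below (the instruction tuples and the stall/forwarding rules are simplified assumptions based on a classic five-stage pipeline) checks whether the second of two adjacent instructions reads a register that the first one writes, and reports how the hazard would typically be resolved using the remedies described in the next point.

# Sketch of read-after-write (RAW) hazard detection between adjacent
# instructions, assuming a classic 5-stage pipeline with forwarding.
# Instruction format (hypothetical): (opcode, destination_reg, source_regs)
def check_hazard(producer, consumer):
    op, dest, _ = producer
    _, _, sources = consumer
    if dest not in sources:
        return "no hazard"
    if op == "LOAD":
        # A loaded value is only available after MEM, so even with
        # forwarding one bubble (stall cycle) is required.
        return "RAW hazard: stall one cycle, then forward"
    # An ALU result is available after EX and can be forwarded directly.
    return "RAW hazard: resolved by forwarding"

print(check_hazard(("ADD",  "r1", ["r2", "r3"]), ("SUB", "r4", ["r1", "r5"])))
print(check_hazard(("LOAD", "r1", ["r2"]),       ("ADD", "r4", ["r1", "r5"])))
print(check_hazard(("ADD",  "r1", ["r2", "r3"]), ("SUB", "r4", ["r6", "r5"])))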

5. Handling Pipeline Hazards


Stalling (Pipeline Stall): The pipeline may be temporarily halted to allow a previous instruction
to complete, ensuring no dependencies are violated.
Forwarding (Data Forwarding): Allows intermediate results to be passed directly to subsequent
stages, reducing data hazards.
Branch Prediction: Techniques to predict the outcome of branch instructions to reduce control
hazards. If the prediction is correct, the pipeline continues smoothly; if incorrect, a pipeline
flush may occur, removing incorrectly fetched instructions.
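As an example of the branch-prediction idea, here is a sketch of a 2-bit saturating-counter predictor, a standard textbook scheme (the table size and the way the branch address indexes it are simplifying assumptions): a strongly biased prediction only flips after two consecutive mispredictions, which suits loop branches that are taken many times and not taken once.

# Sketch of a 2-bit saturating-counter branch predictor.
class TwoBitPredictor:
    def __init__(self, size=16):
        self.counters = [1] * size          # 0-1 predict not taken, 2-3 predict taken

    def predict(self, pc):
        return self.counters[pc % len(self.counters)] >= 2

    def update(self, pc, taken):
        i = pc % len(self.counters)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

bp = TwoBitPredictor()
outcomes = [True, True, True, False, True, True]   # loop-like branch behaviour
hits = 0
for outcome in outcomes:
    hits += bp.predict(pc=0x40) == outcome
    bp.update(pc=0x40, taken=outcome)
print(f"{hits}/{len(outcomes)} predictions correct")   # 4/6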

6. Pipeline Efficiency
Pipeline efficiency is determined by how effectively the pipeline can avoid stalls and handle
hazards.
Ideal pipeline performance is achieved when the pipeline is fully utilized, meaning each stage
is active with an instruction at all times.

7. Types of Pipelines
Instruction Pipeline: Focuses on overlapping the fetch, decode, execute, and write-back stages
for multiple instructions.
Arithmetic Pipeline: Used in CPUs for floating-point and integer arithmetic operations, breaking
down complex mathematical operations into stages for faster execution.

8. Superscalar and Advanced Pipelining


Superscalar Architecture: Uses multiple pipelines to execute more than one instruction per
clock cycle, increasing the instruction throughput.
Out-of-Order Execution: Allows instructions to be executed as soon as their dependencies are
resolved, rather than strictly following program order.


9. Advantages of Pipelining
Increased Throughput: Pipelining allows more instructions to be processed over time, resulting
in higher instruction throughput.
Efficient CPU Utilization: Pipelining keeps multiple parts of the CPU active, making more
efficient use of CPU resources.
Reduced Instruction Time: For large instruction streams, the average time per instruction
decreases as more instructions are processed in parallel.

10. Disadvantages of Pipelining


Complexity of Hazard Handling: Pipeline hazards add complexity to CPU design and require
additional circuitry or strategies to resolve them.
Pipeline Stalls and Wasted Cycles: Stalls can lead to idle cycles, reducing efficiency if not
managed well.
Increased Hardware Cost: Additional resources are needed to support pipelining (e.g., branch
predictors, forwarding units), increasing design complexity and cost.

11. Applications of Pipelining


Widely used in modern processors to enhance performance in applications requiring
high-speed data processing, such as multimedia, scientific computations, and gaming.
Often used in GPUs (Graphics Processing Units) and DSPs (Digital Signal Processors) to
handle parallel processing of data-intensive tasks.

Pipelining is essential in modern CPU design, allowing processors to achieve higher performance
by parallelizing instruction execution. While managing hazards and stalls poses challenges,
advanced techniques like branch prediction, superscalar execution, and out-of-order processing
make pipelining an effective tool for boosting processing speed in computer architecture.
