ACA CIE-1 Notes

Uploaded by Keerti Sakre

Pipelining: Introduction and Pipeline Hazards

Pipelining Overview

 Pipelining divides instruction execution into stages (e.g., fetch, decode, execute),
allowing multiple instructions to be processed simultaneously at different stages.
 Goal: Improve throughput by overlapping instruction execution, similar to an
assembly line.
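The assembly-line analogy can be made concrete with a toy cycle count. The sketch below is an idealized model (no stalls, one instruction issued per cycle); the stage and instruction counts are arbitrary illustration values:

```python
def pipeline_cycles(n_instructions, n_stages):
    """Ideal pipelined execution: the first instruction takes
    n_stages cycles, then one instruction completes per cycle."""
    return n_stages + (n_instructions - 1)

def sequential_cycles(n_instructions, n_stages):
    """Non-pipelined execution: each instruction passes through
    all stages before the next one begins."""
    return n_instructions * n_stages

# A 5-stage pipeline running 100 instructions:
print(pipeline_cycles(100, 5))    # 104 cycles
print(sequential_cycles(100, 5))  # 500 cycles
```

For long instruction streams the speedup approaches the number of stages, which is why throughput (not single-instruction latency) is the metric pipelining improves.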

Pipeline Hazards

1. Structural Hazards

 Cause: Resource conflict when multiple instructions need the same hardware resource
(e.g., ALU or memory).
 Example: If instruction fetch and a load/store both need a single memory port in the
same cycle, a structural hazard occurs.
 Solution: Add more resources or stall the pipeline.

2. Data Hazards

 Cause: Dependencies between instructions.

1. RAW (Read After Write): An instruction reads a value before a previous
instruction has written it.
 Example:

I1: ADD R1, R2, R3 // R1 = R2 + R3

I2: SUB R4, R1, R5 // Needs R1, but I1 hasn’t written it yet.

2. WAR (Write After Read): An instruction writes a register before an earlier
instruction has read it.

3. WAW (Write After Write): Multiple instructions write to the same register,
and the writes could complete out of order.
 Solution: Use data forwarding (bypassing) or pipeline stalls.
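A RAW hazard like the one between I1 and I2 above can be detected by comparing register fields. This is a minimal sketch, assuming each instruction is represented as a (destination, source1, source2) tuple:

```python
def raw_hazard(producer, consumer):
    """producer/consumer are (dest, src1, src2) register tuples.
    A RAW hazard exists when the consumer reads a register that
    the producer has not yet written back."""
    dest, _, _ = producer
    _, s1, s2 = consumer
    return dest in (s1, s2)

# I1: ADD R1, R2, R3   /   I2: SUB R4, R1, R5
print(raw_hazard(("R1", "R2", "R3"), ("R4", "R1", "R5")))  # True
# Independent instructions cause no hazard:
print(raw_hazard(("R1", "R2", "R3"), ("R4", "R6", "R5")))  # False
```

Real forwarding hardware performs exactly this comparison between pipeline stages, routing the ALU result directly to the dependent instruction instead of stalling.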

3. Control Hazards

 Cause: Branch or jump instructions alter the program flow, so the pipeline may
fetch the wrong instructions before the branch outcome is known.

 Example:

I1: BEQ R1, R2, LABEL // Branch instruction

I2: ADD R3, R4, R5 // Fetched speculatively; wrong if the branch is taken

 Solution: Stall until the branch resolves, flush wrongly fetched instructions, or
use branch prediction.
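The cost of control hazards can be sketched by extending the ideal cycle count with a fixed stall penalty per taken branch. The penalty value below is an assumption for illustration (it depends on where in the pipeline branches resolve):

```python
def cycles_with_branches(n_instr, n_stages, n_taken_branches, branch_penalty):
    """Ideal pipeline cycles plus a fixed stall penalty charged
    for each taken branch (assumed resolved late in the pipeline)."""
    return n_stages + (n_instr - 1) + n_taken_branches * branch_penalty

# 100 instructions, 5 stages, 20 taken branches, 2-cycle penalty:
print(cycles_with_branches(100, 5, 20, 2))  # 144 cycles vs 104 ideal
```

This is why branch prediction matters: every correctly predicted branch avoids paying the penalty term.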


Flynn's Classification

Flynn's classification categorizes computer architectures based on the number of instruction
streams and data streams they can process simultaneously. It consists of four categories:

1. SISD (Single Instruction Single Data):


o Definition: A single processor executes one instruction at a time and operates
on one piece of data.
o Example: Traditional single-core processors.
o Use: Suitable for general-purpose tasks with a sequential program flow.
2. SIMD (Single Instruction Multiple Data):
o Definition: One instruction operates on multiple pieces of data simultaneously
(data parallelism).
o Example: Graphics Processing Units (GPUs), vector processors.
o Use: Common in scientific computing, image processing, and multimedia.
3. MISD (Multiple Instruction Single Data):
o Definition: Multiple instructions operate on a single data stream.
o Example: Rarely used in practice, but found in fault-tolerant systems.
o Use: Used in redundant systems where different instructions process the same
data for reliability.
4. MIMD (Multiple Instruction Multiple Data):
o Definition: Multiple processors execute different instructions on different
pieces of data simultaneously.
o Example: Multi-core processors, distributed systems.
o Use: Common in parallel computing for tasks like simulations and large-scale
data processing.
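The SISD/SIMD distinction can be modelled in a few lines. This is only a sketch of the execution model, not real vector hardware: the `width` parameter stands in for a hypothetical SIMD lane count:

```python
def sisd_add(a, b):
    """SISD style: one instruction operates on one data item per step."""
    out = []
    for x, y in zip(a, b):
        out.append(x + y)   # one scalar addition per 'instruction'
    return out

def simd_add(a, b, width=4):
    """SIMD style (modelled): one 'instruction' processes a whole
    group of `width` elements at once."""
    out = []
    for i in range(0, len(a), width):
        # one vector addition per lane-group
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
    return out

a, b = [1, 2, 3, 4, 5, 6, 7, 8], [10, 20, 30, 40, 50, 60, 70, 80]
print(simd_add(a, b))  # [11, 22, 33, 44, 55, 66, 77, 88]
```

Both produce the same result; the difference is that the SIMD model issues one instruction per group of four elements instead of one per element, which is where the data-parallel speedup comes from.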

Instruction Set Architecture (ISA)

ISA defines the set of instructions a processor can understand and execute. It acts as the
interface between software and hardware. Key components of ISA include:

1. Instruction Format: The binary encoding of instructions that the processor can
decode and execute. It includes:
o Opcode: Specifies the operation (e.g., ADD, SUB).
o Operands: Specify the data (e.g., registers, immediate values).
2. Addressing Modes: Determines how to access operands in memory or registers.
Common modes include:
o Immediate Addressing: Operand is part of the instruction.
o Register Addressing: Operand is in a register.
o Direct/Indirect Addressing: Operand is in memory, either directly or
referenced through a pointer.
3. Registers: Small, fast storage locations in the CPU used to hold data during
execution.
o General-purpose: Used for a wide variety of tasks.
o Special-purpose: Used for specific functions (e.g., program counter, status
registers).
4. Control Unit: The hardware that decodes the instructions defined by the ISA and
executes them by issuing control signals to the rest of the CPU.
5. Types of ISAs:
o RISC (Reduced Instruction Set Computing): A simplified set of
instructions, allowing for faster execution of simple operations (e.g., ARM,
MIPS).
o CISC (Complex Instruction Set Computing): A larger set of instructions,
often more complex, but capable of performing more tasks per instruction
(e.g., x86).
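The instruction-format idea (opcode plus operand fields) can be illustrated with a decoder for a hypothetical 16-bit encoding. The 4-bit field layout and opcode values below are invented for the example, not taken from any real ISA:

```python
# Hypothetical format: 4-bit opcode, then three 4-bit register fields.
OPCODES = {0b0001: "ADD", 0b0010: "SUB"}

def decode(word):
    """Split a 16-bit instruction word into its opcode mnemonic
    and destination/source register numbers."""
    op  = (word >> 12) & 0xF   # bits 15..12: operation
    rd  = (word >> 8) & 0xF    # bits 11..8:  destination register
    rs1 = (word >> 4) & 0xF    # bits 7..4:   first source register
    rs2 = word & 0xF           # bits 3..0:   second source register
    return OPCODES[op], rd, rs1, rs2

# ADD R1, R2, R3 encoded as 0x1123:
print(decode(0x1123))  # ('ADD', 1, 2, 3)
```

Fixed-width, fixed-field formats like this are characteristic of RISC ISAs and are part of what makes their decode stage simple; CISC encodings are variable-length and considerably harder to decode.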

Memory Organization

Memory organization refers to how memory is structured and managed in a system. Key
aspects include:

1. Memory Hierarchy: Organizes memory based on speed and size. Common levels
include:
o Registers: Fastest and smallest memory located in the CPU.
o Cache: Small, fast memory that stores frequently used data to reduce access
time.
 L1 Cache: Located closest to the CPU cores.
 L2 Cache: A larger, slower cache that serves as a backup to L1.
o Main Memory (RAM): Volatile memory used for storing data and
instructions during runtime.
o Secondary Storage: Non-volatile storage like hard drives and SSDs for long-
term data storage.
2. Memory Addressing: The method used to access data in memory.
o Linear Addressing: A simple, flat address space where memory locations are
accessed linearly.
o Segmented Addressing: Memory is divided into segments (e.g., code, data).
o Paged Addressing: Memory is divided into fixed-size pages, often used in
virtual memory systems.
3. Virtual Memory: A technique that allows programs to use more memory than is
physically available by swapping data between RAM and disk storage.
o Page Table: Maps virtual addresses to physical addresses.
4. Memory Management: The process of allocating and deallocating memory to
programs. Common methods include:
o Static Memory Allocation: Memory is allocated at compile time.
o Dynamic Memory Allocation: Memory is allocated at runtime (e.g., heap
memory).
o Memory Protection: Ensures programs cannot access each other’s memory to
prevent errors or security breaches.
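The page-table idea from the virtual memory section can be sketched as a simple address translation. The page size is a common real-world value, but the table contents are invented for the example:

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common choice

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Split the virtual address into (page number, offset), look up
    the physical frame, and rebuild the physical address."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[vpn]   # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # 0x2234: virtual page 1 maps to frame 2
```

The offset passes through unchanged; only the page number is remapped, which is what lets the OS place any virtual page in any physical frame (or swap it out to disk).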
