ACA CIE-1 Notes
Pipelining Overview
Pipelining divides instruction execution into stages (e.g., fetch, decode, execute),
allowing multiple instructions to be processed simultaneously at different stages.
Goal: Improve throughput by overlapping instruction execution, similar to an
assembly line.
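The throughput gain from overlapping stages can be seen with a simple cycle count. This is a minimal sketch assuming an ideal k-stage pipeline with one stage per cycle and no hazards; the function names are illustrative:

```python
# Cycle counts for n instructions, ideal k-stage pipeline, no hazards.
def unpipelined_cycles(n, k):
    # Without pipelining, each instruction occupies all k stages
    # before the next one may start.
    return n * k

def pipelined_cycles(n, k):
    # The first instruction takes k cycles to fill the pipeline;
    # every later instruction completes one cycle after the previous.
    return k + (n - 1)

# 10 instructions on a 5-stage pipeline:
print(unpipelined_cycles(10, 5))  # 50 cycles
print(pipelined_cycles(10, 5))    # 14 cycles
```

As n grows, the pipelined count approaches one instruction per cycle, which is the assembly-line effect described above.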
Pipeline Hazards
1. Structural Hazards
Cause: Resource conflict when multiple instructions need the same hardware resource
(e.g., ALU or memory).
Example: With a single memory port, one instruction fetching from memory while
another performs a load or store in the same cycle causes a structural hazard.
Solution: Add more resources or stall the pipeline.
2. Data Hazards
Cause: An instruction depends on the result of an earlier instruction that has not yet
completed (e.g., a read-after-write dependency).
Example: SUB R4, R1, R5 reads R1 immediately after ADD R1, R2, R3 writes it;
without forwarding, SUB would read a stale value.
Solution: Forwarding (bypassing), pipeline stalls, or compiler instruction scheduling.
3. Control Hazards
Cause: Branches and jumps change the program counter, so the next instruction to
fetch is not known until the branch resolves.
Example: Instructions fetched after a taken branch must be squashed.
Solution: Branch prediction, delayed branches, or stalling until the branch resolves.
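The read-after-write (RAW) case under data hazards can be sketched as a simple check between adjacent instructions. The tuple encoding, register names, and the no-forwarding assumption here are all illustrative, not a real ISA:

```python
# Detect a RAW data hazard between two adjacent instructions,
# assuming no forwarding. Instructions are (opcode, dest, src1, src2).
def raw_hazard(producer, consumer):
    _, dest, *_ = producer          # register written by the first instruction
    _, _, src1, src2 = consumer     # registers read by the second
    return dest in (src1, src2)

add_i = ("ADD", "R1", "R2", "R3")   # writes R1
sub_i = ("SUB", "R4", "R1", "R5")   # reads R1 -> RAW hazard
mul_i = ("MUL", "R6", "R7", "R8")   # independent instruction

print(raw_hazard(add_i, sub_i))  # True  -> stall or forward
print(raw_hazard(add_i, mul_i))  # False -> no action needed
```

A real pipeline would compare across more than one in-flight instruction, but the dependency test is the same.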
Instruction Set Architecture (ISA)
The ISA defines the set of instructions a processor can understand and execute. It acts
as the interface between software and hardware. Key components of an ISA include:
1. Instruction Format: The binary encoding of instructions that the processor can
decode and execute. It includes:
o Opcode: Specifies the operation (e.g., ADD, SUB).
o Operands: Specify the data (e.g., registers, immediate values).
2. Addressing Modes: Determines how to access operands in memory or registers.
Common modes include:
o Immediate Addressing: Operand is part of the instruction.
o Register Addressing: Operand is in a register.
o Direct/Indirect Addressing: Operand is in memory, either directly or
referenced through a pointer.
3. Registers: Small, fast storage locations in the CPU used to hold data during
execution.
o General-purpose: Used for a wide variety of tasks.
o Special-purpose: Used for specific functions (e.g., program counter, status
registers).
4. Control Unit: The hardware that decodes instructions and executes them by issuing
control signals to the rest of the CPU (strictly an implementation component rather
than part of the ISA itself, but shaped directly by it).
5. Types of ISAs:
o RISC (Reduced Instruction Set Computing): A simplified set of
instructions, allowing for faster execution of simple operations (e.g., ARM,
MIPS).
o CISC (Complex Instruction Set Computing): A larger set of instructions,
often more complex, but capable of performing more tasks per instruction
(e.g., x86).
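The instruction-format ideas above (opcode plus operand fields packed into a binary word) can be sketched with a toy encoder/decoder. The 16-bit layout here, a 4-bit opcode followed by three 4-bit register fields, is invented for illustration and does not match any real ISA:

```python
# Toy 16-bit instruction format: [opcode:4][rd:4][rs1:4][rs2:4]
OPCODES = {"ADD": 0x1, "SUB": 0x2}

def encode(op, rd, rs1, rs2):
    # Pack the opcode and register numbers into one machine word.
    return (OPCODES[op] << 12) | (rd << 8) | (rs1 << 4) | rs2

def decode(word):
    # Recover the fields by shifting and masking.
    op = {v: k for k, v in OPCODES.items()}[word >> 12]
    return op, (word >> 8) & 0xF, (word >> 4) & 0xF, word & 0xF

w = encode("ADD", 1, 2, 3)   # ADD R1, R2, R3
print(hex(w))                # 0x1123
print(decode(w))             # ('ADD', 1, 2, 3)
```

Real formats (e.g., MIPS R-type or x86's variable-length encoding) differ in field widths and layout, but decoding is the same shift-and-mask process.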
Memory Organization
Memory organization refers to how memory is structured and managed in a system. Key
aspects include:
1. Memory Hierarchy: Organizes memory based on speed and size. Common levels
include:
o Registers: Fastest and smallest memory located in the CPU.
o Cache: Small, fast memory that stores frequently used data to reduce access
time.
L1 Cache: Located closest to the CPU cores.
L2 Cache: A larger, slower cache that serves as a backup to L1.
o Main Memory (RAM): Volatile memory used for storing data and
instructions during runtime.
o Secondary Storage: Non-volatile storage like hard drives and SSDs for long-
term data storage.
2. Memory Addressing: The method used to access data in memory.
o Linear Addressing: A simple, flat address space in which every memory
location is identified by a single sequential address.
o Segmented Addressing: Memory is divided into segments (e.g., code, data).
o Paged Addressing: Memory is divided into fixed-size pages, often used in
virtual memory systems.
3. Virtual Memory: A technique that allows programs to use more memory than is
physically available by swapping data between RAM and disk storage.
o Page Table: Maps virtual addresses to physical addresses.
4. Memory Management: The process of allocating and deallocating memory to
programs. Common methods include:
o Static Memory Allocation: Memory is allocated at compile time.
o Dynamic Memory Allocation: Memory is allocated at runtime (e.g., heap
memory).
o Memory Protection: Ensures programs cannot access each other’s memory to
prevent errors or security breaches.
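The paged-addressing and page-table ideas above can be sketched as a virtual-to-physical translation. The 4 KiB page size and the mapping contents are illustrative assumptions; a real system uses multi-level tables and hardware (the MMU/TLB) for this lookup:

```python
# Single-level page table: virtual page number (VPN) -> physical frame number.
PAGE_SIZE = 4096  # 4 KiB pages -> low 12 bits of an address are the offset

page_table = {0: 7, 1: 3, 2: 9}  # illustrative mappings

def translate(vaddr):
    # Split the virtual address into page number and offset within the page.
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        # An unmapped page triggers a page fault; the OS would then
        # load the page from disk or signal an error.
        raise KeyError("page fault: VPN %d not mapped" % vpn)
    # The offset is unchanged; only the page number is remapped.
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))  # VPN 1, offset 4 -> frame 3 -> 12292
```

Note that only the page number is translated; the offset passes through, which is why pages must be a power-of-two size.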