Mini COA
Hierarchy of Memory:
Magnetic Disks:
Magnetic disks, such as hard disk drives (HDDs), store and
retrieve data using a system of rotating platters coated with a
magnetizable material. These disks are extensively used for
high-capacity, reliable storage in computers and data centers.
Working Principle of Magnetic Disks:
1. Structure & Components:
Optical Disks:
Optical disks, such as CDs, DVDs, and Blu-ray discs, store
data using laser technology. These disks are popular for
multimedia storage, software distribution, and archival
purposes.
Working Principle of Optical Disks:
1. Structure & Components:
o Made of polycarbonate material with a reflective
aluminum layer.
o Data is stored as microscopic pits (indentations) and
lands (flat surfaces) on the disk surface.
o A laser beam is used for reading and writing information.
2. Data Encoding & Retrieval:
o When reading, a low-intensity laser is directed at the
spinning disk.
o Pits do not reflect light well, while lands do, forming a
pattern representing binary data (0s and 1s).
o The photodetector interprets these reflections,
converting them into digital data.
o Writing data involves altering the disk surface with a
high-intensity laser; on write-once media this permanently
encodes the information.
3. Access Mechanism & Speed:
o Optical disks read data sequentially along a single spiral
track that begins near the center of the disk and winds
outward.
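The pit/land readout described above can be sketched as a simple thresholding step. This is a deliberately simplified model following the text (land = strong reflection = 1, pit = weak reflection = 0); real discs use run-length and transition-based encoding (EFM), and the sensor values here are hypothetical.

```python
# Simplified model of optical-disk readout: the photodetector
# reports a reflectivity per position, and a threshold maps strong
# reflections (lands) to 1 and weak ones (pits) to 0.

def decode_reflections(reflectivity, threshold=0.5):
    """Map a sequence of photodetector readings to bits."""
    return [1 if r >= threshold else 0 for r in reflectivity]

readings = [0.9, 0.1, 0.8, 0.2, 0.95]  # hypothetical sensor values
print(decode_reflections(readings))    # [1, 0, 1, 0, 1]
```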
Feature           | Magnetic Disks                                     | Optical Disks
Storage Mechanism | Magnetic encoding on platters                      | Laser-based pits and lands
Speed             | Faster (high RPM, random access)                   | Slower (sequential access)
Capacity          | Higher (terabytes)                                 | Limited (gigabytes)
Usage             | Internal storage for OS, applications, and backups | Multimedia distribution, archival storage
Both magnetic disks and optical disks play crucial roles in
modern data management, with HDDs being dominant in
everyday computing due to their large capacity and speed,
while optical disks remain useful for portability and archival
purposes.
Definition of RAID
RAID (Redundant Array of Independent Disks) combines multiple
physical disks into one logical unit to improve performance,
capacity, or fault tolerance.
RAID 0 (Striping)
Data is split into blocks that are distributed (striped) across two
or more disks, improving throughput but providing no redundancy:
the failure of any one disk loses the whole array.
RAID 1 (Mirroring)
Every block is written identically to two (or more) disks, so the
array survives a disk failure at the cost of halving the usable
capacity.
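The two layouts can be contrasted with a small sketch over in-memory "disks" (Python lists). The block labels, block size, and two-disk configuration are illustrative assumptions, not part of the notes.

```python
# Minimal sketch of RAID 0 (striping) and RAID 1 (mirroring).
# Each "disk" is just a list of blocks.

def raid0_write(data, disks):
    """Stripe blocks round-robin across the disks (no redundancy)."""
    for i, block in enumerate(data):
        disks[i % len(disks)].append(block)

def raid1_write(data, disks):
    """Mirror every block onto all disks (full redundancy)."""
    for block in data:
        for disk in disks:
            disk.append(block)

stripes = [[], []]
raid0_write(["B0", "B1", "B2", "B3"], stripes)
print(stripes)   # blocks alternate between the two disks

mirrors = [[], []]
raid1_write(["B0", "B1"], mirrors)
print(mirrors)   # both disks hold identical copies
```

Note that in the striped layout each disk holds only half of the blocks, which is why losing one disk destroys the array, while the mirrored layout keeps a full copy on every disk.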
1. Memory Subsystem
2. Bus Architecture
6. Storage Controllers
Significance of the ISA (Instruction Set Architecture):
1. Hardware-Software Compatibility:
o Ensures that software programs can run efficiently on a
processor.
o Standardizes interactions between applications and
CPU instructions.
2. Performance Optimization:
o Defines instruction execution efficiency.
o Impacts the speed, power consumption, and parallelism
of the processor.
3. Scalability & Portability:
o Allows different generations of processors to run the
same software.
o Supports cross-platform development by defining
common instructions.
4. Influence on Processor Design:
o Determines whether a CPU follows RISC (Reduced
Instruction Set Computing) or CISC (Complex
Instruction Set Computing) principles.
o Shapes microarchitecture design decisions, affecting
efficiency and complexity.
5. Impact on Application Development:
o Software developers optimize applications based on ISA
characteristics.
o High-performance computing (HPC), AI, and gaming
depend on specialized ISA instructions.
Feature                | General-Purpose Register Architecture (RISC) | Accumulator-based Architecture                | Stack-based Architecture
Register Usage         | Multiple general-purpose registers           | Single accumulator for computations           | Uses a stack structure
Instruction Type       | Load-Store (explicit memory access)          | Memory-intensive computations                 | Operates on stack (push/pop)
Execution Speed        | Faster (due to register operations)          | Slower (due to memory dependency)             | Moderate (due to stack handling)
Instruction Complexity | Simple & uniform                             | Complex, requires memory fetch for arithmetic | Simplifies execution but complicates indexing
Pipelining             | Highly efficient                             | Less effective                                | Limited efficiency due to stack dependence
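The accumulator and stack rows of the table can be made concrete with two toy interpreters, each computing C = A + B. The mnemonics (LOAD/ADD/STORE, PUSH/ADD/POP) follow common textbook style and are illustrative, not a real ISA.

```python
# Toy interpreters contrasting accumulator-based and stack-based
# execution of C = A + B.

def run_accumulator(program, memory):
    """All arithmetic flows through a single accumulator."""
    acc = 0
    for op, addr in program:
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]      # memory fetch on every ADD
        elif op == "STORE":
            memory[addr] = acc
    return memory

def run_stack(program, memory):
    """Operands live on a stack; ADD pops two and pushes the sum."""
    stack = []
    for op, addr in program:
        if op == "PUSH":
            stack.append(memory[addr])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "POP":
            memory[addr] = stack.pop()
    return memory

mem = {"A": 2, "B": 3, "C": 0}
run_accumulator([("LOAD", "A"), ("ADD", "B"), ("STORE", "C")], mem)
print(mem["C"])  # 5

mem2 = {"A": 2, "B": 3, "C": 0}
run_stack([("PUSH", "A"), ("PUSH", "B"), ("ADD", None), ("POP", "C")], mem2)
print(mem2["C"])  # 5
```

Both reach the same result, but the accumulator version touches memory on every arithmetic instruction, while the stack version manipulates implicit operands, which is exactly the trade-off the table summarizes.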
Accumulator Architecture
Example:
Stack Architecture
Definition:
Functionality:
Example:
Advantages:
Limitations:
Characteristics:
Example:
Example:
Computing C = A + B using two-address instructions:
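In a two-address format the first operand is both a source and the destination, so the sum is built up in C itself. A minimal sketch, assuming MOV/ADD mnemonics in the usual textbook style:

```python
# C = A + B with two-address instructions: the destination register
# doubles as a source, so only two operands appear per instruction.

def run_two_address(program, memory):
    for op, dst, src in program:
        if op == "MOV":
            memory[dst] = memory[src]
        elif op == "ADD":
            memory[dst] = memory[dst] + memory[src]
    return memory

mem = {"A": 2, "B": 3, "C": 0}
run_two_address([("MOV", "C", "A"),   # C = A
                 ("ADD", "C", "B")],  # C = C + B
                mem)
print(mem["C"])  # 5
```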
Efficiency Comparison
Use Cases
Definition:
Key Advancements:
2. Execution Speed
4. Instruction Length
5. Hardware Complexity
6. Use Cases
Summary Table:
Definition:
Types of Flip-Flops:
Application Areas:
Definition:
Working Principle:
Truth Table:
Circuit Diagram:
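The behavior a flip-flop truth table captures can also be sketched in code. Since the section does not state which flip-flop type it covers, the simplest case, a positive-edge-triggered D flip-flop, is chosen here as an assumption.

```python
# Behavioral sketch of a positive-edge-triggered D flip-flop:
# the output Q takes the value of input D only on a rising clock
# edge, and holds its stored state at all other times.

class DFlipFlop:
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, d, clk):
        """Latch D on a rising clock edge; otherwise hold state."""
        if clk == 1 and self._prev_clk == 0:
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
print(ff.tick(d=1, clk=0))  # 0: no edge yet, holds reset value
print(ff.tick(d=1, clk=1))  # 1: rising edge latches D
print(ff.tick(d=0, clk=1))  # 1: clock still high, no new edge
```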
Sequential circuits are logic circuits where the output depends on both
the current input and past states. These circuits utilize memory
elements, such as flip-flops, to store previous states, enabling time-
dependent operations.
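The definition above can be illustrated with a minimal sequential circuit, a 2-bit synchronous counter, where each output depends on the stored state rather than on any current input. A plain attribute stands in for the flip-flops that would hold the state in hardware.

```python
# A 2-bit counter as a minimal sequential circuit: the next output
# is a function of the stored previous state.

class TwoBitCounter:
    def __init__(self):
        self.state = 0  # held in flip-flops in a real circuit

    def clock(self):
        self.state = (self.state + 1) % 4  # two bits wrap after 3
        return self.state

counter = TwoBitCounter()
print([counter.clock() for _ in range(5)])  # [1, 2, 3, 0, 1]
```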
Feature               | Sequential Circuits                                  | Combinational Circuits
Memory Dependency     | Yes, stores previous states                          | No, depends only on current input
Clock Signal Required | Often required                                       | Not needed
Output Influence      | Current input + past state                           | Only current input
Common Components     | Flip-flops, registers, counters                      | Logic gates (AND, OR, NOT)
Use Cases             | CPUs, finite state machines, communication protocols | Arithmetic operations, multiplexers
2. Data-Path Implementation
3. Control Implementation
Definition:
Definition:
Advantages of Pipelining:
Limitations of Pipelining:
Introduction
Without pipelining, each instruction completes all five stages before the
next begins, leading to longer execution time.
Challenges of Pipelining
Introduction
Mitigation Techniques
Introduction
ADD R1, R2, R3 ; Writes to R1
SUB R4, R1, R5 ; Reads R1
Here, the SUB instruction depends on the updated value of R1; if
the pipeline lets both instructions proceed concurrently, SUB may
read a stale or incorrect value.
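The effect of such a premature read can be shown with arithmetic on illustrative register values (the specific numbers are assumptions):

```python
# RAW hazard sketch: SUB needs ADD's result in R1, but a pipeline
# without forwarding may let SUB read R1 before ADD writes it back.

regs = {"R1": 0, "R2": 4, "R3": 6, "R5": 1}

old_r1 = regs["R1"]                    # value SUB would read too early
regs["R1"] = regs["R2"] + regs["R3"]   # ADD R1, R2, R3  -> R1 = 10
correct = regs["R1"] - regs["R5"]      # SUB R4, R1, R5  -> 9
hazardous = old_r1 - regs["R5"]        # stale read      -> -1

print(correct, hazardous)  # 9 -1
```

Forwarding (bypassing) or stalling the pipeline until the write-back completes removes this discrepancy.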
4. Write After Read (WAR) - Anti Dependency:
This hazard occurs when an instruction writes to a register before a
preceding instruction has completed reading from it. Example:
SUB R4, R1, R5 ; Reads R1
ADD R1, R2, R3 ; Writes to R1
If ADD writes to R1 before SUB has finished reading it, SUB
computes with the wrong value.
Mitigation Techniques
Introduction
Branch hazards stem from instructions that alter the program's control
flow. Some common causes include:
1. Conditional Branches:
Instructions like IF-ELSE statements depend on runtime
conditions, making their outcome uncertain until evaluation.
2. Unconditional Jumps:
Direct jumps in code disrupt sequential instruction execution,
forcing the processor to adjust its pipeline.
3. Loop Conditions:
Iterative structures cause frequent branching, requiring rapid
decision-making to avoid unnecessary stalls.
When a branch instruction is encountered, the processor must determine
whether to proceed sequentially or jump to a new address. Delays in
making this decision result in pipeline stalls.
1. Branch Prediction:
o The processor predicts whether a branch will be taken or not.
o If the prediction is correct, execution continues smoothly; if
incorrect, instructions must be flushed and restarted.
o Modern CPUs use dynamic branch prediction, employing
history-based predictors like two-bit saturating counters and
branch history tables.
2. Delayed Branching:
o Introduces delay slots, allowing independent instructions to
execute while the branch resolves.
o The compiler rearranges instructions to utilize these delay
slots efficiently, reducing pipeline stalls.
o This technique is common in RISC architectures.
3. Branch Target Buffer (BTB):
o A cache-like structure storing previous branch addresses and
predictions.
o If a branch is encountered, the BTB provides an early
prediction of the target address, expediting execution.
o Reduces the need for recomputing branch destinations.
4. Speculative Execution:
o The processor speculatively executes instructions beyond the
branch while waiting for the actual branch resolution.
o If speculation proves incorrect, results are discarded, requiring
rollback mechanisms.
o Used in out-of-order execution architectures to enhance
efficiency.
5. Static Branch Prediction:
o Instead of learning from execution history, fixed strategies
predict branch behavior.
o Examples include always assuming forward branches are
not taken or loop-ending branches will be taken.
o Simple but less effective than dynamic prediction.
6. Loop Unrolling:
o Reduces the number of branching operations in loops by
expanding loop iterations.
o Decreases the frequency of branch hazards, benefiting
execution performance.
o Compilers optimize loops using unrolling techniques to
minimize branching stalls.
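The two-bit saturating counter mentioned under dynamic branch prediction can be sketched directly: counter values 0-1 predict "not taken", values 2-3 predict "taken", and each actual outcome nudges the counter one step, so a single mispredict does not flip a well-established prediction. The initial state and the example outcome sequence are assumptions.

```python
# Two-bit saturating counter branch predictor.

class TwoBitPredictor:
    def __init__(self):
        self.counter = 2  # start in "weakly taken" (a common choice)

    def predict(self):
        return self.counter >= 2  # True means "predict taken"

    def update(self, taken):
        """Saturate the counter toward the actual outcome."""
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
outcomes = [True, True, False, True]   # e.g. a loop branch
hits = 0
for taken in outcomes:
    hits += (p.predict() == taken)
    p.update(taken)
print(hits, "of", len(outcomes), "predicted correctly")
```

Note how the single not-taken outcome (the loop exit) is mispredicted, but the counter only drops to "weakly taken", so the predictor is immediately correct again when the loop branch is next taken.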