2.1 Uniform Memory Access (UMA) - "Equal Access for All"
👉 Imagine a library where everyone takes the same time to reach the bookshelves. 📚
Definition: All processors take equal time to access memory.
How it Works: Every processor connects to a single shared memory and accesses it at the same
speed.
Best For: Small systems with fewer processors.
Example: Your desktop computer or a basic server.
Think of it like: A single big fridge in the kitchen where everyone gets food at the same speed.
🏠
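To make "equal access for all" concrete, here is a minimal C sketch using POSIX threads (the array size and thread count are arbitrary choices for illustration). All threads read the same shared array; on a UMA machine, each of them reaches that memory with roughly the same latency.

```c
#include <pthread.h>
#include <stdio.h>

#define N_THREADS 4
#define N_ITEMS   1000000

/* One array in one shared memory; on a UMA machine every
 * thread reaches it at (roughly) the same speed. */
static long shared[N_ITEMS];
static long partial[N_THREADS];

static void *sum_chunk(void *arg) {
    long id = (long)arg;
    long lo = id * (N_ITEMS / N_THREADS);
    long hi = lo + (N_ITEMS / N_THREADS);
    long s = 0;
    for (long i = lo; i < hi; i++)
        s += shared[i];          /* plain loads from the shared array */
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[N_THREADS];

    for (long i = 0; i < N_ITEMS; i++)
        shared[i] = 1;

    for (long i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, sum_chunk, (void *)i);

    long total = 0;
    for (long i = 0; i < N_THREADS; i++) {
        pthread_join(tid[i], NULL);
        total += partial[i];
    }
    printf("total = %ld\n", total);   /* prints 1000000 */
    return 0;
}
```

Compile with `gcc uma_sum.c -pthread`. Notice that no data is ever copied between threads; they all read the same memory.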
2.2 Non-Uniform Memory Access (NUMA) - "Faster for Nearby, Slower for Far"
👉 Imagine a large supermarket 🛒—if you’re closer to an aisle, you get items faster than
someone far away.
Definition: Some processors access memory faster than others, depending on location.
How it Works: Memory is divided into multiple sections (nodes), and each processor has a local
section that it can reach faster than the remote ones.
Best For: Large-scale, high-performance computing (HPC) systems.
Example: Supercomputers, data centers.
Think of it like: A house with multiple fridges—people grab food faster from the fridge closest
to them! 🏠🏠
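On Linux, you can see NUMA placement directly through libnuma. The sketch below (assuming a system with libnuma installed; link with `-lnuma`, and the 64 MiB size is an arbitrary choice) pins the calling thread to node 0 and allocates memory whose pages live on that same node, so every access is a fast, local one.

```c
#include <numa.h>      /* Linux libnuma; link with -lnuma */
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    printf("NUMA nodes: %d\n", numa_max_node() + 1);

    size_t size = 64 * 1024 * 1024;   /* 64 MiB */

    /* Allocate memory whose pages live on node 0, then pin the
     * calling thread to node 0 so its accesses stay local (fast).
     * Accessing this buffer from a processor on another node would
     * still work -- just more slowly. */
    char *buf = numa_alloc_onnode(size, 0);
    if (!buf) { perror("numa_alloc_onnode"); return 1; }
    numa_run_on_node(0);

    for (size_t i = 0; i < size; i++)
        buf[i] = 1;                   /* local (near) accesses */

    numa_free(buf, size);
    return 0;
}
```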
2.3 Cache-Only Memory Architecture (COMA) - "All Memory is Cache"
👉 Imagine a fast-food restaurant where every table already has some food, so you don’t need to
go to the kitchen. 🍔
Definition: All memory behaves like cache attached to the processors; there is no fixed central main memory.
How it Works: Data moves dynamically, staying close to where it’s needed.
Best For: Systems with thousands of processors (massive parallel computing).
Example: Supercomputers used in scientific simulations.
Think of it like: A restaurant where waiters keep food already on the table instead of bringing
it from the kitchen every time! 🍽️
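COMA is a hardware design, so application code can't demonstrate it directly, but a toy simulation can illustrate the core idea of "attraction memory": a data block has no fixed home and simply migrates to whichever node touched it last. Everything below (the Block struct, the migration counter) is invented purely for illustration.

```c
#include <stdio.h>

#define N_BLOCKS 8

/* Toy model of COMA "attraction memory": a block has no fixed
 * home; it lives wherever it was last used. */
typedef struct {
    int owner;   /* node whose cache currently holds the block */
    int value;
} Block;

static Block mem[N_BLOCKS];
static int migrations = 0;

/* A node reads a block; if the block lives elsewhere, it migrates
 * into this node's cache (instead of being fetched from a central
 * memory, which doesn't exist in COMA). */
static int access_block(int node, int b) {
    if (mem[b].owner != node) {
        mem[b].owner = node;   /* block is attracted to its user */
        migrations++;
    }
    return mem[b].value;
}

int main(void) {
    for (int b = 0; b < N_BLOCKS; b++)
        mem[b] = (Block){ .owner = 0, .value = b * 10 };

    access_block(1, 3);   /* migrates block 3: node 0 -> node 1 */
    access_block(1, 3);   /* already local: no migration        */
    access_block(2, 3);   /* migrates block 3: node 1 -> node 2 */

    printf("migrations = %d\n", migrations);   /* prints 2 */
    return 0;
}
```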
Key Takeaways 📝
1. UMA – Same memory speed for all (like a single fridge for the whole family).
2. NUMA – Faster for nearby processors (like multiple fridges in different rooms).
3. COMA – No fixed memory; data moves around (like food already at your table).
Advantages of Shared Memory Architecture
Easier to Code – Use global variables instead of complex messaging (see the sketch after this list).
Fast Data Sharing – No need to send messages; just access memory directly!
High Performance (for small systems) – Works great when there are fewer processors and
fast memory access is needed.
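Here is how "just use a global variable" looks in practice: a minimal C sketch (the value 42 and the names are arbitrary) where one thread publishes a result through an ordinary global variable, and a C11 atomic flag tells the other thread when it is safe to read. No message is ever sent; the data never moves.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int result;              /* shared through global memory  */
static atomic_int ready;        /* signals "result is valid"     */

static void *producer(void *arg) {
    (void)arg;
    result = 42;                /* write the data...             */
    atomic_store_explicit(&ready, 1, memory_order_release); /* ...then publish */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);

    /* Consumer: wait until the producer publishes, then read the
     * value directly out of shared memory -- no message sent. */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;   /* busy-wait; fine for a toy example */

    printf("result = %d\n", result);   /* prints 42 */
    pthread_join(t, NULL);
    return 0;
}
```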
Challenges in Shared Memory Architecture
Key Takeaways 📝
1. Scalability Issues – Too many processors slow down memory access (like traffic on a
busy road). 🚗
2. Synchronization Overhead – Preventing data conflicts adds complexity, like taking
turns writing on a shared board (see the sketch after this list). 📝
3. Hardware Complexity – Needs advanced tech to manage memory properly (like a
smart toll system). 🚦
4. Non-Uniform Performance – Some processors work faster than others due to memory
placement (like students sitting close to a teacher). 🎓
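Challenge 2 in code: the sketch below (thread and iteration counts are arbitrary) uses a pthread mutex so that four threads can increment one shared counter without losing updates. The lock makes the result correct, but it also forces the increments to take turns, which is exactly the synchronization overhead the list describes.

```c
#include <pthread.h>
#include <stdio.h>

#define N_THREADS 4
#define N_INCR    100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < N_INCR; i++) {
        pthread_mutex_lock(&lock);   /* take turns on the shared board */
        counter++;                   /* without the lock, the threads
                                        race and increments get lost   */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tid[N_THREADS];
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, bump, NULL);
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);

    /* Correct (400000) thanks to the mutex -- but every increment
     * was serialized, which is the synchronization overhead. */
    printf("counter = %ld\n", counter);
    return 0;
}
```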
Shared Memory vs. Distributed Memory Architecture
| Feature 🏷️ | Shared Memory 🖥️ | Distributed Memory 🌐 |
| --- | --- | --- |
| Memory Access 🧠 | All processors share the same memory. | Each processor has its own separate memory. |
| Communication 📢 | Data is shared through global memory. | Processors communicate via message passing. |
| Speed 🚀 | Faster for small systems (low latency). | Can be faster for large systems (scales better). |
| Scalability 📈 | Limited (more processors = slower performance). | Highly scalable (more processors can be added easily). |
| Programming Complexity 🖥️ | Easier (data sharing is automatic). | Harder (requires sending messages). |
| Failure Handling ⚠️ | If memory fails, the whole system may fail. | More fault-tolerant (if one node fails, others keep running). |
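For contrast with the shared-memory sketches above, the distributed-memory column in the table typically means something like MPI: each process owns its own memory, and the only way to share data is an explicit message. A minimal sketch (assuming an MPI installation; compile with `mpicc` and run with `mpirun -np 2`):

```c
#include <mpi.h>
#include <stdio.h>

/* Distributed memory: each process has its own `value`; the only
 * way to share it is an explicit send/receive pair. */
int main(int argc, char **argv) {
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to rank 1 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* from rank 0 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Compare this with the global-variable sketch earlier: the same data transfer that was a single memory read now requires a matched send and receive, which is the "harder programming" the table points to.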