COA Unit 4
Memory :-
Computer memory holds the data and instructions needed to process raw data and
produce output. Like the human mind, it stores data, information, and instructions.
It is a storage component that keeps the data to be processed alongside the
instructions for processing it; both input and output can be held here.
1. Registers
Registers are tiny, super-fast memory units located inside the CPU (the brain of
the computer). They hold the data and instructions the CPU needs to process
right away. They are the fastest type of memory but only store small amounts
of data (usually 16 to 64 bits).
2. Cache Memory
Cache memory is a small, quick-access memory located near the CPU. It
temporarily stores frequently used data and instructions that were recently
fetched from the main memory (RAM). Its purpose is to speed up data access
for the CPU, making programs run faster by providing quick access to
commonly used data.
• Magnetic Disk
Magnetic disks (like hard drives) are circular plates made from metal,
plastic, or magnetized materials. These disks store data and work at high
speeds but are slower than RAM and cache. They are commonly used for
long-term storage of data.
• Magnetic Tape
Magnetic tape is a device used for storing large amounts of data, often
for backups. It’s a plastic strip coated with magnetic material. However,
accessing data from magnetic tape is slower compared to other storage
types, so it's typically used for archiving or backup purposes.
System-Supported Memory Standards :-
According to the memory hierarchy, the system-supported memory standards are
defined below:
Level            | 1                | 2               | 3                       | 4
Name             | Register         | Cache           | Main Memory             | Secondary Memory
Size             | < 1 KB           | < 16 MB         | < 16 GB                 | > 100 GB
Implementation   | Multi-ports      | On-chip/SRAM    | DRAM (capacitor memory) | Magnetic
Access Time      | 0.25 ns to 0.5 ns| 0.5 ns to 25 ns | 80 ns to 250 ns         | about 5,000,000 ns (5 ms)
Bandwidth (MB/s) | 20,000 to 100,000| 5,000 to 15,000 | 1,000 to 5,000          | 20 to 150
Managed by       | Compiler         | Hardware        | Operating System        | Operating System
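The speed gap between levels is why the hierarchy works: most accesses are served by a fast level. The following is a minimal sketch of the standard effective-access-time formula; the timings and hit rate are illustrative example values, not figures from the table above.

```python
# Illustrative sketch: effective (average) access time across two memory
# levels, using the standard formula
#   EAT = hit_rate * t_fast + (1 - hit_rate) * t_slow
# The numbers below are example values, not measurements.

def effective_access_time(hit_rate, t_fast_ns, t_slow_ns):
    """Average time per access when a fraction hit_rate is served fast."""
    return hit_rate * t_fast_ns + (1 - hit_rate) * t_slow_ns

# A cache at ~10 ns backed by main memory at ~100 ns, with a 95% hit rate:
print(effective_access_time(0.95, 10, 100))  # 14.5 (ns)
```

Even a modest hit rate keeps the average close to the fast level's speed, which is the whole point of caching.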
RAM (Random Access Memory) :-
In short, RAM is where your computer keeps everything it is working on right now,
and once the device is turned off, everything stored in RAM is lost.
A static RAM (SRAM) is composed mainly of flip-flops that store the binary
information. The stored information is volatile: it remains valid only as long as
power is applied to the system. SRAM is easy to use and performs read and write
operations faster than dynamic RAM.
A dynamic RAM (DRAM) stores binary information as electric charges on capacitors,
which are integrated inside the chip using MOS transistors. DRAM consumes less
power and provides a larger storage capacity in a single memory chip.
RAM chips are available in a variety of sizes and are used as per the system requirement.
The following block diagram demonstrates the chip interconnection in a 128 * 8 RAM chip.
• A 128 * 8 RAM chip has a memory capacity of 128 words of eight bits (one
byte) per word. This requires a 7-bit address and an 8-bit bidirectional data bus.
• The 8-bit bidirectional data bus allows the transfer of data either from
memory to CPU during a read operation or from CPU to memory during
a write operation.
• The read and write inputs specify the memory operation, and the two chip
select (CS) control inputs enable the chip only when the microprocessor
selects it.
• The bidirectional data bus is constructed using three-state buffers.
• The output of a three-state buffer can be placed in one of three possible
states: a signal equivalent to logic 1, a signal equivalent to logic 0, or a
high-impedance state.
The following function table specifies the operations of a 128 * 8 RAM chip.
From the functional table, we can conclude that the unit is in operation only when CS1 = 1
and CS2 = 0. The bar on top of the second select variable indicates that this input is
enabled when it is equal to 0.
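The chip-select and read/write decoding described above can be sketched behaviorally in Python. The function names and return strings here are illustrative, not part of any real chip interface.

```python
# Behavioral sketch of the 128 x 8 RAM chip's control logic. The chip is
# active only when CS1 = 1 and CS2 = 0 (CS2 is active-low, shown with a
# bar in the function table).

def chip_enabled(cs1: int, cs2: int) -> bool:
    """True only when the microprocessor selects this chip."""
    return cs1 == 1 and cs2 == 0

def ram_operation(cs1: int, cs2: int, rd: int, wr: int) -> str:
    """Decode the memory operation from the control inputs."""
    if not chip_enabled(cs1, cs2):
        return "high-impedance"   # data bus disabled by three-state buffers
    if rd == 1:
        return "read"
    if wr == 1:
        return "write"
    return "no operation"

print(ram_operation(1, 0, 1, 0))  # read
print(ram_operation(0, 0, 1, 0))  # high-impedance
```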
A ROM memory is used for keeping programs and data that are permanently
resident in the computer.
ROM chips are also available in a variety of sizes and are also used as per the system
requirement. The following block diagram demonstrates the chip interconnection in a
512 * 8 ROM chip.
A ROM chip has a similar organization to a RAM chip. However, a ROM can
only perform read operations; the data bus can only operate in output mode.
• The 9-bit address lines of the ROM chip specify any one of the 512 bytes
stored in it.
• The values of chip select 1 and chip select 2 must be 1 and 0, respectively,
for the unit to operate; otherwise, the data bus remains in a high-impedance
state.
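The address widths quoted for both chips follow directly from their word counts: a chip with W words needs log2(W) address lines, since n bits can select one of 2**n words. A quick sketch to check this:

```python
# Sketch: number of address lines a memory chip needs, computed as log2
# of its word count (n address bits select one of 2**n words).
import math

def address_lines(words: int) -> int:
    return int(math.log2(words))

print(address_lines(512))  # 9 lines for the 512 x 8 ROM chip
print(address_lines(128))  # 7 lines for the 128 x 8 RAM chip
```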
2D and 2.5D Memory organization :-
2D Memory organization –
In 2D organization, memory is arranged as a matrix of rows and columns, where
each row contains a word. This organization uses a decoder: a combinational
circuit with n input lines and 2^n output lines. One output line selects the row
given by the address held in the MAR, and the word stored in that row is then
read or written through the data lines.
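The row decoder above can be modeled as a one-hot selector. This is a behavioral sketch only, not a gate-level design.

```python
# Behavioral sketch of the row decoder in a 2D memory array: n address
# bits (here taken from the MAR) activate exactly one of 2**n word lines.

def decode(address: int, n: int) -> list:
    """Return a one-hot list of 2**n row-select lines for an n-bit address."""
    lines = [0] * (2 ** n)
    lines[address] = 1
    return lines

# A 3-bit address selects one of 8 rows:
print(decode(2, 3))  # [0, 0, 1, 0, 0, 0, 0, 0]
```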
Cache memory is much faster than RAM, but it's also smaller and more expensive. It's
used to improve the speed of the CPU by storing the data and instructions that the
CPU uses most often. Even though it costs more than RAM, it's cheaper than the very
fastest storage, like CPU registers.
In simple terms, cache memory acts like a shortcut for the CPU, helping it avoid
delays caused by having to fetch data from the slower main memory.
2. Small Size: Cache memory is smaller in size compared to RAM, as it only stores
a subset of the most frequently accessed data.
3. Costly: Cache memory is more expensive than main memory due to its high-
speed technology.
4. Acts as a Buffer: It acts as an intermediary between the CPU and the main
memory, speeding up data access by storing commonly used information.
5. Volatile: Like main memory, cache memory is volatile, meaning it loses all
stored data when the computer is turned off.
6. Levels of Cache: Cache memory is often organized into multiple levels (L1, L2,
and sometimes L3), with L1 being the smallest and fastest, and L3 being larger
but slower.
8. Dynamic and Static: There are two main types of cache memory – dynamic
and static. Static cache is faster but more expensive and complex.
9. Transparent to the User: Cache memory operates automatically and is
managed by the system without user intervention.
Levels of Memory :-
1. Level 1 - Registers:
Registers are the fastest type of memory, located directly within the CPU. They
store data that the CPU needs to process immediately. Common types of
registers include the Accumulator, Program Counter, and Address Register.
Registers are extremely fast but very limited in size.
• Associative Mapping
• Set-Associative Mapping
1. Direct Mapping -
Direct Mapping in cache memory is a simple method used to map data from the main
memory to the cache. In direct mapping, each block of main memory is mapped to exactly
one cache line (a slot in the cache memory). This means that a specific location in the cache
is reserved for a specific location in the main memory.
1. Memory Division: The main memory is divided into blocks, and the cache is
divided into lines (slots).
2. Mapping Process: Each block of the main memory is mapped to a single line in
the cache using a mapping function. This function uses the address of the data
in main memory to determine where it should be placed in the cache.
3. Address Structure: Each memory address is divided into three parts:
• Tag: The part of the address that helps to identify which block of data in
the main memory is being referred to.
• Index: The part of the address that specifies which cache line a block of
data will be mapped to.
• Block Offset: The part of the address used to locate the specific data
within the cache block (or memory block).
4. Cache Access: When the CPU wants to access a particular piece of data, it
checks the cache by using the index part of the address. If the data is found in
the cache (a cache hit), it is used. If the data is not found (a cache miss), the
data is fetched from the main memory and placed in the corresponding cache
line.
This simple mapping helps in fast data retrieval but can lead to inefficiency if multiple
memory blocks constantly conflict and need to replace each other in the same cache
line.
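The tag/index/offset split described above can be sketched with integer arithmetic. The cache geometry below (16-byte blocks, 64 lines) is an assumption chosen for illustration only.

```python
# Sketch: splitting an address into tag / index / block offset for a
# direct-mapped cache. Geometry is assumed: 16-byte blocks, 64 lines.

BLOCK_SIZE = 16   # bytes per block -> 4 offset bits
NUM_LINES = 64    # cache lines     -> 6 index bits

def split_address(addr: int):
    offset = addr % BLOCK_SIZE                    # position within the block
    index = (addr // BLOCK_SIZE) % NUM_LINES      # which cache line
    tag = addr // (BLOCK_SIZE * NUM_LINES)        # identifies the memory block
    return tag, index, offset

print(split_address(0x1A2B))  # (6, 34, 11)
```

Two addresses with the same index but different tags conflict: they compete for the same cache line, which is exactly the inefficiency noted above.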
2. Associative Mapping :-
Associative Mapping (or Fully Associative Mapping) in cache memory is a more
flexible technique than direct mapping. In associative mapping, any block of
memory can be placed in any line of the cache, unlike direct mapping, where
each memory block is mapped to a specific cache line. The memory address is
divided into the following parts:
• Tag: The portion of the address that is used to identify the block of
memory being referred to.
• Block Offset: The part of the address that specifies the exact location
within the cache block (or memory block).
• No Index: Unlike direct mapping, there is no index in associative
mapping, as any cache line can hold any memory block.
3. Cache Access: When the CPU needs to access a specific memory address, it
checks all the cache lines to see if the block is stored in any of them. This is
done by comparing the Tag in the memory address with the tags of the data
stored in the cache lines.
• If the tag matches, it’s a cache hit, and the data is retrieved.
• If no tag matches, it’s a cache miss, and the data is fetched from main
memory and placed in one of the cache lines.
4. Replacement Policy: Since any memory block can be placed in any cache line,
when the cache is full and a new block of data needs to be loaded, a
replacement policy is used to decide which block to evict. Common
replacement policies include:
• Least Recently Used (LRU): The block that has been used the least
recently is replaced.
• First-In, First-Out (FIFO): The block that has been in the cache the
longest is replaced.
• Random: A random block is replaced.
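The lookup and LRU replacement described above can be sketched with an `OrderedDict` keyed by tag. The class name and the access trace are illustrative.

```python
# Sketch of a fully associative cache with LRU replacement. Every lookup
# compares the tag against all cached lines; on a miss with a full cache,
# the least recently used block is evicted.
from collections import OrderedDict

class FullyAssociativeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()   # tag -> cached block (data omitted)

    def access(self, tag):
        """Return 'hit' or 'miss'; on a miss, load the block, evicting LRU."""
        if tag in self.lines:
            self.lines.move_to_end(tag)      # mark as most recently used
            return "hit"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict least recently used
        self.lines[tag] = None
        return "miss"

cache = FullyAssociativeCache(capacity=2)
print([cache.access(t) for t in [5, 9, 5, 7, 9]])
# ['miss', 'miss', 'hit', 'miss', 'miss']
```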
3. Set-Associative Mapping :-
Set-Associative Mapping is a compromise between direct and associative mapping:
each memory block maps to exactly one set, but may occupy any cache line within
that set.
How It Works:
1. Cache is Divided into Sets:
• The cache is divided into sets (groups of cache lines), and each set
contains multiple lines (slots where data can be stored).
• For example, in a 2-way set-associative cache, each set has 2 cache
lines. In a 4-way set-associative cache, each set has 4 cache lines.
2. Memory is Divided:
• When data from memory is needed, the Set Index is used to find the
correct set in the cache.
• Once the set is found, the data can go into any cache line within that
set.
4. Checking the Cache:
• When the CPU looks for data, it checks the set by comparing the Tag
with the tags in the cache lines of that set.
• If there’s a match, it’s a cache hit (data is found).
• If there’s no match, it’s a cache miss, and the data is fetched from the
main memory and put into the cache.
5. Replacing Data:
• If the set is already full, a replacement policy (like Least Recently Used
(LRU) or First-In-First-Out (FIFO)) is used to decide which data to
replace.
Example:
Imagine a cache with 4 sets and 2 cache lines per set (2-way set-associative). If the
memory has 16 blocks:
• When a memory block needs to be stored, the Set Index tells you which of the
4 sets it goes to.
• Once the set is found, the block can go into either of the 2 lines in that set.
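The set-index calculation in the example above (16 blocks, 4 sets, 2 lines per set) reduces to a modulo operation, sketched here:

```python
# Sketch of the set-index calculation from the example: 16 memory blocks,
# 4 sets, 2 lines per set (2-way set-associative). A block may be placed
# in either line of set (block mod NUM_SETS).

NUM_SETS = 4

def set_index(block_number: int) -> int:
    return block_number % NUM_SETS

print([set_index(b) for b in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

So blocks 0, 4, 8, and 12 all compete for set 0, but since the set holds two lines, two of them can be cached at once.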
Advantages:
• Fewer Conflicts: Unlike Direct Mapping, where each memory block can only
go to one specific line, Set-Associative Mapping lets a block go into any line in
a set. This reduces the chances of cache conflicts and helps store more data.
• Better Performance: It’s faster and more flexible than Direct Mapping, so
there are fewer misses.
Disadvantages:
• Slower Than Direct Mapping: Since the cache has to check multiple lines
within a set, it takes longer than direct mapping (which only checks one line).
• More Complex: It’s harder to build than direct mapping because of the need to
check multiple lines and handle the replacement of data.
Feature            | Direct Mapping                     | Associative Mapping                 | Set-Associative Mapping
Mapping Method     | Each memory block maps to a        | Any memory block can go to any      | Memory block maps to a set, but
                   | specific cache line.               | cache line.                         | within the set it can go to any cache line.
Cache Organization | Fixed mapping (one line per        | Flexible (no fixed mapping).        | Flexible (but limited to a set of lines).
                   | memory block).                     |                                     |
Access Time        | Fast (one cache line check).       | Slower (needs to check all cache    | Moderate (only checks lines in a set).
                   |                                    | lines).                             |
Cache Conflicts    | More conflicts (only one cache     | Very few conflicts (any line can    | Fewer conflicts than direct mapping
                   | line per block).                   | store any block).                   | but more than associative mapping.
Efficiency         | Less efficient for cache hits.     | Most efficient, fewer cache misses. | A compromise between direct and
                   |                                    |                                     | associative mapping.
Example            | Memory block 0 maps to cache line  | Memory block 0 can go to any cache  | Memory block 0 can go to any line in
                   | 0, block 1 to cache line 1.        | line.                               | set 0, memory block 1 to any line in set 1.
Auxiliary Memory
Auxiliary Memory (also known as Secondary Memory) refers to storage devices that are
used to store data and programs that are not currently in use by the computer's main
memory (RAM). It is slower than primary memory (RAM) but offers much larger storage
capacity and persists data even when the power is turned off.
2. Large Storage Capacity: Auxiliary memory has much larger storage capacity
compared to primary memory. It can store gigabytes, terabytes, or even more,
while RAM is usually limited to a few gigabytes.
3. Slower Access Speed: Auxiliary memory is slower than primary memory (RAM)
because it is designed for long-term storage rather than fast data access.
Accessing data in auxiliary memory can take several milliseconds, whereas
primary memory is much faster, typically in nanoseconds.
• Magnetic tapes are durable and can last for many years if stored
properly. They are commonly used for long-term archival storage,
especially in enterprises.
5. Data Backup and Archiving:
• Magnetic tapes are primarily used for data backup and archival
storage. Large organizations often use tape libraries (a collection of tape
drives and tapes) for storing backups of critical data.
• They are also used for storing data in a compressed format to save
space.
Use Cases:
• Enterprise Backup: Due to their cost-effectiveness and large capacity,
magnetic tapes are commonly used for backing up data in businesses and
large organizations.
• Data Archiving: Tape storage is used for storing data that is not accessed
frequently but needs to be preserved for compliance, regulatory, or historical
purposes.
Magnetic Disks:
Magnetic disks are a type of storage device that uses a spinning disk coated with a
magnetic material to read and write data. They are often referred to as hard disk
drives (HDDs) or simply hard drives.
• Magnetic disks are generally cheaper than SSDs but more expensive
than magnetic tapes.
• They offer a good balance of storage capacity, speed, and cost, making
them suitable for general-purpose computing and storage.
5. Data Retrieval:
Use Cases:
• Primary Storage: Magnetic disks are commonly used as the primary storage
medium in personal computers, laptops, and servers.
• Backup Storage: They are also used for backup purposes, though they are not
as cost-effective for large-scale archival storage as magnetic tapes.
• Data Storage in Data Centers: HDDs are commonly used in data centers and
cloud storage systems, where large amounts of data need to be stored and
accessed quickly.
• Segmentation
Paging :-
Paging divides memory into small fixed-size blocks called pages. When the computer
runs out of RAM, pages that aren’t currently in use are moved to the hard drive, into
an area called a swap file. The swap file acts as an extension of RAM. When a page is
needed again, it is swapped back into RAM, a process known as page swapping. This
ensures that the operating system (OS) and applications have enough memory to
run.
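Because pages are fixed-size, a virtual address splits cleanly into a page number and an offset within that page. A minimal sketch, assuming a 4 KB page size for illustration:

```python
# Sketch: splitting a virtual address into (page number, offset) for
# fixed-size pages. The 4 KB page size is an assumption for illustration.

PAGE_SIZE = 4096

def translate(virtual_addr: int):
    page_number = virtual_addr // PAGE_SIZE   # which page the address is in
    offset = virtual_addr % PAGE_SIZE         # position within that page
    return page_number, offset

print(translate(10000))  # (2, 1808)
```

The OS uses the page number to look up where the page currently lives (RAM or the swap file); the offset is unchanged by the translation.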
Demand Paging: Loading a page into memory only when it is needed (i.e.,
whenever a page fault occurs) is known as demand paging. The process involves
the following steps:
Page Fault :-