COA Unit 4

The document discusses computer memory organization and architecture, detailing various types of memory including registers, cache, RAM, and secondary storage. It explains the memory hierarchy, semiconductor memory types, and the differences between 2D and 2.5D memory organization. Additionally, it highlights the characteristics and functions of cache memory, emphasizing its role in enhancing CPU performance.

Computer Organization And Architecture

Unit 4
Memory

Memory :-
Computer memory holds the data and instructions needed to process raw input and produce output. Much like the human mind, it stores data, information, and instructions. It is the storage component of a computer where the data to be processed is kept alongside the instructions for processing it, and it can hold both input and output.

Memory Hierarchy Design :-

1. Registers
Registers are tiny, super-fast memory units located inside the CPU (the brain of
the computer). They hold the data and instructions the CPU needs to process
right away. They are the fastest type of memory but only store small amounts
of data (usually 16 to 64 bits).

2. Cache Memory
Cache memory is a small, quick-access memory located near the CPU. It
temporarily stores frequently used data and instructions that were recently
fetched from the main memory (RAM). Its purpose is to speed up data access
for the CPU, making programs run faster by providing quick access to
commonly used data.

3. Main Memory (RAM)


Main memory, also known as RAM (Random Access Memory), is where the
computer stores the data and instructions that are actively being used by the
CPU. It's bigger than the cache but slower. It holds everything you’re currently
working on (like open applications) but loses all the data once the computer is
turned off.

• Types of Main Memory


• Static RAM (SRAM): This type of memory stores data in a way that
doesn't require constant refreshing, so it’s faster but takes up
more space. It’s typically used for cache memory.
• Dynamic RAM (DRAM): This memory needs to be refreshed
regularly to keep data intact. It’s slower than SRAM but can hold
more data in less space, so it’s commonly used for main memory.
4. Secondary Storage
Secondary storage includes devices like hard drives (HDDs) and solid-state
drives (SSDs), which store data long-term. They have much larger storage
capacities than RAM but are slower to access. These are used to store things
like your files, programs, and the operating system. Unlike RAM, the data stays
intact even when the computer is turned off.

• Magnetic Disk
Magnetic disks (like hard drives) are circular plates made from metal,
plastic, or magnetized materials. These disks store data and work at high
speeds but are slower than RAM and cache. They are commonly used for
long-term storage of data.

• Magnetic Tape
Magnetic tape is a device used for storing large amounts of data, often
for backups. It’s a plastic strip coated with magnetic material. However,
accessing data from magnetic tape is slower compared to other storage
types, so it's typically used for archiving or backup purposes.
System-Supported Memory Standards : -
According to the memory Hierarchy, the system-supported memory
standards are defined below:

Level               1                    2                    3                          4
Name                Register             Cache                Main Memory                Secondary Memory
Size                < 1 KB               < 16 MB              < 16 GB                    > 100 GB
Implementation      Multi-ports          On-chip / SRAM       DRAM (capacitor memory)    Magnetic
Access Time         0.25 ns to 0.5 ns    0.5 ns to 25 ns      80 ns to 250 ns            50 lakh ns (about 5 ms)
Bandwidth (MB/s)    20,000 to 1 lakh     5,000 to 15,000      1,000 to 5,000             20 to 150
Managed by          Compiler             Hardware             Operating System           Operating System
Backing Mechanism   From cache           From Main Memory     From Secondary Memory      —
Semiconductor Memory :-
Semiconductor memory is a type of electronic memory that stores digital data using
semiconductor materials, usually silicon. It stores data in binary format, with "1s" and
"0s" representing electrical charges. This type of memory is the most common in
devices like computers, smartphones, and other electronic gadgets.

Types of Semiconductor Memory


1. Random Access Memory (RAM):
• Type: Volatile (it loses its data when the power is turned off).
• Function: RAM stores data temporarily for active applications. It holds
information that the CPU is actively using or processing.
• Speed: Extremely fast. The CPU can quickly access data from RAM.
• Capacity: Usually much larger than ROM (Read-Only Memory).
• Applications: RAM is used to store data for running programs, open
files, and browser tabs. It is essential for smooth, fast performance when
your computer or phone is running applications.

In short, RAM is where your computer keeps everything it's working on right now,
and once the device is turned off, everything stored in RAM is lost.

1. Dynamic RAM (DRAM)


• Description: DRAM is the most common type of RAM used in computers and
other devices. It stores data as electrical charges in capacitors, which need to
be refreshed regularly to maintain the information.
• Characteristics:
• Volatile: Loses all data when power is turned off.
• Slower than SRAM: Needs refreshing every few milliseconds.
• More dense: Can store more data per chip, making it cheaper and
commonly used for main memory.
• Applications: Used in most desktop computers, laptops, and smartphones as
the primary memory (main memory/RAM).

2. Static RAM (SRAM)


• Description: Unlike DRAM, SRAM stores data using flip-flops (transistor
circuits) that don’t need to be refreshed, allowing faster access to data.
• Characteristics:
• Volatile: Like DRAM, SRAM loses data when power is turned off.
• Faster than DRAM: Does not need refreshing and has quicker access
times.
• Less dense and more expensive: Stores less data per chip compared to
DRAM.
• Applications: Used in cache memory (L1, L2) and other places where high
speed is needed but storage requirements are lower.

2. Read-Only Memory (ROM):


• Type: Non-volatile (it keeps its data even when the power is turned off).
• Function: ROM stores critical data that the system needs to start up or
run basic functions, like firmware.
• Speed: Slower than RAM but sufficient for its specific purpose.
• Capacity: Typically smaller than RAM but crucial for essential data
storage.
• Applications: ROM is used for storing firmware, system boot-up
instructions, and other permanent data that doesn't change during
normal operation.

1. Programmable ROM (PROM)


• Description: PROM is a type of ROM that can be written to once by the user or
manufacturer after the chip has been produced. It uses a special device called
a PROM programmer to write data into the memory.
• Characteristics:
• Non-volatile: Retains data even without power.
• One-time programmable: Once data is written, it cannot be changed or
erased.
• Less expensive than Mask ROM: PROM is cheaper to produce
compared to Mask ROM since it is programmed later in the
manufacturing process.
• Applications: Used in situations where firmware or configuration data needs
to be written once and doesn't need to be updated frequently.

2. Erasable Programmable ROM (EPROM)


• Description: EPROM is a type of ROM that can be erased and reprogrammed
multiple times. Erasing is done by exposing the chip to ultraviolet (UV) light,
which clears the data, allowing it to be reprogrammed.
• Characteristics:
• Non-volatile: Retains data without power.
• Reprogrammable: Data can be erased with UV light and rewritten
multiple times.
• Slower write times: Writing to an EPROM takes more time than reading
from it.
• Applications: Used in systems that may require periodic updates to firmware
or configurations, such as embedded systems and micro-controllers.

3. Electrically Erasable Programmable ROM (EEPROM)


• Description: EEPROM is similar to EPROM, but instead of using UV light to
erase the data, it can be erased and rewritten electrically. This allows for more
flexibility as data can be updated without removing the chip from the circuit.
• Characteristics:
• Non-volatile: Retains data without power.
• Reprogrammable: Data can be erased and reprogrammed using
electrical signals.
• Byte-level erasure: Unlike EPROM, which erases the entire chip,
EEPROM allows data to be erased and rewritten at the byte level, offering
more precision.
• Applications: Commonly used for storing small amounts of data that need to
be updated occasionally, such as configuration settings, passwords, or small
firmware updates.

RAM integrated circuit chips :-


The RAM integrated circuit chips are further classified into two possible operating
modes, static and dynamic.

The primary compositions of a static RAM are flip-flops that store the binary information.
The nature of the stored information is volatile, i.e. it remains valid as long as power is
applied to the system. Static RAM is easy to use and performs read and write operations
faster than dynamic RAM.

The dynamic RAM exhibits the binary information in the form of electric charges that are
applied to capacitors. The capacitors are integrated inside the chip by MOS transistors. The
dynamic RAM consumes less power and provides large storage capacity in a single
memory chip.

RAM chips are available in a variety of sizes and are used as per the system requirement.
The following block diagram demonstrates the chip interconnection in a 128 * 8 RAM chip.
•A 128 * 8 RAM chip has a memory capacity of 128 words of eight bits (one
byte) per word. This requires a 7-bit address and an 8-bit bidirectional data bus.
•The 8-bit bidirectional data bus allows the transfer of data either from
memory to CPU during a read operation or from CPU to memory during
a write operation.
•The read and write inputs specify the memory operation, and the two chip
select (CS) control inputs are for enabling the chip only when the
microprocessor selects it.
•The bidirectional data bus is constructed using three-state buffers.
•The output generated by three-state buffers can be placed in one of the three
possible states which include a signal equivalent to logic 1, a signal equal to
logic 0, or a high-impedance state.
The following function table specifies the operations of a 128 * 8 RAM chip.

CS1   CS2   RD   WR   Memory Function         State of Data Bus
 0     0    x    x    Inhibit                 High-impedance
 0     1    x    x    Inhibit                 High-impedance
 1     0    0    0    Inhibit                 High-impedance
 1     0    0    1    Write                   Input data to RAM
 1     0    1    x    Read                    Output data from RAM
 1     1    x    x    Inhibit                 High-impedance

From the function table, we can conclude that the unit is in operation only when CS1 = 1
and CS2 = 0. The bar on top of the second select variable indicates that this input is
enabled when it is equal to 0.
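The chip-select behaviour described above can be sketched in Python. This is a minimal illustrative model, not vendor code; the class and method names are made up:

```python
# Minimal model of a 128 x 8 RAM chip: 7-bit address, 8-bit words,
# enabled only when CS1 = 1 and CS2 = 0 (CS2 is active-low).

class Ram128x8:
    def __init__(self):
        self.cells = [0] * 128          # 128 words of one byte each

    def enabled(self, cs1, cs2):
        return cs1 == 1 and cs2 == 0    # matches the function table

    def read(self, cs1, cs2, addr):
        if not self.enabled(cs1, cs2):
            return None                 # data bus in high-impedance state
        return self.cells[addr & 0x7F]  # 7-bit address selects the word

    def write(self, cs1, cs2, addr, data):
        if self.enabled(cs1, cs2):
            self.cells[addr & 0x7F] = data & 0xFF  # 8-bit word

ram = Ram128x8()
ram.write(1, 0, 100, 0xAB)   # chip selected: write succeeds
print(ram.read(1, 0, 100))   # -> 171 (0xAB)
print(ram.read(0, 0, 100))   # -> None (chip not selected)
```

Returning None here stands in for the high-impedance state: when the chip is not selected, it simply does not drive the shared data bus.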

ROM integrated circuit :-


The primary component of the main memory is RAM integrated circuit chips, but a
portion of memory may be constructed with ROM chips.

A ROM memory is used for keeping programs and data that are permanently
resident in the computer.

ROM chips are also available in a variety of sizes and are also used as per the system
requirement. The following block diagram demonstrates the chip interconnection in a
512 * 8 ROM chip.

A ROM chip has a similar organization as a RAM chip. However, a ROM can
only perform read operation; the data bus can only operate in an output mode.

•The 9-bit address lines in the ROM chip specify any one of the 512 bytes stored
in it.

•The value for chip select 1 and chip select 2 must be 1 and 0 for the unit to
operate. Otherwise, the data bus is said to be in a high-impedance state.
2D and 2.5D Memory organization :-
2D Memory organization –
In 2D organization, memory is arranged as a matrix of rows and columns, where each
row contains one word. A decoder (a combinational circuit with n input lines and 2^n
output lines) activates one output line according to the address held in the MAR; the
word in the selected row is then read or written through the data lines.
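The row-selection step can be illustrated with a short Python sketch (an illustrative model of a decoder, with made-up function names):

```python
# A decoder with n inputs has 2**n outputs; exactly one output line
# is active for a given n-bit address.

def decode(address, n):
    """Return the 2**n decoder output lines for an n-bit address."""
    return [1 if i == address else 0 for i in range(2 ** n)]

# A 3-to-8 decoder selecting row 5 of an 8-word memory:
print(decode(5, 3))   # -> [0, 0, 0, 0, 0, 1, 0, 0]
```

In 2D organization the MAR value is fed directly to one such decoder, so a memory of 2^n words needs a decoder with 2^n output lines.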

Advantages of 2D Memory Organization:


• Simplicity: The design is straightforward and cost-effective.
• Ease of manufacturing: It is easier and less expensive to manufacture than
more complex designs.
• Widely used: Common in standard memory applications for general
computing tasks.

Disadvantages of 2D Memory Organization:


• Limited performance: As data density increases, performance may be limited
due to the physical constraints of the layout. There can be bottlenecks in
accessing the memory.
• Space constraints: The area available for placing memory cells is limited in a
2D plane, reducing potential capacity and speed.

2.5D Memory organization –


In 2.5D organization the scenario is similar, but there are two decoders: a column
decoder and a row decoder. The column decoder selects the column and the row
decoder selects the row. The address from the MAR is fed to both decoders as input.
Together they select the respective cell; data at that location is then read out through
the bit out-line, or written in through the bit in-line.

Advantages of 2.5D Memory Organization:


• Improved Performance: By using multiple memory chips connected in
parallel, data can be accessed more quickly, improving performance.
• Higher Memory Bandwidth: 2.5D designs allow for more memory to be
accessed simultaneously, improving overall system throughput.
• Compact Design: Memory and processing units can be integrated in a smaller
form factor compared to traditional systems.
• Scalability: It offers better scalability by adding more memory units without
significantly increasing the footprint.
Disadvantages of 2.5D Memory Organization:
• Cost: The fabrication process for 2.5D memory can be more expensive
compared to traditional 2D designs, especially due to the use of interposers
and the complexity of inter-chip communication.
• Complexity: The design and integration of multiple memory chips on a single
substrate require sophisticated manufacturing processes.
• Heat Management: With more components packed into a smaller area, heat
dissipation can become an issue, requiring advanced cooling solutions.

Difference between 2D and 2.5D Memory Organization:

• Structure: 2D has memory cells in a single flat grid (rows and columns); 2.5D places
  multiple memory chips side by side on an interposer.
• Access Method: 2D uses sequential access to rows and columns; 2.5D allows parallel
  access to multiple memory chips.
• Performance: 2D is slower, limited by single-chip access; 2.5D is faster, with high
  bandwidth from parallel memory chips.
• Capacity: 2D is limited by the size of individual chips; 2.5D gains capacity by adding
  chips horizontally.
• Cost: 2D is lower cost due to simplicity; 2.5D is higher cost due to complexity and
  advanced fabrication.
• Applications: 2D suits general computing, standard DRAM, and flash memory; 2.5D
  suits high-performance computing, GPUs, data centers, and HBM.
Cache Memory:-
Cache memory is a small and super-fast storage area inside a computer that helps
the CPU (Central Processing Unit) access data quickly. It stores copies of frequently
used data and instructions, so when the CPU needs them, it can grab them from the
cache instead of fetching them from the slower main memory (RAM). This helps
speed up the overall performance of the computer.

Cache memory is much faster than RAM, but it's also smaller and more expensive. It's
used to improve the speed of the CPU by storing the data and instructions that the
CPU uses most often. Even though it costs more than RAM, it's cheaper than the very
fastest storage, like CPU registers.

In simple terms, cache memory acts like a shortcut for the CPU, helping it avoid
delays caused by having to fetch data from the slower main memory.

Characteristics of Cache Memory :-


1. High Speed: Cache memory is much faster than the main memory (RAM),
allowing quick access to frequently used data and instructions.

2. Small Size: Cache memory is smaller in size compared to RAM, as it only stores
a subset of the most frequently accessed data.

3. Costly: Cache memory is more expensive than main memory due to its high-
speed technology.

4. Acts as a Buffer: It acts as an intermediary between the CPU and the main
memory, speeding up data access by storing commonly used information.

5. Volatile: Like main memory, cache memory is volatile, meaning it loses all
stored data when the computer is turned off.

6. Levels of Cache: Cache memory is often organized into multiple levels (L1, L2,
and sometimes L3), with L1 being the smallest and fastest, and L3 being larger
but slower.

7. Improves CPU Performance: By holding frequently accessed data close to the


CPU, cache memory reduces the time it takes for the CPU to fetch data,
improving overall processing speed.

8. Dynamic and Static: Cache can be built from dynamic or static RAM; static cache
(SRAM) is faster but more expensive and complex, which is why it is the usual choice.
9. Transparent to the User: Cache memory operates automatically and is
managed by the system without user intervention.

Levels of Memory :-
1. Level 1 - Registers:
Registers are the fastest type of memory, located directly within the CPU. They
store data that the CPU needs to process immediately. Common types of
registers include the Accumulator, Program Counter, and Address Register.
Registers are extremely fast but very limited in size.

2. Level 2 - Cache Memory:


Cache memory is faster than main memory (RAM) but slower than registers. It
temporarily stores frequently used data and instructions, allowing for quicker
access by the CPU. Cache memory reduces the time it takes to fetch data from
the slower main memory, improving the overall performance of the computer.
Cache is usually divided into different levels (L1, L2, and sometimes L3), with L1
being the fastest and smallest.

3. Level 3 - Main Memory (RAM):


Main memory, or RAM (Random Access Memory), is where the computer
stores data that is actively being used or processed. It’s larger than cache
memory but slower. Data stored in RAM is lost when the computer is turned
off, making it volatile. RAM is essential for running programs and tasks that the
CPU is working on.

4. Level 4 - Secondary Memory:


Secondary memory includes external storage devices like hard drives, SSDs,
CDs, and USB drives. It is not as fast as RAM or cache memory, but it has much
larger storage capacity and retains data even when the power is off. Secondary
memory is used to store programs, files, and other data for long-term use.
Cache Mapping
There are three different types of mapping used for the purpose of cache memory
which is as follows:
•Direct Mapping

•Associative Mapping

•Set-Associative Mapping

1. Direct Mapping -
Direct Mapping in cache memory is a simple method used to map data from the main
memory to the cache. In direct mapping, each block of main memory is mapped to exactly
one cache line (a slot in the cache memory). This means that a specific location in the cache
is reserved for a specific location in the main memory.

Here’s how Direct Mapping works:

1. Memory Division: The main memory is divided into blocks, and the cache is
divided into lines (slots).

2. Mapping Process: Each block of the main memory is mapped to a single line in
the cache using a mapping function. This function uses the address of the data
in main memory to determine where it should be placed in the cache.

3. Address Structure: A memory address is typically divided into three parts:

• Tag: The part of the address that helps to identify which block of data in
the main memory is being referred to.
• Index: The part of the address that specifies which cache line a block of
data will be mapped to.
• Block Offset: The part of the address used to locate the specific data
within the cache block (or memory block).
4. Cache Access: When the CPU wants to access a particular piece of data, it
checks the cache by using the index part of the address. If the data is found in
the cache (a cache hit), it is used. If the data is not found (a cache miss), the
data is fetched from the main memory and placed in the corresponding cache
line.

5. Replacement: In direct mapping, if a new memory block needs to be loaded


into a cache line that already holds data from another memory block, the
existing data is replaced.
Advantages of Direct Mapping:
• Simplicity: It’s easy to implement because the mapping of memory blocks to
cache lines is straightforward and fixed.
• Speed: Cache lookup is fast because the index directly maps to a cache line.

Disadvantages of Direct Mapping:


• Cache Conflicts: If multiple memory locations map to the same cache line,
they will conflict and cause frequent cache misses. This is known as cache
conflict.
• Limited Flexibility: The fixed mapping between memory and cache lines
means that the cache might not be fully utilized, especially if multiple
frequently used memory blocks map to the same cache line.

Example of Direct Mapping:


Suppose a computer has 16 memory blocks and 4 cache lines. In direct mapping,
each block of memory would map to a specific cache line based on the index part of
the memory address.
If memory block 0 maps to cache line 0, memory block 1 maps to cache line 1, and so
on, memory block 4 would again map to cache line 0, memory block 5 to cache line 1,
and so on.

This simple mapping helps in fast data retrieval but can lead to inefficiency if multiple
memory blocks constantly conflict and need to replace each other in the same cache
line.
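The example above can be sketched in Python. The sizes are the hypothetical 16 memory blocks and 4 cache lines from the example; a block size of 4 words is an added assumption, used only to show the offset bits:

```python
# Direct mapping: cache line = block_number % number_of_lines.

NUM_LINES = 4
BLOCK_SIZE = 4                      # words per block (assumed)

def split_address(addr):
    block = addr // BLOCK_SIZE      # which memory block
    offset = addr % BLOCK_SIZE      # word within the block
    index = block % NUM_LINES       # cache line the block maps to
    tag = block // NUM_LINES        # identifies the block on that line
    return tag, index, offset

cache = [None] * NUM_LINES          # each entry holds the tag stored there

def access(addr):
    tag, index, _ = split_address(addr)
    if cache[index] == tag:
        return "hit"
    cache[index] = tag              # miss: fetch block, replacing old one
    return "miss"

print(access(0))    # miss (cold cache; block 0 goes to line 0)
print(access(2))    # hit  (same block as address 0)
print(access(16))   # miss (block 4 also maps to line 0, evicts block 0)
print(access(0))    # miss (a cache conflict: block 0 was evicted)
```

The last two accesses show the conflict problem: blocks 0 and 4 fight over line 0 even though lines 1 to 3 may be empty.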

2. Associative Mapping :-
Associative Mapping (or Fully Associative Mapping) in cache memory is a more
flexible technique than direct mapping for storing data in the cache. In associative
mapping, any block of memory can be placed in any line of the cache, unlike direct
mapping, where each memory block is mapped to a specific cache line.

How Associative Mapping Works:


1. Memory Division: Just like in other mapping techniques, the main memory is
divided into blocks, and the cache is divided into lines (slots). However, in
associative mapping, there is no fixed relationship between memory blocks
and cache lines. Any memory block can be placed in any cache line.

2. Address Structure: In associative mapping, the memory address is typically


divided into two parts:

• Tag: The portion of the address that is used to identify the block of
memory being referred to.
• Block Offset: The part of the address that specifies the exact location
within the cache block (or memory block).
• No Index: Unlike direct mapping, there is no index in associative
mapping, as any cache line can hold any memory block.
3. Cache Access: When the CPU needs to access a specific memory address, it
checks all the cache lines to see if the block is stored in any of them. This is
done by comparing the Tag in the memory address with the tags of the data
stored in the cache lines.

• If the tag matches, it’s a cache hit, and the data is retrieved.
• If no tag matches, it’s a cache miss, and the data is fetched from main
memory and placed in one of the cache lines.
4. Replacement Policy: Since any memory block can be placed in any cache line,
when the cache is full and a new block of data needs to be loaded, a
replacement policy is used to decide which block to evict. Common
replacement policies include:

• Least Recently Used (LRU): The block that has been used the least
recently is replaced.
• First-In, First-Out (FIFO): The block that has been in the cache the
longest is replaced.
• Random: A random block is replaced.

Advantages of Associative Mapping:


• Flexibility: Any memory block can be stored in any cache line, which reduces
the chance of cache conflicts (a common issue in direct mapping).
• Higher Cache Hit Rate: Since data can be placed anywhere in the cache, the
cache can be used more efficiently, leading to potentially fewer cache misses.

Disadvantages of Associative Mapping:


• Complexity: Searching through all cache lines to find the correct block adds
complexity to the hardware. Every time the CPU needs to access data, it must
compare the tag with every cache line.
• Slower Access Time: Since all the cache lines need to be checked, the lookup
time can be slower compared to direct mapping, where a specific cache line is
checked directly using the index.
• Cost: The hardware required to implement associative mapping is more
complex and expensive.

Example of Associative Mapping:


Let’s assume a cache with 4 cache lines and 16 memory blocks. In associative
mapping, any memory block can be stored in any of the 4 cache lines. For instance:
• Memory block 3 can be placed in cache line 0, 1, 2, or 3.
• Memory block 7 can also be placed in any cache line.
• The cache will store the most recently used data, and when a new block is
required, one of the existing blocks will be replaced according to the
replacement policy.
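A fully associative cache with the LRU replacement policy described above can be sketched like this (a minimal model with a hypothetical 4-line cache; the block number is used directly as the tag):

```python
from collections import OrderedDict

# Fully associative cache with Least Recently Used (LRU) replacement.
class LruCache:
    def __init__(self, lines=4):
        self.lines = lines
        self.store = OrderedDict()          # tag -> cached block

    def access(self, block):
        if block in self.store:
            self.store.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.store) == self.lines:
            self.store.popitem(last=False)  # evict least recently used
        self.store[block] = True
        return "miss"

c = LruCache()
for b in [3, 7, 3, 1, 5, 9, 3]:
    print(b, c.access(b))
# Block 3 hits twice because each use refreshes it; block 7,
# untouched since the start, is the one evicted when 9 arrives.
```

An OrderedDict keeps insertion order, so the front of the dict is always the least recently used entry; swapping `popitem(last=False)` for `popitem()` would turn this into a most-recently-used eviction instead.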

3. Set Associative Mapping :-


Set-Associative Mapping is a method used to store data in cache memory. It’s a mix
of Direct Mapping and Associative Mapping, designed to improve cache efficiency
while keeping things simpler than fully associative mapping.

How It Works:
1. Cache is Divided into Sets:

• The cache is divided into sets (groups of cache lines), and each set
contains multiple lines (slots where data can be stored).
• For example, in a 2-way set-associative cache, each set has 2 cache
lines. In a 4-way set-associative cache, each set has 4 cache lines.
2. Memory is Divided:

• The memory address is split into three parts:


• Tag: Helps identify the data block.
• Set Index: Tells you which set the data should go to.
• Block Offset: Specifies the location within the cache block (or
data).
3. Mapping Data:

• When data from memory is needed, the Set Index is used to find the
correct set in the cache.
• Once the set is found, the data can go into any cache line within that
set.
4. Checking the Cache:

• When the CPU looks for data, it checks the set by comparing the Tag
with the tags in the cache lines of that set.
• If there’s a match, it’s a cache hit (data is found).
• If there’s no match, it’s a cache miss, and the data is fetched from the
main memory and put into the cache.
5. Replacing Data:

• If the set is already full, a replacement policy (like Least Recently Used
(LRU) or First-In-First-Out (FIFO)) is used to decide which data to
replace.

Example:
Imagine a cache with 4 sets and 2 cache lines per set (2-way set-associative). If the
memory has 16 blocks:

• When a memory block needs to be stored, the Set Index tells you which of the
4 sets it goes to.
• Once the set is found, the block can go into either of the 2 lines in that set.
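The 2-way, 4-set example above can be sketched in Python. FIFO is assumed as the replacement policy within a set; the names are illustrative:

```python
# 2-way set-associative cache with 4 sets.
# set_index = block % NUM_SETS; within a set, FIFO eviction.

NUM_SETS, WAYS = 4, 2
sets = [[] for _ in range(NUM_SETS)]   # each set holds up to WAYS tags

def access(block):
    s = sets[block % NUM_SETS]         # Set Index picks the set
    tag = block // NUM_SETS            # Tag identifies the block in the set
    if tag in s:
        return "hit"
    if len(s) == WAYS:
        s.pop(0)                       # FIFO: evict the oldest line
    s.append(tag)
    return "miss"

# Blocks 0, 4 and 8 all map to set 0, but two of them can coexist:
print(access(0))   # miss
print(access(4))   # miss (goes into the second line of set 0)
print(access(0))   # hit  (unlike direct mapping, block 0 survived)
print(access(8))   # miss (set 0 full: FIFO evicts block 0)
print(access(0))   # miss
```

Compare this with the direct-mapped case: there, blocks 0 and 4 would conflict immediately; here the conflict only appears once a third block lands in the same set.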

Advantages:
• Fewer Conflicts: Unlike Direct Mapping, where each memory block can only
go to one specific line, Set-Associative Mapping lets a block go into any line in
a set. This reduces the chances of cache conflicts and helps store more data.
• Better Performance: It’s faster and more flexible than Direct Mapping, so
there are fewer misses.
Disadvantages:
• Slower Than Direct Mapping: Since the cache has to check multiple lines
within a set, it takes longer than direct mapping (which only checks one line).
• More Complex: It’s harder to build than direct mapping because of the need to
check multiple lines and handle the replacement of data.

Difference between Direct, Associative, Set-Associative Mappings:

• Mapping Method: in direct mapping, each memory block maps to one specific cache
  line; in associative mapping, any block can go to any cache line; in set-associative
  mapping, a block maps to a set but can occupy any line within that set.
• Cache Organization: direct uses a fixed mapping (one line per memory block);
  associative is fully flexible (no fixed mapping); set-associative is flexible within a
  limited set of lines.
• Access Time: direct is fast (one cache line check); associative is slower (all cache
  lines must be checked); set-associative is moderate (only the lines in one set are
  checked).
• Cache Conflicts: direct has the most conflicts (only one line per block); associative
  has very few (any line can store any block); set-associative has fewer conflicts than
  direct mapping but more than associative.
• Efficiency: direct is less efficient for cache hits; associative is most efficient, with
  fewer cache misses; set-associative is a compromise between the two.
• Example: in direct mapping, memory block 0 maps to cache line 0 and block 1 to
  line 1; in associative mapping, block 0 can go to any cache line; in set-associative
  mapping, block 0 can go to any line in set 0 and block 1 to any line in set 1.
Auxiliary Memory
Auxiliary Memory (also known as Secondary Memory) refers to storage devices that are
used to store data and programs that are not currently in use by the computer's main
memory (RAM). It is slower than primary memory (RAM) but offers much larger storage
capacity and persists data even when the power is turned off.

Key Characteristics of Auxiliary Memory:


1. Non-Volatile: Unlike primary memory (RAM), data in auxiliary memory is
retained even when the computer is powered off. This makes it ideal for long-
term storage.

2. Large Storage Capacity: Auxiliary memory has much larger storage capacity
compared to primary memory. It can store gigabytes, terabytes, or even more,
while RAM is usually limited to a few gigabytes.

3. Slower Access Speed: Auxiliary memory is slower than primary memory (RAM)
because it is designed for long-term storage rather than fast data access.
Accessing data in auxiliary memory can take several milliseconds, whereas
primary memory is much faster, typically in nanoseconds.

4. Permanent Data Storage: Auxiliary memory is used for permanent storage of


the operating system, applications, and personal files. Data remains in
auxiliary memory even after the system is turned off.

Types of Auxiliary Memory:


1. Hard Disk Drives (HDD):

• One of the most common forms of auxiliary memory.


• Uses mechanical parts to read and write data to a spinning disk.
• Provides large storage capacity at a relatively low cost.
• Slower than solid-state drives but still widely used for bulk storage.
2. Solid-State Drives (SSD):

• A newer and faster form of auxiliary memory.


• Uses flash memory to store data, with no moving parts, making it more
durable and faster than HDDs.
• More expensive than HDDs but offers faster data transfer speeds.
3. Optical Discs (CDs, DVDs, Blu-ray):

• Use laser technology to read and write data on discs.


• Used for media storage, backups, and distribution of software and
movies.
• Generally slower and have lower storage capacity compared to HDDs
and SSDs.
4. USB Flash Drives:

• Portable storage devices that use flash memory.


• They are small, easy to carry, and commonly used for transferring data
between computers.
• Typically offer lower capacity and slower speeds compared to SSDs but
are very convenient for small data transfers.
5. Magnetic Tapes:

• Used primarily for backup and archival storage.


• Tapes are cost-effective for storing large amounts of data but are slow to
access.
• Common in enterprise-level backup systems.
6. Network-Attached Storage (NAS):

• A system that allows multiple users or computers to access the same


data over a network.
• Provides centralized storage and is often used in businesses for file
sharing and backup.

Role of Auxiliary Memory:


• Long-Term Storage: Auxiliary memory is used for storing operating systems,
software programs, documents, images, videos, and other data that need to be
kept permanently.
• Data Backup: It is used for data backups to ensure important files and
systems can be recovered in case of failure in primary memory or accidental
data loss.
• Data Transfer: Auxiliary memory devices like USB drives and external hard
drives are commonly used to transfer data between different computers or
locations.

Magnetic Tapes and Disks


Magnetic tapes and magnetic disks are both forms of auxiliary storage that have
been widely used in computer systems for data storage, backup, and retrieval. Both
use magnetic properties to store data but differ in their physical forms, technology,
and primary use cases.
Magnetic Tapes:
Magnetic tapes are a type of storage medium that uses a long, narrow strip of plastic
film coated with a magnetic material to record data. Data is written to the tape in a
linear sequence, similar to how audio or video data was stored on old cassette tapes.

Key Characteristics of Magnetic Tapes:


1. Storage Capacity:

• Magnetic tapes offer large storage capacities, typically ranging from
several gigabytes to several terabytes.
• They are especially suited for long-term data archiving and backup.

2. Sequential Access:

• Magnetic tapes are sequential access devices, meaning data must be read in
a specific order from the beginning of the tape to the location where the
desired data is stored.
• This makes them slower to access compared to random access devices like
hard drives.

3. Cost-Effective:

• Magnetic tapes are inexpensive relative to other storage devices like hard
drives or solid-state drives (SSDs). This makes them an ideal choice for
large-scale data storage and backups.

4. Durability:

• Magnetic tapes are durable and can last for many years if stored properly.
They are commonly used for long-term archival storage, especially in
enterprises.

5. Data Backup and Archiving:

• Magnetic tapes are primarily used for data backup and archival storage.
Large organizations often use tape libraries (a collection of tape drives
and tapes) for storing backups of critical data.
• They are also used for storing data in a compressed format to save space.

Use Cases:
• Enterprise Backup: Due to their cost-effectiveness and large capacity,
magnetic tapes are commonly used for backing up data in businesses and
large organizations.
• Data Archiving: Tape storage is used for storing data that is not accessed
frequently but needs to be preserved for compliance, regulatory, or historical
purposes.

Magnetic Disks:
Magnetic disks are a type of storage device that uses a spinning disk coated with a
magnetic material to read and write data. They are often referred to as hard disk
drives (HDDs) or simply hard drives.

Key Characteristics of Magnetic Disks (HDDs):


1. Random Access:

• Magnetic disks are random access devices, meaning data can be accessed
directly and in any order, unlike magnetic tapes, which require sequential
access.
• This allows much faster data retrieval and storage compared to tapes.

2. Storage Capacity:

• Magnetic disks offer a wide range of storage capacities, from a few
gigabytes in smaller drives to several terabytes in high-capacity drives.
• Modern HDDs can store up to 18-20 terabytes or more in some cases.

3. Speed:

• The speed of a magnetic disk depends on its rotation speed (measured in
RPM – revolutions per minute) and seek time (the time it takes to locate
data on the disk). Modern hard drives typically operate at speeds of 5400
RPM or 7200 RPM, with faster, enterprise-grade models going up to 10,000 RPM
or more.
• While HDDs are slower than SSDs, they are still much faster than magnetic
tapes.

4. Cost:

• Magnetic disks are generally cheaper than SSDs but more expensive than
magnetic tapes.
• They offer a good balance of storage capacity, speed, and cost, making
them suitable for general-purpose computing and storage.

5. Data Retrieval:

• Data is read or written to the disk by a read/write head, which moves
across the disk's surface. The disk is divided into sectors and tracks,
which are used to organize the data.

6. Durability:

• Magnetic disks are more prone to damage from physical shocks and wear due
to their moving parts, making them less durable than solid-state drives
(SSDs). However, they can still provide many years of reliable service when
properly maintained.
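The sector-and-track layout described above lends itself to a standard
capacity calculation: total capacity = surfaces × tracks per surface ×
sectors per track × bytes per sector. A short sketch with hypothetical
geometry numbers (illustrative only, not a real drive):

```python
# Hypothetical drive geometry (illustrative numbers, not a real product):
surfaces           = 4
tracks_per_surface = 2048
sectors_per_track  = 512
bytes_per_sector   = 512

# Capacity = surfaces x tracks/surface x sectors/track x bytes/sector
capacity_bytes = (surfaces * tracks_per_surface
                  * sectors_per_track * bytes_per_sector)
print(capacity_bytes)                 # 2147483648 bytes
print(capacity_bytes / 2**30, "GiB")  # 2.0 GiB
```

The same formula works in reverse: given a target capacity and a sector
size, you can estimate how many sectors per track a given geometry needs.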

Use Cases:
• Primary Storage: Magnetic disks are commonly used as the primary storage
medium in personal computers, laptops, and servers.
• Backup Storage: They are also used for backup purposes, though they are not
as cost-effective for large-scale archival storage as magnetic tapes.
• Data Storage in Data Centers: HDDs are commonly used in data centers and
cloud storage systems, where large amounts of data need to be stored and
accessed quickly.

Differences Between Magnetic Tapes and Magnetic Disks:


Feature | Magnetic Tapes | Magnetic Disks (HDDs)
Access Method | Sequential access (data is read in order) | Random access (data can be read in any order)
Speed | Slower (due to sequential access) | Faster (random access, faster data retrieval)
Capacity | Very large capacity (ideal for archiving) | Large capacity, but generally smaller than tapes
Cost | Inexpensive and cost-effective for long-term storage | More expensive than tapes but cheaper than SSDs
Durability | Durable for long-term storage | Less durable than SSDs (due to moving parts)
Use Case | Backup, archival storage, long-term data preservation | Primary storage, secondary storage, general-purpose
Associative Memory :-
Associative Memory, also known as Content-Addressable Memory (CAM), is a type of
memory in which data is accessed based on its content rather than its address. In
traditional memory systems like random access memory (RAM), data is accessed by
providing a specific memory address. However, in associative memory, the search for
data is done by comparing the content of the stored data with the input search data.

Key Characteristics of Associative Memory:


1. Content-Based Search:

• In associative memory, you don't need to know the specific address where
the data is stored. Instead, you provide a search value, and the memory
compares it with the stored data. If there is a match, the corresponding
data is retrieved.
• It can be seen as searching through data in a "content-addressed" manner.

2. Parallel Search:

• Associative memory can search through all its entries simultaneously in
parallel, unlike traditional memory, which accesses one memory location at a
time. This makes associative memory faster for certain tasks where data
needs to be retrieved quickly based on content.

3. Faster Lookup:

• Since all entries in associative memory are searched at once, it typically
provides faster lookup times than conventional memory, especially for
operations like pattern matching.

4. Data Storage:

• Data is stored in associative memory as pairs of keys and values. The key
is the content you're searching for, and the value is the corresponding data
returned by the memory.

5. Limited Capacity:

• While associative memory is fast, it is typically more limited in size and
more expensive to implement than traditional RAM. This is because the
hardware required for parallel searching is more complex.
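The masked, content-based match described above can be modeled in a few
lines. This is a toy sketch, not real CAM hardware: the hardware compares
every stored word against the key simultaneously, while the loop below does
so one word at a time. The mask selects which bit positions take part in the
match; the bit patterns and field layout are made up for illustration:

```python
def cam_search(words, key, mask):
    """Return indices of all stored words whose masked bits match the key.

    A real CAM performs all these comparisons in parallel; here they are
    simulated sequentially.
    """
    return [i for i, word in enumerate(words) if (word & mask) == (key & mask)]

# Three stored words; match on the upper 4 bits only (a toy "tag" field).
words = [0b1010_1100, 0b1010_0001, 0b0111_1100]
print(cam_search(words, 0b1010_0000, 0b1111_0000))  # [0, 1]
```

Words 0 and 1 share the tag 1010 in their upper four bits, so both match;
word 2 does not.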
Virtual Memory
Virtual memory is a memory management technique used by operating systems to
give the appearance of a large, continuous block of memory to applications, even if
the physical memory (RAM) is limited. It allows the system to compensate for physical
memory shortages, enabling larger applications to run on systems with less RAM.

A memory hierarchy, consisting of a computer system's memory and a disk,
enables a process to operate with only some portions of its address space in
memory. A virtual memory is what its name indicates: an illusion of a memory
that is larger than the real memory. We refer to the software component of
virtual memory as the virtual memory manager. The basis of virtual memory is
the noncontiguous memory allocation model. The virtual memory manager
removes some components from memory to make room for other components.
The size of virtual storage is limited by the addressing scheme of the
computer system and the amount of secondary memory available, not by the
actual number of main storage locations.

Types of Virtual Memory

In a computer, virtual memory is managed by the Memory Management Unit
(MMU), which is often built into the CPU. The CPU generates virtual
addresses that the MMU translates into physical addresses.
There are two main types of virtual memory:
• Paging
• Segmentation

Paging
Paging divides memory into small fixed-size blocks called pages. When the computer
runs out of RAM, pages that aren’t currently in use are moved to the hard drive, into
an area called a swap file. The swap file acts as an extension of RAM. When a page is
needed again, it is swapped back into RAM, a process known as page swapping. This
ensures that the operating system (OS) and applications have enough memory to
run.
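The translation the MMU performs can be sketched as splitting a virtual
address into a page number (high bits) and an offset (low bits), then
looking the page number up in a page table. This is a minimal model assuming
4 KB pages and a flat dictionary as the page table; the page-to-frame
mapping values are hypothetical:

```python
PAGE_SIZE = 4096  # assume 4 KB pages, so the offset is the low 12 bits

def translate(virtual_addr, page_table):
    """Translate a virtual address to a physical one via a flat page table."""
    page_number = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table.get(page_number)
    if frame is None:
        # In a real OS this would trigger a page fault and a page-in.
        raise LookupError(f"page fault: page {page_number} not resident")
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}           # hypothetical page -> frame mapping
print(translate(4100, page_table))  # page 1, offset 4 -> frame 2 -> 8196
```

Address 4100 falls in page 1 at offset 4; since page 1 is mapped to frame 2,
the physical address is 2 × 4096 + 4 = 8196.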
Demand Paging: Loading a page into memory only when it is needed (that is,
whenever a page fault occurs) is known as demand paging.
Page Fault :-

A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page
faults happen. In case of a page fault, the Operating System might have to
replace one of the existing pages with the newly needed page. Different page
replacement algorithms suggest different ways to decide which page to
replace. The target for all the algorithms is to reduce the number of page
faults.

Page Replacement Algorithms

Page replacement algorithms are techniques used in operating systems to
manage memory efficiently when physical memory is full. When a new page
needs to be loaded into physical memory and there is no free frame, these
algorithms determine which existing page to replace.
If no page frame is free, the virtual memory manager performs a page
replacement operation to replace one of the pages existing in memory with
the page whose reference caused the page fault. It is performed as follows:
the virtual memory manager uses a page replacement algorithm to select one
of the pages currently in memory for replacement, accesses the page table
entry of the selected page to mark it as "not present" in memory, and
initiates a page-out operation for it if the modified bit of its page table
entry indicates that it is a dirty page.
Common Page Replacement Techniques
• First In First Out (FIFO)
• Optimal Page Replacement
• Least Recently Used (LRU)
• Most Recently Used (MRU)
First In First Out (FIFO)

This is the simplest page replacement algorithm. The operating system keeps
track of all pages in memory in a queue, with the oldest page at the front.
When a page needs to be replaced, the page at the front of the queue is
selected for removal. FIFO can suffer from Belady's anomaly, in which
increasing the number of page frames may increase the number of page faults.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3
page frames. Find the number of page faults using the FIFO Page Replacement
Algorithm.
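Example 1 can be checked by simulating the queue described above. A minimal
sketch, using the reference string and frame count from the example:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Simulate FIFO page replacement; return the number of page faults."""
    memory = deque()  # front of the queue = oldest resident page
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the oldest page
            memory.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 page faults
```

Only the second reference to page 3 is a hit; every other reference faults,
giving 6 page faults.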
Optimal Page Replacement

In this algorithm, the page that will not be used for the longest time in
the future is replaced.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
0, 3, 2, 3 with 4 page frames. Find the number of page faults using the
Optimal Page Replacement Algorithm.
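Example 2 can likewise be simulated. The sketch below scans the remainder of
the reference string to find, among the resident pages, the one whose next
use is farthest away (pages never referenced again are evicted first):

```python
def optimal_faults(refs, frames):
    """Simulate optimal (Belady's) page replacement; return page-fault count."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            def next_use(p):
                # Distance to p's next reference; infinity if never used again.
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float('inf')
            # Evict the resident page whose next use is farthest in the future.
            memory.remove(max(memory, key=next_use))
            memory.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

The four initial references fault, then page 3 evicts 7 and page 4 evicts 1
(neither is used again), for 6 faults in total.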

Least Recently Used (LRU)

In this algorithm, the page that has been used least recently is replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
0, 3, 2, 3 with 4 page frames. Find the number of page faults using the LRU
Page Replacement Algorithm.
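Example 3 can be checked with a simulation that keeps the resident pages
ordered from least to most recently used:

```python
def lru_faults(refs, frames):
    """Simulate LRU page replacement; return the number of page faults."""
    memory = []  # ordered from least to most recently used
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)  # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)    # evict the least recently used page
        memory.append(page)      # page is now the most recently used
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

For this reference string LRU happens to make the same eviction choices as
the optimal algorithm, so it also incurs 6 page faults.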

Most Recently Used (MRU)

In this algorithm, the page that was used most recently is replaced.
Example 4: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
0, 3, 2, 3 with 4 page frames. Find the number of page faults using the MRU
Page Replacement Algorithm.
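Example 4 can be checked with a simulation that evicts the page at the
most-recently-used end of the list:

```python
def mru_faults(refs, frames):
    """Simulate MRU page replacement; return the number of page faults."""
    memory = []  # last element = most recently used page
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)  # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop()     # evict the most recently used page
        memory.append(page)
    return faults

print(mru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 12
```

Only two references hit (the second 0 and the second 2), so MRU incurs 12
page faults on this string, far more than LRU or optimal. This illustrates
that MRU only pays off on access patterns where the most recently used page
is the least likely to be needed again soon.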
