
Memory hierarchy

The computer memory hierarchy is a pyramid structure that describes the differences among memory types. It separates computer storage into the following levels:
Level 0: CPU registers
Level 1: Cache memory
Level 2: Main memory or primary memory
Level 3: Magnetic disks or secondary memory
Level 4: Optical disks or magnetic tapes or tertiary memory

In the memory hierarchy, capacity is inversely proportional to speed, while cost per bit rises with speed: the faster a level is, the smaller and the more expensive it is. The devices are arranged from fastest to slowest, that is, from the registers down to tertiary memory.
Let us discuss each level in detail:
Level-0 − Registers
The registers reside inside the CPU, so they have the shortest access time. Registers are the most expensive and the smallest in size, generally totalling less than a kilobyte. They are implemented using flip-flops.
Level-1 − Cache
Cache memory stores the segments of a program that are frequently accessed by the processor. It is expensive and small, generally a few megabytes in size, and is implemented using static RAM.
Level-2 − Primary or Main Memory
Main memory communicates directly with the CPU, and with auxiliary memory devices through an I/O processor. It is less expensive than cache memory and larger, generally gigabytes in size. It is implemented using dynamic RAM.
Level-3 − Secondary Storage
Secondary storage devices such as magnetic disks sit at level 3. They are used as backup storage. They are cheaper than main memory and larger, generally a few terabytes in size.
Level-4 − Tertiary Storage
Tertiary storage devices such as magnetic tapes sit at level 4. They are used to store removable files and are the cheapest and largest in size (1 to 20 TB).
Let us see the memory levels in terms of size, access time, and bandwidth:

Level         Register          Cache             Primary memory      Secondary memory
Bandwidth     4K to 32K MB/s    800 to 5K MB/s    400 to 2K MB/s      4 to 32 MB/s
Size          Less than 1 KB    Less than 4 MB    Less than 2 GB      Greater than 2 GB
Access time   2 to 5 ns         3 to 10 ns        80 to 400 ns        5 ms
Managed by    Compiler          Hardware          Operating system    OS or user
Why is a memory hierarchy used in systems?
A memory hierarchy arranges the different kinds of storage present in a computing device by speed of access. At the very top sits the highest-performing storage, the CPU registers, which are the fastest to read and write. Next comes cache memory, followed by conventional DRAM main memory, followed by disk storage at several levels of performance, including SSDs and optical and magnetic disk drives.
To bridge the processor-memory performance gap, hardware designers increasingly rely on memory at the top of the hierarchy. Larger cache hierarchies, which processors can access much faster, reduce the dependency on the slower main memory.

Cache mapping techniques


What is Cache Mapping?
As we know, cache memory bridges the speed mismatch between the main memory and the processor. Whenever a cache hit occurs,

 The required word is present in the cache memory.
 The required word is then delivered from the cache memory to the CPU.
And, whenever a cache miss occurs,
 The required word is not present in the cache memory.
 The block containing the required word has to be brought from the main memory and mapped into the cache.
 Such a mapping can be performed using several different techniques; the sketch after this list gives a rough feel for the hit/miss decision itself.
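As a rough illustration of this hit/miss decision, here is a minimal Python sketch. It assumes a toy word-addressed main memory and a dictionary standing in for the cache; the names (read_word, block_size, and so on) are illustrative only, not any standard API.

```python
# Minimal sketch of the cache hit/miss decision described above.
# `cache` maps a block number to that block's data; names are illustrative.

def read_word(address, cache, main_memory, block_size=4):
    """Return the word at `address`, copying its block into the cache on a miss."""
    block_number = address // block_size   # which main memory block holds the word
    offset = address % block_size          # position of the word inside that block

    if block_number in cache:              # cache hit: the block is already present
        print(f"hit  for address {address}")
    else:                                  # cache miss: copy the block from main memory
        print(f"miss for address {address}")
        start = block_number * block_size
        cache[block_number] = main_memory[start:start + block_size]
    return cache[block_number][offset]

main_memory = list(range(32))              # toy main memory of 32 words
cache = {}
read_word(5, cache, main_memory)           # miss: block 1 (words 4..7) is copied in
read_word(6, cache, main_memory)           # hit: word 6 lies in the same block
```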
Let us discuss different techniques of cache mapping in this article.

Process of Cache Mapping


The process of cache mapping defines how a block present in the main memory gets mapped into the cache memory in the case of a cache miss.

In simpler words, cache mapping refers to the technique by which blocks of the main memory are brought into the cache memory.

Before we proceed, it is crucial that we note these points:

Important Note:
 The main memory is divided into multiple partitions of equal size, known as frames or blocks.
 The cache memory is divided into partitions of the same size as the blocks, known as lines.
 During cache mapping, the main memory block is simply copied into the cache; the block is not removed from the main memory.
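To make these sizes concrete with purely illustrative numbers: assuming a 4 KB main memory and a 512-byte cache, both divided into 64-byte blocks, the main memory contains 4096 / 64 = 64 blocks and the cache contains 512 / 64 = 8 lines, so at most 8 of the 64 blocks can reside in the cache at any given moment.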

Techniques of Cache Mapping


One can perform the process of cache mapping using these three techniques, discussed in turn below:
1. Direct Mapping

2. Fully Associative Mapping

3. K-way Set Associative Mapping

1. Direct Mapping
In the case of direct mapping, a certain block of the main memory can map only to a particular line of the cache. The line number of the cache to which a distinct block can map is given by the following:

Cache line number = (Main memory block address) Modulo (Total number of lines in cache)
For example,
 Consider a cache memory that is divided into a total of ‘n’ lines.
 Then, block ‘j’ of the main memory can map only to line number (j mod n) of the cache.
The Need for Replacement Algorithm
In the case of direct mapping,

 There is no requirement for a replacement algorithm.
 This is because a block of the main memory can map only to one particular line of the cache.
 Thus, the incoming (new) block always replaces the block that already exists, if any, in that particular line.

Division of Physical Address


In the case of direct mapping, the physical address is divided into the following fields, from the most significant bits to the least significant bits:
Tag | Line Number | Block / Word Offset
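To make the line number formula and the address fields concrete, here is a minimal Python sketch of a direct-mapped lookup. The class name, the tag computation, and the toy sizes are illustrative assumptions, not a real hardware interface.

```python
# Minimal sketch of direct mapping: block j of main memory may only occupy
# cache line (j mod number_of_lines). The higher-order bits of the block
# address act as the tag identifying which block currently sits in the line.

class DirectMappedCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = [None] * num_lines               # each entry holds (tag, data)

    def access(self, block_number, block_data):
        line = block_number % self.num_lines          # cache line number formula
        tag = block_number // self.num_lines          # identifies the block in this line
        entry = self.lines[line]
        if entry is not None and entry[0] == tag:
            return "hit"
        # Miss: the incoming block simply replaces whatever occupies this line,
        # which is why direct mapping needs no replacement algorithm.
        self.lines[line] = (tag, block_data)
        return "miss"

cache = DirectMappedCache(num_lines=8)
print(cache.access(12, "block 12"))   # miss: 12 mod 8 = line 4
print(cache.access(12, "block 12"))   # hit:  same tag found in line 4
print(cache.access(20, "block 20"))   # miss: 20 mod 8 = line 4, replaces block 12
```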

2. Fully Associative Mapping


In the case of fully associative mapping,

 A main memory block can map to any line of the cache that is freely available at that moment.
 This makes fully associative mapping more flexible than direct mapping.

For Example
Let us consider a scenario in which the cache is initially empty:

 Every single line of the cache is freely available.
 Thus, any main memory block can map to any line of the cache.
 If all the cache lines are occupied, one of the existing blocks has to be replaced.

The Need for Replacement Algorithm


In the case of fully associative mapping,

 A replacement algorithm is always required.
 The replacement algorithm decides which block to replace whenever all the cache lines happen to be occupied.
 Replacement algorithms such as the LRU algorithm, the FCFS algorithm, etc., are employed.

Division of Physical Address


In the case of fully associative mapping, the physical address is divided into the following fields, from the most significant bits to the least significant bits:
Tag | Block / Word Offset
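As a rough sketch of fully associative placement together with LRU replacement, here is a minimal Python illustration; the class name and toy sizes are assumptions for demonstration, not a real cache controller.

```python
from collections import OrderedDict

# Minimal sketch of fully associative mapping with LRU replacement: a block
# may occupy any free line, and when every line is occupied, the least
# recently used block is evicted. Names and sizes are illustrative.

class FullyAssociativeCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()            # block number -> data, ordered by recency

    def access(self, block_number, block_data):
        if block_number in self.lines:        # hit: refresh this block's recency
            self.lines.move_to_end(block_number)
            return "hit"
        if len(self.lines) == self.num_lines: # all lines occupied:
            self.lines.popitem(last=False)    # evict the least recently used block
        self.lines[block_number] = block_data
        return "miss"

cache = FullyAssociativeCache(num_lines=2)
print(cache.access(3, "block 3"))   # miss: placed in a free line
print(cache.access(7, "block 7"))   # miss: placed in the other free line
print(cache.access(3, "block 3"))   # hit:  block 3 becomes most recently used
print(cache.access(9, "block 9"))   # miss: evicts block 7, the LRU block
```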
3. K-way Set Associative Mapping
In the case of k-way set associative mapping,

 The cache lines are grouped into sets, where each set consists of k lines.
 Any given main memory block can map only to a particular cache set.
 However, within that set, the memory block can map to any cache line that is freely available.
 The cache set to which a certain main memory block can map is given as follows:
Cache set number = (Main memory block address) Modulo (Total number of sets present in the cache)

For Example
Let us consider the following example of two-way set associative mapping:

In this case,

 k = 2 means that every set consists of two cache lines.
 Since the cache consists of 6 lines, the total number of sets present in the cache = 6 / 2 = 3 sets.
 Block ‘j’ of the main memory can map only to set number (j mod 3) of the cache.
 Within that set, block ‘j’ can map to any cache line that is freely available at that moment.
 If all the available cache lines are occupied, one of the existing blocks has to be replaced, as the sketch below shows.
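Here is a minimal Python sketch of this two-way example (6 lines, 3 sets); the class name and the LRU policy within each set are illustrative assumptions.

```python
from collections import OrderedDict

# Minimal sketch of 2-way set associative mapping with 6 cache lines,
# i.e. 3 sets of k = 2 lines each. Block j may go only into set (j mod 3),
# but may occupy either line of that set; LRU picks the victim within a set.

class SetAssociativeCache:
    def __init__(self, num_lines, k):
        self.k = k
        self.num_sets = num_lines // k
        self.sets = [OrderedDict() for _ in range(self.num_sets)]

    def access(self, block_number, block_data):
        s = block_number % self.num_sets      # cache set number formula
        ways = self.sets[s]
        if block_number in ways:              # hit within the set
            ways.move_to_end(block_number)
            return f"hit in set {s}"
        if len(ways) == self.k:               # set full: evict the set's LRU line
            ways.popitem(last=False)
        ways[block_number] = block_data
        return f"miss, placed in set {s}"

cache = SetAssociativeCache(num_lines=6, k=2)
print(cache.access(4, "block 4"))     # miss: 4 mod 3 = set 1
print(cache.access(7, "block 7"))     # miss: 7 mod 3 = set 1, second line
print(cache.access(10, "block 10"))   # miss: 10 mod 3 = set 1, evicts block 4
print(cache.access(7, "block 7"))     # hit:  block 7 is still resident in set 1
```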

The Need for Replacement Algorithm


In the case of k-way set associative mapping,

 K-way set associative mapping is a combination of direct mapping and fully associative mapping.
 It uses fully associative mapping within each set.
 Therefore, k-way set associative mapping requires some type of replacement algorithm.

Division of Physical Address


In the case of k-way set associative mapping, the physical address is divided into the following fields, from the most significant bits to the least significant bits:
Tag | Set Number | Block / Word Offset

Special Cases
 In case k = 1, k-way set associative mapping becomes direct mapping. Thus,
Direct Mapping = one-way set associative mapping

 In case k equals the total number of lines present in the cache, k-way set associative mapping becomes fully associative mapping.
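These special cases can be checked numerically. The short sketch below, using purely illustrative values for an 8-line cache, shows how the set number formula collapses to the direct mapping formula when k = 1 and to a single set when k equals the number of lines.

```python
# Illustrative check of the two special cases for a cache with 8 lines.
num_lines = 8
for k in (1, num_lines):
    num_sets = num_lines // k
    sets = [block % num_sets for block in range(12)]
    print(f"k = {k}: {num_sets} set(s); blocks 0-11 map to sets {sets}")

# k = 1: eight one-line sets, so set number == (block mod 8) -- exactly the
#        direct mapping line number formula.
# k = 8: one set containing every line, so all blocks map to set 0 and may
#        occupy any line -- exactly fully associative mapping.
```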
