
Chapter 6

Memory

Prepared by Dr. Ahmad Mikati


Lecture Overview
Introduction to this Lecture's activities
In this lecture, you will learn about memory and its organization. This
lecture covers basic memory concepts, such as RAM and the various
memory devices, and addresses the more advanced concepts of the
memory hierarchy, including cache memory.

In addition, you will learn about some types of cache mapping schemes,
and their influence on the performance of the whole computer system.
This lecture gives a thorough presentation of direct mapping, associative
mapping, and set-associative mapping techniques for cache.



Lecture Overview
Introduction to this Lecture's activities
After completing the lecture, you will be able to:

1. Describe the concepts of hierarchical memory organization.

2. Illustrate how each level of memory contributes to system performance.

3. Identify the concepts behind cache memory, and address translation.

4. Demonstrate how each cache scheme contributes to system performance.



Lecture Overview

 Introduction
 Types of Memory
 The Memory Hierarchy
 Cache Memory



Introduction
Most computers are built using the Von Neumann model, which is
centered on memory. The programs that perform the processing are
stored in memory. Memory lies at the heart of the stored-program
computer.
In previous lectures, you have studied the components from which
memory is built, and the ways in which memory is accessed by various
ISAs.
In the remaining sections, the focus will be on memory organization.
You will examine the various types of memory, and the way each is part
of the memory hierarchy system. You will then look at cache memory
(a special high-speed memory) and the mapping schemes used with it.
A clear understanding of these ideas is essential for the analysis of
system performance.



Lecture Overview

 Introduction
 Types of Memory
 The Memory Hierarchy
 Cache Memory



Types of Memory

A common question many people ask is “why are there so many different types of computer memory?”
• New technologies continue to be introduced in an attempt to match the improvements in CPU design.
• The speed of memory has to, somewhat, keep pace with the CPU, or the memory becomes a bottleneck.



Types of Memory
There are two kinds of main memory:
 Random access memory: RAM
 Read-only-memory: ROM

RAM
- RAM is a read-write memory.
- RAM is the memory to which computer specifications refer; if you
buy a computer with 4GB of memory, it has 4GB of RAM.
- RAM is also the “main memory” we have continually referred to
throughout this text.
- RAM is used to store programs and data that the computer needs
when executing programs.
- RAM is volatile: it loses data once the power is turned off.

There are two general types of RAM in today’s computers: DRAM and SRAM.
Types of Memory
Dynamic RAM – DRAM:
DRAM is constructed of tiny capacitors that leak electricity.
DRAM requires a recharge every few milliseconds to maintain its data.
 Advantages of DRAM
• It is much denser (can store more bits per chip),
• uses less power,
• is less expensive,
• and generates less heat than SRAM.

Static RAM – SRAM
 SRAM holds its contents as long as power is available (no need for recharge).
 SRAM consists of circuits similar to D flip-flops.
 SRAM is faster.
 SRAM is much more expensive than DRAM.

Both technologies, DRAM and SRAM, are often used in combination: DRAM for main memory and SRAM for cache.
Types of Memory
ROM
- ROM stores critical information necessary to operate the system,
such as the program necessary to boot the computer.
- ROM is not volatile and always retains its data.
- This type of memory is also used in embedded systems or any
systems where the programming does not need to change.
Examples of systems that use ROM: toys, automobiles, calculators,
printers, etc.

There are five basic types of ROM: ROM, PROM, EPROM,
EEPROM, and flash memory.



Types of Memory
PROM (programmable read-only memory)
• PROMs can be programmed by the user with the appropriate equipment.
• Whereas ROMs are hardwired, PROMs have fuses that can be blown to
program the chip.
• Once programmed, the data and instructions in PROM cannot be changed.

EPROM (erasable PROM)
• EPROM is programmable and reprogrammable.
• Erasing an EPROM requires a special tool that emits ultraviolet light.
• To reprogram an EPROM, the entire chip must first be erased.

EEPROM (electrically erasable PROM)
• No special tools are required for erasure.
• Erasure is performed by applying an electric field
• Only portions of the chip can be erased, one byte at a time.

Flash memory
• Flash memory is essentially EEPROM
• It can be written or erased in blocks, removing the one-byte-at-a-time
limitation.
• Flash memory is faster than EEPROM.
Lecture Overview

 Introduction
 Types of Memory
 The Memory Hierarchy
 Cache Memory



The Memory Hierarchy
One of the most important considerations in understanding the
performance capabilities of a modern processor is the memory
hierarchy. Unfortunately, not all memory is created equal, and some
types are far less efficient and thus cheaper than others.
As a rule, the faster memory is, the more expensive it is per bit of
storage.
To deal with this disparity, today’s computer systems use a
combination of memory types to provide the best performance at the
best cost. This approach is called Hierarchical Memory.
The base types that normally constitute the hierarchical memory
system include:
Registers, cache, main memory, and secondary memory.



The Memory Hierarchy

We classify memory based on its “distance” from the processor.
Distance is usually measured by the number of machine cycles required
for access. The closer memory is to the processor, the faster and
smaller it should be.
Faster memory is more expensive. Thus, faster memories tend to be
smaller than slower ones, due to cost.
In today’s computers:
- Memories close to the CPU are high-speed, low-capacity memories
(e.g., cache memory)
- Memories far from the CPU are low-speed, high-capacity memories
(e.g., hard disk)
- In between comes the main RAM memory.



The Memory Hierarchy
The Memory hierarchy is depicted in figure 6.1:

Figure 6.1: Memory Hierarchy



The Memory Hierarchy

To access a particular piece of data:


1. The CPU first sends a request to its nearest memory,
usually cache.
2. If the data is not in cache, then main memory is
queried.
3. If the data is not in main memory, then the request
goes to disk.
4. Once the data is located, then the data, and a number
of its nearby data elements are fetched into cache
memory.



The Memory Hierarchy

This leads us to some definitions:
• A Hit: when the requested data is found at a given memory level.
• A Miss: when it is not found.
• The Hit Rate: the percentage of accesses in which data is found
at a given memory level.
• The Miss Rate: the percentage of accesses in which it is not.
Note that: Miss Rate = 1 - Hit Rate.
• The Hit Time: the time required to access data at a given
memory level.
• The Miss Penalty: the time required to process a miss, including
the time it takes to replace a block of memory plus the time it
takes to deliver the data to the processor.
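
These definitions combine into an average (effective) access time. The following is a minimal Python sketch for a two-level hierarchy; the 10 ns hit time, 100 ns miss penalty, and 95% hit rate are illustrative values, not figures from this lecture:

# Effective access time for a two-level memory hierarchy (illustrative numbers).
def effective_access_time(hit_time, miss_penalty, hit_rate):
    miss_rate = 1 - hit_rate                  # Miss Rate = 1 - Hit Rate
    return hit_time + miss_rate * miss_penalty

# 10 ns cache hit time, 100 ns miss penalty, 95% hit rate:
print(effective_access_time(10, 100, 0.95))   # -> 15.0 ns on average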



The Memory Hierarchy
Figure 6.2 shows the process of seeking the needed data (shown in red in the
figure) and getting it back (shown in green): the CPU requests the data at
address X from cache and misses; the request passes to RAM and misses again;
it then hits on the hard disk drive (HDD), and the block of data at addresses
X, X+1, X+2, … is copied back through RAM into cache.

Figure 6.2: Seeking and getting data
The Memory Hierarchy
The CPU requests the content of location X from cache memory and the
result is a Miss.
The request goes to lower levels, until a Hit is recorded. The result is a block
of data that is transferred from the component that has the needed data to
the previous component, until it is placed in the cache memory (the
content of X, X+1, X+2 …)

“Once the data is located, then the data, and a number of its nearby data
elements are fetched into cache memory”. Why?
• The hope is that this extra data will be referenced in the near future, which,
in most cases, it is.
- Why? → because programs tend to exhibit a property known as locality.
• Advantage: after a miss, there is a high probability of achieving several hits.

Although there is one miss on the first access to a block, there may be several
hits in cache on the newly retrieved block afterward, due to locality.
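
This behavior can be simulated in a few lines of Python. A minimal sketch, assuming an 8-word block size and purely sequential accesses (both assumptions are illustrative, not taken from this lecture):

# Sequential accesses X, X+1, X+2, ... with 8-word blocks (assumed size).
BLOCK_SIZE = 8
cached_blocks = set()
hits = misses = 0
for address in range(32):
    block = address // BLOCK_SIZE      # block that holds this address
    if block in cached_blocks:
        hits += 1
    else:
        misses += 1
        cached_blocks.add(block)       # a miss fetches the whole block
print(hits, misses)                    # -> 28 hits, 4 misses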
The Memory Hierarchy
Locality of Reference
Locality of reference is the tendency of programs to cluster their memory
references, so data that will soon be requested tends to be located close to
data that was just used.
“Once the data is located, then the data, and a number of its nearby data
elements are fetched into cache memory”. This is done in the hope that
this extra data will be referenced in the near future, which, in most cases,
happens, since programs tend to exhibit the locality property.
The advantage is that after a Miss, there is a high probability of achieving
several Hits.
Example: In the absence of branches, the PC in MARIE is incremented by
one after each instruction fetch.
Thus, if memory location X is accessed at time t, there is a high probability
that memory location X + 1 will also be accessed in the near future.
This clustering of memory references into groups is an example of locality
of reference.
The Memory Hierarchy
There are three forms of locality:
• Temporal locality: recently accessed data elements tend to be accessed again.
• Spatial locality: accesses tend to cluster in space.
• Sequential locality: instructions tend to be accessed sequentially.

The locality principle provides the opportunity for a system to use a small
amount of very fast memory to effectively accelerate the majority of
memory accesses.
Typically, only a small amount of the entire memory space is being
accessed at any given time, and these values can be copied from a slower
memory to a smaller but faster memory that resides higher in the hierarchy.
This results in a memory system that can store a large amount of
information in a large but low-cost memory, yet provide nearly the same
access speeds that would result from using the fast but expensive memory.
Lecture Overview

 Introduction
 Types of Memory
 The Memory Hierarchy
 Cache Memory
• Introduction
• Cache Mapping Schemes
- Direct mapping
- Fully associative
- Set Associative



Cache Memory-Introduction

A computer processor is very fast and is constantly reading information from memory.
It often has to wait for the information to arrive, because the memory
access times are slower than the processor speed.
A cache memory is a small, temporary, but fast memory that the
processor uses for information it is likely to need again in the very near
future.
Hence, the purpose of cache memory is to speed up accesses by
storing recently used data closer to the CPU, instead of storing it in
main memory.
Although cache is much smaller than main memory, its access time is a
fraction of that of main memory.



Cache Memory-Introduction
Unlike main memory, which is accessed by address, cache is typically
accessed by content; hence, it is often called content addressable memory.
Because of this, a single large cache memory isn’t always desirable: it
takes longer to search.
Accessing data inside the cache memory is faster than accessing the same
data from the main memory
Therefore, frequently used data are copied into the cache memory.
While main memory is typically composed of DRAM with, say, a 60ns
access time, cache is typically composed of SRAM, providing faster access
with a much shorter cycle time than DRAM. A typical cache access time is
10ns.
Moreover, if the data has been copied to cache, the address of the data in
cache is not the same as the main memory address.
For example, data located at main memory address 2E3 could be located in
the very first location in cache.



Cache Memory-Introduction

How, then, does the CPU locate data when it has been copied into cache?
The CPU uses a specific mapping scheme.
A mapping scheme “converts” the main memory address into a cache
location.
It gives special significance to the bits in the main memory address.
We first divide the bits into distinct groups we call fields.
Depending on the mapping scheme, we may have two or three fields.
It determines where the data is placed when it is originally copied into
cache.
It also provides a method for the CPU to find previously copied data when
searching cache.



Cache Mapping Schemes

 Before we discuss mapping schemes, it is important to understand how data is copied into cache.
• Main memory and cache are both divided into the same size
blocks (the size of these blocks varies).
• When a memory address is generated, cache is searched first to
see if the required word exists there.
• When the requested word is not found in cache, the entire main
memory block in which the word resides is loaded into cache.
 You should note that:
• The main memory is bigger than cache.
• So, there are more blocks in main memory than there are blocks in cache!
• Main memory blocks compete for cache locations!
Cache Mapping Schemes – Direct Mapped Cache
Direct Mapped Cache
Direct mapped cache assigns cache mappings using a modular approach.
If X is the location of a block in main memory, Y is the location of this block in
cache, and N is the total number of blocks in cache, X and Y are related by the
equation:

Y = X mod N
Note: Y is the remainder of the division: X/N
Example: A cache memory contains 10 blocks. How will the following main
memory blocks: 0, 6, 10, 15, 25, 32 be mapped to the cache?
Here N = 10 (cache blocks run from block 0 to block 9)
• Block 0 will be placed in cache block: 0 mod 10 = 0
• Block 6 will be placed in cache block: 6 mod 10 = 6
• Block 10 will be placed in cache block: 10 mod 10 = 0
• Block 15 will be placed in cache block: 15 mod 10 = 5
• Block 25 will be placed in cache block: 25 mod 10 = 5
• Block 32 will be placed in cache block: 32 mod 10 = 2
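
The modular mapping above is easy to check in code; a minimal Python sketch:

# Direct mapping: cache block Y = X mod N, with N = 10 cache blocks.
N = 10
for x in [0, 6, 10, 15, 25, 32]:
    print(f"main memory block {x} -> cache block {x % N}")
# Prints 0->0, 6->6, 10->0, 15->5, 25->5, 32->2, matching the example.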
Cache Mapping Schemes – Direct Mapped Cache

In the last example, blocks 5, 15, 25, 35, … are all placed in Block 5 in cache!
How does the CPU know which block actually resides in cache block 5 at any
given time?
A tag identifies each block that is copied to cache.
This tag is stored with the block, inside the cache.
A valid bit is also added to each cache block to indicate whether the block holds valid data.
Cache Mapping Schemes – Direct Mapped Cache
To perform direct mapping, the binary main memory address is partitioned into
three fields:
1) Offset (Word) field
• Uniquely identifies an address within a specific block (a unique word)
• The number of words/bytes in each block dictates the number of bits in the
offset field
Example: If a block of memory contains 8 = 2^3 words, we need 3 bits in the offset
field to identify (address) one of these 8 words in the block.

2) Block field
• It must select a unique block of cache
• The number of blocks in cache dictates the number of bits in the block field
Example: If a cache contains 16 = 2^4 blocks, we need 4 bits to identify (address)
one of these 16 blocks.

3) Tag field
• Whatever is left over!
• Do not forget that: when a block of memory is copied to cache, this tag is
stored with the block and uniquely identifies this block.
Cache Mapping Schemes – Direct Mapped Cache
Example 1: Assume a byte-addressable memory consists of 2^14 bytes, cache has 16
blocks, and each block has 8 bytes. How many bits do we have in the tag, block, and
offset fields?
• The number of memory blocks is: 2^14/8 = 2^14/2^3 = 2^11 blocks
- Each main memory address requires 14 bits. These are divided into three
fields as follows:
- We have 8 = 2^3 bytes in each block, so we need 3 bits to identify one of
these bytes: the rightmost 3 bits reflect the offset field.
- We have 16 = 2^4 blocks in cache. We need 4 bits to select a specific block
in cache, so the block field consists of the middle 4 bits.
- The remaining 7 bits make up the tag field (14 - (4 + 3)).

Figure 6.3: Explanation of each field in Direct Mapping
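
The field widths in Example 1 follow from base-2 logarithms of the block and cache sizes. A small Python sketch of the calculation (the variable names are our own):

import math

# Direct-mapped field widths for Example 1 (14-bit byte addresses).
address_bits, cache_blocks, bytes_per_block = 14, 16, 8
offset_bits = int(math.log2(bytes_per_block))          # 3 bits
block_bits  = int(math.log2(cache_blocks))             # 4 bits
tag_bits    = address_bits - block_bits - offset_bits  # 14 - (4 + 3) = 7
print(tag_bits, block_bits, offset_bits)               # -> 7 4 3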



Cache Mapping Schemes – Direct Mapped Cache
Example 2: Suppose a computer using direct mapped cache has 2^20 words of main
memory, and a cache of 32 blocks, where each cache block contains 16 words.
a. How many blocks of main memory are there?
b. What is the format of a memory address as seen by the cache, i.e., what are the
sizes of the tag, block, and word fields?
c. To which cache block will the memory reference 0DB63 (hex) map?

Answer:
a. 2^20/2^4 = 2^16 blocks
b. 20-bit addresses with 11 bits in the tag field, 5 in the block field, and 4 in the word
field
c. 0DB63 = 00001101101 10110 0011, so the block field is 10110, which is block 22
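
Part (c) can be reproduced with shifts and masks. A Python sketch using the field sizes from part (b); the variable names are our own:

# Decode a 20-bit direct-mapped address: 11-bit tag, 5-bit block, 4-bit word.
address = 0x0DB63
word  = address & 0xF              # low 4 bits
block = (address >> 4) & 0x1F      # next 5 bits
tag   = address >> 9               # remaining 11 bits
print(f"tag={tag:011b} block={block:05b} ({block}) word={word:04b}")
# -> tag=00001101101 block=10110 (22) word=0011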



Cache Mapping Schemes – Direct Mapped Cache

Summary:
• Direct mapped cache maps main memory blocks in a
modular fashion to cache blocks. The mapping depends
on:
- The number of bits in the main memory address (how many
addresses exist in main memory)
- The number of blocks in cache (which determines the size of
the block field)
- How many addresses (either bytes or words) are in a block (which
determines the size of the offset field)



Cache Mapping Schemes – Direct Mapped Cache

Summary:
• Direct mapped cache is not as expensive as other caches
because the mapping scheme does not require any
searching.
- Each main memory block has a specific location to which it maps
in cache.
- A main memory address is converted to a cache address.
- The block field identifies one unique cache block.
- The CPU knows “a priori” the cache block number in which it may
find needed data.



Cache Mapping Schemes – Fully associative Cache

Fully associative Cache

In a fully associative cache scheme, a main memory block can be placed anywhere in cache.
The only way to find a block mapped this way is to search all of cache!
This requires the entire cache to be built from associative memory so it
can be searched in parallel.
A single search must compare the requested tag to ALL tags in cache to
determine whether the desired data block is present in cache.
Associative memory requires special hardware to allow associative
searching, and is, thus, quite expensive.
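
In software terms, a fully associative lookup amounts to comparing the requested tag against every stored tag; hardware does all the comparisons in parallel, but a sequential Python sketch (with made-up tags and data) conveys the idea:

# Sketch of a fully associative lookup; entries are (valid, tag, data).
cache = [
    (True,  0b00001101101, "block A"),
    (True,  0b00000000111, "block B"),
    (False, 0,             None),      # invalid entry, never matches
]

def lookup(requested_tag):
    for valid, tag, data in cache:     # hardware: one comparator per entry
        if valid and tag == requested_tag:
            return data                # hit
    return None                        # miss

print(lookup(0b00000000111))           # -> block B (a hit)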



Cache Mapping Schemes – Fully associative Cache

Using associative mapping, the main memory address is partitioned into two pieces: the tag and the word (offset).
As for direct mapped cache, the tag must be stored with each block in
cache.
The word (offset) specifies a given word in the block.

Figure 6.4 shows the general format of the fully associative mapping
scheme.

Figure 6.4: Fully associative format



Cache Mapping Schemes – Fully associative Cache

Example 1: For a memory configuration with 2^14 words, a fully associative cache with 16 blocks, and blocks of 8 words:

Answer:
• We have 8 = 2^3 words in each block. The word (offset) field
consists of 3 bits.
• The tag field is the remaining 11 bits (14 - 3).

Figure 6.5: Detailed format of example 1



Cache Mapping Schemes – Fully associative Cache
Example 2: Suppose that a computer using fully associative cache has 2^16
words of main memory and a cache of 64 blocks, where each cache block
contains 32 words.
a. How many blocks of main memory are there?
b. What is the format of a memory address as seen by the cache, i.e.,
what are the sizes of the tag and word fields?
c. To which cache block will the memory reference F8C9 map?
Answer:
a. 2^16/2^5 = 2^11 blocks.
b. 16-bit addresses with 11 bits in the tag field and 5 in the word field.
c. Since the cache is fully associative, the block can map anywhere.

Figure 6.6: Detailed format of example 2


Cache Mapping Schemes – Fully associative Cache

 With direct mapping, if a block already occupies the cache location where a new block must be placed, the block currently in cache is removed.
• It is written back to main memory if it has been modified, or simply
overwritten if it has not been changed.

 With fully associative mapping, when cache is full, we need a replacement algorithm to decide which block we wish to throw out of cache.
• We call this our victim block.
• Examples of replacement algorithms: Least Recently Used (LRU), First
In First Out (FIFO).
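
A minimal Python sketch of the LRU policy, using an OrderedDict to track recency; this illustrates the policy only, not how cache hardware implements it, and the 4-block capacity is an assumed value:

from collections import OrderedDict

CAPACITY = 4
cache = OrderedDict()                  # keys = block numbers, oldest first

def access(block):
    if block in cache:
        cache.move_to_end(block)       # hit: mark as most recently used
        return "hit"
    if len(cache) >= CAPACITY:
        cache.popitem(last=False)      # evict the victim (least recently used)
    cache[block] = True                # load the new block
    return "miss"

for b in [1, 2, 3, 4, 1, 5]:           # block 5 evicts block 2, the LRU victim
    print(b, access(b))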



Cache Mapping Schemes – Set associative Cache

Why move to a set-associative mapping scheme?
• Fully associative mapping is not restrictive. It allows a block from main
memory to be placed anywhere in cache.
- However, it requires a larger tag to be stored with the block (which
results in a larger cache)
- It requires special hardware for searching all blocks in cache
simultaneously (which implies a more expensive cache).
• Set associative cache mapping is a scheme somewhere in the
middle.
• Set associative cache mapping is a combination of direct and
associative mapping schemes.
- It combines the advantages of direct mapping (low cost) and fully
associative mapping (not restrictive).



Cache Mapping Schemes – Set associative Cache

 Set associative Cache
Set associative cache mapping is a combination of direct and associative mapping schemes.
It combines the advantages of direct mapping (low cost) and fully associative mapping (not restrictive).

 How does the N-way set associative cache mapping scheme work?
 The cache memory is divided into sets of N blocks
 Instead of mapping to a single cache block (as in direct-mapping
scheme), an address maps to a set of N blocks in cache
 Once the desired set is located, the cache is treated as associative
memory
• The tag of the main memory address is compared to the tags of each
valid block in the set.



Cache Mapping Schemes – Set associative Cache
In set-associative cache mapping, the main memory address is partitioned
into three pieces: the tag field, the set field, and the word field.
• The tag and word (offset) fields assume the same roles as before.
• The set field indicates into which cache set the main memory block maps.
Figure 6.7 shows the general format of the N-way set associative mapping scheme.

Figure 6.7: N-way Set Associative format

For example, a 2-way set associative cache can be conceptualized as shown in figure 6.8.

Figure 6.8: Set associative logical View


Cache Mapping Schemes – Set associative Cache

Example 1: A main word-addressable memory contains 2^14 words. A
cache contains 16 blocks, where each block contains 8 words. Show how
the main memory address is partitioned if the system uses a 2-way set
associative mapping scheme.
Answer:
• A memory address is composed of 14 bits.
• The cache consists of a total of 16 blocks, and each set has 2 blocks
(since we are using 2-way).
• The number of sets in cache is: 16/2 = 8 sets.
• To address one of the 8 = 2^3 sets we need 3 bits. Therefore, the set field is 3
bits.
• Each block contains 8 = 2^3 words, so the word field is 3 bits.
• The remaining 14 - (3 + 3) = 8 bits make up the tag field.

Figure 6.9: Detailed format of example 1
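
The width calculations in Example 1 can be expressed directly in code; a Python sketch using the numbers above (the variable names are our own):

import math

# Field widths for N-way set associative mapping (values from Example 1).
address_bits, cache_blocks, words_per_block, ways = 14, 16, 8, 2
sets      = cache_blocks // ways                 # 16 / 2 = 8 sets
set_bits  = int(math.log2(sets))                 # 3 bits
word_bits = int(math.log2(words_per_block))      # 3 bits
tag_bits  = address_bits - set_bits - word_bits  # 14 - (3 + 3) = 8 bits
print(tag_bits, set_bits, word_bits)             # -> 8 3 3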


Cache Mapping Schemes – Set associative Cache

Example 2: Assume a system’s memory has 64M words. Blocks are 64
words in length and the cache consists of 16K blocks. Show the
format for a main memory address assuming a 4-way set associative
cache mapping scheme. Be sure to include the fields as well as their
sizes.
Answer:
Each address has 26 bits, and there are 8 in the tag field, 12 in the set
field, and 6 in the word field.
N.B.: the set field is 12 bits since 16K/4 (since 4-way) = 4K sets, which is 2^12.

Figure 6.10: Detailed format of example 2



All-in-one Example

To wrap things up, let us look at an example that covers all three schemes.
Example:
Suppose a byte-addressable memory contains 1MB and the
cache consists of 32 blocks, where each block contains 16
bytes.
Using the schemes below, specify their different fields, and
determine where the main memory address 326A0 (hex) maps to
in cache by specifying either the cache block or cache set:
• A direct mapping scheme is used.
• A fully associative mapping scheme is used.
• A 4-way set associative mapping scheme is used.



Example - Solution
Direct mapping:
- Each main memory address requires 20 bits. These are divided
into three fields as follows:
- We have 16 = 2^4 bytes in each block, so we need 4 bits to
identify one of these bytes: the rightmost 4 bits reflect the
offset field.
- We have 32 = 2^5 blocks in cache. We need 5 bits to select a
specific block in cache, so the block field consists of the
middle 5 bits.
- The remaining 11 bits make up the tag field.

- The address 326A0 maps to cache block 01010 (block 10, or A in hex).


Example - Solution

Fully associative:
- We have 16 = 2^4 bytes in each block, so we need 4 bits to
identify one of these bytes: the rightmost 4 bits reflect the
offset field.
- The remaining 16 bits make up the tag field.

- We cannot determine to which cache block the memory address
326A0 maps. The address can map anywhere in cache: the whole
cache needs to be searched before the desired data can be found
(by comparing the tag in the address to all tags in cache).
Example - Solution

4-way set associative:
- We have 16 = 2^4 bytes in each block, so we need 4 bits to identify one
of these bytes: the rightmost 4 bits reflect the offset field.
- The cache consists of a total of 32 blocks, and each set has 4 blocks
(since we are using 4-way).
- The number of sets in cache is: 32/4 = 8 sets.
- To address one of the 8 = 2^3 sets we need 3 bits. Therefore, the set
field is 3 bits.
- The remaining 20 - (4 + 3) = 13 bits make up the tag field.

- The main memory address 326A0 maps to cache set 010 (set 2). We
cannot know which block in the set is addressed; the set still needs to
be searched before the desired data can be found (by comparing the
tag of the address to all tags in cache set 2).
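
All three mappings of address 326A0 can be verified with one short Python script; the bit widths follow the solutions above (direct: 11/5/4, fully associative: 16/4, 4-way set associative: 13/3/4):

# Verify the three mappings of the 20-bit address 326A0 (hex).
address = 0x326A0

# Direct mapped: 11-bit tag, 5-bit block, 4-bit offset.
print("direct-mapped block:", (address >> 4) & 0x1F)   # -> 10

# Fully associative: 16-bit tag, 4-bit offset; no block field to compute.
print("fully associative tag:", bin(address >> 4))

# 4-way set associative: 13-bit tag, 3-bit set, 4-bit offset.
print("set associative set:", (address >> 4) & 0x7)    # -> 2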
End of chapter 6

Try to solve all exercises related to chapter 6


Good Luck on your Final Exam!
