
CACHE MEMORY ORGANIZATION

06/28/2024
INTRODUCTION
 Every computer contains several types of memory devices
to store the instructions and data required for its operation.
 The memory devices of a computer system are of four types

i) CPU registers
ii) cache memory
iii) main memory
iv) secondary memory

 The main memory stores the programs and data that are in
active use. Main memory operates at a much lower speed
compared to the CPU. The cache memory serves as a buffer
between the main memory and the CPU so that the CPU
can operate near its maximum speed.
BLOCK DIAGRAM OF A MEMORY UNIT

 A memory unit is a collection of storage cells with the associated circuits
needed to transfer information into and out of the device.

[fig-1: a 2^m × w-bit (2^m-word) RAM with an m-bit address bus A, a w-bit
data bus D, and control lines Output enable (OE), Write enable (WE) and
Chip select (CS)]

 Fig-1 shows a 2^m × w-bit RAM IC and its control lines.

 The RAM has an m-bit unidirectional address bus A and a w-bit bidirectional
data bus D.

 The three control signals are

1. Write enable (WE)
2. Chip select (CS)
3. Output enable (OE)

MEMORY ADDRESSES

Binary address   Decimal   Memory content

0000000000       0         1000111111001010
0000000001       1         1000101011110001
……               …..       ……
1111111101       1021      1111001101011010
1111111110       1022      1100110010101111
1111111111       1023      1110110010101100

(Fig-2 Content of a 1024×16 Memory)
The memory unit is specified by the number of words it contains and the
number of bits in each word. The address lines select one particular word.

Each word in memory is assigned an identification number called an
address, starting from 0 up to 2^k − 1, where k is the number of address lines.

The selection of a specific word inside the memory is done by applying the
k-bit address to the address lines.

A decoder accepts this address and opens the path needed to select the
word specified.
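The relation between word count and address width can be sketched in a few lines of Python; the function name is ours, and the 1024×16 memory is the one shown in fig-2.

```python
import math

def address_lines(num_words: int) -> int:
    """Number of address lines k needed to select one of num_words words (k = log2)."""
    return int(math.log2(num_words))

# For the 1024x16 memory of fig-2: 1024 words need k = 10 address lines,
# giving addresses 0 .. 2^10 - 1 = 1023.
k = address_lines(1024)
print(k)           # 10
print(2**k - 1)    # 1023
```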

MEMORY HIERARCHY

(fig-3 memory hierarchy)


The computer memory is organized into a hierarchy.
i) Inboard memory - internal to the computer.
      a) CPU registers - high-speed memory located within the CPU; few in
number.
      b) Cache memory - logically positioned between the CPU and main
memory. Its storage capacity is less than that of main memory, and it is faster
than main memory.
      c) Main memory - stores the programs that are in active use. Accessing
data from main memory is slower because of its large capacity.

ii) Outboard storage - cannot be accessed directly by the CPU.
      a) Magnetic disk - much larger in capacity and slower than main
memory. It stores the system programs and large data files that are not
continuously required by the CPU.

iii) Offline storage - data storage devices that are not under the control of a
processing unit.
      a) Magnetic tape - information stored here is rarely accessed; off-line
storage is less expensive than secondary storage.
Three main characteristics of memory are
1. capacity
2. access time
3. cost
As one goes down the memory hierarchy, the following occur:

Decreasing cost per bit:    Co1 > Co2 > Co3 > Co4 > Co5 > Co6

Increasing capacity:        Ca1 < Ca2 < Ca3 < Ca4 < Ca5 < Ca6

Increasing access time:     t1 < t2 < t3 < t4 < t5 < t6

Decreasing frequency of access of the memory by the processor:

f1 > f2 > f3 > f4 > f5 > f6

CACHE MEMORY
 Cache memory serves as a buffer between the CPU and main memory. It is
intended to give faster memory speed and a larger memory capacity at a
lower price.

(Fig-4 cache memory and main memory)


 Fig-4(a) shows a relatively large and slow main memory together with a
smaller and faster cache memory.

 Fig-4(b) shows the use of multiple levels of cache. The L2 cache is
slower and typically larger than the L1 cache, and the L3 cache is slower and
typically larger than the L2 cache.

 Cache hit - when the CPU finds a requested data item in the cache, it is
called a cache hit.
 Cache miss - when the CPU does not find the data it needs in the cache, it is
called a cache miss.
 Hit ratio - the percentage of accesses for which the data is found in the cache.
 Access time - the total time taken to bring the required data from the
memory to the CPU.

 In fig-4(a), if
 t1 = time required to read from cache memory
 t2 = time required to read from main memory
 h  = the hit ratio
 Access time = h·t1 + (1−h)·(t1+t2)

In fig-4(b), if
 t1 = time required to read the data from the L1 cache
 t2 = time required to read the data from the L2 cache
 t3 = time required to read the data from the L3 cache
 t4 = time required to read the data from main memory
 h1 = hit ratio in the L1 cache
 h2 = hit ratio in the L2 cache
 h3 = hit ratio in the L3 cache
 Access time = h1·t1 + h2·(1−h1)·(t1+t2) + h3·(1−h1)·(1−h2)·(t1+t2+t3)
               + (1−h1)·(1−h2)·(1−h3)·(t1+t2+t3+t4)
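The two access-time formulas can be checked with a small Python sketch; the function names are ours, and the timing numbers in the usage line are illustrative assumptions, not values from the slides.

```python
def access_time_1level(h: float, t1: float, t2: float) -> float:
    """Average access time for the single-cache system of fig-4(a)."""
    return h * t1 + (1 - h) * (t1 + t2)

def access_time_3level(h1, h2, h3, t1, t2, t3, t4):
    """Average access time for the three-level cache system of fig-4(b)."""
    return (h1 * t1
            + h2 * (1 - h1) * (t1 + t2)
            + h3 * (1 - h1) * (1 - h2) * (t1 + t2 + t3)
            + (1 - h1) * (1 - h2) * (1 - h3) * (t1 + t2 + t3 + t4))

# Assumed example: 1 ns cache, 50 ns main memory, 90% hit ratio
# -> 0.9*1 + 0.1*(1+50) = 6.0 ns on average.
print(access_time_1level(0.9, 1, 50))  # 6.0
```

Note how a modest miss rate still dominates the average: the 10% of misses contribute 5.1 of the 6.0 ns.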

CACHE ORGANISATION

[figure-5: basic structure of a cache - a cache tag memory and a cache data
memory, with address, control and data lines, and a hit signal]

The figure above shows the principal components of a cache. Memory words
are stored in a cache data memory and are grouped into small pages called
cache blocks or lines. The contents of the cache's data memory are thus
copies of a set of main memory blocks. Each cache block is marked with its
block address, referred to as its tag.
CACHE AND MAIN MEMORY
STRUCTURE

(Fig-6 Cache/Main Memory Structure)


Fig-6 depicts the cache/main memory structure. Main memory consists of up
to 2^n addressable words, with each word having a unique n-bit address.

For mapping purposes, memory is considered to consist of a number of
fixed-length blocks of K words each.

There are M = 2^n/K blocks in main memory. The cache consists of a number
of lines; each line holds one block of K words, plus a few tag bits.
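The block count M = 2^n/K can be computed directly; a minimal sketch, with the function name ours and the example values taken from the 512-word memory used later:

```python
def num_blocks(n: int, K: int) -> int:
    """M = 2^n / K fixed-length K-word blocks in an n-bit address space."""
    return 2**n // K

# The 512-word memory (n = 9) of the later examples, with 16-word blocks:
print(num_blocks(9, 16))  # 32 blocks
```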

MAPPING MEMORY TO CACHE
There are fewer cache lines than main memory blocks, so an algorithm
is needed for mapping main memory blocks into cache lines.

Three techniques can be used


1. Direct mapping
2. Set associative mapping
3. Fully associative mapping

DIRECT MAPPING
Direct mapping is the simplest technique. It maps each block of main
memory into only one possible cache line. The mapping is expressed as

P = K mod N
Where P = cache line number
      K = main memory block number
      N = number of lines in the cache
The Kth block of main memory has to be placed in the (K mod N)th cache
position.
The physical address is divided into three fields:

physical address

TAG | CACHE BLOCK OFFSET | WORD OFFSET


Direct Mapping
Cache Line Table

Cache line   Main memory blocks held

0            0, m, 2m, 3m, … 2^s − m
1            1, m+1, 2m+1, … 2^s − m + 1
…            …
m−1          m−1, 2m−1, 3m−1, … 2^s − 1

(Fig-7 Direct mapping cache line table)


Explanation of Direct mapping -

[Fig-8 Direct mapping cache: main memory MM = 512 words (M = 32 blocks,
numbered 0 … 31) mapped K mod 4 onto a cache CM = 64 words (N = 4 lines,
numbered 00 … 11), each line holding the tag of the block it currently
stores, e.g. tag 011 in line 00]


Let the main memory size MM = 512 words
The cache memory size CM = 64 words
 The block size = 16 words
 Then number of blocks in main memory
 M = MM/block size = 512/16 = 32 blocks
 Number of blocks in cache memory N = CM/block size = 64/16 = 4 blocks
 Number of blocks in a particular cluster = M/N = 32/4 = 8

TAG:   000  001  010  011  100  101  110  111

Cache block 0 (00): main memory blocks { 0,  4,  8, 12, 16, 20, 24, 28 }
Cache block 1 (01): main memory blocks { 1,  5,  9, 13, 17, 21, 25, 29 }
Cache block 2 (10): main memory blocks { 2,  6, 10, 14, 18, 22, 26, 30 }
Cache block 3 (11): main memory blocks { 3,  7, 11, 15, 19, 23, 27, 31 }

Let the cached block numbers be 5, 7, 12 and 26


Example - 1
Suppose the processor wants to access the data content at physical address
(400)10 in memory. Find out whether it is a cache hit or a miss.
Sol -
The binary equivalent of (400)10 = (110010000)2

1 1 0 | 0 1 | 0 0 0 0

TAG = log(M/N) bits | cache block offset = log N bits | word offset = log P bits

Since tag 110 is not present in cache block number 01, it is a miss.
Example - 2
The processor wants to access the data content at memory location (199)10.
We have to find whether it is a hit or a miss.
Explanation -
Binary equivalent of (199)10 = (011000111)2

0 1 1 | 0 0 | 0 1 1 1

The tag bits 011 are present in block no 00, so it is a hit.

Advantages -
1. Simplest mapping technique.
2. Fewer tag bits and fewer tag comparators required.
Disadvantages -
1. Two active blocks that map to the same line continually replace each
other, requiring more block movement.
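The two worked examples can be sketched in Python. The address split (3 tag bits, 2 block bits, 4 word-offset bits) follows the 512-word/64-word configuration above; the tag values in `cache_tags` are assumed contents chosen to be consistent with the worked examples.

```python
# Direct-mapped lookup for a 512-word memory and 64-word cache:
# a 9-bit address splits into tag (3 bits), cache block (2 bits, N = 4 lines)
# and word offset (4 bits, 16-word blocks).
TAG_BITS, BLOCK_BITS, WORD_BITS = 3, 2, 4

# Tags currently held per cache line (assumed example contents).
cache_tags = {0b00: 0b011, 0b01: 0b111, 0b10: 0b110, 0b11: 0b001}

def lookup(addr: int) -> bool:
    """Return True on a cache hit, False on a miss."""
    block = (addr >> WORD_BITS) & (2**BLOCK_BITS - 1)   # which line the block maps to
    tag = addr >> (WORD_BITS + BLOCK_BITS)              # identifies the block
    return cache_tags.get(block) == tag

print(lookup(400))  # False: tag 110 not in line 01 (example-1, a miss)
print(lookup(199))  # True:  tag 011 in line 00   (example-2, a hit)
```

Only one comparator is needed, because the block field already pins down the single line that could hold the address.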

ASSOCIATIVE MAPPING
In associative mapping, any block of main memory can be placed anywhere
in cache memory. In this case, the physical memory address is divided
into a tag field and a word field.
The tag field uniquely identifies a block of main memory. To determine
whether a block is in the cache or not, every cache block's tag has to be
tested. Hence the number of tag comparators equals the number of cache blocks.

physical address
log M = s          log P = w

TAG | word offset

06/28/2024
Address length = (s + w) bits
Number of addressable units = 2^(s+w)
Block size = line size = P = 2^w words or bytes
Number of blocks in main memory = M = 2^s
Size of tag = s bits

(Fig-9 associative memory mapping)

Advantages -
1. Each block of main memory can be mapped into any block of cache
memory.
Disadvantages -
1. More tag bits, and hence more tag memory required.
2. More comparators required, and hence it is more costly.
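A fully associative lookup simply compares the tag (here the whole block number) against every cached tag; in hardware all comparisons run in parallel. A minimal sketch, with the cache contents assumed for illustration:

```python
# Fully associative lookup: the tag is the entire block number, and a hit
# occurs if it matches the tag of ANY cache line.
WORD_BITS = 4  # 16-word blocks, as in the running example

def assoc_lookup(addr: int, cached_tags: set) -> bool:
    """Hit iff the address's block number matches some cached tag."""
    tag = addr >> WORD_BITS          # word offset occupies the low WORD_BITS bits
    return tag in cached_tags

# Assumed contents: blocks 3, 12 and 25 are currently cached.
print(assoc_lookup(199, {3, 12, 25}))   # True: 199 >> 4 = block 12
```

The `in` test over a set stands in for the bank of parallel comparators, one per cache block, that the slide counts as the main cost of this scheme.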
SET ASSOCIATIVE MAPPING
Set associative mapping has the advantages of both direct mapping
and associative mapping.
Here the cache is divided into logical sets.
In 4-way set-associative mapping, each set is allocated four cache blocks.
The physical address is divided into three fields:
1. word offset
2. set offset
3. tag information

physical address

TAG                 Set offset    Word offset
log (MK/N) bits     log K bits    log P bits

 The cache consists of a number of sets, each of which consists of a number
of lines. The relationships are

m = v × k
i = j modulo v
Where
 i = cache set number
 j = main memory block number
 m = number of lines in the cache
 v = number of sets
 k = number of lines in each set
 This is referred to as k-way set-associative mapping

Explanation of Set associative memory

[Fig-10 Set associative cache memory: main memory MM = 512 words
(M = 32 blocks, numbered 0 … 31) mapped block-number mod 2 onto a cache
CM = 64 words (N = 4 blocks organized as 2 sets of 2 lines), each line
holding the tag of the block it currently stores]

The figure above shows a 2-way set-associative memory.

 Let the main memory size MM = 512 words

 The cache memory size CM = 64 words
 The block size = 16 words
 Then number of blocks in main memory
 M = MM/block size = 512/16 = 32 blocks
 Number of blocks in cache memory N = CM/block size = 64/16 = 4 blocks
 Number of sets = 2
 Number of blocks in a particular set = 2

TAG:  0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111

set0:    0    2    4    6    8   10   12   14   16   18   20   22   24   26   28   30
set1:    1    3    5    7    9   11   13   15   17   19   21   23   25   27   29   31
Example - 1
The processor wants to access the data of memory location (122)10.
Find whether it is a hit or a miss.

Solution -

The binary equivalent of (122)10 = (001111010)2

  tag       set offset   word offset
0 0 1 1   | 1          | 1 0 1 0

Set no 1 does not contain the tag bits 0011, so it is a miss.

 Example - 2

The processor wants to access the data at memory address (442)10.
Find whether it is a hit or a miss.
Solution -
The binary equivalent of (442)10 = (110111010)2

1 1 0 1 | 1 | 1 0 1 0

The tag bits 1101 are present in set no 1, so it is a hit.

Advantages -
1. The number of tag comparators required equals the number of blocks within
a set.

Disadvantages -
1. Complex in structure.
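The two set-associative examples can be sketched the same way as before. The split (4 tag bits, 1 set bit for v = 2 sets, 4 word-offset bits) matches the fig-10 configuration; the tags in `sets` are assumed contents chosen to agree with the worked examples.

```python
# 2-way set-associative lookup: a 9-bit address splits into tag (4 bits),
# set offset (1 bit, v = 2 sets) and word offset (4 bits).
WORD_BITS, SET_BITS = 4, 1

# Tags currently held in each set (assumed example contents).
sets = {0: {0b0000, 0b0010}, 1: {0b1101, 0b0001}}

def sa_lookup(addr: int) -> bool:
    """Hit iff the tag is present among the lines of the set the address maps to."""
    s = (addr >> WORD_BITS) & (2**SET_BITS - 1)   # which set
    tag = addr >> (WORD_BITS + SET_BITS)          # compared against that set's lines
    return tag in sets[s]

print(sa_lookup(122))  # False: tag 0011 not in set 1 (example-1, a miss)
print(sa_lookup(442))  # True:  tag 1101 in set 1    (example-2, a hit)
```

Only the k lines of one set are compared, which is why the comparator count equals the number of blocks within a set rather than the whole cache.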

UPDATION TECHNIQUES
The side effect of a memory hierarchy is data inconsistency: the same
information may exist in different versions at different places.
A proper updation technique reduces this problem.
 Write-back updation
 Write-through updation

Write-back updation
Here the cache acts like a buffer, receiving data from the processor and
writing data back to main memory whenever the system bus is available.
Advantage -
The processor is freed up to continue with other tasks while main memory is
updated at a later time.
Disadvantage -
The cost and complexity of the cache increase.

Write-through updation
The processor writes to main memory directly rather than only to the cache.
The cache may update its contents as the data comes through from the
processor. The write operation does not end until the processor has
written the data to main memory.
Advantage -
The cache does not have to be complex, which thus makes it less
expensive.
Disadvantage -
The processor must wait until main memory accepts the data before
moving on to its next task.

CONCLUSION -
The goal of a memory hierarchy is to obtain a cost per bit close to that of
the least expensive memory and an access time close to that of the fastest
memory.
To reduce the speed difference between the CPU and main memory, cache
memory is used.
Various updation techniques are used to maintain the consistency of
data in memory.

THANK YOU
