Computer Organization: Lec #10: Cache Memory Bnar Mustafa

The document discusses cache memory and its role in improving computer performance. It defines cache as a small, fast memory located between the CPU and main memory that stores frequently accessed instructions and data. When the CPU requests data, it first checks the cache; if the data is present (a cache hit), the slower main memory is not accessed. If the data is not present (a cache miss), it is fetched from main memory and stored in the cache for fast future access. The document also covers cache organizations such as direct mapping, associative mapping, and set associative mapping, along with caching principles used to optimize cache performance.

Computer Organization

Lec #10 : Cache Memory

Bnar Mustafa
[email protected]

Spring 2022



What is Cache?
• “A small amount of computer memory with a very short (fast) access time, used for storage of frequently used instructions or data.”

• Cache memory plays a significant role in reducing the processing time of a program by providing swift access to data and instructions. Cache memory is small and fast, while main memory is big and slow.

• When a program references a memory location, it is likely to reference that memory location again soon.
• A small but fast cache memory, in which the contents of the most
commonly accessed locations are maintained, can be placed between the
CPU and the main memory.
• When a program executes, the cache memory is searched first.

• When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache:

a. If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.

b. If the processor does not find the memory location in the cache, a cache miss has occurred. On a cache miss, the cache allocates a new entry and copies in the data from main memory; the request is then fulfilled from the contents of the cache.

• Cache performance can be improved by using a larger cache block size, using higher associativity, reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
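A standard way to quantify these trade-offs is the average memory access time, AMAT = hit time + miss rate × miss penalty. The sketch below is a minimal illustration of that formula; the cycle counts and miss rates are made-up values, not figures from the lecture.

# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
# The numbers used here are illustrative assumptions, not lecture values.

def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
    """Average number of cycles per memory access for a single-level cache."""
    return hit_time_cycles + miss_rate * miss_penalty_cycles

baseline = amat(hit_time_cycles=1, miss_rate=0.05, miss_penalty_cycles=100)   # about 6 cycles
improved = amat(hit_time_cycles=1, miss_rate=0.02, miss_penalty_cycles=100)   # about 3 cycles
print(f"baseline AMAT: {baseline:.1f} cycles, with a lower miss rate: {improved:.1f} cycles")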

Caching Principle
The purpose of cache memory is to provide the fastest access to resources without compromising on the size and price of the memory. A processor attempting to read a byte of data first looks in the cache memory. If the byte does not exist in the cache, the processor searches for the byte in main memory. Once the byte is found in main memory, the block containing a fixed number of bytes is read into the cache memory and then passed on to the processor. The probability of finding subsequent bytes in the cache memory increases, because the block read into the cache earlier contains bytes relevant to the process.
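As a rough illustration of this principle, the following sketch (a simplified software model, not part of the lecture) loads the whole block containing a requested byte into a small cache on a miss, so that requests for neighbouring bytes hit afterwards. The block size and the reference sequence are arbitrary assumptions.

# Toy model of the caching principle: on a miss, the whole block containing
# the requested byte is copied into the cache, so nearby bytes hit afterwards.
BLOCK_SIZE = 4
cache = {}                        # block number -> the bytes of that block
main_memory = bytes(range(256))   # toy 256-byte main memory

def read_byte(address):
    block_no = address // BLOCK_SIZE
    if block_no in cache:
        print(f"address {address}: cache hit")
    else:
        print(f"address {address}: cache miss, loading block {block_no}")
        start = block_no * BLOCK_SIZE
        cache[block_no] = main_memory[start:start + BLOCK_SIZE]
    return cache[block_no][address % BLOCK_SIZE]

for addr in (8, 9, 10, 11, 12):   # 8 misses, 9-11 hit, 12 misses again
    read_byte(addr)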

Why is cache memory fast?
• Faster electronics are used
• A cache memory has fewer locations than a main memory, which reduces the access time
• The cache is placed both physically closer and logically closer to the CPU than the main memory



Types of Cache
• L1/L2/L3 Cache
• RAM Cache
• Disk Cache
• Software Level Cache

Time taken by a program to execute with a cache depends on (see the sketch below):

• The number of instructions needed to perform the task.
• The average number of CPU cycles needed to perform the desired task.
• The CPU’s cycle time.
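A minimal sketch of that relationship, CPU time = instruction count × average cycles per instruction (CPI) × cycle time; the instruction count, CPI, and clock rate below are illustrative assumptions, not lecture values.

# CPU time = instruction count * average cycles per instruction (CPI) * cycle time.
def cpu_time(instruction_count, cpi, cycle_time_seconds):
    return instruction_count * cpi * cycle_time_seconds

# e.g. 1 million instructions at an average of 1.5 cycles each
# on a 1 GHz clock (1 ns cycle time)
t = cpu_time(instruction_count=1_000_000, cpi=1.5, cycle_time_seconds=1e-9)
print(f"{t:.4f} seconds")   # 0.0015 seconds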



L1/L2/L3 Cache (Cache Memory)
• Cache located close to the CPU that stores recently accessed data from RAM
• Holds the instructions to be executed next and variables for the CPU
• Level 1 (L1) cache is stored on the CPU
• L2/L3 cache is stored near, but not on, the CPU
• L1/L2/L3 cache is more expensive than RAM



Typical Cache Organization
The cache connects to the processor via data, control, and address lines. The data and address lines also attach to data and address buffers, which attach to the system bus from which main memory is reached.

• Cache hit: the data and address buffers are disabled and communication takes place only between the processor and the cache, with no system bus traffic (the data was found in the cache and is delivered from the cache to the processor).

• Cache miss: the desired address is loaded onto the system bus and the data are returned through the data buffer to both the cache and the processor.

Mapping Function
• Because there are fewer cache lines than main memory blocks, algorithms are needed for mapping main memory blocks to cache lines:
• Direct mapping
• Associative mapping
• Set associative mapping

Example:
• If the cache holds 64 KB = 2^16 bytes, and data is transferred between main memory and the cache in blocks of 4 bytes each, this means that the cache is organized as 16K = 2^14 lines of 4 bytes each.
• The main memory consists of 16 MB, with each byte directly addressable by a 24-bit address (2^24 = 16 MB); thus, for mapping purposes, we can consider main memory to consist of 4M blocks of 4 bytes each.
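The short sketch below simply re-checks this sizing arithmetic in code (number of cache lines = cache size / block size, number of main memory blocks = memory size / block size).

# Re-checking the sizing of the example: a 64 KB cache with 4-byte blocks has
# 2^14 lines; a 16 MB (24-bit) main memory with 4-byte blocks has 4M blocks.
CACHE_SIZE  = 64 * 1024          # 2**16 bytes
BLOCK_SIZE  = 4                  # 2**2 bytes per block
MEMORY_SIZE = 16 * 1024 * 1024   # 2**24 bytes (24-bit addresses)

cache_lines   = CACHE_SIZE // BLOCK_SIZE    # 16384 = 2**14
memory_blocks = MEMORY_SIZE // BLOCK_SIZE   # 4194304 = 4M = 2**22
print(cache_lines, memory_blocks)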



Direct Mapping Techniques
Map each block of main memory into only one possible cache line:
i = j mod m
i : cache line number
j : main memory block number
m : number of lines in the cache

Each main memory address can be viewed as three fields.

Using the last example with direct mapping:

Address length of main memory = 24 bits
Number of words in each block = 4 (block size)
Size of cache = 2^14 lines
w (which depends on block size): block size = 2^2, so w = 2 bits (the power)
r (line): number of bits for addressing a line in the cache = 14
Field size = 24 bits
s = field width - w; s = 24 - 2 = 22 bits
Tag = (s - r) = (22 - 14) = 8 bits

Final organization of the main memory address field:

Tag    | Line Address | Word
8 bits | 14 bits      | 2 bits

Number of blocks in main memory = 2^(s+w) / 2^w = 2^24 / 2^2 = 2^22 = 4M blocks
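The following sketch splits a 24-bit address into the 8-bit tag, 14-bit line, and 2-bit word fields of this example. The helper function is only an illustration, and the sample address is borrowed from the associative-mapping example later in the lecture.

# Direct mapping for the running example:
# 24-bit address = tag (8 bits) | line (14 bits) | word (2 bits).
WORD_BITS = 2                             # 4-byte blocks -> 2-bit word field
LINE_BITS = 14                            # 2**14 cache lines -> 14-bit line field
TAG_BITS  = 24 - LINE_BITS - WORD_BITS    # remaining 8 bits form the tag

def split_direct(address):
    """Split a 24-bit address into (tag, line, word) for direct mapping."""
    word = address & ((1 << WORD_BITS) - 1)
    line = (address >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag  = address >> (WORD_BITS + LINE_BITS)
    return tag, line, word

tag, line, word = split_direct(0x16339C)
print(f"tag={tag:#x}, line={line:#x}, word={word}")   # tag=0x16, line=0xce7, word=0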


Why use direct mapping (advantages)?
• Simple
• Inexpensive to implement (cheap)
Disadvantages of direct mapping
• Fixed cache location for every given block.

2. Associative Mapping
• This type overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache. In this case, the cache control logic interprets a memory address simply as a tag field and a word field.



• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = undetermined
• Size of tag = s bits

Main memory address field:

Using the same example as in the previous type: the main memory address field is 24 bits, with 4 words in each block.



Example:
With associative mapping, a main memory address consists of a 22-bit tag and 2 bits for the word, so for the following memory address:
• 16339C h is mapped to tag number 058CE7 h.
• With associative mapping, there is flexibility as to which block to replace when a new block is read into the cache.
• The disadvantage of this technique is the complexity of the circuitry required to examine the tags of all cache lines in parallel.
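The sketch below checks this decomposition in code: with a 22-bit tag and a 2-bit word field, address 16339C h does yield tag 058CE7 h. The helper function itself is only an illustration of the field split.

# Associative mapping for the running example:
# 24-bit address = tag (22 bits) | word (2 bits).
WORD_BITS = 2   # 4-byte blocks -> 2-bit word field; the remaining 22 bits are the tag

def split_associative(address):
    """Split a 24-bit address into (tag, word) for associative mapping."""
    word = address & ((1 << WORD_BITS) - 1)
    tag  = address >> WORD_BITS
    return tag, word

tag, word = split_associative(0x16339C)
assert (tag, word) == (0x058CE7, 0)   # matches the 16339Ch -> 058CE7h example
print(f"tag={tag:#08x}, word={word}")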



3. Set Associative Mapping
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages. In this case, the cache is divided into v sets, each of which consists of k lines.
• Let i : cache set number
• j : main memory block number
• m : number of lines in the cache
• With set associative mapping, block Bj can be mapped into any of the lines of set i, where i = j mod v.
• Therefore, the memory address field is simply partitioned into three fields:



• Main memory address length = (s + w) bits
• Number of addressable units in main memory = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in a set = k
• Number of sets = v = 2^d
• Number of lines in cache = k * v = k * 2^d
• Size of tag = (s - d) bits

Example: using the same main memory and cache sizes as in the previous examples, with set associative mapping, suppose there are two lines in each set:



k = 2
Number of lines in cache = 2^14 (the 64 KB cache is organized as 16K lines of 4 bytes each)
So:
Number of lines in the cache = k * v
2^14 = 2 * v, so v = 2^14 / 2 = 2^13
d = 13 (the set field consists of 13 bits)
Tag = (s - d)
Block = 4 words, so w = 2 bits and s = 24 - 2 = 22 bits

Tag = 22 - 13 = 9 bits
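A minimal sketch of the resulting field split, tag (9 bits) | set (13 bits) | word (2 bits); the sample address is re-used from the associative-mapping example purely for illustration.

# Two-way set associative mapping for the running example:
# 24-bit address = tag (9 bits) | set (13 bits) | word (2 bits).
WORD_BITS = 2                            # 4-byte blocks -> 2-bit word field
SET_BITS  = 13                           # v = 2**13 sets of k = 2 lines each
TAG_BITS  = 24 - SET_BITS - WORD_BITS    # 9-bit tag

def split_set_associative(address):
    """Split a 24-bit address into (tag, set, word) for set associative mapping."""
    word   = address & ((1 << WORD_BITS) - 1)
    set_no = (address >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag    = address >> (WORD_BITS + SET_BITS)
    return tag, set_no, word

tag, set_no, word = split_set_associative(0x16339C)
print(f"tag={tag:#x}, set={set_no:#x}, word={word}")   # tag=0x2c, set=0xce7, word=0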



Associative Memory
• A memory unit accessed by its content is called an associative memory or content-addressable memory (CAM).
• This type of memory is accessed simultaneously and in parallel based on the data content rather than a specific address or location. If a word is written to associative memory, no address is given: the memory is capable of finding an empty, unused space in which to store the word. When a word, or part of a word, is specified for reading, the memory detects all words that match the specified content and marks them for reading.
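As a rough software analogy of CAM behaviour (a simulation, not real parallel hardware, and not part of the lecture), the sketch below compares a search key against every stored word and returns all matching locations; the stored words are arbitrary example values.

# Software simulation of a content-addressable memory (CAM): the search key is
# compared against every stored word and all matching locations are reported.
cam_words = [0x12A4, 0x0F00, 0x12A4, 0x7C3E]   # illustrative stored words

def cam_search(words, key):
    """Return the indices of every stored word whose content matches the key."""
    return [i for i, word in enumerate(words) if word == key]

print(cam_search(cam_words, 0x12A4))   # -> [0, 2]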

Virtual memory
• Virtual memory is not a storage unit; it is a technique. With virtual memory, even programs that are larger than the main memory can be executed.
• Virtual memory increases the effective capacity of main memory.
Good luck!

