IES 4 Memory System Mechanisms

Uploaded by MALATHI .L

Introduction to Embedded Systems

Memory System Mechanisms

Memory system mechanisms
• Caches.
• Memory management.
Cache in a memory system

[Figure: the CPU issues an address to the cache controller; the controller checks the cache and, on a miss, forwards the address to main memory. Data returns to the CPU from whichever level holds it.]
Cache operation
• Many main memory locations are mapped onto one cache
entry.
• May have caches for:
• instructions;
• data;
• data + instructions (unified).
• Memory access time is no longer deterministic.

Cache-related terms
• Cache hit: required location is in cache.
• Cache miss: required location is not in cache.
• Working set: set of locations used by program in a time
interval.
Types of misses
• Compulsory (cold): location has never been accessed.
• Capacity: working set is too large.
• Conflict: multiple locations in working set map to same
cache entry.
Memory system performance
• h = cache hit rate.
• t_cache = cache access time; t_main = main memory access time.
• Average memory access time:
  t_av = h·t_cache + (1 − h)·t_main

Multiple levels of cache

[Figure: the CPU accesses the L1 cache first; L1 misses go to the L2 cache.]
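The single-level formula above can be evaluated directly. The following sketch uses assumed example timings (1-cycle cache, 50-cycle main memory, 95% hit rate) that are not from the slides:

```python
def avg_access_time(h, t_cache, t_main):
    """Average memory access time for a single-level cache:
    t_av = h * t_cache + (1 - h) * t_main."""
    return h * t_cache + (1 - h) * t_main

# Assumed numbers: 1-cycle cache, 50-cycle main memory, 95% hit rate.
print(avg_access_time(0.95, 1, 50))  # ~3.45 cycles
```

Note how even a 5% miss rate more than triples the average access time relative to a pure cache hit, which is why hit rate dominates memory system performance.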
Multi-level cache access time
• h1 = L1 cache hit rate.
• h2 = rate of hits in either L1 or L2 (so h2 ≥ h1).
• Average memory access time:
  t_av = h1·t_L1 + (h2 − h1)·t_L2 + (1 − h2)·t_main
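As a sketch, the two-level formula can be computed with assumed timings (1-cycle L1, 8-cycle L2, 50-cycle main memory; these numbers are illustrative, not from the slides), taking h2 as the rate of hits in either level:

```python
def avg_access_time_2level(h1, h2, t_l1, t_l2, t_main):
    """Two-level average access time, where h1 is the L1 hit rate and
    h2 is the rate of hits in either L1 or L2 (so h2 >= h1):
    t_av = h1*t_L1 + (h2 - h1)*t_L2 + (1 - h2)*t_main."""
    return h1 * t_l1 + (h2 - h1) * t_l2 + (1 - h2) * t_main

# Assumed numbers: 1-cycle L1, 8-cycle L2, 50-cycle main memory.
print(avg_access_time_2level(0.90, 0.98, 1, 8, 50))  # ~2.54 cycles
```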
Replacement policies
• Replacement policy: strategy for choosing which cache entry to throw
out to make room for a new memory location.
• Two popular strategies:
• Random.
• Least-recently used (LRU).
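LRU replacement for one cache set can be sketched in a few lines. The class below is a hypothetical software model (the `LRUSet` name and structure are illustrative, not a real cache's implementation):

```python
from collections import OrderedDict

class LRUSet:
    """Minimal sketch of LRU replacement for one cache set of `ways` lines."""
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()  # tag -> data, least recently used first

    def access(self, tag):
        if tag in self.lines:             # hit: mark most recently used
            self.lines.move_to_end(tag)
            return True
        if len(self.lines) >= self.ways:  # miss with full set: evict LRU line
            self.lines.popitem(last=False)
        self.lines[tag] = None            # fill the line
        return False

s = LRUSet(2)
hits = [s.access(t) for t in [1, 2, 1, 3, 2]]
print(hits)  # [False, False, True, False, False]
```

In the trace, accessing tag 3 evicts tag 2 (the least recently used of {2, 1}), so the final access to 2 misses again.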

Cache organizations
• Fully-associative: any memory location can be stored anywhere in the cache (almost never implemented).
• Direct-mapped: each memory location maps onto exactly one cache entry.
• N-way set-associative: each memory location maps onto one set and can be stored in any of that set's n lines.
Cache performance benefits
• Keep frequently-accessed locations in fast cache.
• Cache retrieves more than one word at a time.
• Sequential accesses are faster after first access.
Direct-mapped cache

[Figure: each cache line holds a valid bit, a tag (e.g. 0xabcd), and a multi-byte data block. The address is split into tag, index, and offset fields: the index selects a line, the stored tag is compared with the address tag (a match with the valid bit set signals a hit), and the offset selects a byte within the block.]
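The tag/index/offset split in the figure can be sketched in code. The geometry below (256 lines of 32-byte blocks) is an assumption for illustration, not taken from the slides:

```python
# Sketch: splitting an address for a hypothetical direct-mapped cache
# with 256 lines of 32-byte blocks (assumed geometry).
OFFSET_BITS = 5    # 32-byte block -> 5 offset bits
INDEX_BITS = 8     # 256 lines -> 8 index bits

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0xABCD))  # (5, 94, 13)
```

A hit requires the tag stored in line 94 to equal 5 and the line's valid bit to be set; the offset 13 then selects the byte within the block.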
Write operations
• Write-through: immediately copy write to main
memory.
• Write-back: write to main memory only when location is
removed from cache.
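The two write policies can be contrasted with a toy model. The `Line` class and a dict standing in for main memory are hypothetical structures for illustration:

```python
# Sketch contrasting write-through and write-back with one cache line.
class Line:
    def __init__(self):
        self.addr, self.data, self.dirty = None, None, False

def write(line, memory, addr, data, write_through):
    line.addr, line.data = addr, data
    if write_through:
        memory[addr] = data   # update main memory immediately
    else:
        line.dirty = True     # defer until the line is evicted

def evict(line, memory):
    if line.dirty:            # write-back: copy to memory on eviction
        memory[line.addr] = line.data
    line.__init__()           # reset the line

mem = {}
ln = Line()
write(ln, mem, 0x10, 42, write_through=False)
print(0x10 in mem)  # False: write-back has not reached memory yet
evict(ln, mem)
print(mem[0x10])    # 42
```

Write-back reduces bus traffic when a location is written repeatedly, but main memory can be stale until the line is evicted.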
Direct-mapped cache locations
• Many locations map onto the same cache block.
• Conflict misses are easy to generate:
• Array a[] uses locations 0, 1, 2, …
• Array b[] uses locations 1024, 1025, 1026, …
• Operation a[i] + b[i] generates conflict misses.
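The thrashing pattern above can be demonstrated with a small simulator. The geometry (1024 one-word lines, arrays exactly 1024 words apart) is assumed to match the addresses in the bullets:

```python
# Sketch: a 1024-line direct-mapped cache with one-word blocks (assumed
# geometry). a[] starts at word 0 and b[] at word 1024, so a[i] and b[i]
# map to the same line and evict each other on every access.
LINES = 1024
cache_tags = [None] * LINES
misses = 0

def access(word_addr):
    global misses
    index, tag = word_addr % LINES, word_addr // LINES
    if cache_tags[index] != tag:   # conflict (or cold) miss
        misses += 1
        cache_tags[index] = tag    # replace the line

for i in range(100):      # compute a[i] + b[i] for 100 elements
    access(0 + i)         # a[i]
    access(1024 + i)      # b[i]

print(misses)  # 200: every single access misses
```

A 2-way set-associative cache would hold both lines in the same set and eliminate all but the compulsory misses, which motivates the next slide.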

Set-associative cache
• Built as a set of direct-mapped caches (banks).
• A cache request is sent to all banks simultaneously.
• If any bank has the location, the cache reports a hit.
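The parallel bank lookup can be sketched as follows (the banks are probed sequentially in software, but in hardware the comparisons happen at once; the 2-way, 4-set geometry is an assumption):

```python
# Sketch: an n-way set-associative lookup as n direct-mapped banks
# probed together. Assumed geometry: 2 ways, 4 sets, one-word blocks.
WAYS, SETS = 2, 4
banks = [[None] * SETS for _ in range(WAYS)]  # banks[way][set] = stored tag

def lookup(addr):
    index, tag = addr % SETS, addr // SETS
    return any(bank[index] == tag for bank in banks)  # hit if any way matches

banks[0][1] = 3        # pretend address 13 was cached (13 % 4 == 1, 13 // 4 == 3)
print(lookup(13))      # True
print(lookup(5))       # False: same set, different tag
```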

Example caches
• StrongARM:
• 16 Kbyte, 32-way, 32-byte block instruction cache.
• 16 Kbyte, 32-way, 32-byte block data cache (write-back).
• C55x:
• Various models have 16 KB or 24 KB of cache.
• Can be used as scratch pad memory.

Scratch pad memories


• Alternative to cache:
• Software determines what is stored in scratch pad.
• Provides predictable behavior at the cost of software control.
• C55x cache can be configured as scratch pad.

