Memory Hierarchy and Cache Mapping


Memory Hierarchy Design and its Characteristics
In computer system design, the memory hierarchy is an organization of memory that minimizes access time. The memory hierarchy was developed based on a program behavior known as locality of reference. The levels of the hierarchy are described below.

This Memory Hierarchy Design is divided into 2 main types:


1. External Memory or Secondary Memory –
Comprises magnetic disk, optical disk, and magnetic tape, i.e. peripheral storage devices that the processor accesses via an I/O module.
2. Internal Memory or Primary Memory –
Comprises main memory, cache memory, and CPU registers. This is directly accessible by the processor.

Characteristics of computer memory system:

1) Location

CPU registers
Cache
Internal memory (main)
External memory (secondary)

2) Capacity

Word size: typically equal to the number of bits used to represent a number and to the instruction length.
For an address of length A bits, the number of addressable units is 2^A.
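The address/capacity relation above can be sketched in a couple of lines of Python (the 16-bit example is an assumed illustration, not from the text):

```python
def addressable_units(address_bits: int) -> int:
    """Number of addressable units reachable with an A-bit address: 2**A."""
    return 2 ** address_bits

# Example: a 16-bit address space can address 2**16 = 65,536 units.
print(addressable_units(16))  # 65536
```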

3) Unit of Transfer
Word
Block

4) Access Method

* Sequential access
Access must be made in a specific linear sequence.
Time to access an arbitrary record is highly variable.
* Direct access
Individual blocks or records have a unique address based on physical location.
Access is made by going directly to the general vicinity of the desired information, followed by some searching.
Access time is still variable, but not as variable as with sequential access.
* Random access
Each addressable location has a unique, physical location.
Access is made directly to the desired location.
Access time is constant and independent of prior accesses.
* Associative
Desired units of information are retrieved by comparing a sub-part of their contents rather than by address;
no location information is needed.
Most useful for searching.

5) Performance

* Access Time (Latency)


For random-access memory, latency is the time it takes to perform a read or write operation; that is, the time from the instant the address is presented to the memory to the instant the data have been stored or made available for use.

* Memory Cycle Time


Access time plus any additional time required before a second access can begin (refresh time, for
example).

* Transfer Rate
Generally measured in bit/second.

Cache Memory is a special, very high-speed memory. It is used to speed up the system and keep pace with the high-speed CPU. Cache memory is costlier than main memory or disk
memory but more economical than CPU registers. Cache memory is an extremely fast memory type
that acts as a buffer between RAM and the CPU.

Cache memory is used to reduce the average time to access data from the Main memory. The
cache is a smaller and faster memory which stores copies of the data from frequently used main
memory locations. There are various different independent caches in a CPU, which store
instructions and data.

Levels of memory:
● Level 1 or Registers –
Registers hold the data and instructions that the CPU is working on at that instant. Commonly used registers include the accumulator, program counter, and address register.
● Level 2 or Cache memory –
Cache is faster than main memory; data is temporarily stored here for faster access.
● Level 3 or Main Memory –
This is the memory on which the computer works currently. It is small in size compared to secondary memory, and once power is off,
data no longer stays in this memory.
● Level 4 or Secondary Memory –
This is external memory, which is not as fast as main memory, but data stays permanently in
this memory.

Types of Cache –

● Primary Cache –
A primary cache is always located on the processor chip. This cache is small and its
access time is comparable to that of processor registers.
● Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the memory. It is
referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the
processor chip.

Cache Performance: When the processor needs to read or write a location in main memory,
it first checks for a corresponding entry in the cache.

● If the processor finds that the memory location is in the cache, a cache hit has occurred
and data is read from cache
● If the processor does not find the memory location in the cache, a cache miss has
occurred. For a cache miss, the cache allocates a new entry and copies in data from
main memory, then the request is fulfilled from the contents of the cache.

The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.

Hit ratio = hits / (hits + misses) = number of hits / total accesses
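The hit-ratio formula above translates directly into code; this is a minimal sketch with assumed example counts:

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Hit ratio = hits / (hits + misses) = hits / total accesses."""
    total = hits + misses
    return hits / total if total else 0.0

# Example (assumed numbers): 950 hits out of 1000 total accesses.
print(hit_ratio(950, 50))  # 0.95
```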

Cache Memory Design

There are a few basic design elements that serve to classify and differentiate cache
architectures. They are listed down:
1. Block Size
2. Cache Size
3. Mapping Function
4. Replacement Algorithm
5. Write Policy

Cache Size:
Even relatively small caches can have a significant impact on performance.
Block Size:
Block size is the unit of data exchanged between cache and main memory. As the block size
increases from very small to larger sizes, the hit ratio will at first increase because of the
principle of locality: data in the neighborhood of a referenced word are likely to be referenced in
the near future. As the block size increases, more useful data are brought into the cache.
Mapping Function:

When a new block of data is read into the cache, the mapping function determines which
cache location the block will occupy. Two constraints affect the design of the
mapping function. First, when one block is read in, another may have to be replaced.

Replacement Algorithm:
The replacement algorithm chooses, within the constraints of the mapping function,
which block to replace when a new block is to be loaded into the cache and the
cache already has all slots filled with other blocks. We would like to replace the block that is
least likely to be needed again in the near future. Although it is impossible to
identify such a block, a reasonably effective strategy is to replace the block that has been in the
cache longest with no reference to it (least recently used, LRU).
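A least-recently-used replacement policy can be sketched in a few lines of Python; this toy model (names and sizes are illustrative assumptions, not from the text) tracks only block numbers, not actual data:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the block that has gone unreferenced the longest."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block number -> data, oldest first

    def access(self, block: int, data=None) -> bool:
        """Return True on a hit, False on a miss (loading the block)."""
        if block in self.blocks:
            self.blocks.move_to_end(block)       # hit: mark most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)      # miss, cache full: evict LRU block
        self.blocks[block] = data
        return False

cache = LRUCache(2)
cache.access(1); cache.access(2); cache.access(1)
cache.access(3)                    # cache full: evicts block 2, the LRU block
print(list(cache.blocks))          # [1, 3]
```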

Write Policy:

If the contents of a block in the cache are altered, it is necessary to write
the block back to main memory before replacing it. The write policy dictates when the memory
write operation takes place. At one extreme, the write occurs every time the block is
updated (write-through). At the other extreme, the write occurs only when the block is replaced (write-back).

Cache Mapping:

There are three different types of mapping used for the purpose of cache memory which are as
follows: Direct mapping, Associative mapping, and Set-Associative mapping. These are
explained below.

Direct Mapping –
The simplest technique, known as direct mapping, maps each block of main memory into only
one possible cache line; that is, each memory block is assigned to a specific line in
the cache. If a line is already occupied by a memory block when a new block needs to be
loaded, the old block is discarded. The address is split into two parts, an index field and a tag
field. The cache stores the tag field along with the data, while the index selects the cache line.
Direct mapping's performance is directly proportional to the hit ratio.

For purposes of cache access, each main memory address can be viewed as consisting of
three fields. The least significant w bits identify a unique word or byte within a block of main
memory. In most contemporary machines, the address is at the byte level. The remaining s bits
specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of
(s − r) bits (the most significant portion) and a line field of r bits. This latter field identifies one of the
m = 2^r lines of the cache.
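The tag/line/word split described above can be sketched with a few bitwise operations; the 16-bit address and field widths in the example are assumptions for illustration:

```python
def split_address(addr: int, r: int, w: int):
    """Split a main memory address into direct-mapping fields (tag, line, word).

    The low w bits select the word within a block, the next r bits select
    one of the 2**r cache lines, and the remaining high bits form the tag.
    """
    word = addr & ((1 << w) - 1)
    line = (addr >> w) & ((1 << r) - 1)
    tag = addr >> (w + r)
    return tag, line, word

# Example with assumed widths: 16-bit address 0xABCD, r = 8 line bits, w = 4 word bits.
print(split_address(0xABCD, r=8, w=4))  # (10, 188, 13)
```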
Associative Mapping –
In this type of mapping, associative memory is used to store both the content and the address
of the memory word. Any block can go into any line of the cache. This means that the
word-id bits are used to identify which word in the block is needed, and the tag becomes
all of the remaining bits. This enables the placement of any block at any line in the
cache memory. It is considered to be the fastest and the most flexible mapping form.

Set-associative Mapping –
This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct
mapping are removed. Set-associative mapping addresses the problem of possible thrashing in the direct
mapping method. It does this by allowing a block to map not to exactly one line in the cache, but
to any line within a small group of lines called a set. Set-associative mapping thus allows each index
address in the cache to correspond to two or more blocks of main memory. Set-associative cache
mapping combines the best of direct and associative cache mapping techniques.
In this case, the cache consists of a number of sets, each of which consists of a number of
lines. The relationships are m = v × k and i = j mod v, where i is the cache set number, j is the
main memory block number, m is the number of lines in the cache, v is the number of sets, and
k is the number of lines per set.
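Set selection in a set-associative cache follows the standard relation i = j mod v (block j maps to set i, where v is the number of sets); a minimal sketch with an assumed 8-set cache:

```python
def cache_set(block_number: int, num_sets: int) -> int:
    """Set-associative placement: block j maps to set i = j mod v."""
    return block_number % num_sets

# Example (assumed): 8 sets; block 17 maps to set 1 and may then occupy
# any of that set's k lines, chosen by the replacement algorithm.
print(cache_set(17, 8))  # 1
```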
Application of Cache Memory –

1. Usually, the cache memory can store a reasonable number of blocks at any given time,
but this number is small compared to the total number of blocks in the main memory.
2. The correspondence between the main memory blocks and those in the cache
is specified by a mapping function.
