UNIT 4 Memory
SRAM
SRAM memories consist of circuits capable of retaining the stored information as
long as power is applied; this type of memory therefore requires constant power.
SRAM memories are used to build cache memory.
SRAM Memory Cell: Static memories (SRAM) are memories that consist of circuits
capable of retaining their state as long as power is on; memories of this kind are
called volatile memories. The figure below shows a cell diagram of SRAM. A latch is
formed by two inverters connected as shown in the figure. Two transistors, T1 and T2,
connect the latch to two bit lines. These transistors act as switches that can be
opened or closed under the control of the word line, which is driven by the address
decoder. When the word line is at the 0 level, the transistors are turned off and the
latch retains its information. For example, the cell is in state 1 if the logic value at
point A is 1 and at point B is 0. This state is retained as long as the word line is not
activated.
For a read operation, the word line is activated by the address applied to the address
decoder. The activated word line closes both transistors (switches) T1 and T2, and the
bit values at points A and B are transmitted to their respective bit lines. The sense/write
circuit at the end of the bit lines sends the output to the processor.
For a write operation, the address provided to the decoder activates the word line,
closing both switches. The bit value to be written into the cell is then placed on the
bit lines by the sense/write circuit and stored in the cell.
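The word-line-gated read/write behaviour described above can be sketched as a toy Python model (SRAMCell is a hypothetical name; this models the latch logic only, not the transistor circuit):

```python
class SRAMCell:
    """Toy model of an SRAM cell: a latch gated by a word line."""

    def __init__(self):
        self.a = 0  # logic value at point A (point B is its complement)

    def write(self, word_line, bit):
        # The write takes effect only while the word line closes T1/T2.
        if word_line == 1:
            self.a = bit

    def read(self, word_line):
        # With the word line at 0, the transistors are off and the bit
        # lines see nothing; the latch simply retains its state.
        if word_line == 0:
            return None
        return self.a  # bit line A; bit line B would carry 1 - self.a


cell = SRAMCell()
cell.write(word_line=1, bit=1)   # store state 1 (A = 1, B = 0)
cell.write(word_line=0, bit=0)   # ignored: word line not activated
print(cell.read(word_line=1))    # -> 1
```

Note how a write attempted with the word line at 0 leaves the stored state untouched, matching the "state is retained as long as the word line is not activated" behaviour above.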
DRAM
DRAM stores binary information in the form of electric charge on capacitors. The
stored charge tends to leak away over time, so the capacitors must be periodically
refreshed to retain their contents. Main memory is generally made up of DRAM chips.
DRAM Memory Cell: Although SRAM is very fast, it is expensive because every cell
requires several transistors. DRAM is relatively less expensive, since each cell uses
only one transistor and one capacitor, as shown in the figure below, where C is the
capacitor and T is the transistor. Information is stored in a DRAM cell in the form of
a charge on the capacitor, and this charge must be periodically refreshed.
To store information in this cell, transistor T is turned on and an appropriate voltage is
applied to the bit line. This causes a known amount of charge to be stored in the
capacitor. After the transistor is turned off, the capacitor, by its nature, starts
to discharge. Hence, the information stored in the cell can be read correctly only if it is
read before the charge on the capacitor drops below some threshold value.
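The charge-decay-and-threshold behaviour can be sketched numerically in Python. The leak rate and threshold below are illustrative numbers, not device physics:

```python
CHARGE_FULL = 1.0    # charge placed on the capacitor at write time
THRESHOLD = 0.5      # minimum charge for a correct read
LEAK_PER_TICK = 0.9  # fraction of charge remaining after each time step

def decay(charge, ticks):
    """Charge remaining after the capacitor leaks for `ticks` steps."""
    return charge * (LEAK_PER_TICK ** ticks)

charge = decay(CHARGE_FULL, 3)   # shortly after the write
print(charge > THRESHOLD)        # True: read before the threshold is crossed

charge = decay(CHARGE_FULL, 8)   # too long without a refresh
print(charge > THRESHOLD)        # False: the cell must be refreshed first
```

A refresh simply reads the cell and rewrites it, restoring the charge to CHARGE_FULL before it falls below THRESHOLD.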
Types of DRAM
There are mainly 5 types of DRAM:
1. Asynchronous DRAM (ADRAM): The DRAM described above is the
asynchronous type DRAM. The timing of the memory device is controlled
asynchronously. A specialized memory controller circuit generates the necessary
control signals to control the timing. The CPU must take into account the delay in
the response of the memory.
2. Synchronous DRAM (SDRAM): These RAM chips’ access is synchronized directly
with the CPU’s clock, so the memory chips are ready for operation when the CPU
expects them to be. These memories operate on the CPU-memory bus without
imposing wait states. SDRAM is commercially available as modules that
incorporate multiple SDRAM chips to form the required module capacity.
3. Double-Data-Rate SDRAM (DDR SDRAM): This faster version of SDRAM
performs its operations on both edges of the clock signal, whereas standard
SDRAM performs its operations only on the rising edge. Since they transfer data
on both edges of the clock, the data transfer rate is doubled. To access data at
this high rate, the memory cells are organized into two groups, each of which is
accessed separately.
4. Rambus DRAM (RDRAM): The RDRAM provides a very high data transfer rate
over a narrow CPU-memory bus. It uses various speedup mechanisms, like
synchronous memory interface, caching inside the DRAM chips and very fast
signal timing. The Rambus data bus width is 8 or 9 bits.
5. Cache DRAM (CDRAM): This is a special type of DRAM with an on-chip cache
memory (SRAM) that acts as a high-speed buffer for the main DRAM.
Difference between SRAM and DRAM
The points below summarize some of the differences between SRAM and DRAM:
- Cell structure: SRAM stores each bit in a latch built from several transistors; DRAM stores each bit as charge on a capacitor, using one transistor and one capacitor per cell.
- Refresh: SRAM retains its data as long as power is on and needs no refresh; DRAM charge leaks away, so cells must be periodically refreshed.
- Speed and cost: SRAM is faster but more expensive per bit; DRAM is slower but less expensive per bit.
- Usage: SRAM is used to build cache memory; DRAM is used to build main memory.
Levels of memory:
Level 1 or Registers –
Registers hold the data that the CPU is operating on immediately. Commonly used
registers include the accumulator, the program counter and address registers.
Level 2 or Cache memory –
The fastest memory, with the shortest access time, in which data is temporarily
stored for faster access.
Level 3 or Main memory –
The memory on which the computer currently works. It is limited in size, and once
power is off its data no longer stays in this memory.
Level 4 or Secondary memory –
External memory that is not as fast as main memory, but in which data stays
permanently.
Cache Performance:
When the processor needs to read or write a location in main memory, it first checks for
a corresponding entry in the cache.
If the processor finds that the memory location is in the cache, a cache hit has
occurred and the data is read from the cache.
If the processor does not find the memory location in the cache, a cache miss has
occurred. On a miss, the cache allocates a new entry and copies in the data from
main memory; the request is then fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity
called Hit ratio.
Hit ratio = hit / (hit + miss) = no. of hits/total accesses
Cache performance can be improved by using a larger cache block size, using higher
associativity, reducing the miss rate, reducing the miss penalty, and reducing the time
to hit in the cache.
Cache Mapping:
Three different types of mapping are used for cache memory: direct mapping,
associative mapping, and set-associative mapping. These are explained below.
1. Direct Mapping –
The simplest technique, known as direct mapping, maps each block of main
memory into only one possible cache line: each memory block is assigned to a
specific line in the cache. If a line is already occupied by a memory block when a
new block needs to be loaded, the old block is trashed. An address is split into
two parts, an index field and a tag field. The cache stores the tag field, while the
rest of the address is used to locate the data. Direct mapping's performance is
directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory address can be viewed as
consisting of three fields. The least significant w bits identify a unique word or byte
within a block of main memory; in most contemporary machines, the address is at
the byte level. The remaining s bits specify one of the 2^s blocks of main memory.
The cache logic interprets these s bits as a tag of s-r bits (the most significant
portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines
of the cache.
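The three-field address interpretation above can be sketched in Python. The field widths s, r and w in the example are illustrative, not tied to any particular machine:

```python
def split_address(addr, s, r, w):
    """Split a main-memory address into (tag, line, word) fields.

    w: word/byte bits, r: line bits, s - r: tag bits.
    """
    word = addr & ((1 << w) - 1)                    # least significant w bits
    line = (addr >> w) & ((1 << r) - 1)             # next r bits select the line
    tag = (addr >> (w + r)) & ((1 << (s - r)) - 1)  # remaining s - r bits
    return tag, line, word

# 18-bit address: s = 16 block bits, r = 12 line bits, w = 2 word bits
print(split_address(0x3F7A9, s=16, r=12, w=2))  # -> (15, 3562, 1)
```

Two blocks whose addresses differ only in the tag field map to the same cache line, which is exactly the collision case direct mapping suffers from.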
2. Associative Mapping –
In this type of mapping, the associative memory is used to store content and
addresses of the memory word. Any block can go into any line of the cache. This
means that the word id bits are used to identify which word in the block is needed,
but the tag becomes all of the remaining bits. This enables the placement of any
word at any place in the cache memory. It is considered to be the fastest and the
most flexible mapping form.
3. Set-associative Mapping –
This form of mapping is an enhanced form of direct mapping in which the
drawbacks of direct mapping are removed. Set-associative mapping addresses
the problem of possible thrashing in the direct-mapping method: instead of
having exactly one line that a block can map to in the cache, a few lines are
grouped together to form a set, and a block in memory can then map to any one
of the lines of a specific set. Set-associative mapping thus allows each index
address in the cache to correspond to two or more blocks of main memory. It
combines the best of the direct and associative cache mapping techniques.
In this case, the cache consists of a number of sets, each of which consists of a
number of lines. The relationships are
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
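The relationships above can be checked with a few lines of Python (the cache sizes used are made-up numbers):

```python
def cache_set(j, v):
    """Set number i = j mod v for main-memory block j with v sets."""
    return j % v

m, k = 128, 4              # 128 lines total, 4 lines per set (4-way)
v = m // k                 # m = v * k, so v = 32 sets
print(v)                   # -> 32
print(cache_set(137, v))   # block 137 maps to set 137 mod 32 = 9
```

With k = 1 this degenerates to direct mapping (one line per set), and with v = 1 it becomes fully associative (any block in any line), which is why set-associative mapping sits between the two.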
Types of Cache –
1. Primary Cache –
A primary cache is always located on the processor chip. This cache is small
and its access time is comparable to that of processor registers.
2. Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is
also housed on the processor chip.
Locality of reference –
Since the cache is smaller than main memory, which part of main memory should
be given priority and loaded into the cache is decided based on locality of
reference.
Types of locality of reference:
1. Spatial locality of reference –
If a word is referenced, there is a high chance that words in close proximity to it
will be referenced soon. This is why, on a miss, a complete block is loaded into
the cache rather than a single word: the neighbouring words are likely to be
needed next.
2. Temporal locality of reference –
If a word is referenced, there is a high chance that the same word will be
referenced again in the near future. Replacement algorithms such as least
recently used (LRU) exploit this by keeping recently accessed blocks in the
cache and evicting the ones that have gone longest without being used.
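Both kinds of locality can be seen in a toy cache simulation in Python: loading whole blocks exploits spatial locality, and LRU eviction exploits temporal locality. The block size and capacity below are made-up numbers:

```python
from collections import OrderedDict

BLOCK_SIZE = 4   # words per block
CAPACITY = 2     # blocks the cache can hold

def simulate(addresses):
    """Count cache hits for a sequence of word addresses."""
    cache, hits = OrderedDict(), 0
    for addr in addresses:
        block = addr // BLOCK_SIZE         # whole blocks are cached
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) >= CAPACITY:
                cache.popitem(last=False)  # evict the least recently used
            cache[block] = True
    return hits

# Sequential access (spatial locality): words 0..7 touch only 2 blocks,
# so 6 of the 8 accesses hit in the cache.
print(simulate(range(8)))  # -> 6
```

A scattered access pattern such as [0, 16, 32, 0] touches a different block every time and gets no hits at all with this small cache, which is why real programs benefit so much from locality.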