UNIT 4 Memory

Different Types of RAM (Random Access Memory )


RAM (Random Access Memory) is the part of a computer's main memory that is directly
accessible by the CPU. Data can be read from and written to RAM, and the CPU can
access any of its locations directly (randomly). RAM is volatile in nature: if the power goes
off, the stored information is lost. RAM is used to store the data that is currently being
processed by the CPU, and most programs and modifiable data are kept in RAM.
Integrated RAM chips are available in two forms:
1. SRAM (Static RAM)
2. DRAM (Dynamic RAM)
The block diagram of a RAM chip is given below.

SRAM
The SRAM memories consist of circuits capable of retaining the stored information as
long as the power is applied. That means this type of memory requires constant power.
SRAM memories are used to build Cache Memory.
SRAM Memory Cell: Static memories (SRAM) consist of circuits capable of retaining
their state as long as power is on; they are therefore volatile memories. The figure below
shows the cell diagram of an SRAM cell. A latch is formed by two inverters connected as
shown in the figure. Two transistors T1 and T2 connect the latch to two bit lines. These
transistors act as switches that can be opened or closed under the control of the word
line, which is driven by the address decoder. When the word line is at the 0 level, the
transistors are turned off and the latch retains its information. For example, the cell is in
state 1 if the logic value at point A is 1 and at point B is 0. This state is retained as long
as the word line is not activated.
For a Read operation, the word line is activated by the address input to the address
decoder. The activated word line closes both transistors (switches) T1 and T2, so the bit
values at points A and B are transferred to their respective bit lines. The sense/write
circuit at the end of the bit lines sends the output to the processor.
For a Write operation, the address provided to the decoder activates the word line to
close both switches. The bit value that is to be written into the cell is then provided
through the sense/write circuit, and the signals on the bit lines are stored in the cell.
DRAM
DRAM stores binary information in the form of electric charge applied to capacitors.
The charge stored on the capacitors tends to leak away over time, so the capacitors
must be periodically refreshed to retain the data. Main memory is generally made up of
DRAM chips.
DRAM Memory Cell: Although SRAM is very fast, it is expensive because every cell
requires several transistors. DRAM is relatively less expensive because each cell uses
only one transistor and one capacitor, as shown in the figure below, where C is the
capacitor and T is the transistor. Information is stored in a DRAM cell in the form of a
charge on the capacitor, and this charge needs to be periodically refreshed. To store
information in the cell, transistor T is turned on and an appropriate voltage is applied to
the bit line. This causes a known amount of charge to be stored in the capacitor. After
the transistor is turned off, the capacitor starts to discharge. Hence, the information
stored in the cell can be read correctly only if it is read before the charge on the
capacitor drops below some threshold value.

Types of DRAM
There are mainly 5 types of DRAM:
1. Asynchronous DRAM (ADRAM): The DRAM described above is the
asynchronous type of DRAM. The timing of the memory device is not tied to the
system clock; a specialized memory controller circuit generates the necessary
control signals, and the CPU must take into account the delay in the response of
the memory.
2. Synchronous DRAM (SDRAM): The access of these RAM chips is directly
synchronized with the CPU's clock, so the memory chips remain ready for
operation when the CPU expects them to be ready. These memories operate on
the CPU-memory bus without imposing wait states. SDRAM is commercially
available as modules that incorporate multiple SDRAM chips and provide the
required capacity.
3. Double-Data-Rate SDRAM (DDR SDRAM): This faster version of SDRAM
performs its operations on both edges of the clock signal, whereas a standard
SDRAM performs its operations only on the rising edge. Since DDR SDRAM
transfers data on both edges of the clock, the data transfer rate is doubled. To
access the data at a high rate, the memory cells are organized into two groups,
each of which is accessed separately.
4. Rambus DRAM (RDRAM): The RDRAM provides a very high data transfer rate
over a narrow CPU-memory bus. It uses various speedup mechanisms, like
synchronous memory interface, caching inside the DRAM chips and very fast
signal timing. The Rambus data bus width is 8 or 9 bits.
5. Cache DRAM (CDRAM): This memory is a special type of DRAM with an
on-chip cache memory (SRAM) that acts as a high-speed buffer for the main
DRAM.
Difference between SRAM and DRAM
Some of the differences between SRAM and DRAM are listed below:
1. Cell construction: an SRAM cell is a latch built from several transistors; a DRAM cell
uses one transistor and one capacitor.
2. Refresh: SRAM retains its data as long as power is applied and needs no refresh;
DRAM must be refreshed periodically because the capacitor charge leaks away.
3. Speed: SRAM is faster; DRAM is slower.
4. Cost: SRAM is more expensive per bit; DRAM is cheaper.
5. Typical use: SRAM is used to build cache memory; DRAM is used to build main memory.

2D and 2.5D Memory organization


The internal structure of a memory, whether RAM or ROM, is made of memory cells,
each of which stores one bit. A group of bits (commonly 8 bits) makes up a word. The
memory is organized as a multidimensional array of rows and columns, in which each
cell stores a bit and a complete row contains a word. The size of the memory can be
expressed as
2^n = N
where n is the number of address lines and N is the total number of addressable words;
that is, there will be 2^n words.
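As a quick illustration of this relationship (the memory sizes below are arbitrary examples), the following Python snippet prints how many address lines a few memory sizes need.

import math

# n address lines can select 2^n words, so a memory of N words needs n = log2(N) lines
for N in (1024, 4096, 65536):
    print(f"{N} words need {int(math.log2(N))} address lines")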
2D Memory organization –
In a 2D organization the memory is divided into rows and columns, and each row
contains one word. In this organization there is a single decoder: a combinational circuit
with n input lines and 2^n output lines. One of the output lines selects the row whose
address is contained in the MAR, and the word stored in that row is then read or written
through the data lines.
2.5D Memory organization –
In a 2.5D organization the scenario is similar, but there are two decoders: a column
decoder and a row decoder. The column decoder selects the column and the row
decoder selects the row. The address from the MAR is split between the inputs of the
two decoders, which together select one cell. Through the bit-out line the data at that
location is read, and through the bit-in line data is written to that memory location.
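As a rough sketch of this address split (the array shape, names and example address below are assumptions made for illustration, not values from the notes):

def split_address(address, row_bits, col_bits):
    # The high-order row_bits drive the row decoder and the low-order col_bits
    # drive the column decoder of a 2.5D memory array.
    row = address >> col_bits                 # upper bits -> row decoder
    col = address & ((1 << col_bits) - 1)     # lower bits -> column decoder
    return row, col

# Example: a 1024-word memory arranged as a 32 x 32 array (5 row bits, 5 column bits)
print(split_address(0b1011000110, row_bits=5, col_bits=5))   # -> (22, 6)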

Read and Write Operations –


1. If the select line is in Read mode, the word addressed by the MAR is placed on
the data lines and read out.
2. If the select line is in Write mode, the data from the memory data register (MDR)
is written into the cell addressed by the memory address register (MAR).
3. The select line thus determines the location at which the read or write operation
takes place.
Comparison between 2D & 2.5D Organizations –
1. In a 2D organization the hardware is fixed, whereas in 2.5D the hardware can be changed.
2. A 2D organization requires more gates, while 2.5D requires fewer gates.
3. 2D is more complex than the 2.5D organization.
4. Error correction is not possible in the 2D organization, but in 2.5D error correction is easy.
5. 2D is more difficult to fabricate than the 2.5D organization.
Cache Memory in Computer Organization
Cache Memory is a special, very high-speed memory. It is used to speed up the system
and keep pace with the high-speed CPU. Cache memory is costlier than main memory
or disk memory but more economical than CPU registers. It is an extremely fast type of
memory that acts as a buffer between RAM and the CPU, holding frequently requested
data and instructions so that they are immediately available to the CPU when needed.
Cache memory is used to reduce the average time to access data from the main
memory. The cache is a smaller and faster memory which stores copies of the data
from frequently used main memory locations. There are several independent caches in
a CPU, which store instructions and data separately.

Levels of memory:
 Level 1 or Registers –
Registers are memory locations within the CPU itself in which data is stored for
immediate use. Commonly used registers include the accumulator, the program
counter and the address register.
 Level 2 or Cache memory –
A fast memory with a shorter access time than main memory, in which data is
temporarily stored for faster access.
 Level 3 or Main Memory –
The memory on which the computer works currently. It is comparatively small in
size, and once the power is off the data no longer stays in this memory.
 Level 4 or Secondary Memory –
External memory which is not as fast as main memory, but in which data stays
permanently.
Cache Performance:
When the processor needs to read or write a location in main memory, it first checks for
a corresponding entry in the cache.
 If the processor finds that the memory location is in the cache, a cache hit has
occurred and the data is read from the cache.
 If the processor does not find the memory location in the cache, a cache
miss has occurred. For a cache miss, the cache allocates a new entry and copies
in data from main memory, then the request is fulfilled from the contents of the
cache.
The performance of cache memory is frequently measured in terms of a quantity
called Hit ratio.
Hit ratio = hit / (hit + miss) = no. of hits/total accesses
We can improve cache performance by using a larger cache block size, higher
associativity, reducing the miss rate, reducing the miss penalty, and reducing the time
to hit in the cache.
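To make the hit ratio concrete, the Python sketch below (an illustration only; the block size, cache capacity, FIFO replacement policy and access pattern are all assumptions) counts hits and misses for a toy fully associative cache and reports the hit ratio.

def hit_ratio(accesses, cache_lines, block_size=32):
    # Toy fully associative cache with FIFO replacement: 'cache' holds the
    # resident block numbers, oldest first.
    cache = []
    hits = misses = 0
    for addr in accesses:
        block = addr // block_size
        if block in cache:
            hits += 1
        else:
            misses += 1
            if len(cache) == cache_lines:
                cache.pop(0)          # evict the oldest block
            cache.append(block)
    return hits / (hits + misses)

# Example: sweeping a small array four times gives a high hit ratio
addresses = list(range(0, 256, 4)) * 4
print(hit_ratio(addresses, cache_lines=8))   # -> 0.96875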
Cache Mapping:
There are three different types of mapping used for the purpose of cache memory which
are as follows: Direct mapping, Associative mapping, and Set-Associative mapping.
These are explained below.
1. Direct Mapping –
The simplest technique, known as direct mapping, maps each block of main
memory into only one possible cache line. In direct mapping, each memory block
is assigned to a specific line in the cache; if that line is already occupied when a
new block needs to be loaded, the old block is replaced. An address is split into
two parts, an index field and a tag field: the index selects the cache line, and the
tag is stored in the cache and compared on every access. Direct mapping's
performance is directly proportional to the hit ratio. (A small address-splitting
sketch for all three mappings is given after this list.)
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory address can be viewed as
consisting of three fields. The least significant w bits identify a unique word or byte
within a block of main memory; in most contemporary machines, the address is at
the byte level. The remaining s bits specify one of the 2^s blocks of main memory.
The cache logic interprets these s bits as a tag of s-r bits (the most significant
portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines
of the cache.
2. Associative Mapping –
In this type of mapping, an associative memory is used to store the content and
addresses of the memory words. Any block can go into any line of the cache. This
means that the word-id bits are used to identify which word in the block is needed,
while the tag becomes all of the remaining bits. This enables the placement of any
block at any place in the cache memory. It is considered the fastest and the most
flexible mapping form.
3. Set-associative Mapping –
This form of mapping is an enhanced form of direct mapping in which the
drawbacks of direct mapping are removed. Set-associative mapping addresses the
problem of possible thrashing in the direct mapping method: instead of having
exactly one line that a block can map to in the cache, a few lines are grouped
together to create a set, and a block in memory can then map to any one of the
lines of a specific set. Set-associative mapping thus allows each index address in
the cache to correspond to two or more words in main memory. It combines the
best of the direct and associative cache mapping techniques.
In this case, the cache consists of a number of sets, each of which consists of a
number of lines. The relationships are
m = v * k
i = j mod v

where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
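The three mappings differ only in how a block's address is carved into fields. The Python sketch below is an illustrative aid (the field widths and the example address are assumptions, not values from the notes); it computes where a byte address would land under direct, fully associative, and set-associative placement.

def direct_mapped(address, word_bits, line_bits):
    # Direct mapping: tag | line | word. A block can live in only one line (i = j mod m).
    word = address & ((1 << word_bits) - 1)
    line = (address >> word_bits) & ((1 << line_bits) - 1)
    tag = address >> (word_bits + line_bits)
    return tag, line, word

def fully_associative(address, word_bits):
    # Associative mapping: tag | word. A block can live in any line, so the tag is all remaining bits.
    word = address & ((1 << word_bits) - 1)
    return address >> word_bits, word

def set_associative(address, word_bits, set_bits):
    # Set-associative mapping: tag | set | word. A block can live in any line of set i = j mod v.
    word = address & ((1 << word_bits) - 1)
    set_index = (address >> word_bits) & ((1 << set_bits) - 1)
    tag = address >> (word_bits + set_bits)
    return tag, set_index, word

# Example: 32-byte blocks (5 word bits), 256 lines organised as 64 sets of 4 lines each
addr = 0x00012ABC
print(direct_mapped(addr, word_bits=5, line_bits=8))
print(fully_associative(addr, word_bits=5))
print(set_associative(addr, word_bits=5, set_bits=6))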

Application of Cache Memory –


1. Usually, the cache memory can store a reasonable number of blocks at
any given time, but this number is small compared to the total number of
blocks in the main memory.
2. The correspondence between the main memory blocks and those in the
cache is specified by a mapping function.

Types of Cache –
1. Primary Cache –
A primary cache is always located on the processor chip. This cache is small
and its access time is comparable to that of processor registers.
2. Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is
also housed on the processor chip.

Locality of reference –
Since the size of cache memory is small compared to main memory, which part of
main memory should be given priority and loaded into the cache is decided on the
basis of locality of reference.
Types of Locality of reference
1. Spatial Locality of reference
This says that if a memory location is referenced, there is a good chance that
locations in close proximity to that reference point will be referenced next, so the
neighbourhood of the referenced word is worth keeping in the cache.
2. Temporal Locality of reference
This says that a recently referenced word is likely to be referenced again soon,
which is why replacement algorithms such as Least Recently Used (LRU) are
applied. When a miss occurs, not just the requested word but the complete block
containing it is loaded into the cache, because the locality rules suggest that the
neighbouring words will be referred to next.
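To make the two kinds of locality concrete, the short Python sketch below (the sizes and access pattern are made-up examples) lists the addresses a simple loop over an array would touch: consecutive addresses within one pass show spatial locality, and the same addresses recurring on every pass show temporal locality.

# Addresses touched by reading elements 0..15 of an array of 4-byte elements, 3 times over
element_size, n_elements, passes = 4, 16, 3
accesses = [i * element_size for _ in range(passes) for i in range(n_elements)]

print(accesses[:5])        # spatial locality: [0, 4, 8, 12, 16] are adjacent
print(accesses.count(0))   # temporal locality: address 0 is referenced 3 times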

GATE Practice Questions –


Que-1: A computer has a 256 KByte, 4-way set associative, write back data cache
with the block size of 32 Bytes. The processor sends 32-bit addresses to the cache
controller. Each cache tag directory entry contains, in addition to the address tag, 2
valid bits, 1 modified bit and 1 replacement bit. The number of bits in the tag field
of an address is
(A) 11
(B) 14
(C) 16
(D) 27
Answer: (C)
Explanation: https://www.geeksforgeeks.org/gate-gate-cs-2012-question-54/
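A quick check of the arithmetic: the cache holds 256 KB / 32 B = 8192 blocks; with
4-way set associativity this gives 8192 / 4 = 2048 = 2^11 sets. A 32-byte block needs 5
offset bits, so the tag is 32 - 11 - 5 = 16 bits.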
Que-2: Consider the data given in the previous question. The size of the cache tag
directory is
(A) 160 Kbits
(B) 136 bits
(C) 40 Kbits
(D) 32 bits
Answer: (A)
Explanation: https://www.geeksforgeeks.org/gate-gate-cs-2012-question-55/
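A quick check of the arithmetic: each of the 8192 cache lines stores a 16-bit tag plus 2
valid bits, 1 modified bit and 1 replacement bit, i.e. 20 bits per entry, so the tag directory
needs 8192 * 20 = 163840 bits = 160 Kbits.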
Que-3: An 8KB direct-mapped write-back cache is organized as multiple blocks,
each of size 32 bytes. The processor generates 32-bit addresses. The cache
controller maintains the tag information for each cache block, comprising the
following.
1 Valid bit
1 Modified bit
As many bits as the minimum needed to identify the memory block mapped in the
cache. What is the total size of memory needed at the cache controller to store
meta-data (tags) for the cache?
(A) 4864 bits
(B) 6144 bits
(C) 6656 bits
(D) 5376 bits
Answer: (D)
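A quick check of the arithmetic: the cache has 8 KB / 32 B = 256 blocks, so 8 index bits
and 5 offset bits are needed, leaving a 32 - 8 - 5 = 19-bit tag. Each block's metadata is
19 + 1 + 1 = 21 bits, and 256 * 21 = 5376 bits in total.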
