Memory Systems
Introduction
Some Basic Concepts
Memory Addressing
CPU – Main memory connection
Internal organization of semiconductor memory chips
Semiconductor RAM memories
Static Memories
Dynamic Memories
Read Only Memories
Memory Hierarchy
Cache memory concept
Cache memory design parameters
Mapping Functions
OBJECTIVES
The maximum size of the memory that can be used in any computer is
determined by the addressing scheme.
Connection of the memory to the processor
If the MAR is k bits long and the MDR is n bits long, then the memory may
contain up to 2^k addressable locations, and n bits of data are
transferred between the memory and the processor at a time.
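For example, if k = 16 and n = 8, the memory can have up to 2^16 = 64K addressable locations, and 8 bits of data are moved in each transfer.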
This transfer takes place over the processor bus.
The processor bus has:
Address Line
Data Line
Control Line (R/W, MFC – Memory Function Completed)
The control line is used for coordinating data transfer.
The processor reads the data from the memory by loading the address
of the required memory location into MAR and setting the R/W line to
1.
The memory responds by placing the data from the addressed location
onto the data lines and confirms this action by asserting the MFC signal.
Upon receipt of the MFC signal, the processor loads the data on the data
lines into the MDR register.
The processor writes data into a memory location by loading the
address of this location into MAR, loading the data into MDR, and setting
the R/W line to 0.
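Below is a minimal sketch, using assumed register names and a plain array as the memory model (not the actual bus hardware), of the read and write sequences just described:

#include <stdio.h>
#include <stdint.h>

static uint16_t memory[65536];   /* simple model of a 64K-word main memory */
static uint16_t MAR, MDR;        /* Memory Address and Memory Data Registers */
static int      RW;              /* R/W line: 1 = Read, 0 = Write             */
static int      MFC;             /* Memory Function Completed signal          */

/* Read: load the address into MAR and set R/W to 1; the memory places the
   addressed word on the data lines and asserts MFC; the processor then
   loads the data into MDR. */
static uint16_t cpu_read(uint16_t addr) {
    MAR = addr;
    RW  = 1;
    MDR = memory[MAR];   /* data lines -> MDR */
    MFC = 1;             /* memory signals completion */
    return MDR;
}

/* Write: load the address into MAR, the data into MDR, and set R/W to 0. */
static void cpu_write(uint16_t addr, uint16_t data) {
    MAR = addr;
    MDR = data;
    RW  = 0;
    memory[MAR] = MDR;
    MFC = 1;
}

int main(void) {
    cpu_write(0x0010, 42);
    printf("memory[0x0010] = %u\n", cpu_read(0x0010));
    return 0;
}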
Measure of speed of memory…
Memory Access Time → It is the time that elapses between the
initiation of a memory operation and the completion of that operation.
Memory Cycle Time → It is the minimum time delay required
between the initiation of two successive memory operations.
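As an illustration (figures assumed, not from the text), a memory might have an access time of 10 ns and a cycle time of 12 ns; the cycle time is usually slightly longer than the access time.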
MEMORY ADDRESSING
Buses
The buses that connect the three main parts of a computer system are
called the control bus, the data bus and the address bus.
1. Control bus → is used to pass control signals (0 and 1 bits) between the three
components. For example, if the CPU wishes to read the contents of memory
rather than write to memory, it signals this on the control bus.
2. Address bus → is used to carry the address of the memory location that the
CPU wants to read from or write to.
3. Data bus → is used to transfer data. After the control bus signals a read or write
operation and the address is placed on the address bus, the component
concerned places the data on the data bus so the destination can
read it.
The Computer’s Buses
As an example of the use of these buses, we can list the steps involved
in the CPU reading data from an address in memory.
These steps are:
The CPU places the memory address on the address bus
The CPU requests a memory read operation on the control bus
The memory recognizes the memory read operation and examines
the address on the address bus
Memory moves the data from cells at the address on the address
bus to the data bus
The CPU reads the data off the data bus.
INTERNAL ORGANIZATION OF MEMORY CHIPS:
Memory cells are usually organized in the form of an array, in which each
cell is capable of storing one bit of information.
Each row of cells constitutes a memory word, and all cells of a row are
connected to a common line called the word line.
The cells in each column are connected to a Sense/Write circuit by two
bit lines.
The figure shows a possible arrangement of memory cells.
The Sense / Write circuits are connected to data input or output lines
of the chip.
During a write operation, the sense / write circuit receives input
information and stores it in the cells of the selected word.
The data input and data output of each Sense/Write circuit are
connected to a single bidirectional data line that can be connected to the
data bus of the CPU.
R / W → specifies the required operation.
CS → Chip Select input selects a given chip in the multi-chip memory
system
Organization of bit cells in a memory chip
Consider a memory chip consisting of 16 words of 8 bits each.
This is referred to as a 16x8 organization.
It can store 128 bits and requires 14 external connections for the address,
data and control lines.
It also needs 2 lines for the power supply and ground.
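As a check on the count: 16 words need 4 address lines, each word needs 8 data lines, and the R/W and CS inputs add 2 more, giving 4 + 8 + 2 = 14 external connections; adding power and ground brings the total to 16 pins.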
Consider a memory chip with 1K (1024) cells.
It can be organized as a 128x8 memory,
requiring a total of 19 external connections.
The same chip can instead be organized in a 1Kx1 format.
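As a check on these counts: the 128x8 organization needs 7 address lines, 8 data lines, 2 control lines (R/W and CS) and 2 power/ground lines, i.e. 7 + 8 + 2 + 2 = 19 external connections; the 1Kx1 organization needs 10 address lines, 1 data line, 2 control lines and 2 power/ground lines, i.e. only 15 connections.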
The standard SDRAM performs all actions on the rising edge of the
clock signal.
The double data rate SDRAM transfers data on both edges of the clock
(the leading edge and the trailing edge).
The bandwidth of DDR SDRAM is therefore doubled for long burst transfers.
To make it possible to access the data at high rate, the cell array is
organized into two banks.
Each bank can be accessed separately.
Consecutive words of a given block are stored in different banks.
Such interleaving of words allows simultaneous access to two words
that are transferred on successive edges of the clock.
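A minimal sketch of this idea (the bank-selection rule and names here are assumptions for illustration, not taken from the text):

#include <stdio.h>

/* Assume the low-order bit of the word address selects the bank, so two
   consecutive words sit in different banks, can be accessed in parallel,
   and their data appear on successive clock edges. */
static int bank_of(unsigned word_addr) {
    return word_addr & 1;
}

int main(void) {
    for (unsigned addr = 0; addr < 8; addr++)
        printf("word %u -> bank %d\n", addr, bank_of(addr));
    return 0;
}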
Dynamic Memory System:
Both SRAM and DRAM chips are volatile, which means that they lose
the stored information if power is turned off.
Many applications require Non-volatile memory (which retains the
stored information if power is turned off).
E.g.: the Operating System software has to be loaded from disk into
memory, which requires that the program that boots the Operating System
be held in non-volatile memory.
Non-volatile memory is also used in embedded systems. Since normal
operation involves only reading the stored data, a memory of this type
is called a ROM (Read Only Memory).
ROM CELL
At logic value '0' → the transistor (T) is connected to the ground point
(P).
The transistor switch is closed, and the voltage on the bit line drops nearly to zero.
At logic value '1' → the transistor switch is open, and the bit line remains
at a high voltage.
To read the state of the cell, the word line is activated.
A Sense circuit at the end of the bit line generates the proper output
value.
Different types of non-volatile memory are:
PROM
EPROM
EEPROM
Flash Memory
PROM: Programmable ROM:
Merits:
It provides flexibility.
It is faster.
It is less expensive, because it can be programmed directly by
the user.
EPROM – Erasable Reprogrammable ROM:
EPROM allows the stored data to be erased and new data to be loaded.
In an EPROM cell, a connection to ground is always made at point P, and
a special transistor is used, which has the ability to function either as a
normal transistor or as a disabled transistor that is always turned off.
This transistor can be programmed to behave as a permanently open
switch, by injecting charge into it that becomes trapped inside.
Erasure requires dissipating the charge trapped in the transistors of the
memory cells.
This can be done by exposing the chip to ultraviolet light; for this reason,
EPROM chips are mounted in packages that have transparent
windows.
Merits:
It provides flexibility during the development phase of a
digital system.
It is capable of retaining the stored information for a
long time.
Demerits:
The chip must be physically removed from the circuit for
reprogramming and its entire contents are erased by UV
light.
EEPROM – Electrically Erasable ROM:
The two aspects of locality of reference are:
Temporal (a recently executed instruction is likely to be
executed again very soon),
Spatial (instructions in close proximity to a recently executed
instruction are likely to be executed soon).
If the active segment of the program is placed in cache memory, then
the total execution time can be reduced significantly.
The temporal aspect of the locality of reference suggests that
whenever an information item (instruction or data) is first needed, this
item should be brought into the cache, where it will hopefully remain
until it is needed again.
The spatial aspect suggests that instead of fetching just one item from
the main memory to the cache, it is useful to fetch several items that
reside at adjacent addresses as well.
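As a small illustration (my own example, not from the text), an ordinary loop exhibits both kinds of locality:

#include <stddef.h>

/* Summing an array: the loop instructions and the variables total and i are
   referenced over and over (temporal locality), while a[0], a[1], ... occupy
   consecutive addresses (spatial locality), so fetching a whole block of
   adjacent words into the cache is worthwhile. */
long sum(const int *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}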
The term block is used to refer to a set of contiguous address locations
of some size.
Another term that is used to refer to a cache block is cache line.
Use of a Cache Memory
When a Read request is received from the processor, the contents of a
block of memory words containing the location specified are
transferred into the cache one word at a time.
When the program references any of the locations in this block, the
desired contents are read directly from the cache.
The Cache memory stores a reasonable number of blocks at a given
time but this number is small compared to the total number of blocks
available in Main Memory.
The correspondence between a main memory block and a block in the
cache memory is specified by a mapping function.
The cache control hardware decides which block should be
removed to create space for the new block that contains the referenced
word.
The collection of rules for making this decision is called the
replacement algorithm.
The cache control circuit determines whether the requested word
currently exists in the cache.
If it exists, then the Read/Write operation takes place on the appropriate
cache location.
In this case, a Read/Write hit is said to have occurred.
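A minimal sketch of this hit/miss decision, assuming a direct-style mapping and invented names (the text describes this as cache control circuitry, not software):

#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 128           /* assumed cache size, as in the example below */

struct cache_block {
    bool     valid;
    uint32_t tag;
    /* the data words of the block would be stored here */
};

static struct cache_block cache[NUM_BLOCKS];

/* The mapping function selects a candidate cache block for a given main
   memory block; the stored tag must match the remaining high-order address
   bits for the access to be a hit. */
bool is_hit(uint32_t mem_block_number) {
    uint32_t index = mem_block_number % NUM_BLOCKS;
    uint32_t tag   = mem_block_number / NUM_BLOCKS;
    return cache[index].valid && cache[index].tag == tag;
}

int main(void) {
    return is_hit(5) ? 0 : 1;    /* miss on a cold cache */
}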
Example:
Consider a cache consisting of 128 blocks of 16 words each, for a total
of 2048 (2K) words.
The main memory is addressable by a 16-bit address, so it contains 64K words,
i.e., 4K blocks of 16 words each.
Consecutive addresses refer to consecutive words.
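As a check on these numbers: 128 blocks x 16 words = 2048 = 2K words in the cache; a 16-bit address gives 2^16 = 64K words of main memory, and 64K / 16 = 4096 = 4K blocks.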
Direct Mapping: