Explain in Detail Memory Classification. (Summer-2016, Summer-2015)
ROM
ROM stands for Read Only Memory. ROM does not lose its contents when the power is
turned OFF. ROM is also called non-volatile memory.
The microprocessor can only read from this memory.
The types of ROM are masked ROM, PROM, EPROM, EEPROM and Flash Memory.
Masked ROM
In this ROM, a bit pattern is permanently recorded by the masking and metallization
process.
Memory manufacturers are generally equipped to do this process. It is an expensive and
specialized process, but economical for large production quantities.
PROM
PROM stands for Programmable Read Only Memory.
This memory has nichrome or poly-silicon wires arranged in a matrix; these wires can be
functionally viewed as diodes or fuses.
This memory can be programmed by the user with a special PROM programmer that
selectively burns the fuses according to the bit pattern to be stored.
The process is known as “burning the PROM,” and the information stored is permanent.
EPROM
EPROM stands for Erasable Programmable Read Only Memory.
This memory stores a bit by charging the floating gate of a field-effect transistor.
Information is stored by using an EPROM programmer, which applies high voltages to
charge the gate.
All the information can be erased by exposing the chip to ultraviolet light through its quartz
window, and the chip can be reprogrammed.
Because the chip can be reused many times, this memory is ideally suited for product
development, experimental projects, and college laboratories.
The disadvantages of EPROM are
1. It must be taken out of the circuit to erase it.
2. The entire chip must be erased.
3. The erasing process takes 15 to 20 minutes.
EEPROM
EEPROM stands for Electrically Erasable Programmable Read Only Memory.
This memory is functionally similar to EPROM, except that information can be changed by
using electrical signals at the register level rather than erasing all the information.
This has an advantage in field and remote control applications.
If EEPROMs are used in systems, they can be updated from a central computer by
using a remote link via telephone lines.
Similarly, in a process control where timing information needs to be changed, it can be
changed by sending electrical signals from a central place.
This memory also includes a Chip Erase mode, whereby the entire chip can be erased in 10
ms.
The control inputs of a typical RAM chip determine its operation as follows:
CS  WR  Operation
0   x   None
1   0   Read selected word
1   1   Write selected word
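The control table above can be sketched as a tiny simulation. This is a minimal illustrative model in Python, not the interface of any real chip; the function and variable names (ram_cycle, cs, wr) are my own.

```python
# Minimal sketch of the RAM chip control table (illustrative model only).
def ram_cycle(ram, address, cs, wr, data_in=None):
    """One memory cycle: CS selects the chip, WR chooses read or write."""
    if cs == 0:
        return None                  # CS = 0: chip not selected, no operation
    if wr == 0:
        return ram[address]          # CS = 1, WR = 0: read the selected word
    ram[address] = data_in           # CS = 1, WR = 1: write the selected word
    return data_in

ram = [0] * 1024                     # a 1K x 8 chip modelled as 1024 words
ram_cycle(ram, 5, cs=1, wr=1, data_in=0x3C)    # write 0x3C to address 5
print(hex(ram_cycle(ram, 5, cs=1, wr=0)))      # read it back: 0x3c
```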
Types of RAM
1. SRAM- Static Random Access Memory
2. DRAM- Dynamic Random Access Memory
a. FP- Fast Page RAM
b. EDO- Extended Data Output
c. SDRAM- Synchronous DRAM
SRAM
The static RAM is easier to use and has shorter read and write cycles.
The static RAM consists of internal flip-flops that store the binary information. The stored
information remains valid as long as power is applied to the unit.
SRAM is low density, consumes more power, is more expensive, and is faster in operation.
The cache memories used in today’s computers are of the SRAM type.
DRAM
The dynamic RAM stores the binary information in the form of electric charges that are
applied to capacitors.
It is a refresh-type memory. Even while power is maintained on the memory modules,
the charge stored in a DRAM cell leaks away within milliseconds, so in order to
maintain its data it has to be refreshed periodically.
This makes the DRAM memory much slower than the SRAM. The computer memories you
usually see are a form of DRAM, like SDRAM and DDR- SDRAM.
The capacitors are provided inside the chip by MOS transistors. The stored charge on the
capacitors tends to discharge with time and the capacitors must be periodically recharged
by refreshing the dynamic memory.
The DRAM offers reduced power consumption and larger storage capacity in a single
memory chip.
ROM
ROM stands for Read Only Memory.
ROM is used for storing programs that are permanently resident in the computer and for
look-up tables.
The ROM portion of main memory is needed for storing an initial program called a
BOOTSTRAP LOADER.
The bootstrap loader is a program whose function is to start the computer software
operating when power is turned on.
When the power is turned on, the hardware of the computer sets the program counter to
the first address of the bootstrap loader.
The bootstrap program loads a portion of the operating system from disk into main memory,
and control is then transferred to the operating system, which prepares the computer for
general use.
Types of ROM
1. PROM- Programmable Read Only Memory
2. EPROM- Erasable Programmable Read Only Memory
3. EEPROM- Electrically Erasable Programmable Read Only Memory
4. Example: A microcomputer system uses RAM chips of 1K x 8 and ROM chips of 2K
x 8 size. The system needs 4K RAM and 8K ROM.
1. How many RAM and ROM chips are required?
2. How many address lines needed to decode for RAM and ROM chips?
3. Write the memory address map for the microprocessor.
1. How many RAM and ROM chips are required?
Given that,
Size of a single RAM chip is 1K x 8
Required RAM is 4K x 8
Number of RAM chips required = (4K x 8) / (1K x 8) = 4 chips
Size of a single ROM chip is 2K x 8
Required ROM is 8K x 8
Number of ROM chips required = (8K x 8) / (2K x 8) = 4 chips
2. How many address lines needed to decode for RAM and ROM chips?
Number of address lines for a RAM chip = log2 (1K)
= log2 (2^10)
= 10 address lines
Number of address lines for a ROM chip = log2 (2K)
= log2 (2^11)
= 11 address lines
3. Write the memory address map for the microprocessor.
For a memory size of 1K x 8,
Number of address lines = 10
Therefore, the first chip occupies the memory space 0000 H – 03FF H.
Memory Chip Address Map
RAM 1 0000 H – 03FF H
RAM 2 0400 H – 07FF H
RAM 3 0800 H – 0BFF H
RAM 4 0C00 H – 0FFF H
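The chip counts, address lines and address map above can be verified with a few lines of Python. This is only a checking sketch; the helper names (chips_required, address_lines) are mine and not part of the original solution.

```python
import math

def chips_required(required_words, chip_words):
    # number of identical chips needed to build the required capacity
    return required_words // chip_words

def address_lines(words):
    # number of address bits needed to select one word out of 'words'
    return int(math.log2(words))

K = 1024
print(chips_required(4 * K, 1 * K))   # RAM: 4 chips of 1K x 8
print(chips_required(8 * K, 2 * K))   # ROM: 4 chips of 2K x 8
print(address_lines(1 * K))           # 10 address lines per RAM chip
print(address_lines(2 * K))           # 11 address lines per ROM chip

# Address map of the four RAM chips, each occupying 1K = 0x400 addresses
for i in range(4):
    start = i * 0x400
    print(f"RAM {i + 1}: {start:04X} H - {start + 0x3FF:04X} H")
```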
5. Example:-A Micro-Computer System employs RAM chips of 256 x 8 and ROM chips
of 1024 x 8 sizes. The system needs 4 K RAM and 8K ROM.
1. How many RAM and ROM chips are required?
2. How many address lines are needed to decode for RAM and ROM chips?
3. Write the memory address map for Micro- Computer.
1. How many RAM and ROM chips are required?
Given that,
Size of single RAM chip is 256 x 8
Required size of RAM chip is 4 KB
Number of RAM chips required = (4K x 8) / (256 x 8)
= (4 x 1024 x 8) / (256 x 8)
= 16 chips
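The notes stop after the RAM chip count. The remaining parts of this example follow the same rules; the short sketch below is my own working, not the original solution.

```python
import math

K = 1024
# ROM: 8K of ROM built from 1024 x 8 chips
print((8 * K) // (1 * K))        # 8 ROM chips
# Address lines needed to address one chip
print(int(math.log2(256)))       # 256-word RAM chip  -> 8 address lines
print(int(math.log2(1 * K)))     # 1024-word ROM chip -> 10 address lines
```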
Most general-purpose computers would run more efficiently if they were equipped with
additional storage beyond the capacity of the main memory.
The memory unit that communicates directly with the CPU is called the main memory.
Devices that provide backup storage are called auxiliary memory.
We classify memory based on its “distance” from the processor. The closer memory is to
the processor, the faster it should be.
As we move from top to bottom in the hierarchy:
(a) Access speed will decrease
(b) Storage capacity will increase
(c) Cost per bit will decrease
While the I/O processor manages data transfer between auxiliary memory & main
memory, the cache organization is concerned with the transfer of information between
main memory and CPU. Thus each is involved with a different level in memory hierarchy
system.
The most common auxiliary memory devices used in computer systems are magnetic disks
and tapes. They are used for storing system programs, large data files, and other backup
information.
Only programs and data currently needed by the processor reside in main memory. All
other information is stored in auxiliary memory and transferred to main memory when
needed.
Magnetic disks are used as backup storage. The main memory occupies a central position
by being able to communicate directly with the CPU and with auxiliary memory devices
through an I/O processor.
When programs not residing in main memory are needed by the CPU, they are brought in
from auxiliary memory.
A special very-high-speed memory called a cache is used to increase the speed of
processing by making current programs and data available to the CPU at a rapid rate.
The cache memory in computer systems is used to compensate for the speed differential
between main memory access time and processor logic.
As the storage capacity of the memory increases, the cost per bit for storing binary
information decreases and the access time of the memory becomes longer.
The auxiliary memory has a large storage capacity, is relatively inexpensive, but has low
access speed compared to main memory.
The cache memory is very small, relatively expensive, and has very high access speed.
The overall goal of using a memory hierarchy is to obtain the highest- possible average
access speed while minimizing the total cost of the entire memory system.
Advantages of memory Hierarchy :
o Decrease frequency of accesses to slow memory
o Decrease cost/bit
o Improve average access time
o Increase capacity
All disks rotate together at high speed and are not stopped or started for access purposes.
Bits are stored in the magnetized surface in spots along concentric circles called tracks.
The tracks are commonly divided into sections called sectors.
One disk surface is subdivided into tracks and sectors.
Some units use a single read/write head for each disk surface.
In other disk systems, separate read/write heads are provided for each track on each surface.
1. Floppy disk
Floppy disks are slower to access than hard disks and have less storage capacity, but they are
much less expensive. Most importantly, they are portable.
The disks used with a floppy disk drive are small removable disks made of plastic coated with
magnetic recording material.
Types of Floppies
There are two sizes commonly used, with diameters of 5¼ and 3½ inches.
The 3½-inch disks are smaller yet can store more data than the 5¼-inch disks.
5¼-inch: This type of floppy is generally capable of storing between 100K and 1.2 MB of
data. The most common sizes are 360K and 1.2 MB.
3½-inch: Despite their smaller size, these microfloppies have a larger storage capacity. The
most common size is 1.44 MB high-density.
2. Hard disk
Disks that are permanently attached to the unit assembly and cannot be removed by the
occasional user are called hard disks.
A hard disk is commonly known as an HDD (hard disk drive).
It is a non-volatile storage device which stores digitally encoded data on rapidly rotating
platters with magnetic surfaces.
The storage capacity of a disk depends on the bits per inch of track and the tracks per inch
of surface.
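As a rough illustration of how geometry and recording density turn into capacity, the sketch below multiplies out some figures; every number in it is an assumption chosen for illustration, not a specification of any real drive.

```python
# Illustrative only: all figures below are assumed.
surfaces           = 8          # e.g. 4 double-sided platters
tracks_per_surface = 10_000
sectors_per_track  = 400
bytes_per_sector   = 512

capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(capacity / 2**30, "GiB")  # about 15.3 GiB with these assumed figures
```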
3. Magnetic tape
Magnetic tape is a non-volatile storage medium consisting of a magnetic coating on a thin
plastic strip.
Nearly all recording tape is of this type, whether used for video, audio storage or general
purpose digital data storage using a computer.
It is used mainly for secondary storage.
Most magnetic tape units are capable of reading and writing in several different densities.
An associative memory consists of a memory array and matching logic for m words with n bits per word.
The argument register A and the key register K each have n bits, one for each bit of a word.
The match register M has m bits, one for each memory word.
Each word in memory is compared in parallel with the content of the argument register.
The words that match the bits of the argument register set a corresponding bit in the match
register.
After the matching process, those bits in the match register that have been set indicate the
fact that their corresponding words have been matched.
The key register provides a mask for choosing a particular field or key in the argument
word.
The entire argument is compared with each memory word if the key register contains all
1’s.
Otherwise, only those bits in the argument that have 1’s in their corresponding position of
the key register are compared.
Thus the key provides a mask or identifying piece of information which specifies how the
reference to memory is made.
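The argument/key/match-register behaviour described above can be modelled in a few lines. This is a simplified software sketch of mine; the register widths and stored words are arbitrary sample values.

```python
# Simplified model of an associative (content-addressable) memory search.
# Only the bit positions that are 1 in the key K take part in the comparison.
def cam_search(words, argument, key):
    match = []                                    # match register M, one bit per word
    for w in words:
        match.append(int((w & key) == (argument & key)))
    return match

words    = [0b10101100, 0b10100001, 0b01101100]   # m = 3 stored words, n = 8 bits
argument = 0b10101111                             # argument register A
key      = 0b11110000                             # compare only the upper 4-bit field

print(cam_search(words, argument, key))           # [1, 1, 0]: the first two words match
```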
References to memory at any given interval of time tend to be confined to a few localized areas of memory; this is known as the locality of reference property.
There are three dimensions of the locality of reference property:
(i) Temporal Locality
(ii) Spatial Locality
(iii) Sequential Locality
Writing into Cache
An important aspect of cache organization is concerned with memory write requests.
When the CPU finds a word in the cache during a read operation, main memory is not
involved in the transfer. But if the operation is a write, there are two ways the system
can proceed.
1. Write - Through method
o The simplest and most commonly used procedure is to update main memory with
every memory write operation, with cache memory being updated in parallel if it
contains the word at the specified address. This is called the write-through method.
o This method has the advantage that main memory always contains the same data
as the cache.
2. Write - Back method
o In this method only the cache location is updated during a write operation. The
location is then marked by a flag so that later when the word is removed from the
cache it is copied into main memory.
o The reason for the write-back method is that during the time a word resides in the
cache, it may be updated several times.
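The difference between the two write policies can be seen in a toy model. This is my own illustrative sketch, not the textbook's circuitry; the class and attribute names are hypothetical.

```python
# Toy model contrasting write-through and write-back (illustrative only).
class Cache:
    def __init__(self, policy):
        self.policy = policy      # "write-through" or "write-back"
        self.lines  = {}          # address -> (data, dirty flag)
        self.memory = {}          # stand-in for main memory

    def write(self, address, data):
        if self.policy == "write-through":
            self.memory[address] = data               # main memory updated on every write
            if address in self.lines:
                self.lines[address] = (data, False)   # cache updated in parallel
        else:                                         # write-back
            self.lines[address] = (data, True)        # only the cache is updated, marked dirty

    def evict(self, address):
        data, dirty = self.lines.pop(address)
        if self.policy == "write-back" and dirty:
            self.memory[address] = data               # dirty word copied to memory on removal

wt = Cache("write-through"); wt.write(0x100, 42); print(wt.memory)   # {256: 42}
wb = Cache("write-back");    wb.write(0x100, 42); print(wb.memory)   # {} until eviction
wb.evict(0x100);                                  print(wb.memory)   # {256: 42}
```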
Associative Mapping
The associative memory stores both the address and content (data) of the memory word.
This permits any location in cache to store any word from main memory.
The figure shows three words presently stored in the cache. The address value of 15 bits is
shown as a five-digit octal number and its corresponding 12-bit word is shown as a four-
digit octal number.
A CPU address of 15 bits is placed in the argument register and the associative memory is
searched for a matching address.
If the address is found, the corresponding 12-bit data is read and sent to the CPU.
If no match occurs, the main memory is accessed for the word.
The address-data pair is then transferred to the associative cache memory.
If the cache is full, an address data pair must be displaced to make room for a pair that is
needed and not presently in the cache.
This constitutes a first-in first-out (FIFO) replacement policy.
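The associative-cache lookup with FIFO replacement just described can be sketched as follows. This is a simplified model of mine; the cache size and the sample address-data pairs are arbitrary octal values, not the figure's exact contents.

```python
from collections import OrderedDict

# Simplified associative-mapped cache: any address may occupy any cache slot.
class AssociativeCache:
    def __init__(self, size, memory):
        self.size   = size
        self.memory = memory
        self.slots  = OrderedDict()          # address -> data, kept in insertion (FIFO) order

    def read(self, address):
        if address in self.slots:            # hit: associative search over all stored addresses
            return self.slots[address]
        data = self.memory[address]          # miss: fetch the word from main memory
        if len(self.slots) >= self.size:
            self.slots.popitem(last=False)   # cache full: displace the oldest address-data pair
        self.slots[address] = data
        return data

memory = {0o01000: 0o3450, 0o02777: 0o6710, 0o22345: 0o1234}
cache  = AssociativeCache(size=2, memory=memory)
print(oct(cache.read(0o01000)))              # miss: word fetched from main memory and cached
print(oct(cache.read(0o01000)))              # hit: served from the cache
```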
Direct Mapping
The CPU address of 15 bits is divided into two fields.
The nine least significant bits constitute the index field and the remaining six bits form the
tag field.
The figure shows that main memory needs an address that includes both the tag and the
index.
The number of bits in the index field is equal to the number of address bits required to
access the cache memory.
The internal organization of the words in the cache memory is as shown in figure below.
Each word in cache consists of the data word and its associated tag.
When a new word is first brought into the cache, the tag bits are stored alongside the data
bits.
When the CPU generates a memory request the index field is used for the address to access
the cache.
The tag field of the CPU address is compared with the tag in the word read from the cache.
If the two tags match, there is a hit and the desired data word is in cache.
If there is no match, there is a miss and the required word is read from main memory.
It is then stored in the cache together with the new tag, replacing the previous value.
The word at address zero is presently stored in the cache (index = 000, tag = 00, data =
1220).
Suppose that the CPU now wants to access the word at address 02000.
The index address is 000, so it is used to access the cache. The two tags are then compared.
The cache tag is 00 but the address tag is 02, which does not produce a match.
Therefore, the main memory is accessed and the data word 5670 is transferred to the CPU.
The cache word at index address 000 is then replaced with a tag of 02 and data of 5670.
The disadvantage of direct mapping is that two words with the same index in their address
but with different tag values cannot reside in cache memory at the same time.
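The index/tag split can be modelled directly using the 15-bit address, 9-bit index and 6-bit tag from the text. The class itself is my own illustration; the two sample words are the ones used in the example above (octal 1220 at address 00000 and 5670 at address 02000).

```python
# Direct-mapped cache: 15-bit address = 6-bit tag + 9-bit index (as in the example).
INDEX_BITS = 9

class DirectMappedCache:
    def __init__(self, memory):
        self.memory = memory
        self.lines  = {}                            # index -> (tag, data)

    def read(self, address):
        index = address & ((1 << INDEX_BITS) - 1)   # low 9 bits select the cache word
        tag   = address >> INDEX_BITS               # remaining 6 bits form the tag
        if index in self.lines and self.lines[index][0] == tag:
            return self.lines[index][1]             # tags match: hit
        data = self.memory[address]                 # miss: read from main memory
        self.lines[index] = (tag, data)             # store the word with its new tag
        return data

memory = {0o00000: 0o1220, 0o02000: 0o5670}
cache  = DirectMappedCache(memory)
print(oct(cache.read(0o00000)))   # 0o1220 stored at index 000 with tag 00
print(oct(cache.read(0o02000)))   # same index, tag 02: miss, 0o5670 replaces the old word
```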
Set-associative mapping
A third type of cache organization, called set associative mapping in that each word of cache
can store two or more words of memory under the same index address.
Each data word is stored together with its tag and the number of tag-data items in one
word of cache is said to form a set.
An example of a set-associative cache organization for a set size of two is shown in the figure
below.
Each index address refers to two data words and their associated tags.
Each tag requires six bits and each data word has 12 bits, so the word length is 2 × (6 + 12) =
36 bits.
An index address of nine bits can accommodate 512 words.
Thus the size of the cache memory is 512 × 36. It can accommodate 1024 words of main
memory since each word of cache contains two data words.
In general, a set-associative cache of set size k will accommodate k words of main
memory in each word of cache.
The octal numbers listed in figure 9.8 are with reference to the main memory contents.
The words stored at addresses 01000 and 02000 of main memory are stored in cache
memory at index address 000.
Similarly, the words at addresses 02777 and 00777 are stored in cache at index address 777.
When the CPU generates a memory request, the index value of the address is used to
access the cache.
The tag field of the CPU address is then compared with both tags in the cache to determine
if a match occurs.
The comparison logic is done by an associative search of the tags in the set, similar to an
associative memory search; thus the name "set-associative".
When a miss occurs in a set-associative cache and the set is full, it is necessary to replace one
of the tag-data items with a new value.
The most common replacement algorithms used are: random replacement, first-in first-out
(FIFO), and least recently used (LRU).
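A sketch of the two-way set-associative lookup, using the same 9-bit index and 6-bit tag, is shown below; the replacement used is FIFO for simplicity (the notes list random, FIFO and LRU as the common choices), and the sample data values are arbitrary.

```python
# Two-way set-associative cache: each index holds a set of up to two (tag, data) pairs.
INDEX_BITS, SET_SIZE = 9, 2

class SetAssociativeCache:
    def __init__(self, memory):
        self.memory = memory
        self.sets   = {}                              # index -> list of (tag, data) items

    def read(self, address):
        index = address & ((1 << INDEX_BITS) - 1)
        tag   = address >> INDEX_BITS
        cache_set = self.sets.setdefault(index, [])
        for t, data in cache_set:                     # compare the tag with every tag in the set
            if t == tag:
                return data                           # hit
        data = self.memory[address]                   # miss: fetch from main memory
        if len(cache_set) >= SET_SIZE:
            cache_set.pop(0)                          # set full: FIFO replacement
        cache_set.append((tag, data))
        return data

memory = {0o01000: 0o3450, 0o02000: 0o5670}
cache  = SetAssociativeCache(memory)
cache.read(0o01000); cache.read(0o02000)              # both map to index 000, different tags
print(cache.sets[0])                                  # two tag-data items stored in one set
```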
Suppose that the computer has available auxiliary memory for storing 2^20 = 1024K words.
Thus auxiliary memory has a capacity for storing information equivalent to the capacity of
32 main memories.
Denoting the address space by N and the memory space by M, we then have for this
example N = 1024K and M = 32K.
In a multiprogramming computer system, programs and data are transferred to and from
auxiliary memory and main memory based on demands imposed by the CPU.
Suppose that program 1 is currently being executed in the CPU. Program 1 and a portion of
its associated data are moved from auxiliary memory into main memory as shown in figure.
Portions of programs and data need not be in contiguous locations in memory since
information is being moved in and out, and empty spaces may be available in scattered
locations in memory.
In our example, the address field of an instruction code will consist of 20 bits but physical
memory addresses must be specified with only 15 bits.
Thus CPU will reference instructions and data with a 20-bit address, but the information at
this address must be taken from physical memory because access to auxiliary storage for
individual words will be too long.
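The 20-bit versus 15-bit figures above multiply out as follows; a quick arithmetic check:

```python
K = 1024
N = 2 ** 20   # address space (auxiliary memory): 20-bit addresses
M = 2 ** 15   # memory space (physical main memory): 15-bit addresses
print(N // K, "K words of address space")                    # 1024 K
print(M // K, "K words of memory space")                     # 32 K
print(N // M, "main memories fit in the auxiliary memory")   # 32
```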
When no interrupts are pending, the interrupt line stays in the high-level state and no
interrupts are recognized by the CPU.
The CPU responds to an interrupt request by enabling the interrupt acknowledge line.
This signal passes on to the next device through the PO (priority out) output only if device
1 is not requesting an interrupt.
If device 1 has a pending interrupt, it blocks the acknowledge signal from the next device
by placing a 0 in the PO output.
It then proceeds to insert its own interrupt vector address (VAD) into the data bus for the
CPU to use during the interrupt cycle.
A device with a 0 in its PI input generates a 0 in its PO output to inform the next-lower
priority device that the acknowledge signal has been blocked.
A device that is requesting an interrupt and has a 1 in its PI input will intercept the
acknowledge signal by placing a 0 in its PO output.
If the device does not have pending interrupts, it transmits the acknowledge signal to the
next device by placing a 1 in its PO output.
Thus the device with PI = 1 and PO = 0 is the one with the highest priority that is requesting
an interrupt, and this device places its VAD on the data bus.
The daisy chain arrangement gives the highest priority to the device that receives the
interrupt acknowledge signal from the CPU.
The farther the device is from the first position, the lower is its priority.
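The PI/PO chaining rules can be captured in a short simulation. This is my own sketch; the request flags and vector addresses below are hypothetical values chosen for illustration.

```python
# Daisy-chain priority: PI of the first device is the CPU acknowledge signal;
# each device passes PO = 1 to the next device only if it is not requesting an interrupt.
def daisy_chain(requests, vads, acknowledge=1):
    pi = acknowledge
    for requesting, vad in zip(requests, vads):
        if pi == 1 and requesting:        # PI = 1 with a pending request: this device wins
            return vad                    # it places its vector address (VAD) on the data bus
        pi = 1 if (pi == 1 and not requesting) else 0   # otherwise PO = PI AND (no request)
    return None                           # no device placed a VAD

requests = [False, True, True]            # devices 1..3; devices 2 and 3 both request
vads     = [0o100, 0o104, 0o110]          # hypothetical vector addresses
print(oct(daisy_chain(requests, vads)))   # 0o104: device 2 is closer to the CPU, so it wins
```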