
Module 2: The Memory Systems

A comparison of primary and secondary memory:

Primary Memory | Secondary Memory
Categorized as volatile and non-volatile | Always non-volatile
Also called internal memory | Also known as external memory / backup memory / auxiliary memory
Data is directly accessed by the processing unit | Data cannot be directly accessed by the processing unit; it is copied to primary memory first, and only then can the CPU access it
Volatile primary memory cannot retrieve its data in case of power failure | Non-volatile, so data can be retrieved in case of power failure
Capacity is typically 16 MB to 32 GB | Capacity ranges from 200 GB to terabytes
Accessed by the CPU over the data bus | Accessed over I/O channels
More expensive | Cheaper
The computer cannot work without it | The computer can work without it
Semiconductor memory (an SRAM cell uses 6 MOSFETs; a DRAM cell uses 1 MOSFET and 1 capacitor) | Magnetic and optical memory
E.g. SRAM, DRAM | E.g. hard disk, floppy disk

Memory Hierarchy

The computer memory hierarchy is often drawn as a pyramid that is used to describe the differences among memory types.

• It separates computer storage into levels based on this hierarchy.

• Level 0: CPU registers

• Level 1: Cache memory

• Level 2: Main memory or primary memory

• Level 3: Magnetic disks or secondary memory

• Level 4: Optical disks or magnetic tapes (tertiary memory)


Level-0 − Registers
• The registers are present inside the CPU. Because they are inside the CPU, they have the

least access time.

• Registers are the most expensive and the smallest in size, generally only a few kilobytes.

• They are implemented by using flip-flops.

Level-1 − Cache
• Cache memory is used to store the segments of a program that are frequently accessed by the
processor. It is expensive, smaller in size (generally a few megabytes), and is implemented using
static RAM.

Level-2 − Primary or Main Memory


• It directly communicates with the CPU and with auxiliary memory devices through an I/O
processor.

• Main memory is less expensive than cache memory and larger in size, generally a few gigabytes.

• This memory is implemented by using dynamic RAM.

Level-3 − Secondary storage


• Secondary storage devices like Magnetic Disk are present at level 3.

• They are used as backup storage.

• They are cheaper than main memory and larger in size, generally a few TB.

Level-4 − Tertiary storage


• Tertiary storage devices like magnetic tape are present at level 4.

• They are used to store removable files and are the cheapest and largest in size (1-20 TB).

Key Characteristics of Memory Devices

1. Location
2. Capacity

3. Unit of Transfer

4. Access Method

5. Performance

6. Physical type

7. Physical characteristics

8. Organization

1. Location:

 It deals with the location of the memory device in the computer system. There are three
possible locations:
 CPU: this is usually in the form of CPU registers and a small amount of cache.
 Internal or main: this is the main memory, like RAM or ROM. The CPU can directly access
the main memory.
 External or secondary: it comprises secondary storage devices such as hard disks and magnetic
tapes. The CPU does not access these devices directly; it uses device controllers to access
secondary storage devices.

2. Capacity:

 The capacity of any memory device is expressed in terms of:


I. Word size
II. Number of words
 Word size: word size is usually expressed in bytes (8 bits); a word can, however, be any number of
bytes. Commonly used word sizes are 1 byte (8 bits), 2 bytes (16 bits) and 4 bytes (32 bits).
 Number of words: this specifies the number of words available in the particular memory
device. For example, if a memory device is given as 4K x 16, the device has a
word size of 16 bits and a total of 4096 (4K) words in memory (see the short calculation below).
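
As a quick check of the 4K x 16 example, here is a minimal calculation (the 4K and 16-bit figures come from the example above; the script itself is only illustrative):

    # Capacity of a "4K x 16" memory device:
    # number of words x word size gives the total storage in bits.
    number_of_words = 4 * 1024        # 4K = 4096 words
    word_size_bits = 16               # each word is 16 bits wide

    total_bits = number_of_words * word_size_bits
    total_bytes = total_bits // 8

    print(number_of_words, "words x", word_size_bits, "bits =",
          total_bits, "bits =", total_bytes, "bytes (8 KB)")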


3. Unit of Transfer:

 It is the maximum number of bits that can be read from or written into the memory at a time.
 In the case of main memory, it is mostly equal to the word size.
 In the case of external memory, the unit of transfer is not limited to the word size; it is often larger
and is referred to as a block.

4. Access Methods

 It is a fundamental characteristic of memory devices: the sequence or order in which
memory can be accessed. There are three types of access methods:
 Random access: if the storage locations in a particular memory device can be accessed in any
order, and the access time is independent of the memory location being accessed, the
device is said to have a random access mechanism. RAM (Random Access Memory) ICs
use this access method.
 Serial access: if memory locations can be accessed only in a certain predetermined
sequence, the access method is called serial access. Magnetic tapes and CD-ROMs employ
serial access methods.
 Semi-random access: memory devices such as magnetic hard disks use this access method.
Here each track has a read/write head, so each track can be accessed randomly, but access
within a track is serial.

5. Performance:

 The performance of a memory system is described using three parameters: access time,
memory cycle time and transfer rate (see the worked example below).
 Access time: in random access memories, it is the time taken by the memory to complete a
read/write operation from the instant an address is sent to the memory. For non-random access
memories, it is the time taken to position the read/write head at the desired location.
Access time is widely used to measure the performance of memory devices.
 Memory cycle time: it is defined only for random access memories and is the sum of the
access time and the additional time required before a second access can commence.
 Transfer rate: it is the rate at which data can be transferred into or out of a
memory unit.
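
To make these parameters concrete, here is a small calculation with assumed (made-up) timing figures; for a random-access memory the transfer rate is commonly taken as one word per memory cycle:

    # Hypothetical timing figures for a random-access memory (illustration only).
    access_time_ns = 60      # time to complete one read/write after the address is presented
    recovery_time_ns = 40    # additional time needed before the next access can begin

    cycle_time_ns = access_time_ns + recovery_time_ns    # memory cycle time
    transfer_rate = 1 / (cycle_time_ns * 1e-9)           # words transferred per second

    print("Cycle time    :", cycle_time_ns, "ns")
    print("Transfer rate : %.0f words/second" % transfer_rate)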

6. Physical type:

 Memory devices can be either semiconductor memory (like RAM) or magnetic surface
memory (like Hard disks).

7.Physical Characteristics:

 Volatile/non-volatile: if a memory device continues to hold its data even after power is turned off,
the device is non-volatile; otherwise it is volatile.

8. Organization:
 Erasable/non-erasable: memories in which data, once programmed, cannot be erased
are called non-erasable memories. Memory devices in which the stored data can be
erased are called erasable memories.


 E.g. RAM (erasable), ROM (non-erasable).

Memory & its types

 Computer memory is any physical device used to store data, information or instructions
temporarily or permanently.
 It is the collection of storage units that stores binary information in the form of bits.
 The memory block is split into small components called cells.
 Each cell has a unique address, ranging from zero to memory size minus one, at which data
is stored in memory.
 It is classified into
1. RAM – Random Access Memory
2. ROM – Read Only Memory

RAM | ROM
It is random access memory | It is read-only memory
Both read and write operations can be performed | Only the read operation can be performed
It is volatile: data is lost when the power supply is turned off | It is non-volatile: data is not lost when the power supply is turned off
It is a faster and more expensive memory | It is a slower and less expensive memory
Stored data needs to be refreshed in RAM | Stored data does not need to be refreshed in ROM
The RAM chip is bigger than the ROM chip for storing the same data | The ROM chip is smaller than the RAM chip for storing the same data
Types of RAM: DRAM and SRAM | Types of ROM: MROM, PROM, EPROM, EEPROM

RAM
 It is also called read-write memory or the main memory or the primary memory.
 The programs and data that the CPU requires during the execution of a program are stored in
this memory.
 It is a volatile memory as the data is lost when the power is turned off.
 RAM is further classified into two types:
1. SRAM (Static Random Access Memory)
2. DRAM (Dynamic Random Access Memory).

SRAM | DRAM
It is static random access memory | It is dynamic random access memory
The access time of SRAM is low (it is faster) | The access time of DRAM is higher (it is slower)
It uses flip-flops to store each bit of information | It uses a capacitor to store each bit of information
It does not require periodic refreshing to preserve the information | It requires periodic refreshing to preserve the information
It is used in cache memory | It is used in main memory
SRAM is more expensive | DRAM is less expensive
It has a complex structure | Its structure is simple
It has lower power consumption | It has higher power consumption (because of refreshing)

ROM
 Stores crucial information essential to operate the system, like the program essential to boot the
computer.
 It is not volatile.
 Always retains its data.
 Used in embedded systems or where the programming needs no change.
 Used in calculators and peripheral devices.
 ROM is further classified into 4 types: MROM, PROM, EPROM, and EEPROM.
 Types of Read Only Memory (ROM) –
1. PROM (Programmable read-only memory) – It can be programmed by the user. Once
programmed, the data and instructions in it cannot be changed.
2. EPROM (Erasable programmable read-only memory) – It can be reprogrammed. To erase
data from it, expose it to ultraviolet light. To reprogram it, all the previous data must first be erased.
3. EEPROM (Electrically erasable programmable read-only memory) – The data can be erased
by applying an electric field; there is no need for ultraviolet light. Portions of the chip can be erased selectively.
PROM | EPROM | EEPROM
A read-only memory (ROM) that can be modified only once by the user | A programmable ROM that can be erased and reused | A user-modifiable ROM that can be erased and reprogrammed repeatedly using a normal electrical voltage
Stands for programmable read-only memory | Stands for erasable programmable read-only memory | Stands for electrically erasable programmable read-only memory
Developed by Wen Tsing Chow in 1956 | Developed by Dov Frohman in 1971 | Developed by George Perlegos in 1978
Can be programmed only once | Can be reprogrammed using ultraviolet light | Can be reprogrammed using electrical charge

Virtual Memory
 In most computer systems, the physical main memory is not as large as the address space of the
processor.
 When we try to run a program that does not completely fit into the main memory, the parts of it
currently being executed are stored in main memory and the remaining portion is stored in a
secondary storage device such as an HDD.
 When a new part of the program is to be brought into main memory for execution and the
memory is full, it must replace another part that is already in main memory.
 As this secondary memory is not actually part of the system memory, for the CPU the secondary
memory acts as virtual memory.
 Techniques that automatically move program and data blocks into physical memory when they
are required for execution are called virtual memory techniques.
 Virtual memory is used to logically extend the size of main memory.
 When virtual memory is used, the address field holds a virtual address.
 A special hardware unit known as the MMU (Memory Management Unit) translates virtual
addresses into physical addresses.
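
A minimal sketch of the translation the MMU performs, assuming a simple single-level page table with 4 KB pages (the page-table contents below are invented for illustration):

    PAGE_SIZE = 4096   # 4 KB pages, assumed for this sketch

    # Hypothetical page table: virtual page number -> physical frame number.
    # A missing entry means the page is not in main memory (a page fault occurs).
    page_table = {0: 5, 1: 9, 2: 3}

    def translate(virtual_address):
        """Translate a virtual address into a physical address, as an MMU would."""
        page_number = virtual_address // PAGE_SIZE
        offset = virtual_address % PAGE_SIZE
        if page_number not in page_table:
            raise LookupError("page fault: the page must be brought in from secondary storage")
        frame_number = page_table[page_number]
        return frame_number * PAGE_SIZE + offset

    print(hex(translate(0x1ABC)))   # virtual page 1 -> frame 9, prints 0x9abc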

Cache Memory
 Cache Memory is a special very high- speed memory.
 It is used to speed up and synchronize with high-speed CPU.
 Cache memory is costlier than main memory or disk memory but more economical than CPU
registers.
 Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU.
 It holds frequently requested data and instructions so that they are immediately available to the
CPU when needed.
Level 1:
 It is the first level of cache memory, which is called Level 1 cache or L1 cache.
 In this type of cache memory, a small amount of memory is present inside the CPU itself. If a
CPU has four cores (a quad-core CPU), then each core will have its own level 1 cache.
 As this memory is present inside the CPU, it can work at the same speed as the CPU.
 The size of this memory ranges from 2KB to 64 KB.
 The L1 cache further has two types of caches: Instruction cache, which stores instructions
required by the CPU, and the data cache that stores the data required by the CPU.

Level 2:
 This cache is known as Level 2 cache or L2 cache.
 This level 2 cache may be inside the CPU or outside the CPU.
 All the cores of a CPU can have their own separate level 2 cache, or they can share one L2
cache among themselves.
 In case it is outside the CPU, it is connected with the CPU with a very high-speed bus.
 The memory size of this cache is in the range of 256 KB to 512 KB.
 In terms of speed, it is slower than the L1 cache.

Level 3:
 It is known as Level 3 cache or L3 cache.
 This cache is not present in all the processors; some high-end processors may have this type
of cache.
 This cache is used to enhance the performance of Level 1 and Level 2 cache.
 It is located outside the CPU and is shared by all the cores of a CPU.
 Its memory size ranges from 1 MB to 8 MB.
 Although it is slower than L1 and L2 cache, it is faster than Random Access Memory (RAM).

Cache Design Elements

There are a few basic design elements that serve to classify and differentiate cache architectures.
They are listed down:

1. Cache Addresses

2. Cache Size

3. Mapping Function

4. Replacement Algorithm

5. Write Policy

6. Line Size
7. Number of caches


1. Cache Addresses

 When virtual addresses are used, the cache can be placed between the processor and the
MMU or between the MMU and main memory.
 A logical cache, also known as a virtual cache, stores data using virtual addresses.
 The processor accesses the cache directly, without going through the MMU.

2. Cache Size:
 The size of the cache should be small enough so that the overall average cost per bit is close
to that of main memory alone and large enough so that the overall average access time is
close to that of the cache alone.
3. Mapping Function
 As there are fewer cache lines than main memory blocks, an algorithm is needed for
mapping main memory blocks into cache lines.
 Further, a means is needed for determining which main memory block currently occupies a
cache line.
 The choice of the mapping function dictates how the cache is organized. Three techniques
can be used: direct, associative, and set-associative (see the sketch after this list).
 DIRECT MAPPING: The simplest technique, known as direct mapping, maps each block of
main memory into only one possible cache line.
 The direct mapping technique is simple and inexpensive to implement.
 ASSOCIATIVE MAPPING: Associative mapping overcomes the disadvantage of direct
mapping by permitting each main memory block to be loaded into any line of the cache.
 SET-ASSOCIATIVE MAPPING: Set-associative mapping is a compromise that exhibits the
strengths of both the direct and associative approaches. With set-associative mapping, a
given block can be mapped into any of the lines of one particular set.
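
The sketch below contrasts the three mappings for one block number, assuming a cache of 128 lines organised (for the set-associative case) as 32 sets of 4 lines; all the sizes are invented for illustration:

    NUM_LINES = 128                 # assumed number of cache lines
    WAYS = 4                        # assumed associativity for the set-associative case
    NUM_SETS = NUM_LINES // WAYS    # 32 sets

    def direct_mapped_line(block_number):
        # Direct mapping: each main memory block maps to exactly one cache line.
        return block_number % NUM_LINES

    def set_associative_set(block_number):
        # Set-associative mapping: the block maps to one set and may occupy any of its WAYS lines.
        return block_number % NUM_SETS

    block = 1000
    print("direct-mapped line:", direct_mapped_line(block))    # 1000 % 128 = 104
    print("4-way set index   :", set_associative_set(block))   # 1000 % 32 = 8
    # Associative mapping: the block may be loaded into any of the NUM_LINES lines,
    # so no index is computed; every line's tag must be searched.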

4. Replacement Algorithms
 Once the cache has been filled, when a new block is brought into the cache, one of the
existing blocks must be replaced.
 For direct mapping, there is only one possible line for any particular block, and no choice is
possible.
 For the associative and set-associative techniques, a replacement algorithm is needed.
 To achieve high speed, such an algorithm must be implemented in hardware.
 Least Recently Used (LRU), Least Frequently Used (LFU) and First In First Out (FIFO) are some
common replacement algorithms (a small LRU sketch follows).
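
A minimal software sketch of LRU replacement (a real cache implements this in hardware, as noted above); the two-line cache size is arbitrary:

    from collections import OrderedDict

    class LRUCache:
        """Toy LRU replacement policy for a fixed number of cache lines."""
        def __init__(self, num_lines):
            self.num_lines = num_lines
            self.lines = OrderedDict()           # block number -> data

        def access(self, block, data=None):
            if block in self.lines:
                self.lines.move_to_end(block)    # mark as most recently used
                return self.lines[block]
            if len(self.lines) >= self.num_lines:
                self.lines.popitem(last=False)   # evict the least recently used block
            self.lines[block] = data
            return data

    cache = LRUCache(num_lines=2)
    cache.access(1, "A"); cache.access(2, "B"); cache.access(1)   # block 2 is now least recently used
    cache.access(3, "C")                                          # evicts block 2
    print(list(cache.lines))                                      # [1, 3]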

5. Write Policy
 When a block that is resident in the cache is to be replaced, there are two cases to consider.
 If the old block in the cache has not been altered, then it may be overwritten with a new
block without first writing out the old block.
 If at least one write operation has been performed on a word in that line of the cache, then
main memory must be updated by writing the cache line out to the block of memory
before bringing in the new block (see the sketch below).
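
The case described in the last bullet is commonly implemented with a per-line "dirty" flag (a write-back policy); the sketch below is a toy illustration with invented data structures:

    # Each cache line records whether it has been modified since it was loaded.
    cache_line = {"block": 7, "data": [0, 0, 0, 0], "dirty": False}
    main_memory = {}   # block number -> data (stand-in for main memory)

    def write_word(line, index, value):
        line["data"][index] = value
        line["dirty"] = True               # at least one write has been performed on this line

    def replace(line, new_block, new_data):
        if line["dirty"]:
            # Write the old block back to main memory before overwriting the line.
            main_memory[line["block"]] = line["data"]
        line.update(block=new_block, data=new_data, dirty=False)

    write_word(cache_line, 2, 99)
    replace(cache_line, new_block=12, new_data=[1, 2, 3, 4])
    print(main_memory)                     # {7: [0, 0, 99, 0]} -- the dirty block was written back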

6. Line Size
 When a block of data is retrieved and placed in the cache, not only the desired word but also
some number of adjacent words is retrieved.
 Basically, as the block size increases, more useful data are brought into the cache.
 The hit ratio will begin to decrease, however, as the block becomes even bigger and the
probability of using the newly fetched information becomes less than the probability of
reusing the information that has to be replaced.

7.Number of Caches
 When caches were originally introduced, the typical system had a single cache.
 More recently, the use of multiple caches has become an important aspect. There are two
design issues surrounding the number of caches.
 MULTILEVEL CACHES: Most contemporary designs include both on-chip and external caches.
 The simplest such organization is known as a two-level cache, with the internal cache
designated as level 1 (L1) and the external cache designated as level 2 (L2).
 There can also be 3 or more levels of cache. This helps in reducing main memory accesses.
 UNIFIED VERSUS SPLIT CACHES: Earlier on-chip cache designs consisted of a single cache
used to store references to both data and instructions.
 This is the unified approach. More recently, it has become common to split the cache into
two: one dedicated to instructions and one dedicated to data. These two caches both exist
at the same level.
 This is the split cache. Using a unified cache or a split cache is another design issue.

Interleaved Memory
 Interleaved memory is a memory system in which successive addresses are spread evenly across
memory banks to compensate for the relatively slow speed of DRAM.
 Contiguous memory reads and writes use each bank in turn, resulting in higher
memory throughput because there is less waiting for a memory bank to become ready for the
desired operation.

Types of Interleaved Memory


1. High order interleaving:

 In high-order memory interleaving, the most significant bits of the memory address decide the
memory bank in which a particular location resides.
 (In low-order interleaving, by contrast, the least significant bits of the memory address decide
the memory bank.)
 The least significant bits are sent as addresses to each chip.
 One problem is that consecutive addresses tend to be in the same chip.
 The maximum rate of data transfer is limited by the memory cycle time.
 It is also known as memory banking.

2. Low order interleaving: The least significant bits select the memory bank (module) in low-order
interleaving. Here, consecutive memory addresses fall in different memory modules, allowing
accesses to overlap and proceed faster than a single module's cycle time would permit (see the sketch below).
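
The sketch below shows how the same addresses map to banks under the two schemes, assuming 4 banks and 10-bit addresses (both figures are invented for illustration):

    NUM_BANKS = 4                               # assumed number of banks (a power of two)
    ADDRESS_BITS = 10                           # assumed address width
    BANK_BITS = 2                               # 2 bits are enough to select one of 4 banks

    def low_order_bank(address):
        # Low-order interleaving: the least significant bits select the bank,
        # so consecutive addresses fall in different banks.
        return address % NUM_BANKS

    def high_order_bank(address):
        # High-order interleaving (memory banking): the most significant bits select the bank,
        # so consecutive addresses stay in the same bank.
        return address >> (ADDRESS_BITS - BANK_BITS)

    for addr in (256, 257, 258, 259):
        print(addr, "-> low-order bank", low_order_bank(addr),
              "| high-order bank", high_order_bank(addr))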
Associative Memory
 This memory is also known as content addressable memory.
 Its hardware organization consists of an argument register to hold the CPU's
request and a key register to hold a predefined "mask", which is logically
ANDed with the CPU's request in the argument register.
 The CPU's request is thus masked, and the remaining unmasked bits are then
searched in parallel against each word in the associative memory array.
 During this parallel search, as soon as a match is found, the corresponding match
register bit is set and the data word from the matched location is placed on the
output for the CPU.
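
A software sketch of the masked parallel search just described (real associative memory compares every word simultaneously in hardware; this loop, and the 8-bit words in it, are only illustrative):

    memory_array = [0b10110100, 0b10111111, 0b01110100, 0b10110111]   # stored words
    argument     = 0b10110111                # CPU's request (argument register)
    key          = 0b11111100                # mask: only the upper 6 bits take part in the match

    match_register = []
    for word in memory_array:
        # A word matches when its unmasked bits equal the unmasked bits of the argument.
        match_register.append((word & key) == (argument & key))

    print(match_register)                    # [True, False, False, True]
    matched_words = [w for w, hit in zip(memory_array, match_register) if hit]
    print([bin(w) for w in matched_words])   # the matched words are placed on the output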

Paging vs. Segmentation

Parameter | Paging | Segmentation
Working | In the case of paging, we break a process address space into fixed-size blocks known as pages | In the case of segmentation, we break a process address space into variable-size blocks known as sections (segments)
Memory size | The pages are blocks of fixed size | The sections are blocks of varying sizes
Accountability | The OS divides the available memory into individual pages | The compiler mainly calculates the size of individual segments, their actual address as well as their virtual address
Speed | This technique is comparatively much faster in accessing memory | This technique is comparatively slower in accessing memory
Size | The available memory determines the individual page sizes | The user determines the individual segment sizes
Fragmentation | The paging technique may underutilize some of the pages, thus leading to internal fragmentation | The segmentation technique may not use some of the memory blocks at all; thus, it may lead to external fragmentation
Logical address | A logical address divides into page number and page offset in the case of paging | A logical address divides into section number and section offset in the case of segmentation
Data storage | In the case of paging, the page table leads to the storage of the page data | In the case of segmentation, the segmentation table leads to the storage of the segmentation data
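
To illustrate the "Logical address" row for paging, the sketch below splits a logical address into page number and page offset, assuming 4 KB pages (an invented figure):

    PAGE_SIZE = 4096                          # assumed page size: 4 KB => 12 offset bits
    OFFSET_BITS = 12

    def split_paged_address(logical_address):
        # The page number comes from the high-order bits, the offset from the low-order bits.
        page_number = logical_address >> OFFSET_BITS
        offset = logical_address & (PAGE_SIZE - 1)
        return page_number, offset

    page, offset = split_paged_address(0x3A7F)
    print("page number:", page, "offset:", hex(offset))   # page number: 3 offset: 0xa7f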
