COA Chapter Seven

The document discusses memory organization in computer architecture, detailing the memory hierarchy which includes main memory, auxiliary memory, and cache memory. It explains the types of RAM and ROM, their characteristics, and the importance of cache memory in improving processing speed. Additionally, it covers mapping techniques such as associative, direct, and set-associative mapping, along with operations and replacement algorithms used in cache management.

Haramaya University

College of Computing and Informatics


Department of Computer Science

Computer Organization and Architecture (COA)

03/27/2025
Chapter 7
MEMORY ORGANIZATION
Topics
Memory Hierarchy
Main Memory
Auxiliary Memory
Cache Memory
Associative mapping
Direct mapping
Set-Associative mapping
Memory Hierarchy
• The memory unit is an essential component in any digital computer, since it is needed for storing programs and data.
• Not all accumulated information is needed by the CPU at the same time.
• Therefore, it is more economical to use low-cost storage devices as backup storage for information that is not currently used by the CPU.
Memory Hierarchy
• The computer memory hierarchy is a pyramid structure commonly used to illustrate the significant differences among memory types.
• The memory unit that communicates directly with the CPU is called the main memory.
• Devices that provide backup storage are called auxiliary memory.
• The memory hierarchy system consists of all storage devices employed in a computer system, from the slow but high-capacity auxiliary memory, to a relatively faster main memory, to an even smaller and faster cache memory.
Main memory
• Most of the main memory in a general-purpose computer is made up of RAM integrated circuit chips, but a portion of the memory may be constructed with ROM chips.
• RAM – Random Access Memory
o Integrated RAM chips are available in two operating modes: static and dynamic.
• ROM – Read-Only Memory
RAM
 A RAM chip is better suited for communication with the CPU if it has one or more control inputs that select the chip when needed.
 It is read/write memory that initially does not contain any data.
 The computing system it is used in stores data at various locations and later retrieves it from these locations.
 Its data pins are bidirectional (data can flow into or out of the chip via these pins), as opposed to those of ROM, which are output only.
 It loses its data once power is removed, so it is a volatile memory.
RAM…
Random-Access Memory
Types
Static RAM (SRAM)
 Each cell stores a bit with a six-transistor circuit.
 Retains its value indefinitely, as long as it is kept powered.
 Relatively insensitive to disturbances such as electrical noise.
 Faster and more expensive than DRAM.
Dynamic RAM (DRAM)
 Each cell stores a bit with a capacitor and a transistor.
 The value must be refreshed every 10-100 ms.
 Sensitive to disturbances.
 Slower and cheaper than SRAM.
ROM
 ROM is used for storing programs that are permanently resident in the computer, and for tables of constants that do not change in value once production of the computer is completed.
 The ROM portion of main memory is needed for storing an initial program called the bootstrap loader, which starts the computer software operating when power is turned on.
ROM…
Types of ROM
 Masked ROM – programmed with its data when the chip is fabricated.
 PROM – Programmable ROM; programmed by the user with a standard PROM programmer by burning some special fuses. Once programmed, it cannot be programmed again.
 EPROM – Erasable PROM; the chip can be erased and reprogrammed. The programming process consists of charging some internal capacitors; UV light (the erase method) causes those capacitors to leak their charge, thus resetting the chip.
 EEPROM – Electrically Erasable PROM; it is possible to modify individual locations of the memory, leaving others unchanged. One common use of the EEPROM is in the BIOS of personal computers.
Auxiliary Memory
 Main memory construction is costly; therefore, it has to be limited in size.
 Main memory is used to store only those instructions and data which are to be used immediately. However, a computer has to store a large amount of information. The bulk of this information is stored in auxiliary memory, also called backing storage or secondary storage. Auxiliary memory includes hard disks, floppy disks, CD-ROMs, USB flash drives, etc.
 When the electricity supply to the computer is cut off, all data stored in primary storage is destroyed. This is not true for auxiliary memory devices.
Virtual memory
 Virtual memory is imaginary memory: it gives you the illusion of a memory arrangement that is not physically there.
 A programmer can write a program which requires more memory than the capacity of main memory; such a program is executed using the virtual memory technique.
Cache Memory
 Cache memory is a small, fast memory that is sometimes used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate.
 It is the fastest memory, and it is kept between the CPU and main memory.
 It keeps the most frequently accessed instructions and data in the fast cache memory.
 Locality of Reference: references to memory tend to be confined within a few localized areas of memory.
 Locality of reference refers to the tendency of a computer program to access instructions whose addresses are near one another.
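Locality can be illustrated with a small sketch (not from the slides): counting how often successive memory references leave the current cache block. The block size and the address sequences are illustrative assumptions.

```python
# Sketch: sequential traversal exhibits spatial locality, since consecutive
# references fall in nearby addresses (often within the same cache block).
def count_block_changes(addresses, block_size):
    """How often a reference lands in a different block than the previous one."""
    changes = 0
    prev_block = None
    for a in addresses:
        block = a // block_size
        if block != prev_block:
            changes += 1
        prev_block = block
    return changes

sequential = list(range(64))                 # a loop scanning an array
scattered = [0, 512, 64, 960, 128, 600]      # widely spread references
print(count_block_changes(sequential, 16))   # 4 block changes in 64 references
print(count_block_changes(scattered, 16))    # 6: every reference changes block
```

The sequential pattern touches only 4 distinct blocks for 64 references, which is why caching the current block pays off.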
L1 ,L2 and L3 cache
L1 cache (2KB - 64KB)
• L1 cache (also known as primary cache or Level 1 cache) is the topmost cache in the hierarchy of cache levels of a CPU.
• It is the fastest cache in the hierarchy. It has a smaller size and a smaller delay (zero wait-state) because it is usually built into the chip.
• SRAM (Static Random Access Memory) is used for the implementation of L1.
L2 cache (256KB - 512KB)
• L2 cache (also known as secondary cache or Level 2 cache) is the cache next to L1 in the cache hierarchy.
• L2 is usually accessed only if the data being looked for is not found in L1.
• L2 is also implemented with SRAM. Most times, L2 is soldered onto the motherboard very close to the chip (but not on the chip itself), though some processors, like the Pentium Pro, deviated from this standard.
L3 cache (1MB -8MB)
 Level 3 Cache – with each cache miss, the processor proceeds to the next cache level. L3 is the largest of all the caches; even though it is slower, it is still faster than RAM.
Difference between L1, L2 and L3
cache
• All processors rely on L1 cache; it is usually located on the die of the processor and is very fast (and expensive) memory.
• L2 cache is slower, bigger, and cheaper than L1 cache. Older processors used L2 cache on the motherboard; nowadays it tends to be built into the processor.
• L3 cache is slower, bigger, and cheaper than L2 cache. Again, this can be on-chip or on the motherboard.
Operations of cache memory
 The basic operation of the cache is as follows.
 When the CPU needs to access memory, the
cache is examined.
 If the word is found in the cache, it is read from the
fast memory.
 If the word addressed by the CPU is not found in the
cache, the main memory is accessed to read the
word.
 A block of words containing the one just accessed is
then transferred from main memory to cache
memory.
Cont…
Hit ratio: the quantity used to measure the performance of cache memory.
 The ratio of the number of hits to the total number of CPU references to memory (hits plus misses).
Hit: the CPU finds the word in the cache.
Miss: the word is not found in the cache (the CPU must read main memory).
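The hit-ratio definition above can be sketched as a one-line computation; the counts used in the example are made up for illustration.

```python
# Hit ratio = hits / (hits + misses), per the definition above.
def hit_ratio(hits, misses):
    """Fraction of CPU memory references satisfied by the cache."""
    total = hits + misses
    return hits / total if total else 0.0

# Example: 950 hits out of 1000 total references gives a hit ratio of 0.95.
print(hit_ratio(950, 50))  # 0.95
```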
Memory Connection to CPU
 RAM and ROM chips are connected to the CPU through the data and address buses.
 The low-order lines in the address bus select the byte within a chip, and the other lines in the address bus select a particular chip through its chip-select inputs.
Figure: Memory Connection to CPU
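The address split described above can be sketched as follows. The 7-bit intra-chip offset (i.e., a 128-byte RAM chip) is an illustrative assumption, not a value stated in the slides.

```python
# Sketch of address decoding for the CPU-memory connection: the low-order
# address lines select the byte within a chip, and the high-order lines
# drive the chip-select logic. chip_bits=7 assumes a 128-byte chip.
def decode(address, chip_bits=7):
    offset = address & ((1 << chip_bits) - 1)  # low-order lines: byte in chip
    chip = address >> chip_bits                # high-order lines: chip select
    return chip, offset

print(decode(0b0000001_0000101))  # (1, 5): chip 1, byte 5
```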
Mapping
Mapping is the transformation of data from main memory to cache memory.
There are three types of mapping process:
 Associative mapping
 Direct mapping
 Set-associative mapping
Associative mapping
 The fastest and most flexible cache
organization uses an associative memory.
 The associative memory stores both the
address and content (data) of the memory
word.
Associative Mapping Cache (all numbers in octal)
Cont…
 Any location in the cache can store any word from main memory.
 The address value of 15 bits is shown as a five-digit octal number, and its corresponding 12-bit word is shown as a four-digit octal number.
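The associative organization above can be sketched in a few lines of Python; the class, the eviction choice, and the octal values are illustrative assumptions, not details from the slides.

```python
# Sketch of a fully associative cache: it stores (address, data) pairs,
# and any cache line can hold any word of main memory.
class AssociativeCache:
    def __init__(self, size):
        self.size = size
        self.lines = {}  # address -> data; any slot fits any word

    def read(self, address, main_memory):
        if address in self.lines:            # hit: found by address match
            return self.lines[address]
        data = main_memory[address]          # miss: fetch from main memory
        if len(self.lines) >= self.size:     # cache full: evict oldest entry
            self.lines.pop(next(iter(self.lines)))
        self.lines[address] = data
        return data

# Illustrative 15-bit octal addresses and 12-bit octal words.
memory = {0o02777: 0o6710, 0o01000: 0o3450}
cache = AssociativeCache(size=2)
print(oct(cache.read(0o02777, memory)))  # miss, word loaded: 0o6710
```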
Direct mapping
 Each memory block has only one place where it can be loaded in the cache.
 The mapping table is made of RAM instead of CAM.
 In the general case, there are 2^k words in cache memory and 2^n words in main memory.
 An n-bit memory address consists of two parts: k bits of index field and n-k bits of tag field.
 n-bit addresses are used to access main memory, and the k-bit index is used to access the cache.
Addressing Relationships between Main and Cache Memories
Operation
 The CPU generates a memory request; the index field is used as the address to access the cache.
 The tag field of the CPU address is compared with the tag in the word read from the cache.
 If the two tags match, there is a hit, and the desired data word is in the cache.
 If there is no match, there is a miss, and the required word is read from main memory.
 It is then stored in the cache together with the new tag, replacing the previous value.
Consider the numerical example of Direct Mapping shown in figure below
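The direct-mapping steps above can be sketched as follows; the 9-bit index and the sample values are illustrative assumptions (consistent with the 15-bit addresses mentioned earlier), not taken from the figure.

```python
# Sketch of direct-mapped addressing, following the slide's notation:
# an n-bit address splits into a k-bit index (low bits) and an (n-k)-bit tag.
def split_address(address, k):
    index = address & ((1 << k) - 1)  # low k bits select the cache word
    tag = address >> k                # remaining high bits form the tag
    return tag, index

def direct_read(cache, address, main_memory, k):
    """Read through a direct-mapped cache (cache maps index -> (tag, data))."""
    tag, index = split_address(address, k)
    entry = cache.get(index)
    if entry is not None and entry[0] == tag:  # tags match: hit
        return entry[1]
    data = main_memory[address]                # miss: read main memory
    cache[index] = (tag, data)                 # store word with the new tag
    return data

# Example: 15-bit address, 9-bit index (illustrative values).
cache = {}
memory = {0b101000_000000001: 0o5670}
print(direct_read(cache, 0b101000_000000001, memory, 9))  # miss: 3000 (0o5670)
```

Note that two addresses sharing the same index always evict each other, which is the main weakness direct mapping has compared with the associative schemes.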
Set-Associative mapping
 Each data word is stored together with its tag, and a number of tag-data items are stored in one word of cache.
 Each memory block has a set of locations in the cache where it can be loaded.
 In general, a set-associative cache of set size k will accommodate k words of main memory in each word of cache.
Two-way set-associative mapping cache.
Operation
 When the CPU generates a memory request, the index value of the address is used to access the cache.
 The tag field of the CPU address is then compared with both tags in the cache to determine whether a match occurs.
 The comparison logic is done by an associative search of the tags in the set, similar to an associative memory search: thus the name "set-associative."
Cont…
• The hit ratio will improve as the set size
increases because more words with the same
index but different tags can reside in cache.
However, an increase in the set size increases
the number of bits in words of cache and
requires more complex comparison logic.
• When a miss occurs in a set-associative cache
and the set is full, it is necessary to replace one
of the tag-data items with a new value.
Cont…
The most common replacement algorithms used are:
 First-In, First-Out (FIFO): The FIFO procedure selects
for replacement the item that has been in the set the
longest.
 Least Recently Used (LRU): The LRU algorithm selects for replacement the item that has been least recently used by the CPU.
 Optimal Replacement: replaces the page that will not be used for the longest period of time.
Both FIFO and LRU can be implemented by adding a few
extra bits in each word of cache.
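LRU replacement can be sketched with an ordered map standing in for those extra bits; the capacity and the tag sequence are illustrative assumptions.

```python
from collections import OrderedDict

# Sketch of LRU replacement: on a hit, the tag moves to the most-recently-used
# end; on a miss with a full set, the least-recently-used item is evicted.
# (FIFO would differ only in never reordering on hits.)
def lru_access(set_lines, tag, capacity):
    if tag in set_lines:
        set_lines.move_to_end(tag)      # hit: mark as most recently used
        return "hit"
    if len(set_lines) >= capacity:
        set_lines.popitem(last=False)   # evict the least recently used item
    set_lines[tag] = True
    return "miss"

lines = OrderedDict()
for t in ["A", "B", "A", "C", "D"]:     # capacity 3: D evicts B, not A,
    lru_access(lines, t, 3)             # because A was re-used after B
print(list(lines))  # ['A', 'C', 'D']
```

Under FIFO the same sequence would instead evict A, the oldest insertion, showing how the two policies diverge once items are re-used.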
Cont…
Writing into Cache
There are two ways of writing into memory:
 Write-Through
 When writing into memory:
 If hit, both the cache and memory are written in parallel.
 If miss, memory is written.
 Write-Back
 When writing into memory:
 If hit, only the cache is written.
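The two policies above can be contrasted in a short sketch; the dict-based cache and memory, and the handling of the miss case for write-back, are illustrative assumptions.

```python
# Sketch contrasting the write-through and write-back policies on a write.
def write(cache, memory, address, value, policy):
    hit = address in cache
    if policy == "write-through":
        if hit:
            cache[address] = value     # cache and memory written in parallel
        memory[address] = value        # memory is always written
    elif policy == "write-back":
        if hit:
            cache[address] = value     # only the cache is written on a hit;
            # the block is written back to memory later, when it is evicted
        else:
            memory[address] = value    # assumed behavior on a miss
    return hit
```

With write-back, main memory can temporarily hold a stale copy until the modified block is evicted, which is why real caches track a dirty bit per block.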
THANK YOU!!