
COMPUTER MEMORY SYSTEM

Some Basic Concepts

• Maximum size of memory is determined by the addressing scheme
• A 16-bit computer that generates 16-bit addresses is capable of addressing up to 2^16 memory locations
• A 32-bit address can utilize a memory that contains up to 2^32 memory locations
• The number of locations represents the size of the address space of the computer (see the sketch below)
• Modern computers are byte addressable
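As a quick illustration of how the addressing scheme fixes the maximum memory size, the following Python snippet (not from the slides; the helper name is mine) computes the number of locations an address of a given width can name:

# Illustrative sketch: the address width in bits fixes the address-space size.
def address_space_size(address_bits: int) -> int:
    """Return the number of distinct locations a given address width can name."""
    return 2 ** address_bits

print(address_space_size(16))   # 65536 locations for a 16-bit address
print(address_space_size(32))   # 4294967296 locations for a 32-bit address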
Some Basic Concepts
• Fig. 5.1 shows a possible address assignment for a byte-addressable 32-bit computer
Some Basic Concepts
• Data transfer between memory & CPU takes place through the MAR (Memory Address Register) & MDR (Memory Data Register)
• If MAR is k bits long & MDR is n bits long, then the memory unit contains up to 2^k addressable locations
• During a "memory cycle", n bits of data are transferred between memory & CPU over the processor bus, which has k address lines & n data lines

Figure 5.2 Connection of main memory


Some Basic Concepts
• The CPU initiates a memory operation by loading the appropriate data into MDR & MAR & then setting either the Read or Write line to 1
• When the operation is completed, the memory control circuitry indicates this to the CPU by setting MFC (Memory Function Completed) to 1 (a small sketch of this handshake follows below)
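The handshake just described can be sketched in a few lines of Python. This is an illustration of the idea only: the MemoryUnit class and its method names are hypothetical, while MAR, MDR, Read/Write and MFC follow the slides.

# Minimal sketch of the CPU-memory handshake: MAR holds the address,
# MDR holds the data, MFC signals completion of the operation.
class MemoryUnit:
    def __init__(self, address_bits: int, word_bits: int):
        self.words = [0] * (2 ** address_bits)   # up to 2^k addressable locations
        self.word_bits = word_bits                # n data lines
        self.mfc = 0                              # Memory Function Completed flag

    def cycle(self, mar: int, mdr: int, read: bool) -> int:
        """Perform one memory cycle: transfer n bits between memory and CPU."""
        self.mfc = 0
        if read:
            mdr = self.words[mar]                 # Read: memory -> MDR
        else:
            self.words[mar] = mdr                 # Write: MDR -> memory
        self.mfc = 1                              # signal completion to the CPU
        return mdr

mem = MemoryUnit(address_bits=16, word_bits=8)
mem.cycle(mar=0x00FF, mdr=0x5A, read=False)           # write 0x5A to location 0x00FF
print(hex(mem.cycle(mar=0x00FF, mdr=0, read=True)))   # read it back -> 0x5a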

Some Basic Concepts
• A measure of the speed of memory is the time that elapses between the initiation & completion of an operation, called the memory access time
• Memory cycle time is the minimum time delay required between the initiation of 2 successive memory operations
• Cycle time is normally larger than the access time
Semiconductor RAM
• Memory cells are usually organized in the form of an array in which each cell is capable of storing one bit of information, as shown in Figure 5.3
Semiconductor RAM
• Each row contains a memory word & all cells of a row are connected to a common line referred to as the word line, which is driven by the address decoder
• Cells in each column are connected to a Sense/Write circuit by 2 bit lines
• Sense/Write circuits are connected to the data input/output lines
Semiconductor RAM
• Figure 5.3 is an example of a small chip consisting of 16 words of 8 bits each
• This is referred to as a 16*8 organization
• The data input & data output of each Sense/Write circuit are connected to a single bidirectional data line in order to reduce the number of pins
Semiconductor RAM
• The R/W input specifies the required operation
• The CS (Chip Select) input selects a given chip in a multichip memory system (see the sketch below)
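A toy model of such a chip, assuming the 16*8 organization and the CS and R/W inputs described above (the RamChip class and its interface are illustrative, not from the slides), might look like this:

# Sketch of a 16x8 RAM chip with Chip Select (CS) and R/W inputs.
class RamChip:
    WORDS, BITS = 16, 8                       # 16*8 organization

    def __init__(self):
        self.cells = [0] * self.WORDS

    def access(self, cs: int, rw: int, address: int, data_in: int = 0):
        """cs=1 selects this chip; rw=1 reads, rw=0 writes the addressed word."""
        if not cs:
            return None                       # chip not selected: data line stays inactive
        if rw:
            return self.cells[address]        # Read
        self.cells[address] = data_in & 0xFF  # Write (mask to 8 bits)
        return None

# Two chips sharing the same data lines; CS picks which one responds.
chip0, chip1 = RamChip(), RamChip()
chip0.access(cs=1, rw=0, address=3, data_in=0xAB)
print(chip0.access(cs=1, rw=1, address=3))    # 171 (0xAB)
print(chip1.access(cs=0, rw=1, address=3))    # None: this chip was not selected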
Random Access Memory (RAM)
• A memory unit is called Random Access Memory (RAM) if any location
can be accessed for a Read or Write operation in some fixed amount
of time that is independent of location’s address
• SRAM (Static RAM): uses transistors & latches; power consumption is low
• DRAM (Dynamic RAM): uses capacitors & few transistors; power consumption is high
Read Only Memory (ROM)

• It is a type of storage that permanently stores data on personal computers & other electronic devices
Cache Memories

◾ The processor is much faster than the main memory
 As a result, the processor has to spend much of its time waiting while instructions and data are being fetched from the main memory
 This is a major obstacle towards achieving good performance
◾ The speed of the main memory cannot be increased beyond a certain point
◾ Cache memory is an architectural arrangement which makes the main memory appear faster to the processor than it really is
◾ Cache memory is based on the property of computer programs known as "locality of reference"
Cache Memories
• The effectiveness of the cache mechanism is based on a property of computer programs called the locality of reference
• In Figure 5.13, when a read request is received, the contents of the block of memory words containing the specified location are transferred into the cache one word at a time
• When the program references any location in this block, the desired contents are read directly from the cache (a small sketch of this effect follows below)
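A rough sketch of why locality pays off: once the block holding a referenced word has been brought into the cache, later references to neighbouring words hit in the cache. The block size and access pattern below are made-up values for illustration only.

# Illustrative sketch of spatial locality under block transfers.
BLOCK_SIZE = 4                       # words per block (illustrative value)
cache_blocks = set()                 # block numbers currently in the cache
hits = misses = 0

for address in range(32):            # a program scanning consecutive locations
    block = address // BLOCK_SIZE
    if block in cache_blocks:
        hits += 1                    # word found in a block already in the cache
    else:
        misses += 1                  # block transferred in from main memory
        cache_blocks.add(block)

print(hits, misses)                  # 24 hits, 8 misses: most references hit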
Cache Memories
• The cache can store a number of blocks, but this number is small compared to the total number of blocks in main memory
• The correspondence between main memory blocks & those in the cache is specified by a mapping function
• When the cache is full & a memory word that is not in the cache is referenced, the cache control hardware decides which block should be removed to create space for the new block that contains the referenced word
• Rules for making this decision are called replacement algorithms
Mapping Functions
• The transformation of data from main memory to cache memory is called mapping
• For every word stored in the cache, there is a duplicate copy in main memory
• The CPU communicates with both memories, cache & main memory
• If the CPU finds the data in the cache, it is called a cache hit
• If the CPU does not find the data in the cache, it is called a cache miss (a direct-mapped example is sketched below)
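One common mapping function is direct mapping, where main-memory block j is placed in cache block j mod (number of cache blocks) and a tag identifies which memory block is resident. The sketch below is an assumption-laden illustration (cache size and names are mine, not from the slides) of how hits and misses arise under that scheme.

# Sketch of direct mapping: block j of main memory maps to cache block
# (j mod NUM_CACHE_BLOCKS); the stored tag disambiguates which block is resident.
NUM_CACHE_BLOCKS = 8                           # illustrative cache size

cache_tags = [None] * NUM_CACHE_BLOCKS         # tag kept with each cache block

def reference(block_number: int) -> str:
    """Return 'hit' if the block is in the cache, else load it and return 'miss'."""
    index = block_number % NUM_CACHE_BLOCKS    # which cache block it maps to
    tag = block_number // NUM_CACHE_BLOCKS     # which memory block would be resident
    if cache_tags[index] == tag:
        return "hit"                           # CPU finds the data in the cache
    cache_tags[index] = tag                    # replace the resident block
    return "miss"

print(reference(5))    # miss: block 5 loaded into cache block 5
print(reference(5))    # hit
print(reference(13))   # miss: block 13 also maps to cache block 5 and evicts block 5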
Replacement Algorithms
• LRU (Least Recently Used)
• It replaces the page that has not been referenced for the longest time
• Consider an example in which a process requests some pages (a small simulation is sketched after this list)
• Fault: a page fault occurs if the requested page is not available in a page frame
• Hit: if the requested page is available in a page frame, it is called a page hit
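The LRU policy can be simulated in a few lines. The page reference string and frame count below are illustrative assumptions, not the slides' worked example (which produces 10 faults).

# Sketch of LRU page replacement: evict the page not referenced for the longest time.
def lru_faults(reference_string, num_frames):
    frames = []                               # most recently used page kept at the end
    faults = 0
    for page in reference_string:
        if page in frames:                    # Hit: page already in a frame
            frames.remove(page)
        else:                                 # Fault: page not in any frame
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)                 # evict the least recently used page
        frames.append(page)                   # mark this page as most recently used
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], num_frames=3))  # 8 faults for this string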
LRU (Least Recently Used)

• Worked example: total no. of page faults = 10 (F – Fault, H – Hit)
