
Chapter Two

Memory & I/O Devices

COA Lec Notes March 2017 1


Contents
• Memory & Organization
– Introduction (memory)
– Memory characteristics
– Semiconductor memory & organization
• Memory hierarchy
• Cache Memory
• Virtual Memory
• Input/output
• Bus structures, types of buses
Memory
• Memory is an essential component for storing programs and data.
• Computer users accumulate large amounts of data, processing software, and
  other information.
• But not all of this information is required by the CPU at the same time.
• Low-cost storage devices are used as backup, holding information not
  currently in use by the processor.
• Devices that provide backup storage are called auxiliary memory
  (magnetic disks and tapes).
• Auxiliary memory stores system programs, large data files, and other backup
  information.
• Only the programs and data currently required by the CPU reside in main
  memory.
Memory Characteristics/classifications
• Location
• Capacity
• Unit of transfer
• Access methods
– Random
– Associative
– Sequential
– Direct
• Performance
– Access time
– Memory Cycle time
– Transfer Rate
• Physical type



Semiconductor Memory &
organization
• Semiconductor Memory
– Registers
– Cache
– RAM
– ROM



Memory Organization
• The smallest unit of information is the bit
• 8 bits together are called a byte
• The maximum size of main memory is determined by the addressing scheme
  – E.g. a 16-bit address can reference 2^16 = 64K distinct locations
• A memory can be:
  – Bit addressable: every bit has a unique address
  – Byte addressable: every byte has a unique address
  – Word addressable: each word has a unique address; a word can be 1, 2, or
    4 bytes


Main memory can be byte or word addressable

• Consider byte addressable: each row holds 8 bits/cells
• Word size: words are expressed in bytes (8 bits). A word can, however, be
  any number of bytes. Commonly used word sizes are 1 byte (8 bits),
  2 bytes (16 bits) and 4 bytes (32 bits).
• For example, in a 64K x 8 memory each cell is 8 bits wide, and we need a
  16-bit address to reference all 64K cells.
• If the memory were 64K x 16 and byte addressable, we would need a 17-bit
  address to reference all 128K bytes.
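The address-width arithmetic above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not part of the notes):

```python
import math

def address_bits(num_words, bytes_per_word=1, byte_addressable=True):
    """Number of address bits needed to give every addressable unit a unique address."""
    units = num_words * bytes_per_word if byte_addressable else num_words
    return math.ceil(math.log2(units))

print(address_bits(64 * 1024, 1))   # 16: 64K one-byte cells
print(address_bits(64 * 1024, 2))   # 17: 64K x 16 byte addressable -> 128K bytes
print(address_bits(64 * 1024, 2, byte_addressable=False))  # 16: word addressable
```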



Byte addressable, with 16 bits/cells per word



Word addressable



Consider a 32-bit address, 32-bit word size, and byte-addressable memory

• Higher-order 30 bits are used for addressing words
• Lower two bits are used for accessing bytes within a word
• Since the memory is byte addressable, each byte needs a unique address. A
  32-bit word holds 4 bytes, so 2 bits are enough to distinguish the bytes
  within a word.
• The remaining 30 bits reference the words themselves: about 1 billion
  (2^30) words.
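The field split described above can be sketched as follows (the function name is illustrative):

```python
def split_byte_address(addr):
    """Split a 32-bit byte address into (word address, byte offset) for 4-byte words."""
    word_address = addr >> 2     # upper 30 bits: one of 2^30 words
    byte_offset = addr & 0b11    # lower 2 bits: one of 4 bytes in the word
    return word_address, byte_offset

print(split_byte_address(7))   # (1, 3): byte 3 of word 1
```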



Basic memory element for dynamic
RAM
here first the address line should
be
ON which means should be high ,
then any thing that we provide
through the bit line is stored on the
capacitor

we can use decoder to select the


address lines

COA Lec Notes March 2017 11


Basic memory element for static RAM
Cell



SRAM vs DRAM
• The DRAM cell is smaller and simpler than the static memory cell
• DRAM is denser: more cells per unit area
• DRAM needs a refresh circuit
• SRAM is faster, and is used for cache


Internal organization of a 16x8 memory chip



128x8 memory chip organization



Organization of 1K x 1 memory (each cell stores one bit, not one byte)

• A word is the smallest unit that can be uniquely addressed
• Word size is 1 bit
• Data bus is 1 bit
• Address lines are 10 bits (2^10 = 1K)


Organization of 1K x 1 as 32 x 32 (32 x 32 = 1024 = 1K; there are 1024 memory
cells in either case)

• 32 one-bit words are organized in each row
• The address lines are grouped in two
  – 5 higher bits for the row address, the 5 lower bits for the column address
• Each bit must still be referenced individually: as in the 1K x 1 case every
  cell has its own address, so 5 bits select one of the 32 cells in a row and
  5 more bits select one of the 32 rows.


Multiplexing address lines
• To reduce the number of pins for external connections, the row and column
  addresses are multiplexed.
• During a read/write operation, the row address is applied first and loaded
  into the row address latch.
• The column address is sent next and loaded into the column address latch.
• Hence, the desired cell is accessed.
• Bandwidth is the rate at which words can be transferred.


Design of memory using memory chips
• Larger memories are designed using smaller-size memory chips
• Examples
  – 16K x 8 using 4K x 8 chips
  – 64K x 16 using 16K x 8 chips
• Each chip has a control input called chip select (CS) that can be enabled
  to accept or send data
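One plausible decoding scheme for the 16K x 8-from-4K x 8 example can be sketched below. The exact wiring depends on the design, so treat this as an assumed arrangement: the top 2 of the 14 address bits drive a 2-to-4 decoder whose outputs feed the CS inputs, and the low 12 bits go to every chip in parallel.

```python
def select_chip(addr):
    """16K x 8 memory from four 4K x 8 chips (assumed arrangement):
    top 2 address bits pick the chip via a 2-to-4 decoder driving CS,
    low 12 bits address a cell inside the enabled chip."""
    chip = (addr >> 12) & 0b11   # which 4K x 8 chip is enabled
    offset = addr & 0xFFF        # cell within the enabled chip
    return chip, offset

print(select_chip(0x1005))  # (1, 5): byte 5 of chip 1
```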



64kx16 memory module
• A total of 16 address bits is needed for a 64K x 16 memory (it is word
  addressable, so each 16-bit word has a unique address).
• Chips are used in pairs; the two chips of a pair are connected to the same
  decoder output, so they share the same chip select.
• Each chip is 16K x 8, so 14 address bits select one of its 16K rows. The
  same 14 bits are fed to both chips of a pair so that the same row is
  selected in both, and data is accessed in full 16-bit words.
• Finally, all chip pairs are connected to the 16-bit data bus, each pair
  supplying one half of the word.


Typical RAM memory modules



RAM and ROM chips connection to CPU
• RAM and ROM chips are connected to the CPU through the data and address
  buses
• Lower-order address lines are used to select the byte/word within the chips
• Higher-order address bits are used for selecting chips through the chip
  select circuits


RAM chips



ROM Chips…



Memory Connection to CPU



Memory Address Map
• Address space assignment for the above
memory chips



Memory Hierarchy
• Goal of the memory hierarchy
  – Try to match the processor speed with the rate of information transfer
    from the lowest element in the hierarchy
• Cache and main memory are the fast memory levels, called primary memory.
• The solid-state memory is followed by larger, less expensive, and far
  slower magnetic memories, typically the hard disk and the tape.
• It is customary to call the disk the secondary memory, while the tape is
  conventionally called the tertiary memory.
• The memory hierarchy can be characterized by parameters such as access
  type, capacity, cycle time, latency, bandwidth, and cost.


Memory Hierarchy



Memory Hierarchy…



Data Transfer hierarchy
• A word is the minimum amount of data transferred between the cache and the
  CPU.
• A block is the minimum amount of data transferred between main memory and
  the cache.


The sequence of events that takes place when
the processor makes a request for an item
• First, the item is sought in the first memory level of the memory
hierarchy. The probability of finding the requested item in the first
level is called the hit ratio h1.
• The probability of not finding (missing) the requested item in the
first level of the memory hierarchy is called miss ratio, (1-h1).
• When the requested item causes a “miss,” it is sought in the next
subsequent memory level.
• The probability of finding the requested item in the second memory
level, hit ratio of the second level, is h2.
• The miss ratio of the second memory level is (1 - h2).
• The process is repeated until the item is found.
• Upon finding the requested item, it is brought and sent to the
processor.



Average access Time
• In a memory hierarchy that consists of three
levels, the average memory access time can
be expressed as follows:
• t_av = t1 + (1 - h1)[t2 + (1 - h2)t3]
• The average access time of a memory level is
defined as the time required to access one
word in that level. In this equation, t1, t2, t3
represent, respectively, the access times of
the three levels.
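The formula can be evaluated directly. A minimal sketch follows; the timing numbers and hit ratios are assumed for illustration only, not taken from the notes.

```python
def average_access_time(t1, t2, t3, h1, h2):
    """t_av = t1 + (1 - h1)[t2 + (1 - h2) t3] for a three-level hierarchy."""
    return t1 + (1 - h1) * (t2 + (1 - h2) * t3)

# Assumed numbers: 1 ns cache, 50 ns main memory, 10 ms disk,
# 95% cache hit ratio, 99% main-memory hit ratio.
print(average_access_time(1, 50, 10_000_000, 0.95, 0.99), "ns")
```

Notice how even a small miss ratio into the disk level dominates the average: locality in the upper levels is what keeps t_av close to t1.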



Cache Memory
• Cache is an intermediate stage between ultra-fast registers and much
  slower main memory.
• Memory access is a bottleneck for computer performance.
• Cache is introduced solely to increase the performance of the computer.
• In a program, a number of instructions are repeatedly executed, e.g. loops.
• Localized areas of a program are executed repeatedly, others less so.
• This phenomenon, the tendency of a processor to access the same set of
  memory locations repetitively over a short period of time, is called
  locality of reference.
• Locality of reference is present in most programs.


Cache Vs Main memory
• A cache memory is faster than main memory
– Faster electronics
– Fewer locations than a main memory, and hence
shallow decoding tree, which reduces access time
– The cache is placed closer to the CPU and avoids
communication delays over a shared bus.
• Cache is 5 to 10 times faster than main memory
• It is more economical than implementing the entire main memory with fast
  memory devices


Cache operation
• The CPU makes a read/write request
• The address generated by the CPU always refers to a location in main memory
• Memory access control circuitry determines whether or not the requested
  word currently exists in the cache
• On a read miss, the block of words containing the specified location is
  transferred to the cache
• For a write request, either the cache and memory locations are updated
  simultaneously, or only the cache block is updated and written back to
  main memory at replacement time if its status (dirty) bit is set to 1
• If a write address is not in the cache, the data is written directly to
  main memory; it is not advantageous to bring the block into the cache
• A mapping function moves data from main memory to the cache
• Replacement algorithms free cache lines when the cache is full


Cache Organization
• Main memory consists of up to 2^n addressable words
• For blocks of K words each, there are M = 2^n/K blocks in main memory.
• The cache consists of m blocks, called lines.
• Each line contains K words, plus a tag of a few bits.
• The number of lines is considerably less than the
number of main memory blocks (m<<M).
• An individual line cannot be uniquely and permanently
dedicated to a particular block.
• Thus, each line includes a tag that identifies which
particular block is currently being stored.
• The tag is usually a portion of the main memory
address



Elements of cache design
– We would like the size of the cache to be small
enough so that the overall average cost per bit is close
to that of main memory
– large enough so that the overall average access time is
close to that of the cache alone.
– There are several other motivations for minimizing
cache size.
– The larger the cache, the larger the number of gates
involved in addressing the cache.
– The result is that large caches tend to be slightly
slower than small ones—even when built with the
same integrated circuit technology



Mapping Function
• Because there are fewer cache lines than main
memory blocks, an algorithm is needed for
mapping main memory blocks into cache lines
• Consider 4k words of cache & 64k words of MM,
block size of 32 words.
– We have 128 cache lines and 2048 MM blocks
– At any instant, only 128 out of 2048 can reside in
cache
– hence, we need mapping function to put particular
block of MM into appropriate line of cache
• Three techniques can be used:
direct, associative, and set associative.
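The block and line counts for this running example can be checked with a few lines of Python:

```python
cache_words = 4 * 1024      # 4K words of cache
memory_words = 64 * 1024    # 64K words of main memory
block_size = 32             # words per block

cache_lines = cache_words // block_size     # lines in the cache
memory_blocks = memory_words // block_size  # blocks in main memory
print(cache_lines, memory_blocks)           # 128 2048
```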
1. Direct cache mapping
• Each address has a specific place in the cache.
• A block can only go into one place in the cache
• (Index) = (Block address) mod(# cache lines)
• Main memory address divided in to 3 fields
– TAG, line, word
• Disadvantage: one cache line may be busy while others are free
  – A block may be evicted as if the cache were full, even though it is not
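For the running example (64K-word memory, 4K-word cache, 32-word blocks, hence 128 lines), the field widths are 5-bit word, 7-bit line, and 4-bit tag. A sketch of that split (the function name is ours):

```python
def direct_map_fields(addr):
    """Split a 16-bit word address (64K-word memory, 4K-word cache, 32-word
    blocks -> 128 lines): 5-bit word, 7-bit line, 4-bit tag."""
    word = addr & 0x1F          # bits 0-4: word within the block
    line = (addr >> 5) & 0x7F   # bits 5-11: cache line = block address mod 128
    tag = addr >> 12            # bits 12-15: tag stored with the line
    return tag, line, word

print(direct_map_fields(0b0011_0000101_00001))  # (3, 5, 1)
```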
Direct Mapping (figure)

An 8-line cache (lines 000 to 111): each memory address (00001, 00101, 01001,
01101, 10001, 10101, 11001, 11101, ...) maps to the cache line given by its
low-order line bits (block address mod number of cache lines).


Example
Consider 4k words
of cache & 64k
words of MM, and
block size is 32
words. Consider
direct mapping



2. Fully Associative
• Search the entire cache for an address.
• Any line can store the contents of any memory
location.
• Main memory address divided in to two fields
– TAG
– word



Example
Consider 4k
words of cache
& 64k words of
MM, and block
size is 32
words.
Consider fully
associative
mapping



Associative Mapping Pros and Cons
• Flexibility as to which block to replace when a
new block is read into cache
– Replacement algorithms designed to maximize
cache hit ratio
• Complex circuitry required to examine the
tags of all cache lines in parallel
• When the processor wants an address, all tag
fields in the cache are checked to determine if
the data is already in the cache.
3. N-way Set Associative Cache
• It is intermediate between direct and fully associative mapping
• Cache lines are grouped into sets
• A block of main memory can reside in any of the lines within a set
• This reduces the searching overhead
• The main memory address is divided into 3 fields
  – Tag, set identifier, word identifier
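For the same running example with a 4-way set-associative cache, 128 lines / 4 ways = 32 sets, giving a 5-bit word field, 5-bit set field, and 6-bit tag. A sketch (function name is illustrative):

```python
def set_assoc_fields(addr):
    """4-way set-associative split of a 16-bit word address for the running
    example: 128 lines / 4 ways = 32 sets -> 5-bit word, 5-bit set, 6-bit tag."""
    word = addr & 0x1F           # bits 0-4: word within the block
    set_id = (addr >> 5) & 0x1F  # bits 5-9: block address mod 32 sets
    tag = addr >> 10             # bits 10-15: tag compared within the set
    return tag, set_id, word

print(set_assoc_fields(0b000011_00010_00001))  # (3, 2, 1)
```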



Example
Consider 4k
words of cache
& 64k words of
MM, and block
size is 32 words.
consider 4-way
set associative
mapping



Virtual Memory
• RAM is not enough to run all the programs that users want at once.
• With no virtual memory, once RAM is filled the computer may report that it
  is not possible to load more applications.
• With virtual memory the computer acts as if it has very large RAM
  – The OS looks for areas of RAM that have not been used recently and copies
    them onto the hard disk.
  – This frees up space in RAM to load new applications.
  – The copying happens automatically; it is not noticeable, and it feels as
    if the machine has unlimited RAM.
• The area of the hard disk that stores the RAM image is called a page file.
• Address and memory spaces are divided into fixed-size groups of words
  called pages or frames/blocks.
• The operating system moves data back and forth between the page file and
  RAM.


Cache-RAM Vs RAM-Disk
• A cache stores a subset of the address space of RAM.
• The cache keeps the most commonly used sections of RAM.
• What if we want more memory than the available RAM?
• Use the disk to extend the amount of memory.
• In effect, RAM acts like a cache for the disk.
• This idea of extending memory is called virtual memory.
• "Virtual" means it is not RAM.


Virtual Memory
• Virtual memory is a memory management technique that maps the memory
  addresses used by a program, called virtual addresses, into physical
  addresses
• It is implemented in both hardware and software
• A hardware unit called the memory management unit (MMU), found in the CPU,
  translates virtual addresses into main memory addresses
• Virtual memory combines the computer's RAM with temporary space on the
  hard disk
• The virtual address space is the total number of uniquely addressable
  memory locations available to an application, not the amount of physical
  memory installed in the system
Page Table
• The page table is an array of page table entries (PTEs) indexed by virtual
  page number
• As in a cache, the virtual address is divided into a virtual page number
  (tag) and a page offset
• Each PTE contains a valid bit
  – If the valid bit is 1, the virtual page is in RAM and the physical page
    number is read from the PTE. This is called a page hit
  – If it is 0, the page is not in RAM and must be fetched from disk. This is
    called a page fault
• To minimize page faults, RAM is treated as fully associative
  – Any page from disk can go into any page frame in RAM
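The valid-bit logic above can be illustrated with a toy page table. The dictionary layout and names are our own illustration, not a real OS structure:

```python
# Toy page table keyed by virtual page number; each PTE holds a valid bit
# and, when valid, a physical page number. The layout is illustrative only.
page_table = {
    0: {"valid": 1, "physical_page": 7},
    1: {"valid": 0, "physical_page": None},  # resident on disk
}

def lookup(virtual_page):
    pte = page_table[virtual_page]
    if pte["valid"]:
        return "page hit", pte["physical_page"]
    return "page fault", None  # the OS must fetch the page from disk

print(lookup(0))  # ('page hit', 7)
print(lookup(1))  # ('page fault', None)
```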



Consider 32 bit virtual address, 1MB of RAM and
byte addressable



Page table Entries indexed by virtual page
number



Address Translation
• If the valid bit of the PTE is 1, translate the virtual page number to a
  physical page number and append the page offset.
• This gives the physical address in RAM.
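Assuming a 4 KB page size (the notes do not fix one), the translation step can be sketched as:

```python
PAGE_OFFSET_BITS = 12  # assumed 4 KB pages; the notes do not fix a page size

def translate(virtual_addr, mapping):
    """Replace the virtual page number with the physical one, keep the offset."""
    vpn = virtual_addr >> PAGE_OFFSET_BITS
    offset = virtual_addr & ((1 << PAGE_OFFSET_BITS) - 1)
    ppn = mapping[vpn]  # assumes the PTE's valid bit was already checked
    return (ppn << PAGE_OFFSET_BITS) | offset

print(hex(translate(0x2ABC, {0x2: 0x5})))  # 0x5abc
```

The page offset passes through unchanged; only the page-number bits are swapped, which is exactly why pages must be aligned to a power-of-two size.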



Example



Example



MMU



Memory mapping in MMU



Example

