Memory Organisation

The document discusses different types of integrated circuits (ICs) used in computer memory organization. It describes the characteristics of different IC families like TTL, ECL, MOS, and CMOS. It then summarizes the key characteristics of different memory types including SRAM, DRAM, ROM, PROM, EPROM, EEPROM and Flash memory. Finally, it discusses memory hierarchy and how different levels of memory like registers, cache, main memory, magnetic disks, and tapes provide faster access but lower storage capacity.


MEMORY ORGANISATION
IC FAMILY
DTL   Diode-Transistor Logic
RTL   Resistor-Transistor Logic
TTL   Transistor-Transistor Logic [ Most Popular ]
ECL   Emitter-Coupled Logic [ High-Speed Operation ]
MOS   Metal-Oxide Semiconductor [ MOSFET ]
CMOS  Complementary MOS [ Low Power Consumption ]
I²L   Integrated Injection Logic

• MOS and I²L are used in circuits of high component density [ LSI ]
• The limit on the number of circuits in SSI is the number of pins
• TTL series: 5400 and 7400 (standard) / 9000 and 8000 (more recent industry series)
• TTL operates at 5 volts
• ECL series: 10000
• CMOS series: 4000 ( 3 to 15 volt operating voltage )
Positive & Negative Logic
CHARACTERISTICS OF IC
FANOUT
The number of standard loads that the output of a gate can drive without
hampering its normal operation, i.e. the maximum number of gates of the
same type that one gate can drive (loading rule).

POWER DISSIPATION
The supplied power required to operate the gate (mW).

PROPAGATION DELAY
The average transition delay for a signal to propagate from input to
output. A signal passes through a series of gates; the sum of the
individual gate delays is the total propagation delay of the circuit.
A high propagation delay means low speed.

NOISE MARGIN
The maximum unwanted (noise) signal that does not produce undesirable
changes in the output (DC noise and AC noise).
COMPARATIVE STUDY

IC TYPE | FANOUT | POWER DISSIPATION (mW) | PROP. DELAY (ns) | NOISE MARGIN (V)
TTL     | 10     | 10                     | 10               | 0.4
ECL     | 25     | 25                     | 2                | 0.2
CMOS    | 50     | 0.1                    | 25               | 3
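Propagation delay composes additively along a chain of gates, so the table above lets one compare circuit speeds across families. A minimal sketch (figures taken from the table; the chain length is illustrative):

```python
# Per-gate characteristics from the comparative table above.
FAMILIES = {
    "TTL":  {"fanout": 10, "power_mW": 10.0, "delay_ns": 10, "noise_margin_V": 0.4},
    "ECL":  {"fanout": 25, "power_mW": 25.0, "delay_ns": 2,  "noise_margin_V": 0.2},
    "CMOS": {"fanout": 50, "power_mW": 0.1,  "delay_ns": 25, "noise_margin_V": 3.0},
}

def total_delay_ns(family: str, gates_in_series: int) -> int:
    """Total propagation delay of a chain: the sum of the per-gate delays."""
    return FAMILIES[family]["delay_ns"] * gates_in_series
```

For example, a 5-gate chain in ECL totals 10 ns against 50 ns in TTL, which is why ECL is labeled high-speed operation.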
CLASSIFICATION OF MEMORY

ACCESS CAPABILITY
• RANDOM ACCESS MEMORY
• SEQUENTIAL ACCESS MEMORY
• SEMI-RANDOM / DIRECT ACCESS MEMORY
• ASSOCIATIVE ACCESS MEMORY

FUNCTION
• READ/WRITE MEMORY
• READ ONLY MEMORY

TECHNOLOGY
• CORE MEMORY
• SEMICONDUCTOR MEMORY
• MAGNETIC BUBBLE MEMORY

ROLE
• MAIN MEMORY
• SECONDARY MEMORY
• CACHE MEMORY
• VIRTUAL MEMORY
MEMORY CLASSIFICATION

STATIC RAM (SRAM)
No refresh needed (6 FET transistors per bit)

DYNAMIC RAM (DRAM)
"Dynamic" since it needs to be refreshed periodically (about every 8 ms,
taking roughly 1% of the time); read-out is destructive
DRAM Refresh Cycles and Refresh Rate

[Fig. 17.5: Variations in the voltage across a DRAM cell capacitor after
writing a 1 and subsequent refresh operations. The stored "1" voltage
decays toward the threshold voltage within tens of ms, so the cell must
be refreshed before that time elapses.]
SRAM serves as cache memory, interfacing between DRAMs and the CPU.
CLASSIFICATION OF DRAM
1. Synchronous Dynamic RAM (SDRAM)
2. Rambus Dynamic RAM (RDRAM)

SYNCHRONOUS DYNAMIC RAM (SDRAM)

DRAM that is synchronized with the system clock
Much faster; speed is rated in megahertz
A. Single Data Rate SDRAM (SDR SDRAM)
B. Double Data Rate SDRAM (DDR SDRAM) [ e.g. DDR, DDR2, DDR3, ... ]

RAMBUS DYNAMIC RAM (RDRAM)

Faster and more expensive than SDRAM
Used with the Intel Pentium IV chip
Requires a lot of space on the motherboard
Transfer speed is not just a function of the memory speed; it also
depends on the system bus
DYNAMIC RAM PACKAGING
Memory components are usually packaged as integrated circuit chips on a
small ceramic or plastic plate. These chips are themselves assembled
into a memory module.
Each memory module is installed in a motherboard slot; present-day
motherboards provide two or more memory slots.

Single In-line Memory Module (SIMM)

 Chips are mounted on one side of the board.
 72 pins; 16 MB, 32 MB, ..., 64 MB memory modules

Dual In-line Memory Module (DIMM)

 Chips are mounted on both sides of the board.
 Takes less space
 Most popular: 64-pin and 84-pin modules

24-pin dual in-line package (DIP) [ 16M x 4 memory ]
[Pin-out of the 24-pin DRAM DIP (16M x 4):
Pins 24-13 (top):    Vss D4 D3 CAS OE A9 A8 A7 A6 A5 A4 Vss
Pins 1-12 (bottom):  Vcc D1 D2 WE RAS NC A10 A0 A1 A2 A3 Vcc
Legend: Ai = address bit i; CAS = column address strobe; Dj = data bit j;
NC = no connection; OE = output enable; RAS = row address strobe;
WE = write enable]
ROM (Read Only Memory)
 Non-volatile
 Information can only be read; it is not possible to write information
 Contents of ROM are not lost or erased even when the power is off
 Also called field store, permanent store or dead store
 Contents are pre-recorded at manufacture

PROM (Programmable ROM)

 Non-volatile
 Programs, data or any other information are permanent in PROM
 Once recorded, the information cannot be changed
 Information can only be read
 E.g. some program chips built into external devices
EPROM (Erasable PROM)
In a PROM the once-fixed pattern is permanent and cannot be altered.
An EPROM can be restored to its initial state even though it has been
programmed previously.
Rewriting is done through a special machine: the previous data are
erased with ultraviolet light, taking nearly 20 minutes, and the new
data are then written.
E.g. 8051 microcontrollers

EEPROM (Electrically Erasable PROM)

 Non-volatile
 Chips can be erased and re-programmed on a byte-by-byte basis
 Selective erasing is possible
 Different voltages for erasing (21 V), writing (21 V) and reading (5 V)
 High cost, low reliability
 E.g. BIOS (Basic Input Output System) chip
Flash Memory
 Non-volatile
 Like EEPROM, reading a single cell is possible,
 but writing must be done one block (set of cells) at a time
 Greater density, lower cost per bit, and low power consumption
 A flash cell is based on a single transistor controlled by trapped charge
 E.g. MP3 players, cell phones, memory sticks, pen drives

EEPROM or Flash memory organization

[Figure: an array of floating-gate transistors on a p-substrate with n+
regions. Word lines drive the control gates, each with a floating gate
below it; source lines and bit lines (drains) run through the array.]
Access Method: A memory is a collection of memory locations. Accessing
the memory means finding and reaching the desired location and then
reading the information from that location. The information in the
locations can be accessed as follows:
1. Random access
2. Sequential access
3. Direct access
• Random Access: The access mode in which each memory location has a
unique address. Using these unique addresses, each memory location can
be accessed independently, in any order, in an equal amount of time.
Main memories are generally random access memories.
• Sequential Access: If storage locations can be accessed only in a
certain predetermined sequence, the access method is known as serial or
sequential access. E.g. audio/video cassette tapes (ribbons).
• Direct Access / Semi-Random Access: Information is stored on tracks
and each track has a separate read/write head. This feature makes it a
semi-random mode, generally used in magnetic disks. E.g. hard disks.
MEMORY HIERARCHY

The goal of the memory hierarchy is to obtain the highest possible
access speed while minimizing the total cost of the memory system.
[Figure: the memory hierarchy. The CPU talks to cache memory and main
memory; an I/O processor connects main memory to auxiliary memory
(magnetic disks and magnetic tapes). Going down the hierarchy
(register, cache, main memory, magnetic disk, magnetic tape), cost per
bit and speed decrease while capacity increases.]
MAIN MEMORY
RAM and ROM Chips

Typical RAM chip: 128 x 8 RAM with an 8-bit bidirectional data bus, a
7-bit address input (AD7), two chip selects (CS1, active high; CS2,
active low), and read (RD) and write (WR) controls.

CS1 CS2 RD WR  Memory function  State of data bus
 0   0   x  x  Inhibit          High-impedance
 0   1   x  x  Inhibit          High-impedance
 1   0   0  0  Inhibit          High-impedance
 1   0   0  1  Write            Input data to RAM
 1   0   1  x  Read             Output data from RAM
 1   1   x  x  Inhibit          High-impedance

Typical ROM chip: 512 x 8 ROM with an 8-bit data bus, a 9-bit address
input (AD9), and two chip selects (CS1, CS2).
MEMORY ADDRESS MAP

Address space assignment to each memory chip

Example: 512 bytes of RAM and 512 bytes of ROM

Component | Hexa address | Address bus (lines 10 ... 1)
RAM 1     | 0000 - 007F  | 0 0 0 x x x x x x x
RAM 2     | 0080 - 00FF  | 0 0 1 x x x x x x x
RAM 3     | 0100 - 017F  | 0 1 0 x x x x x x x
RAM 4     | 0180 - 01FF  | 0 1 1 x x x x x x x
ROM       | 0200 - 03FF  | 1 x x x x x x x x x
Memory Connection to CPU

- RAM and ROM chips are connected to the CPU through the data and
address buses.
- The low-order lines of the address bus select a byte within a chip,
and the remaining lines select a particular chip through its
chip-select inputs.
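The address map above amounts to a small decoder: address line 10 distinguishes ROM from RAM, and lines 9-8 pick one of the four RAM chips. A minimal sketch of that decoding (the chip labels and return format are illustrative):

```python
def decode(addr: int):
    """Decode a 10-bit address per the memory address map:
    line 10 = 1 selects the 512-byte ROM; otherwise lines 9-8 select RAM 1-4."""
    assert 0 <= addr < 1024
    if addr & 0x200:                     # address line 10 set -> ROM (0200-03FF)
        return ("ROM", addr & 0x1FF)     # lines 9..1 address a byte within the ROM
    chip = (addr >> 7) & 0b11            # lines 9-8 select one of four RAM chips
    return (f"RAM {chip + 1}", addr & 0x7F)  # lines 7..1 address within the chip
```

For example, decode(0x0080) lands at offset 0 of RAM 2 and decode(0x0200) at offset 0 of the ROM, matching the table.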
CHIP ALLOCATION IN A MEMORY MODULE
 HORIZONTAL allocation
 VERTICAL allocation
 BOTH (horizontal and vertical) allocation
HORIZONTAL ALLOCATION
VERTICAL ALLOCATION
BOTH HORIZONTAL & VERTICAL ALLOCATION
Bottleneck of the Von Neumann Architecture

 Each instruction and data operand must be fetched from memory
 Intermediate and final results must also be stored in memory
 The data path (data bus) between the ALU and RAM is therefore busy all
the time
 The result is an unreliable, critical single data path

SOLUTION: 1) Banked memory  2) Interleaved memory


BANKED Memory Organization

 Memory is divided into more than one independent block
 The first 256 address locations are in BANK-0, the next consecutive
256 locations in BANK-1, and so on
 The lower-order 8 bits form the internal address within the selected
bank
 The higher-order 2 bits select a particular bank out of the four (4)
banks
INTERLEAVED Memory Organization

 The higher-order 8 bits form the internal address within a bank
 The lower-order 2 bits select a particular bank
 Consecutive locations are distributed across different banks

Problem: Design a 4-address-bit memory divided into two (2) blocks so
that consecutive addresses fall in alternate blocks.
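The 4-bit problem above can be sketched directly: with two blocks, a banked organization uses the high-order address bit to select the block, while interleaving uses the low-order bit, so consecutive addresses alternate between blocks. A minimal sketch:

```python
def banked(addr: int):
    """Banked: high-order bit selects the block, low-order 3 bits address within it."""
    return (addr >> 3, addr & 0b111)   # (block, offset)

def interleaved(addr: int):
    """Interleaved: low-order bit selects the block, high-order 3 bits address within it."""
    return (addr & 0b1, addr >> 1)     # (block, offset)

# Walk all 16 addresses: banked keeps runs in one block,
# interleaving alternates blocks on consecutive addresses.
banked_blocks      = [banked(a)[0] for a in range(16)]
interleaved_blocks = [interleaved(a)[0] for a in range(16)]
```

Since interleaved_blocks alternates 0, 1, 0, 1, ..., back-to-back accesses to consecutive addresses can proceed in different banks in parallel, which is the point of interleaving.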
CACHE MEMORY

Locality of Reference
TEMPORAL locality of reference: A recently executed instruction is
likely to be executed again very soon (so bring it into the cache and
search there first).
SPATIAL locality of reference: Instructions in close proximity to a
recently executed instruction are also likely to be executed very soon.
This suggests that instead of fetching a single instruction from main
memory, we fetch multiple instructions that reside at adjacent
addresses as well (block concept and program sequence).
Desktop, Drawer, and File Cabinet Analogy
Once the "working set" is in the drawer, very few trips to the file
cabinet are needed.

[Fig. 18.3: Items on a desktop (register file, accessed in 2 s) or in a
drawer (cache memory, accessed in 5 s) are more readily accessible than
those in a file cabinet (main memory, accessed in 30 s).]

(Computer Architecture: Memory System Design, Feb. 2011)
HIT RATIO
h = (number of references found in cache) / (total number of memory references)
The performance of a cache memory is measured in terms of the hit ratio.
Cache hit: the CPU refers to memory and finds the word in the cache.
Cache miss: the word is not found in the cache, so main memory is
searched for it.

Hit ratio (h) = 0.9 = 9 / 10 = 9 / (9 + 1)

That is, out of ten (10) memory references, nine (9) are hits and one
(1) is a miss.

Problem: A computer has cache access time = 100 ns, main memory access
time = 1000 ns and hit ratio = 0.9. Find:
1. The average memory access time when the cache is used (Ans. 200 ns)
2. The average memory access time without the cache (Ans. 1000 ns)
Look-through cache: The cache is checked first for every memory
reference; the main memory access is started only if there is a cache
miss.

Average memory access time: TA = TC + (1 - h) * TM

TC = cache access time
TM = main memory access time

Look-aside cache: The cache look-up and the main memory access are
started at the same time (simultaneously). Once a cache hit is
established, the main memory access is cancelled.

Average memory access time: TA = h * TC + (1 - h) * TM

TC = cache access time
TM = main memory access time
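Both formulas can be checked against the earlier problem (TC = 100 ns, TM = 1000 ns, h = 0.9). A minimal sketch:

```python
def look_through(h, tc, tm):
    """Cache checked first; main memory accessed only on a miss."""
    return tc + (1 - h) * tm

def look_aside(h, tc, tm):
    """Cache look-up and main memory access start simultaneously;
    on a hit the memory access is cancelled, so a hit costs only tc."""
    return h * tc + (1 - h) * tm
```

With h = 0.9, TC = 100 ns and TM = 1000 ns, look-through gives 200 ns (matching the answer above), while look-aside gives 190 ns, since a hit no longer pays for the cache check on top of the memory access.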
Write-through cache
The cache location and the main memory location are updated
simultaneously.
Advantages: Main memory is always up to date, so main memory is always
consistent.
Disadvantages: Time is spent on every memory write (more time), and
writes to main memory are unnecessary when a given cache word is
updated several times during its cache residency.
Write-back / copy-back cache
 The policy is to update only the cache location and to mark it as
updated with an associated flag bit. The main memory location of the
word is updated later,
 when the block containing the marked word is removed from the cache
to make room for a new block.
Advantages: Repeated updating of the same cache word does not touch
main memory (less main memory access time).
Disadvantages: Unnecessary write operations, because when a cache block
is written back to main memory all words of the block are written back,
even if only a single word was changed while the block was in the
cache.
Approach 1: The block of words containing the requested word is copied
from main memory into the cache. After the entire block is loaded into
the cache, the particular requested word is forwarded to the processor.
Approach 2 (Load-Through or Early Restart): The requested word is sent
to the processor as soon as it is read from main memory, reducing the
processor's waiting period. While the requested word is transferred
immediately, the block is still copied from main memory into the
cache. More expensive.

Write miss: The addressed word is not in the cache, i.e. the location
being written to is not currently in the cache.

Write around: Update main memory, bypassing the cache.

Write allocate: Write to main memory and also load the cache with the
containing block by mapping (hit ratio increases).
Cache, Hit/Miss Rate, and Effective Access Time
The cache is transparent to the user; transfers occur automatically.

[Figure: the CPU and register file exchange words with the fast cache
memory, which exchanges lines with the slow main memory. Data is found
in the cache a fraction h of the time (say, a hit rate of 98%); the
access goes to main memory 1 - h of the time (say, a cache miss rate of
2%).]

With one level of cache with hit rate h:

Teff = h*Tc + (1 - h)(Tm + Tc) = Tc + (1 - h)Tm
Multiple Cache Levels

[Fig. 18.1: Cache memories act as intermediaries between the superfast
processor and the much slower main memory. (a) Level 2 between level 1
and main memory (cleaner and easier to analyze); (b) level 2 connected
to a "backside" bus.]
Performance of a Two-Level Cache System
Example 18.1
A system with L1 and L2 caches has a CPI of 1.2 with no cache misses.
There are 1.1 memory accesses on average per instruction.
a. What is the effective CPI with cache misses factored in?
b. What are the overall effective hit rate and miss penalty if the L1
and L2 caches are modeled as a single cache?

Level | Local hit rate | Miss penalty
L1    | 95 %           | 8 cycles
L2    | 80 %           | 60 cycles

Solution
Teff = Tc + (1 - h1)[T2 + (1 - h2)Tm]

Because Tc is already included in the CPI of 1.2, we only account for
the miss penalties:
CPI = 1.2 + 1.1(1 - 0.95)[8 + (1 - 0.8)*60] = 1.2 + 1.1 * 0.05 * 20 = 2.3
Overall: hit rate 99% (95% + 80% of the remaining 5%), miss penalty 60
cycles.
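The arithmetic of Example 18.1 can be verified in a few lines:

```python
# Parameters from Example 18.1.
base_cpi = 1.2        # CPI with no cache misses
accesses = 1.1        # memory accesses per instruction
h1, pen1 = 0.95, 8    # L1 local hit rate and miss penalty (cycles)
h2, pen2 = 0.80, 60   # L2 local hit rate and miss penalty (cycles)

# Tc is already inside base_cpi, so only the miss penalties are added.
cpi = base_cpi + accesses * (1 - h1) * (pen1 + (1 - h2) * pen2)

# Modeling L1 + L2 as a single cache: hits in L1, plus L2 hits on the
# fraction that misses L1.
overall_hit_rate = h1 + h2 * (1 - h1)
```

The computed CPI is 2.3 and the overall hit rate is 0.99, matching the solution above.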
Cache Memory Mapping
• The transformation of data from main memory to cache memory is
referred to as the mapping process. There are three types of mapping:
– Associative mapping
– Direct mapping
– Set-associative mapping
Associative mapping
• The fastest and most flexible cache organization uses an associative
memory
• The associative memory stores both the address and the data of the
memory word
• This permits any location in the cache to store any word from main
memory
• It is also called content-addressable memory

In the example, the 15-bit address value is shown as a five-digit octal
number and its corresponding 12-bit word as a four-digit octal number.
• A CPU address of 15 bits is placed in the argument register and the
associative memory is searched for a matching address
• If the address is found, the corresponding 12-bit data word is read
and sent to the CPU
• If not, main memory is accessed for the word
• If the cache is full, an address-data pair must be displaced, by a
replacement algorithm, to make room for a pair that is needed and not
presently in the cache
Disadvantages of Associative Mapping
• To find a word we have to search all the locations in the cache (time
consuming)
• When the cache is full, deciding which old pair to replace with a new
pair from main memory is a difficult decision
• Because both the word and its address are stored, it takes more space
to store fewer words
Direct mapping

 The n-bit memory address is divided into two fields: k bits for the
index and n - k bits for the tag field
 The number of bits in the index field is equal to the number of
address bits required to access the cache

Disadvantages: The hit ratio can drop considerably if two or more words
whose addresses have the same index but different tags are accessed
repeatedly, e.g. addresses 00777, 01777, 02777 alternately map to cache
location 777, with tags 00, 01, 02 respectively.
Less data can be stored in the cache.
BLOCK Concept ( CACHE LINE )

 Main memory and cache memory can each be divided into a collection of
sets, where each set contains the same number of words. Each set is
called a block / line
 The block size for main memory and cache memory is the same
 Mapping into cache memory is based on the memory address
Direct mapping (continued)

 For every miss, an entire block of 8 words must be transferred from
main memory to the cache
 A larger block improves the hit ratio
Disadvantages: When we access data with the same index but different
tags, only the block with one particular tag can be present in a cache
location; to access data with the same index but another tag, the
previous block must be replaced with the new block. Blocks with the
same index but different tags cannot be stored at the same time.
Accessing a Direct-Mapped Cache
Example 1
Show cache addressing for a byte-addressable memory with 32-bit
addresses. Cache line width W = 16 B. Cache size L = 4096 lines (64 KB).
Solution
Byte offset in line is log2(16) = 4 bits. Cache line index is
log2(4096) = 12 bits. This leaves 32 - 12 - 4 = 16 bits for the tag.

32-bit address = | 16-bit line tag | 12-bit line index in cache | 4-bit byte offset in line |

Components of the 32-bit address in an example direct-mapped cache with
byte addressing.
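The bit slicing in Example 1 can be written out directly (16-bit tag, 12-bit index, 4-bit offset):

```python
def split_direct_mapped(addr: int):
    """Split a 32-bit byte address for a direct-mapped cache with
    16 B lines (4 offset bits) and 4096 lines (12 index bits)."""
    offset = addr & 0xF           # low 4 bits: byte within the line
    index  = (addr >> 4) & 0xFFF  # next 12 bits: cache line index
    tag    = addr >> 16           # remaining 16 bits: line tag
    return tag, index, offset
```

Two addresses with the same index but different tags (e.g. 0x00000010 and 0x00010010, which both map to line 1) compete for the same cache line, which is exactly the conflict scenario described under direct mapping's disadvantages.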
Set-Associative Cache Memory

• In the example, each index address refers to two data words and their
associated tags (a 2-way set-associative cache)
• Each tag requires six bits and each data word has 12 bits, so the
stored word length is 2 * (6 + 12) = 36 bits
• Set-associative mapping is an improvement over direct mapping in that
each index of the cache can store two or more words of memory under the
same index address

Disadvantages: Requires more complex comparison logic as the set size
increases
Cache Initialization with Valid Bits
• After initialization (after power-on), the cache is considered to be
empty
• But in effect it contains some non-valid data
• An extra bit (field) for each word / block in the cache, called the
"VALID BIT", indicates whether or not the word/block contains valid data
• Initially the valid bit = 0
• When data is first loaded from memory into the cache, the valid bit
is set to 1, and it stays set unless the cache has to be initialized
again
• A word in the cache is not replaced by another word unless its valid
bit is set to 1 and a tag mismatch occurs
• If a valid bit happens to be 0, the new word automatically replaces
the invalid data
Accessing a Set-Associative Cache
Example 18.5
Show the cache addressing scheme for a byte-addressable memory with
32-bit addresses. Cache line width 2^W = 16 B. Set size 2^S = 2 lines.
Cache size 2^L = 4096 lines (64 KB).
Solution
Byte offset in line is log2(16) = 4 bits. Cache set index is
log2(4096 / 2) = 11 bits. This leaves 32 - 11 - 4 = 17 bits for the tag.

32-bit address = | 17-bit line tag | 11-bit set index in cache | 4-bit byte offset in line |

The set index is used to read out the two candidate items of the set
and their control info.

Components of the 32-bit address in an example two-way set-associative
cache.
Cache Address Mapping
Example 18.6
A 64 KB four-way set-associative cache is byte-addressable and contains
32 B lines. Memory addresses are 32 bits wide.
a. How wide are the tags in this cache?
b. Which main memory addresses are mapped to set number 5?

Solution
a. Address (32 b) = 5 b byte offset + 9 b set index + 18 b tag
b. Addresses whose 9-bit set index equals 5. These are of the general
form 2^14 * a + 2^5 * 5 + b (0 <= b < 32); e.g., 160-191,
16544-16575, ...

32-bit address = | 18-bit tag | 9-bit set index | 5-bit offset |
Tag width = 32 - 5 - 9 = 18. Set size = 4 * 32 B = 128 B.
Number of sets = 2^16 / 2^7 = 2^9. Line width = 32 B = 2^5 B.
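Part (b) of Example 18.6 can be checked by extracting the 9-bit set index (the bits just above the 5-bit byte offset) and testing the listed address ranges:

```python
def set_index(addr: int) -> int:
    """9-bit set index of a 32-bit address: skip the 5 offset bits (32 B lines),
    then keep 9 bits (512 sets)."""
    return (addr >> 5) & 0x1FF

# Addresses of the form 2**14 * a + 2**5 * 5 + b (0 <= b < 32) map to set 5,
# e.g. 160-191 (a = 0) and 16544-16575 (a = 1).
```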
VIRTUAL MEMORY
Virtual memory is a concept used in some large computer systems that
permits the user to construct programs as though a large memory space
were available, even though the physical main memory is smaller; the
illusion is maintained with the help of auxiliary memory.

Why virtual memory is required

 Main memory space is not sufficient to run large programs
 The physical main memory size is kept small to reduce cost, even
though the processor has a large logical address space

 Virtual address space: the addresses used by a programmer. The set of
virtual addresses is called the ADDRESS SPACE.
 Physical address space: the addresses in main memory. The set of
physical addresses is called the MEMORY SPACE.
A system with the virtual address concept can have a larger address
space than its memory space. Without the virtual concept, the two are
identical.
PAGE & BLOCK

• BLOCK (page frame): physical main memory is broken into groups of
equal size, where each group has the same number of words
• PAGE: auxiliary memory is also broken into groups of equal size (the
number of words is the same as in a block)
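Because pages and blocks share one fixed size, a virtual address splits cleanly into a page number and an offset within the page. A minimal sketch, assuming a 1024-word page size (the page size here is an illustrative assumption, not taken from the text):

```python
PAGE_SIZE = 1024  # assumed page size in words (illustrative only)

def split_virtual_address(va: int):
    """Split a virtual address into (page number, offset within the page)."""
    return va // PAGE_SIZE, va % PAGE_SIZE
```

The page number is what the address-translation mechanism maps to a block (page frame) of main memory; the offset is used unchanged within the frame.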
