Week 08 Lecture 01-02 SWE-212T CO&A Batch 2020F SED Final

SWE-212T COMPUTER ORGANIZATION & ARCHITECTURE

Instructor:
Engr. Anila Saghir

Software Engineering Department

Lecture 15 and 16



CLO-01
Explain the fundamental organization of computer systems (CPU, memory unit,
and input/output) and the relationships between their main components.
[C2 (Understanding)]



Chapter #4
Cache Memory
Chapter 4 introduces the internal memory elements of a computer. To begin, the
first section examines the key characteristics of computer memories. The remainder
of the chapter examines an essential element of all modern computer systems:
cache memory.



Introduction
• Although seemingly simple in concept, computer memory exhibits perhaps
the widest range of type, technology, organization, performance, and cost of
any feature of a computer system.
• No single technology is optimal in satisfying the memory requirements for a
computer system.
• As a consequence, the typical computer system is equipped with a hierarchy
of memory subsystems,
• some internal to the system (directly accessible by the processor)
• some external (accessible by the processor via an I/O module).



Computer Memory System Overview
• Characteristics of Memory Systems:
• The complex subject of computer memory is made more
manageable if we classify memory systems according to their key
characteristics.
• The most important of these are listed in Table 4.1.





Characteristics of Memory Systems
• Location
• Capacity
• Unit of transfer
• Access method
• Performance
• Physical type
• Physical characteristics
• Organisation



Location
• The term location refers to whether memory is internal or
external to the computer.
• Internal memory is often equated with main memory, but there
are other forms of internal memory.
• The processor requires its own local memory, in the form of registers
• Further, as we will see, the control unit portion of the processor may also
require its own internal memory.
• Cache is another form of internal memory.
• External memory consists of
• peripheral storage devices, such as disk and tape, that are accessible to
the processor via I/O controllers.
Capacity
• An obvious characteristic of memory is its capacity.
• For internal memory, this is typically expressed in terms of bytes
(1 byte = 8 bits) or words.
• Common word lengths are 8, 16, and 32 bits.
• External memory capacity is typically expressed in terms of bytes.



Unit of transfer
• A related concept is the unit of transfer.
• For internal memory, the unit of transfer is equal to the number of
electrical lines into and out of the memory module.
• Usually governed by data bus width
• This may be equal to the word length, but is often larger, such as 64, 128,
or 256 bits.
• For main memory, this is the number of bits read out of or written
into memory at a time.
• The unit of transfer need not equal a word or an addressable unit.
• For external memory, data are often transferred in much larger units than
a word, and these are referred to as blocks.
Access method
• Another distinction among memory types is the method of
accessing units of data. These include the following:
• Sequential
• Start at the beginning and read through in order
• Access time depends on location of data and previous location
• e.g. tape
• Direct
• Individual blocks have unique address
• Access is by jumping to vicinity plus sequential search
• Access time depends on location and previous location
• e.g. disk
• Random
• Individual addresses identify locations exactly
• Access time is independent of location or previous access
• e.g. RAM
• Associative
• Data is located by a comparison with contents of a portion of the store
• Access time is independent of location or previous access
• e.g. cache



Performance
• Access time
• Time between presenting the address and getting the valid data
• Memory Cycle time
• Time may be required for the memory to “recover” before next access
• Cycle time is access + recovery
• Transfer Rate
• Rate at which data can be moved
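
As a rough numerical illustration of these measures (a sketch of my own, not from the slides, with assumed access and recovery times), the snippet below computes the cycle time and the approximate transfer rate of a random-access memory, using the common approximation that the transfer rate is 1 / cycle time:

    #include <stdio.h>

    int main(void) {
        /* Assumed, illustrative figures -- not taken from the lecture.          */
        double access_time_ns   = 10.0;  /* address presented -> data valid      */
        double recovery_time_ns = 5.0;   /* time needed before the next access   */

        /* Cycle time = access time + recovery time. */
        double cycle_time_ns = access_time_ns + recovery_time_ns;

        /* For random-access memory, transfer rate ~= 1 / cycle time.
           With a 32-bit word moved per access this gives bits per second.       */
        double words_per_sec = 1.0e9 / cycle_time_ns;
        printf("cycle time: %.1f ns, transfer rate: %.2e words/s (%.2e bit/s)\n",
               cycle_time_ns, words_per_sec, words_per_sec * 32.0);
        return 0;
    }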



Physical Types
• Semiconductor
• RAM
• Magnetic
• Disk & Tape
• Optical
• CD & DVD
• Others
• Bubble
• Hologram



Physical Characteristics
• Decay
• Volatility
• Erasable
• Power consumption



Organisation
• Physical arrangement of bits to form words
• Not always obvious
• e.g. interleaved



The Memory Hierarchy
• The design constraints on a computer’s memory can be summed
up by three questions:
• How much?
• How fast?
• How expensive?
• How much: If the capacity is there, applications will likely be developed
to use it.
• How fast: To achieve greatest performance, the memory must be able to
keep up with the processor.
• That is, as the processor is executing instructions, we would not want it to have to
pause waiting for instructions or operands.



The Memory Hierarchy
• How Expensive: For a practical system, the cost of memory must be
reasonable in relationship to other components.
• As might be expected, there is a trade-off among the three key
characteristics of memory: capacity, access time, and cost.
• A variety of technologies are used to implement memory systems,
and across this spectrum of technologies, the following
relationships hold:
• Faster access time, greater cost per bit;
• Greater capacity, smaller cost per bit;
• Greater capacity, slower access time.



The Memory Hierarchy
• The way out of this dilemma is not to rely on a
single memory component or technology, but to
employ a memory hierarchy.
• A typical hierarchy is illustrated in Figure 4.1.
• As one goes down the hierarchy, the following
occur:
• a. Decreasing cost per bit;
• b. Increasing capacity;
• c. Increasing access time;
• d. Decreasing frequency of access of the memory by
the processor.
Cache Memory
• Cache memory is designed to combine the memory access time of
• small, expensive, high-speed memory
with the
• large capacity of less expensive, lower-speed memory.
• Cache is:
• Small amount of fast memory
• Sits between normal main memory and CPU
• May be located on CPU chip or module



Cache Memory Principle
• There is a relatively large and slow main memory together with a
smaller, faster cache memory.
• Cache Principle:
• The cache contains a copy of portions of main memory. When the
processor attempts to read a word of memory, a check is made to
determine if the word is in the cache.
• If so, the word is delivered to the processor.
• If not, a block of main memory, consisting of some fixed number of words,
is read into the cache and then the word is delivered to the processor.

• The concept is illustrated in Figure 4.3a.



Multiple Levels of Cache
• Figure 4.3b depicts the use of multiple levels of cache.
• The L2 cache is slower and typically larger than the L1 cache, and
the L3 cache is slower and typically larger than the L2 cache.
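
One common way to see why the extra levels pay off is to estimate the average (effective) access time; the sketch below is my own illustration with assumed hit rates and latencies (none of these figures come from the slides), using the usual model in which each lower level is consulted only on a miss in the level above it:

    #include <stdio.h>

    int main(void) {
        /* Assumed, illustrative numbers -- not from the lecture slides.  */
        double t_l1 = 1.0, t_l2 = 4.0, t_l3 = 12.0, t_mem = 100.0;  /* ns        */
        double h_l1 = 0.90, h_l2 = 0.95, h_l3 = 0.98;               /* hit rates */

        /* Each level's latency is paid only when all faster levels miss. */
        double t_eff = t_l1
                     + (1.0 - h_l1) * (t_l2
                     + (1.0 - h_l2) * (t_l3
                     + (1.0 - h_l3) * t_mem));

        printf("effective access time: %.2f ns\n", t_eff);
        return 0;
    }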



Cache/Main Memory Structure



Cache operation – overview
• CPU requests contents of memory location
• Check cache for this data
• If present, get from cache (fast)
• If not present, read required block from main memory to cache
• Then deliver from cache to CPU
• Cache includes tags to identify which block of main memory is in
each cache slot (a minimal sketch of this lookup follows)
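
The following is a minimal sketch of that read sequence for a direct-mapped cache, written as an illustration rather than a definitive implementation (the names CACHE_LINES, BLOCK_WORDS and main_memory, and all sizes, are assumptions of mine; the slides themselves present the sequence as a flowchart on the next slide):

    #include <stdint.h>
    #include <string.h>

    #define CACHE_LINES 1024           /* assumed number of cache lines       */
    #define BLOCK_WORDS 4              /* assumed words per block (line)      */
    #define MEM_WORDS   (1u << 20)     /* assumed main memory size in words   */

    typedef struct {
        int      valid;                /* does this line hold a block?        */
        uint32_t tag;                  /* which main-memory block it holds    */
        uint32_t data[BLOCK_WORDS];    /* cached copy of the block            */
    } cache_line_t;

    static cache_line_t cache[CACHE_LINES];
    static uint32_t     main_memory[MEM_WORDS];

    uint32_t read_word(uint32_t address)
    {
        uint32_t word  = address % BLOCK_WORDS;   /* word within the block    */
        uint32_t block = address / BLOCK_WORDS;   /* main-memory block number */
        uint32_t line  = block % CACHE_LINES;     /* the one candidate line   */
        uint32_t tag   = block / CACHE_LINES;     /* identifies the block     */

        cache_line_t *l = &cache[line];

        /* Hit: line is valid and its tag matches -> deliver from cache (fast). */
        if (l->valid && l->tag == tag)
            return l->data[word];

        /* Miss: read the required block from main memory into the cache line, */
        memcpy(l->data, &main_memory[block * BLOCK_WORDS],
               BLOCK_WORDS * sizeof(uint32_t));
        l->valid = 1;
        l->tag   = tag;

        /* then deliver the requested word from the cache to the CPU.          */
        return l->data[word];
    }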



Cache Read Operation - Flowchart



Elements of Cache Design
• Addressing
• Size
• Mapping Function
• Replacement Algorithm
• Write Policy
• Block Size
• Number of Caches



Cache Addressing
• Where does cache sit?
• Between processor and virtual memory management unit
• Between MMU and main memory
• Logical cache (virtual cache) stores data using virtual addresses
• Processor accesses the cache directly, without going through the MMU
• Cache access is faster, since it occurs before MMU address translation
• Virtual addresses use the same address space for different applications
• Must flush cache on each context switch
• Physical cache stores data using main memory physical addresses



Size does matter
• Cost
• More cache is expensive
• Speed
• More cache is faster (up to a point)
• Checking cache for data takes time



Typical Cache Organization



Mapping Function (Cache Mapping)
• Because there are fewer cache lines than main memory blocks, an
algorithm is needed for mapping main memory blocks into cache
lines. Further, a means is needed for determining which main
memory block currently occupies a cache line.
• The choice of the mapping function dictates how the cache is
organized.
• Three techniques can be used:
• Direct
• Associative
• Set-associative





Direct Mapping
• Each block of main memory maps to only one cache line
• i.e. if a block is in cache, it must be in one specific place
• Address is in two parts
• Least Significant w bits identify unique word
• Most Significant s bits specify one memory block
• The MSBs are split into a cache line field of r bits and a tag of s-r bits
(the most significant bits); see the sketch below
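
As a concrete illustration of the split (my own sketch; the 2-bit word, 14-bit line and 8-bit tag widths are assumed example values for a 24-bit address, not something stated on this slide):

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed field widths for a 24-bit address: w = 2, r = 14, so the tag
       is s - r = 22 - 14 = 8 bits. Illustrative only.                       */
    #define W_BITS 2                   /* word within block                  */
    #define R_BITS 14                  /* cache line number                  */

    int main(void) {
        uint32_t address = 0x16339C;   /* any 24-bit address                 */

        uint32_t word = address & ((1u << W_BITS) - 1);
        uint32_t line = (address >> W_BITS) & ((1u << R_BITS) - 1);
        uint32_t tag  = address >> (W_BITS + R_BITS);

        /* Each main-memory block maps to exactly one line: line = block mod 2^r. */
        printf("tag=%02X line=%04X word=%X\n", tag, line, word);
        return 0;
    }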



Direct Mapping Summary



Direct Mapping pros & cons
• Simple
• Inexpensive
• Fixed location for given block
• If a program accesses 2 blocks that map to the same line repeatedly, cache
misses are very high
Associative Mapping
• A main memory block can load into any line of cache
• Memory address is interpreted as tag and word
• Tag uniquely identifies block of memory
• Every line’s tag is examined for a match
• Cache searching gets expensive
Associative Mapping from Cache to Main Memory

Fully Associative Cache Organization

Associative Mapping Example
Associative Mapping Address Structure

Tag: 22 bits | Word: 2 bits

• 22 bit tag stored with each 32 bit block of data
• Compare tag field with tag entry in cache to check for hit
• Least significant 2 bits of address identify which byte (addressable unit)
is required from the 32 bit data block
• e.g. (see the sketch below)
• Address   Tag      Data       Cache line
• FFFFFC    3FFFFF   24682468   3FFF
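
A minimal sketch of the same split for associative mapping (mine, assuming the 24-bit address and the 22-bit tag / 2-bit word structure above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t address = 0xFFFFFC;      /* the 24-bit address from the example */

        uint32_t word = address & 0x3;    /* low 2 bits: byte within the block   */
        uint32_t tag  = address >> 2;     /* remaining 22 bits form the tag      */

        /* In a fully associative cache this tag is compared against the tag
           stored in every line, since any line may hold any block.             */
        printf("tag=%06X word=%X\n", tag, word);   /* prints tag=3FFFFF word=0   */
        return 0;
    }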
Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = undetermined (any block can go into any line)
• Size of tag = s bits
Set Associative Mapping
• Cache is divided into a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given set
• e.g. Block B can be in any line of set i
• e.g. 2 lines per set
• 2 way associative mapping
• A given block can be in one of 2 lines in only one set
Set Associative Mapping Example
• 13 bit set number
• Set number = block number in main memory modulo 2^13
• 000000, 008000, 010000, 018000 … map to the same set (checked in the
sketch below)
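
A quick check of that claim (my own sketch, assuming 4-byte blocks so the block number is the address divided by 4, and a 13-bit set field):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Addresses that should all fall into the same set. */
        uint32_t addresses[] = { 0x000000, 0x008000, 0x010000, 0x018000 };

        for (int i = 0; i < 4; i++) {
            uint32_t block = addresses[i] / 4;      /* block number (4-byte blocks) */
            uint32_t set   = block % (1u << 13);    /* set = block mod 2^13         */
            printf("address %06X -> set %04X\n", addresses[i], set);
        }
        return 0;                                   /* every address prints set 0000 */
    }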
Mapping From Main Memory to Cache: v Associative

Mapping From Main Memory to Cache: k-way Associative

K-Way Set Associative Cache Organization
Set Associative Mapping Address Structure

Tag: 9 bits | Set: 13 bits | Word: 2 bits

• Use set field to determine which cache set to look in
• Compare tag field to see if we have a hit
• e.g. (see the sketch below)
• Address    Tag   Data       Set number
• 1FF 7FFC   1FF   12345678   1FFF
• 001 7FFC   001   11223344   1FFF
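
A small sketch (mine, assuming the 9/13/2-bit split above on a 24-bit address) showing how the two example addresses land in the same set but carry different tags:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* The two example addresses written as full 24-bit values:
           tag 1FF with lower bits 7FFC, and tag 001 with lower bits 7FFC.   */
        uint32_t addresses[] = { 0xFFFFFC, 0x00FFFC };

        for (int i = 0; i < 2; i++) {
            uint32_t a    = addresses[i];
            uint32_t word = a & 0x3;              /* low 2 bits               */
            uint32_t set  = (a >> 2) & 0x1FFF;    /* next 13 bits: set number */
            uint32_t tag  = a >> 15;              /* top 9 bits: tag          */
            printf("address %06X -> tag %03X set %04X word %X\n", a, tag, set, word);
        }
        return 0;   /* both map to set 1FFF, with tags 1FF and 001 */
    }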
Two Way Set Associative
Mapping Example
Set Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in set = k
• Number of sets = v = 2^d
• Number of lines in cache = kv = k * 2^d
• Size of tag = (s - d) bits
(a numeric check with the example figures follows)
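
Plugging the running example's figures into these formulas (a sketch with assumed values: 24-bit addresses, w = 2, d = 13, k = 2):

    #include <stdio.h>

    int main(void) {
        /* Assumed example parameters: 24-bit addresses, 4-byte blocks,
           13-bit set field, 2-way set associative.                      */
        int s_plus_w = 24, w = 2, d = 13, k = 2;
        int s = s_plus_w - w;

        long long blocks_in_memory = 1LL << s;     /* 2^s blocks           */
        long long sets             = 1LL << d;     /* v = 2^d sets         */
        long long lines_in_cache   = k * sets;     /* k * 2^d lines        */
        int       tag_bits         = s - d;        /* (s - d)-bit tag      */

        printf("blocks in main memory: %lld\n", blocks_in_memory);
        printf("sets: %lld, lines in cache: %lld, tag bits: %d\n",
               sets, lines_in_cache, tag_bits);
        return 0;                                  /* tag bits print as 9   */
    }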
Direct and Set Associative Cache Performance Differences

• Significant up to at least 64 kB for 2-way
• The difference between 2-way and 4-way at 4 kB is much less than the
difference between going from 4 kB to 8 kB
• Cache complexity increases with associativity
• Not justified against simply increasing the cache to 8 kB or 16 kB
• Above 32 kB, greater associativity gives no improvement
• (simulation results)
