COA Unit 4 Computer Memory System RRP

The document outlines the conventions for organizing slides on computer organization and architecture, focusing on memory systems. It covers characteristics of memory, memory hierarchy, cache memory principles, and requirements for memory management systems. Additionally, it discusses various memory types, access methods, and cache design elements, including mapping functions and performance parameters.


• Conventions to be followed for the slides
• RED: Topics (like 1)
• BLUE: Sub-topics (like 1.1)
• GREEN: Mini-topics (like 1.1.1)
• BROWN: textual data

Computer Memory System


• By: Dr. Rashmi Patil
Contents
• Characteristics of memory system
• The memory hierarchy
• Cache Memory-
• Cache memory principles
• Elements of cache design-
• Cache address and Size
• Mapping functions
• Replacement algorithms
• Write policy
• Line size
• Number of caches
• One level and two level cache
• Performance characteristics of two-level cache - locality & operations.
• Case Study-
• Pentium IV cache organization

• Internal Memory-
• Semiconductor main memory
• Advanced DRAM organization

• External Memory-
• Hard Disk organization
• RAID- level 1 to level 6.

Requirements of Memory Management System
• Memory distribution (allocation): The system must be able to distribute memory to processes as needed, and it must ensure that memory is allocated as effectively as possible in order to reduce fragmentation.
• Memory security (protection): The system must prevent other processes from improperly accessing the memory allotted to each process, and it should ensure that processes are unable to alter memory that does not belong to them.
• Deallocating memory: The system must be able to release memory that is no longer required by a running process, restoring the freed RAM to the system for reuse.
• Memory sharing: The system needs to permit processes to share memory.
• Virtual memory: The system must be capable of offering virtual memory, which enables programs to access more memory than is physically available. This is accomplished by swapping data between RAM and the hard drive.
• Memory fragmentation: Fragmentation can happen when memory is frequently allocated and deallocated; the system should avoid this, since fragmentation results in wasteful memory usage and can slow the system down.
• Memory mapping: The system should support memory mapping, which permits files to be mapped into memory. As data can then be read and written directly through memory, this may speed up file I/O (a small sketch follows this list).
• Memory leaks: Memory leaks are caused when a process fails to deallocate memory that it no longer requires. The system should be able to identify and prevent memory leaks.
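
To make the memory-mapping point concrete, here is a minimal sketch of file-backed memory mapping using the POSIX mmap() call. The file name "data.bin" and the read-only mapping are illustrative assumptions, not part of the slides.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDONLY);                 /* file to be mapped (illustrative name) */
    if (fd < 0) { perror("open"); return 1; }

    struct stat sb;
    if (fstat(fd, &sb) < 0) { perror("fstat"); close(fd); return 1; }

    /* Map the whole file read-only into the process's address space. */
    char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* File contents can now be read through ordinary memory accesses,
       without explicit read() system calls. */
    if (sb.st_size > 0) putchar(p[0]);

    munmap(p, sb.st_size);
    close(fd);
    return 0;
}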
Characteristics of Memory System

• The word memory originates from the Latin ‘memor’, meaning mindful or remembering.
• The term Location refers to whether
memory is Internal or External to the
computer.
• An obvious characteristic of memory is its
Capacity.
• For Internal memory, this is typically
expressed in terms of bytes (1 byte = 8 bits)
or words. Common word lengths are 8, 16,
and 32 bits.
• External memory capacity is typically expressed in terms of Bytes, Megabytes, Gigabytes, or more.
• A related concept is the unit of transfer. For
internal memory, the unit of transfer is equal
to the number of data lines into and out of the
memory module.
• Word: The “natural” unit of organization of memory. The size of a word is typically equal to the number of bits used to represent an integer and to the instruction length. E.g., the 8-bit Intel 8085 uses an 8-bit word.

• Addressable units: In some systems the addressable unit is the word; in many it is the byte. The relationship between the length A of an address in bits and the number N of addressable units is 2^A = N. E.g., the 8086 has 20 address bits, so it can address 2^20 bytes = 1 MB (see the short sketch after this list).

• Unit of transfer: For main memory, this is the number of bits read out of or written into memory at a time.
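
A quick check of the 2^A = N arithmetic above, as a tiny C sketch; the 20-bit address width is just the 8086 example from the slide:

#include <stdio.h>

int main(void) {
    unsigned address_bits = 20;                        /* A: the 8086 has 20 address lines  */
    unsigned long long units = 1ULL << address_bits;   /* N = 2^A addressable units (bytes) */
    printf("%u address bits -> %llu addressable bytes (1 MB)\n", address_bits, units);
    return 0;
}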
• Another distinction among memory types is the
method of accessing.
• Sequential access: Organized into units of data,
called records. Access must be made in a specific
Linear Sequence. Example: Tapes
• Direct access: Involves a Shared Read–write Mechanism; individual blocks have a unique address based on physical location. Example: Disk units
• Random Access: Any location can be selected at Random and Directly addressed and accessed; each location has its own physically wired-in addressing mechanism. Example: Main memory and some cache
systems.
• Associative access: Is a random access type of
memory that enables one to make a comparison of
desired bit locations within a word for a specified
match, and to do this for all words simultaneously.
Example: Cache memories may employ associative
access.
• Three Performance parameters are used.
• Access Time (Latency): For random-access memory, this is the time it takes to perform a read or write operation. For non-random-access memory, it is the time it takes to position the read–write mechanism at the desired location.

• Memory Cycle Time: This concept is primarily applied to random-access memory and consists of the access time plus any additional time required before a second access can commence (e.g., propagation delays, or time for transients to die out on signal lines). Note that memory cycle time is concerned with the system bus, not the processor.

• Transfer Rate: This is the rate at which data can be transferred into or out of a memory unit. For random-access memory, it is equal to 1/(cycle time).
• For non-random-access memory, the following relationship holds:

TN = TA + n/R
• where
• TN = average time to read or write n bits
• TA = average access time
• n = number of bits
• R = transfer rate, in bits per second
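
As a quick numeric illustration of this relationship (the values below are made up for the example):

#include <stdio.h>

int main(void) {
    double t_a = 0.0001;       /* TA: assumed average access (positioning) time, 0.1 ms */
    double n   = 4096.0;       /* n:  number of bits to read                            */
    double r   = 1.0e6;        /* R:  assumed transfer rate, 10^6 bits per second       */

    double t_n = t_a + n / r;  /* TN = TA + n/R                                         */
    printf("TN = %.6f s (about 4.2 ms)\n", t_n);
    return 0;
}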

• A variety of Physical Types of memory have been employed.
• The most common today are semiconductor memory, magnetic surface memory (used for disk and tape), and optical and magneto-optical memory.
• Several Physical Characteristics of data
storage are important.

• In a Volatile Memory, information decays naturally or is lost when electrical power is switched off.

• In a Nonvolatile Memory, information once recorded remains without deterioration until deliberately changed; no electrical power is needed to retain the information.

• For random-access memory, the
Organization is a key design issue.

• In this context, organization refers to the physical arrangement of bits in memory to form words.

The Memory Hierarchy
• The design constraints on a computer’s
memory can be summed up by three
questions:
• How much?
• How fast?
• How expensive?
• So there are three key characteristics of
memory:
• Capacity
• Access Time
• Cost.
• According to these characteristics of memory, a variety of technologies are used to implement memory systems.

• Across this spectrum of technologies, the following relationships hold:

• Faster access time, greater cost per bit
• Greater capacity, smaller cost per bit
• Greater capacity, slower access time

Fig: The Memory Hierarchy
• As one goes down the hierarchy, the
following occur:

• Decreasing cost per bit.

• Increasing capacity.

• Increasing access time.

• Decreasing frequency of access of the memory by the processor.

Cache Memory
• Cache Memory Principles

• Small amount of fast memory.

• Sits between normal main memory and CPU.

• May be located on CPU chip or module.

• Cache Organization

• Example: Smart Cache of i-Series…
(core-i3)
• L1 cache: 32 KB instruction cache + 32 KB data cache

• L2 cache: 256 KB (for instructions and data, per core)

• L3 cache: 8 MB (shared among the cores)

• Cache/Main Memory Structure

• Cache Operation – Overview
• CPU requests contents of memory location.
• Check cache for this data.
• If present, get from cache (fast).
• If not present, read required block from main
memory to cache.
• Then deliver from cache to CPU.
• Cache includes tags to identify which block of
main memory is in each cache slot.
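
This hit/miss flow can be sketched in C. The direct-mapped organization, the sizes, and the names below are illustrative assumptions for the sketch (mapping functions are covered later in the slides), not any particular processor's design:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MEM_SIZE   (1u << 16)     /* assumed 64 KB main memory, for the sketch   */
#define NUM_LINES  64             /* assumed number of cache lines (m)           */
#define BLOCK_SIZE 16             /* assumed block size in bytes                 */

static uint8_t main_memory[MEM_SIZE];

typedef struct {
    int      valid;               /* does this line currently hold a block?      */
    uint32_t tag;                 /* identifies which main memory block it holds */
    uint8_t  data[BLOCK_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Read one byte through the cache: check for a hit, fetch the block on a miss. */
static uint8_t cache_read_byte(uint32_t addr)
{
    uint32_t offset = addr % BLOCK_SIZE;        /* byte within the block             */
    uint32_t block  = addr / BLOCK_SIZE;        /* main memory block number j        */
    uint32_t line   = block % NUM_LINES;        /* cache line i = j modulo m         */
    uint32_t tag    = block / NUM_LINES;        /* remaining high-order address bits */

    if (!cache[line].valid || cache[line].tag != tag) {   /* miss: load block first  */
        memcpy(cache[line].data, &main_memory[block * BLOCK_SIZE], BLOCK_SIZE);
        cache[line].tag   = tag;
        cache[line].valid = 1;
    }
    return cache[line].data[offset];            /* deliver the byte from the cache   */
}

int main(void)
{
    main_memory[0x1234] = 42;
    printf("first read:  %u\n", cache_read_byte(0x1234));   /* miss, then delivered  */
    printf("second read: %u\n", cache_read_byte(0x1234));   /* hit in the cache      */
    return 0;
}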

• Cache Read Operation - Flowchart

• Typical Cache Organization

• Elements of Cache Design

• Cache Addresses
• Two types of Addresses:
1. Logical Cache:
• When virtual addresses are used, the system
designer may choose to place the cache between the
processor and the MMU or between the MMU and
main memory. A Logical Cache, also known as a
Virtual Cache, stores data using Virtual
Addresses.

2. Physical Cache: A Physical Cache stores data using main memory Physical Addresses; the address is translated by the MMU before the cache is accessed. (With a logical cache, by contrast, the processor accesses the cache directly, without going through the MMU.)

• Cache Size
• We would like the size of the cache to satisfy two goals (see the small estimate sketched after this list):

• The size should be small enough that the overall average cost per bit is close to that of main memory alone.

• The size should be large enough that the overall average access time is close to that of the cache alone.
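
One way to see how both goals can be met is the usual effective-access-time estimate. The formula and the numbers below are a standard illustration, not taken from the slides:

#include <stdio.h>

int main(void) {
    double hit_ratio = 0.95;   /* assumed fraction of references satisfied by the cache */
    double t_cache   = 1.0;    /* assumed cache access time, ns                         */
    double t_memory  = 50.0;   /* assumed main memory access time, ns                   */

    /* Hits cost one cache access; misses cost a cache check plus a memory access. */
    double t_eff = hit_ratio * t_cache + (1.0 - hit_ratio) * (t_cache + t_memory);
    printf("effective access time = %.2f ns\n", t_eff);   /* 3.50 ns: close to the cache */
    return 0;
}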

• The larger the cache, the larger the number of gates involved in addressing the cache.

• The result is that large caches tend to be slightly slower than small ones, even when they are made of the same integrated circuit technology and put in the same place on the chip and circuit board.

• The available chip and board area may also limit cache size.
• Mapping Function

• As there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines.

• The choice of the mapping function dictates how the cache is organized. Three techniques can be used: Direct, Associative, and Set Associative.
1. Direct Mapping
• The simplest technique.
• Maps each block of main memory into
only one possible cache line.
• The mapping is expressed as

i = j modulo m
• where
• i = cache line number
• j = main memory block number
• m = number of lines in the cache

• Direct Mapping from Cache to Main
Memory

• Disadvantage
• Its main disadvantage is that there is a
fixed cache location for any given block.

• Thus, if a program happens to reference words repeatedly from two different blocks that map into the same line, then the blocks will be continually swapped in the cache, and the hit ratio will be low (a phenomenon known as Thrashing).

• Example

The effect of this mapping is that blocks of main memory are assigned to lines of the cache as follows:

Cache line 0: blocks 0, m, 2m, ...
Cache line 1: blocks 1, m+1, 2m+1, ...
...
Cache line m-1: blocks m-1, 2m-1, 3m-1, ...
2. Associative Mapping

• Associative mapping overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache.
• In this case, the cache control logic interprets
a memory address simply as a Tag and a
Word field. The Tag field uniquely
identifies a block of main memory.

Fully Associative Cache Organization

• Disadvantage
• The principal disadvantage of associative
mapping is the complex circuitry required to
examine the tags of all cache lines in
parallel.

• As a result, the cost of this circuitry is high.

3. Set-associative Mapping
• Set-associative mapping uses strengths of both
the direct and associative approaches while
reducing their disadvantages.
• In this case, the cache consists of a number of sets, each of which consists of a number of lines. The relationships are:

m = v * k
i = j modulo v
• i = cache set number
• j = main memory block number
• m = number of lines in the cache
• v = number of sets
• k = number of lines in each set
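
A small numeric sketch of these relationships, assuming a made-up 2-way set-associative cache:

#include <stdio.h>

int main(void) {
    unsigned k = 2;             /* lines per set (2-way set associative)   */
    unsigned v = 4;             /* number of sets                          */
    unsigned m = v * k;         /* total lines in the cache: m = v * k = 8 */

    unsigned j = 13;            /* an arbitrary main memory block number   */
    unsigned set = j % v;       /* i = j modulo v: block 13 maps to set 1  */

    printf("m = %u lines; block %u -> set %u (any of its %u lines)\n", m, j, set, k);
    return 0;
}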
• Currently the largest-capacity hard drive (SSD)

• Servers of different companies

Microsoft Google

• Servers of different companies

Facebook Microsoft Natick server

End of Unit-IV
Thank you!

