CEA201 Group 6


CEA201 - GROUP 6

Nguyễn Văn Thương
Nguyễn Xuân Ý
Trương Việt Hoàng
Lê Quang Minh Đà
Phan Thị Minh Phương

June

MEMORY HIERARCHY
An important part of Computer Architecture.

“THE ROLE OF MEMORY HIERARCHY IN MODERN COMPUTING”
CEA201 - GROUP 6

INTRODUCTION
Importance of memory hierarchy in computer systems:
Improves the balance between access time and cost
Speed of communication
Performance can be improved by reducing the number of levels required to access and manipulate data
GROUP 6

REGISTER MEMORY
A. Registers are a small and very fast memory used to speed up the processing of computer programs by providing direct access to the required values.
Characteristics: They are used to store and process important data and information during the execution of computer instructions and operations.

B. They help reduce access time to data and addresses, increase processing speed, and provide important information for process control and management.
REGISTER MEMORY
C. Benefits:
Fast access speed
High performance
Temporary information storage

Limitations:
Limited capacity
Limited accessibility
Data loss

Registers have great benefits in storing and processing data quickly and efficiently.
REGISTER MEMORY

D. Temporary value storage
Address and control
Management and storage of status information

For example, when accessing data from main memory, address registers are used to store the target memory address: before a read from a specific location is performed, the address of that location is placed in the address register.
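This flow can be sketched in code. The C snippet below is only an illustration (not part of the original slides): forming the pointer corresponds to placing the target address in an address register, and the dereference corresponds to the read that uses it.

```c
#include <stdio.h>

/* Illustrative sketch only: in compiled code, the pointer value below is
 * typically held in a CPU register (an address register) before the load. */
int main(void) {
    int data[4] = {10, 20, 30, 40};

    int *addr  = &data[2]; /* step 1: store the target memory address */
    int  value = *addr;    /* step 2: the read uses the stored address */

    printf("address %p holds %d\n", (void *)addr, value);
    return 0;
}
```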
CACHE MEMORY

A. Cache memory is a type of fast, near-CPU temporary memory used to store data that the CPU tends to access frequently. The main purpose of cache memory is to improve the performance of a computer system by reducing the time it takes to access data from main memory (RAM).

B. Cache has different levels (L1, L2, L3) based on distance to the CPU and size. Higher levels usually have larger sizes but slower access times. The purpose of using multiple cache levels is to take advantage of the principle of locality and increase data buffering capacity, which improves overall system performance.
CACHE MEMORY

C. Cache works on the principle of locality of reference: data and instructions that have been accessed tend to be accessed again within a short period of time. When the CPU requests access to a location in main memory, the cache checks whether the data exists in the cache. If the data is already cached (a hit), it is returned to the CPU without accessing main memory, minimizing access time. If the data does not exist in the cache (a miss), it is loaded from main memory into the cache for later access.
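To make the hit/miss behaviour concrete, here is a minimal C sketch of a hypothetical direct-mapped cache with 8 one-word lines (the sizes and the address trace are invented for illustration, not taken from the slides). A lookup is a hit when the indexed line already holds the requested block; otherwise the block is loaded and the access counts as a miss.

```c
#include <stdio.h>
#include <stdbool.h>

#define LINES 8   /* hypothetical: 8 cache lines, one word per line */

struct line { bool valid; unsigned tag; };
static struct line cache[LINES];

/* Returns true on a hit; on a miss, loads the block into the cache. */
static bool access_cache(unsigned address) {
    unsigned index = address % LINES;  /* which line the address maps to */
    unsigned tag   = address / LINES;  /* identifies the memory block    */
    if (cache[index].valid && cache[index].tag == tag)
        return true;                   /* hit: served without main memory    */
    cache[index].valid = true;         /* miss: fetch block from main memory */
    cache[index].tag   = tag;
    return false;
}

int main(void) {
    unsigned trace[] = {0, 1, 2, 0, 1, 2, 16, 0}; /* repeated addresses hit */
    for (int i = 0; i < 8; i++)
        printf("address %2u -> %s\n", trace[i],
               access_cache(trace[i]) ? "hit" : "miss");
    return 0;
}
```

The repeated accesses to addresses 0, 1, and 2 hit because of temporal locality, while address 16 maps to the same line as address 0 and evicts it, so the final access to 0 misses again.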

D. Cache memory has a significant positive impact on computer system performance. By increasing data access speed and reducing the load on main memory, it improves overall performance, enhances processing power, and speeds up CPU operations.
MAIN MEMORY (RAM)
Role
Main memory stores data and instructions that are currently being used by the CPU. It is
volatile, meaning that it loses its contents when the computer is turned off or restarted.
Data that needs to be saved permanently is stored on the hard drive or other types of
non-volatile storage devices.

Characteristics
Main memory is faster than other types of storage devices because it is directly
connected to the CPU through a high-speed bus. The amount of main memory a
computer has can greatly impact its performance. More memory allows the computer to
store more data and run more programs simultaneously, which can lead to faster
processing speeds.
TYPES OF RAM: SRAM AND DRAM

Static RAM (SRAM)
Static RAM (SRAM) is a type of RAM that uses flip-flops to store data. It is faster and
more expensive than DRAM, but it does not need to be constantly refreshed like DRAM
does.
SRAM is used in applications where speed is critical, such as cache memory in CPUs, or
in networking devices where fast access to data is important.

Dynamic RAM (DRAM)
Dynamic RAM (DRAM) is a type of RAM that uses capacitors to store data. It is slower and
cheaper than SRAM, but it needs to be constantly refreshed to maintain the data stored in
it. DRAM is used in applications where cost is a major factor, such as in personal
computers and mobile devices. It is also used in graphics cards and other high-performance
computing applications.
MAIN MEMORY (RAM)
Virtual Memory
Virtual memory is a technique that enables a computer to use more memory than it physically has. It does this
by temporarily transferring data from the RAM to the hard drive, freeing up space for other processes to run.
Paging
Paging is a memory management technique that divides memory into fixed-size pages. These pages are then used to
store data and programs. When a program needs more memory than is available in RAM, the operating system swaps
out the least recently used pages to the hard drive, freeing up space for new data.
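As a rough illustration of how paging translates addresses (a sketch with an assumed 4 KiB page size and a tiny hypothetical page table, not taken from the slides), the virtual address is split into a page number and an offset, and only the page number is remapped:

```c
#include <stdio.h>

#define PAGE_SIZE 4096u  /* assumed 4 KiB pages */

int main(void) {
    /* hypothetical page table: virtual page i lives in frame frame_of[i] */
    unsigned frame_of[4] = {7, 3, 0, 5};

    unsigned vaddr  = 0x1A2C;             /* example virtual address        */
    unsigned page   = vaddr / PAGE_SIZE;  /* upper bits: virtual page number */
    unsigned offset = vaddr % PAGE_SIZE;  /* lower bits: offset within page  */
    unsigned paddr  = frame_of[page] * PAGE_SIZE + offset;

    printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           vaddr, page, offset, paddr);
    return 0;
}
```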
Segmentation
Segmentation is a memory management technique that divides memory into logical segments based on the
program's requirements. Each segment contains a specific type of data, such as code or data.

D. Importance of main memory in balancing capacity and access speed

Main memory is important because it stores data and instructions that are currently being used by the CPU. It is much faster than other types of storage devices, such as hard drives or solid-state drives, because it is directly connected to the CPU through a high-speed bus. The amount of main memory a computer has can greatly impact its performance. More memory allows the computer to store more data and run more programs simultaneously, which can lead to faster processing speeds.
SECONDARY STORAGE

A. Definition and examples of secondary storage devices (hard drives, solid-state drives, etc.)
Secondary storage devices are non-volatile devices that hold data until it is deleted or overwritten. They are mainly used for the permanent and long-term storage of programs and data. Examples of secondary storage devices include hard disks, solid-state drives (SSDs), CDs, DVDs, pen/flash drives, and tape drives. Secondary storage is about two orders of magnitude cheaper than primary storage.

B. Role of secondary storage in long-term data storage and retrieval
Secondary storage devices are mainly used for the permanent and long-term storage of programs and data. They are much cheaper than primary storage devices and can hold much more data. Secondary storage devices are also non-volatile, meaning that they retain their contents even when the computer is turned off or restarted. This makes them ideal for long-term data storage and retrieval.
SECONDARY STORAGE

C. Comparison of secondary storage with primary memory in terms of capacity and access speed
Secondary storage devices are much cheaper than primary storage devices and can hold much more data. However, they are also much slower than primary storage devices. Primary storage devices are directly connected to the CPU through a high-speed bus, which allows for much faster access speeds. Secondary storage devices are typically connected to the computer through slower interfaces such as USB or SATA.

D. Hierarchical storage management and its relationship with memory hierarchy
Hierarchical storage management (HSM), also known as tiered storage, is a data storage and data management technique that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as solid-state drive arrays, are more expensive (per byte stored) than slower devices, such as hard disk drives, optical discs, and magnetic tape drives.
PERFORMANCE IMPLICATIONS AND OPTIMIZATION

A. Impact of memory hierarchy on system performance
Faster Access Speed
Reduced Latency
Increased Bandwidth
Lower Energy Consumption

B. Strategies for optimizing memory hierarchy: caching, prefetching, and memory hierarchy design

Optimizing the memory hierarchy involves utilizing caching, prefetching, and designing an efficient
memory structure. These strategies aim to reduce access latency, improve data availability, and
enhance overall system performance.
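As one concrete illustration of locality-aware design (our own example, not from the slides): the two functions below sum the same matrix, but the row-major loop touches consecutive addresses and reuses each cache line, while the column-major loop strides across rows and typically causes far more cache misses.

```c
#include <stdio.h>

#define N 1024
static double a[N][N];  /* C stores this array row by row */

double sum_row_major(void) {      /* cache-friendly: unit-stride accesses */
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

double sum_column_major(void) {   /* cache-hostile: jumps N*8 bytes per access */
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_column_major());
    return 0;
}
```

The row-major version also benefits from hardware prefetching, since the next cache line it needs is easy to predict.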
PERFORMANCE IMPLICATIONS AND OPTIMIZATION
C. Trade-offs between memory size, access speed, and cost
Memory systems involve trade-offs between size, access speed, and cost. Increasing memory size allows
for more data storage but can be more expensive. Faster access speed improves retrieval time but
tends to be costlier. Achieving an optimal balance requires designing a memory hierarchy with
different levels, considering the specific requirements and cost constraints of the system.

D. Case studies and real-world examples of memory hierarchy optimization
Real-world examples of memory hierarchy optimization include Intel's Smart Cache Technology, AMD's
Infinity Fabric, memory hierarchy optimization in graphics processing units (GPUs), memory optimization
in database systems, and memory hierarchy optimizations in high-performance computing (HPC)
systems.
These techniques involve multi-level caching, prefetching, interconnect technologies, and data
placement strategies to improve system performance.
VIII. Emerging Trends in Memory Hierarchy
A. Advancements in memory technologies: non-volatile memory, 3D stacking, etc.:
Advancements in memory technologies include the development of non-volatile memory (NVM), 3D
stacking, persistent memory, and emerging technologies like memristors. These advancements offer
benefits such as persistent storage, increased memory density, improved data transfer rates, and
the ability to combine speed and persistence. They have had a significant impact on data storage,
access speeds, power efficiency, and performance in various computing applications.
B. Integration of memory and storage technologies (storage-class memory):
Storage-class memory (SCM) integrates memory and storage technologies, providing high-performance, low-latency access
with persistence. It combines the advantages of volatile memory (DRAM) and non-volatile storage (NAND flash) to eliminate
the performance bottleneck caused by data movement. SCM types include PCM, ReRAM, MRAM, and 3D XPoint. SCM
offers low latency, persistence, high density, durability, and energy efficiency. Challenges include compatibility, data
consistency, and cost. SCM has the potential to enhance various applications, such as databases, in-memory computing,
analytics, and AI.

C. Potential future developments and their implications on memory hierarchy
Future developments in memory technologies could simplify the memory hierarchy by integrating SCM as the primary memory, introduce emerging memory technologies, create hybrid memory systems, enable in-memory computing, and drive memory-centric architectures. These advancements have the potential to improve performance, energy efficiency, scalability, and data handling capabilities, leading to innovation in various fields.
CONCLUSION

A. Recap of the role of memory hierarchy in modern computing

-There are four major storage levels: internal, main memory, on-line mass storage,
and off-line bulk storage. The role of the memory hierarchy is to improve the
efficiency of accessing data by optimizing the use of faster and smaller memory.

-Secondary storage provides online mass storage, while tertiary and offline
storage offer off-line bulk storage.

-This organization ensures that data is accessed through the most appropriate and
fastest memory at each level, resulting in faster processing and improved
computing performance.
B. Importance of understanding and optimizing memory hierarchy for performance

Understanding and optimizing the memory hierarchy is very important for increasing the performance of a computer system. When performing memory optimization, we need to ensure that data is stored at the right level of the hierarchy and is easily accessible. In the memory hierarchy, data is stored at different levels, from cache to RAM to the hard disk.
-Understanding and optimizing memory hierarchies is an important part of optimizing the performance of a computer system and plays an important role in many areas of computer science, including artificial intelligence, machine learning, and data science.

C. Final thoughts on the future of memory hierarchy in evolving computer systems

-As computing technology continues to rapidly evolve, the future of memory hierarchy in
computer systems holds many possibilities. One trend that is becoming increasingly important is
the use of non-volatile memory, which retains its data even without power and has the
potential to greatly enhance system performance.
-Another trend is the use of more complex memory hierarchies, which can include not only cache, RAM, and disk, but also new types of memory such as persistent memory and storage-class memory.
CEA201 - GROUP 6

THAT'S ALL, THANKS!
Thank you for participating.
