Module 5


MEMORY ORGANIZATION

Memory system design

Memory system design is crucial in computing systems, ensuring fast, efficient data storage, retrieval, and processing.

It involves understanding semiconductor memory technologies and the organization of memory within a computer system.

1. Semiconductor Memory Technologies

Semiconductor memory stores data and instructions, making it possible for smartphones, computers, medical equipment, and industrial automation to function.

The widespread use of semiconductor memory is fueled by its remarkable properties:

High Storage Density: Semiconductor memory can store a large amount of information in a small space.
Fast Access Time: Data can be retrieved rapidly, making it suitable for high-performance applications.
Semiconductor memories are essential components made from silicon chips that provide fast access and storage for data. They
vary in speed, cost, volatility, and density, leading to two main categories: volatile and non-volatile memory.
Volatile Memory: Loses data when power is turned off.
•Static RAM (SRAM): Uses flip-flops to store bits; faster and more expensive, used in cache memory. It doesn't
need to be refreshed but has low density.
•Dynamic RAM (DRAM): Uses capacitors to store bits; cheaper and denser than SRAM but slower and needs
periodic refreshing. Commonly used in main memory.
Non-Volatile Memory: Retains data without power.
•Read-Only Memory (ROM): Data is permanently programmed; used to store firmware.
•Flash Memory: Can be electrically erased and reprogrammed, offering higher flexibility than ROM. Used in
solid-state drives (SSDs), USB drives, and memory cards.
•Emerging Technologies: Ferroelectric RAM (FeRAM) and Magnetoresistive RAM (MRAM) combine non-volatility with RAM-like access speeds.
Advantages of Semiconductor Memory
High speed: Fast data retrieval from semiconductor memory enables responsive performance and smooth operation. Applications like gaming, real-time video, and online transactions all depend on this.
Low power consumption: Compared to other types of memory, such as magnetic storage, semiconductor memory is very energy efficient. This is important for laptops and mobile devices, as it extends battery life.
High storage density: Semiconductor memory can pack an enormous amount of information into a tiny space. Because of this, it is ideal for high-performance computing systems and for portable devices like smartphones and tablets where space is at a premium.
Scalability: Semiconductor memory technology can easily be scaled to meet the rising demands of computing. As our demand for data storage grows, the capacity of semiconductor memory chips increases in tandem.
Non-volatile (except for RAM): Non-volatile semiconductor memory, such as read-only memory (ROM) and flash memory, retains data even when the power is turned off. Because of this, it is excellent for storing long-term data like operating systems and firmware.
Disadvantages of Semiconductor Memory
Volatile (for RAM): When the power is turned off, data stored in traditional RAM is lost. This can be risky if you are working on something important and the power goes out unexpectedly.
Can be expensive: Compared to other types of storage, such as hard disk drives, high-performance or large-capacity semiconductor memory can be expensive.
Limited lifespan (for flash memory): Flash memory supports a limited number of write cycles before it wears out. As a result, flash memory devices will eventually require replacement.
Security issues: Semiconductor memory can be vulnerable to data breaches and hacking, because the data is stored electronically and can be accessed if the device is compromised.
Effect on the environment: The process of manufacturing semiconductor memory chips can be resource-intensive and harmful to the environment. However, efforts are being made to develop more environmentally friendly production methods.
Applications of Semiconductor Memory
Semiconductor memory is used in a wide variety of applications, including:

Digital cameras: Used for storing photographs and videos.
Smartphones: Used for storing applications, music, photos, and other valuable information.
Computers: Used for storing program instructions and working data.
USB drives: Used as portable data storage.
Solid-state drives (SSDs): Used for high-performance storage in computers.
MP3 players: Used to store music.
Memory Hierarchy
Types of Memory Hierarchy
This memory hierarchy design is divided into 2 main types:

External Memory or Secondary Memory: Comprising Magnetic Disk, Optical Disk, and Magnetic Tape, i.e. peripheral storage devices that are accessible by the processor via an I/O module.
Internal Memory or Primary Memory: Comprising Main Memory, Cache Memory, and CPU registers. This is directly accessible by the processor.

Memory hierarchy optimizes access times and costs in computer systems.


Memory Hierarchy Design
1. Registers

Registers are small, high-speed memory units located in the CPU. They are used to store the most frequently used data and instructions. Registers have
the fastest access time and the smallest storage capacity, typically ranging from 16 to 64 bits.

2. Cache Memory

Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used data and instructions that have been recently accessed
from the main memory. Cache memory is designed to minimize the time it takes to access data by providing the CPU with quick access to frequently
used data.

3. Main Memory

Main memory, also known as RAM (Random Access Memory), is the primary memory of a computer system. It has a larger storage capacity than cache memory, but it is slower. Main memory is used to store data and instructions that are currently in use by the CPU.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile memory unit that has a larger storage capacity than main memory. It is used to store data and instructions that are not currently in use by the CPU. Secondary storage has the slowest access time and is typically the least expensive type of memory in the memory hierarchy.

5. Magnetic Disk
Magnetic disks are circular plates fabricated from metal or plastic and coated with a magnetizable material. Magnetic disks operate at high speed inside the computer and are frequently used.
Random access
Faster access
Less storage capacity

6. Magnetic Tape
Magnetic tape is a plastic film coated with a magnetic recording material. It is generally used for data backup. Access time for magnetic tape is slower, because the tape must be wound to the required position before data can be read.
Sequential access
Slower access
High storage capacity
The memory hierarchy is a crucial concept in computer architecture that organizes memory into different levels based on speed, cost, and
capacity.

1. Performance Optimization
• Speed: The memory hierarchy allows for faster access to data. Higher-level memory (like cache) is much faster than lower-level memory (like main
memory and secondary storage). By placing frequently accessed data in faster memory, the system can significantly reduce access times.
• Reduced Latency: Accessing data from registers and cache is much quicker than accessing it from main memory or disk storage. This
minimizes latency, which is critical for performance in applications requiring rapid data processing.
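This latency argument can be quantified with the standard average memory access time (AMAT) formula. The numbers below are illustrative assumptions, not figures from the text:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: the hit time plus the penalty paid on misses."""
    return hit_time + miss_rate * miss_penalty

# Assumed numbers: 1 ns cache hit, 5% miss rate,
# 100 ns penalty to fetch the block from main memory.
print(amat(1.0, 0.05, 100.0))   # 6.0 ns on average
```

Even with a modest hit rate, the average access time stays far closer to cache speed than to main-memory speed, which is exactly why the hierarchy pays off.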
2. Cost-Effectiveness
• Cost Management: Faster memory technologies (e.g., SRAM) are more expensive to manufacture than slower ones (e.g., DRAM or
disk storage). By using a combination of different memory types, systems can maintain performance without excessive costs. This hierarchical structure
allows designers to use cheaper, slower memory for less frequently accessed data while keeping critical data in more expensive, faster memory.
• Scalability: Memory hierarchy allows for more scalable systems by enabling the use of large amounts of slower memory for bulk storage while still providing fast access to smaller amounts of high-speed memory.
3. Capacity Utilization
• By combining small amounts of fast, expensive memory with large amounts of slow, cheap storage, the hierarchy offers the capacity of its largest level while keeping average access times close to those of its fastest level.
Memory interleaving

It is a technique that divides memory into a number of modules such that successive words in the address space are placed in different modules.

Consecutive Words in a Module:

Figure-1: Consecutive Words in a Module

Assume 16 data words are to be distributed across four modules, where module 00 is Module 1, module 01 is Module 2, module 10 is Module 3, and module 11 is Module 4. The data to be transferred are 10, 20, 30, ..., 160.

In the figure above, Module 1 receives the data 10, then 20, 30, and finally 40. That is, data are added consecutively within a module until it reaches its maximum capacity.

The most significant bits (MSB) of the address select the module, and the least significant bits (LSB) give the address of the word within the module.

For example, to fetch the data 90 the processor issues the address 1000. The bits 10 indicate that the data is in module 10 (Module 3), and 00 is the address of 90 within that module. So,
Module 1 Contains Data : 10, 20, 30, 40
Module 2 Contains Data : 50, 60, 70, 80
Module 3 Contains Data : 90, 100, 110, 120
Module 4 Contains Data : 130, 140, 150, 160

Consecutive Words in Consecutive Modules

Figure-2: Consecutive Words in Consecutive Modules

Now again assume 16 data words are to be distributed across the four modules, but this time consecutive data are placed in consecutive modules. That is, the data 10 goes in Module 1, 20 in Module 2, and so on.

The least significant bits (LSB) of the address select the module, and the most significant bits (MSB) give the address of the word within the module.

For example, to fetch the data 90 the processor issues the address 1000. The bits 00 indicate that the data is in module 00 (Module 1), and 10 is the address of 90 within that module. That is,
Module 1 Contains Data : 10, 50, 90, 130
Module 2 Contains Data : 20, 60, 100, 140
Module 3 Contains Data : 30, 70, 110, 150
Module 4 Contains Data : 40, 80, 120, 160
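The two address splits above can be sketched in Python. The module layouts and the 4-bit example address come from the figures; the function names are ours:

```python
# The 16 data words from the figures: 10, 20, ..., 160, addressed with 4 bits.
DATA = [10 * (i + 1) for i in range(16)]

def split_high_order(addr):
    """Scheme 1 (consecutive words in a module): MSBs pick the module."""
    return addr >> 2, addr & 0b11            # (module, word within module)

def split_low_order(addr):
    """Scheme 2 (interleaving): LSBs pick the module."""
    return addr & 0b11, addr >> 2

# Module contents under each scheme (module index 0 is "Module 1" in the text).
modules_high = [[DATA[m * 4 + w] for w in range(4)] for m in range(4)]
modules_low  = [[DATA[w * 4 + m] for w in range(4)] for m in range(4)]

addr = 0b1000                                # the text's example: fetching data 90
m, w = split_high_order(addr)
assert modules_high[m][w] == 90              # module 10 (Module 3), word 00
m, w = split_low_order(addr)
assert modules_low[m][w] == 90               # module 00 (Module 1), word 10
```

With low-order interleaving, consecutive addresses land in different modules, so a burst of sequential accesses can proceed in parallel across all four banks.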
• Advantage:
Memory interleaving enhances memory access performance by distributing data across multiple memory banks, allowing parallel access and improved bandwidth.
• Cache memory is one of the fastest memories. It is costlier than main memory but more economical than CPU registers. Cache memory basically acts as a buffer between the main memory and the CPU.
• Cache memory mapping techniques determine how main memory blocks are loaded into the cache.
• Mapping techniques:

direct mapping,
set associative mapping,
and fully associative mapping.
Primary Terminologies
Some primary terminologies related to cache mapping are listed below:

Main Memory Blocks: The main memory is divided into equal-sized partitions called the main
memory blocks.
Cache Line: The cache is divided into equal partitions called the cache lines.
Block Size: The number of bytes or words in one block is called the block size.
Tag Bits: Tag bits are the identification bits that are used to identify which block of main memory is
present in the cache line.
Number of Cache Lines: The number of cache lines is obtained by dividing the cache size by the block (line) size.
Direct Mapping
In direct mapping, the physical address is divided into three parts: tag bits, cache line number, and byte offset. The bits in the cache line number represent the cache line in which the content is present, while the tag bits identify which block of main memory is present in that cache line. The bits in the byte offset decide in which byte of the identified block the required content is present.
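As a sketch, the direct-mapping address split can be computed with bit operations. The cache and block sizes below are hypothetical, chosen only to make the field widths concrete:

```python
# Hypothetical direct-mapped cache: 64 KiB total, 16-byte blocks, 32-bit addresses.
CACHE_SIZE = 64 * 1024
BLOCK_SIZE = 16
NUM_LINES  = CACHE_SIZE // BLOCK_SIZE        # 4096 lines = cache size / block size

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1    # 4 byte-offset bits
LINE_BITS   = NUM_LINES.bit_length() - 1     # 12 cache-line-number bits

def direct_map(addr):
    """Split a physical address into (tag, line number, byte offset)."""
    offset = addr & (BLOCK_SIZE - 1)
    line   = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag    = addr >> (OFFSET_BITS + LINE_BITS)
    return tag, line, offset

tag, line, offset = direct_map(0x1234ABCD)
print(hex(tag), hex(line), hex(offset))      # 0x1234 0xabc 0xd
```

Each main-memory block can live in exactly one line, which is why only the tag needs to be compared on a lookup.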
Fully Associative Mapping
In fully associative mapping, the address is divided into two parts: tag bits and byte offset. The tag bits identify which memory block is present, and the bits in the byte offset field decide in which byte of the block the required content is present.
Set Associative Mapping
In set associative mapping, the cache lines are grouped into sets. The address is divided into three parts: tag bits, set number, and byte offset. The bits in the set number decide in which set of the cache the required block is present, the tag bits identify which block of main memory is present, and the bits in the byte offset field give the byte of the block in which the content is present.
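A similar sketch works for set associative mapping, again with hypothetical parameters (a 64 KiB, 4-way cache with 16-byte blocks):

```python
# Hypothetical 4-way set associative cache: 64 KiB total, 16-byte blocks.
CACHE_SIZE = 64 * 1024
BLOCK_SIZE = 16
WAYS       = 4
NUM_SETS   = CACHE_SIZE // (BLOCK_SIZE * WAYS)   # 1024 sets

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1        # 4 byte-offset bits
SET_BITS    = NUM_SETS.bit_length() - 1          # 10 set-number bits

def set_assoc_map(addr):
    """Split a physical address into (tag, set number, byte offset)."""
    offset = addr & (BLOCK_SIZE - 1)
    set_no = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag    = addr >> (OFFSET_BITS + SET_BITS)
    return tag, set_no, offset

# Fully associative mapping is the degenerate case of a single set:
# the set field disappears and the address splits into just (tag, offset).
tag, set_no, offset = set_assoc_map(0x1234ABCD)
```

Compared with direct mapping, fewer bits index the cache and more bits go into the tag, so a block can reside in any of the WAYS lines of its set.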
