
Unit IV

Memory Organization
Basic Concepts:

Memory organization refers to how data is structured, stored, and accessed in a computer's
memory. Here are some basic concepts:

1. Memory Hierarchy

 Registers: Small, fast storage locations in the CPU for temporary data.
 Cache: A small amount of high-speed memory located close to the CPU to speed up
access to frequently used data.
 Main Memory (RAM): Larger, slower memory used for active processes and data.
 Secondary Storage: Non-volatile storage like hard drives or SSDs, used for long-term
data retention.

2. Addressing Modes

 Determines how the CPU accesses data in memory. Common modes include:
o Immediate Addressing: The operand is specified directly in the instruction.
o Direct Addressing: The address of the operand is given explicitly.
o Indirect Addressing: The address of the operand is stored in another memory
location.
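
As a rough C-level analogy (not actual machine addressing modes, which are encoded inside instructions), the three cases above can be pictured as a literal constant, a named variable, and access through a pointer:

#include <stdio.h>

int main(void) {
    int x = 42;        /* operand stored at a known location: "direct" access   */
    int *p = &x;       /* p holds the address of the operand: "indirect" access */

    int a = 10;        /* 10 is encoded in the instruction stream: "immediate"  */
    int b = x;         /* operand fetched from x's address:        "direct"     */
    int c = *p;        /* address of the operand read from p first: "indirect"  */

    printf("%d %d %d\n", a, b, c);   /* prints: 10 42 42 */
    return 0;
}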

3. Memory Allocation

 Static Allocation: Memory is allocated at compile time (e.g., global variables).


 Dynamic Allocation: Memory is allocated at runtime (e.g., using malloc in C).
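
A minimal C sketch contrasting the two: counter is allocated statically, while the buffer obtained from malloc is allocated at runtime and must be released with free (names and sizes here are illustrative):

#include <stdio.h>
#include <stdlib.h>

int counter = 0;                 /* static allocation: reserved at compile/load time */

int main(void) {
    int *buf = malloc(100 * sizeof(int));   /* dynamic allocation: requested at runtime */
    if (buf == NULL) {
        return 1;                /* allocation can fail; always check */
    }
    buf[0] = ++counter;
    printf("%d\n", buf[0]);
    free(buf);                   /* release dynamically allocated memory when done */
    return 0;
}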

4. Virtual Memory

 An abstraction that allows the system to use more memory than is physically available by
using disk space to simulate additional RAM. It involves:
o Paging: Dividing memory into fixed-size pages, which can be loaded into
physical memory as needed.
o Segmentation: Dividing memory into variable-sized segments based on logical
divisions like functions or arrays.

5. Memory Management

 Involves managing the allocation, use, and release of memory to ensure efficient use and
prevent issues like memory leaks and fragmentation.
6. Data Structures

 The way data is organized in memory affects performance. Common structures include:
o Arrays: Contiguous memory locations for elements of the same type.
o Linked Lists: Elements (nodes) that are linked via pointers, allowing dynamic
memory usage.
o Trees and Hash Tables: Structures that facilitate efficient searching and
organizing of data.
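
A short C sketch contrasting the first two structures above: an array occupies contiguous memory, while linked-list nodes are allocated individually and connected by pointers (error checks omitted for brevity; names are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Array: elements of the same type in contiguous memory locations. */
int scores[4] = {10, 20, 30, 40};

/* Linked list: nodes linked via pointers, allowing memory to grow at runtime. */
struct node {
    int value;
    struct node *next;   /* next node, not necessarily adjacent in memory */
};

int main(void) {
    /* Build a two-node list dynamically (malloc checks omitted for brevity). */
    struct node *head = malloc(sizeof(struct node));
    head->value = 1;
    head->next = malloc(sizeof(struct node));
    head->next->value = 2;
    head->next->next = NULL;

    printf("array[2]=%d, list: %d -> %d\n", scores[2], head->value, head->next->value);

    free(head->next);
    free(head);
    return 0;
}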

7. Access Times

 Different types of memory have varying access speeds. Registers are fastest, followed by
cache, then RAM, and finally secondary storage.

Semiconductor RAM Memories:


Random Access Memory (RAM) is a form of semiconductor memory technology used for reading and writing data in any order. It serves as the computer's or processor's working memory, where variables and other data are stored and accessed on a random basis. Data can be written to and read from this type of memory many times. In RAM, the memory locations are arranged so that any location requires an equal amount of time for reading and writing.
RAM is volatile memory: it stores data only temporarily, and the data is retained only while the power supply is on. Once the supply is switched off, the stored data is lost. RAM is alternatively referred to as main memory, primary memory, or system memory.
Because random-access memory can be easily written, erased, and rewritten by the user, it is used in immense quantities in computer applications, since current-day computing and processing technology demands large amounts of memory.
Various types of RAM, including SRAM, DRAM, and SDRAM with its DDR3, DDR4, and DDR5 variants, are used in huge quantities.
Types of RAM
RAM is broadly categorised into two types:
o SRAM (Static Random Access Memory)
o DRAM (Dynamic Random Access Memory)
Static RAM (SRAM)
SRAM full form is Static Random Access Memory. It possesses an array of flip-flops that are
used to save the data. The memory cells consist of flip flops that hold the information till the
power supply is on.
The word static implies that the memory holds its contents as long as the electricity is being
supplied and the data is dumped when the power gets down because of its volatile nature.
In Static RAM, data is stored in FFs like structure and is implemented by BJT or MOSFET. A
flip-flop for a memory cell uses four or six transistors along with some wiring which does not
require refreshments. This makes static RAM significantly faster as compared to dynamic RAM.
Static Random Access Memory holds information as long as the power supply is on. Static
RAM’s are more expensive and consume more power and also have higher speeds than D-
RAMs. Static RAM is used to build the CPU’s speed-sensitive cache, while dynamic RAM
forms the larger system RAM space.
Dynamic RAM (DRAM)
DRAM (Dynamic Random Access Memory) stores data as charge on a capacitor–transistor pair in each memory cell. DRAM is implemented using MOSFETs.
Dynamic RAM must be refreshed regularly so that the data is maintained. This is achieved by placing the memory on a refresh circuit that rewrites the contents several hundred times every second. DRAM dissipates less power than SRAM but also operates at a slower rate.
Dynamic RAM is used for most system memory because it is relatively cheap and small. It consists of memory cells, each made up of a single capacitor and a single transistor.
Few other types of RAM are:
Synchronous Dynamic RAM (SDRAM)
SDRAM is a type of DRAM that works in sync with the CPU clock: it waits for the clock signal before responding to data input, in contrast to conventional (asynchronous) DRAM, which responds to data input as soon as it arrives. It is mostly used in computer memory, video game consoles, etc.
Single Data Rate Synchronous Dynamic RAM (SDR SDRAM)
The "single data rate" describes how the memory transfers data: it can process one read and one write instruction per clock cycle. It is popularly used in computer memory, video game consoles, etc.
Double Data Rate Synchronous Dynamic RAM (DDR SDRAM)
DDR SDRAM works like SDR SDRAM but is twice as fast: it can process two read and two write instructions per clock cycle. It is popularly used in computer memory. The upgraded versions of DDR SDRAM are DDR2, DDR3, and DDR4.
Graphics Double Data Rate Synchronous Dynamic RAM (GDDR SDRAM)
GDDR SDRAM is a variety of DDR SDRAM designed specifically for video graphics cards. The upgraded versions of GDDR SDRAM are GDDR2, GDDR3, GDDR4, and GDDR5 SDRAM.
Flash Memory
Flash memory is a sort of non-volatile storage that retains all data even after the power is turned off. It is popularly used in digital cameras, smartphones, tablets, handheld gaming systems, and toys.

Read-Only Memories
ROM stands for Read-Only Memory. It is a non-volatile memory used to store important information needed to operate the system. As the name implies, we can only read the programs and data stored on it. It is also a primary memory unit of the computer system. It contains electronic fuses that can be programmed to hold a specific piece of information. The information is stored in the ROM in binary format. It is also known as permanent memory.
Block Diagram of ROM
As shown in the diagram below, a ROM has k input lines and n output lines. The input address from which we wish to retrieve the ROM contents is applied on the k input lines. Since each of the k input lines can have a value of 0 or 1, there are a total of 2^k addresses that can be referred to by these input lines, and each of these addresses holds n bits of information that are output from the ROM.
A ROM of this type is designated as a 2^k x n ROM.


Internal Structure of ROM


The internal structure of ROM has two basic components:
 Decoder
 OR gates

A decoder is a circuit that converts an encoded form, such as binary-coded decimal (BCD), into a decimal form, so its outputs correspond to the binary value of its inputs. The decoder outputs drive the inputs of the OR gates in the ROM, and the OR-gate outputs are the ROM outputs. Consider a 64 x 4 ROM as an example. This read-only memory has 64 words, each 4 bits long, so there are four output lines. Since there are six input lines and 64 words in this ROM, we can specify 64 addresses, or minterms, by choosing one of the 64 words available on the output lines from the six input lines. Each input address selects a unique word.
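
A minimal C sketch of the 64 x 4 ROM described above, modeling the address decoding as an array index and the stored words as 4-bit values; the contents shown are arbitrary placeholders, not taken from the text:

#include <stdio.h>
#include <stdint.h>

/* 2^6 = 64 addressable words, each 4 bits wide (kept in the low bits of a byte).
   The contents here are arbitrary placeholders chosen for illustration. */
static const uint8_t rom[64] = {
    0x3, 0xA, 0x7, 0x0, /* ... remaining words default to 0 ... */
};

/* The 6-to-64 decoder selects exactly one word line; the OR-gate array
   then drives the 4 output lines with the bits stored in that word. */
uint8_t rom_read(uint8_t address) {
    return rom[address & 0x3F];   /* mask to the 6 input (address) lines */
}

int main(void) {
    printf("word at address 1 = 0x%X\n", (unsigned)rom_read(1));   /* prints 0xA */
    return 0;
}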
Working of ROM
ROM is made up of two primary components: the decoder and the OR logic gates. The decoder receives a binary input and produces a decoded (one-of-many) output, which serves as the input to the ROM's OR gates. ROM chips contain a grid of columns and rows that may be connected or left unconnected. If a row and column are connected, the lines are joined by a diode and the stored value is 1; when the value is 0, the lines are not connected. Each element in the grid represents one storage element on the memory chip. The diodes allow current to flow in only one direction and have a specific threshold, known as the forward breakover voltage, which determines how much voltage is required before the diode conducts. Silicon-based circuitry typically has a forward breakover voltage of about 0.6 V. To read a cell, the ROM chip sends a charge above the forward breakover voltage down the column while the selected row is grounded. Where a diode is present in the cell, the charge is conducted and the cell reads as "on," i.e. a binary 1.
Features of ROM
 ROM is a non-volatile memory.
 Information stored in ROM is permanent.
 Information and programs stored on it can only be read and cannot be modified.
 Information and programs are stored on ROM in binary format.
 It is used in the start-up process of the computer.
Types of Read-Only Memory (ROM)
Now we will discuss the types of ROM one by one:
1. MROM (Masked read-only memory): ROM is as old as semiconductor technology itself. MROM was the very first ROM; it consists of a grid of word lines and bit lines joined together by transistor switches. In this type of ROM the data is physically encoded in the circuit and can only be programmed during fabrication. It was not very expensive.
2. PROM (Programmable read-only memory): PROM is a form of digital memory in which each bit is set by a fuse or anti-fuse. The data stored in it is permanent and cannot be changed or erased. It is used for low-level programs such as firmware or microcode.
3. EPROM (Erasable programmable read-only memory): EPROM, also called EROM, is a type of PROM that can be reprogrammed. The data stored in an EPROM can be erased with ultraviolet light and then reprogrammed, although the number of times it can be reprogrammed is limited. Before the era of EEPROM and flash memory, EPROM was used in microcontrollers.
4. EEPROM (Electrically erasable programmable read-only memory): As its name implies, it can be programmed and erased electrically. The data and program in this ROM can be erased and reprogrammed about ten thousand times, and erasing and programming take roughly 4 ms to 10 ms. It is used in microcontrollers and remote keyless systems.
Advantages of ROM
 It is cheaper than RAM and it is non-volatile memory.
 It is more reliable as compared to RAM.
 Its circuit is simple as compared to RAM.
 It doesn’t need refreshing time because it is static.
 It is easy to test.
Disadvantages of ROM
 It is a read-only memory, so it cannot be modified.
 It is slower as compared to RAM.
Difference Between RAM and ROM
RAM stands for Random Access Memory; ROM stands for Read Only Memory.
Data in RAM can be modified, edited, or erased; data in ROM cannot be modified or erased, it can only be read.
RAM is a volatile memory that holds data only as long as power is supplied; ROM is a non-volatile memory that retains data even after the power is turned off.
RAM is faster than ROM; ROM is slower than RAM.
RAM is costly compared to ROM; ROM is cheap compared to RAM.
A RAM chip can store only a few gigabytes (GB) of data; a ROM chip can store multiple megabytes (MB) of data.
The CPU can easily access data stored in RAM; it cannot easily access data stored in ROM.
RAM is used for the temporary storage of data currently being processed by the CPU; ROM is used to store firmware, BIOS, and other data that needs to be retained.

Speed, Size and Cost:


When discussing memory organization, speed, size, and cost are three critical factors that
significantly impact performance and design. Here's a detailed exploration of each:
1. Speed
a. Access Time
 Definition: The time it takes to read or write data to memory. Measured in nanoseconds (ns) for
most semiconductor memories.
 Types:
o Random Access Time: The time to access any location in memory.
o Sequential Access Time: The time taken to access data in a specific sequence.
b. Latency and Bandwidth
 Latency: The delay before data transfer begins, affecting how quickly data can be accessed.
 Bandwidth: The amount of data that can be transferred to and from memory per unit of time,
typically measured in megabytes per second (MB/s) or gigabytes per second (GB/s).
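
As a back-of-the-envelope example of peak bandwidth (assuming a DDR4-3200 module with a 64-bit data bus; these figures are illustrative, not taken from the text above):

#include <stdio.h>

int main(void) {
    /* Assumed example values for a DDR4-3200 module. */
    double transfers_per_second = 3200e6;  /* 3200 mega-transfers per second */
    double bytes_per_transfer   = 8.0;     /* 64-bit data bus = 8 bytes per transfer */

    double peak_bw = transfers_per_second * bytes_per_transfer;  /* bytes per second */
    printf("Peak bandwidth ~ %.1f GB/s\n", peak_bw / 1e9);        /* ~ 25.6 GB/s */
    return 0;
}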
c. Factors Affecting Speed
 Memory Type: SRAM is faster than DRAM due to its architecture. Cache memory (SRAM) is
quicker than main memory (DRAM).
 Technology: Advancements like DDR (Double Data Rate) improve data transfer rates by
allowing data to be sent on both the rising and falling edges of the clock signal.
 Memory Hierarchy: Systems often use a layered approach (registers, cache, RAM, and storage),
with each layer having varying speeds.
2. Size
a. Capacity
 Definition: The total amount of data that can be stored in memory, measured in bits, bytes,
kilobytes (KB), megabytes (MB), gigabytes (GB), and terabytes (TB).
 Types of Memory:
o SRAM: Generally smaller in capacity due to its higher cost and complexity, used for
cache memory.
o DRAM: Larger capacities are more common, suitable for main memory, with modern
modules often exceeding 16GB.
b. Form Factors
 Physical Size: Memory modules come in various sizes and formats (e.g., DIMMs, SO-DIMMs,
and LPDDR for mobile devices). The size affects the form factor of the device, such as laptops or
desktops.
c. Density
 Definition: The amount of data stored per unit area, which is important for determining how
much memory can fit in a given physical space. High-density memory can lead to smaller, more
powerful devices.

3. Cost
a. Cost per Bit
 Definition: The expense associated with storing one bit of data, which can vary significantly
across different types of memory.
 SRAM vs. DRAM: SRAM is typically more expensive per bit than DRAM because of its
complexity and lower density, making it suitable for smaller cache applications rather than large-
scale data storage.
b. Market Factors
 Supply and Demand: Prices can fluctuate based on manufacturing capabilities, demand for
specific memory types, and technological advancements.
 Production Costs: Costs related to fabrication, materials, and technology affect overall memory
prices.
c. Total Cost of Ownership
 Integration and Power Consumption: The total cost includes not only the purchase price but
also operational costs, such as power consumption. DRAM, while cheaper, can consume more
power due to refresh cycles.
 Performance vs. Cost: High-speed memory offers better performance but at a higher cost,
leading to trade-offs when designing systems.

Cache Memories
Cache memory is a small, high-speed storage area in a computer. The cache is a smaller
and faster memory that stores copies of the data from frequently used main memory locations.
There are various independent caches in a CPU, which store instructions and data. The most
important use of cache memory is that it is used to reduce the average time to access data from
the main memory.
By storing this information closer to the CPU, cache memory helps speed up the overall
processing time. Cache memory is much faster than the main memory (RAM). When the CPU
needs data, it first checks the cache. If the data is there, the CPU can access it quickly. If not, it
must fetch the data from the slower main memory.
Characteristics of Cache Memory
 Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU.
 Cache Memory holds frequently requested data and instructions so that they are immediately
available to the CPU when needed.
 Cache memory is costlier than main memory or disk memory but more economical than CPU
registers.
 Cache Memory is used to speed up and synchronize with a high-speed CPU.

Levels of Memory
 Level 1 or Registers: Registers hold the data and instructions that the CPU is working on immediately. The most commonly used registers are the Accumulator, Program Counter, Address Register, etc.
 Level 2 or Cache memory: The fastest memory after registers, with a very short access time, where data is temporarily stored for faster access.
 Level 3 or Main Memory: The memory on which the computer currently works. It is comparatively small in size, and once power is off the data no longer stays in this memory.
 Level 4 or Secondary Memory: External memory that is not as fast as main memory, but where data stays permanently.

Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache.
 If the processor finds that the memory location is in the cache, a Cache Hit has occurred and
data is read from the cache.
 If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a new entry and copies in the data from main memory, and the request is then fulfilled from the contents of the cache.
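
A minimal C sketch of this hit/miss check for a direct-mapped cache with made-up sizes (16 lines of 64 bytes); real caches add associativity, replacement, and write policies:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES   16          /* illustrative size: 16 cache lines */
#define LINE_BYTES  64          /* 64-byte blocks */

struct cache_line {
    bool     valid;
    uint32_t tag;
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a cache hit, false on a miss (and installs the new tag,
   standing in for copying the block from main memory). */
bool access_cache(uint32_t address) {
    uint32_t block = address / LINE_BYTES;
    uint32_t index = block % NUM_LINES;      /* which line the block maps to */
    uint32_t tag   = block / NUM_LINES;      /* identifies the block held in that line */

    if (cache[index].valid && cache[index].tag == tag) {
        return true;                          /* cache hit */
    }
    cache[index].valid = true;                /* cache miss: allocate entry */
    cache[index].tag   = tag;
    return false;
}

int main(void) {
    printf("%s\n", access_cache(0x1000) ? "hit" : "miss");  /* miss (cold cache) */
    printf("%s\n", access_cache(0x1004) ? "hit" : "miss");  /* hit (same block) */
    return 0;
}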
Performance considerations
Performance considerations in memory organization are crucial for optimizing the
efficiency and speed of computing systems. These factors directly impact how quickly and
effectively a system can access and process data. Here’s a detailed examination of key
performance considerations:

1. Access Speed

a. Latency

 Definition: The time taken to access data from memory. It includes the time to locate and retrieve
the data.
 Types:
o Read Latency: The time to read data.
o Write Latency: The time to write data.
 Impact: Lower latency leads to faster system responses, improving overall application
performance.

b. Throughput

 Definition: The amount of data that can be processed in a given amount of time, often measured
in MB/s or GB/s.
 Considerations: High throughput is essential for applications that handle large data sets, such as
databases and video processing.

2. Memory Hierarchy

a. Cache Memory

 Role: Provides a faster data access path by storing frequently used data close to the CPU.
 Levels:
o L1 Cache: Fastest and smallest, located on the CPU.
o L2 and L3 Caches: Larger but slower, typically shared between cores.
 Consideration: Efficient cache utilization can significantly reduce access times for frequently
used data.

b. Main Memory (RAM)

 Performance: Main memory speed affects overall system performance, especially in multitasking and heavy applications.
 Optimization: Using faster DRAM (like DDR4/DDR5) can enhance performance compared to
older standards.

3. Data Locality

a. Temporal Locality

 Definition: If a particular memory location is accessed, it is likely to be accessed again soon.


 Implication: Caching mechanisms are designed to take advantage of temporal locality, keeping
recently accessed data close to the CPU.

b. Spatial Locality

 Definition: If a memory location is accessed, nearby locations are likely to be accessed soon.
 Implication: This consideration influences how data is organized and loaded into cache, affecting
cache line sizes and prefetching strategies.
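
A small C illustration of spatial locality: traversing a 2-D array row by row touches adjacent addresses and uses each loaded cache line fully, while column-by-column traversal does not (the array size is illustrative):

#include <stdio.h>

#define N 1024

static double grid[N][N];

/* Row-major traversal: consecutive iterations touch adjacent memory locations,
   so each cache line that is loaded is fully used (good spatial locality). */
double sum_row_major(void) {
    double total = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            total += grid[i][j];
    return total;
}

/* Column-major traversal: consecutive iterations jump N*sizeof(double) bytes,
   touching a different cache line almost every time (poor spatial locality). */
double sum_col_major(void) {
    double total = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            total += grid[i][j];
    return total;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}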

4. Memory Bandwidth

 Definition: The maximum rate at which data can be read from or written to memory by the CPU.
 Considerations: Applications requiring high data throughput (e.g., graphics processing, scientific
computing) benefit from high memory bandwidth.

5. Refresh Rates and Latency in DRAM

 Refresh Mechanism: DRAM requires periodic refreshing to maintain data integrity, which can
introduce latency.
 Impact on Performance: Systems must balance refresh cycles with performance needs,
especially in time-sensitive applications.

6. Interleaving and Bank Organization

a. Memory Interleaving

 Definition: Distributing memory addresses across multiple memory banks to improve access
speed.
 Benefit: Allows simultaneous access to multiple banks, reducing wait times and improving
throughput.
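
A minimal C sketch of low-order interleaving across an assumed four banks: consecutive word addresses land in different banks, so sequential accesses can overlap:

#include <stdio.h>

#define NUM_BANKS 4   /* illustrative: 4-way low-order interleaving */

/* Consecutive word addresses map to different banks, so a sequential
   burst of accesses can keep several banks busy at the same time. */
unsigned bank_of(unsigned word_address)        { return word_address % NUM_BANKS; }
unsigned offset_in_bank(unsigned word_address) { return word_address / NUM_BANKS; }

int main(void) {
    for (unsigned addr = 0; addr < 8; addr++)
        printf("word %u -> bank %u, offset %u\n",
               addr, bank_of(addr), offset_in_bank(addr));
    return 0;
}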

b. Bank Organization

 Considerations: Memory banks should be organized to minimize access conflicts and maximize
parallelism.

7. Error Correction and Reliability

 ECC (Error-Correcting Code): A method used in memory systems to detect and correct single-
bit errors.
 Impact on Performance: While ECC enhances reliability, it can introduce additional latency and
overhead, which should be accounted for in performance assessments.
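
Real ECC memory typically uses SECDED codes over wide words; the following C sketch shows the same idea on a 4-bit value using a Hamming(7,4) code, which can correct any single-bit error:

#include <stdint.h>
#include <stdio.h>

/* Encode 4 data bits (d1..d4) into a 7-bit Hamming(7,4) codeword.
   Bit positions 1..7 hold: p1 p2 d1 p4 d2 d3 d4. */
uint8_t hamming74_encode(uint8_t data) {
    uint8_t d1 = (data >> 0) & 1, d2 = (data >> 1) & 1;
    uint8_t d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* parity over positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* parity over positions 2,3,6,7 */
    uint8_t p4 = d2 ^ d3 ^ d4;   /* parity over positions 4,5,6,7 */
    return (uint8_t)(p1 | (p2 << 1) | (d1 << 2) | (p4 << 3) |
                     (d2 << 4) | (d3 << 5) | (d4 << 6));
}

/* Decode a (possibly corrupted) codeword: the syndrome gives the position
   of a single flipped bit, which is corrected before extracting the data. */
uint8_t hamming74_decode(uint8_t code) {
    uint8_t b[8];
    for (int i = 1; i <= 7; i++) b[i] = (code >> (i - 1)) & 1;
    uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    uint8_t s4 = b[4] ^ b[5] ^ b[6] ^ b[7];
    uint8_t syndrome = (uint8_t)(s1 | (s2 << 1) | (s4 << 2));
    if (syndrome != 0) b[syndrome] ^= 1;   /* correct the single-bit error */
    return (uint8_t)(b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3));
}

int main(void) {
    uint8_t code = hamming74_encode(0xB);      /* data = 1011 */
    uint8_t bad  = (uint8_t)(code ^ (1 << 4)); /* flip the bit at position 5 */
    printf("decoded: 0x%X\n", (unsigned)hamming74_decode(bad));  /* prints 0xB */
    return 0;
}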

8. Power Consumption and Thermal Management

 Power Usage: Different memory types consume varying amounts of power, affecting overall
system efficiency.
 Thermal Throttling: High temperatures can lead to performance degradation; hence, effective
cooling solutions are necessary.

9. Scalability and Future-Proofing

 Scalability: As applications grow, the memory system should be capable of scaling to meet
increased demands without significant redesign.
 Future-Proofing: Designing memory systems with the potential for upgrades (e.g., supporting
newer memory standards) can enhance long-term performance.

Virtual Memory
Virtual memory is a memory management technique used by operating systems to give
the appearance of a large, continuous block of memory to applications, even if the physical
memory (RAM) is limited. It allows the system to compensate for physical memory shortages,
enabling larger applications to run on systems with less RAM.
A memory hierarchy, consisting of a computer system’s memory and a disk, enables a process
to operate with only some portions of its address space in memory. A virtual memory is what its
name indicates: it is an illusion of a memory that is larger than the real memory. We refer to the
software component of virtual memory as a virtual memory manager. The basis of virtual
memory is the noncontiguous memory allocation model. The virtual memory manager removes
some components from memory to make room for other components.
The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.
The History of Virtual Memory
Before virtual memory, computers used RAM and secondary memory for data storage. Early
computers used magnetic core memory as main memory and magnetic drums as
secondary memory. In the 1940s and 1950s, computer memory was very expensive and limited
in size. As programs became larger and more complex, developers had to worry that their
programs might use up all the available memory and cause the computer to run out of space to
work.
In those early days, if a program was larger than the available memory, programmers used the concept of overlaying: portions of code that were not currently needed were kept aside and overwritten into memory only when required. This demanded extensive programming effort, and it is the reason virtual memory was developed.
In 1956, German physicist Fritz-Rudolf Güntsch developed the concept of virtual memory. The first real virtual memory system was created at the University of Manchester in England while developing the Atlas computer. This system used a method called paging, which allowed virtual addresses (used by programs) to be mapped to the computer's main memory. The Atlas computer was built in 1959 and started working in 1962.
The first commercially available computer with virtual memory was released by Burroughs Corp. in 1961. This version of virtual memory did not use paging; instead, it supported segmentation.
How Virtual Memory Works?
Virtual Memory is a technique that is implemented using both hardware and software. It maps
memory addresses used by a program, called virtual addresses, into physical addresses in
computer memory.
 All memory references within a process are logical addresses that are dynamically translated
into physical addresses at run time. This means that a process can be swapped in and out of
the main memory such that it occupies different places in the main memory at different times
during the course of execution.
 A process may be broken into a number of pieces and these pieces need not be continuously
located in the main memory during execution. The combination of dynamic run-time address
translation and the use of a page or segment table permits this.
If these characteristics are present, then it is not necessary that all the pages or segments are
present in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand Paging
or Demand Segmentation.
Types of Virtual Memory
In a computer, virtual memory is managed by the Memory Management Unit (MMU), which is
often built into the CPU. The CPU generates virtual addresses that the MMU translates into
physical addresses.
There are two main types of virtual memory:
 Paging
 Segmentation
Paging
Paging divides memory into small fixed-size blocks called pages. When the computer runs out
of RAM, pages that aren’t currently in use are moved to the hard drive, into an area called a
swap file. The swap file acts as an extension of RAM. When a page is needed again, it is
swapped back into RAM, a process known as page swapping. This ensures that the operating
system (OS) and applications have enough memory to run.
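
A simplified C sketch of the virtual-to-physical translation behind paging, assuming 4 KB pages and a tiny single-level page table (real MMUs use multi-level tables and TLBs); a failed lookup stands in for a page fault that would trigger a swap-in:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE   4096u                   /* 4 KB pages */
#define NUM_PAGES   16u                     /* tiny illustrative address space */

struct pte {
    bool     present;    /* is the page currently in physical memory? */
    uint32_t frame;      /* physical frame number if present */
};

static struct pte page_table[NUM_PAGES];

/* Translate a virtual address; returns false on a page fault, i.e. when the
   page would first have to be swapped in from disk by the OS. */
bool translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    if (page >= NUM_PAGES || !page_table[page].present)
        return false;                                   /* page fault */

    *paddr = page_table[page].frame * PAGE_SIZE + offset;
    return true;
}

int main(void) {
    page_table[1].present = true;           /* pretend page 1 sits in frame 7 */
    page_table[1].frame   = 7;

    uint32_t paddr;
    if (translate(0x1234, &paddr))          /* page 1, offset 0x234 */
        printf("physical address: 0x%X\n", (unsigned)paddr);   /* 7*4096 + 0x234 = 0x7234 */
    return 0;
}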
Segmentation
Segmentation divides virtual memory into segments of different sizes. Segments that aren’t
currently needed can be moved to the hard drive. The system uses a segment table to keep track
of each segment’s status, including whether it’s in memory, if it’s been modified, and its
physical address. Segments are mapped into a process’s address space only when needed.
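
A companion C sketch for segmentation, using an assumed three-entry segment table with base and limit fields; the limit check is what rejects accesses outside a segment:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

struct segment {
    uint32_t base;    /* where the segment starts in physical memory */
    uint32_t limit;   /* size of the segment in bytes */
};

/* Illustrative segment table: code, data, stack. */
static struct segment seg_table[3] = {
    { 0x10000, 0x4000 },   /* segment 0: code  */
    { 0x20000, 0x2000 },   /* segment 1: data  */
    { 0x30000, 0x1000 },   /* segment 2: stack */
};

/* A virtual address here is (segment number, offset); the offset must be
   within the segment's limit, otherwise the access is rejected. */
bool translate_seg(uint32_t seg, uint32_t offset, uint32_t *paddr) {
    if (seg >= 3 || offset >= seg_table[seg].limit)
        return false;                        /* segmentation fault */
    *paddr = seg_table[seg].base + offset;
    return true;
}

int main(void) {
    uint32_t paddr;
    if (translate_seg(1, 0x100, &paddr))
        printf("physical address: 0x%X\n", (unsigned)paddr);   /* 0x20100 */
    return 0;
}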
Combining Paging and Segmentation
Sometimes, both paging and segmentation are used together. In this case, memory is divided
into pages, and segments are made up of multiple pages. The virtual address includes both a
segment number and a page number.

Virtual Memory vs Physical Memory


Definition: Virtual memory is an abstraction that extends the available memory by using disk storage; physical memory (RAM) is the actual hardware that stores the data and instructions currently being used by the CPU.
Location: Virtual memory resides on the hard drive or SSD; physical memory sits on the computer's motherboard.
Speed: Virtual memory is slower (due to disk I/O operations); physical memory is faster (accessed directly by the CPU).
Capacity: Virtual memory is larger, limited by disk space; physical memory is smaller, limited by the amount of RAM installed.
Cost: Virtual memory is lower in cost (additional disk storage); physical memory is higher in cost (RAM modules).
Data Access: Virtual memory is accessed indirectly (via paging and swapping); physical memory is accessed directly by the CPU.
Volatility: Virtual memory is non-volatile (data persists on disk); physical memory is volatile (data is lost when power is off).

What are the Applications of Virtual memory?


Virtual memory has the following important characteristics that increase the capabilities of the computer system:
 Increased Effective Memory: One major practical application of virtual memory is that it enables a computer to have more memory than the physical memory by using disk space. This allows larger applications and numerous programs to run at the same time without needing an equivalent amount of DRAM.
 Memory Isolation: Virtual memory allocates a unique address space to each process, which isolates processes from one another. This separation increases safety and reliability, because one process cannot read or modify another process's memory space, whether by mistake or by a deliberate act.
 Efficient Memory Management: Virtual memory helps make better use of physical memory through techniques such as paging and segmentation. Memory pages that are not frequently used can be moved to disk, freeing RAM for active processes; this supports efficient use of memory and better system performance.
 Simplified Program Development: Programmers do not have to consider the physical memory available in a system when virtual memory is present. They can program as if there were one big block of memory, which makes programming easier and more efficient for delivering complex applications.
How to Manage Virtual Memory?
Here are 5 key points on how to manage virtual memory:
1. Adjust the Page File Size
 Automatic Management: All contemporary operating systems, including Windows, can automatically configure the size of the page file. It is set automatically based on the amount of RAM, although the user can manually adjust the page file size if required.
 Manual Configuration: For advanced users, setting a custom size can sometimes improve system performance. A common recommendation is to set the initial size of the page file to about 1.5 times the amount of physical RAM and the maximum size to about 3 times the physical RAM.
2. Place the Page File on a Fast Drive
 SSD Placement: If feasible, the page file should be stored on an SSD rather than an HDD. An SSD has better read and write times, so virtual memory performs better on it.
 Separate Drive: On systems with multiple drives, placing the page file on a different drive than the OS can further improve performance.
3. Monitor and Optimize Usage
 Performance Monitoring: Use system performance-monitoring tools to track how much virtual memory is in use. High page-file usage may indicate a lack of physical RAM, a need to change the virtual memory settings, or a need to add more physical RAM.
 Regular Maintenance: Make sure no unnecessary toolbars or other applications are running in the background, and uninstall unneeded tools to free up memory.
4. Disable Virtual Memory for SSD
 Sufficient RAM: If your system has a large amount of physical memory, for example 16 GB or more, you may consider disabling the page file to minimize SSD wear. This should be done carefully, and only if your applications are unlikely to use all of the available RAM.
5. Optimize System Settings
 System Configuration: Adjust system settings that affect virtual memory efficiency. In Windows this involves the advanced system settings for virtual memory; other operating systems, such as Linux, provide their own tools and commands for tuning how virtual memory is used.
 Regular Updates: Keep drivers up to date, because new releases often contain enhancements and fixes related to memory management.
What are the Benefits of Using Virtual Memory?
 Many processes can be maintained in the main memory.
 A process larger than the main memory can be executed because of demand paging. The OS
itself loads pages of a process in the main memory as required.
 It allows greater multiprogramming levels by using less of the available (primary) memory
for each process.
 It has twice the capacity for addresses as main memory.
 It makes it possible to run more applications at once.
 Users are spared from having to add memory modules when RAM space runs out, and
applications are liberated from shared memory management.
 When only a portion of a program is required for execution, speed is increased.
 Memory isolation has increased security.
 It makes it possible for several larger applications to run at once.
 Memory allocation is comparatively cheap.
 It does not suffer from external fragmentation.
 It is efficient to manage logical partition workloads using the CPU.
 Automatic data movement is possible.
What are the Limitation of Virtual Memory?
 It can slow down the system performance, as data needs to be constantly transferred between
the physical memory and the hard disk.
 It can increase the risk of data loss or corruption, as data can be lost if the hard disk fails or if
there is a power outage while data is being transferred to or from the hard disk.
 It can increase the complexity of the memory management system, as the operating system
needs to manage both physical and virtual memory.

Memory Management Requirements


Memory management keeps track of the status of each memory location, whether it is allocated or free. It allocates memory dynamically to programs at their request and frees it for reuse when it is no longer needed. Memory management is meant to satisfy certain requirements that we should keep in mind.
These requirements of memory management are:
1. Relocation –
The available memory is generally shared among a number of processes in a multiprogramming system, so it is not possible to know in advance which other programs will be resident in main memory at the time this program executes. Swapping active processes in and out of main memory enables the operating system to have a larger pool of ready-to-execute processes. When a program is swapped out to disk, it is not guaranteed that when it is swapped back into main memory it will occupy its previous location, since that location may now be occupied by another process. We may need to relocate the process to a different area of memory; thus a program may be moved around in main memory due to swapping.

The figure depicts a process image occupying a contiguous region of main memory. The operating system needs to know many things, including the location of process control information, the execution stack, and the entry point of the code. Within a program, the memory references in the various instructions are called logical addresses. After the program is loaded into main memory, the processor and the operating system must be able to translate these logical addresses into physical addresses. Branch instructions contain the address of the next instruction to be executed, and data reference instructions contain the address of the byte or word of data referenced.
2. Protection – There is always a danger when multiple programs run at the same time that one program may write into the address space of another program. So every process must be protected against unwanted interference when another process tries to write into its memory, whether accidentally or deliberately. There is a trade-off between the relocation and protection requirements, as satisfying the relocation requirement increases the difficulty of satisfying the protection requirement.

Because the location of a program in main memory cannot be predicted, it is impossible to check absolute addresses at compile time to ensure protection. Most programming languages allow dynamic calculation of addresses at run time. The memory protection requirement must therefore be satisfied by the processor rather than the operating system, because the operating system can hardly control a process while it occupies the processor. Thus it is possible to check the validity of memory references.
3. Sharing – A protection mechanism must be flexible enough to allow several processes to access the same portion of main memory. Allowing each process to access the same copy of a program, rather than having its own separate copy, has advantages. For example, multiple processes may use the same system file, and it is natural to load one copy of the file into main memory and let it be shared by those processes. It is the task of memory management to allow controlled access to shared areas of memory without compromising protection. The mechanisms used to support relocation also support sharing capabilities.
4. Logical organization – Main memory is organized as a linear, one-dimensional address space consisting of a sequence of bytes or words. Most programs, however, are organized into modules, some of which are unmodifiable (read-only or execute-only) and some of which contain data that can be modified. To deal effectively with a user program, the operating system and computer hardware must support a basic module structure that provides the required protection and sharing. This has the following advantages:
 Modules can be written and compiled independently, and all references from one module to another are resolved by the system at run time.
 Different modules can be given different degrees of protection.
 There are mechanisms by which modules can be shared among processes. Sharing can be provided at the module level, letting the user specify the sharing that is desired.
5. Physical organization – Computer memory is structured into two levels, referred to as main memory and secondary memory. Main memory is relatively fast and costly compared to secondary memory, and it is volatile. Secondary memory is therefore provided for long-term storage of data, while main memory holds the programs currently in use. The major system concern is the flow of information between main memory and secondary memory, and it is impractical to leave this to the programmer for two reasons:

 The programmer may engage in a practice known as overlaying when the main memory
available for a program and its data may be insufficient. It allows different modules to be
assigned to the same region of memory. One disadvantage is that it is time-consuming for
the programmer.
 In a multiprogramming environment, the programmer does not know how much space will
be available at the time of coding and where that space will be located inside the memory.
Secondary Memory:
Secondary memory is a type of computer memory that is used for long-term storage of
data and programs. It is also known as auxiliary memory or external memory, and is distinct
from primary memory, which is used for short-term storage of data and instructions that are
currently being processed by the CPU.
Secondary memory devices are typically larger and slower than primary memory, but offer a
much larger storage capacity. This makes them ideal for storing large files such as documents,
images, videos, and other multimedia content.
Some examples of secondary memory devices include hard disk drives (HDDs), solid-state
drives (SSDs), magnetic tapes, optical discs such as CDs and DVDs, and flash memory such as
USB drives and memory cards. Each of these devices uses different technologies to store data,
but they all share the common feature of being non-volatile, meaning that they can store data
even when the computer is turned off.
Secondary memory devices are accessed by the CPU via input/output (I/O) operations, which
involve transferring data between the device and primary memory. The speed of these
operations is affected by factors such as the type of device, the size of the file being accessed,
and the type of connection between the device and the computer.
Overall, secondary memory is an essential component of modern computing systems and plays
a critical role in the storage and retrieval of data and programs.
Difference between Primary Memory and Secondary Memory:

Primary memory is directly accessed by the Central Processing Unit (CPU); secondary memory is not accessed directly by the CPU. Data from secondary memory is first loaded into Random Access Memory (RAM) and then sent to the processing unit.
RAM provides much faster access to data than secondary memory; by loading software programs and required files into primary memory (RAM), computers can process data much more quickly. Secondary memory is slower at data access; typically, primary memory is about six times faster than secondary memory.
Primary memory, i.e. Random Access Memory (RAM), is volatile and is completely erased when the computer is shut down; secondary memory is non-volatile, which means it can hold on to its data with or without an electrical power supply.
Uses of Secondary Media:
 Permanent Storage: Primary Memory (RAM) is volatile, i.e. it loses all information when
the electricity is turned off, so in order to secure the data permanently in the device,
Secondary storage devices are needed.
 Portability: Storage mediums, like CDs, flash drives can be used to transfer the data from
one device to another.
Fixed and Removable Storage
Fixed Storage- Fixed storage is an internal media device that is used by a computer system to
store data, and usually, these are referred to as the Fixed disk drives or Hard Drives.
Fixed storage devices are not literally fixed; they can be removed from the system for repair, maintenance, or upgrades. In general, however, this cannot be done without a proper toolkit to open up the computer system and provide physical access, and it needs to be done by a technician. Technically, almost all of the data being processed on a computer system is stored on some type of built-in fixed storage device.
Types of fixed storage:
 Internal flash memory (rare)
 SSD (solid-state disk) units
 Hard disk drives (HDD)
Removable Storage- Removable storage is an external media device that is used by a computer
system to store data, and usually, these are referred to as the Removable Disks drives or the
External Drives. Removable storage is any type of storage device that can be removed/ejected
from a computer system while the system is running. Examples of external devices include
CDs, DVDs, and Blu-ray disk drives, as well as diskettes and USB drives. Removable storage
makes it easier for a user to transfer data from one computer system to another.
In terms of storage, the main benefit of removable disks is that they can provide the fast data transfer rates associated with storage area networks (SANs).
Types of Removable Storage:
 Optical discs (CDs, DVDs, Blu-ray discs)
 Memory cards
 Floppy disks
 Magnetic tapes
 Disk packs
 Paper storage (punched tapes, punched cards)
Secondary Storage Media
There are the following main types of storage media:
1. Magnetic storage media: Magnetic media is coated with a magnetic layer that is magnetized in clockwise or anticlockwise directions. When the disk moves, the read head interprets the data stored at a specific location as binary 1s and 0s.
Examples: hard disks, floppy disks, and magnetic tapes.
 Floppy Disk: A floppy disk is a flexible disk with a magnetic coating on it. It is packaged
inside a protective plastic envelope. These are one of the oldest types of portable storage
devices that could store up to 1.44 MB of data but now they are not used due to very little
memory storage.
 Hard disk: A hard disk consists of one or more circular disks called platters which are
mounted on a common spindle. Each surface of a platter is coated with magnetic material.
Both surfaces of each disk are capable of storing data except the top and bottom disks where
only the inner surface is used. The information is recorded on the surface of the rotating disk
by magnetic read/write heads. These heads are joined to a common arm known as the access
arm.
Hard disk drive components: Most of the basic types of hard drives contain a number of
disk platters that are placed around a spindle which is placed inside a sealed chamber. The
chamber also includes read/write heads and motors. Data is stored on each of these disks in
the arrangement of concentric circles called tracks which are divided further into sectors.
Though internal Hard drives are not very portable and are used internally in a computer
system, external hard disks can be used as a substitute for portable storage. Hard disks can
store data up to several terabytes.

2. Optical storage media: In optical storage media, information is stored and read using a laser beam. The data is stored as a spiral pattern of pits and ridges denoting binary 0 and binary 1.
Examples: CDs and DVDs
 Compact Disk: A Compact Disc drive (CDD) is a device that a computer uses to read data that is encoded digitally on a compact disc (CD). A CD drive can be installed inside a computer's compartment, with an opening for easy disc tray access, or it can be a peripheral device connected to one of the ports provided on the computer system. A compact disk (CD) can store approximately 650 to 700 megabytes of data. A computer must have a CD drive to read CDs. There are three types of CDs:
CD-ROM: It stands for Compact Disk – Read Only Memory. Data is written on these disks at the time of manufacture. This data cannot be changed once it is written by the manufacturer; it can only be read. CD-ROMs are used for text, audio, and video distribution such as games, encyclopedias, and application software.
CD-R: It stands for Compact Disk – Recordable. Data can be recorded on these disks, but only once. Once the data is written to a CD-R, it cannot be erased or modified.
CD-RW: It stands for Compact Disk – Rewritable. It can be read or written multiple times, but a CD-RW drive needs to be installed on your computer before editing a CD-RW.
 DVD: It stands for Digital Versatile Disk or Digital Video Disk. It looks just like a CD and
uses similar technology as that of the CDs but allows tracks to be spaced closely enough to
store data that is more than six times the CD’s capacity. It is a significant advancement in
portable storage technology. A DVD holds 4.7 GB to 17 GB of data.
 Blu-ray Disc: This is the latest optical storage medium for storing high-definition audio and video. It is similar to a CD or DVD but can store up to 27 GB of data on a single-layer disc and up to 54 GB on a dual-layer disc. While CDs and DVDs use a red laser beam, the Blu-ray disc uses a blue laser to read/write data on the disk.
3. Solid State Memories Solid-state storage devices are based on electronic circuits with no
moving parts like the reels of tape, spinning discs, etc. Solid-state storage devices use special
memories called flash memory to store data. A solid state drive (or flash memory) is used
mainly in digital cameras, pen drives, or USB flash drives.

Pen Drives: Pen Drives or Thumb drives or Flash drives are the recently emerged portable
storage media. It is an EEPROM-based flash memory that can be repeatedly erased and written
using electric signals. This memory is accompanied by a USB connector which enables the pen
drive to connect to the computer. They have a capacity smaller than a hard disk but greater than
a CD. Pendrive has the following advantages:
 Transfer Files: A pen drive plugged into a USB port of the system can be used to transfer files, documents, and photos to a PC and vice versa. Similarly, selected files can be transferred between a pen drive and any type of workstation.
 Portability: The lightweight nature and smaller size of a pen drive make it possible to carry it
from place to place which makes data transportation an easier task.
 Backup Storage: Most pen drives now come with password encryption, so important information such as family documents, medical records, and photos can be stored on them as a backup.
 Transport Data: Professionals/Students can now easily transport large data files and
video/audio lectures on a pen drive and gain access to them from anywhere. Independent PC
technicians can store work-related utility tools, various programs, and files on a high-speed 64
GB pen drive and move from one site to another.

Advantages:

1. Large storage capacity: Secondary memory devices typically have a much larger storage
capacity than primary memory, allowing users to store large amounts of data and programs.
2. Non-volatile storage: Data stored on secondary memory devices is typically non-volatile,
meaning it can be retained even when the computer is turned off.
3. Portability: Many secondary memory devices are portable, making it easy to transfer data
between computers or devices.
4. Cost-effective: Secondary memory devices are generally more cost-effective than primary
memory.

Disadvantages:

1. Slower access times: Accessing data from secondary memory devices typically takes longer
than accessing data from primary memory.
2. Mechanical failures: Some types of secondary memory devices, such as hard disk drives, are
prone to mechanical failures that can result in data loss.
3. Limited lifespan: Secondary memory devices have a limited lifespan, and can only withstand
a certain number of read and write cycles before they fail.
4. Data corruption: Data stored on secondary memory devices can become corrupted due to
factors such as electromagnetic interference, viruses, or physical damage.
Overall, secondary memory is an essential component of modern computing systems, but it also has its limitations and drawbacks. The choice of a particular secondary memory device depends on the user's specific needs and requirements.
