Memory Management

Internal fragmentation occurs when allocated memory is larger than what a process needs, wasting the remaining memory. External fragmentation is when free memory is scattered in small blocks, making it difficult to allocate large contiguous blocks even when total free memory is sufficient. Memory management involves allocating and deallocating memory resources efficiently to ensure programs can access needed memory and minimize fragmentation issues through proper utilization of main memory.


Internal Fragmentation: This occurs when allocated memory is larger than what the process needs. It happens when a block of memory is allocated to a process, but the process doesn't use all of the allocated memory. The remaining memory within that block is wasted and cannot be used by other processes.

External fragmentation is a phenomenon in computer memory management where free memory is scattered in small, non-contiguous blocks, making it difficult to allocate large contiguous blocks of memory, even when the total free memory is sufficient.

Memory Management:
Memory management is a crucial aspect of computer systems that involves the allocation
and deallocation of memory resources.

It ensures that a computer's memory is used efficiently and that programs can access the
memory they need.

The task of subdividing the memory among different processes is called memory management.

Why is Memory Management Required?

To allocate and de-allocate memory before and after process execution.
To keep track of the memory space used by each process.
To minimize fragmentation issues.
To make proper utilization of main memory.
To maintain data integrity while a process executes.

Memory Management Requirements


Relocation:

When your computer is running multiple programs, they share the same memory space. However, if a program gets moved out of memory temporarily (swapped to disk, for example), it might not return to the exact same spot when it comes back.

When a program is swapped out to disk, it is not always possible for it to occupy its previous memory location when it is swapped back into main memory, since that location may now be occupied by another process. We may need to relocate the process to a different area of memory; thus a program may be moved within main memory due to swapping.

After the program is loaded into main memory, the processor and the operating system must be able to translate its logical addresses into physical addresses.

Protection:

There is always a danger, when multiple programs are resident at the same time, that one program may write to the address space of another program. So every process must be protected against unwanted interference, whether accidental or intentional, from other processes.

The location of a program in main memory cannot be predicted, so it is impossible to check absolute addresses at compile time to assure protection. Moreover, most programming languages allow addresses to be calculated dynamically at run time, so the protection check must also be performed at run time.
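
A minimal sketch in Python tying the relocation and protection requirements together; the base, limit, and address values are made up for illustration. The base register relocates a logical address to a physical one, and the limit register provides the run-time protection check.

# Sketch (not from the text): dynamic relocation with a base register
# plus a limit-register protection check. All values are hypothetical.

class ProtectionFault(Exception):
    """Raised when a logical address falls outside the process's space."""

def translate(logical_address, base, limit):
    # Protection: the logical address must lie within [0, limit).
    if logical_address < 0 or logical_address >= limit:
        raise ProtectionFault(f"address {logical_address} outside limit {limit}")
    # Relocation: physical address = base + offset, so the process can be
    # loaded (or swapped back in) anywhere in main memory.
    return base + logical_address

# Example: a process loaded at physical address 4000 with a 1000-byte space.
print(translate(250, base=4000, limit=1000))   # -> 4250
# translate(1500, base=4000, limit=1000) would raise ProtectionFault.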

Sharing:

The protection mechanism must be flexible enough to allow several processes to access the same portion of main memory.

Allowing several processes to access the same copy of a program, rather than giving each its own separate copy, has an obvious advantage.

Logical organization:

Programs are written in modules.

Modules can be written and compiled independently.

Different modules can be given different degrees of protection (read-only, execute-only).

Modules can be shared among processes.

Physical Organization:

The structure of computer memory has two levels, referred to as main memory and secondary memory. Main memory is relatively fast and costly compared to secondary memory, and it is volatile. Secondary memory is therefore provided for long-term storage of data, while main memory holds the programs currently in use. The flow of information between main memory and secondary memory is a major system concern, and it is impractical to leave this flow to the programmer for two reasons:

The programmer may have to engage in a practice known as overlaying when the main memory available for a program and its data is insufficient. Overlaying allows different modules to be assigned to the same region of memory, but it is time-consuming for the programmer.
In a multiprogramming environment, the programmer does not know at the time of coding how much space will be available or where that space will be located inside memory.

Memory partitioning:
Memory partitioning is a technique used in computer systems to manage and allocate
memory efficiently.

It involves dividing the available memory into distinct sections, or partitions, to organize
and control the use of memory by different processes or programs.

Memory partitioning is commonly used in operating systems to facilitate multitasking and the execution of multiple processes concurrently.

There are different types of memory partitioning, including fixed partitioning, dynamic
partitioning, and paging.

1)Fixed Partitioning
Fixed partitioning is a memory management technique used in operating systems to divide the available
memory into fixed-sized partitions. Each partition is allocated to a specific program or process, and these
partitions do not change in size during the execution of the programs. There are two common alternatives
in fixed partitioning: equal-size fixed partitions and unequal-size fixed partitions.
Equal-Size Fixed Partitions:

In this approach, the total memory is divided into fixed-sized partitions, and each partition is of
the same size.
The advantage is simplicity and ease of management. Allocating and deallocating partitions
becomes straightforward.
However, it may lead to inefficient memory utilization, especially when processes have different memory requirements: a small process wastes the unused part of its partition, and a process larger than the partition size cannot fit at all.

Unequal-Size Fixed Partitions:

In this approach, the total memory is divided into fixed-sized partitions, but these partitions can
have different sizes.
This allows for better utilization of memory, as partitions can be tailored to the specific needs of
processes. Smaller partitions can accommodate smaller processes, and larger partitions can
accommodate larger processes.
Managing unequal-size partitions can be more complex than equal-size partitions, as allocation
and deallocation require more sophisticated algorithms to find appropriate-sized holes in memory.

Each approach has its advantages and disadvantages, and the choice between them depends on the
characteristics of the applications running on the system and the desired trade-offs between simplicity
and memory efficiency. Fixed partitioning, in general, has limitations, such as fragmentation issues, where
the available memory may become fragmented over time, making it challenging to allocate contiguous
blocks of memory for larger processes. Modern operating systems often use dynamic memory
management techniques, such as paging or segmentation, to address some of these issues.
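
As an illustration of the trade-off described above, here is a small sketch in Python that simulates equal-size fixed partitioning with hypothetical sizes; it reports the internal fragmentation left by each allocation and shows that an oversized process cannot be placed at all.

# Sketch: equal-size fixed partitioning with hypothetical sizes (in KB).
PARTITION_SIZE = 8
NUM_PARTITIONS = 4

partitions = [None] * NUM_PARTITIONS   # None means the partition is free

def allocate(process_name, size):
    if size > PARTITION_SIZE:
        return f"{process_name} ({size} KB) does not fit in any partition"
    for i, occupant in enumerate(partitions):
        if occupant is None:
            partitions[i] = process_name
            wasted = PARTITION_SIZE - size      # internal fragmentation
            return f"{process_name} -> partition {i}, {wasted} KB wasted"
    return f"{process_name} must wait: no free partition"

for name, size in [("P1", 3), ("P2", 8), ("P3", 10), ("P4", 5)]:
    print(allocate(name, size))
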
Advantages of Fixed Partitioning –

Easy to implement: The algorithms needed to implement Fixed Partitioning are straightforward and easy to implement.
No external fragmentation: Fixed Partitioning eliminates the problem of external
fragmentation.
Suitable for systems with a fixed number of processes: Fixed Partitioning is well-suited for systems with a fixed number of processes and known memory requirements.
Prevents processes from interfering with each other: Fixed Partitioning ensures
that processes do not interfere with each other’s memory space.
Efficient use of memory: Fixed Partitioning ensures that memory is used efficiently
by allocating it to fixed-sized partitions.
Good for batch processing: Fixed Partitioning is ideal for batch processing
environments where the number of processes is fixed.

Disadvantages of Fixed Partitioning –

1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an
entire partition. This can cause internal fragmentation.

2. External Fragmentation:
The total unused space across the various partitions cannot be used to load a process, even though enough space is available in total, because it is not in contiguous form (spanning across partitions is not allowed).

3. Limit on process size:

A process larger than the largest partition in main memory cannot be accommodated, because the partition size cannot be varied to match the size of the incoming process. For example, a 32 MB process cannot be loaded if every partition is smaller than 32 MB.

4. Limitation on Degree of Multiprogramming:

Partitions in main memory are created before execution or at system configuration time, so main memory is divided into a fixed number of partitions. If there are n partitions in RAM and m processes, the condition m ≤ n must be fulfilled: having more processes than partitions is not possible in Fixed Partitioning.

2)Dynamic partitioning:
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this
technique, the partition size is not declared initially. It is declared at the time of process
loading.

The first partition is reserved for the operating system. The remaining space is divided into
parts. The size of each partition will be equal to the size of the process. The partition size
varies according to the need of the process so that the internal fragmentation can be
avoided.

Advantages of Variable(Dynamic) Partitioning


No Internal Fragmentation: In variable partitioning, space in main memory is allocated strictly according to the need of the process, so there is no internal fragmentation and no unused space is left inside a partition.
No restriction on the Degree of Multiprogramming: More processes can be accommodated due to the absence of internal fragmentation; processes can be loaded as long as free memory is available.
No Limitation on the Size of the Process: In fixed partitioning, a process larger than the largest partition could not be loaded, and the process cannot be split because spanning is invalid in the contiguous allocation technique. In variable partitioning there is no such restriction, since the partition size is decided according to the process size.
Disadvantages of Variable(Dynamic) Partitioning
Difficult Implementation: Implementing variable partitioning is more difficult than fixed partitioning, as it involves allocating memory at run-time rather than at system configuration time.
External Fragmentation: There will be external fragmentation despite the absence of internal fragmentation. For example, suppose process P1 (2 MB) and process P3 (1 MB) complete their execution, leaving two separate holes of 2 MB and 1 MB. If a process P5 of size 3 MB now arrives, the space cannot be allocated to it, because spanning is not allowed in contiguous allocation: a process must be present in one continuous piece of main memory to be executed. Hence the result is external fragmentation.

Thus P5 of size 3 MB cannot be accommodated, despite the required space being available, because no spanning is allowed in contiguous allocation.

Overcoming external fragmentation:
We must use compaction to shift processes so that they are contiguous and all free memory is collected in one block.

compaction :
Compaction is a technique to collect all the free memory present in the form of
fragments into one large chunk of free memory, which can be used to run other
processes.

It does that by moving all the processes towards one end of the memory and all the
available free space towards the other end of the memory so that it becomes
contiguous.

It is not always easy to do compaction. Compaction can be done only when relocation is dynamic and performed at execution time; it cannot be done when relocation is static and performed at load time or assembly time.

Before Compaction
Before compaction, main memory has free space scattered between occupied spaces. This condition is known as external fragmentation. Because each individual gap is small, large processes cannot be loaded into them.
After Compaction
After compaction, all the occupied space has been moved to one end and the free space collected at the other end. This makes the free space contiguous and removes external fragmentation, so processes with large memory requirements can now be loaded into main memory.
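
A toy sketch of the idea in Python, using a hypothetical memory map: allocated blocks are slid toward one end of memory and the remaining free space is merged into a single contiguous block at the other end.

# Sketch: compaction over a hypothetical memory map. Each allocated block
# is (process_name, size); free space is tracked only by its total amount.

MEMORY_SIZE = 16  # KB, hypothetical

def compact(blocks):
    """Slide all allocated blocks to the start of memory and return
    the new layout plus one contiguous free block at the end."""
    layout, address = [], 0
    for name, size in blocks:
        layout.append((name, address, size))   # block now starts at 'address'
        address += size
    free = MEMORY_SIZE - address               # single contiguous hole
    return layout, free

# Suppose P1 and P3 have finished, leaving only these occupants behind.
allocated = [("OS", 4), ("P2", 3), ("P4", 2)]
layout, free = compact(allocated)
for name, start, size in layout:
    print(f"{name}: starts at {start} KB, size {size} KB")
print(f"free (contiguous): {free} KB")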

Advantages of Compaction
Reduces external fragmentation.
Makes memory usage efficient.
Memory becomes contiguous.
Since memory becomes contiguous, more processes can be loaded into memory, thereby increasing the scalability of the OS.
Fragmentation of the file system can be temporarily removed by compaction.
Improves memory utilization, as there are fewer gaps between memory blocks.

Disadvantages of Compaction
System efficiency is reduced and latency is increased.
A huge amount of time is spent performing compaction.
The CPU sits idle for a long time.
It is not always easy to perform compaction.
It may cause deadlocks, since it disturbs the memory allocation process.

Dynamic Partitioning Placement Algorithms


1) First Fit Algorithm

The First Fit algorithm scans the linked list of holes and, as soon as it finds the first hole big enough to store the process, it stops scanning and loads the process into that hole. This procedure produces two partitions: one holds the process, while the other remains a hole.

The First Fit algorithm maintains the linked list in increasing order of starting address. It is the simplest of the placement algorithms to implement and tends to leave bigger holes than the other algorithms (a sketch of all three placement policies follows the Best-Fit description below).

2) Worst Fit

Allocates a process to the largest sufficient partition among the free partitions available in main memory.

If a large process arrives at a later stage, memory may no longer have a hole big enough to accommodate it.

Advantages of Worst-Fit Allocation:

Since this policy chooses the largest hole/partition, the leftover space (internal fragmentation) after allocation is quite big, so other small processes can also be placed in that leftover partition.

Disadvantages of Worst-Fit Allocation:

It is a slow policy, because it traverses all the partitions in memory and then selects the largest one, which is time-consuming.

3) Best-Fit Algorithm

Chooses the block that is closest in size to the request.

Worst performer overall.

Since the smallest adequate block is found for the process, the smallest amount of fragmentation is left.

Memory compaction must therefore be done more often.
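
The sketch below (Python, with a hypothetical free list of holes) contrasts how the three placement policies pick a hole; a real allocator would also split the chosen hole and merge neighbouring free blocks, which is omitted here.

# Sketch: choosing a hole under first fit, best fit and worst fit.
# 'holes' is a hypothetical free list of (start_address, size) pairs
# kept in increasing order of starting address.

def first_fit(holes, request):
    for i, (_start, size) in enumerate(holes):
        if size >= request:
            return i          # first hole big enough
    return None

def best_fit(holes, request):
    candidates = [(size, i) for i, (_start, size) in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(holes, request):
    candidates = [(size, i) for i, (_start, size) in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [(100, 5), (300, 12), (600, 7), (900, 20)]       # sizes in KB
for policy in (first_fit, best_fit, worst_fit):
    index = policy(holes, request=6)
    print(policy.__name__, "->", holes[index])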


Paging:
•Main memory is partitioned into equal, fixed-size chunks that are relatively small, called frames.

•Each process is divided into small fixed-size chunks of the same size, called pages.

•At a given point in time, some frames are in use and some are free.

•Suppose process A, stored on disk, consists of four pages.

•When the process is to be loaded, the OS finds four free frames and loads A's pages into them.

•These frames need not be contiguous.

•The OS maintains a page table for each process.

•The page table records the frame location of each page of the process.

Pages of the process are brought into main memory only when they are required; otherwise they reside in secondary storage.

Different operating systems define different frame sizes, but within a system all frames must be of equal size. Since pages are mapped to frames in paging, the page size needs to be the same as the frame size.
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory is therefore divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB so that one page can be stored in one frame.

Initially all the frames are empty, so the pages of these processes are stored contiguously.

Now suppose that P2 and P4 move to the waiting state after some time. Eight frames become empty, and other pages can be loaded into that space. A process P5 of size 8 KB (8 pages) is waiting in the ready queue.

We now have 8 non-contiguous free frames in memory, and paging provides the flexibility of storing a process in different places. Therefore, we can load the pages of process P5 into the frames previously used by P2 and P4.
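
A small sketch in Python of the address translation that paging implies, using a hypothetical page table and the 1 KB frame size from the example above: the logical address is split into a page number and an offset, and the page table maps the page to a frame.

# Sketch: paging address translation with a 1 KB page/frame size,
# matching the example above. The page table below is hypothetical.

PAGE_SIZE = 1024  # 1 KB

# Page table for one process: page number -> frame number.
# The frames need not be contiguous.
page_table = {0: 5, 1: 9, 2: 3, 3: 14}

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]   # a missing entry would mean a page fault
    return frame_number * PAGE_SIZE + offset

# Logical address 2100 lies in page 2 at offset 52, which maps to frame 3.
print(translate(2100))   # -> 3 * 1024 + 52 = 3124
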
Segmentation:

File management:
The file system is a crucial component of an operating system (OS) responsible for managing and organizing data on secondary storage devices, such as hard drives, SSDs, and other non-volatile storage media. It provides an abstraction layer between the user and the physical storage, allowing users and applications to interact with stored data in a more user-friendly and organized manner.

A file system is a software component that manages the organization, storage, and retrieval of
data on a computer's secondary storage devices. Its role in an operating system is to provide
a structured way to store, access, and manage files.

Here are the fundamental functions that a file system enables:

Create: This function allows users or applications to create new files. When a new file is
created, the file system allocates space on the storage device to store the file's data and
metadata (information about the file, such as its name, size, and location).
Delete: Users can delete files when they are no longer needed. The file system marks the
space occupied by the file as available for future use. However, the actual data may not be
immediately erased; it depends on the file system and its policies on data deletion.

Open: Before a file can be read from or written to, it must be opened. The open function
allows a user or application to establish a connection to a file, indicating an intention to
perform operations on it.

Close: After a file has been opened and the necessary operations (reading or writing) have
been performed, the file needs to be closed. Closing a file frees up system resources
associated with that file and ensures that changes are properly saved.

Read: This function allows users or applications to retrieve data from a file. The file system
manages the reading process, ensuring that the correct data is provided to the requesting
entity.

Write: Users or applications use the write function to store or modify data in a file. The file
system ensures that the data is correctly written to the designated location on the storage
device.
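
The sketch below walks through these functions using Python's standard file API; the file name is made up, and each call corresponds to one of the operations described above.

# Sketch: the basic file operations using Python's standard library.
# "notes.txt" is just a hypothetical file name.
import os

# Create / Open / Write: "w" creates the file if it does not exist.
f = open("notes.txt", "w")
f.write("memory management notes\n")
f.close()                      # Close: flush changes and release resources

# Open / Read: retrieve the stored data.
f = open("notes.txt", "r")
print(f.read())
f.close()

# Delete: remove the directory entry; the space becomes reusable.
os.remove("notes.txt")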

File organization:
File organization refers to the way in which data records are structured and stored within a
file. The choice of file organization is a crucial decision in database and file management
systems, as it directly impacts the efficiency of data access, updates, storage utilization,
maintenance efforts, and overall system reliability.

Important criteria while choosing a file organization:

•Short access time

•Ease of update

•Economy of storage

•Simple maintenance

•Reliability

File Organization Types

File Directories: A file system may consist of millions of files, and in that situation it is very hard to manage them individually. To manage these files, they are grouped, and one group is loaded into one partition; each such partition is called a directory. A directory structure provides a mechanism for organizing the many files in the file system.

Operations on directories: the various operations that can be performed on a directory are:

1) Search for a file: we need to search the directory structure to find the entry for a particular file. Since files have symbolic names in a user-readable form, and similar names may indicate a relationship between files, we may also want to be able to find all files whose names match a particular pattern.

2) Create a file: new files are created and added to the directory.

3) Delete a file: when we no longer need a particular file, it is removed from the directory.

4) List a directory: we may want to list all the files in a particular directory, together with the contents of the directory entry for each file in the list.

5) Rename a file: whenever we need to change the name of a file, we can rename it.

6) Traverse the file system: in a directory structure, we may wish to access every directory, and every file within every directory.
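
A short sketch of these operations using Python's os and glob modules; the directory and file names are hypothetical.

# Sketch: common directory operations via Python's os/glob modules.
# "demo_dir" and the file names are hypothetical.
import os, glob

os.makedirs("demo_dir", exist_ok=True)                  # create a directory

open(os.path.join("demo_dir", "a.txt"), "w").close()    # create files
open(os.path.join("demo_dir", "b.log"), "w").close()

print(glob.glob("demo_dir/*.txt"))                      # search by name pattern
print(os.listdir("demo_dir"))                           # list the directory

os.rename("demo_dir/a.txt", "demo_dir/c.txt")           # rename a file

for root, dirs, files in os.walk("demo_dir"):           # traverse the file system
    print(root, dirs, files)

os.remove("demo_dir/b.log")                             # delete files
os.remove("demo_dir/c.txt")
os.rmdir("demo_dir")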
