
UNIT - III

MEMORY
MANAGEMENT
Memory Management Techniques:

• The term Memory can be defined as a collection of data in a specific format.


• It is used to store instructions and process data.
• Programs, along with the information they access, must be in main memory during execution.
• The CPU fetches instructions from memory according to the value of the program
counter.
• To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important.
• The set of all logical addresses generated by a program is referred to as a logical address
space.
• The set of all physical addresses corresponding to these logical addresses is referred to as
a physical address space.
Memory Management Techniques contd…

The memory management techniques can be classified into the following main categories:
Contiguous memory management: In contiguous memory management, each program occupies a single contiguous block of storage locations, i.e., a set of memory locations with consecutive addresses.
Non-contiguous memory management: In this, the program is divided into blocks of fixed size or variable size and loaded into different portions of memory. That means program blocks are not stored adjacent to each other.
Memory Management Techniques contd…

Single contiguous memory management schemes:


The single contiguous memory management scheme is the simplest memory management scheme, used in the earliest generation of computer systems. In this scheme, the main memory is divided into two contiguous areas or partitions. The operating system resides permanently in one partition, generally at the lower memory, and the user process is loaded into the other partition.

Multiple Partitioning:
Multiprogramming allows more than one program to run concurrently. To switch between two processes, the operating system needs to have both processes loaded into main memory. The operating system therefore divides the available main memory into multiple parts so that multiple processes can reside in main memory at the same time.
The multiple partitioning schemes can be of two types:
Fixed Partitioning
Dynamic Partitioning
Memory Management Techniques contd…

Fixed Partitioning
In a fixed partition memory management scheme, also called static partitioning, the main memory is divided into several fixed-sized partitions. These partitions can be of the same size or of different sizes. Each partition can hold a single process.

Dynamic Partitioning
Dynamic partitioning was designed to overcome the problems of the fixed partitioning scheme. In a dynamic partitioning scheme, each process occupies only as much memory as it requires when loaded for processing. Requesting processes are allocated memory until the entire physical memory is exhausted or the remaining space is insufficient to hold the requesting process.
Swapping

Swapping in operating systems is a memory management technique in which we swap processes from secondary memory to main memory or from main memory to secondary memory. The objective of swapping is to increase the degree of multiprogramming and to improve main memory utilization.

There are two steps in the process of swapping in the operating system:
Swap-in: the process of bringing a process from secondary storage (hard disk) into main memory (RAM).
Swap-out: taking a process out of main memory and putting it into secondary storage.
Swapping contd…

Advantages of Swapping in Operating System


• Swapping improves the degree of multi-programming.
• It provides advantages of Virtual Memory for the user.
• Swapping reduces the average waiting time for the process because it allows multiple processes to execute
simultaneously.
• It helps in better memory management and efficient utilization of RAM.

Disadvantages of Swapping in Operating System


• The swapping algorithm must be perfect; otherwise, the number of Page Faults will increase, and
performance will decrease.
• Inefficiency will occur when there are common/shared resources between many processes.
• The user will lose information if there is heavy swapping and the computer loses its power.
Contiguous Memory Allocation

• Contiguous memory allocation is achieved by dividing the memory into fixed-sized partitions.
• The memory can be divided either into fixed-sized partitions or into variable-sized partitions in order to allocate contiguous space to user processes.
Contiguous Memory Allocation contd…
Fixed-size Partition Scheme
• This technique is also known as Static partitioning.
• In this scheme, the system divides the memory into fixed-size partitions; the partitions may or may not be of the same size.
• In this partition scheme, each partition may contain exactly one process.
• Whenever any process terminates, the partition becomes available for another process.
• Let's take an example of the fixed-size partitioning scheme: we will divide a memory of 15 KB into fixed-size partitions:
Contiguous Memory Allocation contd…

Fixed-size Partition Scheme:


If there is some wastage inside a partition, it is termed internal fragmentation.
Contiguous Memory Allocation contd…
Advantages of Fixed-size Partition Scheme:
• This scheme is simple and is easy to implement
• It supports multiprogramming as multiple processes can be stored inside the main memory.
• Management is easy using this scheme
Disadvantages of Fixed-size Partition Scheme:
1. Internal Fragmentation
Suppose the size of the process is smaller than the size of the partition; in that case, part of the partition is wasted and remains unused. This wastage inside the memory is generally termed internal fragmentation (a small numeric sketch follows this list).
2. Limitation on the size of the process
If the size of a process is larger than the maximum-sized partition, that process cannot be loaded into memory. The size of a process cannot exceed the size of the largest partition.
3. External Fragmentation
The total unused space across the various partitions cannot be used to load further processes: space is available, but it is not contiguous.
4. Degree of multiprogramming is less
In this partition scheme the size of a partition cannot change according to the size of the process, so the degree of multiprogramming is low and fixed.
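A small numeric sketch of points 1 and 3, using illustrative partition and process sizes: the space left over inside each partition is internal fragmentation, and although the leftovers add up to several KB, they are scattered inside occupied partitions and cannot hold another process.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical fixed partitions and the processes loaded into them (sizes in KB). */
    int partition[] = { 4, 8, 8, 12 };
    int process[]   = { 3, 5, 7, 10 };   /* each process fits inside its partition */
    int n = 4, wasted = 0;

    for (int i = 0; i < n; i++)
        wasted += partition[i] - process[i];   /* leftover inside each partition */

    /* The leftover KB is internal fragmentation: free in total, but scattered
     * inside occupied partitions, so no further process can use it. */
    printf("internal fragmentation: %d KB\n", wasted);
    return 0;
}
```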
Contiguous Memory Allocation contd…
Variable-size Partition Scheme:
• This scheme is also known as dynamic partitioning and came into existence to overcome the drawback of static partitioning, i.e., internal fragmentation.
• In this partitioning scheme, allocation is done dynamically.
• As the partition size varies according to the need of the process, there is no internal fragmentation in this partition scheme.
Contiguous Memory Allocation contd…
Variable-size Partition Scheme:
Advantages of Variable-size Partition Scheme
• No Internal Fragmentation: space in main memory is allocated strictly according to the requirement of the process, so there is no chance of internal fragmentation and no unused space is left in a partition.
• Degree of Multiprogramming is Dynamic: because there is no internal fragmentation, no memory is left unused, and more processes can be loaded into memory at the same time.
• No Limitation on the Size of Process: since a partition is allocated to a process dynamically, the size of the process is not restricted, because the partition size is decided according to the process size.
Contiguous Memory Allocation contd…

Variable-size Partition Scheme:


Disadvantages of Variable-size Partition Scheme
External Fragmentation: In the diagram, process P1 (3 MB) and process P3 (8 MB) have completed their execution, leaving two holes of 3 MB and 8 MB. Suppose a process P4 of size 15 MB arrives. The empty space in memory cannot be allocated to it, since no single hole is large enough and spanning across holes is not allowed in contiguous allocation: a process must occupy one contiguous block of main memory in order to be executed. This results in external fragmentation.
Difficult Implementation: The implementation of this partition scheme is more difficult than that of the fixed partitioning scheme, as it involves allocating memory at run time rather than during system configuration. The OS keeps track of all the partitions, but here allocation and deallocation are done very frequently and the partition sizes change every time, so it is difficult for the operating system to manage everything.
Segmentation

• A process is divided into segments. The chunks that a program is divided into, which are not necessarily all of the same size, are called segments.
• Segmentation gives the user's view of the process, which paging does not give.
• There are two types of segmentation:
• Virtual memory segmentation – Each process is divided into a number of segments, not all of which are resident
at any one point in time.
• Simple segmentation – Each process is divided into a number of segments, all of which are loaded into memory at
run time, though not necessarily contiguously
• A table called the Segment Table stores information about all such segments. The Segment Table maps a two-dimensional logical address into a one-dimensional physical address.
• Each table entry has:
• Base Address: It contains the starting physical address where the segments reside in memory.
• Limit: It specifies the length of the segment.
Segmentation contd…
Segmentation contd…

Translation of a two-dimensional logical address to a one-dimensional physical address.


Segmentation contd…

Address generated by the CPU is divided into:


Segment number (s): the number of bits required to represent the segment; it is used as an index into the segment table.
Segment offset (d): the number of bits required to represent the size of the segment; the offset must be smaller than the segment's limit, as sketched below.
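A minimal sketch of this translation, assuming a hypothetical segment table; a real MMU performs the same base/limit check in hardware.

```c
#include <stdio.h>
#include <stdlib.h>

/* One segment-table entry: base (starting physical address) and limit (segment length). */
struct segment_entry {
    unsigned int base;
    unsigned int limit;
};

/* Translate a two-dimensional logical address (s, d) into a one-dimensional
 * physical address. The offset must be smaller than the segment's limit;
 * otherwise the reference is illegal and the hardware would trap. */
unsigned int translate(const struct segment_entry *table, int s, unsigned int d)
{
    if (d >= table[s].limit) {
        fprintf(stderr, "trap: offset %u outside segment %d\n", d, s);
        exit(EXIT_FAILURE);
    }
    return table[s].base + d;
}

int main(void)
{
    /* Hypothetical segment table: {base, limit} pairs. */
    struct segment_entry table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };

    printf("%u\n", translate(table, 2, 53));   /* 4300 + 53 = 4353 */
    printf("%u\n", translate(table, 1, 500));  /* traps: 500 >= limit 400 */
    return 0;
}
```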
Advantages of Segmentation –
• No Internal fragmentation.
• Segment Table consumes less space in comparison to Page table in paging.
• As a complete module is loaded all at once, segmentation improves CPU utilization.
• Segmentation is close to the user's view of memory: users can divide programs into modules via segmentation, and these modules are nothing more than the separate pieces of a program's code.
• The user specifies the segment size, whereas in paging, the hardware determines the page size.
• Segmentation is a method that can be used to segregate data from security operations.
Segmentation contd…

Disadvantage of Segmentation –
• As processes are loaded and removed from the memory, the free memory space is broken into little
pieces, causing External fragmentation.
• Overhead is associated with keeping a segment table for each activity.
• Due to the need for two memory accesses, one for the segment table and the other for main memory,
access time to retrieve the instruction increases.
Paging

• In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
• The main idea behind paging is to divide each process into pages. Main memory is likewise divided into frames.
• One page of a process is stored in one of the frames of memory.
• Pages can be stored at different locations of the memory, but the priority is always to find contiguous frames or holes.
• Different operating systems define different frame sizes.
• The sizes of all frames must be equal.
• Because pages are mapped onto frames in paging, the page size needs to be the same as the frame size.

Paging contd…
Paging contd…

• Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will therefore be divided into a collection of 16 frames of 1 KB each.
• There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each so that one page can be stored in one frame.
• Initially, all the frames are empty, so the pages of the processes are stored contiguously.
• Frames, pages and the mapping between the two are shown in the image below.
Paging contd…
Paging contd…

• Let us consider that, P2 and P4 are moved to waiting state after some time.
• Now, 8 frames become empty and therefore other pages can be loaded in that empty place.
• The process P5 of size 8 KB (8 pages) is waiting inside the ready queue.
• Given that we have 8 non-contiguous frames available in memory, and paging provides the flexibility of storing a process at different places, we can load the pages of process P5 in place of P2 and P4.
• When a page is to be accessed by the CPU using its logical address, the operating system needs to obtain the corresponding physical address in order to access that page physically.
• The logical address has two parts:
• Page number
• Offset
• The memory management unit (MMU) of the OS converts the page number into a frame number, as sketched below.
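A minimal sketch of this conversion, assuming a hypothetical page table and the 1 KB page size used in the example above; in a real system the MMU performs the equivalent lookup in hardware.

```c
#include <stdio.h>

#define PAGE_SIZE 1024   /* 1 KB pages, matching the example above */

/* Hypothetical page table: page_table[p] holds the frame number for page p. */
static const unsigned int page_table[] = { 5, 6, 7, 8 };

/* Split the logical address into (page number, offset), look up the frame
 * number in the page table, and recombine it with the offset. */
unsigned int translate(unsigned int logical)
{
    unsigned int page   = logical / PAGE_SIZE;
    unsigned int offset = logical % PAGE_SIZE;
    unsigned int frame  = page_table[page];
    return frame * PAGE_SIZE + offset;
}

int main(void)
{
    unsigned int logical = 2 * PAGE_SIZE + 100;   /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```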
Paging contd…
Virtual Memory

• A computer can address more memory than the amount physically installed on the system. This extra
memory is actually called virtual memory.
• The main advantage is programs can be larger than physical memory.
• It allows us to have memory protection, because each virtual address is translated to a physical address.
• Entire program is not required to be loaded fully in main memory.
• Fewer I/O operations would be needed to load or swap each user program into memory.
• In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware.
• The MMU's job is to translate virtual addresses into physical addresses.
• Virtual memory is commonly implemented by demand paging and demand segmentation.
• A basic example is given below −
Virtual Memory contd…
Demand Paging
A demand paging system is quite similar to a paging system with swapping where processes reside in
secondary memory and pages are loaded only on demand, not in advance.
Demand paging contd…

• While executing a program, if the CPU references a page that is not available in main memory because it was swapped out a little earlier, the processor treats this invalid memory reference as a page fault.
• This transfers control to the operating system, which demands the page back into memory.
• Advantages
• Large virtual memory.
• More efficient use of memory.
• There is no limit on degree of multiprogramming.
Copy On Write(COW)

• Copy on Write or simply COW is a resource management technique.


• In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process, called the child process.
• The idea behind copy-on-write is that when a parent process creates a child process, both processes initially share the same pages in memory, and these shared pages are marked as copy-on-write. If either process tries to modify a shared page, only then is a copy of that page created, and the modification is made on the copy by that process, so the other process is not affected.
• Suppose there is a process P that creates a new process Q, and then process P modifies page 3.
• The figures below show what happens before and after process P modifies page 3.
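A small C sketch of the fork() behaviour described above. Copy-on-write itself is invisible to the program: parent and child initially share pages, and the child's write below simply causes the kernel to give it a private copy, so the parent still sees the original data. The buffer contents are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* A page-sized buffer; after fork(), parent and child share it copy-on-write. */
    static char page[4096] = "original contents";

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (pid == 0) {
        /* Child: the first write forces the kernel to copy this page privately. */
        strcpy(page, "modified by child");
        printf("child  sees: %s\n", page);
        _exit(0);
    }

    /* Parent: waits for the child, then still sees its own unmodified page. */
    wait(NULL);
    printf("parent sees: %s\n", page);
    return 0;
}
```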
Copy On Write(COW) contd…
Page Replacement
• If a process requests a new page and there are no free frames, the operating system needs to decide which page to replace.
• The operating system must use any page replacement algorithm in order to select the victim frame.
• The Operating system must then write the victim frame to the disk then read the desired page into the
frame and then update the page tables.
• Page replacement prevents the over-allocation of the memory by modifying the page-fault service routine.
• To reduce the overhead of page replacement a modify bit (dirty bit) is used in order to indicate whether
each page is modified.
• In Virtual Memory Management, Page Replacement Algorithms play an important role.
• The main objective of all page replacement policies is to minimize the number of page faults.
Page Replacement contd…
• First of all, find the location of the desired page on the disk.
• Find a free frame: (a) if there is a free frame, use it; (b) if there is no free frame, use a page-replacement algorithm to select a victim frame (a FIFO sketch of this step follows the list); (c) write the victim frame to the disk and then update the page table and frame table accordingly.
• After that, read the desired page into the newly freed frame and then update the page and frame tables.
• Restart the process.
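A minimal sketch of one such policy, FIFO replacement, counting page faults for an illustrative reference string with 3 frames; the victim is always the page that has been in memory the longest. Real systems combine this with the modify (dirty) bit mentioned earlier to avoid writing clean victims back to disk.

```c
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    /* Illustrative reference string of page numbers. */
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 };
    int n = sizeof(refs) / sizeof(refs[0]);

    int frames[FRAMES];
    int next = 0;       /* index of the oldest page; FIFO always evicts it */
    int faults = 0;

    for (int i = 0; i < FRAMES; i++)
        frames[i] = -1;                     /* -1 marks a free frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }

        if (!hit) {                         /* page fault: load page, evicting the oldest */
            frames[next] = refs[i];
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```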
Allocation of Frames
• The main memory of the system is divided into frames.
• The pages of a process are stored in these frames, and once they have been loaded into frames, the CPU can run the process.
• The operating system must therefore set aside enough frames for each process.
• To do so, the operating system uses various frame allocation algorithms.
There are mainly five ways of frame allocation algorithms in the OS. These are as follows:
• Equal Frame Allocation
• Proportional Frame Allocation
• Priority Frame Allocation
• Global Replacement Allocation
• Local Replacement Allocation
Allocation of Frames contd…
Equal Frame Allocation:
In equal frame allocation, the available frames are divided equally among the processes in the OS.
Example:
If the system has 30 frames and 7 processes, each process will get 4 frames. The 2 frames that are not assigned to
any system process may be used as a free-frame buffer pool in the system.
Disadvantage:
In a system with processes of varying sizes, assigning an equal number of frames to each process makes little sense: a small process wastes frames it does not need, while a large process may not get enough.
Proportional Frame Allocation:
• The proportional frame allocation technique assigns frames based on the size needed for execution and the total
number of frames in memory.
• The allocated frames for a process pi of size si are ai = (si/S)*m, in which S represents the total of all process
sizes, and m represents the number of frames in the system.
Example:
If s1 = 10, s2 = 30 and m = 40, then a1 = (10/(10+30)) * 40 = 10 frames; similarly, a2 = (30/40) * 40 = 30 frames (computed in the sketch below).
Disadvantage:
• The only drawback of this algorithm is that it doesn't allocate frames based on priority.
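A quick sketch of the ai = (si/S)*m calculation, using the process sizes and frame count from the example above; the integer division mirrors the fact that frame counts are whole numbers.

```c
#include <stdio.h>

int main(void)
{
    int size[] = { 10, 30 };   /* s1, s2: process sizes from the example */
    int m = 40;                /* m: total frames in the system */
    int n = sizeof(size) / sizeof(size[0]);

    int S = 0;                 /* S: sum of all process sizes */
    for (int i = 0; i < n; i++)
        S += size[i];

    for (int i = 0; i < n; i++) {
        int a = size[i] * m / S;          /* a_i = (s_i / S) * m, truncated */
        printf("P%d gets %d frames\n", i + 1, a);
    }
    return 0;
}
```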
Allocation of Frames contd…
Priority Frame Allocation:
Priority frame allocation assigns frames based on the priority of the processes and the number of frames they require. If a process has a high priority and requires more frames, that many frames are allocated to it; after that, lower-priority processes are allocated their frames.
Global Replacement Allocation:
When a process requires a page that isn't currently in memory, it may bring it in and select a frame from the set of all frames, even if another process is already using that frame. In other words, one process may take a frame from another.
Local Replacement Allocation:
When a process requires a page that isn't already in memory, it can bring it in and assign it a frame from its set
of allocated frames.
Thrashing
• Thrashing occurs when page faults and swapping happen very frequently, so the operating system has to spend more time swapping pages than doing useful work.
• This state in the operating system is known as thrashing.
• Because of thrashing, CPU utilization is reduced or becomes negligible.
Thrashing contd…
• The basic concept involved is that if a process is allocated too few frames, then there will be too many and too frequent page faults.
• As a result, no valuable work would be done by the CPU, and the CPU utilization would fall drastically.
• The long-term scheduler would then try to improve the CPU utilization by loading some more processes
into the memory, thereby increasing the degree of multiprogramming.
Allocating kernel memory
Two strategies for managing free memory that is assigned to kernel processes:
1. Buddy system –
• The buddy allocation system is an algorithm in which a larger memory block is divided into smaller parts to satisfy a request. The algorithm aims to give a best fit.
• The two smaller parts of a block are of equal size and are called buddies.
• In the same manner, one of the two buddies is further divided into smaller parts until the request is fulfilled.
• The benefit of this technique is that two buddies can later be combined to form a larger block, according to the memory requests.
• Example – If a request of 25 KB is made, then a block of size 32 KB is allocated (see the sketch below).
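A small sketch of the size rounding behind the 25 KB → 32 KB example: a binary buddy allocator satisfies a request with the smallest power-of-two block that fits. Splitting blocks into buddies and coalescing them on free are omitted here.

```c
#include <stdio.h>

/* Round a request (in KB) up to the next power of two: the block size a
 * binary buddy allocator would actually hand out. */
unsigned int buddy_block_size(unsigned int request_kb)
{
    unsigned int size = 1;
    while (size < request_kb)
        size <<= 1;            /* keep doubling until the request fits */
    return size;
}

int main(void)
{
    printf("request 25 KB -> block of %u KB\n", buddy_block_size(25));  /* 32 */
    printf("request 70 KB -> block of %u KB\n", buddy_block_size(70));  /* 128 */
    return 0;
}
```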
Allocating kernel memory contd…
Four Types of Buddy System –
• Binary buddy system
• Fibonacci buddy system
• Weighted buddy system
• Tertiary buddy system
• Coalescing: how quickly adjacent buddies can be combined to form larger segments.
Allocating kernel memory contd…
2. Slab Allocation:
A second strategy for allocating kernel memory is known as slab allocation.
It eliminates fragmentation caused by allocations and deallocations.
This method is used to retain allocated memory that contains a data object of a certain type for
reuse upon subsequent allocations of objects of the same type.
In slab allocation, memory chunks suitable to fit data objects of a certain type or size are preallocated.
The cache does not free the space immediately after use; instead, it keeps track of frequently required data so that whenever a request is made, the data can be served very quickly.
Two terms are required:
Slab – A slab is made up of one or more physically contiguous pages. The slab is the actual container
of data associated with objects of the specific kind of the containing cache.
Cache – Cache represents a small amount of very fast memory. A cache consists of one or more
slabs. There is a single cache for each unique kernel data structure.
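A very simplified sketch of the idea, assuming a single preallocated slab managed through a free list; the object type and structure names are hypothetical, and the real Linux slab allocator is considerably more elaborate.

```c
#include <stddef.h>
#include <stdio.h>

#define OBJECTS_PER_SLAB 8

/* Hypothetical kernel object type served by this cache. */
struct task_info { int pid; int state; };

/* One cache: a single preallocated slab of objects plus a free list over them. */
struct cache {
    struct task_info slab[OBJECTS_PER_SLAB];       /* one contiguous slab */
    struct task_info *free_list[OBJECTS_PER_SLAB]; /* free objects, ready for reuse */
    int free_count;
};

void cache_init(struct cache *c)
{
    c->free_count = OBJECTS_PER_SLAB;
    for (int i = 0; i < OBJECTS_PER_SLAB; i++)
        c->free_list[i] = &c->slab[i];             /* all objects start out free */
}

struct task_info *cache_alloc(struct cache *c)
{
    if (c->free_count == 0)
        return NULL;                               /* a real allocator would add a new slab */
    return c->free_list[--c->free_count];          /* hand out a preallocated object */
}

void cache_free(struct cache *c, struct task_info *obj)
{
    c->free_list[c->free_count++] = obj;           /* keep the object for reuse, do not free it */
}

int main(void)
{
    struct cache c;
    cache_init(&c);

    struct task_info *t = cache_alloc(&c);
    t->pid = 1;
    cache_free(&c, t);                             /* memory stays in the cache */
    printf("free objects: %d\n", c.free_count);
    return 0;
}
```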
Allocating kernel memory contd…

Example –
• A separate cache for a data structure representing process descriptors
• Separate cache for file objects
• Separate cache for semaphores etc.
Allocating kernel memory contd…
In Linux, a slab may be in one of three possible states:
• Full – all objects in the slab are marked as used
• Empty – all objects in the slab are marked as free
• Partial – the slab contains both used and free objects
Benefits of slab allocator –
• No memory is wasted due to fragmentation because each unique kernel data structure has an associated
cache.
• Memory request can be satisfied quickly.
File system
File Concept
• A file is a named collection of related information that is recorded on secondary storage such as
magnetic disks, magnetic tapes and optical disks.
• In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the
files creator and user.
File Structure:
• A File Structure should be according to a required format that the operating system can
understand.
• A file has a certain defined structure according to its type.
• A text file is a sequence of characters organized into lines.
• A source file is a sequence of procedures and functions.
• An object file is a sequence of bytes organized into blocks that are understandable by the
machine.
• UNIX and MS-DOS support a minimal number of file structures.
File system
File Type:
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files.
• Ordinary files: These may have text, databases or executable program.
• Directory files: These files contain list of file names and other information related to these files.
• Special files: These files represent physical device like disks, terminals, printers, networks, tape drive etc.
File Access Methods
File access mechanism refers to the manner in which the records of a file may be accessed. There are several
ways to access files −
• Sequential access
• Direct/Random access
• Indexed sequential access
Sequential access:
• A sequential access is that in which the records are accessed in some sequence, i.e., the information in the
file is processed in order, one record after the other.
• This access method is the most primitive one. Example: Compilers usually access files in this fashion.
Direct/Random access:
• Random access file organization provides direct access to the records.
• Each record has its own address in the file, with the help of which it can be directly accessed for reading or writing.
• The records need not be in any sequence within the file, and they need not be in adjacent locations on the storage medium.
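A short C sketch of direct access, assuming a file of fixed-size records; the record layout and file name are illustrative. Because every record has the same size, record i starts at a known byte offset and can be read without touching the records before it.

```c
#include <stdio.h>

/* Fixed-size record, so record i starts at byte i * sizeof(struct record). */
struct record { int id; char name[28]; };

int main(void)
{
    FILE *fp = fopen("records.dat", "rb");        /* hypothetical data file */
    if (!fp) { perror("fopen"); return 1; }

    struct record r;
    long i = 5;                                   /* jump straight to record 5 */

    fseek(fp, i * (long)sizeof(struct record), SEEK_SET);
    if (fread(&r, sizeof r, 1, fp) == 1) {
        r.name[sizeof r.name - 1] = '\0';         /* make sure the name is printable */
        printf("record %ld: id=%d name=%s\n", i, r.id, r.name);
    }

    fclose(fp);
    return 0;
}
```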
File Access Methods contd…
Indexed sequential access:
• This mechanism is built on top of sequential access.
• An index is created for each file, containing pointers to the various blocks.
• The index is searched sequentially, and its pointer is used to access the file directly.
Structures of Directory

• A directory is a container that is used to contain folders and files. It organizes files and folders in a
hierarchical manner.
There are several logical structures of a directory
• Single-level directory –
The single-level directory is the simplest directory structure.
• In it, all files are contained in the same directory which makes it easy to support and understand.
• Since all the files are in the same directory, they must have a unique name.
Structures of Directory contd….

Advantages:
• Since it is a single directory, so its implementation is very easy.
• If the files are smaller in size, searching will become faster.
• The operations like file creation, searching, deletion, updating are very easy in such a directory structure.
Disadvantages:
• There is a chance of name collision because two files cannot have the same name.
• Searching will become time-consuming if the directory is large.
• Files of the same type cannot be grouped together.
Structures of Directory contd…
• Two-level directory –
In the two-level directory structure, each user has their own user files directory (UFD).
• The UFDs have similar structures, but each lists only the files of a single user.
• The system's master file directory (MFD) is searched whenever a new user logs in.
Structures of Directory contd…
Advantages:
• We can give full path like /User-name/directory-name/.
• Different users can have the same directory as well as the file name.
• Searching of files becomes easier due to pathname and user-grouping.
Disadvantages:
• A user is not allowed to share files with other users.
• Still, it is not very scalable, and two files of the same type cannot be grouped together under the same user.
Protection
• In a multiuser environment, all assets that require protection are classified as objects,
and those that wish to access these objects are referred to as subjects.
• The operating system grants different 'access rights' to different subjects.
• A mechanism that controls the access of programs, processes, or users to the resources
defined by a computer system is referred to as protection.
Need of Protection in Operating System:
• There may be security risks like unauthorized reading, writing, modification, or
preventing the system from working effectively for authorized users.
• It helps to ensure data security, process security, and program security against
unauthorized user access or program access.
• It is important to ensure that there are no violations of access rights, no viruses, and no unauthorized access to the existing data.
• Its purpose is to ensure that programs, resources, and data are accessed only in accordance with the system's policies.
Protection contd…
Goals of Protection in Operating System:
• The policies define how processes access the computer system's resources, such as the CPU, memory,
software, and even the operating system.
• It is the responsibility of both the operating system designer and the application programmer. These policies may be modified at any time.
• Protection policies are either established by the system itself, set by management, or imposed individually by programmers to ensure that their programs are protected to the greatest extent possible.
• It also provides a multiprogramming OS with the security that its users expect when sharing common
space such as files or directories.
File Structure
• Most operating systems use a layered approach for every task, including file systems. Every layer of the file system is responsible for some activities.
• The image shown below elaborates how the file system is divided into different layers, and also the functionality of each layer.

File Structure contd…
I/O Control level –
Device drivers act as an interface between devices and the OS; they help to transfer data between disk and main memory. A driver takes a block number as input and, as output, issues low-level hardware-specific instructions.
Basic file system –
It issues general commands to the device driver to read and write physical blocks on the disk. It manages the memory buffers and caches: a block in the buffer can hold the contents of a disk block, and the cache stores frequently used file-system metadata.
File organization Module –
It has information about files, their location, and their logical and physical blocks. Physical block addresses do not necessarily match the logical block numbers, which run from 0 to N. It also manages free space, keeping track of unallocated blocks.
Logical file system –
It manages metadata information about a file, i.e., all details about a file except the actual contents of the file. It maintains this information via file control blocks. A file control block (FCB) has information about a file: its owner, size, permissions, and the location of the file contents.
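A rough sketch of what a file control block might hold, based on the fields listed above; the field names and POSIX-style types are illustrative and do not correspond to any particular operating system's actual FCB layout.

```c
#include <sys/types.h>
#include <time.h>

/* A simplified file control block (FCB): everything about a file except its
 * actual contents. Field names and types here are illustrative only. */
struct fcb {
    uid_t  owner;          /* owning user */
    off_t  size;           /* file size in bytes */
    mode_t permissions;    /* access permissions */
    time_t created;        /* creation time */
    long   blocks[12];     /* location of the file contents on disk */
};
```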
File Structure contd…
Advantages :
1. Duplication of code is minimized.
2. Each file system can have its own logical file system.

Disadvantages :
If we access many files at the same time, it results in low performance.
