
JIGAWA STATE INSTITUTE OF INFORMATION TECHNOLOGY

LECTURE NOTES

ON

OPERATING SYSTEM II
(COM321)

BY:
MR ABUBAKAR BASHIR USMAN
INTRODUCTION
A computer system can be divided roughly into four components: the hardware,
the operating system, the application programs, and the users.

An operating system (OS) is software that acts as an interface between the end user and the
computer hardware. Every computer must have at least one OS to run other programs.
Applications such as browsers, word processors, spreadsheets, CorelDraw, and games need
an environment in which to run and perform their tasks. The OS lets you
communicate with the computer without knowing how to speak the computer's language. It
is not possible for a user to operate a computer or mobile device without an operating
system.
What is a Kernel?
The kernel is the central component of a computer operating system. Its sole job is to
manage the communication between the software and the hardware. The kernel sits at the
nucleus of the operating system, making communication between hardware and software
possible.

Functions of an Operating System

An operating system performs each of the following functions:

1. Processor management: Process management helps the OS create and delete
processes. It also provides mechanisms for synchronization and communication
among processes.

2. Memory management: The memory management module performs the allocation
and de-allocation of memory space to programs that need these resources.

3. File management: It manages all file-related activities such as the organization,
storage, retrieval, naming, sharing, and protection of files.

4. Device management: Device management keeps track of all devices. The module
responsible for this task is known as the I/O controller. It also performs the
allocation and de-allocation of devices.

5. I/O system management: One of the main objectives of any OS is to hide the
peculiarities of hardware devices from the user.

6. Secondary-storage management: Systems have several levels of storage, including
primary storage, secondary storage, and cache storage. Instructions and data
must be stored in primary storage or cache so that a running program can reference them.

7. Security: The security module protects the data and information of a computer system
against malware threats and unauthorized access.

8. Command interpretation: This module interprets commands given by the user and
allocates system resources to process those commands.

9. Networking: A distributed system is a group of processors that do not share
memory, hardware devices, or a clock. The processors communicate with one another
through the network.

10. Job accounting: Keeping track of the time and resources used by various jobs and users.

11. Communication management: Coordination and assignment of compilers,
interpreters, and other software resources among the various users of the computer
system.

Advantages of using an Operating System

 Hides the details of the hardware by creating an abstraction
 Easy to use with a GUI
 Offers an environment in which a user may execute programs/applications
 Makes the computer system convenient to use
 Acts as an intermediary between applications and the hardware components
 Provides the computer system's resources in an easy-to-use format
 Acts as an intermediary between all hardware and software of the system

Disadvantages of using an Operating System

 If any issue occurs in the OS, you may lose all the contents stored in your system
 Operating system software can be quite expensive for small organizations, which adds
a burden on them (Windows, for example)
 It is never entirely secure, as a threat can occur at any time

End of Introduction
MEMORY MANAGEMENT
Memory can be defined as the part of the computer in which data or program
instructions are stored for retrieval. Memory comprises a large array or group of
words or bytes, each with its own location. The primary motive of a computer system is to
execute programs. These programs, along with the information they access, must be in
main memory during execution. The CPU fetches instructions from memory according to
the value of the program counter. To achieve a degree of multiprogramming and proper
utilization of memory, memory management is important. Many memory management
methods exist, reflecting various approaches, and the effectiveness of each algorithm
depends on the situation.

Memory is the most essential element of a computing system because without it a computer
cannot perform even simple tasks. Computer memory is of two types:
1. Primary Memory (RAM & ROM)
2. Secondary Memory (hard drive, CD, etc.)
Random access memory (RAM) is primary volatile memory, and read-only memory
(ROM) is primary non-volatile memory.
S/N  RAM                               ROM
1    Temporary storage                 Permanent storage
2    Typically stores data in GBs      Typically stores data in MBs
3    Volatile                          Non-volatile
4    Used during normal operations     Used for the start-up process of the computer
5    Writing data is faster            Writing data is slower

Memory management is one of the functionalities of the operating system; it handles or
manages primary memory and moves processes back and forth between main memory and
disk during execution.
Memory management allocates memory space to programs (processes) that are to be
executed and de-allocates memory space from programs (processes) that are no longer in
use. A computer can only execute or change data that is in main memory. Therefore, every
program we execute and every file we access must be copied from a storage device into
main memory.
Why memory management is required:
 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
All programs are loaded into main memory for execution. Sometimes the complete program
is loaded into main memory, but sometimes a certain part of the program is loaded into
main memory only when it is called by the program; this mechanism is called dynamic
loading, and it enhances performance.
A loader is responsible for bringing a program into memory and preparing it for execution. Its
main task is to read the compiled binary (executable) file of a program from disk and load
it into the computer's memory.
Loading a process into main memory is done by a loader. There are two different types
of loading:

 Static loading: Loading the entire program into a fixed address. It requires more
memory space.
Example: In the case of the music player application, if it is statically loaded, all the
necessary components, such as the core music player functionality and any associated
libraries, would be loaded into memory when you double-click the program icon. This
includes everything the music player needs to operate, such as the user interface
elements, audio playback algorithms, and other relevant resources.
 Dynamic loading: Without dynamic loading, the entire program and all data of a
process must be in physical memory for the process to execute, so the size of a process
is limited to the size of physical memory. To gain proper memory utilization, dynamic
loading is used. In dynamic loading, a routine is not loaded until it is called. All
routines reside on disk in a relocatable load format. One advantage of dynamic loading
is that an unused routine is never loaded. It is useful when a large amount of code is
needed only to handle infrequently occurring cases.
Example: In the context of the music player application, if it utilizes dynamic loading,
the loader would only load the essential components required to display the user
interface and handle basic operations when you double-click the program icon. For
example, it might load the main interface and basic playback functionality. As you
interact with the application and initiate specific actions, such as accessing a playlist or
applying audio effects, the loader would dynamically load the necessary components
into memory to fulfill those requests.
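As an illustrative sketch (not part of the original notes), Python's `importlib` shows the same idea: a module stays on disk until it is first requested. The `wave` module stands in here for the music player's audio component; the helper name `load_on_demand` is invented for this example.

```python
# Dynamic loading sketched with Python's importlib: the "wave" audio
# module is not loaded at program start; it is brought into memory only
# when first requested, mirroring how a dynamic loader fetches a routine
# on its first call.
import importlib
import sys

def load_on_demand(module_name):
    """Return the named module, loading it from disk only on first use."""
    if module_name in sys.modules:               # already resident in memory
        return sys.modules[module_name]
    return importlib.import_module(module_name)  # loaded now, on demand

wave = load_on_demand("wave")   # the audio module is brought in only here
```

A second call to `load_on_demand("wave")` finds the module already resident and returns it without touching the disk, just as a dynamic loader skips routines that are already in memory.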

Differences between Static and Dynamic Loading

Static Loading: Static loading is used when you want to load your program statically. At
compilation time, the entire program is linked and compiled without the need for any
external module or program dependency. At loading time, the entire program is loaded into
memory and starts its execution.

Dynamic Loading: In a dynamically loaded program, references are provided and the
loading is done at the time of execution. Routines of a library are loaded into memory only
when they are required by the program.
Also, at times one program is dependent on some other program. In such a case, rather than
loading all dependent programs, the CPU links the dependent programs to the main
executing program when required. This mechanism is called dynamic linking.
To perform the linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler, together with any libraries, and combines them into a
single executable file.
 Static linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some operating
systems support only static linking, in which system language libraries are treated like
any other object module.
 Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading.
In dynamic linking, “Stub” is included for each appropriate library routine reference. A
stub is a small piece of code. When the stub is executed, it checks whether the needed
routine is already in memory or not. If not available then the program loads the routine
into memory.

Differences between Static and Dynamic Linking

Static Linking: Static linking combines all modules required by a program into a single
executable. This helps the OS prevent any runtime dependency.

Dynamic Linking: Dynamic linking does not link the actual module or library with the
program. Instead, it uses a reference to the dynamic module, provided at the time of
compilation and linking.

MEMORY MANAGEMENT TECHNIQUES

Memory management uses different techniques to handle or manage primary memory and
to move processes back and forth between main memory and disk during execution.
There are four memory management techniques:
1. Swapping
2. Paging
3. Segmentation
4. Fixed partitioning
1. Swapping
Swapping is a memory management technique in which a process can be temporarily
swapped (moved) out of main memory to secondary storage (disk), making that memory
available to other processes. At some later time, the system swaps the process back from
virtual memory (virtual memory is a technique where some portion of secondary memory
can be used as if it were part of main memory) into main memory.
A process needs to be in memory for execution, but sometimes there is not enough main
memory to hold all the currently active processes in a timesharing system. So, excess
processes are kept on disk and brought in to run dynamically. Swapping is the process of
bringing each process into main memory, running it for a while, and then putting it back
on the disk.
The total time taken by the swapping process includes the time it takes to move the entire
process to a secondary disk and then to copy the process back to memory, as well as the
time the process takes to regain main memory.
The Figure on the next page shows how processes are swapped between disk drive
(Secondary Storage) and main memory (Primary Memory). The memory is usually divided
into two partitions: one for the resident operating system and one for the user processes.

Advantages of Swapping
1. Allows more efficient use of the available memory
2. Data is swapped to the hard disk drive if the computer runs out of RAM;
otherwise, the computer would crash
3. Simple to implement

Disadvantages of Swapping
1. It leads to wastage of memory, which is called fragmentation
2. Swapping processes between the hard disk drive and main memory (RAM)
wastes CPU time
Fragmentation
When processes are loaded into and removed from memory after execution, they leave
behind small free holes. These holes cannot be assigned to new processes because they are
not contiguous or do not fulfil the memory requirements of a process. To achieve a degree
of multiprogramming, we must reduce this waste of memory. The general inability to use
available memory space is known as fragmentation.

In operating systems there are two types of fragmentation:

1. External fragmentation: Total memory space is enough to satisfy a request or to
hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger than
requested, and some portion of memory is left unused, as it cannot be used by
another process.
The following diagram shows how fragmentation can cause waste of memory, and how a
compaction technique can be used to create more free memory out of fragmented memory.
External fragmentation can be reduced by compaction: shuffling memory contents to place
all free memory together in one large block. To make compaction feasible, relocation
should be dynamic.
Internal fragmentation can be reduced by assigning the smallest partition that is still large
enough for the process.
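The effect of external fragmentation and compaction can be sketched in Python with a toy model (the block sizes, process names, and helper functions here are made up for illustration, not how a real allocator is structured):

```python
# Memory is modelled as a list of (owner, size) blocks; None marks a free
# hole. Two holes may together hold enough space for a request, yet the
# request fails until compaction merges them into one contiguous block.

def total_free(memory):
    return sum(size for owner, size in memory if owner is None)

def first_fit(memory, owner, request):
    """Allocate from the first free hole large enough, else fail."""
    for i, (blk_owner, size) in enumerate(memory):
        if blk_owner is None and size >= request:
            memory[i] = (owner, request)
            if size > request:                       # leftover hole remains
                memory.insert(i + 1, (None, size - request))
            return True
    return False

def compact(memory):
    """Slide all allocated blocks together, merging holes into one."""
    used = [blk for blk in memory if blk[0] is not None]
    free = total_free(memory)
    memory[:] = used + ([(None, free)] if free else [])

# Two separate 150 KB holes: 300 KB free in total, yet a 250 KB request
# cannot be satisfied because no single hole is big enough.
memory = [("A", 100), (None, 150), ("B", 100), (None, 150)]
assert not first_fit(memory, "C", 250)   # external fragmentation
compact(memory)                          # one 300 KB hole remains
assert first_fit(memory, "C", 250)       # now the request succeeds
```

Note that compaction only works here because the blocks can be relocated freely, which is exactly why the text says relocation must be dynamic.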
2. Paging
Paging is a solution to the external fragmentation problem. It is a memory management
technique in which the process address space is broken into blocks of the same size called
pages. Similarly, main memory is divided into small fixed-size blocks of (physical) memory
called frames; the size of a frame is kept the same as that of a page to achieve optimum
utilization of main memory and to avoid external fragmentation.
A data structure called the page table is used to keep track of the relation between the
pages of a process and the frames in physical memory.
Page Table
A page table is the data structure used by a virtual memory system in a computer operating
system to store the mapping between virtual addresses and physical addresses.
A virtual address (also known as a logical address) is generated by the CPU, while a
physical address is an address seen by the physical (main) memory (RAM).
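As a hedged sketch of the translation itself, assuming a 1 KB page size and a made-up page table (the frame numbers are invented for illustration):

```python
# Paging address translation: a logical address is split into a page
# number and an offset, and the page table maps the page number to a
# frame number in physical memory.

PAGE_SIZE = 1024  # assumed page size: 1 KB

# Hypothetical page table for one process: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # which page of the process
    offset = logical_address % PAGE_SIZE   # position inside the page
    frame = page_table[page]               # consult the page table
    return frame * PAGE_SIZE + offset      # physical address

# Logical address 2050 = page 2, offset 2 -> frame 7, i.e. 7*1024 + 2.
assert translate(2050) == 7 * 1024 + 2
```

Because the offset is carried over unchanged, every byte of a page lands in the same relative position inside its frame, which is why frames and pages must be the same size.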
Advantages of Paging
1. Paging is simple to implement and is regarded as an efficient memory management
technique.
2. Paging reduces external fragmentation, though it still suffers from internal
fragmentation.
Disadvantages of Paging
1. Causes internal fragmentation
2. Page table requires extra memory space, so may not be good for a system having
small RAM.
3. Segmentation
Segmentation is a memory management technique in which each job is divided into several
segments of different sizes, one for each module, containing pieces that perform related
functions. Each segment is actually a different logical address space of the program.
When a process is to be executed, its segments are loaded into non-contiguous memory,
though every segment is loaded into a contiguous block of available memory.
Segmentation works very similarly to paging, but here segments are of variable length,
whereas in paging the pages are of fixed size.
A program segment contains the program’s main function, utility functions, data structures,
and so on. The operating system maintains a segment map table for every process and a list
of free memory blocks along with segment numbers, their size and corresponding memory
locations in main memory. For each segment, the table stores the starting address of the
segment and the length of the segment. A reference to a memory location includes a value
that identifies a segment and an offset.

Advantages of Segmentation
1. No internal fragmentation
2. Segment Table consumes less space in comparison to page table in paging

Disadvantages of Segmentation
1. As processes are loaded and removed from the memory, the free memory space is
broken into little pieces, causing external fragmentation
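The segment-table lookup described above can be sketched as follows (the base and limit values, and the dictionary layout, are invented for illustration):

```python
# Segmentation address translation. Each segment table entry holds the
# segment's base (start address in main memory) and limit (length); a
# reference is a <segment number, offset> pair.

segment_table = {
    0: {"base": 1400, "limit": 1000},   # e.g. the main-function segment
    1: {"base": 6300, "limit": 400},    # e.g. a data-structure segment
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:        # offset beyond the segment: trap
        raise MemoryError("segmentation fault: offset beyond limit")
    return entry["base"] + offset       # physical address

assert translate(0, 53) == 1453         # 1400 + 53
assert translate(1, 399) == 6699        # last valid byte of segment 1
```

The limit check is what lets segmentation protect one module's memory from another: any offset past the segment's length raises a fault instead of silently reading a neighbouring segment.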
4. Fixed Partitioning
In fixed-size memory partitioning, main memory is divided into blocks of the same or
different sizes. Fixed-size partitioning can take place before executing any processes or
during the configuration of the system.
It is the earliest and one of the simplest techniques for loading more than one process into
main memory. In this technique, main memory is divided into partitions of equal or
different sizes. The operating system always resides in the first partition, while the other
partitions can be used to store user processes. Memory is assigned to processes in a
contiguous way.
Advantage of fixed partitioning
1. Simple to implement

Disadvantages of fixed partitioning


1. It causes both internal and external fragmentation
2. If the process size is larger than the maximum-sized partition, that process
cannot be loaded into memory. A limitation is therefore imposed on process
size: it cannot be larger than the largest partition.

VIRTUAL MEMORY
Virtual memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference
memory are distinguished from the addresses the memory system uses to identify physical
storage sites, and program-generated addresses are translated automatically into the
corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and
the amount of secondary memory available, not by the actual number of main storage
locations.

It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.

1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means a process can be
swapped in and out of main memory such that it occupies different places in main
memory at different times during the course of execution.
2. A process may be broken into a number of pieces, and these pieces need not be
contiguously located in main memory during execution. The combination of
dynamic run-time address translation and the use of a page or segment table permits this.
How Virtual Memory Is Implemented
Virtual memory is a memory management technique implemented using both hardware
(the MMU) and software (the operating system). The goal of virtual memory is to map
virtual memory addresses generated by an executing program to physical addresses in
computer memory. This concerns two main aspects: address translation (from virtual to
physical) and virtual address space management. Address translation is performed on the
CPU chip by the Memory Management Unit (MMU).

Virtual memory techniques

Virtual memory uses the same techniques as memory management, but the most popular
ones are:

1. Swapping
2. Paging

Demand paging
Demand paging is a type of swapping done in virtual memory systems. In demand paging,
data is not copied from disk to RAM until it is needed or demanded by some program, and
it is not copied again when it is already in memory. This is also called lazy evaluation,
because only the demanded pages of memory are swapped from secondary storage (disk)
into main memory. In contrast, during pure swapping, all of the memory for a process is
swapped from secondary storage into main memory during process start-up. The process of
loading a page into main memory (RAM) on demand is known as demand paging.

A demand paging system is quite similar to a paging system with swapping, where processes
reside in secondary memory and pages are loaded only on demand, not in advance. When a
context switch occurs, the operating system does not copy any of the old program's pages
out to disk or any of the new program's pages into main memory. Instead, it begins
executing the new program after loading its first page and fetches that program's pages as
they are referenced. While executing a program, if the program references a page which is
not available in main memory because it was swapped out a little while earlier, the
processor treats this invalid memory reference as a page fault and transfers control from
the program to the operating system, which demands the page back into memory.

Page Replacement
Page replacement is a technique by which an operating system decides which memory
pages to swap out (write to disk) when a page of memory needs to be allocated. Page
replacement happens whenever a page fault occurs and no free page can be used for the
allocation, either because no pages are available or because the number of free pages is
lower than required.
When the page that was selected for replacement and paged out is referenced again, it has
to be read back in from disk, which requires waiting for I/O completion. This waiting
determines the quality of the page replacement algorithm: the less time spent waiting for
page-ins, the better the algorithm.

A page replacement algorithm looks at the limited information about page accesses
provided by the hardware and tries to select which pages should be replaced to minimize
the total number of page misses, while balancing this against the costs of primary storage
and the processor time of the algorithm itself. There are many different page replacement
algorithms. We evaluate an algorithm by running it on a particular string of memory
references and computing the number of page faults.

Page Replacement Algorithms

The main page replacement algorithms are:

1. First In First Out (FIFO) algorithm
2. Optimal Page algorithm
3. Least Recently Used (LRU) algorithm

1. First In First Out (FIFO)

The oldest page in main memory is the one selected for replacement. FIFO is easy to
implement: keep a list, replace pages from the tail, and add new pages at the head.

 Reference string: 0,2,1,6,4,0,1,0,3,1,2,1
 4 frames (4 pages can be in memory at a time per process)
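As a sketch (not part of the original notes), FIFO on the reference string above can be simulated in a few lines of Python; with 4 frames it produces 9 page faults:

```python
# FIFO page replacement: on a fault with full memory, evict the page
# that has been resident the longest (the head of the arrival queue).
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = set()
    queue = deque()              # arrival order of resident pages
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:      # memory full: evict oldest
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

# The reference string above, with 4 frames, gives 9 page faults.
assert fifo_faults([0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1], 4) == 9
```

Note that FIFO ignores how recently a page was used: page 0 is evicted at the fifth fault even though it is referenced again immediately afterwards, which is exactly the weakness LRU addresses below.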
2. Optimal Page Algorithm
The optimal page-replacement algorithm replaces the page that will not be used for the
longest period of time. It has the lowest page-fault rate among all the algorithms and is
known as OPT or MIN. It uses the time at which a page will next be used: replace the page
that will not be needed in the near future.
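A sketch of OPT on the same reference string and 4 frames (the tie-breaking rule for pages that are never used again is an implementation choice; it does not change the fault count here):

```python
def optimal_faults(reference_string, num_frames):
    """Count page faults under the optimal (OPT/MIN) policy: on a fault
    with full memory, evict the resident page whose next use lies
    farthest in the future (or that is never used again)."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)           # free frame available
            continue
        # Distance to each resident page's next use; never-used = infinity.
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

# Same reference string and 4 frames as before: only 6 faults, the
# minimum any algorithm can achieve on this string.
assert optimal_faults([0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1], 4) == 6
```

OPT is unimplementable in practice because it needs the future reference string; its value is as a benchmark against which FIFO's 9 faults and LRU's 8 faults can be judged.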

3. Least Recently Used (LRU) Algorithm

The page that has not been used for the longest time in RAM (main memory) is the one to
be replaced. LRU is easy to implement: replace the page that has gone unused for the
longest time.
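A sketch of LRU on the same reference string and 4 frames; tracking recency with a simple list (least recently used at the front) gives 8 faults:

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU: evict the page unused the longest."""
    recency = []                 # least recently used at the front
    faults = 0
    for page in reference_string:
        if page in recency:
            recency.remove(page)         # hit: refresh its recency
        else:
            faults += 1
            if len(recency) == num_frames:
                recency.pop(0)           # evict least recently used
        recency.append(page)             # most recently used at the back
    return faults

# Same reference string, 4 frames: 8 faults under LRU.
assert lru_faults([0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1], 4) == 8
```

On this string LRU beats FIFO (8 faults versus 9) because it keeps the recently touched pages 0 and 1 resident, and it comes within two faults of the unachievable optimum of 6.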
FILE SYSTEM AND ITS MANAGEMENT
A file is a collection of related information recorded on secondary storage; alternatively, a
file is a collection of logically related entities. From the user's perspective, a file is the
smallest allotment of logical secondary storage.

A file is an object on a computer that stores data, information, settings, or commands used
with a computer program. In a GUI (graphical user interface) such as Microsoft Windows,
files display as icons that relate to the program that opens the file. For example, the picture
shown is an icon associated with Adobe Acrobat PDF files; if such a file were on your
computer, double-clicking its icon in Windows would open it in Adobe Acrobat or the PDF
reader installed on the computer.

File Types, File Extensions & Functions

File type        Usual extension       Function
Executable       exe, com, bin         Ready-to-run machine-language program
Object           obj, o                Compiled machine language, not linked
Source Code      c, java, pas, asm, a  Source code in various languages
Batch            bat, sh               Commands to the command interpreter
Text             txt                   Textual data, documents
Word Processor   wp, tex, rtf, doc     Various word-processor formats
Archive          arc, zip, tar         Related files grouped into one compressed file
Multimedia       mpeg, mov, jpeg       Containing audio/video information

Common File Operations

1. Create
2. Update
3. Delete
4. Read
1. Create
A file is created using a software program on the computer. For example, to create a text
file you would use a text editor, to create an image file you would use an image editor, and to
create a word processor document you would use a word processor.

2. Update
Updating a file means changing values in one or more records of the file, especially a data
file.
3. Delete
Delete is computer terminology for remove or erase. You can delete text from a document
or delete entire files or folders from your hard drive. When typing a document, you can
remove characters behind the cursor by pressing the Backspace key; to remove characters
in front of the cursor, press the smaller Delete key near the Home and End keys. You can
also remove entire sections of text by selecting the text you wish to delete and pressing
either key. Files and folders can be removed from your hard drive by dragging them to the
Recycle Bin (Windows) or the Trash (Macintosh) and then emptying it. When you delete a
file, it is not actually erased; instead, the reference to the file is removed. This means
deleted files remain intact until they are written over, and special utilities such as Norton
UnErase can recover accidentally deleted files.

4. Read
Reading is the ability to fetch a file from secondary storage in order to view its contents.
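The four operations above can be sketched with Python's built-in file functions (the file name "notes.txt" and its contents are illustrative; a temporary directory is used so the example cleans up after itself):

```python
# Create, update, read and delete a file using only the standard library.
import os
import tempfile

with tempfile.TemporaryDirectory() as folder:
    path = os.path.join(folder, "notes.txt")

    with open(path, "w") as f:           # 1. Create (and write)
        f.write("memory management\n")

    with open(path, "a") as f:           # 2. Update (append a record)
        f.write("file management\n")

    with open(path) as f:                # 4. Read
        contents = f.read()
    assert contents == "memory management\nfile management\n"

    os.remove(path)                      # 3. Delete
    assert not os.path.exists(path)
```

As the Delete discussion above notes, `os.remove` only drops the directory's reference to the file; the data blocks on disk survive until they are overwritten.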

File Directory
A collection of files in one place is a file directory. The directory contains information
about the files, including their attributes, location, and ownership. Much of this
information, especially that concerned with storage, is managed by the operating system.
The directory is itself a file, accessible by various file management routines. Information
contained in a device directory for each file includes:
1. Name
2. Type
3. Address
4. Current length
5. Maximum length
6. Date last accessed
7. Date last updated
8. Protection information

Operations performed on a directory are:

1. Search for a file
2. Create a file
3. Delete a file
4. List a directory
5. Rename a file
6. Traverse the file system
Advantages of maintaining directories are:
1. Efficiency: A file can be located more quickly.
2. Naming: It becomes convenient for users, as two users can use the same name for
different files or different names for the same file.
3. Grouping: Files can be logically grouped by properties, e.g. all Java programs,
all games, etc.

File Management Techniques

1. Creating and organizing files
2. Managing (modifying, deleting, securing) and backing up files
3. Searching files and directories
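Technique 3 can be sketched with Python's pathlib (the directory layout and file names below are made up for the demonstration):

```python
# Searching files and directories by pattern with pathlib. A small tree
# is built in a temporary directory, then searched recursively.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as root:
    base = Path(root)
    (base / "music").mkdir()
    (base / "music" / "song.mp3").touch()   # create empty demo files
    (base / "notes.txt").touch()
    (base / "report.txt").touch()

    # rglob searches the whole tree for names matching the pattern.
    txt_files = sorted(p.name for p in base.rglob("*.txt"))
    assert txt_files == ["notes.txt", "report.txt"]
```

`rglob` performs the same traversal listed above under directory operations ("traverse the file system"), filtering entries by name as it goes.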

Security and Protection Mechanisms on Files

1. Limiting authorization to the computer system
2. Applying the hidden-folder mechanism, using attrib +h +s DirName and attrib –h –s
DirName to hide and show a folder respectively, or using the directory's Properties
dialog to hide and show it
3. Frequent use of anti-virus software for detecting and removing viruses or malware
4. Password-protecting directories and files
PROCESS SCHEDULING
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing, managed by the
process scheduler.

The OS maintains all PCBs (Process Control Blocks) in process scheduling queues. The OS
maintains a separate queue for each of the process states, and the PCBs of all processes in
the same execution state are placed in the same queue. When the state of a process
changes, its PCB is unlinked from its current queue and moved to its new state queue. The
operating system maintains the following important process scheduling queues:

1. Job queue − this queue keeps all the processes in the system.
2. Ready queue − this queue keeps the set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
3. Waiting/device queue − when a process needs some I/O operation in order to
complete its execution, the OS changes the state of the process from running to
waiting and places its PCB in this queue.

Types of CPU Scheduling

There are two kinds of scheduling methods:

1. Preemptive Scheduling
2. Non-Preemptive Scheduling

1. Preemptive Scheduling
In preemptive scheduling, tasks are mostly assigned priorities. Sometimes it is important to
run a task with a higher priority before another, lower-priority task, even if the lower-priority
task is still running. The lower-priority task is held for some time and resumes when the
higher-priority task finishes its execution. In preemptive scheduling, a process can be
interrupted even before completion.

2. Non-Preemptive Scheduling
In this scheduling method, once the CPU has been allocated to a specific process, the
process keeps the CPU until it releases it, either by switching context or by terminating. It
is the only method usable on all hardware platforms, because it does not need special
hardware (for example, a timer) the way preemptive scheduling does. In non-preemptive
scheduling, a process is not interrupted until its life cycle is complete.

Schedulers

Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. There are three types of schedulers:

1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler

1. Long Term Scheduler

The long term scheduler (job scheduler) runs less frequently. It decides which programs are
admitted to the job queue. From the job queue, the job scheduler selects processes and
loads them into memory for execution. The primary aim of the job scheduler is to maintain
a good degree of multiprogramming. An optimal degree of multiprogramming means the
average rate of process creation is equal to the average departure rate of processes from
execution memory.

2. Short Term Scheduler

Also known as the CPU scheduler, its responsibility is to pick a process/job from the ready
queue and dispatch it to the CPU for execution.

3. Medium Term Scheduler

This scheduler removes processes from memory (and from active contention for the CPU),
and thus reduces the degree of multiprogramming. At some later time, the process can be
reintroduced into memory and its execution can be continued where it left off. This scheme
is called swapping: the process is swapped out, and later swapped in, by the medium term
scheduler.
Swapping may be necessary to improve the process mix, or because a change in memory
requirements has overcommitted available memory, requiring memory to be freed up. This
complete process is described in the diagram below.

The OS can use different policies to manage each queue (FIFO, Round Robin, SJN, etc.). The
OS scheduler determines how to move processes between the ready and run queues; the run
queue can have only one entry per processor core on the system.

Differences between Process Scheduling and CPU Scheduling


S/N  Process Scheduling                               CPU Scheduling
1    The mechanism to select which process has to     The mechanism to select which process has to be
     be brought into the ready queue                  executed next, and to allocate the CPU to it
2    Known as long-term scheduling                    Known as short-term scheduling
3    Done by the long-term scheduler (job scheduler)  Done by the short-term scheduler

Below are different times with respect to a process.

Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
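The two formulas above can be checked with a tiny calculation. The values here are a hypothetical example, not taken from the tables in these notes:

```python
# Hypothetical process: arrives at time 2, needs 7 units of CPU (burst
# time), and finishes at time 20.
arrival_time, burst_time, completion_time = 2, 7, 20

turnaround_time = completion_time - arrival_time   # 20 - 2 = 18
waiting_time = turnaround_time - burst_time        # 18 - 7 = 11

print(turnaround_time, waiting_time)  # 18 11
```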

Different Scheduling Algorithms


These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are
designed so that once a process enters the running state, it cannot be preempted until it
completes its allotted time, whereas the preemptive scheduling is based on priority where a
scheduler may preempt a low priority running process anytime when a high priority process
enters into a ready state.

1. First Come First Serve (FCFS): Simplest scheduling algorithm that schedules
according to arrival times of processes. First come first serve scheduling algorithm
states that the process that requests the CPU first is allocated the CPU first. It is
implemented by using the FIFO queue. When a process enters the ready queue, its
PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the
process at the head of the queue. The running process is then removed from the
queue. FCFS is a non-preemptive scheduling algorithm.

Process Arrival Time Burst time


P1 0 5
P2 1 6
P3 2 7
P4 3 8
P5 1 9

Gantt chart:
P1 P2 P5 P3 P4
0 5 11 20 27 35
Process  Completion time  Turnaround time (CT-AT)  Waiting time (TAT-BT)
P1       5                5                        0
P2       11               10                       4
P5       20               19                       10
P3       27               25                       18
P4       35               32                       24
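The FCFS results above can be reproduced with a short simulation. This is a minimal sketch, not a production scheduler; the process names and values follow the example table, and ties on arrival time (P2 and P5) are broken by list order, as in the Gantt chart:

```python
# Minimal FCFS simulator: processes run in arrival order, one after another.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    # Sort by arrival time; Python's sort is stable, so processes with
    # equal arrival times keep their original (queue) order.
    queue = sorted(processes, key=lambda p: p[1])
    clock, results = 0, []
    for name, arrival, burst in queue:
        clock = max(clock, arrival)        # CPU may idle until arrival
        completion = clock + burst
        turnaround = completion - arrival  # TAT = CT - AT
        waiting = turnaround - burst       # WT = TAT - BT
        results.append((name, completion, turnaround, waiting))
        clock = completion
    return results

table = [("P1", 0, 5), ("P2", 1, 6), ("P3", 2, 7), ("P4", 3, 8), ("P5", 1, 9)]
for row in fcfs(table):
    print(row)
```

The printed rows match the completion, turnaround, and waiting times computed by hand above.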

2. Shortest Job First (SJF): The process with the shortest burst time is scheduled first.
If two processes have the same burst time, FCFS is used to break the tie. SJF exists in both
preemptive (Shortest Remaining Time First) and non-preemptive variants; the example below
uses the non-preemptive version.

Process Arrival Time Burst time


P1 0 5
P2 1 6
P3 2 7
P4 3 8
P5 1 9

Gantt chart:
P1 P2 P3 P4 P5
0 5 11 18 26 35
Process  Completion time  Turnaround time (CT-AT)  Waiting time (TAT-BT)
P1       5                5                        0
P2       11               10                       4
P3       18               16                       9
P4       26               23                       15
P5       35               34                       25
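Non-preemptive SJF can be sketched the same way: at each scheduling decision, the CPU picks the arrived process with the smallest burst time. This is an illustrative sketch using the same example table:

```python
# Non-preemptive SJF: at each decision point, choose the ready process
# with the smallest burst time; earlier arrival breaks ties (FCFS rule).
def sjf(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    pending = list(processes)
    clock, results = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                      # CPU idles until next arrival
            clock = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        pending.remove((name, arrival, burst))
        completion = clock + burst
        results.append((name, completion,
                        completion - arrival,           # TAT = CT - AT
                        completion - arrival - burst))  # WT = TAT - BT
        clock = completion
    return results

table = [("P1", 0, 5), ("P2", 1, 6), ("P3", 2, 7), ("P4", 3, 8), ("P5", 1, 9)]
for row in sjf(table):
    print(row)
```

At time 0 only P1 has arrived, so it runs first; by time 5 all others are ready and run in increasing burst-time order (P2, P3, P4, P5), matching the Gantt chart.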

Deadlock
A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding
a resource and waiting for another resource acquired by some other process.
Consider an example in which two trains are coming toward each other on the same track, and
there is only one track: once they are in front of each other, neither train can move. A similar
situation occurs in operating systems when two or more processes hold some resources and
wait for resources held by the other(s). For example, in the diagram below, Process 1 is
holding Resource 1 and waiting for Resource 2, which is held by Process 2, while Process 2
is waiting for Resource 1.
Deadlock can arise if following four conditions hold simultaneously (Necessary
Conditions)
1. Mutual Exclusion: At least one resource must be non-sharable (only one process can
use it at a time). In the diagram below, there is a single instance of Resource 1 and it is held
by Process 1 only.

2. Hold and Wait: A process is holding at least one resource and waiting for resources. A
process can hold multiple resources and still request more resources from other processes
which are holding them. In the diagram given below, Process 2 holds Resource 2 and
Resource 3 and is requesting the Resource 1 which is held by Process 1.

3. No Preemption: A resource cannot be taken from a process unless the process releases the
resource. A resource cannot be preempted from a process by force. A process can only
release a resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1
from Process 1. It will only be released when Process 1 relinquishes it voluntarily after its
execution is complete.
4. Circular Wait: A set of processes are waiting for each other in circular form. A process is
waiting for the resource held by the second process, which is waiting for the resource held by
the third process and so on, till the last process is waiting for a resource held by the first
process. This forms a circular chain. For example: Process 1 is allocated Resource2 and it is
requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting
Resource 2. This forms a circular wait loop.

Methods for handling deadlock


There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is to never let the system enter a deadlock
state. The two approaches differ:
Prevention works by negating one of the four necessary conditions listed above.
Avoidance is forward-looking in nature. To use the "avoidance" strategy, we must make an
assumption: all information about the resources a process WILL need must be known to us
before the process executes. We use the Banker's algorithm (devised by Dijkstra) to avoid
deadlock.
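The safety check at the core of the Banker's algorithm can be sketched as follows. The matrices here are illustrative assumptions (a classic textbook-style instance), not values from these notes: `available` holds the free count of each resource type, `max_need` each process's declared maximum demand, and `alloc` what each process currently holds.

```python
# Banker's algorithm safety check (sketch): the state is safe if some
# order exists in which every process can obtain its remaining need,
# finish, and release its resources.
def is_safe(available, max_need, alloc):
    work = list(available)
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, alloc)]
    finished = [False] * len(alloc)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            # Process i can run to completion if its remaining need
            # fits in the currently available (work) resources.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # On completion it releases everything it holds.
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                progress = True
    return all(finished)   # safe iff every process can finish

# Hypothetical instance: 5 processes, 3 resource types.
available = [3, 3, 2]
max_need  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
alloc     = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, alloc))  # True: a safe sequence exists
```

A request is granted only if granting it still leaves the system in a safe state; otherwise the requesting process waits.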

2) Deadlock detection and recovery: Let deadlock occur, then detect it and recover, for
example by preempting resources from the deadlocked processes.

3) Ignore the problem altogether: If deadlock is very rare, let it happen and reboot the
system. This is the approach that both Windows and UNIX take.

Deadlock Detection
For deadlock detection, we can run an algorithm that checks for a cycle in the Resource
Allocation Graph. When every resource has a single instance, the presence of a cycle is a
necessary and sufficient condition for deadlock.

In the diagram above, Resource 1 and Resource 2 each have a single instance, and there is a
cycle R1 → P1 → R2 → P2 → R1. So deadlock is confirmed.
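This cycle check can be implemented with a depth-first search over the graph. The sketch below is illustrative; the edge lists mirror the R1 → P1 → R2 → P2 → R1 cycle from the diagram, where "R is assigned to P" and "P requests R" are both directed edges:

```python
# Cycle detection in a resource-allocation graph via depth-first search.
# With single-instance resources, finding a cycle confirms deadlock.
def has_cycle(graph):
    """graph: dict mapping each node to the list of nodes it points to."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    colour = {node: WHITE for node in graph}

    def dfs(node):
        colour[node] = GREY
        for nxt in graph.get(node, ()):
            if colour.get(nxt, WHITE) == GREY:   # back edge: cycle found
                return True
            if colour.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in graph)

# R1 is assigned to P1, P1 requests R2; R2 is assigned to P2, P2 requests R1.
rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(rag))  # True → deadlock confirmed
```

If some resource types have multiple instances, a cycle is only a necessary condition, and a matrix-based detection algorithm (similar in spirit to the Banker's safety check) is used instead.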
