File Management operations -
Create File
Delete File
Rename File
Read File
Write File
Exit File
Directory Management operations-
Create Directory
Remove Directory
Set Working Directory
Exit
ALGORITHM: -
For creating a file -
1) declare a pointer to the file name.
2) store 0x3C (the DOS create-file function) in the ah register.
3) load the offset of the file name into the dx register (DS:DX must point to the ASCIIZ name).
4) if the carry flag is set
then an error occurred
else the file is created successfully.
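The steps above can be exercised from Turbo C through the dos.h helpers (union REGS, struct SREGS, intdosx, FP_OFF, FP_SEG). The sketch below is a minimal illustration only, assuming a sample file name TEST.TXT; it is not the prescribed solution.

/* Minimal sketch: create a file with DOS INT 21h, AH = 3Ch (Turbo C). */
#include <stdio.h>
#include <dos.h>

int main(void)
{
    char fname[] = "TEST.TXT";   /* ASCIIZ file name (illustrative)    */
    union REGS in, out;
    struct SREGS seg;

    in.h.ah = 0x3C;              /* DOS function: create file          */
    in.x.cx = 0;                 /* normal file attribute              */
    in.x.dx = FP_OFF(fname);     /* DS:DX -> file name                 */
    seg.ds  = FP_SEG(fname);
    intdosx(&in, &out, &seg);    /* issue INT 21h                      */

    if (out.x.cflag)             /* carry set -> error code in AX      */
        printf("Error %u while creating file\n", out.x.ax);
    else
        printf("File created, handle = %u\n", out.x.ax);
    return 0;
}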
For reading a file -
1) declare a pointer to the read buffer.
2) store 0x3F (the DOS read function) in the ah register.
3) load the file handle into the bx register, the number of bytes to read into cx, and the offset of the buffer into the dx register.
4) if the carry flag is set
then an error occurred
else ax holds the number of bytes actually read.
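A matching sketch for the read steps, again using dos.h. The helper name dos_read and its parameters are illustrative, and the handle is assumed to come from an earlier create (function 3Ch) or open (function 3Dh) call.

/* Minimal sketch: read from an open file handle with DOS INT 21h, AH = 3Fh. */
#include <stdio.h>
#include <dos.h>

unsigned dos_read(unsigned handle, char *buf, unsigned count)
{
    union REGS in, out;
    struct SREGS seg;

    in.h.ah = 0x3F;              /* DOS function: read from handle     */
    in.x.bx = handle;            /* open file handle                   */
    in.x.cx = count;             /* number of bytes requested          */
    in.x.dx = FP_OFF(buf);       /* DS:DX -> destination buffer        */
    seg.ds  = FP_SEG(buf);
    intdosx(&in, &out, &seg);    /* issue INT 21h                      */

    if (out.x.cflag) {           /* carry set -> error code in AX      */
        printf("Read error %u\n", out.x.ax);
        return 0;
    }
    return out.x.ax;             /* number of bytes actually read      */
}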
RESULT: -
REFERENCES: -
1. Advanced MS-DOS Programming, 2nd edition By Ray Duncan.
2. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) Define Operating System. Explain different Operating System services.
2) What is a system call? Explain with an example.
3) What is a command interpreter?
4) What is a file? Explain various file attributes.
5) Explain any three DOS interrupts.
SSBT’s College of Engineering & Technology, Bambhori, Jalgaon
Computer Engineering Department
Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 2
TITLE: - CPU Scheduling Algorithm.
AIM: - Implementation of CPU Scheduling Algorithms - FCFS, SJF, Priority based, RR
scheduling algorithms.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -
Scheduling –
Scheduling refers to the way processes are assigned to run on the available CPUs. This assignment is carried out by software known as the scheduler. A scheduler is an OS module that selects the next job to be admitted into the system and the next process to run.
When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
Scheduling criteria -
4) Waiting time – amount of time a process has been waiting in the ready queue.
5) Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for a time-sharing environment).
The Scheduling algorithms can be divided into two categories with respect to how they
deal with clock interrupts.
1) Nonpreemptive Scheduling
Non-preemptive scheduling means that a running process retains control of the CPU and all the allocated resources until it surrenders control to the OS on its own. This means that even if a higher priority process enters the system, the running process cannot be forced to give up control. This philosophy is not well suited for real-time systems, where higher priority events need immediate attention and therefore need to interrupt the currently running process.
2) Preemptive Scheduling
Preemptive scheduling, on the other hand, allows a higher priority process to replace a currently running process even if its time slice is not over and it has not requested any I/O. This requires more frequent context switching, which reduces throughput, but it is better suited for online, real-time processing where interactive users and high priority processes require immediate attention.
Scheduling Algorithms -
1] FCFS Scheduling -
FCFS is more predictable than most other schemes, since processes are served strictly in the order in which they arrive. The FCFS scheme is not useful in scheduling interactive users because it cannot guarantee good response time. The code for FCFS scheduling is simple to write and understand.
Example:
Process Burst Time
P1 24
P2 3
P3 3
Assume the processes arrive in the order: P1, P2, P3
The Gantt Chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
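The same calculation can be sketched in C as below, assuming all three processes arrive at time 0 as in the example; the variable names are illustrative.

/* Minimal sketch: FCFS waiting and turnaround times (arrival time 0 assumed). */
#include <stdio.h>

int main(void)
{
    int bt[] = {24, 3, 3};                 /* burst times of P1, P2, P3 */
    int n = 3, i, wt, tat, totwt = 0, tottat = 0;

    wt = 0;                                /* first process never waits */
    for (i = 0; i < n; i++) {
        tat = wt + bt[i];                  /* turnaround = waiting + burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wt, tat);
        totwt += wt;
        tottat += tat;
        wt += bt[i];                       /* next process waits until this one ends */
    }
    printf("Average waiting time = %.2f\n", (float)totwt / n);
    printf("Average turnaround time = %.2f\n", (float)tottat / n);
    return 0;
}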
2] SJF Scheduling -
The SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is probably optimal.
Example of Non-Preemptive SJF -
Process Arrival Time Burst Time
P1 0 7
P2 2 4
P3 4 1
P4 5 4
SJF (non-preemptive)
Example of Preemptive SJF -
Process Arrival Time Burst Time
P1 0 7
P2 2 4
P3 4 1
P4 5 4
SJF (preemptive)
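A minimal C sketch of the non-preemptive case is given below, using the arrival and burst times from the table above; at each scheduling point it picks the shortest job among the processes that have already arrived. Variable names are illustrative.

/* Minimal sketch: non-preemptive SJF with arrival times (example data above). */
#include <stdio.h>

#define N 4

int main(void)
{
    int at[N] = {0, 2, 4, 5};        /* arrival times of P1..P4 */
    int bt[N] = {7, 4, 1, 4};        /* burst times of P1..P4   */
    int done[N] = {0}, time = 0, finished = 0, i, j;
    float totwt = 0, tottat = 0;

    while (finished < N) {
        j = -1;
        for (i = 0; i < N; i++)      /* shortest job among arrived, unfinished ones */
            if (!done[i] && at[i] <= time && (j < 0 || bt[i] < bt[j]))
                j = i;
        if (j < 0) { time++; continue; }    /* CPU idle until next arrival */
        time += bt[j];                      /* run job to completion       */
        done[j] = 1; finished++;
        printf("P%d: waiting=%d turnaround=%d\n",
               j + 1, time - at[j] - bt[j], time - at[j]);
        totwt  += time - at[j] - bt[j];
        tottat += time - at[j];
    }
    printf("Average waiting time = %.2f\n", totwt / N);
    printf("Average turnaround time = %.2f\n", tottat / N);
    return 0;
}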
3] Priority Scheduling -
A priority is associated with each process, and the CPU is allocated to the process with the highest priority; processes with equal priority are scheduled in FCFS order.
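A minimal C sketch of non-preemptive priority scheduling follows, assuming that a lower number means higher priority and that all processes arrive at time 0; the sample burst times and priorities are illustrative.

/* Minimal sketch: non-preemptive priority scheduling, lower value = higher priority. */
#include <stdio.h>

#define N 4

int main(void)
{
    int pid[N] = {1, 2, 3, 4};
    int bt[N]  = {10, 1, 2, 5};      /* sample burst times */
    int pri[N] = {3, 1, 4, 2};       /* sample priorities  */
    int i, j, t, wt = 0, tat, totwt = 0, tottat = 0;

    /* sort processes by priority (simple bubble sort) */
    for (i = 0; i < N - 1; i++)
        for (j = 0; j < N - 1 - i; j++)
            if (pri[j] > pri[j + 1]) {
                t = pri[j]; pri[j] = pri[j+1]; pri[j+1] = t;
                t = bt[j];  bt[j]  = bt[j+1];  bt[j+1]  = t;
                t = pid[j]; pid[j] = pid[j+1]; pid[j+1] = t;
            }

    for (i = 0; i < N; i++) {        /* FCFS over the sorted order */
        tat = wt + bt[i];
        printf("P%d: waiting=%d turnaround=%d\n", pid[i], wt, tat);
        totwt += wt; tottat += tat;
        wt += bt[i];
    }
    printf("Average waiting time = %.2f\n", (float)totwt / N);
    printf("Average turnaround time = %.2f\n", (float)tottat / N);
    return 0;
}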
4] Round Robin Scheduling -
Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority (also known as cyclic executive). Round-robin scheduling is simple, easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks. The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.
Example:
Time Quantum = 20
Typically, RR gives a higher average turnaround time than SRTF, but better response time.
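A minimal C sketch of round-robin scheduling with a fixed quantum is shown below, assuming all processes arrive at time 0; the burst times are illustrative and the quantum matches the example value of 20.

/* Minimal sketch: round-robin scheduling, all processes assumed to arrive at time 0. */
#include <stdio.h>

#define N 3

int main(void)
{
    int bt[N]  = {53, 17, 68};           /* illustrative burst times       */
    int rem[N], wt[N] = {0}, quantum = 20, time = 0, left = N, i;

    for (i = 0; i < N; i++) rem[i] = bt[i];

    while (left > 0) {
        for (i = 0; i < N; i++) {
            if (rem[i] == 0) continue;
            if (rem[i] > quantum) {      /* run for one full quantum       */
                time += quantum;
                rem[i] -= quantum;
            } else {                     /* last slice: process finishes   */
                time += rem[i];
                rem[i] = 0;
                wt[i] = time - bt[i];    /* waiting = completion - burst   */
                left--;
            }
        }
    }
    for (i = 0; i < N; i++)
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wt[i], wt[i] + bt[i]);
    return 0;
}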
ALGORITHM: -
IV. if Choice = 2
then call SJF( ) function
i. read the number of processes and burst time of each process.
ii. if P[i].Bt > P[i+1].Bt
then swap P[i] & P[i+1]
iii. calculate waiting time, Waiting Time = Waiting Time – Arrival Time
and Turnaround time = Waiting time + Burst Time
iv. calculate average waiting time and average turnaround time.
v. print the Waiting time and Turnaround time of each process and also print average
waiting time and average turnaround time.
V. If Choice = 3
then call Priority( ) function
i. read the number of processes and the burst time and priority of each process.
ii. if P[i].pri > p[i+1].pri
then swap P[i] & P[i+1]
iii. calculate waiting time, Waiting Time = Waiting Time – Arrival Time
and Turnaround time = Waiting time + Burst Time
iv. calculate average waiting time and average turnaround time.
v. print the Waiting time and Turnaround time of each process and also print average
waiting time and average turnaround time.
VI. If Choice =4
then call RoundRobin( ) function
i. read the number of processes and burst time of each process.
ii. read time quantum.
iii. if p[i].Bt > 0
then If p[i].Bt > quantum
allocate the process for execution as per round robin scheduling algorithm.
iv. calculate waiting time, Wt Time = [Wt Time – Arrival Time]
and Turnaround time, Tat = [Wt time + Burst Time]
v. calculate average waiting time and average turnaround time.
vi. print the Waiting time and Turnaround time of each process and also print average
waiting time and average turnaround time.
VII. if choice = 5
then terminate the program
RESULT: -
REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) What is a process? Explain process management.
Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 3
TITLE: - Memory Management.
AIM: - Implementation of various Memory Management strategies - First Fit, Best Fit, Next Fit
and Worst Fit.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -
Several methods have been devised that increase the effectiveness of memory
management. Virtual memory systems separate the memory addresses used by a process from
actual physical addresses, allowing separation of processes and increasing the effectively
available amount of RAM using paging or swapping to secondary storage. The quality of the
virtual memory manager can have a big impact on overall system performance. Memory is
usually divided into fast primary storage and slow secondary storage. Memory management in
the operating system handles moving information between these two levels of memory.
The task of fulfilling an allocation request consists of finding a block of unused memory
of sufficient size. Even though this task seems simple, several issues make the implementation
complex. One such problem is internal and external fragmentation; external fragmentation arises when there are many small gaps between allocated memory blocks, none of which is individually sufficient to fulfill the request. Another is that the allocator's metadata can inflate the size of (individually) small allocations; this effect can be reduced by chunking. Usually, memory is allocated from a large
pool of unused memory area called the heap (also called the free store). Since the precise
location of the allocation is not known in advance, the memory is accessed indirectly, usually via
a pointer reference. The precise algorithm used to organize the memory area and allocate and
deallocate chunks is hidden behind an abstract interface and may use any of the methods
described below.
Memory Allocation Strategies –
1) First Fit – Allocate the first hole that is big enough. Searching can start at the beginning of the set of holes. We can stop searching as soon as we find a free hole that is large enough.
2) Best Fit – Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
3) Next Fit – Searching starts from where the previous search ended, so the next hole is allocated at or after the previously allocated hole.
4) Worst Fit – Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
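The four strategies can be contrasted with the short C sketch below, which only returns the index of the chosen free block for a single request; the block sizes, the request size and the starting position for next fit are illustrative, and the sketch does not split blocks or maintain allocation state.

/* Minimal sketch: choose a free block index for one request under each strategy. */
#include <stdio.h>

#define NB 5

int first_fit(int blk[], int n, int req)
{
    int i;
    for (i = 0; i < n; i++)
        if (blk[i] >= req) return i;        /* first hole big enough        */
    return -1;
}

int best_fit(int blk[], int n, int req)
{
    int i, best = -1;
    for (i = 0; i < n; i++)
        if (blk[i] >= req && (best < 0 || blk[i] < blk[best]))
            best = i;                       /* smallest hole big enough     */
    return best;
}

int next_fit(int blk[], int n, int req, int start)
{
    int i;
    for (i = 0; i < n; i++)                 /* scan circularly from 'start' */
        if (blk[(start + i) % n] >= req) return (start + i) % n;
    return -1;
}

int worst_fit(int blk[], int n, int req)
{
    int i, worst = -1;
    for (i = 0; i < n; i++)
        if (blk[i] >= req && (worst < 0 || blk[i] > blk[worst]))
            worst = i;                      /* largest hole big enough      */
    return worst;
}

int main(void)
{
    int blk[NB] = {100, 500, 200, 300, 600};   /* illustrative hole sizes  */
    int req = 212;                             /* illustrative request     */

    printf("first fit -> block %d\n", first_fit(blk, NB, req));
    printf("best  fit -> block %d\n", best_fit(blk, NB, req));
    printf("next  fit -> block %d\n", next_fit(blk, NB, req, 2));
    printf("worst fit -> block %d\n", worst_fit(blk, NB, req));
    return 0;
}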
ALGORITHM: -
1) read the number of blocks.
2) read the size of each block.
3) read the requested blocks.
4) read the size of each requested block.
5) read the user's choice
if choice = 1
then search for the first free hole that is large enough (First Fit)
else if choice = 2
then search for the smallest free hole that is large enough (Best Fit)
else if choice = 3
then search for the next free hole starting from where the previous search ended (Next Fit)
else if choice = 4
then allocate the largest free hole (Worst Fit)
else if choice = 5
then terminate the program.
RESULT: -
REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) What is Memory Management?
2) Explain Contiguous and Non-Contiguous memory allocation.
3) Explain segmentation and paging concepts.
4) Explain memory allocation policies.
5) What is Virtual Memory?
SSBT’s College of Engineering & Technology, Bambhori, Jalgaon
Computer Engineering Department
Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 4
TITLE: - Page Replacement Algorithm.
AIM: - Implementation of Page Replacement Algorithms - FIFO, LRU and Optimal.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -
Page Replacement -
Initially, execution of a process starts with none of its pages in memory. Each of its pages causes a page fault at least once, when it is first referenced. But it may so happen that some of its pages are never used. In such a case, those pages which are not referenced even once will never be brought into memory. This saves load time and memory space. If this is so, the degree of multiprogramming can be increased so that more ready processes can be loaded and executed. Now we may come across a situation where, all of a sudden, processes that were not accessing certain pages start accessing those pages. The degree of multiprogramming has been raised without looking into this aspect, and the memory is over-allocated. Over-allocation of memory shows up when there is a page fault for want of a page in memory and the operating system finds the required page in the backing store but cannot bring the page into memory for want of a free frame.
There are three page replacement policies to be followed in demand paging -
1] FIFO Page Replacement –
The first-in first-out policy simply removes pages in the order they arrived in main memory. Using this policy, we simply remove a page based on the time of its arrival in memory. Clearly, use of this policy suggests that we swap out the page that arrived in memory the earliest.
The first-in first-out page replacement algorithm is the simplest page replacement algorithm. When a page replacement is required, the oldest page in memory is chosen as the victim. The performance of the FIFO algorithm is not always good. The replaced page may contain an initialization module that needed to be executed only once and is therefore no longer needed. On the other hand, the page may contain a heavily used variable that is in constant use. Such a page, once swapped out, will cause a page fault almost immediately so that it can be brought back in. Thus the number of page faults increases and results in slower process execution.
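A minimal C sketch of FIFO replacement is given below; it counts page faults for an illustrative reference string and frame count.

/* Minimal sketch: FIFO page replacement, counting page faults. */
#include <stdio.h>

#define FRAMES 3
#define REFS   12

int main(void)
{
    int ref[REFS] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3};  /* illustrative */
    int frame[FRAMES], oldest = 0, faults = 0, i, j, hit;

    for (i = 0; i < FRAMES; i++) frame[i] = -1;             /* empty frames */

    for (i = 0; i < REFS; i++) {
        hit = 0;
        for (j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                       /* fault: replace the oldest page */
            frame[oldest] = ref[i];
            oldest = (oldest + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults = %d\n", faults);
    return 0;
}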
2] LRU Page Replacement –
The main distinction between the FIFO and optimal algorithms is that the FIFO algorithm uses the time when a page was brought into memory (it looks back), whereas the optimal algorithm uses the time when a page is to be used in the future (it looks ahead). If the recent past is used as an approximation of the near future, then we replace the page that has not been used for the longest period of time. This is the least recently used (LRU) algorithm.
LRU expands to least recently used. This policy suggests that we remove the page whose last use is farthest from the current time.
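A matching C sketch of LRU replacement follows; each frame records the time of its page's last use, and the frame with the oldest timestamp is chosen as the victim. The reference string and frame count are illustrative.

/* Minimal sketch: LRU page replacement using last-use timestamps. */
#include <stdio.h>

#define FRAMES 3
#define REFS   12

int main(void)
{
    int ref[REFS] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3};  /* illustrative */
    int frame[FRAMES], last[FRAMES], faults = 0, i, j, hit, victim;

    for (i = 0; i < FRAMES; i++) { frame[i] = -1; last[i] = -1; }

    for (i = 0; i < REFS; i++) {
        hit = 0;
        for (j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) { last[j] = i; hit = 1; break; }
        if (!hit) {
            victim = 0;                    /* frame whose page was used longest ago */
            for (j = 1; j < FRAMES; j++)
                if (last[j] < last[victim]) victim = j;
            frame[victim] = ref[i];
            last[victim] = i;
            faults++;
        }
    }
    printf("LRU page faults = %d\n", faults);
    return 0;
}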
ALGORITHM: -
1) read the number of page frames.
2) read the length of string.
3) read the reference string.
4) read the user's choice from the following -
1. FIFO
2. LRU
3. Optimal
if choice = 1
then select the oldest (first-in) page for replacement.
if choice = 2
then select the least recently used page for replacement.
if choice = 3
then select the page whose next reference is farthest in the future for replacement.
RESULT: -
REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
2. Modern Operating Systems, 2nd edition By Andrew Tanenbaum.
QUESTIONS FOR VIVA: -
SSBT’s College of Engineering & Technology, Bambhori, Jalgaon
Computer Engineering Department
Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 5
TITLE: - Banker’s Deadlock Avoidance Algorithm.
AIM: - Implementation of Banker’s deadlock avoidance algorithm.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -
The Banker's algorithm is run by the operating system whenever a process requests
resources. The algorithm avoids deadlock by denying or postponing the request if it determines
that accepting the request could put the system in an unsafe state (one where deadlock could
occur). When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need; this maximum may not exceed the total number of resources in the system. Also, when a process gets all its requested resources, it must return them in a finite amount of time.
Assuming that the system distinguishes between four types of resources, (A, B, C and D),
the following is an example of how those resources could be distributed. Note that this example
shows the system at an instant before a new request for resources arrives. Also, the types and
number of resources are abstracted. Real systems, for example, would deal with much larger
quantities of each resource.
A state (as in the above example) is considered safe if it is possible for all processes to
finish executing (terminate). Since the system cannot know when a process will terminate, or
how many resources it will have requested by then, the system assumes that all processes will
eventually attempt to acquire their stated maximum resources and terminate soon afterward. This
is a reasonable assumption in most cases since the system is not particularly concerned with how
long each process runs (at least not from a deadlock avoidance perspective). Also, if a process
terminates without acquiring its maximum resources, it only makes it easier on the system. Given
that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of
requests by the processes that would allow each to acquire its maximum resources and then
terminate (returning its resources to the system). Any state where no such set exists is an unsafe
state.
ALGORITHM: -
1] Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as
follows -
1) Let Work and Finish be vectors of length m and n respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, 2, ....., n-1.
2) Find an index i such that both
a. Finish[i] == false
b. Need_i <= Work
If no such i exists, go to step 4.
3) Work = Work + Allocation_i
Finish[i] = true
Go to step 2
4) If Finish[i] == true for all i, then the system is in a safe state.
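A minimal C sketch of the safety algorithm is given below for n = 5 processes and m = 3 resource types; the Allocation, Need and Available values are illustrative sample data.

/* Minimal sketch: Banker's safety algorithm (is the current state safe?). */
#include <stdio.h>

#define N 5   /* processes      */
#define M 3   /* resource types */

int is_safe(int alloc[N][M], int need[N][M], int avail[M])
{
    int work[M], finish[N] = {0}, i, j, count = 0, found;

    for (j = 0; j < M; j++) work[j] = avail[j];   /* Work = Available */

    do {
        found = 0;
        for (i = 0; i < N; i++) {
            if (finish[i]) continue;
            for (j = 0; j < M; j++)               /* Need_i <= Work ? */
                if (need[i][j] > work[j]) break;
            if (j == M) {                         /* pretend Pi finishes */
                for (j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = 1; count++; found = 1;
            }
        }
    } while (found);
    return count == N;                            /* safe iff all can finish */
}

int main(void)
{
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};  /* sample */
    int need[N][M]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};  /* sample */
    int avail[M]    = {3, 3, 2};                                   /* sample */

    printf("System is %s\n", is_safe(alloc, need, avail) ? "SAFE" : "UNSAFE");
    return 0;
}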
2] Resource Request Algorithm
Let Request_i be the request vector for process Pi. If Request_i[j] = k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken -
1) If Request_i <= Need_i, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2) If Request_i <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3) Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:
Available = Available – Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i – Request_i
If the resulting resource allocation state is safe, the transaction is completed and process Pi is allocated its resources.
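The request check can be sketched as a short C function that pretends to allocate, tests safety, and rolls the state back if the result is unsafe. It reuses the arrays and the is_safe() function from the previous sketch; the function name request_resources is illustrative.

/* Minimal sketch: Banker's resource-request algorithm for process p.
   Assumes the N, M macros and is_safe() from the previous safety sketch. */
int request_resources(int p, int req[M],
                      int alloc[N][M], int need[N][M], int avail[M])
{
    int j;

    for (j = 0; j < M; j++)
        if (req[j] > need[p][j]) return -1;   /* exceeds maximum claim       */
    for (j = 0; j < M; j++)
        if (req[j] > avail[j]) return 0;      /* not available: Pi must wait */

    for (j = 0; j < M; j++) {                 /* pretend to allocate         */
        avail[j]    -= req[j];
        alloc[p][j] += req[j];
        need[p][j]  -= req[j];
    }
    if (is_safe(alloc, need, avail)) return 1;    /* grant the request       */

    for (j = 0; j < M; j++) {                 /* unsafe: roll the state back */
        avail[j]    += req[j];
        alloc[p][j] -= req[j];
        need[p][j]  += req[j];
    }
    return 0;                                 /* Pi must wait                */
}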
RESULT: -
REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) What is the meaning of deadlock?