Revised OS Lab Manual 2022-23

The document discusses various CPU scheduling algorithms including FCFS, SJF, priority-based, and RR scheduling. It provides details on their implementation and compares their performance based on criteria like CPU utilization, throughput, turnaround time, and waiting time.


SSBT’s College of Engineering & Technology, Bambhori, Jalgaon

Computer Engineering Department


Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 1
TITLE: - File and directory management.
AIM: - To Perform various file and directory operations.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -
In C and C++ there are library functions that can be used to invoke DOS interrupt services.
General DOS interrupt interface -
Declaration:
int intdos(union REGS *inregs, union REGS *outregs);
Remarks:
intdos executes DOS interrupt 0x21 to invoke a specified DOS function. The value of
inregs->h.ah specifies the DOS function to be invoked.
DOS functions used in the program –
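As an illustration, a minimal Turbo C sketch of calling intdos to invoke function 3CH (create a file) might look as follows. The file name is a made-up example, and the code relies on Turbo C's <dos.h> header and real-mode segmentation, so it will not compile on modern compilers:

```c
/* Turbo C only - <dos.h> and FP_OFF are not available on modern compilers. */
#include <dos.h>
#include <stdio.h>

int main(void)
{
    union REGS in, out;
    char filename[] = "TEST.TXT";   /* hypothetical example file name      */

    in.h.ah = 0x3C;                 /* DOS function 3CH: create file       */
    in.x.cx = 0;                    /* normal file attribute               */
    in.x.dx = FP_OFF(filename);     /* offset of the ASCIIZ file name      */
    intdos(&in, &out);              /* invoke INT 21H                      */

    if (out.x.cflag)                /* carry flag set => error code in AX  */
        printf("Error creating file, code %d\n", out.x.ax);
    else
        printf("File created, handle %d\n", out.x.ax);
    return 0;
}
```

The other functions listed below (41H, 56H, and so on) follow the same pattern: load the function number into AH, point DX (and, for rename, DI) at the file name, call intdos, and test the carry flag for errors.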
1) Function 3CH (INT 21H) – create a file.

2) Function 41H (INT 21H) – delete a file.

3) Function 56H (INT 21H) – rename a file.

4) Function 3FH (INT 21H) – read a file.

5) Function 40H (INT 21H) – write a file.

6) Function 39H (INT 21H) – create a directory.

7) Function 3AH (INT 21H) – delete a directory.

8) Function 3BH (INT 21H) – set working directory.

File Management operations -

Create File
Delete File
Rename File
Read File
Write File
Exit File
Directory Management operations-

Create Directory
Remove Directory
Set Working Directory
Exit
ALGORITHM: -
For creating a file -
1) declare file pointer.
2) store 0x3C in ah register.
3) get offset of file pointer in dx register.
4) if carry is generated
then error
else file is created successfully.

For deleting a file –


1) declare file pointer.
2) store 0x41 in ah register.
3) get offset of file pointer in dx register.
4) if carry is generated
then error
else file is deleted successfully.

For renaming a file –

1) open old file.


2) open new file.
3) store 0x56 in ah register.
4) get offset of file pointer in dx register of old file.
5) get offset of file pointer in di register of new file.
6) if carry is generated
then error
else file is renamed successfully.

For reading a file –
1) declare file pointer.
2) store 0x3F in ah register.
3) get offset of file pointer in dx register.
4) if carry is generated
then error
else shows number of bytes actually read.

For writing a file –


1) declare file pointer.
2) store 0x40 in ah register.
3) get offset of file pointer in dx register.
4) if carry is generated
then error
else shows number of bytes actually written.

For creating a directory -


1) declare file pointer.
2) store 0x39 in ah register.
3) get offset of file pointer in dx register.
4) if carry is generated
then error
else directory is created successfully.

For deleting a directory –


1) declare file pointer.
2) store 0x3A in ah register.
3) get offset of file pointer in dx register.
4) if carry is generated
then error
else directory is deleted successfully.

For setting the working directory –


1) declare file pointer.
2) store 0x3B in ah register.
3) get offset of file pointer in dx register.
4) if carry is generated
then error
else directory path is changed.

RESULT: -

REFERENCES: -
1. Advanced MS-DOS Programming, 2nd edition By Ray Duncan.
2. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) Define Operating System. Explain different Operating System services.
2) What is system call? Explain with example.
3) What is Command interpreter?
4) What is file? Explain various file attributes.
5) Explain any three DOS interrupts.

Name & Sign of Teacher

SSBT’s College of Engineering & Technology, Bambhori, Jalgaon
Computer Engineering Department

Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------

EXPERIMENT NO. 2
TITLE: - CPU Scheduling Algorithm.
AIM: - Implementation of CPU Scheduling Algorithms - FCFS, SJF, Priority based, RR
scheduling algorithms.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -

Scheduling –
Scheduling refers to the way processes are assigned to run on the available CPUs. This
assignment is carried out by software known as the scheduler: an OS module that selects the
next job to be admitted into the system and the next process to run.

When more than one process is runnable, the operating system must decide which one
runs first. The part of the operating system concerned with this decision is called the scheduler,
and the algorithm it uses is called the scheduling algorithm.

Scheduling criteria -

1) CPU Utilization –keep the CPU as busy as possible.

2) Throughput – processes that complete their execution per time unit.

3) Turnaround time –amount of time to execute a particular process.

4) Waiting time –amount of time a process has been waiting in the ready queue.

5) Response time – amount of time from when a request was submitted until the first
response is produced, not the time of final output (relevant for time-sharing environments).

The Scheduling algorithms can be divided into two categories with respect to how they
deal with clock interrupts.

1) Nonpreemptive Scheduling

Non-preemptive scheduling means that a running process retains control of the CPU and all the
allocated resources until it surrenders control to the OS on its own. This means that even if a
higher-priority process enters the system, the running process cannot be forced to give up
control. This philosophy is poorly suited to real-time systems, where higher-priority events need
immediate attention and must therefore be able to interrupt the currently running process.

2) Preemptive Scheduling

Preemptive scheduling, on the other hand, allows a higher-priority process to replace a
currently running process even if its time slice is not over and it has not requested any I/O. This
requires more frequent context switching, which reduces throughput, but it is better
suited for online, interactive processing where interactive users and high-priority processes
require immediate attention.

Scheduling Algorithms -

1] First-Come-First-Served (FCFS) Scheduling –

First-Come-First-Served is the simplest scheduling algorithm. Processes are dispatched
according to their arrival time in the ready queue. Being a non-preemptive discipline, once a
process has the CPU, it runs to completion.

FCFS is more predictable than most other schemes, since jobs are served strictly in
arrival order, but it is not useful for scheduling interactive users because it cannot guarantee
good response time. The code for FCFS scheduling is simple to write and understand.

The First-Come-First-Served algorithm is rarely used as the master scheme in modern
operating systems, but it is often embedded within other schemes.

Example:
Process Burst Time
P1 24
P2 3
P3 3
Assume the processes arrive at time 0 in the order P1, P2, P3.
The Gantt chart for the schedule: P1 (0–24), P2 (24–27), P3 (27–30).

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
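The FCFS computation above can be sketched in C. This is a minimal illustration, not part of the original manual, assuming all processes arrive at time 0:

```c
#include <assert.h>

/* FCFS: processes run in arrival order; with all arrivals at time 0,
 * each process waits for the total burst time of those before it.
 * Returns the average waiting time for n processes. */
double fcfs_avg_waiting(const int burst[], int n)
{
    int elapsed = 0, total_wait = 0, i;
    for (i = 0; i < n; i++) {
        total_wait += elapsed;   /* waiting time of process i        */
        elapsed += burst[i];     /* CPU stays busy until i finishes  */
    }
    return (double)total_wait / n;
}
```

For the bursts 24, 3, 3 this gives the average of 17 computed above; reordering the same jobs as 3, 3, 24 drops the average to 3, which motivates the SJF discipline that follows.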

2] Shortest-Job-First (SJF) Scheduling –

Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process)
with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is
available, it is assigned to the process that has the smallest next CPU burst.

The SJF scheduling is especially appropriate for batch jobs for which the run times are
known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time
for a given set of processes, it is probably optimal.

Example of Non-Preemptive SJF -

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (non-preemptive) schedule: P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16)

Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Example of Preemptive SJF –

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) schedule: P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16)

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
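The non-preemptive SJF example can be checked with a short C sketch. The 16-process limit and the simple linear scan are simplifying assumptions for illustration; this is not the manual's required program:

```c
#include <assert.h>

/* Non-preemptive SJF: whenever the CPU is free, run the arrived
 * process with the smallest burst time.  Supports up to 16 processes.
 * Returns the average waiting time, where wait = start - arrival. */
double sjf_avg_waiting(const int arrival[], const int burst[], int n)
{
    int done[16] = {0};
    int time = 0, total_wait = 0, finished = 0;

    while (finished < n) {
        int pick = -1, i;
        for (i = 0; i < n; i++)   /* smallest burst among arrived jobs */
            if (!done[i] && arrival[i] <= time &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { time++; continue; }  /* CPU idle: wait for arrival */
        total_wait += time - arrival[pick];  /* wait = start - arrival     */
        time += burst[pick];                 /* run to completion          */
        done[pick] = 1;
        finished++;
    }
    return (double)total_wait / n;
}
```

With the arrival times 0, 2, 4, 5 and bursts 7, 4, 1, 4 from the table, this reproduces the average waiting time of 4 shown above.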

3] Priority Scheduling -

The basic idea is straightforward: each process is assigned a priority, and the runnable
process with the highest priority is allowed to run. Equal-priority processes are scheduled in
FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority
scheduling algorithm.

An SJF algorithm is simply a priority algorithm where the priority is the inverse of the
(predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority, and vice
versa.

4] Round Robin Scheduling –

Round-robin (RR) is one of the simplest scheduling algorithms for processes in an
operating system. As the term is generally used, time slices are assigned to each process in equal
portions and in circular order, handling all processes without priority (also known as cyclic
executive). Round-robin scheduling is simple, easy to implement, and starvation-free. Round-
robin scheduling can also be applied to other scheduling problems, such as data packet
scheduling in computer networks. The name of the algorithm comes from the round-robin
principle known from other fields, where each person takes an equal share of something in turn.

Example:

Time Quantum = 20

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

With quantum 20 the schedule is: P1 (0–20), P2 (20–37), P3 (37–57), P4 (57–77), P1 (77–97),
P3 (97–117), P4 (117–121), P1 (121–134), P3 (134–154), P3 (154–162).
Typically RR gives a higher average turnaround time than SRTF, but better response time.
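The round-robin averages can be verified with a small C sketch, assuming all processes arrive at time 0 and at most 16 processes; this helper is illustrative only:

```c
#include <assert.h>

/* Round robin with a fixed time quantum; all processes arrive at
 * time 0 and n <= 16.  Returns the average waiting time, where
 * wait = completion - burst (since every arrival time is 0). */
double rr_avg_waiting(const int burst[], int n, int quantum)
{
    int rem[16], completion[16];
    int time = 0, left = n, i, total_wait = 0;

    for (i = 0; i < n; i++) rem[i] = burst[i];
    while (left > 0) {
        for (i = 0; i < n; i++) {          /* one pass = one RR round */
            if (rem[i] == 0) continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            time += slice;
            rem[i] -= slice;
            if (rem[i] == 0) { completion[i] = time; left--; }
        }
    }
    for (i = 0; i < n; i++)
        total_wait += completion[i] - burst[i];
    return (double)total_wait / n;
}
```

For the bursts 53, 17, 68, 24 with quantum 20, the completion times are 134, 37, 162 and 121, giving waiting times 81, 20, 94 and 97, i.e. an average of 73.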
ALGORITHM: -

1) include required header files.
2) define a structure for a process and create an array of structure variables.
3) write a main( ) function
I. declaration of variables
II. create a menu as
1. FCFS
2. SJF
3. Priority
4. Round Robin
5. Exit
III. if choice = 1
then call FCFS( ) function
i. read the number of processes and burst time of each process.
ii. calculate waiting time, Waiting Time = Start Time – Arrival Time,
and Turnaround Time = Waiting Time + Burst Time
iii. calculate average waiting time and average turnaround time.
iv. print the Waiting time and Turnaround time of each process and also print average
waiting time and average turnaround time.

IV. if Choice = 2
then call SJF( ) function
i. read the number of processes and burst time of each process.
ii. if P[i].Bt > P[i+1].Bt
then swap P[i] & P[i+1]
iii. calculate waiting time, Waiting Time = Start Time – Arrival Time,
and Turnaround Time = Waiting Time + Burst Time
iv. calculate average waiting time and average turnaround time.
v. print the Waiting time and Turnaround time of each process and also print average
waiting time and average turnaround time.

V. If Choice = 3
then call Priority( ) function
i. read the number of process and burst time for process.
ii. if P[i].pri > p[i+1].pri
then swap P[i] & P[i+1]
iii. calculate waiting time, Waiting Time = Start Time – Arrival Time
and Turnaround time = Waiting time + Burst Time
iv. calculate average waiting time and average turnaround time.
v. print the Waiting time and Turnaround time of each process and also print average
waiting time and average turnaround time.

VI. If Choice =4
then call RoundRobin( ) function
i. read the number of processes and burst time of each process.
ii. read time quantum.
iii. if p[i].Bt > 0
then If p[i].Bt > quantum
allocate the process for execution as per round robin scheduling algorithm.
iv. calculate waiting time, Waiting Time = Completion Time – Arrival Time – Burst Time,
and Turnaround Time = Waiting Time + Burst Time
v. calculate average waiting time and average turnaround time
vi. print the Waiting time and Turnaround time of each process and also print average
waiting time and average turnaround time.

VII. if choice = 5
then terminate the program
RESULT: -

REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) What is process? Explain process management.

2) What is scheduling? Explain the types of Schedulers.


3) What is multitasking?
4) What is Throughput, Turnaround time, Waiting time and Response time?
5) What is meant by CPU burst and I/O burst?

Name & Sign of Teacher


SSBT’s College of Engineering & Technology, Bambhori, Jalgaon
Computer Engineering Department

Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 3
TITLE: - Memory Management.
AIM: - Implementation of various Memory Management strategies - First Fit, Best Fit, Next Fit
and Worst Fit.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -

Memory management is the act of managing computer memory. The essential
requirement of memory management is to provide ways to dynamically allocate portions of
memory to programs at their request, and to free them for reuse when no longer needed. This is
critical to the computer system.

Several methods have been devised that increase the effectiveness of memory
management. Virtual memory systems separate the memory addresses used by a process from
actual physical addresses, allowing separation of processes and increasing the effectively
available amount of RAM using paging or swapping to secondary storage. The quality of the
virtual memory manager can have a big impact on overall system performance. Memory is
usually divided into fast primary storage and slow secondary storage. Memory management in
the operating system handles moving information between these two levels of memory.

The task of fulfilling an allocation request consists of finding a block of unused memory
of sufficient size. Even though this task seems simple, several issues make the implementation
complex. One such problem is fragmentation, which arises when there
are many small gaps between allocated memory blocks, each individually insufficient to fulfill the
request. Another is that allocator's metadata can inflate the size of (individually) small
allocations; this effect can be reduced by chunking. Usually, memory is allocated from a large
pool of unused memory area called the heap (also called the free store). Since the precise
location of the allocation is not known in advance, the memory is accessed indirectly, usually via
a pointer reference. The precise algorithm used to organize the memory area and allocate and
deallocate chunks is hidden behind an abstract interface and may use any of the methods
described below.

Memory Allocation Strategies –
1) First Fit – Allocate the first hole that is big enough. Searching can start at the beginning of
the set of holes, and can stop as soon as a free hole that is large enough is found.
2) Best Fit – Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.
3) Next Fit – Searching starts from where the previous search ended; that is, the next hole is
allocated after the previously allocated hole.
4) Worst Fit – Allocate the largest hole. Again, we must search the entire list, unless it is sorted
by size. This strategy produces the largest leftover hole, which may be more useful than the
smaller leftover hole from a best-fit approach.
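The strategies differ only in which qualifying hole they select, which a small C sketch can make concrete. The hole sizes used in the example are hypothetical; this is an illustration, not the manual's required program (next fit is omitted since it only adds a remembered start index):

```c
#include <assert.h>

/* Returns the index of the hole chosen for a request of size `req`
 * under the given strategy, or -1 if no hole is large enough.
 * strategy: 0 = first fit, 1 = best fit, 2 = worst fit. */
int choose_hole(const int hole[], int n, int req, int strategy)
{
    int pick = -1, i;
    for (i = 0; i < n; i++) {
        if (hole[i] < req) continue;               /* too small            */
        if (strategy == 0) return i;               /* first fit: stop now  */
        if (pick < 0 ||
            (strategy == 1 && hole[i] < hole[pick]) ||  /* best: smallest  */
            (strategy == 2 && hole[i] > hole[pick]))    /* worst: largest  */
            pick = i;
    }
    return pick;
}
```

For holes of 100, 500, 200, 300 and 600 KB and a 212 KB request, first fit picks the 500 KB hole, best fit the 300 KB hole, and worst fit the 600 KB hole.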
ALGORITHM: -
1) read the number of blocks.
2) read the size of each block.
3) read the requested blocks.
4) read the size of each requested block.
5) read the user's choice
if choice = 1
then allocate the first free hole that is large enough (First Fit)
else if choice = 2
then allocate the smallest free hole that is large enough (Best Fit)
else if choice = 3
then, starting from where the previous search ended, allocate the next free hole that is large enough (Next Fit)
else if choice = 4
then allocate the largest free hole (Worst Fit)
else if choice = 5
then terminate the program.
RESULT: -

REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) What is Memory Management?
2) Explain Contiguous and Non – Contiguous memory allocation.
3) Explain segmentation and paging concepts.

4) Explain memory allocation policies.
5) What is Virtual Memory?

Name & Sign of Teacher

SSBT’s College of Engineering & Technology, Bambhori, Jalgaon
Computer Engineering Department

Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 4
TITLE: - Page Replacement Algorithm.
AIM: - Implementation of Page Replacement Algorithms - FIFO, LRU and Optimal.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -
Page Replacement -
Initially, execution of a process starts with none of its pages in memory. Each page faults at
least once, when it is first referenced. But it may happen that some pages are never used; those
pages that are never referenced are never brought into memory. This saves load time and
memory space, and the degree of multiprogramming can be increased so that more ready
processes can be loaded and executed. We may, however, come across a situation where a
process that had not been accessing certain pages suddenly starts accessing them. If the degree
of multiprogramming has been raised without taking this into account, memory becomes
over-allocated. Over-allocation shows up when there is a page fault, the operating system finds
the required page in the backing store, but cannot bring the page into memory for want of free
frames.
There are three page replacement policies to be followed in demand paging -
1] FIFO Page Replacement –
The first-in-first-out policy simply removes pages in the order they arrived in main
memory: we remove the page that arrived in memory earliest. FIFO is the simplest page
replacement algorithm; when a replacement is required, the oldest page in memory is the victim.
The performance of the FIFO algorithm is not always good. The replaced page may contain an
initialization module that needed to be executed only once and is therefore no longer needed; on
the other hand, it may contain a heavily used variable in constant use. Such a page, once
swapped out, will cause a page fault almost immediately so that it can be brought back in. Thus
the number of page faults increases, resulting in slower process execution.
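A FIFO fault counter can be sketched in C as follows; the fixed limit of 8 frames is a simplifying assumption for illustration:

```c
#include <assert.h>

/* Counts page faults for FIFO replacement with `frames` frames
 * (frames <= 8).  The oldest resident page is always the victim;
 * cycling the index `next` over the frame array reproduces the
 * FIFO queue, since frames fill in arrival order. */
int fifo_faults(const int ref[], int len, int frames)
{
    int frame[8], next = 0, used = 0, faults = 0, i, j, hit;

    for (i = 0; i < len; i++) {
        hit = 0;
        for (j = 0; j < used; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;                           /* page resident  */
        faults++;
        if (used < frames) frame[used++] = ref[i];   /* free frame     */
        else { frame[next] = ref[i];                 /* evict oldest   */
               next = (next + 1) % frames; }
    }
    return faults;
}
```

On the well-known reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three frames, this counts 15 faults, illustrating how poorly FIFO can behave.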

2] LRU Page Replacement –
The main distinction between the FIFO and optimal algorithms is that FIFO uses the time
when a page was brought into memory (it looks back), whereas the optimal algorithm uses the
time when a page is next to be used (it looks ahead). If the recent past is used as an
approximation of the near future, then we replace the page that has not been used for the longest
period of time. This is the least recently used (LRU) algorithm: remove the page whose last use
is farthest from the current time.

3] Optimal Page Replacement –


The optimal page replacement algorithm has the lowest page-fault rate of all algorithms:
replace the page that will not be used for the longest period of time.
Implementation of the optimal page replacement algorithm is difficult, since it requires
a priori knowledge of the future reference string. Hence the optimal page replacement algorithm
serves mainly as a benchmark for comparison.

ALGORITHM: -
1) read the size of page frame.
2) read the length of string.
3) read the reference string.
4) read the number of choice as follows
1. FIFO
2. LRU
3. Optimal
if choice = 1
then replace the page that was brought into memory first (FIFO).
if choice = 2
then replace the least recently used page (LRU).
if choice = 3
then replace the page whose next reference lies farthest in the future (Optimal).
RESULT: -

REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
2. Modern Operating System, 2nd edition By Andrew Tanenbaum.

QUESTIONS FOR VIVA: -

1) Why paging is used?

2) What is fragmentation? Explain different types of fragmentation?

3) What are demand- and pre-paging?

4) What is page fault?


5) Explain page replacement policies.

Name & Sign of Teacher

SSBT’s College of Engineering & Technology, Bambhori, Jalgaon
Computer Engineering Department

Name of student:
Date of Performance: Date of Completion:
---------------------------------------------------------------------------------------------------------------------
EXPERIMENT NO. 5
TITLE: - Banker’s Deadlock Avoidance Algorithm.
AIM: - Implementation of Banker’s deadlock avoidance algorithm.
HARDWARE / SOFTWARE REQUIREMENTS: - Turbo C, PC, Mouse.
THEORY: -

The Banker's algorithm is a resource-allocation and deadlock-avoidance algorithm
developed by Edsger Dijkstra. It tests for safety by simulating the allocation of predetermined
maximum possible amounts of all resources, and then makes a "safe-state" check to test for
possible deadlock conditions for all other pending activities, before deciding whether the
allocation should be allowed to continue. The algorithm was developed during the design of the
THE operating system and was originally described (in Dutch) in EWD108. The name is by
analogy with the way bankers account for liquidity constraints.

The Banker's algorithm is run by the operating system whenever a process requests
resources. The algorithm avoids deadlock by denying or postponing the request if it determines
that accepting the request could put the system in an unsafe state (one where deadlock could
occur). When a new process enters the system, it must declare the maximum number of instances
of each resource type it may need; this maximum may not exceed the total number of resources
in the system. Also, when a process gets all its requested resources, it must return them in a
finite amount of time.

Assume that the system distinguishes between four types of resources (A, B, C and D).
The algorithm works from a snapshot of how those resources are distributed among processes
at an instant just before a new request arrives. The types and numbers of resources are
abstracted here; real systems would deal with much larger quantities of each resource.

Safe and Unsafe States -

A state (as in the above example) is considered safe if it is possible for all processes to
finish executing (terminate). Since the system cannot know when a process will terminate, or
how many resources it will have requested by then, the system assumes that all processes will

eventually attempt to acquire their stated maximum resources and terminate soon afterward. This
is a reasonable assumption in most cases since the system is not particularly concerned with how
long each process runs (at least not from a deadlock avoidance perspective). Also, if a process
terminates without acquiring its maximum resources, it only makes it easier on the system. Given
that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of
requests by the processes that would allow each to acquire its maximum resources and then
terminate (returning its resources to the system). Any state where no such set exists is an unsafe
state.

ALGORITHM: -
1] Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as
follows -
1) Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, 2, ..., n-1.
2) Find an index i such that both
a. Finish[i] == false
b. Need_i <= Work
If no such i exists, go to step 4.
3) Work = Work + Allocation_i
Finish[i] = true
Go to step 2.
4) If Finish[i] == true for all i, then the system is in a safe state.
2] Resource Request Algorithm
Let Request_i be the request vector for process Pi. If Request_i[j] = k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken -
1) If Request_i <= Need_i, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim.
2) If Request_i <= Available, go to step 3. Otherwise, Pi must wait, since the resources
are not available.
3) Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available – Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i – Request_i
If the resulting resource-allocation state is safe, the transaction is completed and process Pi is
allocated its resources.
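The safety algorithm above can be sketched in C. The 5-process, 3-resource dimensions are assumptions chosen to match a common textbook state; this is an illustrative sketch, not the manual's required program:

```c
#include <assert.h>

#define NP 5   /* number of processes      */
#define NR 3   /* number of resource types */

/* Safety algorithm: returns 1 if the state described by Available,
 * Allocation and Need is safe, 0 otherwise.  Repeatedly finds a
 * process whose Need fits in Work, lets it finish, and reclaims
 * its allocation - exactly steps 1-4 above. */
int is_safe(int available[NR], int allocation[NP][NR], int need[NP][NR])
{
    int work[NR], finish[NP] = {0};
    int i, j, progress, done = 0;

    for (j = 0; j < NR; j++) work[j] = available[j];   /* Work = Available */
    do {
        progress = 0;
        for (i = 0; i < NP; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (j = 0; j < NR; j++)          /* Need_i <= Work ?        */
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (!ok) continue;
            for (j = 0; j < NR; j++)          /* Pi terminates, returns  */
                work[j] += allocation[i][j];  /* its allocation to Work  */
            finish[i] = 1;
            progress = 1;
            done++;
        }
    } while (progress);
    return done == NP;   /* safe iff every Finish[i] became true */
}
```

With Available = (3, 3, 2) and the sample Allocation/Need matrices used in the test below, the state is safe (one safe sequence is P1, P3, P4, P2, P0); with nothing available, no process can finish and the same matrices yield an unsafe state.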

RESULT: -

REFERENCES: -
1. Operating System Concepts, 6th edition By Abraham Silberschatz, Peter Baer Galvin.
QUESTIONS FOR VIVA: -
1) What is the meaning of deadlock?

2) When is a system in safe state?

3) What is deadlock avoidance?


4) How can hold and wait be prevented in a deadlock prevention approach?
5) Explain Banker's algorithm.

Name & Sign of Teacher

