

An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is software that performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

Some popular Operating Systems include Linux, Windows, OS X, VMS, OS/400, AIX, z/OS, etc.

Definition
An operating system is a program that acts as an interface between the
user and the computer hardware and controls the execution of all kinds of
programs.

Following are some of the important functions of an operating system.

 Memory Management

 Processor Management

 Device Management

 File Management

 Security
 Control over system performance

 Job accounting

 Error detecting aids

 Coordination between other software and users

Memory Management
Memory management refers to management of Primary Memory or Main
Memory. Main memory is a large array of words or bytes where each word
or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in the main memory. An Operating System does the following activities for memory management −

 Keeps track of primary memory, i.e., which parts of it are in use by whom, and which parts are not in use.

 In multiprogramming, the OS decides which process will get memory when and
how much.

 Allocates the memory when a process requests it to do so.

 De-allocates the memory when a process no longer needs it or has been terminated.
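The bookkeeping described above can be sketched as a toy frame table. This is purely illustrative (the `MemoryManager` class, the frame granularity and the list representation are assumptions made for the sketch, not how a real OS stores this):

```python
# Toy frame table: which frames of primary memory are in use, and by whom.
class MemoryManager:
    def __init__(self, nframes):
        self.owner = [None] * nframes        # None = frame not in use

    def allocate(self, pid, nframes):
        """Give nframes free frames to pid; return their indices."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < nframes:
            return []                        # request cannot be satisfied
        for i in free[:nframes]:
            self.owner[i] = pid
        return free[:nframes]

    def deallocate(self, pid):
        """Release every frame held by pid (e.g., process terminated)."""
        self.owner = [None if o == pid else o for o in self.owner]
```

For instance, `MemoryManager(8).allocate("P1", 3)` marks three free frames as owned by P1, and `deallocate("P1")` returns them once the process is terminated.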

Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management −

 Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.

 Allocates the processor (CPU) to a process.

 De-allocates the processor when a process no longer requires it.

Device Management
An Operating System manages device communication via their respective
drivers. It does the following activities for device management −

 Keeps track of all devices. The program responsible for this task is known as the I/O controller.

 Decides which process gets the device when and for how much time.

 Allocates the device in an efficient way.

 De-allocates devices.

File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.

An Operating System does the following activities for file management −

 Keeps track of information, location, uses, status etc. The collective facilities are often known as the file system.

 Decides who gets the resources.

 Allocates the resources.

 De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −

 Security − By means of passwords and similar other techniques, it prevents unauthorized access to programs and data.

 Control over system performance − Recording delays between a request for a service and the response from the system.

 Job accounting − Keeping track of time and resources used by various jobs
and users.
 Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error detecting aids.

 Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.

TYPES OF OS:
Operating systems have existed since the very first computer generation and they keep evolving with time. In this chapter, we will discuss some of the important types of operating systems which are most commonly used.

Batch operating system

The users of a batch operating system do not interact with the computer directly. Each user prepares a job on an off-line device like punched cards and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group. The programmers leave their programs with the operator, and the operator then sorts the programs with similar requirements into batches.

The problems with Batch Systems are as follows −

 Lack of interaction between the user and the job.

 The CPU is often idle, because the speed of the mechanical I/O devices is slower than that of the CPU.

 Difficult to provide the desired priority.

Time-sharing operating systems

Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming. Processor time which is shared among multiple users simultaneously is termed as time-sharing.

The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in the case of Multiprogrammed batch systems, the objective is to maximize processor use, whereas in Time-Sharing Systems, the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, then each user can get a time quantum. When the user submits a command, the response time is a few seconds at most.

The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Computer systems that were designed primarily as batch systems have been modified to time-sharing systems.

Advantages of Timesharing operating systems are as follows −

 Provides the advantage of quick response.

 Avoids duplication of software.

 Reduces CPU idle time.

Disadvantages of Time-sharing operating systems are as follows −

 Problem of reliability.

 Question of security and integrity of user programs and data.

 Problem of data communication.

Distributed operating System

Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors accordingly.

The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function. These processors are referred to as sites, nodes, computers, and so on.
Disadvantages of Batch OS
1. Starvation

Batch processing suffers from starvation. Suppose there are five jobs J1, J2, J3, J4 and J5 present in the batch. If the execution time of J1 is very high, then the other four jobs will never get executed, or they will have to wait for a very long time. Hence the other processes get starved.

2. Not Interactive

Batch Processing is not suitable for jobs which are dependent on the user's input. If a job requires the input of two numbers from the console, then it will never get it in the batch processing scenario, since the user is not present at the time of execution.

Multiprogramming Operating System

Multiprogramming is an extension of batch processing where the CPU is kept always busy. Each process needs two types of system time: CPU time and I/O time.

In a multiprogramming environment, while a process performs its I/O, the CPU can start the execution of other processes. Therefore, multiprogramming improves the efficiency of the system.

Multiprocessing Operating System

In Multiprocessing, parallel computing is achieved. There is more than one processor present in the system, which can execute more than one process at the same time. This increases the throughput of the system.

Real Time Operating System

In Real Time systems, each job carries a certain deadline within which it is supposed to be completed; otherwise there will be a huge loss, or even if the result is produced, it will be completely useless.

Real Time systems are applied, for example, in military applications: if you want to launch a missile, the missile is supposed to be launched with a certain precision.


Process States
State Diagram

The process, from its creation to completion, passes through various states. The
minimum number of states is five.

The names of the states are not standardized; however, a process may be in one of the following states during execution.
1. New

A program which is going to be picked up by the OS into the main memory is called a
new process.

2. Ready

Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks new processes from the secondary memory and puts all of them in the main memory.

The processes which are ready for execution and reside in the main memory are called ready state processes. There can be many processes present in the ready state.

3. Running

One of the processes from the ready state will be chosen by the OS depending upon the scheduling algorithm. Hence, if we have only one CPU in our system, the number of running processes at a particular time will always be one. If we have n processors in the system then we can have n processes running simultaneously.

4. Block or wait

From the Running state, a process can make the transition to the block or wait state
depending upon the scheduling algorithm or the intrinsic behavior of the process.

When a process waits for a certain resource to be assigned or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.

5. Completion or termination

When a process finishes its execution, it comes to the termination state. All the context of the process (Process Control Block) will also be deleted, and the process will be terminated by the Operating system.
6. Suspend ready

A process in the ready state which is moved to secondary memory from the main memory due to lack of resources (mainly primary memory) is said to be in the suspend ready state.

If the main memory is full and a higher priority process comes for execution, then the OS has to make room for the process in the main memory by moving a lower priority process out into the secondary memory. The suspend ready processes remain in the secondary memory until the main memory becomes available.

7. Suspend wait

Instead of removing a process from the ready queue, it is better to remove a blocked process which is waiting for some resource in the main memory. Since it is already waiting for some resource to become available, it is better if it waits in the secondary memory and makes room for the higher priority process. These processes complete their execution once the main memory becomes available and their wait is finished.
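The transitions among the seven states above can be summarized as a small transition table (a sketch; the state names and the set of legal transitions follow this text's description, and real systems differ in details):

```python
# Allowed transitions between the process states described above.
TRANSITIONS = {
    "new":           {"ready"},                        # admitted into main memory
    "ready":         {"running", "suspend ready"},     # dispatched, or swapped out
    "running":       {"ready", "block/wait", "terminated"},
    "block/wait":    {"ready", "suspend wait"},        # resource granted, or swapped out
    "suspend ready": {"ready"},                        # main memory available again
    "suspend wait":  {"suspend ready", "block/wait"},  # wait finished, or swapped in
    "terminated":    set(),                            # PCB deleted, no way out
}

def can_move(src, dst):
    """True if the model above allows moving from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note, for example, that a new process cannot jump straight to running; it must pass through the ready state first.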

Operations on the Process

1. Creation

Once the process is created, it comes into the ready queue (main memory) and is ready for execution.

2. Scheduling

Out of the many processes present in the ready queue, the Operating system chooses one process and starts executing it. Selecting the process which is to be executed next is known as scheduling.

3. Execution

Once the process is scheduled for execution, the processor starts executing it. A process may come to the blocked or wait state during execution; in that case the processor starts executing the other processes.
4. Deletion/killing

Once the purpose of the process is over, the OS will kill the process. The context of the process (PCB) will be deleted and the process will be terminated by the Operating System.
What is Context Switch?


1. Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a Context Switch.
2. The context of a process is represented in the Process Control Block (PCB) of a process; it includes the value of the CPU registers, the process state and memory-management information. When a context switch occurs, the Kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
3. Context switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds range from 1 to 1000 microseconds.
4. Context Switching has become such a performance bottleneck that programmers are using new structures (threads) to avoid it whenever and wherever possible.
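Point 2 can be sketched in a few lines of Python. The `PCB` fields and the register dictionary are illustrative, not taken from any real kernel:

```python
# Sketch of a context switch: save the old context into its PCB,
# then load the saved context of the new process.
class PCB:
    def __init__(self, pid):
        self.pid = pid
        self.state = "ready"
        self.registers = {}                 # saved CPU register values

def context_switch(cpu_registers, old, new):
    old.registers = dict(cpu_registers)     # save context of the old process
    old.state = "ready"
    cpu_registers.clear()                   # pure overhead: no useful work here
    cpu_registers.update(new.registers)     # load saved context of the new process
    new.state = "running"
```

After the switch, the CPU registers hold exactly what the new process's PCB had saved, and the old process can later be resumed from its own PCB.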
What is CPU Scheduling?
CPU scheduling is a process which allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to unavailability of some resource like I/O etc., thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue to be executed. The selection process is carried out by the short-term
scheduler (or CPU scheduler). The scheduler selects from among the processes in memory
that are ready to execute, and allocates the CPU to one of them.

In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle. This is an overhead, since it wastes time and causes the problem of starvation. However, in multiprogramming systems, the CPU doesn't remain idle during the waiting time of the process; it starts executing other processes. The Operating System has to define which process the CPU will be given.

In Multiprogramming systems, the Operating system schedules the processes on the CPU to have maximum utilization of it, and this procedure is called CPU scheduling. The Operating System uses various scheduling algorithms to schedule the processes.

It is the task of the short term scheduler to schedule the CPU for the processes present in the Job Pool. Whenever the running process requests some I/O operation, the short term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting. While the process is in the waiting state, the short term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.

Why do we need Scheduling?

In Multiprogramming, if the long term scheduler picks more I/O bound processes, then most of the time the CPU remains idle. The task of the Operating system is to optimize the utilization of resources.

If most of the running processes change their state from running to waiting, then there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs to get the optimal utilization of the CPU and to avoid the possibility of deadlock.

Another component involved in the CPU scheduling function is the Dispatcher. The
dispatcher is the module that gives control of the CPU to the process selected by the short-
term scheduler. This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where it left off last time.

The dispatcher should be as fast as possible, given that it is invoked during every process
switch. The time taken by the dispatcher to stop one process and start another process is
known as the Dispatch Latency. Dispatch Latency can be explained using the below
figure:

Types of CPU Scheduling

CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for an I/O request, or invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.

In circumstances 1 and 4, there is no choice in terms of scheduling. A new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.
When Scheduling takes place only under circumstances 1 and 4, we say the scheduling
scheme is non-preemptive; otherwise the scheduling scheme is preemptive.

Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases the CPU either by terminating or by switching to the
waiting state.
This scheduling method was used by Microsoft Windows 3.1 and by the Apple Macintosh operating systems.
It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

Preemptive Scheduling
In this type of scheduling, the tasks are usually assigned priorities. At times it is necessary to run a certain task that has a higher priority before another task, even though that task is running. Therefore, the running task is interrupted for some time and resumed later when the higher priority task has finished its execution.
Various Times related to the Process

1. Arrival Time
The time at which the process enters into the ready queue is called the arrival time.

2. Burst Time
The total amount of time required by the CPU to execute the whole process is called the
Burst Time. This does not include the waiting time. It is confusing to calculate the execution
time for a process even before executing it hence the scheduling problems based on the
burst time cannot be implemented in reality.

3. Completion Time
The Time at which the process enters into the completion state or the time at which the
process completes its execution, is called completion time.

4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called
Turnaround time.
5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called
waiting time.

6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU
is called Response Time.
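With made-up numbers for one process, the relations among these times reduce to two subtractions:

```python
# One process's timeline: arrives at 2, first gets the CPU at 6,
# needs 4 units of CPU time, and finishes at 12 (illustrative values).
arrival, burst, completion, first_run = 2, 4, 12, 6

turnaround = completion - arrival    # total time in the system: 10
waiting = turnaround - burst         # time spent waiting for the CPU: 6
response = first_run - arrival       # time until the first CPU allocation: 4
```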

Scheduling Algorithms
To decide which process to execute first and which to execute last to achieve maximum CPU utilisation, computer scientists have defined some algorithms. They are:

1. First Come First Serve (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling

First Come First Serve Scheduling

In the "First come first serve" scheduling algorithm, as the name suggests, the process which arrives first gets executed first, or we can say that the process which requests the CPU first gets the CPU allocated first.

 First Come First Serve is just like a FIFO (First In First Out) queue data structure, where the data element which is added to the queue first is the one that leaves the queue first.
 This is used in Batch Systems.
 It's easy to understand and implement programmatically, using a queue data structure, where a new process enters through the tail of the queue, and the scheduler selects a process from the head of the queue.
 A perfect real life example of FCFS scheduling is buying tickets at a ticket counter.

The first come first serve (FCFS) scheduling algorithm simply schedules the jobs according to their arrival time. The job which comes first in the ready queue will get the CPU first. The lesser the arrival time of the job, the sooner the job will get the CPU. FCFS scheduling may cause the problem of starvation if the burst time of the first process is the longest among all the jobs.

Advantages of FCFS
o Simple

o Easy

o First come, First serve

Disadvantages of FCFS
1. The scheduling method is non-preemptive; the process will run to completion.

2. Due to the non-preemptive nature of the algorithm, the problem of starvation may
occur.

3. Although it is easy to implement, it is poor in performance, since the average waiting time is higher compared to other scheduling algorithms.

Example
Let's take an example of the FCFS scheduling algorithm. In the following schedule, there are 5 processes with process IDs P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at time 2, P3 at time 3 and P4 at time 4 in the ready queue. The processes and their respective Arrival and Burst times are given in the following table.
The Turnaround time and the waiting time are calculated by using the following formulas.

1. Turn Around Time = Completion Time - Arrival Time
2. Waiting Time = Turn Around Time - Burst Time

The average waiting time is determined by summing the respective waiting times of all the processes and dividing the sum by the total number of processes.

Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time

P0           0              2            2                 2                  0
P1           1              6            8                 7                  1
P2           2              4            12                10                 6
P3           3              9            21                18                 9
P4           4              12           33                29                 17

Avg Waiting Time = (0+1+6+9+17)/5 = 33/5 units
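The schedule can be reproduced with a short simulation (a sketch; the tuple layout and function name are choices made for brevity):

```python
# Minimal FCFS simulation over (pid, arrival, burst) tuples,
# assumed sorted by arrival time.
def fcfs(processes):
    time = 0
    results = []
    for pid, arrival, burst in processes:
        time = max(time, arrival)          # CPU may be idle until the job arrives
        completion = time + burst
        turnaround = completion - arrival  # Turn Around Time
        waiting = turnaround - burst       # Waiting Time
        results.append((pid, completion, turnaround, waiting))
        time = completion
    return results

jobs = [("P0", 0, 2), ("P1", 1, 6), ("P2", 2, 4), ("P3", 3, 9), ("P4", 4, 12)]
table = fcfs(jobs)
avg_wait = sum(r[3] for r in table) / len(table)
```

Running it is an easy way to check every row of such a table by hand.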

(Gantt chart: | P0 | P1 | P2 | P3 | P4 | with boundaries at 0, 2, 8, 12, 21 and 33)

Problems with FCFS Scheduling

Below we have a few shortcomings or problems with the FCFS scheduling algorithm:
1. It is a Non Pre-emptive algorithm, which means process priority doesn't matter.

If a process with very low priority is being executed, such as a daily routine backup process which takes a long time, and all of a sudden some other high priority process arrives, like an interrupt to avoid a system crash, the high priority process will have to wait, and hence in this case the system will crash, just because of improper process scheduling.

2. Not optimal Average Waiting Time.

3. Resource utilization in parallel is not possible, which leads to the Convoy Effect, and hence poor resource (CPU, I/O etc.) utilization.

Convoy Effect in FCFS

FCFS may suffer from the convoy effect if the burst time of the first job is the highest among all. As in real life, if a convoy is passing along the road, then other people may get blocked until it passes completely. This can be simulated in the Operating System as well.

If the CPU gets processes with higher burst times at the front end of the ready queue, then processes with lower burst times may get blocked, which means they may never get the CPU if the job in execution has a very high burst time. This is called the convoy effect or starvation.

Shortest Job First (SJF) Scheduling

Shortest Job First scheduling works on the process with the shortest burst time or duration first.
 This is the best approach to minimize waiting time.
 This is used in Batch Systems.
 It is of two types:
1. Non Pre-emptive
2. Pre-emptive
 To successfully implement it, the burst time/duration of the processes should be known to the processor in advance, which is practically not feasible all the time.
 This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the arrival time is 0 for all, or the arrival time is the same for all).

Non Pre-emptive Shortest Job First

Consider the below processes available in the ready queue for execution, with arrival time 0 for all and given burst times.

As you can see in the GANTT chart above, the process P4 will be picked up first as it has the shortest burst time, then P2, followed by P3 and at last P1.

We scheduled the same set of processes using the First come first serve algorithm in the previous tutorial, and got an average waiting time of 18.75 ms, whereas with SJF the average waiting time comes out to 4.5 ms.
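The process table for this example is not reproduced here, but burst times of 21, 3, 6 and 2 ms for P1 to P4 are consistent with the quoted averages (18.75 ms for FCFS, 4.5 ms for SJF); treat those values as an inference. A sketch of non-pre-emptive SJF with all arrivals at time 0:

```python
# Non-preemptive SJF, all processes arriving at time 0.
def sjf(bursts):
    """bursts: dict pid -> burst time. Returns (execution order, waits)."""
    order = sorted(bursts, key=bursts.get)    # shortest burst first
    time, waits = 0, {}
    for pid in order:
        waits[pid] = time                     # waiting = start time - arrival (0)
        time += bursts[pid]
    return order, waits

# Burst times inferred from the averages quoted in the text.
order, waits = sjf({"P1": 21, "P2": 3, "P3": 6, "P4": 2})
avg_wait = sum(waits.values()) / len(waits)   # 4.5 ms
```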

Problem with Non Pre-emptive SJF

If the arrival times of the processes are different, which means all the processes are not available in the ready queue at time 0 and some jobs arrive after some time, then sometimes a process with a short burst time has to wait for the current process's execution to finish, because in Non Pre-emptive SJF, on arrival of a process with a short duration, the existing job/process's execution is not halted/stopped to execute the short job first.

This leads to the problem of Starvation, where a shorter process has to wait for a long time until the current longer process gets executed. This happens if shorter jobs keep coming, but it can be solved using the concept of aging.

Pre-emptive Shortest Job First

In Pre-emptive Shortest Job First Scheduling, jobs are put into the ready queue as they arrive, but when a process with a shorter remaining time arrives, the existing process is preempted or removed from execution, and the shorter job is executed first.

As you can see in the GANTT chart above, as P1 arrives first, its execution starts immediately, but just after 1 ms, process P2 arrives with a burst time of 3 ms, which is less than the remaining time of P1 (1 ms done, 20 ms left); hence P1 is preempted and process P2 is executed.

As P2 is getting executed, after 1 ms, P3 arrives, but it has a burst time greater than the remaining time of P2, hence the execution of P2 continues. After another millisecond, P4 arrives with a burst time of 2 ms, but P2 (2 ms done, 1 ms left) now has the shortest remaining time, so it runs to completion. P4 is then picked up and finishes, then P3 will get executed and at last P1.

The Pre-emptive SJF is also known as Shortest Remaining Time First, because at any given point of time, the job with the shortest remaining time is executed first.
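The schedule can be checked with a unit-time simulation of the strict shortest-remaining-time rule (a sketch; P3's burst of 6 ms is an assumption, since the text only says it exceeds what P2 has left):

```python
# Pre-emptive SJF / Shortest Remaining Time First, one time unit at a time.
def srtf(procs):
    """procs: dict pid -> (arrival, burst). Returns completion times."""
    remaining = {p: b for p, (a, b) in procs.items()}
    time, done = 0, {}
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= time]
        if not ready:
            time += 1                                  # CPU idle, nothing arrived yet
            continue
        cur = min(ready, key=lambda p: remaining[p])   # shortest remaining time wins
        remaining[cur] -= 1
        time += 1
        if remaining[cur] == 0:
            del remaining[cur]
            done[cur] = time                           # completion time
    return done

completion = srtf({"P1": (0, 21), "P2": (1, 3), "P3": (2, 6), "P4": (3, 2)})
```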

Priority Scheduling
In Priority scheduling, there is a priority number assigned to each process. In some systems, the lower the number, the higher the priority; while in others, the higher the number, the higher the priority. The process with the higher priority among the available processes is given the CPU. There are two types of priority scheduling algorithms: Preemptive priority scheduling and Non Preemptive priority scheduling.

The priority number assigned to each process may or may not vary. If the priority number doesn't change throughout the life of the process, it is called static priority, while if it keeps changing at regular intervals, it is called dynamic priority.

Non Preemptive Priority Scheduling

In Non Preemptive Priority scheduling, the processes are scheduled according to the priority number assigned to them. Once the process gets scheduled, it will run till completion. Generally, the lower the priority number, the higher is the priority of the process. People might get confused with the priority numbers, hence in exams such as GATE, it is clearly mentioned which one is the highest priority and which one is the lowest.

Example
In this example, there are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their priorities, Arrival Times and Burst times are given in the table.

Process ID   Priority   Arrival Time   Burst Time

1            2          0              3
2            6          2              5
3            3          1              4
4            5          4              2
5            7          6              9
6            4          5              4
7            10         7              10

We can prepare the Gantt chart according to the Non Preemptive priority scheduling.

The process P1 arrives at time 0 with a burst time of 3 units and priority number 2. Since no other process has arrived till now, the OS will schedule it immediately.

During the execution of P1, two more processes P2 and P3 arrive. Since the priority of P3 is 3, the CPU will execute P3 over P2.

During the execution of P3, all the processes become available in the ready queue. The process with the lowest priority number will be given priority. Since P6 has priority number 4, it will be executed just after P3.

After P6, P4 has the lowest priority number among the available processes; it will get executed for its whole burst time.

Since all the jobs are available in the ready queue, all the jobs will get executed according to their priorities. If two jobs have the same priority number assigned to them, the one with the earlier arrival time will be executed first.

From the GANTT Chart prepared, we can determine the completion time of every process.
The turnaround time, waiting time and response time will be determined.

1. Turn Around Time = Completion Time - Arrival Time
2. Waiting Time = Turn Around Time - Burst Time

Process Id   Priority   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time

1            2          0              3            3                 3                 0
2            6          2              5            18                16                11
3            3          1              4            7                 6                 2
4            5          4              2            13                9                 7
5            7          6              9            27                21                12
6            4          5              4            11                6                 2
7            10         7              10           37                30                20

Avg Waiting Time = (0+11+2+7+12+2+20)/7 = 54/7 units
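The completion times can be verified with a small simulation (a sketch; it assumes a lower priority number means higher priority, with ties broken by arrival time, as in the walkthrough):

```python
# Non-preemptive priority scheduling over (pid, priority, arrival, burst) tuples.
def priority_np(procs):
    time, pending, rows = 0, sorted(procs, key=lambda p: p[2]), []
    while pending:
        ready = [p for p in pending if p[2] <= time]
        if not ready:
            time = min(p[2] for p in pending)        # jump to the next arrival
            continue
        # lowest priority number wins; arrival time breaks ties
        job = min(ready, key=lambda p: (p[1], p[2]))
        pending.remove(job)
        pid, pri, arrival, burst = job
        time += burst                                # runs to completion
        rows.append((pid, time, time - arrival, time - arrival - burst))
    return rows                                      # (pid, completion, TAT, waiting)

data = [(1, 2, 0, 3), (2, 6, 2, 5), (3, 3, 1, 4), (4, 5, 4, 2),
        (5, 7, 6, 9), (6, 4, 5, 4), (7, 10, 7, 10)]
rows = priority_np(data)
avg_wait = sum(r[3] for r in rows) / len(rows)       # 54/7
```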

Preemptive Priority Scheduling

In Preemptive Priority Scheduling, at the time of arrival of a process in the ready queue, its priority is compared with the priorities of the other processes present in the ready queue, as well as with that of the process being executed by the CPU at that point of time. The one with the highest priority among all the available processes will be given the CPU next.

The difference between preemptive priority scheduling and non preemptive priority
scheduling is that, in the preemptive priority scheduling, the job which is being executed
can be stopped at the arrival of a higher priority job.

Once all the jobs are available in the ready queue, the algorithm behaves as non-preemptive priority scheduling, which means the scheduled job will run to completion and no preemption will be done.
Example
There are 7 processes P1, P2, P3, P4, P5, P6 and P7 given. Their respective priorities,
Arrival Times and Burst times are given in the table below.

Process Id   Priority   Arrival Time   Burst Time

1            2 (L)      0              1
2            6          1              7
3            3          2              3
4            5          3              6
5            4          4              5
6            10 (H)     5              15
7            9          15             8

GANTT chart Preparation

At time 0, P1 arrives with a burst time of 1 unit and priority 2. Since no other process is available, it will be scheduled till the next job arrives or till its completion (whichever is earlier).

At time 1, P2 arrives. P1 has completed its execution and no other process is available at this time, hence the Operating system has to schedule it regardless of the priority assigned to it.

The next process P3 arrives at time unit 2; the priority of P3 is higher than that of P2. Hence the execution of P2 will be stopped and P3 will be scheduled on the CPU.

During the execution of P3, three more processes P4, P5 and P6 become available. Since all these three have priorities lower than that of the process in execution, the OS can't preempt the process. P3 will complete its execution and then P5 will be scheduled, as it has the highest priority among the available processes.

During the execution of P5, all the processes except P7 become available in the ready queue. At this point, the algorithm starts behaving as Non Preemptive Priority Scheduling; the OS simply takes the process with the highest priority and executes it till completion. In this case, P4 will be scheduled and will be executed till completion.
Once P4 is completed, the process with the highest priority available in the ready queue is P2. Hence P2 will be scheduled next.

P2 is given the CPU till completion. Since its remaining burst time is 6 units, P7 will be scheduled after it.

The only remaining process is P6, with the least priority; the Operating System has no choice but to execute it. It will be executed last.
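The walkthrough can be verified with a unit-time simulation (a sketch; it treats a lower number as higher priority, following the narration rather than the (L)/(H) labels in the table):

```python
# Pre-emptive priority scheduling, one time unit at a time.
def preemptive_priority(procs):
    """procs: dict pid -> (priority, arrival, burst); lower number = higher priority."""
    remaining = {p: b for p, (pr, a, b) in procs.items()}
    time, done = 0, {}
    while remaining:
        ready = [p for p in remaining if procs[p][1] <= time]
        if not ready:
            time += 1                                  # nothing has arrived yet
            continue
        cur = min(ready, key=lambda p: procs[p][0])    # highest-priority ready job
        remaining[cur] -= 1
        time += 1
        if remaining[cur] == 0:
            del remaining[cur]
            done[cur] = time                           # completion time
    return done

completion = preemptive_priority({
    "P1": (2, 0, 1),  "P2": (6, 1, 7),  "P3": (3, 2, 3),  "P4": (5, 3, 6),
    "P5": (4, 4, 5),  "P6": (10, 5, 15), "P7": (9, 15, 8),
})
```

It reproduces the order described above: P1, P2 (briefly), P3, P5, P4, P2, P7 and finally P6.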
Multilevel Queue Scheduling
Another class of scheduling algorithms has been created for situations in which processes are
easily classified into different groups.
For example: A common division is made between foreground(or interactive) processes and
background (or batch) processes. These two types of processes have different response-time
requirements, and so might have different scheduling needs. In addition, foreground processes
may have priority over background processes.
A multi-level queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some property
of the process, such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm.
For example: separate queues might be used for foreground and background processes. The
foreground queue might be scheduled by Round Robin algorithm, while the background queue is
scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling. For example: The foreground queue may have absolute
priority over the background queue.

Let us consider an example of a multilevel queue-scheduling algorithm with five queues:

1. System Processes

2. Interactive Processes

3. Interactive Editing Processes

4. Batch Processes

5. Student Processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for
example, could run unless the queues for system processes, interactive processes, and interactive
editing processes were all empty. If an interactive editing process entered the ready queue while
a batch process was running, the batch process would be preempted.
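The fixed-priority selection rule among the five queues above can be sketched in a few lines: the scheduler always serves the highest-priority non-empty queue. The process names below are hypothetical placeholders.

```python
from collections import deque

# One FIFO queue per level, following the five-queue example above.
queues = {
    1: deque(),              # system processes
    2: deque(["edit1"]),     # interactive processes
    3: deque(),              # interactive editing processes
    4: deque(["batch1"]),    # batch processes
    5: deque(["stud1"]),     # student processes
}

def pick_next(queues):
    for level in sorted(queues):     # level 1 is the highest priority
        if queues[level]:
            return queues[level].popleft()
    return None                      # every queue empty: the CPU idles

picked = pick_next(queues)
print(picked)   # edit1 -- queue 2 outranks the batch and student queues
```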
Multilevel Feedback Queue Scheduling
In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on
entry to the system. Processes do not move between queues. This setup has the advantage of low
scheduling overhead, but the disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, allows a process to move between queues. The
idea is to separate processes with different CPU-burst characteristics. If a process uses too much
CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in
a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents
starvation.
An example of a multilevel feedback queue can be seen in the below figure.

In general, a multilevel feedback queue scheduler is defined by the following parameters:

 The number of queues.

 The scheduling algorithm for each queue.

 The method used to determine when to upgrade a process to a higher-priority queue.

 The method used to determine when to demote a process to a lower-priority queue.

 The method used to determine which queue a process will enter when that process needs service.

The definition of a multilevel feedback queue scheduler makes it the most general CPU-
scheduling algorithm. It can be configured to match a specific system under design.
Unfortunately, it also requires some means of selecting values for all the parameters to define the
best scheduler. Although a multilevel feedback queue is the most general scheme, it is also
the most complex.
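The parameters listed above can be captured in a small class: a set of queues with per-queue quanta, a demotion rule for processes that use their full quantum, and a promotion (aging) rule for processes that wait too long. The three-level layout and the quanta of 8 and 16 units below are illustrative assumptions, not values from the text.

```python
from collections import deque

class MLFQ:
    """Sketch of a multilevel feedback queue scheduler."""

    def __init__(self, quanta=(8, 16, None)):   # None = FCFS at the bottom
        self.quanta = quanta
        self.queues = [deque() for _ in quanta]

    def admit(self, pid):
        self.queues[0].append(pid)              # new work enters the top queue

    def demote(self, pid, level):
        # used its full quantum: move one level down (capped at the bottom)
        self.queues[min(level + 1, len(self.queues) - 1)].append(pid)

    def promote(self, pid, level):
        # aging: waited too long: move one level up (capped at the top)
        self.queues[max(level - 1, 0)].append(pid)

    def next(self):
        # always serve the highest-priority non-empty queue
        for level, q in enumerate(self.queues):
            if q:
                return q.popleft(), level
        return None

sched = MLFQ()
sched.admit("P1")
pid, level = sched.next()     # P1 runs from the top queue
sched.demote(pid, level)      # it used its whole quantum: move it down
print(sched.next())           # ('P1', 1)
```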
Round Robin Scheduling Algorithm
Round Robin is one of the most popular scheduling algorithms and is actually implemented
in most operating systems. It is the preemptive version of first come first serve
scheduling, and it focuses on time sharing. Every process is executed in a cyclic way: a
fixed time slice, called the time quantum, is defined in the system, and each process in the
ready queue is assigned the CPU for that quantum. If the process completes within the
quantum, it terminates; otherwise it goes back to the ready queue and waits for its next
turn to finish its execution.

Advantages
1. It is practical to implement because it does not depend on knowing burst times in
advance.

2. It doesn't suffer from the problem of starvation or convoy effect.

3. All the jobs get a fair allocation of CPU.

Disadvantages
1. The higher the time quantum, the higher the response time in the system.

2. The lower the time quantum, the higher the context switching overhead in the
system.

3. Deciding a good time quantum is a difficult task.

RR Scheduling Example
In the following example, there are six processes named as P1, P2, P3, P4, P5 and P6. Their
arrival time and burst time are given below in the table. The time quantum of the system is
4 units.

Process ID Arrival Time Burst Time

1 0 5

2 1 6

3 2 3

4 3 1

5 4 5

6 6 4

According to the algorithm, we have to maintain the ready queue and the Gantt chart. Both
structures will change after every scheduling decision.
Ready Queue:
Initially, at time 0, process P1 arrives and is scheduled for a time slice of 4 units.
Hence the ready queue initially contains only one process, P1, with a CPU burst time of
5 units.

P1

Gantt Chart
The P1 will be executed for 4 units first.

Ready Queue
During the execution of P1, four more processes P2, P3, P4 and P5 arrive in the ready
queue. P1 has not completed yet; it needs another 1 unit of time, so it will also be added
back to the ready queue.

P2 P3 P4 P5 P1

6 3 1 5 1

Gantt Chart
After P1, P2 will be executed for 4 units of time which is shown in the Gantt chart.
Ready Queue
During the execution of P2, one more process, P6, arrives in the ready queue. Since P2 has
not completed yet, it will also be added back to the ready queue with a remaining burst
time of 2 units.

P3 P4 P5 P1 P6

3 1 5 1 4

Gantt Chart
After P1 and P2, P3 will get executed for 3 units of time, since its CPU burst time is only 3
units.

Ready Queue
Since P3 has completed, it will be terminated and not added back to the ready
queue. The next process to be executed is P4.

P4 P5 P1 P6 P2

1 5 1 4 2
Gantt Chart
After P1, P2 and P3, P4 will get executed. Its burst time is only 1 unit, which is less than
the time quantum, so it will be completed.

Ready Queue
The next process in the ready queue is P5, with 5 units of burst time. Since P4 has
completed, it will not be added back to the queue.

P5 P1 P6 P2

5 1 4 2

Gantt Chart
P5 will be executed for the whole time slice because its remaining burst time, 5 units, is
greater than the time slice.

Ready Queue
P5 has not been completed yet; it will be added back to the queue with the remaining burst
time of 1 unit.
P1 P6 P2 P5

1 4 2 1

Gantt Chart
The process P1 will be given the next turn to complete its execution. Since it requires only
1 unit of burst time, it will be completed.

Ready Queue
P1 is completed and will not be added back to the ready queue. The next process P6
requires only 4 units of burst time and it will be executed next.

P6 P2 P5

4 2 1

Gantt Chart
P6 will be executed for 4 units of time till completion.
Ready Queue
Since P6 has completed, it will not be added back to the queue. Only two
processes remain in the ready queue. The next process, P2, requires only 2 units of time.

P2 P5

2 1

Gantt Chart
P2 will get executed again; since it requires only 2 units of time, it will be
completed.

Ready Queue
Now, the only process available in the queue is P5, which requires 1 unit of burst time.
Since the time slice is 4 units, it will be completed within the next burst.

P5

Gantt Chart
P5 will get executed till completion.
The completion time, Turnaround time and waiting time will be calculated as shown in the
table below.

As, we know,

1. Turn Around Time = Completion Time - Arrival Time


2. Waiting Time = Turn Around Time - Burst Time

Process ID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time

1 0 5 17 17 12

2 1 6 23 22 16

3 2 3 11 9 6

4 3 1 12 9 8

5 4 5 24 20 15

6 6 4 21 15 11

Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 ≈ 11.33 units
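The whole walkthrough above can be replayed with a short simulation. This is a minimal sketch of Round Robin over the six processes from the table, with the convention (matching the ready-queue snapshots above) that a process arriving during a time slice joins the queue before the preempted process.

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (pid, arrival, burst) -> {pid: completion_time}"""
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {pid: burst for pid, _, burst in procs}
    ready, completion = deque(), {}
    t, i = 0, 0                            # current time, next arrival index
    while len(completion) < len(procs):
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0])      # admit newly arrived processes
            i += 1
        if not ready:
            t = procs[i][1]                # CPU idle: jump to the next arrival
            continue
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        t += run
        remaining[pid] -= run
        # arrivals during this slice join the queue before the preempted job
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0])
            i += 1
        if remaining[pid] == 0:
            completion[pid] = t
        else:
            ready.append(pid)              # not done: back to the ready queue
    return completion

procs = [(1, 0, 5), (2, 1, 6), (3, 2, 3), (4, 3, 1), (5, 4, 5), (6, 6, 4)]
completion = round_robin(procs, quantum=4)
waiting = [completion[p] - a - b for p, a, b in procs]
print(sum(waiting) / len(waiting))        # average waiting time: 68/6 ≈ 11.33
```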


Producer Consumer Problem

A semaphore S is an integer variable that can be accessed only through two standard
operations: wait() and signal().
The wait() operation decreases the value of the semaphore by 1 and the signal() operation
increases it by 1.
wait(S){
while(S<=0); // busy waiting
S--;
}

signal(S){
S++;
}
Problem Statement – We have a buffer of fixed size. A producer can produce an item and
place it in the buffer. A consumer can pick items from the buffer and consume them. We need
to ensure that while the producer is placing an item in the buffer, the consumer does not
access the buffer at the same time. In this problem, the buffer is the critical section.
To solve this problem, we need two counting semaphores – Full and Empty – and a binary
semaphore, mutex, for mutual exclusion. "Full" keeps track of the number of items in the
buffer at any given time and "Empty" keeps track of the number of unoccupied slots.
Initialization of semaphores –
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially
Solution for Producer –

do{

//produce an item

wait(empty);
wait(mutex);

//place in buffer

signal(mutex);
signal(full);
}while(true)

When the producer produces an item, the value of "empty" is reduced by 1 because one slot
will now be filled. The value of mutex is also reduced to prevent the consumer from
accessing the buffer. Once the producer has placed the item, the value of "full" is
increased by 1, and the value of mutex is increased by 1 because the producer's task is
complete and the consumer can now access the buffer.

Solution for Consumer –


do{

wait(full);
wait(mutex);

// remove item from buffer

signal(mutex);
signal(empty);

// consumes item

}while(true)
As the consumer removes an item from the buffer, the value of "full" is reduced by 1,
and the value of mutex is also reduced so that the producer cannot access the buffer at this
moment. Once the consumer has consumed the item, the value of "empty" is increased by 1.
The value of mutex is also increased so that the producer can access the buffer again.
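The producer and consumer pseudocode above can be run directly using Python's threading semaphores, which implement the same wait()/signal() (here acquire()/release()) semantics. The buffer size of 4 and the item count of 10 are illustrative choices, not values from the text.

```python
import threading
from collections import deque

N = 4                             # buffer capacity (illustrative)
buffer = deque()
mutex = threading.Semaphore(1)    # binary semaphore guarding the buffer
empty = threading.Semaphore(N)    # counts free slots (starts at n)
full = threading.Semaphore(0)     # counts filled slots (starts at 0)
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                    # wait(empty): block if buffer full
        mutex.acquire()                    # wait(mutex): enter critical section
        buffer.append(item)                # place the item in the buffer
        mutex.release()                    # signal(mutex)
        full.release()                     # signal(full): one more filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()                     # wait(full): block if buffer empty
        mutex.acquire()                    # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from the buffer
        mutex.release()                    # signal(mutex)
        empty.release()                    # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)   # items come out in FIFO order: 0 through 9
```

Because the buffer is a FIFO queue protected by mutex, the consumer always receives items in the order they were produced, regardless of how the two threads are interleaved.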
Reader writer
