
Operating System - Module III

MODULE-III (08 Hours)

Operating System Services for Process Management & Scheduling: Introduction, Process
Creation, Termination & Other Issues, Threads, Multithreading, Types of Threads, Schedulers,
Types of Schedulers, Types of Scheduling, Scheduling Algorithms, Types of Scheduling
Algorithms.

Process Creation and Termination

Process Creation

• A parent process creates child processes, which in turn create other processes, forming a tree of processes.
• Generally, a process is identified and managed via a process identifier (pid).
• Resource sharing options
▪ Parent and children share all resources
▪ Children share subset of parent’s resources
▪ Parent and child share no resources
• Execution options
▪ Parent and children execute concurrently
▪ Parent waits until children terminate
Example: UNIX

▪ The fork() system call creates a new process.
▪ The exec() system call is used after a fork() to replace the process's memory space with
a new program
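This fork()/exec()/wait() pattern can be sketched in Python on a UNIX system, where os.fork and os.execvp wrap the same system calls (the choice of the `true` program to exec is just an illustrative assumption):

```python
import os

def run_in_child(program, args):
    """Fork a child, replace its image with `program`, and wait for it.

    A minimal sketch of the UNIX fork()/exec()/wait() pattern.
    """
    pid = os.fork()                     # create a new (child) process
    if pid == 0:
        # Child: replace this process's memory space with a new program.
        os.execvp(program, [program] + args)
    # Parent: wait until the child terminates, then return its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

exit_status = run_in_child("true", [])  # `true` simply exits with status 0
```

The parent and child execute concurrently between the fork() and the waitpid(); the parent only blocks once it asks to collect the child's status.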

Process Termination

• Process executes last statement and then asks the operating system to delete it using the
exit() system call.
▪ Returns status data from child to parent (via wait())
▪ Process’ resources are deallocated by operating system
• A parent may terminate the execution of its child processes using the abort() system call.
Some reasons for doing so:

By Prof Nitu Dash


▪ The child has exceeded its allocated resources.
▪ The task assigned to the child is no longer required.
▪ The parent is exiting and the operating system does not allow a child to continue
if its parent terminates.
• Some operating systems do not allow a child to exist if its parent has terminated. If such a
process terminates, then all its children must also be terminated.
▪ This is cascading termination: all children, grandchildren, etc. are terminated.
▪ The termination is initiated by the operating system.
• The parent process may wait for the termination of a child process by using the wait() system
call. The call returns status information and the pid of the terminated process:
pid = wait(&status);
• If the child has terminated but the parent is not yet waiting (has not invoked wait()), the process is a zombie.
• If the parent terminated without invoking wait(), the process is an orphan.
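The pid = wait(&status) idiom can be illustrated in Python, whose os.wait wraps the same UNIX call (the exit status 7 is an arbitrary value for the example):

```python
import os

pid = os.fork()
if pid == 0:
    os._exit(7)              # child: terminate immediately with status 7
# Parent: wait() blocks until a child terminates and reaps it, so the
# child does not remain a zombie. It returns the child's pid and status.
wpid, status = os.wait()     # C equivalent: pid = wait(&status);
child_status = os.WEXITSTATUS(status)
```

Until the parent calls wait(), the terminated child is a zombie: its exit status is kept by the kernel so the parent can still collect it.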

What is Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps
track of which instruction to execute next, system registers which hold its current working
variables, and a stack which contains the execution history.

A thread shares some information with its peer threads, such as the code segment, the data segment, and open files. When one thread alters a data-segment memory item, all other threads see the change.

A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism; they represent a software approach to improving operating
system performance by reducing the overhead of process creation and switching, since a thread
behaves in many respects like a classical process at lower cost.

Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been used successfully in implementing
network servers and web servers. They also provide a suitable foundation for the parallel execution
of applications on shared-memory multiprocessors. The following figure shows the working of a
single-threaded and a multithreaded process.
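The sharing described above can be seen with Python's threading module (a sketch; the counter value and thread count are arbitrary): because all threads live in the same process address space, an update made by one thread is visible to its peers.

```python
import threading

counter = 0                      # shared data: visible to every thread
lock = threading.Lock()

def worker():
    global counter
    for _ in range(1000):
        with lock:               # serialize updates to the shared item
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all peer threads to finish
# counter is now 4000: every thread modified the same shared variable
```

The lock is needed because the threads share the data segment; without it, concurrent increments could interleave and lose updates.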



Difference between Process and Thread
1. A process is heavyweight or resource intensive; a thread is lightweight and takes fewer
resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not.
3. In multiple processing environments, each process executes the same code but has its own
memory and file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, no other process can execute until the first process is unblocked;
while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use
fewer resources.
6. In multiple processes, each process operates independently of the others; one thread can
read, write, or change another thread's data.

Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.



Types of Thread
Threads are implemented in the following two ways −

• User Level Threads − user-managed threads.
• Kernel Level Threads − operating-system-managed threads acting on the kernel, an
operating system core.

User Level Threads


In this case, the kernel is not aware of the existence of threads. The thread
library contains code for creating and destroying threads, for passing messages and data between
threads, for scheduling thread execution, and for saving and restoring thread contexts. The
application starts with a single thread.

Advantages

• Thread switching does not require Kernel mode privileges.
• User level threads can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.

Disadvantages

• In a typical operating system, most system calls are blocking.
• A multithreaded application cannot take advantage of multiprocessing.



Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in
the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the threads within an application are
supported within a single process.

The Kernel maintains context information for the process as a whole and for individual threads
within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs
thread creation, scheduling and management in Kernel space. Kernel threads are generally
slower to create and manage than user threads.

Advantages

• The Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.

• If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.

• Kernel routines themselves can be multithreaded.

Disadvantages

• Kernel threads are generally slower to create and manage than the user threads.

• Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.



Multithreading Models
Some operating systems provide a combined user-level thread and Kernel-level thread facility;
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system call
need not block the entire process. There are three multithreading models:

• Many-to-many relationship
• Many-to-one relationship
• One-to-one relationship

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads.

The following diagram shows the many-to-many threading model, where 6 user-level threads are
multiplexed onto 6 kernel-level threads. In this model, developers can create as many user
threads as necessary, and the corresponding Kernel threads can run in parallel on a multiprocessor
machine. This model provides a good level of concurrency: when a thread performs a
blocking system call, the kernel can schedule another thread for execution.



Many to One Model

The many-to-one model maps many user-level threads to one Kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking system
call, the entire process is blocked. Only one thread can access the Kernel at a time, so
multiple threads are unable to run in parallel on multiprocessors.

If the operating system does not support kernel threads, the user-level thread library uses the
many-to-one model.

One to One Model

There is a one-to-one relationship between user-level threads and kernel-level threads. This model
provides more concurrency than the many-to-one model. It also allows another thread to run
when a thread makes a blocking system call, and it supports multiple threads executing in parallel
on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding
Kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.



Difference between User-Level & Kernel-Level Thread
1. User-level threads are faster to create and manage; kernel-level threads are slower to create
and manage.
2. Implementation is by a thread library at the user level; the operating system supports the
creation of kernel threads.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is
specific to the operating system.
4. Multithreaded applications cannot take advantage of multiprocessing; kernel routines
themselves can be multithreaded.

Process Scheduling
• Process scheduling is selecting one process for execution out of all the ready processes.

• When a computer is multiprogrammed, it has multiple processes competing for the CPU at
the same time. If only one CPU is available, then a choice has to be made regarding which
process to execute next. This decision making process is known as scheduling and the part
of the OS that makes this choice is called a scheduler. The algorithm it uses in making this
choice is called scheduling algorithm.

• Scheduling queues are used to perform process scheduling

➢ Job queue – This queue consists of all processes in the system; new processes
enter the system through this queue.
➢ Ready queue – This queue consists of all processes residing in main memory,
ready and waiting to execute.
➢ Device queues – This queue consists of processes waiting for an I/O device.
Each device has its own device queue.
➢ Processes migrate among the various queues



Schedulers
A scheduler is a decision maker that moves processes from one scheduling queue to another
or allocates the CPU for execution. The operating system has three types of schedulers:

1. Long-term scheduler or job scheduler
2. Short-term scheduler or CPU scheduler
3. Medium-term scheduler

Long-term scheduler or Job scheduler


• The long-term scheduler or job scheduler selects processes from disk and loads them into
main memory for execution. It executes much less frequently than the other schedulers.
• It controls the degree of multiprogramming (i.e., the number of processes in memory).
• Because of the longer interval between executions, the long-term scheduler can afford to
take more time to select a process for execution.
• The long-term scheduler should select a proper mix of CPU-bound processes and I/O-
bound processes.
➢ A CPU-bound process spends most of its time doing computations.
➢ An I/O-bound process spends most of its time doing I/O operations.

Short-term scheduler or CPU scheduler


• The short-term scheduler or CPU scheduler selects a process from among the processes
that are ready to execute and allocates the CPU.
• The short-term scheduler is invoked frequently and should be very fast.
• The short-term scheduler must select a new process for the CPU frequently. A process may
execute for only a few milliseconds before waiting for an I/O request.

Medium-term scheduler
• The medium-term scheduler performs an intermediate level of scheduling.
• It can remove a process from memory, store it on disk, and later bring it back from disk to
continue execution: this is called swapping.
• The process is swapped out and swapped in later by the medium-term scheduler.
• Swapping may be used to improve the process mix and to free up memory when needed.



Comparison among Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler;
the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler
is the fastest of the three; the medium-term scheduler's speed is in between.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler
provides less control over the degree of multiprogramming; the medium-term scheduler
reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term
scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of
time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for
execution; the short-term scheduler selects processes that are ready to execute; the
medium-term scheduler can re-introduce a swapped process into memory so that its
execution can be continued.

Context Switching

• Context switching is a process that involves switching of the CPU from one process or
task to another.

• In this phenomenon, the execution of the process that is present in the running state is
suspended by the kernel and another process that is present in the ready state is executed
by the CPU.

• When a switch occurs, the system stores the status of the old running process (its registers
and program counter) and assigns the CPU to the new process to execute its tasks.



• While the new process runs, the previous process waits in a ready queue. The execution of
the old process later resumes at the point where it was stopped.

• Context switching enables multitasking operating systems, in which multiple processes
share the same CPU to perform multiple tasks without the need for additional processors
in the system.

Context switching can happen due to the following reasons:

• When a process of high priority comes in the ready state. In this case, the execution of
the running process should be stopped and the higher priority process should be given
the CPU for execution.

• When an interrupt occurs, the process in the running state should be stopped and
the CPU should handle the interrupt before doing anything else.

• When a transition between the user mode and kernel mode is required then you have to
perform the context switching.

Steps involved in Context Switching

The process of context switching involves a number of steps. The following diagram depicts the
process of context switching between the two processes P1 and P2.

In the figure, initially the process P1 is running on the CPU to execute its task, while another
process, P2, is in the ready state. If an error or interrupt occurs, or the process requires
input/output, P1 switches from the running state to the waiting state. Before changing the state of
P1, context switching saves the context of P1 (its registers and program counter) into PCB1.
After that, it loads the saved state of process P2 from PCB2 and moves P2 into the running state.



1. First, context switching saves the state of the running process P1, in the form of the program
counter and the registers, to its PCB (Process Control Block).
2. PCB1 is then updated, and process P1 is moved to the appropriate queue, such as
the ready queue, an I/O queue, or the waiting queue.
3. After that, another process is selected from the ready state to run, for example the
process with the highest priority.
4. The PCB (Process Control Block) of the selected process P2 is updated. This includes
switching its process state from ready to running, or from another state such as
blocked, exit, or suspended.
5. If the CPU had executed process P2 before, the saved status of P2 is restored so that it
resumes execution at the same point where it was interrupted.

Similarly, process P2 is later switched off from the CPU so that process P1 can resume execution.
Process P1 is reloaded from PCB1 into the running state to resume its task at the same point.
Otherwise, the information is lost, and when the process is executed again, it starts execution
from the beginning.

The time involved in the context switching of one process by other is called the Context Switching
Time.

Basic Concepts

• Maximum CPU utilization is obtained with multiprogramming.

• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.

• A CPU burst is followed by an I/O burst.

• The CPU burst distribution is of main concern.

• A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms.



• There are six popular process scheduling algorithms −
➢ First-Come, First-Served (FCFS) Scheduling
➢ Shortest-Job-Next (SJN) Scheduling
➢ Priority Scheduling
➢ Shortest Remaining Time
➢ Round Robin(RR) Scheduling
➢ Multiple-Level Queues Scheduling
• These scheduling algorithms are either non-preemptive or preemptive.
• Non-preemptive: once a process has been given the CPU, the CPU cannot be taken
away from that process.
• Preemptive: the CPU can be taken away from a process after it has been allocated.

Dispatcher
• Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
➢ switching context
➢ switching to user mode
➢ jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another
running

Scheduling Criteria

• Arrival Time (AT) – The time at which the process arrives in the system

• Burst Time (BT) – The amount of time the process runs on CPU.

• Completion Time (CT) – The time at which process completes.


• Turnaround time (TAT) – Time from arrival to completion of a process (TAT = CT – AT)
• Waiting time (WT) – amount of time a process has been waiting in the ready queue
(WT = TAT – BT)

• Response time – amount of time from arrival until the first time the process gets the CPU.
• Scheduling length (L) – max(CT) – min(AT)

• Throughput – number of processes that complete their execution per time unit, i.e., n / L

• CPU utilization – keep the CPU as busy as possible
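The criteria above reduce to simple arithmetic; a minimal sketch (the process values are made up for illustration):

```python
def criteria(arrival, burst, completion):
    """Compute turnaround and waiting time from AT, BT and CT."""
    tat = completion - arrival        # TAT = CT - AT
    wt = tat - burst                  # WT = TAT - BT
    return tat, wt

# Example: a process arrives at t=2, needs a 5-unit CPU burst,
# and completes at t=12, so it spent 5 units waiting in the ready queue.
tat, wt = criteria(arrival=2, burst=5, completion=12)
```

Here TAT = 12 − 2 = 10 and WT = 10 − 5 = 5, matching the formulas above.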



Optimization Criteria

• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time

First-Come, First-Served (FCFS) Scheduling

• Jobs are executed on a first-come, first-served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average wait time is high.

Example (processes P1, P2, P3 with burst times 24, 3 and 3, arriving in that order at time 0):

Turnaround time for P1 = 24; P2 = 27; P3 = 30
Average turnaround time: (24+27+30)/3 = 27
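A minimal FCFS simulation reproduces these numbers (assuming, as the turnaround figures imply, three processes arriving at time 0 with bursts 24, 3 and 3):

```python
def fcfs(processes):
    """processes: list of (pid, arrival, burst); returns turnaround times."""
    clock = 0
    tat = {}
    # FCFS: serve in arrival order, one process at a time, no preemption.
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival) + burst   # run the job to completion
        tat[pid] = clock - arrival            # TAT = CT - AT
    return tat

tat = fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)])
avg = sum(tat.values()) / len(tat)            # (24 + 27 + 30) / 3 = 27
```

The long first burst makes the short jobs wait behind it, which is exactly why FCFS average waiting time is poor (the convoy effect).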

Shortest-Job-First (SJF) Scheduling

• This is a non-preemptive scheduling algorithm.
• Best approach to minimize waiting time.
• Easy to implement in batch systems where the required CPU time is known in advance.
• Impossible to implement in interactive systems where the required CPU time is not known.
• The processor should know in advance how much time each process will take.
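A non-preemptive SJF sketch (the process set is illustrative): whenever the CPU falls idle, the shortest ready job is chosen and runs to completion.

```python
def sjf(processes):
    """processes: list of (pid, arrival, burst); returns completion times."""
    pending = list(processes)
    clock, ct = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = min(p[1] for p in pending)    # idle until next arrival
            continue
        pid, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job
        clock += burst                  # non-preemptive: run to completion
        ct[pid] = clock
        pending.remove((pid, arrival, burst))
    return ct

ct = sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)])
# service order: P4, P1, P3, P2
```

Note that the scheduler needs each job's burst time up front, which is why the text calls SJF impossible in interactive systems.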



Shortest Remaining Time

• Shortest-remaining-time (SRT) is the preemptive version of the SJF algorithm.
• The processor is allocated to the job closest to completion, but it can be preempted by a
newly ready job with a shorter time to completion.
• Impossible to implement in interactive systems where the required CPU time is not known.
• It is often used in batch environments where short jobs need to be given preference.
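The preemption can be sketched by re-evaluating the choice at every time unit (the process set is an illustrative assumption):

```python
def srt(processes):
    """processes: list of (pid, arrival, burst); preemptive SJF (SRT)."""
    arrival = {pid: at for pid, at, bt in processes}
    remaining = {pid: bt for pid, at, bt in processes}
    clock, ct = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:
            clock += 1                    # CPU idle
            continue
        # Pick the job closest to completion; a newly arrived job with
        # a shorter remaining time preempts the current one next tick.
        pid = min(ready, key=lambda p: remaining[p])
        remaining[pid] -= 1               # run for one time unit
        clock += 1
        if remaining[pid] == 0:
            ct[pid] = clock
            del remaining[pid]
    return ct

ct = srt([("P1", 0, 8), ("P2", 1, 4)])   # P2 arrives at t=1 and preempts P1
```

P1 runs for one unit, then P2 (4 remaining vs. P1's 7) takes over and finishes at t=5, after which P1 resumes and completes at t=12.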

Priority Scheduling

• Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is executed first,
and so on.
• Processes with the same priority are executed on a first-come, first-served basis.
• Priority can be decided based on memory requirements, time requirements, or any other
resource requirement.
• Problem ⇒ Starvation – low-priority processes may never execute.
• Solution ⇒ Aging – as time progresses, increase the priority of waiting processes.
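Aging can be sketched as an adjustment to the effective priority at selection time (the rate of one level per 10 time units is an arbitrary assumption; lower value = higher priority):

```python
def pick_next(ready, clock):
    """ready: list of (pid, base_priority, arrival); lower value runs first.

    Aging: a process gains one priority level for every 10 time units it
    has waited, so low-priority jobs eventually get the CPU (no starvation).
    """
    def effective(p):
        pid, base, arrived = p
        return base - (clock - arrived) // 10   # hypothetical aging rate
    return min(ready, key=effective)[0]

# A low-priority job that has waited 100 units now beats a fresh
# high-priority one: effective(A) = 5 - 10 = -5 < effective(B) = 1 - 0.
chosen = pick_next([("A", 5, 0), ("B", 1, 95)], clock=100)
```

Without the age term, job A (base priority 5) would wait forever behind a stream of priority-1 arrivals; with it, A is selected here.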

Round Robin (RR)

• Round Robin is a preemptive process scheduling algorithm.
• Each process gets a small unit of CPU time (a time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the end
of the ready queue.



• If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at once. No process waits
more than (n-1)q time units.
• A timer interrupts every quantum to schedule the next process.
• Performance:
➢ q large ⇒ behaves like FIFO
➢ q small ⇒ high overhead; q must be large with respect to the context-switch time,
otherwise the overhead is too high
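The mechanism above can be sketched with a FIFO ready queue (the burst times and q=4 are an arbitrary illustration, all jobs arriving at time 0):

```python
from collections import deque

def round_robin(bursts, q):
    """bursts: {pid: burst}; returns completion times for time quantum q."""
    queue = deque(bursts)             # FIFO ready queue
    remaining = dict(bursts)
    clock, ct = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(q, remaining[pid])  # run for at most one quantum
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            ct[pid] = clock           # finished
        else:
            queue.append(pid)         # preempted: back of the ready queue
    return ct

ct = round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4)
```

The short jobs P2 and P3 finish within their first quantum (at t=7 and t=10), while P1 cycles through the queue until t=30: RR trades longer completion for much better response time.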



Multilevel Queue

• Multiple-level queues are not an independent scheduling algorithm. They make use of
other existing algorithms to group and schedule jobs with common characteristics.
➢ Multiple queues are maintained for processes with common characteristics.
➢ Each queue can have its own scheduling algorithm.
➢ Priorities are assigned to each queue.

• For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.



Multilevel Feedback Queue

• A process can move between the various queues; aging can be implemented this way
• Multilevel-feedback-queue scheduler defined by the following parameters:
➢ number of queues
➢ scheduling algorithms for each queue
➢ method used to determine when to upgrade a process
➢ method used to determine when to demote a process
➢ method used to determine which queue a process will enter when that process
needs service

• Example with three queues:
➢ Q0 – RR with time quantum 8 milliseconds
➢ Q1 – RR with time quantum 16 milliseconds
➢ Q2 – FCFS
• Scheduling:
➢ A new job enters queue Q0, which is served FCFS.
▪ When it gains the CPU, the job receives 8 milliseconds.
▪ If it does not finish in 8 milliseconds, the job is moved to queue Q1.
➢ At Q1 the job is again served FCFS and receives 16 additional milliseconds.
▪ If it still does not complete, it is preempted and moved to queue Q2.
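The three-queue scheme can be sketched for jobs that all arrive at time 0 (a simplification: a real scheduler would also preempt the lower queues whenever a new job enters Q0):

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """bursts: {pid: burst}; Q0 (RR q=8) -> Q1 (RR q=16) -> Q2 (FCFS)."""
    remaining = dict(bursts)
    q0, q1, q2 = deque(bursts), deque(), deque()
    clock, ct = 0, {}
    for queue, q, lower in ((q0, quanta[0], q1), (q1, quanta[1], q2)):
        while queue:
            pid = queue.popleft()
            run = min(q, remaining[pid])   # one quantum at this level
            clock += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                ct[pid] = clock
            else:
                lower.append(pid)          # demote to the next queue
    while q2:                              # Q2: FCFS, run to completion
        pid = q2.popleft()
        clock += remaining[pid]
        ct[pid] = clock
    return ct

ct = mlfq({"A": 5, "B": 30})   # A finishes in Q0; B is demoted twice
```

Job A (burst 5) completes within its first 8 ms quantum; job B uses 8 ms in Q0, 16 more in Q1, and finishes its last 6 ms in the FCFS queue, so short jobs get fast service while long ones sink to the background.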
