

OPERATING SYSTEM DESIGN
Prof. Jayshri A. Kandekar
Department of Computer Science and Design
K.K.Wagh Institute of Engineering Education & Research, Nashik
UNIT - II
PROCESS MANAGEMENT

Unit-II : Process Management

Syllabus
 Process concept, Process Control Block (PCB), Process Operations, Process Scheduling: Types of process schedulers, Types of scheduling: Preemptive, Non-preemptive. Scheduling algorithms: FCFS, SJF, RR, Priority. Interprocess Communication (IPC). Threads: multithreaded model, implicit threads, threading issues.

Operating Systems
PROCESS CONCEPT

Process

 Users submit jobs to the system for execution, the jobs are run on the
system as process.
 A program in execution is a process. A process is executed
sequentially, one instruction at a time.
 A Program is a passive entity.
 For ex. File on the disk.
 A Process is an active entity.

Structure of Process:

When a program is loaded into memory it becomes a process, and its layout in main memory can be divided into four sections ─ stack, heap, text and data:
Stack: Used for local variables. The process stack contains temporary data such as method/function parameters, return addresses and local variables.
Heap: Memory dynamically allocated to the process during its run time, managed via calls to new, delete, malloc, free, etc.
Text: Contains the compiled program code, i.e., the executable instructions of the program.
Data: This section contains the global and static variables.
Process State/Operations
A process can be any one of the following states:
1. New – Process being created.
2. Ready – Process waiting for CPU to be assigned.
3. Running – Instruction being executed.
4. Waiting – Process waiting for an event to occur.
5. Terminated – Process has finished execution.
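The legal transitions between these five states can be captured in a small lookup table. The sketch below is purely illustrative (the state names and function are not from any real kernel):

```python
# The five process states and the direct transitions allowed between them.
TRANSITIONS = {
    "new": {"ready"},                               # admitted by the long-term scheduler
    "ready": {"running"},                           # dispatched by the short-term scheduler
    "running": {"ready", "waiting", "terminated"},  # preempted, blocked, or finished
    "waiting": {"ready"},                           # the awaited event occurred
    "terminated": set(),                            # final state, no way out
}

def can_transition(src: str, dst: str) -> bool:
    """Return True if a process may move directly from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note, for example, that a waiting process cannot go straight back to running; it must pass through the ready queue first.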

12/09/2024 Operating system 7


Process State/Operations
 A process, being an active entity, changes state as execution proceeds. The state of a process is defined as the current activity of that process.



Process State/Operations

1.New
 When a process is created, it is a new state. The process is not yet ready to
run in this state and is waiting for the operating system to give it the green
light. Long-term schedulers shift the process from a NEW state to a READY
state.
2.Ready
After creation, the process enters the ready state, in which it waits for the CPU to be assigned for execution. The process waits in the ready queue to be picked up by the short-term scheduler, which selects one process from the READY state and moves it to the RUNNING state. The OS picks new processes from secondary memory and puts them in main memory.
12/09/2024
There can be many processes Operating
present in the ready state.
system 9
Process State/Operations

3.Running
 One of the processes from the ready state will be chosen by the OS
depending upon the scheduling algorithm. Hence, if we have only one
CPU in our system, the number of running processes for a particular
time will always be one. If we have n processors in the system then we
can have n processes running simultaneously.
 In running state, the process starts to execute the instructions that were
given to it. The running state is also where the process consumes most
of its CPU time.



Process State/Operations

4.Block or wait
 The OS suspends the currently running process and it enters the waiting state. This could be because:
 It is waiting for some input from the user.
 It is waiting for a resource that is not available yet.
 Some high-priority process arrives that needs to be executed.
 The process is then suspended for some time and put in a WAITING state. Until then, the next process is given a chance to execute.
 When a timeout occurs, meaning the process has not finished within the allotted time interval and the next process is ready to execute, the operating system preempts the process. This typically happens in priority scheduling, where on the arrival of a high-priority process the ongoing process is preempted, i.e., the operating system puts it in the ready state.
Process State/Operations

5.Completion or termination
 When a process finishes its execution, it enters the terminated state. The entire context of the process (its Process Control Block) is deleted and the process is terminated by the operating system.
 After the process has completed the execution of its last instruction, it is terminated. The resources held by a process are released after it is terminated.
 A child process can be terminated by its parent process if its task is no longer relevant. The child process sends its status information to the parent process before it terminates. Also, when a parent process is terminated, its child processes are terminated as well, since child processes cannot run if the parent process is terminated.
Process Control Block(PCB)
 A Process Control Block (PCB) is a data structure used by an operating system to store information about a process when the process is created.
 It is also called a task control block.
 PCB is unique for every process which consists of various attributes such as
process ID, program counters, process states, registers, CPU scheduling
information, memory management information, accounting information, and IO
status information, list of open files, etc. in the system.
 The operating system uses the process control block to keep track of all the
processes executing in the system.
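The attributes listed above can be pictured as fields of a record. The sketch below is a hypothetical, simplified PCB; real kernels hold far more fields, so the names here are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified, illustrative Process Control Block."""
    pid: int                                          # process ID
    state: str = "new"                                # new / ready / running / waiting / terminated
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved CPU register contents
    priority: int = 0                                 # CPU-scheduling information
    mem_base_limit: tuple = (0, 0)                    # memory-management information
    open_files: list = field(default_factory=list)    # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"        # the long-term scheduler admits the process
```
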



Process Control Block(PCB)
 PCBs are stored in memory specially reserved for the operating system, known as kernel space, and are thus protected from normal user access.
 It is also accountable for storing the contents of processor registers.
These are saved when the process moves from the running state and
then returns back to it. The information is quickly updated in the PCB by
the OS as soon as the process makes the state transition.
 The PCB contains information that makes the process an active entity.
 The PCB serves as a repository (Container) of information about a process and
varies from process to process.



Process Control Block(PCB)

The process control block stores many data items that are needed for efficient process management. Some of these data items are explained with the help of the given diagram:
 Process State: It gives the current state of the process. It could be
a new process, ready, running, waiting or terminated.

 Process ID: An identifier that helps us in identifying and locating a


process.
 CPU registers: The content of processor registers gets stored
here when the process is in a running state. The different kinds
of CPU registers are accumulators, index and general-purpose
registers, instruction registers, and condition code registers.
Process Control Block(PCB)

 Program Counter: It holds the address of the next instruction to be executed for the process.
 CPU Scheduling Information: A process needs to be scheduled for
execution. Based on this scheduling, it goes from ready to running.
CPU Scheduling information contains process priority (to determine which
process goes first), pointers for scheduling queues (to mark the order of
execution), and various other scheduling parameters.



Process Control Block(PCB)

 Memory Management information: Contains the value of the base and limit registers and details about the page and segment tables. It depends on the memory system of the OS in use.

 Accounting information: The time limits, account numbers, CPU utilization, process numbers, etc. are all part of the PCB accounting information.

 I/O Status information: Comprises I/O-related information, including the list of I/O devices allocated to the process, their status, etc.



PROCESS SCHEDULING

Process Scheduling
 Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
 Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
 Scheduling refers to the way processes are assigned to run on the available
CPUs. This assignment is carried out by software known as scheduler.
 A scheduler is an OS module that selects the next job to be admitted into the
system and next process to run.
 The problem of determining when processors should be assigned, and to which processes, is called processor scheduling or CPU scheduling.

Scheduling Objectives
 Here are important objectives of process scheduling:
 Maximize the number of interactive users within acceptable response times.
 Achieve a balance between response and utilization.
 Avoid indefinite postponement and enforce priorities.
 Give preference to processes holding key resources.



Scheduling Queues
 The OS maintains all Process Control Blocks (PCBs) in process scheduling queues. The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. Therefore, whenever the state of a process changes, its PCB is unlinked from its current queue and moved to the queue of its new state.
 The Operating System maintains the following important process scheduling queues −
 Job queue − As processes enter the system, they are put into the job queue, which consists of all processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue. A ready queue header contains
pointer to the first & final PCB in the list.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
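The movement of PCBs between these queues can be sketched with ordinary FIFO queues. The function names below are made up for illustration:

```python
from collections import deque

job_queue = deque()      # all processes in the system
ready_queue = deque()    # in main memory, ready and waiting to execute
device_queue = deque()   # blocked, waiting for an I/O device

def admit(pid):
    """A new process enters the system and is made ready."""
    job_queue.append(pid)
    ready_queue.append(pid)

def dispatch():
    """The short-term scheduler picks the PCB at the head of the ready queue."""
    return ready_queue.popleft()

def block_on_io(pid):
    """A running process that issues an I/O request moves to a device queue."""
    device_queue.append(pid)

admit(1)
admit(2)
running = dispatch()     # process 1 gets the CPU
block_on_io(running)     # it issues an I/O request and blocks
```
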



Scheduling Queues

[Queueing diagram: rectangles represent queues, circles represent resources, and arrows show the flow of processes between them]


Scheduling Queues
 In the diagram above, a rectangle represents a queue, a circle denotes a resource, and an arrow indicates the flow of a process.
 As processes enter the system, they are put into the job queue, which consists of all processes in the system.
 Every new process is first put in the ready queue, where it waits until it is selected for execution, i.e., dispatched.
 Once the process is allocated the CPU and is executing, one of several events may occur:
 The process may issue an I/O request and then be placed in an I/O queue.
 The process may create a new subprocess and wait for its termination.
 The process may be removed forcibly from the CPU as a result of an interrupt; once the interrupt is handled, it is put back in the ready queue.
Types of Scheduling
 There are two categories of scheduling:
 Non-preemptive:
Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes.
Non-preemptive means that a running process retains control of the CPU and all the allocated resources until it surrenders control to the OS on its own.
Even if a higher-priority process enters the system, the running process cannot be forced to give up control.
Here a resource cannot be taken from a process until the process completes execution. The switching of resources occurs when the running process terminates or moves to a waiting state.



Types of Scheduling
 Preemptive:
Preemptive scheduling, on the other hand, allows a higher-priority process to replace the currently running process even if its time slice is not over and it has not requested any I/O.
Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, the process switches from running state to ready state or from waiting state to ready state. This switching occurs because the CPU may give priority to other processes and replace the running process with a higher-priority one. Preemptive scheduling is based on priority, where a scheduler may preempt a low-priority running process at any time a high-priority process enters the ready state.



Schedulers

 Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the next job to be admitted into the system and to decide which process to run next.

 Types of schedulers -
1. Long Term Scheduler (LTS)
2. Medium Term Scheduler (MTS)
3. Short Term Scheduler (STS)



Long Term Scheduler (LTS)
 When a process changes the state from new to ready, then there is use of long-
term scheduler.
 It is also called a job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes from
the queue and loads them into memory for execution. Process loads into the
memory for CPU scheduling.
 The primary objective of the job scheduler is to provide a balanced mix of jobs: I/O-bound jobs (which spend most of their time in input and output operations) and CPU-bound jobs (which spend most of their time on the CPU).
 It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.
Short Term Scheduler (STS)
 STS is also called as CPU scheduler. Its main objective is to maximize
system performance in accordance with the chosen set of criteria. It is the
change of ready state to running state of the process. CPU scheduler
selects a process among the processes that are ready to execute and
allocates CPU to one of them.
 The short-term scheduler only selects which process to schedule; it does not load the process into memory. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring there is no starvation owing to high-burst-time processes.
 Short-term schedulers are faster than long-term schedulers.
 It is also called a dispatcher as it makes the decision on which process
will be executed next.
Medium Term Scheduler (MTS)
 It is responsible for suspending and resuming the process.
 When a running process makes an I/O request, it becomes suspended, i.e., it cannot continue. The suspended process cannot change state or make progress towards completion until the related suspending condition is removed.
 Medium-term scheduling is a part of swapping (removing processes from memory to make space for other processes; the suspended process is moved to secondary storage and vice versa), and a process that goes through swapping is said to be swapped out or rolled out.
 Swapping is necessary to improve the process mix. The medium-term scheduler is in charge of handling swapped-out processes: it swaps a process out and later swaps it back in.
 It is helpful in maintaining a perfect balance between the I/O bound and the CPU bound.
 It reduces the degree of multiprogramming.

Comparison among Scheduler
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler | It is a CPU scheduler | It is a process-swapping scheduler
2 | Speed is lesser than the short-term scheduler | Speed is fastest among the three | Speed is in between the short- and long-term schedulers
3 | It controls the degree of multiprogramming | It provides lesser control over the degree of multiprogramming | It reduces the degree of multiprogramming
4 | It is almost absent or minimal in time-sharing systems | It is also minimal in time-sharing systems | It is a part of time-sharing systems
5 | It selects processes from the pool and loads them into memory for execution | It selects those processes which are ready to execute | It can re-introduce a process into memory so that its execution can be continued



Scheduling Algorithms
 When more than one process is runnable, the OS must decide which one to run first. The part of the OS concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
 Various Scheduling algorithms -
1. First Come First Served Scheduling (FCFS)
2. Shortest Job First Scheduling (SJF)/ Shortest remaining time first(SRTF)
3. Priority Based Scheduling
4. Round Robin Scheduling



First Come First Served Scheduling (FCFS)

 The first come first serve (FCFS) scheduling algorithm simply schedules jobs according to their arrival time. The job which comes first in the ready queue gets the CPU first. The lower the arrival time of a job, the sooner it gets the CPU. FCFS scheduling may cause a convoy effect if the burst time of the first process is the longest among all the jobs.
 The process that requests the CPU first is allocated the CPU first.
 When a process enters the ready queue, its PCB is linked onto the tail of the queue; when the CPU is free, it is allocated to the process at the head of the queue.
 The FCFS algorithm is non-preemptive.
 Jobs are executed on a first come, first served basis.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.
First Come First Served Scheduling (FCFS)

 Wait time of each process is as follows −
 Wait Time = Service Time − Arrival Time
 P0: 0 − 0 = 0
 P1: 5 − 1 = 4
 P2: 8 − 2 = 6
 P3: 16 − 3 = 13
 Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75



First Come First Served Scheduling (FCFS)

 Process  Burst time  Arrival time
 P1       6           2
 P2       2           5
 P3       8           1
 P4       3           0
 P5       4           4
 Using the FCFS scheduling algorithm, these processes are handled as follows.
 Step 0) The process begins with P4, which has arrival time 0.



First Come First Served Scheduling (FCFS)

Turn Around Time = Completion Time - Arrival Time


Waiting Time = Turnaround time - Burst Time
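The two formulas above can be replayed in code. Below is a minimal sketch (not from the slides) that runs FCFS on the five-process table above and derives each waiting time as turnaround minus burst:

```python
# name: (burst, arrival), taken from the table above
procs = {"P1": (6, 2), "P2": (2, 5), "P3": (8, 1), "P4": (3, 0), "P5": (4, 4)}

def fcfs(procs):
    """Serve processes in arrival order; return each process's waiting time."""
    order = sorted(procs, key=lambda p: procs[p][1])
    t, waits = 0, {}
    for p in order:
        burst, arrival = procs[p]
        t = max(t, arrival)                 # CPU idles until the process arrives
        completion = t + burst
        turnaround = completion - arrival   # Turn Around Time
        waits[p] = turnaround - burst       # Waiting Time
        t = completion
    return waits

waits = fcfs(procs)
avg_wait = sum(waits.values()) / len(waits)
```

With this data the schedule is P4, P3, P1, P5, P2 and the average wait comes out to 8.0.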



Shortest Job First Scheduling (SJF)
 The algorithm associates with each process the length of the process's next CPU burst; when the CPU is available, it is assigned to the process that has the smallest next CPU burst.
 The SJF algorithm can be either preemptive or non-preemptive.
 A preemptive SJF will preempt the currently executing process, whereas a non-preemptive SJF will allow the currently running process to finish its CPU burst.
 Preemptive SJF scheduling is called Shortest Remaining Time First scheduling.
 This is also known as shortest job first, or SJF.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.
 Impossible to implement in interactive systems where the required CPU time is not known.
 The processor should know in advance how much time the process will take.
Shortest Job First Scheduling (SJF)
Advantages of SJF
 Maximum throughput
 Minimum average waiting and turnaround time

Disadvantages of SJF
 May suffer with the problem of starvation
 It is not implementable because the exact Burst time for a process can't be known in
advance.



Shortest Job First Scheduling (SJF)
Since no process arrives at time 0, there will be an empty slot in the Gantt chart from time 0 to 1 (the time at which the first process arrives).

According to the algorithm, the OS schedules the process having the lowest burst time among the available processes in the ready queue.

Till now, we have only one process in the ready queue, hence the scheduler will schedule it on the processor no matter what its burst time is.

It will be executed for 8 units of time. By then, three more processes have arrived in the ready queue, so the scheduler will choose the process with the lowest burst time among them.

Among the processes given in the table, P3 will be executed next since it has the lowest burst time of all the available processes.

That is how the procedure goes on in the shortest job first (SJF) scheduling algorithm.

Avg Waiting Time = 27/5
Shortest Job First Scheduling (SJF)
Given: table of processes with their arrival and execution times.

Process  Arrival Time  Execution Time  Service Time
P0       0             5               0
P1       1             3               5
P2       2             8               14
P3       3             6               8

Waiting time of each process is as follows −
P0: 0 − 0 = 0
P1: 5 − 1 = 4
P2: 14 − 2 = 12
P3: 8 − 3 = 5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21/4 = 5.25
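The service and waiting times in the table above can be reproduced with a short non-preemptive SJF simulation (a sketch; no burst-time ties occur in this data, so tie-breaking is arbitrary):

```python
# name: (arrival, burst), taken from the table above
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}

def sjf(procs):
    """Non-preemptive SJF: whenever the CPU frees up, run the arrived process
    with the smallest burst. Returns each process's waiting time."""
    remaining = dict(procs)
    t, waits = 0, {}
    while remaining:
        ready = [p for p, (arr, _) in remaining.items() if arr <= t]
        if not ready:                        # CPU idles until the next arrival
            t = min(arr for arr, _ in remaining.values())
            continue
        p = min(ready, key=lambda q: remaining[q][1])   # smallest burst first
        arr, burst = remaining.pop(p)
        waits[p] = t - arr                   # service time minus arrival time
        t += burst
    return waits

waits = sjf(procs)
```
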


Shortest Remaining Time First(SRTF) Scheduling
 This algorithm is the preemptive version of SJF scheduling. In SRTF, the execution of a process can be stopped after a certain amount of time. At the arrival of every process, the short-term scheduler schedules the process with the least remaining burst time among the list of available processes and the running process.

 Once all the processes are available in the ready queue, no more preemption is done and the algorithm works as SJF scheduling. The context of the process is saved in its Process Control Block when the process is removed from execution and the next process is scheduled. This PCB is accessed on the next execution of the process.

 Example:

 There are six jobs P1, P2, P3, P4, P5 and P6. Their arrival times and burst times are given in the table below.

 The Gantt chart is prepared according to the arrival and burst time given in the table.

 Since, at time 0, the only available process is P1 with CPU burst time 8. This is the only available process in the list
therefore it is scheduled.

 The next process arrives at time unit 1. Since the algorithm we are using is SRTF which is a preemptive one, the
current execution is stopped and the scheduler checks for the process with the least burst time.
Till now, there are two processes available in the ready queue. The OS has executed P1 for one unit of time till now;
the remaining burst time of P1 is 7 units. The burst time of Process P2 is 4 units. Hence Process P2 is scheduled on
the CPU according to the algorithm.
Shortest Remaining Time First(SRTF) Scheduling
 The next process P3 arrives at time unit 2. At this time, the execution of process P2 is stopped and the process with the least remaining burst time is searched for. Since process P3 has 2 units of burst time, it is given priority over the others.

 The next process P4 arrives at time unit 3. At this arrival, the scheduler stops the execution of P3 and checks which process has the least remaining burst time among the available processes (P1, P2, P3 and P4). P1 and P2 have remaining burst times of 7 units and 3 units respectively.

 P3 and P4 have a remaining burst time of 1 unit each. Since both are equal, the scheduling is done according to their arrival time. P3 arrived earlier than P4 and therefore it is scheduled again.

 The next process P5 arrives at time unit 4. By this time, process P3 has completed its execution and is no longer in the list. The scheduler compares the remaining burst times of all the available processes. Since the burst time of process P4 is 1, which is the least among all, it is scheduled.

 The next process P6 arrives at time unit 5; by this time, process P4 has completed its execution. We now have 4 available processes: P1 (7), P2 (3), P5 (3) and P6 (2). The burst time of P6 is the least among all, hence P6 is scheduled. Since all the processes have now arrived, the algorithm works the same as SJF. P6 is executed till its completion, and then the process with the least remaining time is scheduled.

 Once all the processes arrive, no preemption is done and the algorithm works as SJF.



Shortest Remaining Time First(SRTF) Scheduling

Avg Waiting Time = 24/6 = 4

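The walkthrough above can be simulated one time unit at a time. The arrival/burst pairs below are reconstructed from the narrative (the original table is not reproduced here, so treat them as an assumption):

```python
# name: (arrival, burst); values reconstructed from the walkthrough above
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 2),
         "P4": (3, 1), "P5": (4, 3), "P6": (5, 2)}

def srtf(procs):
    """Preemptive SJF (SRTF): each time unit, run the arrived process with the
    least remaining burst, breaking ties by earlier arrival. Assumes the CPU
    is never idle, which holds for this data set."""
    remaining = {p: burst for p, (_, burst) in procs.items()}
    completion, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            completion[p] = t
    # waiting time = completion - arrival - burst
    return {p: completion[p] - procs[p][0] - procs[p][1] for p in procs}

waits = srtf(procs)
```

This reproduces the slide's result: the waiting times sum to 24, so the average is 24/6 = 4.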


Priority Based Scheduling
 A priority is associated with each process & the CPU is allocated to the process with
the highest priority.
Equal priority processes scheduled in FCFS order.

 Priority scheduling can be either preemptive or non preemptive.

 A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.

 A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.
Priority Based Scheduling
 A major problem with priority scheduling algorithm is indefinite blocking or starvation.

 A solution to the problem of indefinite blockage of low priority processes is aging.

 Aging is technique of gradually increasing the priority of processes that wait in the system for a long period.
 In its non-preemptive form, priority scheduling is one of the most common scheduling algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other
resource requirement.



Priority Based Scheduling
 Given: Table of processes, and their Arrival time, Execution time, and priority. Here we are
considering 1 is the lowest priority.
 Process  Arrival Time  Execution Time  Priority  Service Time
 P0       0             5               1         0
 P1       1             3               2         11
 P2       2             8               1         14
 P3       3             6               3         5



Priority Based Scheduling
 Waiting time of each process is as follows −
 Process Waiting Time
 P0 0-0=0
 P1 11 - 1 = 10
 P2 14 - 2 = 12
 P3 5-3=2

 Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
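These waiting times can be reproduced with a small non-preemptive priority simulation. Since 1 is the lowest priority in the table above, the highest number wins:

```python
# name: (arrival, burst, priority), taken from the table above; higher number
# means higher priority (1 is the lowest).
procs = {"P0": (0, 5, 1), "P1": (1, 3, 2), "P2": (2, 8, 1), "P3": (3, 6, 3)}

def priority_sched(procs):
    """Non-preemptive priority scheduling; ties broken by earlier arrival."""
    remaining = dict(procs)
    t, waits = 0, {}
    while remaining:
        ready = [p for p, (arr, _, _) in remaining.items() if arr <= t]
        if not ready:                        # CPU idles until the next arrival
            t = min(arr for arr, _, _ in remaining.values())
            continue
        p = max(ready, key=lambda q: (remaining[q][2], -remaining[q][0]))
        arr, burst, _ = remaining.pop(p)
        waits[p] = t - arr                   # service time minus arrival time
        t += burst
    return waits

waits = priority_sched(procs)
```
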



Round Robin Scheduling

 The RR scheduling algorithm is designed especially for time sharing system. This is
the preemptive version of first come first serve scheduling.
 In this algorithm, every process gets executed in a cyclic way. A certain time slice
is defined in the system which is called time quantum. A time quantum is generally from
10 to 100 milliseconds.
 Each process present in the ready queue is assigned the CPU for that time quantum. If the execution of the process completes within that time, the process terminates; otherwise the process goes back to the ready queue and waits for its next turn to complete its execution. Thus the ready queue is treated as a circular queue.
 Round Robin is the preemptive process scheduling algorithm.
 Context switching is used to save states of preempted processes.
Round Robin Scheduling

Advantages
 It can be actually implementable in the system because it is not depending
on the burst time.
 It doesn't suffer from the problem of starvation or convoy effect.
 All the jobs get a fair allocation of CPU.
Disadvantages
 The higher the time quantum, the higher the response time in the system.
 The lower the time quantum, the higher the context switching overhead in
the system.
 Deciding a perfect time quantum is really a very difficult task in the system.



Round Robin Scheduling
 Wait time of each process is as follows −
 Wait Time = Service Time − Arrival Time
 P0: (0 − 0) + (12 − 3) = 9
 P1: (3 − 1) = 2
 P2: (6 − 2) + (14 − 9) + (20 − 17) = 12
 P3: (9 − 3) + (17 − 12) = 11
 Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
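A Round Robin simulation reproduces these wait times. The process set (the same bursts 5, 3, 8, 6 and arrivals 0, 1, 2, 3 as in the earlier FCFS example) and the quantum of 3 are assumptions chosen to match the numbers above:

```python
from collections import deque

# name: (arrival, burst); data and quantum assumed, see note above
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}
QUANTUM = 3

def round_robin(procs, q):
    """Round Robin; arrivals during a slice queue ahead of the preempted
    process. Assumes the CPU is never idle, which holds for this data."""
    arrivals = sorted(procs, key=lambda p: procs[p][0])
    remaining = {p: burst for p, (_, burst) in procs.items()}
    ready, completion, t, i = deque(), {}, 0, 0
    while len(completion) < len(procs):
        while i < len(arrivals) and procs[arrivals[i]][0] <= t:
            ready.append(arrivals[i]); i += 1
        p = ready.popleft()
        run = min(q, remaining[p])           # run for one quantum at most
        t += run
        remaining[p] -= run
        while i < len(arrivals) and procs[arrivals[i]][0] <= t:
            ready.append(arrivals[i]); i += 1
        if remaining[p] == 0:
            completion[p] = t
        else:
            ready.append(p)                  # preempted: back to the tail
    return {p: completion[p] - procs[p][0] - procs[p][1] for p in procs}

waits = round_robin(procs, QUANTUM)
```
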



INTER PROCESS
COMMUNICATION(IPC)

Inter Process Communication
What is Inter Process Communication (IPC) ?

Inter process communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of
data from one process to another.



Inter Process Communication(IPC)
 Inter process Communication allows processes to communicate and synchronize their actions.
Inter process Communication (IPC) mechanism is used by cooperating processes to exchange
data and information.
 Two operations provided by the IPC facility are send message and receive message.
 There are two models of IPC:
→ Shared Memory
→ Message Passing
 A message passing system allows processes to communicate with each other without sharing the same address space.
 Messages sent by a process can be of fixed or variable size. If the message size is fixed, the system-level implementation is straightforward, but it makes the task of programming more difficult. If the message size is variable, the system-level implementation is more complex, but it makes the task of programming simpler.
Inter Process Communication
 Synchronization in Inter Process Communication
 Synchronization is a necessary part of interprocess communication. It is either provided
by the interprocess control mechanism or handled by the communicating processes.
Some of the methods to provide synchronization are as follows −
 Semaphore: A semaphore is a variable that controls the access to a common resource by
multiple processes. The two types of semaphores are binary semaphores and counting
semaphores.
 Mutual Exclusion: Mutual exclusion requires that only one process thread can enter the
critical section at a time. This is useful for synchronization and also prevents race
conditions.



Inter Process Communication
 Synchronization in Inter Process Communication

 Barrier: A barrier does not allow individual processes to proceed until all the processes
reach it. Many parallel languages and collective routines impose barriers.
 Spinlock: This is a type of lock. The processes trying to acquire this lock wait in a loop
while checking if the lock is available or not. This is known as busy waiting because the
process is not doing any useful operation even though it is active.
Inter Process Communication
Approaches to Interprocess Communication
The different approaches to implement interprocess communication are
given as follows −
 Pipe
 Socket
 File
 Signal
 Shared Memory
 Message Queue
Inter Process Communication(IPC)
 The link between two processes P and Q used to send and receive messages is
called a communication link. If two processes P and Q want to communicate with
each other, a communication link must exist between these two processes so that
both processes are able to send and receive messages using that link.
 For direct communication, a communication link is associated with exactly two
processes. One communication link must exist between a pair of processes.
 In indirect communication between processes P and Q there is a mailbox to
help communication between P and Q. A mailbox can be viewed abstractly as
an object into which messages can be placed by processes and from which
messages can be removed.
Inter Process Communication(IPC)
 In a non-blocking send, the sending process sends the message and resumes
operation without waiting for the message to be received. It is also known as an
asynchronous send.
 In the Zero capacity queue the sender blocks until the receiver receives the
message. Zero capacity queue has maximum capacity of Zero; thus message
queue does not have any waiting message in it.
 The Zero capacity queue is referred to as a message system with no
buffering.
 Bounded capacity and Unbounded capacity queues are referred to as
Automatic buffering. Buffer capacity of the Bounded capacity queue is finite
length and buffer capacity of the Unbounded queue is infinite.
THREADS
Operating Systems
Thread
 A thread is sometimes called a light-weight process (LWP).
Thread
 Thread is a fundamental unit of CPU utilization that forms the basis of multithreaded
computer systems.
 Threads run within an application.
 Multiple tasks within the application can be implemented by separate threads, for example:
 Update display
 Fetch data
 Spell checking
 Answer a network request
 Process creation is heavy-weight while thread creation is light-weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded
Benefits of Thread
 Responsiveness – may allow continued execution if part of the process is blocked;
especially important for user interfaces.
 Resource Sharing – by default threads share common code, data, and other
resources, which allows multiple tasks to be performed simultaneously in a
single address space.
 Economy – cheaper than process creation. Creating and managing threads
(and context switching between them) is much faster and cheaper than
performing the same tasks for processes.
 Scalability, i.e. utilization of multiprocessor architectures – a single-threaded
process can only run on one CPU, no matter how many may be available,
whereas the execution of a multi-threaded application may be split amongst the
available processors. Multi-threaded processes benefit from multiprocessor
architectures.
Multithreading Models
 There are two types of threads to be managed in a modern system:
 User threads: Are supported above the kernel, without kernel support. These are the
threads that application programmers would put into their programs. Management
done by user-level threads library.
 Three primary thread libraries:
 POSIX Pthreads
 Windows threads
 Java threads
 Kernel threads: Are supported within the kernel of the OS itself. All modern OSes
support kernel level threads, allowing the kernel to perform multiple simultaneous
tasks and/or to service multiple kernel system calls simultaneously.
 Examples – virtually all general-purpose operating systems, including:
 Windows, Solaris, Linux, Mac OS X
Thread
 The user threads must be mapped to kernel threads, using one of the following
strategies.
 Many-To-One Model
 One-To-One Model
 Many-To-Many Model
Multithreading Models
 In the many-to-one model, many user-level threads are all mapped onto a single
kernel thread.
 Thread management is handled by the thread library in user space, which is very
efficient.
 However, if a blocking system call is made, then the entire process blocks, even if the
other user threads would otherwise be able to continue.
 Because a single kernel thread can operate only on a single CPU, the many-to-one
model does not allow individual processes to be split across multiple CPUs.
 Green threads for Solaris and GNU Portable Threads implemented the many-to-one
model in the past, but few systems continue to use it today.
 Examples:
 Solaris Green Threads
 GNU Portable Threads
Multithreading Models
 The one-to-one model creates a separate kernel thread to handle each user thread.
 One-to-one model overcomes the problems listed above involving blocking system
calls and the splitting of processes across multiple CPUs.
 However, managing the one-to-one model is more expensive: creating a kernel
thread for every user thread adds overhead and can slow down the system.
 Most implementations of this model place a limit on how many threads can be created.
 Linux and Windows from 95 to XP implement the one-to-one model for threads.
 Examples:
 Windows
 Linux
 Solaris 9 and later
Multithreading Models
 The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
 Users have no restrictions on the number of threads created.
 Blocking kernel system calls do not block the entire process.
 Processes can be split across multiple processors.
 Individual processes may be allocated variable numbers of kernel threads, depending
on the number of CPUs present and other factors.
 Examples:
 Solaris prior to version 9
 Windows with the ThreadFiber package
Implicit Threads
 Shifts the burden of addressing the programming challenges from the
application programmer to the compiler and run-time libraries.
 Growing in popularity as the number of threads increases, since program
correctness is more difficult to ensure with explicit threads.
 Creation and management of threads is done by compilers and run-time
libraries rather than by programmers.
 Three methods explored
 Thread Pools
 OpenMP
 Grand Central Dispatch
 Other methods include Intel Threading Building Blocks (TBB) and the
java.util.concurrent package
Implicit Threads
Thread Pools
 Creating new threads every time one is needed and then deleting it when it is
done can be inefficient, and can also lead to a very large ( unlimited ) number
of threads being created.
 An alternative solution is to create a number of threads when the process first
starts, and put those threads into a thread pool.
 Threads are allocated from the pool as needed, and returned to the pool
when no longer needed.
 When no threads are available in the pool, the process may have to wait
until one becomes available.
 Win32 provides thread pools through the QueueUserWorkItem() function; the
callback passed to it is often named "PoolFunction" in examples.
 Java also provides support for thread pools through the java.util.concurrent
package.
 Apple supports thread pools under the Grand Central Dispatch architecture.
Implicit Threads
 Advantages:
 Usually slightly faster to service a request with an existing thread than
create a new thread
 Allows the number of threads in the application(s) to be bound to the
size of the pool
 Separating task to be performed from mechanics of creating task allows
different strategies for running task
e.g. tasks could be scheduled to run periodically
Implicit Threads
 OpenMP
 OpenMP is a set of compiler directives available for C, C++, or FORTRAN
programs that instruct the compiler to automatically generate parallel code
where appropriate.
 For example, the directive:
#pragma omp parallel { /* some parallel code here */ }
would cause the compiler to create as many threads as the machine has cores
available (e.g. 4 on a quad-core machine), and to run the parallel block of code
(known as a parallel region) on each of the threads.
 Another sample directive is "#pragma omp parallel for", which causes the for
loop immediately following it to be parallelized, dividing the iterations up
amongst the available cores.
 Provides support for parallel programming in shared-memory environments.
 Identifies parallel regions – blocks of code that can run in parallel.
Implicit Threads
 Grand Central Dispatch, GCD
 GCD is an extension to C and C++ languages, API, and run-time library
available on Apple technology for Mac OSX and iOS operating systems to
support parallelism.
 Allows identification of parallel sections
 Similar to OpenMP, users of GCD define blocks of code to be executed
either serially or in parallel by placing a caret just before an opening curly
brace; a block has the form “^{ }”, e.g. ^{ printf( "I am a block.\n" ); }
 Internally GCD manages a pool of POSIX threads which may fluctuate in size
depending on load conditions.
 Manages most of the details of threading
Implicit Threads
 GCD schedules blocks by placing them on one of several dispatch queues.
 Two types of dispatch queues:
 serial – Blocks placed on a serial queue are removed one by one, in FIFO
order. The next block cannot be removed for scheduling until the previous
block has completed. Each process has a default serial queue, called the
main queue.
 Programmers can create additional serial queues within a program.
 concurrent – Blocks are also removed from these queues in FIFO order, but
several may be removed and dispatched without waiting for earlier ones to
finish, depending on the availability of threads.
 There are three concurrent queues, corresponding roughly to low,
medium, and high priority.
Implicit Threads
 Other Approaches
 There are several other approaches available, including Intel's
Threading Building Blocks (TBB) and other products, and Java's
java.util.concurrent package.
Thread Issues
 Semantics of fork() and exec() system calls
 Signal handling – synchronous and asynchronous
 Thread cancellation of target thread – asynchronous or deferred
 Thread-local storage
 Scheduler activations
Thread Issues
1. Semantics of the fork( ) and exec( ) System Calls
 Does fork() duplicate only the calling thread or all threads? Some UNIX
systems provide two versions of fork() for this reason.
 exec() usually works as normal – it replaces the entire running process,
including all threads.
Thread Issues
 Signal Handling
 Q: When a multi-threaded process receives a signal, to what thread should
that signal be delivered?
 A: There are four major options:
 Deliver the signal to the thread to which the signal applies.
 Deliver the signal to every thread in the process.
 Deliver the signal to certain threads in the process.
 Assign a specific thread to receive all signals in a process.
 The best choice may depend on which specific signal is involved.
 UNIX allows individual threads to indicate which signals they are accepting
and which they are ignoring. However the signal can only be delivered to one
thread, which is generally the first thread that is accepting that particular
signal.
Thread Issues
 Signal Handling
 Signals are used in UNIX systems to notify a process that a particular event has occurred.
 A signal handler is used to process signals
 1. Signal is generated by particular event
 2. Signal is delivered to a process
 3. Signal is handled by one of two signal handlers:
 1. default
 2. user-defined
 Every signal has default handler that kernel runs when handling signal
 User-defined signal handler can override default
 For single-threaded programs, the signal is delivered to the process.
 UNIX provides two separate system calls, kill( pid, signal ) and pthread_kill( tid, signal ), for
delivering signals to processes or specific threads respectively.
 Windows does not support signals, but they can be emulated using Asynchronous
Procedure Calls (APCs). APCs are delivered to specific threads, not processes.
Thread Issues
 Thread Cancellation
 Terminating a thread before it has finished
 Thread to be canceled is target thread
 Threads that are no longer needed may be cancelled by another thread in one of two
ways:
 Asynchronous Cancellation cancels the thread immediately.
 Deferred Cancellation allows the target thread to periodically check if it should
be cancelled . Sets a flag indicating the thread should cancel itself when it is
convenient. It is then up to the cancelled thread to check this flag periodically and
exit nicely when it sees the flag set.
 ( Shared ) resource allocation and inter-thread data transfers can be problematic with
asynchronous cancellation.
Thread Issues
 Thread Cancellation
 Pthread code can create and then cancel a thread using pthread_create() and
pthread_cancel().
 Invoking thread cancellation requests cancellation, but actual cancellation
depends on thread state.
 If the thread has cancellation disabled, cancellation remains pending until the
thread enables it.
 The default type is deferred: cancellation only occurs when the thread reaches
a cancellation point, e.g. pthread_testcancel(); the cleanup handler is then
invoked.
 On Linux systems, thread cancellation is handled through signals.
Thread Issues
 Thread-Local Storage (Thread-Specific Data)
 Most major thread libraries ( pThreads, Win32, Java ) provide support for thread-
specific data, known as thread-local storage or TLS.
 Thread-local storage (TLS) allows each thread to have its own copy of data
 Useful when you do not have control over the thread creation process (i.e., when
using a thread pool)
 Different from local variables
 Local variables visible only during single function invocation
 TLS visible across function invocations
 Similar to static data as it does not cease to exist when the function ends.
 TLS is unique to each thread
Thread Issues
 Scheduler Activations
 Both M:M and Two-level models require communication to maintain the appropriate
number of kernel threads allocated to the application
 Typically use an intermediate data structure between user and kernel threads –
lightweight process (LWP)
 Appears to be a virtual processor on which process can schedule user thread to
run
 Each LWP attached to kernel thread
 How many LWPs to create?
 Scheduler activations provide upcalls - a communication mechanism from the kernel
to the upcall handler in the thread library
 This communication allows an application to maintain the correct number of
kernel threads
Thread Issues
 Scheduler Activations
 Many implementations of threads provide a virtual processor as an interface between the user
thread and the kernel thread, particularly for the many-to-many or two-tier models.
 This virtual processor is known as a "Lightweight Process", LWP.
 There is a one-to-one correspondence between LWPs and kernel threads.
 The number of kernel threads available, ( and hence the number of LWPs ) may change
dynamically.
 The application ( user level thread library ) maps user threads onto available LWPs.
 Kernel threads are scheduled onto the real processor(s) by the OS.
 The kernel communicates to the user-level thread library when certain events occur ( such
as a thread about to block ) via an upcall, which is handled in the thread library by
an upcall handler. The upcall also provides a new LWP for the upcall handler to run on,
which it can then use to reschedule the user thread that is about to become blocked. The
OS will also issue upcalls when a thread becomes unblocked, so the thread library can
make appropriate adjustments.
Thread
Scheduler Activations
 If the kernel thread blocks, then the LWP blocks, which in turn blocks the
user thread.
 Ideally there should be at least as many LWPs available as there could be
concurrently blocked kernel threads.
 Otherwise, if all LWPs are blocked, user threads will have to wait for one to
become available.
THANK YOU !!
Dr. Dipak D. Bage
[email protected]
K.K.Wagh Institute of Engineering Education & Research, Nashik