Module 2

OS module 2

Uploaded by Lalli Krishnan

MODULE 2

Processor Scheduling and Inter-process Communication
Syllabus
Processor Scheduling: Definition, Scheduling objectives, Types of Schedulers, Scheduling criteria: CPU utilization, Throughput, Turnaround Time, Waiting Time, Response Time; Scheduling algorithms: Preemptive and Non-preemptive, FCFS, SJF, RR; Multiprocessor and multitasking.

Inter-process Communication: Race Conditions, Critical Section, Mutual Exclusion, Hardware Solution, Strict Alternation, Peterson's Solution, The Producer-Consumer Problem, Semaphores, Event Counters, Monitors, Message Passing, Classical IPC Problems: Readers & Writers Problem, Dining Philosophers Problem, etc.; Scheduling, Scheduling Algorithms.
Processor Scheduling
• CPU scheduling is the mechanism that allows one process to use the
CPU while another process is delayed (e.g. waiting for a resource
such as I/O), thus making full use of the CPU.

• It decides which task (or process) the CPU should work on at any
given time.

• A CPU can handle only one task at a time, but there are usually
many tasks waiting to be processed.
CPU and I/O Burst Cycle
• Process execution consists of a cycle of CPU execution and I/O wait.

• Processes alternate between these two states.

• Process execution begins with a CPU burst, followed by an I/O burst, then
another CPU burst ... etc.

• The last CPU burst will end with a system request to terminate execution
rather than with another I/O burst.

• The durations of these CPU bursts have been measured extensively.

• An I/O-bound program typically has many short CPU bursts; a
CPU-bound program might have a few very long CPU bursts.

• This distribution can help in selecting an appropriate CPU-scheduling algorithm.


Objectives of CPU Scheduling

• Maximum utilization of the CPU: keep the CPU as busy as possible.

• Allocation of the CPU should be fair.

CPU Schedulers
• Process schedulers are fundamental components of
operating systems responsible for deciding the order
in which processes are executed by the CPU.

• They manage how the CPU allocates its time among multiple tasks
or processes competing for its attention.
Types of Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
- It brings the new process to the ‘Ready State’.
2. Short-Term or CPU Scheduler
- It is responsible for selecting one process from the ready state for scheduling
it on the running state.
- The short-term scheduler only selects the process to schedule; it does not
load the process onto the CPU.

- The dispatcher is responsible for loading the process selected by the
short-term scheduler onto the CPU (Ready to Running state). Context switching
is done by the dispatcher only.
3. Medium-Term Scheduler
- It handles swapping: it suspends processes and moves them out of main
memory, then later resumes them, reducing the degree of multiprogramming.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible.

• Throughput – number of processes that complete their execution per time unit.

• Turnaround time – amount of time to execute a particular process.

• Waiting time – amount of time a process has been waiting in the ready queue.

• Response time – amount of time from when a request was submitted until the first response is produced.
Scheduling Algorithm Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
Preemptive Scheduling
• Preemptive scheduling is used when a process switches from
running state to ready state or from waiting state to ready
state.

• The resources (mainly CPU cycles) are allocated to the process for a
limited amount of time and then taken away; the process is placed back
in the ready queue if it still has CPU burst time remaining.

• That process stays in the ready queue until it gets its next chance to execute.
Non-Preemptive Scheduling
• Non-preemptive Scheduling is used when a process
terminates, or a process switches from running to waiting
state.

• In this scheduling, once the resources (CPU cycles) are allocated to a
process, the process holds the CPU until it terminates or reaches a
waiting state.

• Non-preemptive scheduling does not interrupt a process running on the
CPU in the middle of its execution.

• Instead, it waits until the process completes its CPU burst; only then
can the CPU be allocated to another process.
Types of Scheduling Algorithm
(a) First Come First Serve (FCFS)
(b) Shortest Job First (SJF)
(c) Round Robin (RR) Scheduling
(d) Priority Scheduling
First Come First Serve (FCFS)

• The process that arrives first in the ready queue is assigned the CPU first.

• In case of a tie, the process with the smaller process id is executed first.

• It is always non-preemptive in nature.

• Jobs are executed on a first come, first served basis.

• Easy to understand and implement.

• Its implementation is based on a FIFO queue.

• Poor in performance, as the average waiting time is high.

Advantages & Disadvantages
Advantages
- It is simple and easy to understand.
- It can be easily implemented using queue data structure.
- It does not lead to starvation.

Disadvantages
- It does not consider the priority or burst time of the processes.
- It suffers from the convoy effect: processes with a large burst time
that arrive first make all the shorter processes behind them wait,
inflating the average waiting time.
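The FCFS computation used in the worked examples can be sketched in a few lines. The burst times below (24, 3, 3) are assumed for illustration, since the original example tables are not reproduced in this text; all processes are taken to arrive at time 0.

```python
# FCFS scheduling sketch: compute completion, turnaround, and waiting
# times for processes that all arrive at t = 0, served in arrival order.
def fcfs(bursts):
    """bursts: list of (pid, burst_time) in arrival order."""
    clock = 0
    rows = []
    for pid, bt in bursts:
        completion = clock + bt      # each process runs to completion
        turnaround = completion - 0  # arrival time assumed to be 0
        waiting = turnaround - bt    # time spent in the ready queue
        rows.append((pid, bt, completion, turnaround, waiting))
        clock = completion
    return rows

# Assumed burst times for illustration.
rows = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
avg_tat = sum(r[3] for r in rows) / len(rows)
avg_wt = sum(r[4] for r in rows) / len(rows)
print(avg_tat, avg_wt)  # 27.0 17.0
```

Note how the long first job (P1) creates the convoy effect described above: P2 and P3 each wait 24+ time units for a 3-unit burst.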
Example 1

Q: Consider the following processes with given burst times (CPU execution
times). Calculate the average turnaround time and average waiting time.
Solution for the Example 1 Contd..
Example 2

Q: Consider the processes P1, P2, P3 given in the table below, arriving for
execution in the same order, with arrival time 0 and the given burst times.
Solution for the Example 2 Contd..
Example 3

Q: Consider the processes P1, P2, P3 given in the table below, arriving for
execution in the same order, with arrival time 0 and the given burst times.
Priority Scheduling
• Out of all the available processes, CPU is assigned to the
process having the highest priority.

• In case of a tie, it is broken by FCFS Scheduling.

• Priority scheduling can be used in both preemptive and non-preemptive mode.

• The waiting time for the process having the highest priority
will always be zero in preemptive mode.

• The waiting time for the process having the highest priority
may not be zero in non-preemptive mode.
Advantages and Disadvantages
Advantages-
• It considers the priority of the processes and allows the important processes to run first.
• Priority scheduling in pre-emptive mode is best suited for real time operating system.

Disadvantages-
• Processes with lower priority may starve for the CPU.
• The waiting time and response time of lower-priority processes cannot be predicted.
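Non-preemptive priority scheduling can be sketched as below. The burst times and priorities are assumed for illustration, with the common convention that a smaller number means higher priority and ties fall back to FCFS order, as stated above.

```python
# Non-preemptive priority scheduling sketch: all processes arrive at
# t = 0; lower priority number = higher priority; ties broken by
# arrival (FCFS) order.
def priority_schedule(procs):
    """procs: list of (pid, burst, priority) in arrival order."""
    # Sort by priority first, then by original position for ties.
    order = sorted(range(len(procs)), key=lambda i: (procs[i][2], i))
    clock, result = 0, {}
    for i in order:
        pid, burst, _ = procs[i]
        waiting = clock               # arrival time assumed to be 0
        clock += burst
        result[pid] = (waiting, clock)  # (waiting time, completion time)
    return result

# Assumed workload for illustration.
res = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                         ("P4", 1, 5), ("P5", 5, 2)])
avg_wait = sum(w for w, _ in res.values()) / len(res)
print(avg_wait)  # 8.2 -- execution order is P2, P5, P1, P3, P4
```

The highest-priority process (P2) has zero waiting time, matching the preemptive-mode claim above; in non-preemptive mode that guarantee would not hold if a lower-priority process were already running.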
Example 1
Example 2
Multiprocessor

• The availability of more than one processor per system, which can
execute several sets of instructions in parallel, is called multiprocessing.
Multitasking
• The execution of more than one task simultaneously is known as
multitasking.
• The CPU executes multiple jobs by switching among them, typically using a
small time quantum; the switches occur so frequently that users can
interact with each program while it is running.
Difference between Multitasking and Multiprocessing

S.No. | Multitasking                                      | Multiprocessing
1     | The execution of more than one process takes      | The presence of more than one processor in a system
      | place simultaneously.                             | that can execute a large number of instructions in parallel.
2     | The number of processors is one.                  | The number of processors is more than one.
3     | It takes more time in process execution.          | It takes less time in process execution.
4     | Jobs are executed one at a time.                  | More jobs can be executed at a time.
5     | The throughput is moderate.                       | The throughput is maximum.
6     | The efficiency is moderate.                       | The efficiency is maximum.
7     | The whole process depends on only one processor.  | The whole process is divided between the multiple processors.
Inter-process Communication
• Processes within a system may be
 independent or
 cooperating
• Cooperating processes need interprocess
communication (IPC)
• Two models of IPC
– Shared memory
– Message passing
Communications Models
(a) Shared memory. (b) Message passing.
Critical Section
• The part of code where the shared resource is accessed is
called the critical section.
• It is critical because if multiple processes enter this section at the
same time, data corruption and errors can result.
Race condition
• A race condition happens when two or more
processes try to access the same resource at the same
time without proper coordination.
• This “race” can lead to incorrect results or
unpredictable behavior because the order of execution
is not controlled.
• Example: Two people trying to edit the same
document at the same time, causing one’s changes to
overwrite the other’s.
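The "lost update" described above can be shown with a deterministic trace. This is a sketch that interleaves the read and write steps of two hypothetical processes A and B by hand, rather than relying on real thread timing:

```python
# Deterministic illustration of a race condition: both "processes" do
# counter = counter + 1, but their read-modify-write steps interleave,
# so one update is lost.
counter = 0

# Step 1: each process reads the counter before either writes back.
read_by_a = counter          # A reads 0
read_by_b = counter          # B also reads 0

# Step 2: each writes back its own incremented copy.
counter = read_by_a + 1      # A writes 1
counter = read_by_b + 1      # B overwrites with 1 -- A's update is lost

print(counter)  # 1, not the expected 2
```

With proper coordination (a lock around the read-modify-write), the second process would read 1 and the final value would be 2.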
Process Synchronization

• Process synchronization is the mechanism of coordination between multiple
processes that share common resources.
• It ensures that they access shared resources in a controlled and predictable
manner.
• It aims to resolve the problem of race conditions and other synchronization
issues in a concurrent system.
• Different Synchronization Techniques:
 Mutex (Mutual Exclusion): only one process can access the critical section at a
time; it is a locking mechanism.
 Semaphores: Signaling mechanisms to control access to shared resources.
 Monitors: High-level synchronization constructs that encapsulate shared resources
and provide a safe interface for processes to access them.
 Locks: Mechanisms to protect shared resources from concurrent access; locks
can be simple or complex.
Semaphores
• A semaphore is a synchronization primitive used to control access to a shared
resource by multiple processes.
• Types of Semaphores
 Binary Semaphore: This is also known as a mutex lock. It can have only two values – 0 and 1. Its
value is initialized to 1. It is used to implement the solution of critical section problems with
multiple processes.
 Counting Semaphore: It controls access to a resource that can be used by multiple processes or
threads simultaneously. For instance, a semaphore with an initial count of 3 allows three processes
or threads to access the resource concurrently.

Basic Operations of a Semaphore


• Wait (P): This operation decreases the value of the semaphore by 1 if it is
positive. If the value is zero, the process is blocked until the semaphore
becomes positive.
• Signal (V): This operation increases the semaphore's value by 1, potentially
waking up a blocked process.
Semaphore syntax
    S = 1;              // initialization
    P(semaphore S):     // wait
        S = S - 1;      // if S is 0, the caller blocks first
    V(semaphore S):     // signal
        S = S + 1;      // may wake a blocked process
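The P/V operations map directly onto Python's `threading.Semaphore`, whose `acquire`/`release` methods play the roles of wait and signal. A minimal sketch of a counting semaphore initialized to 3, so at most three threads may use the "resource" at once (the `peak` counter verifies this):

```python
import threading

# Counting semaphore initialized to 3: at most 3 concurrent holders.
sem = threading.Semaphore(3)
lock = threading.Lock()   # protects the bookkeeping counters below
inside = 0                # threads currently "using the resource"
peak = 0                  # maximum observed value of `inside`

def worker():
    global inside, peak
    sem.acquire()             # P / wait: blocks if 3 threads are inside
    with lock:
        inside += 1
        peak = max(peak, inside)
    # ... use the shared resource here ...
    with lock:
        inside -= 1
    sem.release()             # V / signal: may wake a blocked thread

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 3)  # True: the semaphore never admits more than 3
```

A binary semaphore is the same construct initialized to 1, which is how it implements mutual exclusion for a critical section.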
Hardware Solution

• There are three algorithms in the hardware approach to solving the
process synchronization problem:
1. Test and Set
2. Swap
3. Unlock and Lock
• Hardware instructions in many operating systems help in the effective
solution of critical section problems.
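The Test-and-Set idea can be sketched as follows. On real hardware, test-and-set is a single atomic instruction; in this Python sketch a `Lock` merely simulates that atomicity so the spin-lock logic built on top can be shown:

```python
import threading

# Simulated Test-and-Set: atomically return the old value of the flag
# and set it to True. (A real CPU does this in one instruction; the
# inner Lock only stands in for that hardware atomicity.)
class TestAndSet:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()
    def test_and_set(self):
        with self._atomic:
            old = self._flag
            self._flag = True
            return old
    def clear(self):
        self._flag = False     # release: open the lock again

tas = TestAndSet()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while tas.test_and_set():   # spin until we observe False
            pass                    # (busy waiting)
        counter += 1                # critical section
        tas.clear()                 # leave critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: no increments were lost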
Peterson’s Algorithm

• Peterson’s Algorithm is used to synchronize two processes.


• It uses two variables,
– flag
– turn
• Initially, both flags are false.
• When a process wants to execute its critical section, it sets its own flag
to true and sets turn to the index of the other process.
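The entry and exit protocol described above can be sketched with two threads. Note the hedge: Peterson's algorithm assumes sequentially consistent memory; CPython's global interpreter lock makes that hold in practice for this sketch, but a real C implementation would need memory barriers:

```python
import threading

# Peterson's algorithm for two threads with ids 0 and 1.
flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the algorithm

def process(me):
    global turn, counter
    other = 1 - me
    for _ in range(1000):
        flag[me] = True        # announce intent to enter
        turn = other           # politely let the other go first
        while flag[other] and turn == other:
            pass               # busy-wait while the other is inside
        counter += 1           # critical section
        flag[me] = False       # exit: allow the other thread in

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2000: mutual exclusion preserved every increment
```

Setting `turn = other` before the wait loop is what breaks ties: if both threads want in simultaneously, whichever wrote `turn` last waits, so exactly one proceeds.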
Readers-Writers Problem

• The Readers-Writers Problem is a classic synchronization issue in
operating systems that involves managing access to shared data by multiple
threads or processes.
• The problem addresses:
• Readers: Multiple readers can access the shared data


simultaneously without causing any issues because they are only
reading and not modifying the data.
• Writers: Only one writer can access the shared data at a time to
ensure data integrity, as writers modify the data, and concurrent
modifications could lead to data corruption or inconsistencies.
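One common scheme (the "first readers-writers" solution) can be sketched with two locks: a `read_count` of active readers, where the first reader locks writers out and the last reader lets them back in. This is a sketch of that classic scheme, not the only possible solution:

```python
import threading

mutex = threading.Lock()      # protects read_count
rw_lock = threading.Lock()    # held by a writer, or on behalf of readers
read_count = 0                # number of readers currently inside
shared_data = []

def reader(results, i):
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:
            rw_lock.acquire()        # first reader blocks writers
    results[i] = list(shared_data)   # read only, no modification
    with mutex:
        read_count -= 1
        if read_count == 0:
            rw_lock.release()        # last reader admits writers again

def writer(value):
    with rw_lock:                    # exclusive access for the writer
        shared_data.append(value)

writer(1)
results = [None, None]
readers = [threading.Thread(target=reader, args=(results, i)) for i in range(2)]
for t in readers: t.start()
for t in readers: t.join()
writer(2)
print(shared_data, results)  # [1, 2] [[1], [1]]
```

Both readers may be inside simultaneously, yet each writer runs alone. This variant can starve writers if readers keep arriving, which is why writer-preference variants also exist.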
Message Passing
• Processes communicate with each other by sending messages.

• IPC facility provides two operations:


– send(message)
– receive(message)

• The message size is either fixed or variable


• It can be done in two ways
– Direct communication
– Indirect communication
Direct Communication

• Processes must name each other explicitly:


– send (P, message) – send a message to process P
– receive(Q, message) – receive a message from process
Q
• Properties of communication link
– Links are established automatically
– A link is associated with exactly one pair of
communicating processes
– Between each pair there exists exactly one link
– The link may be unidirectional, but is usually bi-directional
Indirect Communication

• Messages are sent to and received from mailboxes
(also referred to as ports)
– Each mailbox has a unique id
– Processes can communicate only if they share a mailbox
• Properties of communication link
– Link established only if processes share a common mailbox
– A link may be associated with many processes
– Each pair of processes may share several communication
links
– Link may be unidirectional or bi-directional
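Indirect communication can be sketched with a thread-safe queue standing in for the mailbox: sender and receiver never name each other, only the shared mailbox. The sentinel value `None` used to end the conversation is an assumption of this sketch, not part of the model:

```python
import queue
import threading

# A mailbox modeled as a thread-safe FIFO queue.
mailbox = queue.Queue()
received = []

def sender():
    for msg in ["hello", "world"]:
        mailbox.put(msg)      # send(mailbox, message)
    mailbox.put(None)         # sentinel: no more messages (sketch only)

def receiver():
    while True:
        msg = mailbox.get()   # receive(mailbox, message); blocks if empty
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['hello', 'world']
```

Because the link is the mailbox rather than a process pair, any number of senders and receivers could share it, matching the many-process property listed above.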
Strict Alternation
• In an operating system, strict alternation is a technique that provides
mutual exclusion between two processes.
• It is a software solution to the critical section problem for two
processes: they share a turn variable and busy-wait until it is their
turn, so they enter the critical section strictly alternately.
• Its drawback is busy waiting, and a process that does not want to enter
its critical section still blocks the other one.
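The turn-variable protocol can be sketched with two threads. The shared `turn` forces the critical sections to alternate exactly, regardless of how the scheduler interleaves the threads:

```python
import threading

turn = 0      # whose turn it is to enter the critical section
log = []      # records the order of critical-section entries

def process(me):
    global turn
    for _ in range(3):
        while turn != me:
            pass              # busy-wait (this is the main drawback)
        log.append(me)        # critical section
        turn = 1 - me         # hand the turn to the other process

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(log)  # [0, 1, 0, 1, 0, 1] -- strictly alternating
```

The strict order is also the weakness: if process 0 never wants to re-enter, process 1 spins forever, which is exactly what Peterson's solution fixes.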
