Unit 2

The document discusses CPU scheduling, explaining key concepts such as arrival time, completion time, burst time, turnaround time, waiting time, and response time. It outlines various CPU scheduling algorithms including FCFS, SJN, priority scheduling, and round robin, along with their characteristics and evaluation methods like deterministic modeling, queueing models, and simulations. Additionally, it covers process synchronization, race conditions, critical section problems, and solutions including Peterson's solution and semaphores.

CPU Scheduling

Scheduling of processes/work is done to finish the work on time.


Below are the different times associated with a process.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turnaround Time: Difference between completion time and arrival time.
Turnaround Time = Completion Time – Arrival Time
Waiting Time (W.T.): Difference between turnaround time and burst time.
Waiting Time = Turnaround Time – Burst Time
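As a quick check of the two formulas above, here is a minimal Python sketch; the arrival, completion, and burst values are hypothetical, chosen only to illustrate the arithmetic.

```python
# Turnaround and waiting time from the formulas above.
# The process values below are hypothetical, for illustration only.
def turnaround_time(completion, arrival):
    return completion - arrival

def waiting_time(turnaround, burst):
    return turnaround - burst

# one example process: arrives at 0, completes at 7, needs 4 units of CPU
arrival, completion, burst = 0, 7, 4
tat = turnaround_time(completion, arrival)   # 7 - 0 = 7
wt = waiting_time(tat, burst)                # 7 - 4 = 3
print(tat, wt)
```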

CPU Scheduling Criteria


The criteria include the following:
1. CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it varies
from 40 to 90 percent depending on the load on the system.

2. Throughput –
A measure of the work done by CPU is the number of processes being executed and
completed per unit time. This is called throughput. The throughput may vary depending upon
the length or duration of processes.

3. Turnaround time –
For a particular process, an important criterion is how long it takes to execute that process. The
time elapsed from the submission of a process to its completion is known as the turnaround
time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in
the ready queue, executing on the CPU, and waiting for I/O.

4. Waiting time –
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in
the ready queue.

5. Response time –
In an interactive system, turnaround time is not the best criterion. A process may produce some
output fairly early and continue computing new results while previous results are being output
to the user. Thus another criterion is the time from the submission of a request until the first
response is produced. This measure is called response time.

Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to
discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed
so that once a process enters the running state, it cannot be preempted until it completes its allotted
time, whereas preemptive scheduling is priority-based: the scheduler may preempt a low-priority
running process at any time when a high-priority process enters the ready state.

First Come First Serve (FCFS)


 Jobs are executed on first come, first serve basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.
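The FCFS behaviour described above can be sketched as a small simulation; the process tuples (name, arrival time, burst time) are hypothetical example values.

```python
# FCFS: processes run in arrival order; a process waits until every
# process ahead of it in the FIFO queue has finished.
# The process data (name, arrival, burst) are hypothetical.
def fcfs(processes):
    procs = sorted(processes, key=lambda p: p[1])  # FIFO by arrival time
    time, results = 0, {}
    for name, arrival, burst in procs:
        start = max(time, arrival)       # CPU may be idle before arrival
        results[name] = {"waiting": start - arrival,
                         "turnaround": start + burst - arrival}
        time = start + burst
    return results

r = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)])
# P1 waits 0; P2 waits 5 - 1 = 4; P3 waits 8 - 2 = 6 — the long average
# wait behind early long jobs is exactly the "convoy" weakness noted above.
```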

Shortest Job Next (SJN)


 This is also known as shortest job first, or SJF.
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.
 Impossible to implement in interactive systems where the required CPU time is not known.
 The processor should know in advance how much time a process will take.
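Non-preemptive SJN can be sketched as follows; the table of processes (arrival and execution times) is hypothetical, standing in for an example workload.

```python
# Non-preemptive SJN/SJF: at each scheduling point, run the ready
# process with the shortest burst time to completion.
# The process data (name, arrival, burst) are hypothetical.
def sjf(processes):
    remaining = sorted(processes, key=lambda p: p[1])  # by arrival
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = remaining[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        results[name] = {"waiting": time - arrival,
                         "turnaround": time + burst - arrival}
        time += burst
        remaining.remove((name, arrival, burst))
    return results

r = sjf([("P1", 0, 6), ("P2", 1, 2), ("P3", 2, 8)])
# P1 runs first (only arrival at time 0); at time 6, P2 (burst 2) is
# chosen over P3 (burst 8), minimizing the waiting time.
```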
Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first and so
on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource
requirement.
Round Robin Scheduling
 Round Robin is a preemptive process scheduling algorithm.
 Each process is given a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
 Context switching is used to save the states of preempted processes.
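The round-robin behaviour above can be sketched as follows; for simplicity all processes are assumed to arrive at time 0, and the burst values and quantum are hypothetical.

```python
from collections import deque

# Round Robin with a fixed quantum: each process runs for at most
# `quantum` time units, then is preempted and re-queued at the tail.
# The process data (name, burst) are hypothetical; all arrive at time 0.
def round_robin(processes, quantum):
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining - run > 0:
            queue.append((name, remaining - run))  # preempted, re-queued
        else:
            completion[name] = time                # finished
    return completion

c = round_robin([("P1", 5), ("P2", 3)], quantum=2)
# Execution order: P1(2) P2(2) P1(2) P2(1) P1(1),
# so P2 completes at time 7 and P1 at time 8.
```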

Algorithm Evaluation
How do we select a CPU scheduling algorithm for a particular system?
There are many scheduling algorithms, each with its own parameters. As
a result, selecting an algorithm can be difficult. The first problem is
defining the criteria to be used in selecting an algorithm. Criteria are
often defined in terms of CPU utilization, response time, or throughput.
To select an algorithm, we must first define the relative importance of
these measures. Our criteria may include several measures, such as:

 Maximizing CPU utilization under the constraint that the maximum response time is 1 second
 Maximizing throughput such that turnaround time is (on average) linearly proportional to total execution time

Once the selection criteria have been defined, we want to evaluate the algorithms under
consideration. We next describe the various evaluation methods we can use.

Deterministic Modeling
One major class of evaluation methods is analytic evaluation. Analytic
evaluation uses the given algorithm and the system workload to produce
a formula or number that evaluates the performance of the algorithm for
that workload. One type of analytic evaluation is deterministic modeling.
This method takes a particular predetermined workload and defines the
performance of each algorithm for that workload. For example, assume a
workload of five processes, all arriving at time 0 in the order given, with
the length of each CPU burst given in milliseconds.
Deterministic modeling is simple and fast. It gives us exact numbers,
allowing us to compare the algorithms. However, it requires exact
numbers for input, and its answers apply only to those cases. The main
uses of deterministic modeling are in describing scheduling algorithms
and providing examples.
In cases where we are running the same program over and over again
and can measure the program's processing requirements exactly, we
may be able to use deterministic modeling to select a scheduling
algorithm. Furthermore, over a set of examples, deterministic modeling
may indicate trends that can then be analyzed and proved separately.
For example, it can be shown that, for the environment described (all
processes and their times available at time 0), the SJF policy will always
result in the minimum waiting time.
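The deterministic-modeling comparison described above can be sketched as follows; the five burst lengths are hypothetical example values, with every process arriving at time 0. As the section notes, SJF gives the minimum average waiting time in this setting.

```python
# Deterministic modeling: compare the average waiting time of FCFS and
# SJF on one fixed, predetermined workload (bursts in ms, hypothetical).
def avg_waiting(bursts):
    # With all arrivals at time 0, the waiting time of the k-th
    # scheduled process is the sum of the bursts scheduled before it.
    total, elapsed = 0, 0
    for b in bursts:
        total += elapsed
        elapsed += b
    return total / len(bursts)

workload = [10, 29, 3, 7, 12]
fcfs_avg = avg_waiting(workload)          # run in the given order
sjf_avg = avg_waiting(sorted(workload))   # shortest jobs first
print(fcfs_avg, sjf_avg)  # exact numbers for this exact workload
```

The exact numbers (28 ms vs 13 ms here) illustrate the method's strength and its limitation: they apply only to this one workload.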
Queueing Models
On many systems, the processes that are run vary from day to day, so
there is no static set of processes (or times) to use for deterministic
modeling. What can be determined, however, is the distribution of CPU
and I/O bursts. These distributions can be measured and then
approximated or simply estimated. The result is a mathematical formula
describing the probability of a particular CPU burst. Commonly, this
distribution is exponential and is described by its mean. Similarly, we can
describe the distribution of times when processes arrive in the system
(the arrival-time distribution). From these two distributions, it is possible
to compute the average throughput, utilization, waiting time, and so on
for most algorithms. The computer system is described as a network of
servers.
Each server has a queue of waiting processes. The CPU is a server with
its ready queue, as is the I/O system with its device queues. Knowing
arrival rates and service rates, we can compute utilization, average
queue length, average wait time, and so on. This area of study is called
queueing-network analysis. As an example, let n be the average queue
length (excluding the process being serviced), let W be the average
waiting time in the queue, and let λ be the average arrival rate for new
processes in the queue (such as three processes per second).
We expect that during the time W that a process waits, λ × W new
processes will arrive in the queue. If the system is in a steady state, then
the number of processes leaving the queue must be equal to the number
of processes that arrive. Thus,

n = λ × W

This equation, known as Little's formula, is particularly useful because it
is valid for any scheduling algorithm and arrival distribution. We can use
Little's formula to compute one of the three variables if we know the
other two.
For example, if we know that 7 processes arrive every second (on
average), and that there are normally 14 processes in the queue, then
we can compute the average waiting time per process as 2 seconds.
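The worked example above follows directly from Little's formula; a minimal sketch:

```python
# Little's formula: n = λ × W (average queue length equals arrival rate
# times average waiting time), valid in steady state for any scheduler.
def little_waiting_time(n, arrival_rate):
    return n / arrival_rate

# the example above: 7 processes/second arriving, 14 in the queue
W = little_waiting_time(n=14, arrival_rate=7)
print(W)  # 2.0 seconds of average waiting time per process
```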
Queueing analysis can be useful in comparing scheduling algorithms, but
it also has limitations. At the moment, the classes of algorithms and
distributions that can be handled are fairly limited.
The mathematics of complicated algorithms and distributions can be
difficult to work with. Thus, arrival and service distributions are often
defined in mathematically tractable —but unrealistic—ways. It is also
generally necessary to make a number of independent assumptions,
which may not be accurate. As a result of these difficulties, queueing
models are often only approximations of real systems, and the accuracy
of the computed results may be questionable.
Simulations
To get a more accurate evaluation of scheduling algorithms, we can use
simulations. Running simulations involves programming a model of the
computer system. Software data structures represent the major
components of the system. The simulator has a variable representing a
clock; as this variable's value is increased, the simulator modifies the
system state to reflect the activities of the devices, the processes, and
the scheduler. As the simulation executes, statistics that indicate
algorithm performance are gathered and printed. The data to drive the
simulation can be generated in several ways. The most common method
uses a random-number generator, which is programmed to generate
processes, CPU burst times, arrivals, departures, and so on, according to
probability distributions.

The distributions can be defined mathematically (uniform, exponential,
Poisson) or empirically. If a distribution is to be defined empirically,
measurements of the actual system under study are taken. The results
measurements of the actual system under study are taken. The results
define the distribution of events in the real system; this distribution can
then be used to drive the simulation. A distribution-driven simulation
may be inaccurate, however, because of relationships between
successive events in the real system. The frequency distribution
indicates only how many instances of each event occur; it does not
indicate anything about the order of their occurrence.
To correct this problem, we can use trace tapes. We create a trace tape
by monitoring the real system and recording the sequence of actual
events (Figure 5.15). We then use this sequence to drive the simulation.
Trace tapes provide an excellent way to compare two algorithms on
exactly the same set of real inputs. This method can produce accurate
results for its inputs.
Simulations can be expensive, often requiring hours of computer time. A
more detailed simulation provides more accurate results, but it also
requires more computer time. In addition, trace tapes can require large
amounts of storage space. Finally, the design, coding, and debugging of
the simulator can be a major task.
Implementation

Even a simulation is of limited accuracy. The only
completely accurate way to evaluate a scheduling algorithm is to code it
up, put it in the operating system, and see how it works. This approach
puts the actual algorithm in the real system for evaluation under real
operating conditions. The major difficulty with this approach is the high
cost.

Process Synchronization
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process : Execution of one process does not affect the execution of other
processes.
 Cooperative Process : Execution of one process affects the execution of other processes.
The process synchronization problem arises with cooperative processes because resources
are shared among them.

Race Condition
When more than one process executes the same code or accesses the same memory or a
shared variable, there is a possibility that the output or the value of the shared variable will be
wrong; all the processes race to access the data, and this condition is known as a race
condition. When several processes access and manipulate the same data concurrently, the
outcome depends on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section. It happens when the
result of multiple threads executing in the critical section differs according to the order in which
the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
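A minimal sketch of preventing a race with a lock, as suggested above; the thread and iteration counts are arbitrary example values.

```python
import threading

# A shared counter updated by several threads. The read-modify-write
# in `counter += 1` is not atomic, so unsynchronized threads could
# interleave and lose updates; the lock makes the update a critical
# section executed by one thread at a time.
counter = 0
lock = threading.Lock()

def safe_increment(times):
    global counter
    for _ in range(times):
        with lock:          # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 40000 with the lock held around the update
```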

Critical Section Problem


Critical section is a code segment that can be accessed by only one process at a time. Critical
section contains shared variables which need to be synchronized to maintain consistency of data
variables.

In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion : If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
 Progress : If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter the critical section next, and the selection
cannot be postponed indefinitely.
 Bounded Waiting : A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

Peterson’s Solution
Peterson’s Solution is a classical software based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:
 boolean flag[i] : Initialized to FALSE; initially no one is interested in entering the critical section.
 int turn : The process whose turn it is to enter the critical section.

Peterson’s Solution preserves all three conditions :


 Mutual Exclusion is assured as only one process can access the critical section at any time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution


 It involves busy waiting.
 It is limited to 2 processes.
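Peterson's solution can be sketched with two Python threads; this is an illustration only. CPython's global interpreter lock happens to provide the sequential consistency the algorithm assumes, but on real hardware the same code would additionally need memory barriers. The iteration count is arbitrary.

```python
import threading

# Peterson's solution for two processes (i = 0 and 1), using the two
# shared variables described above: flag[] and turn.
flag = [False, False]
turn = 0
count = 0   # shared counter updated inside the critical section

def process(i):
    global turn, count
    j = 1 - i
    for _ in range(1000):
        flag[i] = True              # declare interest in entering
        turn = j                    # politely give the other the turn
        while flag[j] and turn == j:
            pass                    # busy wait (the noted disadvantage)
        count += 1                  # critical section
        flag[i] = False             # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)  # 2000: no increment is lost, so mutual exclusion held
```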

Synchronization Hardware
Process synchronization refers to coordinating the execution of
processes so that no two processes access the same shared data and
resources at the same time. A problem occurs when two processes
running simultaneously share the same data or variable.
There are three hardware approaches to solve process synchronization
problems:
1. Swap
2. Test() and Set()
3. Unlock and lock
Test and Set
In Test and Set, the shared variable is a lock that is initialized to false.
The atomic Test-and-Set instruction returns the current value of the lock
and sets it to true. Mutual exclusion is ensured: while the lock is true,
other processes spin in the loop and cannot enter.
However, after one process completes, any waiting process can go in, as
no queue is maintained.
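A sketch of a Test-and-Set spinlock follows; since Python has no hardware Test-and-Set instruction, its atomicity is simulated here with an internal threading.Lock, and the thread and iteration counts are arbitrary.

```python
import threading

# Test-and-Set sketch: the "instruction" returns the old value of the
# lock flag and sets it to true, atomically. A process spins until the
# old value it sees is false.
class TestAndSetLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def test_and_set(self):
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self.test_and_set():       # spin while old value was True
            pass

    def release(self):
        self._flag = False               # no queue: any spinner may win

lock = TestAndSetLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1                     # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 3000: mutual exclusion held (bounded waiting is not)
```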
Swap
In this algorithm, instead of directly setting the lock to true, the key is
first set to true and then swapped with the lock.
Similar to Test and Set, when there are no processes in the critical
section, the lock turns to false and allows other processes to enter.
Hence, mutual exclusion and progress are ensured, but bounded waiting
is not ensured, for the very same reason.
Unlock and Lock
In addition to Test and Set, this algorithm uses waiting[i] to check whether
any processes are waiting; waiting processes are kept in a queue ordered
with respect to the critical section.
Unlike the previous algorithms, on exit a process does not simply set the
lock to false: it first checks the queue for waiting processes and, if one is
waiting, hands the critical section over to it. Only if no processes are
waiting is the lock set to false, so that any process can enter.

Semaphores
Semaphores are integer variables that are used to solve the critical section problem by means of two
atomic operations, wait and signal, which are used for process synchronization.
The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero, the
process performing the wait is held (it blocks or busy-waits) until S becomes positive.

 Signal
The signal operation increments the value of its argument S, allowing a waiting process to proceed.
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about
these are given as follows −

 Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. These semaphores
are used to coordinate resource access, where the semaphore count is the number of
available resources. If resources are added, the semaphore count is incremented, and
if resources are removed, the count is decremented.

 Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The
wait operation succeeds only when the semaphore is 1, and the signal operation succeeds only
when the semaphore is 0. Binary semaphores are sometimes easier to implement than counting
semaphores.
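The two types can be illustrated with Python's threading.Semaphore (whose acquire/release correspond to the wait/signal operations above); the pool size of 3 is an arbitrary example value.

```python
import threading

# Counting vs binary semaphores via Python's threading module.
# A counting semaphore tracks a pool of identical resources; a binary
# semaphore (value restricted to 0/1) acts like a mutex lock.
pool = threading.Semaphore(3)    # counting: 3 resources available
mutex = threading.Semaphore(1)   # binary: at most one holder at a time

# acquire() is wait(); with blocking=False it returns False instead of
# waiting when the count is already 0.
acquired = [pool.acquire(blocking=False) for _ in range(4)]
print(acquired)  # [True, True, True, False]: the 4th finds the pool empty

for ok in acquired:
    if ok:
        pool.release()           # release() is signal(): return a resource
```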

Advantages of Semaphores
Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is not
wasted unnecessarily to check if a condition is fulfilled to allow a process to access the critical
section.
 Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated so the wait and signal operations must be implemented in the
correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout for
the system.
 Semaphores may lead to a priority inversion where low priority processes may access the critical
section first and high priority processes later.

Classical Problems of Synchronization


Semaphores can be used in other synchronization problems besides
mutual exclusion. Below are some of the classical problems depicting flaws
of process synchronization in systems where cooperating processes are present.

We will discuss the following three problems:

1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers Writers Problem

Bounded Buffer Problem

Because the buffer pool has a maximum size, this problem is often called
the bounded buffer problem.

 This problem is generalised as the Producer-Consumer problem, where a finite buffer pool
is used to exchange messages between producer and consumer processes.

 The solution to this problem is to create two counting semaphores, "full" and "empty", to
keep track of the current number of full and empty buffers respectively.

 Producers produce items and consumers consume items, but each can use only one of the
buffer slots at a time.

 The main complexity of this problem is that we must maintain the counts of both the empty
and full buffers that are available.


Dining Philosophers Problem

 The dining philosophers problem involves the allocation of limited
resources to a group of processes in a deadlock-free and starvation-free manner.

 There are five philosophers sitting around a table, with five chopsticks/forks kept
beside them and a bowl of rice in the centre. When a philosopher wants to eat, he
uses two chopsticks: one from his left and one from his right. When a philosopher
wants to think, he puts both chopsticks down at their original place.

The Readers Writers Problem

 In this problem there are some processes (called readers) that only
read the shared data and never change it, and there are other
processes (called writers) that may change the data in addition to,
or instead of, reading it.

 There are various types of readers-writers problems, most centred on the
relative priorities of readers and writers.

 The main complexity of this problem arises from allowing more
than one reader to access the data at the same time.

Bounded Buffer Problem


Bounded buffer problem, which is also called the producer-consumer
problem, is one of the classic problems of synchronization. Let's start by
understanding the problem here, before moving on to the solution and
program code.

What is the Problem Statement?

There is a buffer of n slots and each slot is capable of storing one unit of
data. There are two processes running,
namely, producer and consumer, which are operating on the buffer.
A producer tries to insert data into an empty slot of the buffer. A
consumer tries to remove data from a filled slot in the buffer. As you
might have guessed by now, those two processes won't produce the
expected output if they are being executed concurrently.

There needs to be a way to make the producer and consumer work in an
independent manner.

Here's a Solution

One solution to this problem is to use semaphores. The semaphores which
will be used here are:

 mutex, a binary semaphore which is used to acquire and release the lock.
 empty, a counting semaphore whose initial value is the number of
slots in the buffer, since initially all slots are empty.
 full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty represents the number of
empty slots in the buffer and full represents the number of occupied slots
in the buffer.
The Producer Operation

The pseudocode of the producer function looks like this:

do {
    // wait until empty > 0 and then decrement 'empty'
    wait(empty);
    // acquire lock
    wait(mutex);
    /* perform the insert operation in a slot */
    // release lock
    signal(mutex);
    // increment 'full'
    signal(full);
} while(TRUE);

 Looking at the above code for a producer, we can see that a
producer first waits until there is at least one empty slot.
 Then it decrements the empty semaphore because there will now
be one less empty slot, since the producer is going to insert data in
one of those slots.
 Then it acquires a lock on the buffer, so that the consumer cannot
access the buffer until the producer completes its operation.
 After performing the insert operation, the lock is released and the
value of full is incremented because the producer has just filled a
slot in the buffer.

The Consumer Operation

The pseudocode for the consumer function looks like this:

do {
    // wait until full > 0 and then decrement 'full'
    wait(full);
    // acquire the lock
    wait(mutex);
    /* perform the remove operation in a slot */
    // release the lock
    signal(mutex);
    // increment 'empty'
    signal(empty);
} while(TRUE);

 The consumer waits until there is at least one full slot in the buffer.
 Then it decrements the full semaphore because the number of
occupied slots will be decreased by one, after the consumer
completes its operation.
 After that, the consumer acquires lock on the buffer.
 Following that, the consumer completes the removal operation so
that the data from one of the full slots is removed.
 Then, the consumer releases the lock.
 Finally, the empty semaphore is incremented by 1, because the
consumer has just removed data from an occupied slot, thus making
it empty.
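The producer and consumer pseudocode above can be rendered directly with Python semaphores; the buffer size of 5 and the 20 items are arbitrary example values, and the names mirror the pseudocode.

```python
import threading
from collections import deque

# The producer/consumer pseudocode above: 'empty' counts free slots,
# 'full' counts occupied slots, and 'mutex' protects the buffer itself.
N = 5
buffer = deque()
mutex = threading.Semaphore(1)
empty = threading.Semaphore(N)   # initially all N slots are empty
full = threading.Semaphore(0)    # initially no slot is full

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # insert into a slot
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

consumed = []

def consumer(n):
    for _ in range(n):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())   # remove from a slot
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # all 20 items, in production order, none lost
```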

Dining Philosophers Problem


The dining philosophers problem is another classic synchronization
problem which is used to evaluate situations where there is a need of
allocating multiple resources to multiple processes.

What is the Problem Statement?

Consider there are five philosophers sitting around a circular dining table.
The dining table has five chopsticks and a bowl of rice in the middle.
At any instant, a philosopher is either eating or thinking.
When a philosopher wants to eat, he uses two chopsticks:
one from his left and one from his right. When a
philosopher wants to think, he puts both chopsticks down at
their original place.

Here's the Solution

From the problem statement, it is clear that a philosopher
can think for an indefinite amount of time. But when a
philosopher starts eating, he has to stop at some point of
time. The philosopher is in an endless cycle of thinking and
eating.

The solution uses an array of five semaphores, stick[5], one
for each of the five chopsticks.
The code for each philosopher looks like

while(TRUE) {
    wait(stick[i]);
    /*
        mod is used because if i=4, the next
        chopstick is 0 (the dining table is circular)
    */
    wait(stick[(i+1) % 5]);
    /* eat */
    signal(stick[i]);
    signal(stick[(i+1) % 5]);
    /* think */
}

When a philosopher wants to eat the rice, he waits for the
chopstick on his left and picks it up. Then he waits for the
right chopstick to be available and picks it up too. After
eating, he puts both chopsticks down.
But if all five philosophers are hungry simultaneously, and
each of them picks up one chopstick, then a deadlock occurs
because each will be waiting for another chopstick forever.
The possible solutions for this are:

 A philosopher must be allowed to pick up the chopsticks
only if both the left and right chopsticks are available.
 Allow only four philosophers to sit at the table. That way,
if all four philosophers pick up four chopsticks, there
will be one chopstick left on the table. So, one
philosopher can start eating and eventually, two
chopsticks will become available. In this way, deadlocks can
be avoided.
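The second fix above can be sketched with a counting semaphore that admits at most four philosophers to the table at a time; the number of eating rounds is an arbitrary example value.

```python
import threading

# Dining philosophers with the "only four at the table" fix: a counting
# semaphore 'seats' admits at most N-1 philosophers at once, so at
# least one of them can always obtain both chopsticks (no deadlock).
N = 5
stick = [threading.Semaphore(1) for _ in range(N)]
seats = threading.Semaphore(N - 1)   # at most four seated at a time
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        seats.acquire()              # take a seat
        stick[i].acquire()           # left chopstick
        stick[(i + 1) % N].acquire() # right chopstick (circular table)
        meals[i] += 1                # eat
        stick[(i + 1) % N].release()
        stick[i].release()
        seats.release()              # leave the table and think

threads = [threading.Thread(target=philosopher, args=(i, 100))
           for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher ate 100 times; no deadlock occurred
```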

What is Readers Writer Problem?


Readers writer problem is another example of a classic
synchronization problem. There are many variants of this
problem, one of which is examined below.

The Problem Statement

There is a shared resource which can be accessed by
multiple processes. There are two types of processes in this
context: readers and writers. Any number
of readers can read from the shared resource
simultaneously, but only one writer can write to the shared
resource at a time. When a writer is writing data to the resource, no
other process can access the resource. A writer cannot write
to the resource while a nonzero number of readers are
accessing the resource.
The Solution

From the above problem statement, it is evident that readers
have higher priority than writers. If a writer wants to write to
the resource, it must wait until there are no readers currently
accessing that resource.

Here, we use one mutex m and a semaphore w. An integer
variable read_count is used to maintain the number of readers
currently accessing the resource. The variable read_count is
initialized to 0, and m and w are both initialized to 1.

Instead of having the process acquire a lock on the shared
resource itself, we use the mutex m to make the process acquire
and release a lock whenever it updates the read_count variable.

The code for the writer process looks like this:

while(TRUE) {
    wait(w);
    /* perform the write operation */
    signal(w);
}

And, the code for the reader process looks like this:
while(TRUE) {
    // acquire lock
    wait(m);
    read_count++;
    if(read_count == 1)
        wait(w);
    // release lock
    signal(m);

    /* perform the reading operation */

    // acquire lock
    wait(m);
    read_count--;
    if(read_count == 0)
        signal(w);
    // release lock
    signal(m);
}

Here is the code explained

 As seen above in the code for the writer, the writer just
waits on the w semaphore until it gets a chance to write
to the resource.
 After performing the write operation, it signals w so
that the next writer can access the resource.
 On the other hand, in the code for the reader, the lock is
acquired whenever read_count is updated by a
process.
 When a reader wants to access the resource, it first
increments the read_count value, then accesses the
resource, and then decrements the read_count value.
 The semaphore w is used by the first reader which
enters the critical section and the last reader which exits
the critical section.
 The reason for this is that when the first reader enters the
critical section, the writer is blocked from the resource.
Only new readers can access the resource now.
 Similarly, when the last reader exits the critical section,
it signals the writer using the w semaphore because
there are zero readers now and a writer can have the
chance to access the resource.
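The reader and writer code above translates directly to Python, with threading.Semaphore standing in for the wait/signal operations; the written values and the number of reader threads are arbitrary example values.

```python
import threading

# Reader-priority readers-writers, mirroring the pseudocode above:
# read_count tracks active readers; the first reader in locks out
# writers (wait(w)) and the last reader out readmits them (signal(w)).
m = threading.Semaphore(1)       # protects read_count
w = threading.Semaphore(1)       # exclusive access for writers
read_count = 0
data = []                        # the shared resource

def writer(value):
    w.acquire()                  # wait(w)
    data.append(value)           # perform the write operation
    w.release()                  # signal(w)

def reader():
    global read_count
    m.acquire()
    read_count += 1
    if read_count == 1:          # first reader blocks writers
        w.acquire()
    m.release()
    snapshot = list(data)        # perform the reading operation
    m.acquire()
    read_count -= 1
    if read_count == 0:          # last reader readmits writers
        w.release()
    m.release()
    return snapshot

writer(1)
threads = [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
writer(2)
print(data)  # [1, 2]: the second write waited for all readers to leave
```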

Critical Section Problem


The critical section is a code segment where the shared variables can be accessed. An
atomic action is required in a critical section i.e. only one process can execute in its
critical section at a time. All the other processes have to wait to execute in their critical
sections.
The typical structure of a process with a critical section consists of an entry
section, the critical section itself, an exit section, and a remainder section.

The entry section handles the entry into the critical section. It
acquires the resources needed for execution by the process. The exit section handles
the exit from the critical section. It releases the resources and also informs the other
processes that the critical section is free.

Solution to the Critical Section Problem


The critical section problem needs a solution to synchronize the different processes.
The solution to the critical section problem must satisfy the following conditions −

 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section
at any time. If any other processes require the critical section, they must wait until
it is free.

 Progress
Progress means that if a process is not using the critical section, then it should
not stop any other process from accessing it. In other words, any process can
enter a critical section if it is free.

 Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It
should not wait endlessly to access the critical section.

Monitors in Process Synchronization


The monitor is one of the ways to achieve Process synchronization. The
monitor is supported by programming languages to achieve mutual exclusion
between processes. For example Java Synchronized methods. Java provides
wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined together
in a special kind of module or a package.
2. The processes running outside the monitor can’t access the internal variable
of the monitor but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
Syntax (the usual textbook form of a monitor declaration):
monitor monitor_name {
    // shared variable declarations
    condition x, y;              // condition variables
    procedure P1(...) { ... }
    procedure P2(...) { ... }
    initialization_code(...) { ... }
}

Condition Variables:
Two different operations are performed on the condition variables of the
monitor.
Wait.
signal.
Let's say we have two condition variables:
condition x, y; // Declaring variables

Wait operation
x.wait() : A process performing a wait operation on any condition variable is
suspended. The suspended processes are placed in the block queue of that
condition variable.
Note: Each condition variable has its own block queue.
Signal operation
x.signal() : When a process performs a signal operation on a condition variable,
one of the blocked processes is given a chance to resume.
If (x block queue empty)
// Ignore signal
else
// Resume a process from block queue.
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less
error-prone than using techniques such as semaphores.
Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language. The
compiler must generate code for them. This gives the compiler the additional
burden of having to know what operating system facilities are available to
control access to critical sections in concurrent processes. Some languages
that do support monitors are Java, C#, Visual Basic, Ada, and Concurrent Euclid.
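A monitor-style class can be sketched with Python's threading.Condition, which plays the role of a monitor condition variable (its wait/notify correspond to the x.wait()/x.signal() operations above); the BoundedCounter class and its limit are hypothetical examples.

```python
import threading

# A monitor-style bounded counter: the Condition's internal lock gives
# the "only one process at a time executes inside the monitor"
# property, and wait()/notify() mirror x.wait()/x.signal().
class BoundedCounter:
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.not_full = threading.Condition()   # condition variable

    def increment(self):
        with self.not_full:                     # enter the monitor
            while self.value >= self.limit:
                self.not_full.wait()            # x.wait(): suspend
            self.value += 1

    def decrement(self):
        with self.not_full:                     # enter the monitor
            self.value -= 1
            self.not_full.notify()              # x.signal(): wake one

c = BoundedCounter(limit=2)
c.increment()
c.increment()      # counter now at its limit of 2
c.decrement()      # makes room and signals any waiter
c.increment()
print(c.value)     # 2
```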
