
UNIT-2

Process Concept: Process scheduling, Operations on processes, Inter-process communication, Communication in client-server systems.

Multithreaded Programming: Multithreading models, Thread libraries, Threading issues.
Process Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple processor scheduling, Thread scheduling.
Inter-process Communication: Race conditions, Critical regions, Mutual exclusion with busy waiting, Sleep and wakeup, Semaphores, Mutexes, Monitors, Message passing, Barriers, Classical IPC problems - Dining philosophers problem, Readers and writers problem.

Process
• A process is an instance of a program in execution.
• Batch systems work in terms of "jobs".
• Many modern process concepts are still expressed in terms of jobs (e.g. job scheduling), and the two terms are often used interchangeably.

Relation between process and program

▸ When the code is not in execution, it is called a program.
▸ When the program is in execution, it is called a process.

Difference between a process and a program
• A process is more than the program code: a process is an active entity, as opposed to a program, which is a passive entity.
• A program is an algorithm expressed in some suitable notation, e.g. a programming language.
• Note: a process is the unit of work in a system.

The Process
▸ Process memory is divided into four sections:
▹ The text section comprises the compiled program code, read in from non-volatile storage when the program is launched.
▹ The data section stores global and static variables, allocated and initialized prior to executing main.
▹ The heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
▹ The stack is used for local variables. Space on the stack is reserved for local variables when they are declared, and the space is freed up when the variables go out of scope.
Note that the stack and the heap start at opposite ends of the process's free space and grow towards each other. If they should ever meet, then either a stack overflow error will occur, or else a call to new or malloc will fail due to insufficient memory being available.
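A small C sketch (not part of the original notes) makes the four sections concrete; printing the addresses on a typical system shows the code, globals, heap, and stack living in different regions:

#include <stdio.h>
#include <stdlib.h>

int initialized = 42;          /* data section: global, initialized before main runs */

void show_layout(void) {
    int local = 7;                           /* stack: freed when this function returns */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: lives until free() is called */
    *dynamic = 99;

    printf("text  (code)  : %p\n", (void *)show_layout);
    printf("data  (global): %p\n", (void *)&initialized);
    printf("heap  (malloc): %p\n", (void *)dynamic);
    printf("stack (local) : %p\n", (void *)&local);

    free(dynamic);
}

int main(void) {
    show_layout();
    return 0;
}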



Process State
Processes may be in one of 5 states:
• New - The process is in the stage of being created.
• Ready - The process has all the resources available that it needs to run, but the CPU is not currently working on this process's instructions.
• Running - The CPU is working on this process's instructions.
• Waiting - The process cannot run at the moment, because it is waiting for some resource to become available or for some event to occur. For example, the process may be waiting for keyboard input, a disk access request, inter-process messages, a timer to go off, or a child process to finish.
• Terminated - The process has completed.

Process Control Block

▸ A process in an operating system is represented by a data structure called the Process Control Block (PCB), or process descriptor.
▸ For each process there is a Process Control Block, PCB, which stores the following (types of) process-specific information. (Specific details may vary from system to system.)

Process Control Block (PCB)
• Process State - Running, waiting, etc., as discussed above.
• Process ID, and parent process ID.
• CPU registers and Program Counter - These need to be saved and restored when swapping processes in and out of the CPU.
• CPU-Scheduling information - Such as priority information and pointers to scheduling queues.
• Memory-Management information - E.g. page tables or segment tables.
• Accounting information - User and kernel CPU time consumed, account numbers, limits, etc.
• I/O Status information - Devices allocated, open file tables, etc.



Thread: A thread is the unit of execution within a process. A process can have anywhere from just one thread to many threads.

Process Scheduling:
The act of determining which process is in the ready state, and should be moved to
the running state is known as Process Scheduling.
Process Scheduling Objectives
• The two main objectives of the process scheduling system are
• To keep the CPU busy at all times
• To deliver "acceptable" response times for all programs, particularly for interactive ones.
• The process scheduler must meet these objectives by implementing suitable policies for swapping
processes in and out of the CPU.
Scheduling Queues
• All processes, upon entering the system, are stored in the Job Queue.
• Processes in the Ready state are placed in the Ready Queue.
• Processes waiting for a device to become available are placed in Device Queues. There is a unique device queue for each I/O device.
• A new process is initially put in the Ready Queue. It waits in the ready queue until it is selected for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of the following events can occur:
• The process could issue an I/O request, and then be placed in an I/O queue.
• The process could create a new subprocess and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

There are three types of schedulers available:

▸ Long Term Scheduler
▸ Short Term Scheduler
▸ Medium Term Scheduler

Long Term or Job Scheduler:

• It brings new processes to the 'Ready' state.
• It controls the Degree of Multiprogramming, i.e., the number of processes present in the ready state at any point of time.
• It is important that the long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes.
• I/O-bound processes spend most of their time on input and output operations, while CPU-bound processes spend most of their time on the CPU.
• The job scheduler increases efficiency by maintaining a balance between the two.



Short Term or CPU Scheduler:
• It is responsible for selecting one process from the ready state and scheduling it on the running state.
• Note: the short-term scheduler only selects the process to schedule; it does not load the process for running. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring there is no starvation owing to high-burst-time processes.

Queueing-diagram representation of process scheduling

The dispatcher is responsible for loading the process selected by the short-term scheduler on the CPU (Ready to Running state). Context switching is done by the dispatcher only.
• A dispatcher does the following:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.

Medium-Term Scheduler:
• It is responsible for suspending and resuming processes.
• It mainly does swapping (moving processes from main memory to disk and vice versa).
• Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.
• It is helpful in maintaining a balance between the I/O-bound and the CPU-bound processes. It reduces the degree of multiprogramming.
• When system loads get high, this scheduler will swap one or more processes out of the ready queue for a few seconds, in order to allow smaller, faster jobs to finish up quickly and clear the system.

(Figure: addition of medium-term scheduling to the queueing diagram)

Context Switch
▸ Whenever an interrupt arrives, the CPU must do a state-save of the currently running process, then switch into kernel mode to handle the interrupt, and then do a state-restore of the interrupted process.
▸ Similarly, a context switch occurs when the time slice for one process has expired and a new process is to be loaded from the ready queue. This will be instigated by a timer interrupt, which will then cause the current process's state to be saved and the new process's state to be restored.
▸ Saving and restoring states involves saving and restoring all of the registers and program counter(s), as well as the process control blocks described above. (Figure: CPU switch from process to process)
▸ Context switching happens VERY frequently, and the overhead of doing the switching is just lost CPU time, so context switches (state saves and restores) need to be as fast as possible. Some hardware has special provisions for speeding this up, such as a single machine instruction for saving or restoring all registers at once.

Operations on Processes
▸ Process Creation
▸ Process Termination

Process Creation
▸ A process may create several new processes, via a create-process system call, during the course of execution.
▸ The creating process is called the parent process, and the new processes are called the children of that process.
▸ Each of these new processes may in turn create other processes, forming a tree of processes.
▸ How resource sharing is done between parent and children:
▸ The parent may share all resources with its children.
▸ The parent may share some resources with its children.
▸ The parent may share no resources with its children.
▸ When a process creates a new process, two possibilities exist in terms of execution:
▸ 1. The parent continues to execute concurrently with its children.
▸ 2. The parent waits until some or all of its children terminate.
▸ There are also two possibilities in terms of the address space of the new process:
▸ 1. The child process is a duplicate of the parent process (it has the same program and data as the parent).
▸ 2. The child process has a new program loaded into it.
▸ Unix examples (see the sketch below):
▸ The fork() system call creates a new process.
▸ The exec() system call replaces the newly created process's memory image with a new program.
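A minimal C sketch of these two calls (not from the original notes; running ls -l is just an arbitrary program to exec):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* create a child that duplicates this process */

    if (pid < 0) {                    /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child: replace its image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec fails */
        exit(1);
    } else {                          /* parent: wait for the child to terminate */
        wait(NULL);
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}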

Process Termination
1. A process terminates when it finishes executing its final statement and asks the operating system to delete it via the exit() system call.
2. At that point, the process may return a status value (typically an integer) to its parent process (via the wait() system call).
3. All the resources of the process - including physical and virtual memory, open files, and I/O buffers - are deallocated by the operating system.
Termination can occur in other circumstances as well:
1. A process can cause the termination of another process via an appropriate system call.
2. Usually such a system call can be invoked only by the parent of the process that is to be terminated.
3. Otherwise users could arbitrarily kill each other's jobs.

Interprocess Communication
1. Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
2. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent.
3. A process is cooperating if it can affect or be affected by the other processes executing in the system.

There are several reasons for providing an environment that allows process cooperation:
1. Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
2. Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.
3. Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
4. Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.

Interprocess Communication (IPC) Mechanisms

1. Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information.
2. There are two fundamental models of interprocess communication:
1. Shared memory
2. Message passing
3. In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
4. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.



Shared-Memory Systems
• Interprocess communication using shared memory requires communicating processes to establish a region of shared memory.
• Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment.
• Normally, the operating system tries to prevent one process from accessing another process's memory.

Producer-Consumer Example Using Shared Memory
• A producer process produces information that is consumed by a consumer process. For example,
o A compiler may produce assembly code that is consumed by an assembler.
o The assembler, in turn, may produce object modules that are consumed by the loader.

Two types of buffers can be used (a sketch of the bounded buffer follows this list):

• The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items.
• The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
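The bounded buffer can be sketched in C as the circular array below; buffer, in, and out would live in the shared region, with the producer writing only in and the consumer writing only out. (BUFFER_SIZE and struct item are illustrative names, not from the text.)

#define BUFFER_SIZE 10

typedef struct { int data; } item;

item buffer[BUFFER_SIZE];
int in = 0;    /* next free slot (written by producer only) */
int out = 0;   /* next full slot (written by consumer only) */

/* Producer: busy-waits while the buffer is full. */
void produce(item next) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                                /* buffer full: wait */
    buffer[in] = next;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: busy-waits while the buffer is empty. */
item consume(void) {
    while (in == out)
        ;                                /* buffer empty: wait */
    item next = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next;
}

One detail of this scheme: it can hold at most BUFFER_SIZE - 1 items, because in == out is reserved to mean "empty".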

Message-Passing Systems
• Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space.
• It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network.
• A message-passing facility provides at least two operations: send(message) and receive(message).
• Messages sent by a process can be either fixed or variable in size.
▸ If only fixed-sized messages can be sent, the system-level implementation is straightforward (but it makes the task of programming more difficult).
▸ Variable-sized messages require a more complex system-level implementation (but the programming task becomes simpler).
• If processes P and Q want to communicate, they must send messages to and receive messages from each other.
• A communication link must exist between them. This link can be implemented in a variety of ways. Here are several methods for logically implementing a link and the send()/receive() operations.



Issues related to each of these:
Naming
• Direct communication
• Indirect communication
Synchronization
• Blocking send / blocking receive
• Non-blocking send / non-blocking receive
Buffering
• The buffer may have zero capacity
• The buffer may have finite capacity
• The buffer may have infinite capacity

Direct Communication
In direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication.
A communication link in this scheme has the following properties:
1. A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.
2. A link is associated with exactly two processes.
3. Between each pair of processes, there exists exactly one link.

Indirect Communication
Messages are sent to and received from mailboxes, or ports:
1. send(A, message) - Send a message to mailbox A.
2. receive(A, message) - Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
1. A link is established between a pair of processes only if both members of the pair have a shared mailbox.
2. A link may be associated with more than two processes.
3. Between each pair of communicating processes, a number of different links may exist, with each link corresponding to one mailbox.

Synchronization
Message passing may be either blocking or nonblocking - also known as synchronous and asynchronous:
1. Blocking send. The sending process is blocked until the message is received by the receiving process or by the mailbox.
2. Blocking receive. The receiver blocks until a message is available.
3. Nonblocking send. The sending process sends the message and resumes operation.
4. Nonblocking receive. The receiver retrieves either a valid message or a null.
Buffering
Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Basically, such queues can be implemented in three ways:
1. Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
2. Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting. The link's capacity is finite, however; if the link is full, the sender must block until space is available in the queue.
3. Unbounded capacity. The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.
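As one concrete realization (not from the original notes), POSIX message queues provide exactly these bounded-capacity send/receive semantics. The queue name "/demo_mq" and the message text are made up for illustration; on Linux the program links with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Bounded capacity: at most 10 messages of up to 64 bytes each. */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);        /* blocks only if the queue is full */

    char buf[64];
    mq_receive(mq, buf, sizeof buf, NULL);       /* blocks until a message is available */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");                       /* remove the queue name */
    return 0;
}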

Multithreaded Programming
Definition of a Thread
1. A thread is the unit of execution within a process. (Or)
2. A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers (and a thread ID).
3. Threads are also known as lightweight processes.
4. A traditional (or heavyweight) process has a single thread of control.
5. If a process has multiple threads of control, it can perform more than one task at a time.
6. Most software applications that run on modern computers are multithreaded.
7. An application typically is implemented as a separate process with several threads of control.
Example: word processor (MS Word)
The MS Word process could involve many threads:
1. Interaction with the keyboard
2. Display of characters on the display page
3. Regularly saving the file to disk
4. Checking spelling and grammar, etc.
All these threads would share the same document.

Another example is a web server (multithreaded server architecture): multiple threads allow for multiple requests to be satisfied simultaneously, without having to service requests sequentially or to fork off separate processes for every incoming request.

Benefits of Multithreading:
There are four major categories of benefits to multithreading:
1. Responsiveness - One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.
2. Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
3. Economy - Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
4. Scalability (utilization of multiprocessor architectures) - A single-threaded process can only run on one CPU, no matter how many may be available, whereas the execution of a multithreaded application may be split amongst the available processors.

1) Process-based Multitasking (Multiprocessing)

1. Each process has its own address space in memory, i.e. each process is allocated a separate memory area.
2. A process is heavyweight.
3. The cost of communication between processes is high.
4. Switching from one process to another requires some time for saving and loading registers, memory maps, updating lists, etc.
2) Thread-based Multitasking (Multithreading)
1. Threads share the same address space.
2. A thread is lightweight.
3. The cost of communication between threads is low.

Types of Threads
1. There are two types of threads:
1. User threads
2. Kernel threads
2. User threads are implemented above the kernel, without kernel support. These are the threads that application programmers use in their programs.
1. User-level threads are implemented by a thread library, and this is easy.
2. Examples of user-level threads: Java threads, POSIX threads.
3. Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.
1. Kernel-level threads are implemented by the operating system, and this is complex.
2. Examples of kernel-level threads: Windows, Solaris.

Multithreading Models
User threads must be mapped to kernel threads by one of the following strategies:
1. Many-to-One Model
2. One-to-One Model
3. Many-to-Many Model

Many-to-One Model
• In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
• Thread management is handled by the thread library in user space, which is very efficient.
• However, the entire process will block if a thread makes a blocking system call.
• Because a single kernel thread can operate only on a single CPU, the many-to-one model does not allow individual processes to be split across multiple CPUs.
• The disadvantage appears on multiprocessor systems: because only one kernel thread is present, parallelism cannot be achieved.

One-to-One Model
• The one-to-one model creates a separate kernel thread to handle each user thread.
• The one-to-one model overcomes the problems listed above involving blocking system calls and the splitting of processes across multiple CPUs.
• However, the overhead of managing the one-to-one model is more significant, slowing down the system.
• Most implementations of this model place a limit on how many threads can be created.

Many-to-Many Model
• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads (for example, 4 user-level threads multiplexed onto 3 kernel-level threads).
• In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
• This model provides the best accuracy on concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.
• One popular variation of the many-to-many model is the two-tier model, which allows either many-to-many or one-to-one mapping.

Thread Libraries
• Thread libraries provide programmers with an API for creating and managing threads.
• Thread libraries may be implemented either in user space or in kernel space. The former involves API functions implemented solely within user space, with no kernel support. The latter involves system calls, and requires a kernel with thread library support.
• There are three main thread libraries in use today:
1. POSIX Pthreads - may be provided as either a user or kernel library, as an extension to the POSIX standard.
2. Win32 threads - provided as a kernel-level library on Windows systems.
3. Java threads - Since Java generally runs on a Java Virtual Machine, the implementation of threads is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or Win32 threads depending on the system.
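A minimal Pthreads sketch (not from the original notes; compile with cc -pthread): main creates one worker thread, waits for it with pthread_join(), and both see the same global sum because threads share one address space.

#include <pthread.h>
#include <stdio.h>

int sum = 0;                       /* shared by all threads of this process */

void *runner(void *param) {
    int upper = *(int *)param;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(NULL);
}

int main(void) {
    pthread_t tid;
    int upper = 10;

    pthread_create(&tid, NULL, runner, &upper);  /* spawn the worker thread */
    pthread_join(tid, NULL);                     /* wait for it to finish   */
    printf("sum = %d\n", sum);
    return 0;
}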

Threading Issues
• There are a variety of issues to consider with multithreaded programming
1. Semantics of fork() and exec() system calls

2. Thread cancellation
3. Signal handling
4. Thread pooling
5. Thread-specific data

Semantics of fork() and exec()

• fork() and exec() are system calls. The fork() call creates a duplicate of the process that invokes it. The new duplicate process is called the child process, and the process invoking fork() is called the parent process.
• Let us now discuss the issue with the fork() system call.
Consider that a thread of a multithreaded program has invoked fork().


ISSUE:
1. The issue is whether the new duplicate process created by fork() will duplicate all the threads of the parent process, or
2. whether the duplicate process will be single-threaded.
Solution:
Some UNIX systems have two versions of fork(): one that duplicates all the threads of the parent process in the child process, and one that duplicates only the thread that invoked fork().

Which version of fork() should be used depends entirely upon the application.
▸ The exec() system call, when invoked, replaces the program, along with all its threads, with the program specified in the parameter to exec(). Typically the exec() system call is lined up after the fork() system call.
▸ The issue here is that if exec() is lined up just after fork(), then duplicating all the threads of the parent process in the child is useless,
▸ since exec() will replace the entire process with the program provided as its parameter.

Thread Cancellation
Thread cancellation involves terminating a thread before it has completed.
A thread that is to be cancelled is often referred to as the target thread.
Cancellation of a target thread may occur in two different scenarios:
1. Asynchronous cancellation: one thread immediately terminates the target thread.
2. Deferred cancellation: the target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
The issues related to the target thread are listed below:
1. What if resources have been allotted to the target thread being cancelled?
2. What if the target thread is terminated while it is updating data that it shares with other threads?
Asynchronous cancellation, where a thread immediately cancels the target thread without checking whether it is holding any resources, is troublesome.
With deferred cancellation, however, when a thread indicates to the target thread that it should cancel, the target thread checks its cancellation flag to decide whether it should terminate. The points at which a thread can be cancelled safely are termed cancellation points by Pthreads. A sketch follows below.
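The Pthreads API makes this concrete. The sketch below (not from the original notes; compile with -pthread) shows deferred cancellation: the worker marks its own safe points with pthread_testcancel(), so a pending pthread_cancel() request is honoured only there.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg) {
    int old;
    /* Deferred is the default cancel type; set it explicitly for clarity. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old);
    for (;;) {
        /* ... a unit of work that must not be interrupted ... */
        pthread_testcancel();   /* cancellation point: a pending cancel takes effect here */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);                   /* let the worker run briefly */
    pthread_cancel(tid);        /* request cancellation of the target thread */
    pthread_join(tid, NULL);    /* returns once the target honours the request */
    puts("worker cancelled");
    return 0;
}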

Signal Handling
A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously.
Synchronous signals are delivered to the same process that performed the operation that caused the signal.
Examples of synchronous signals include illegal memory access and division by 0.
Asynchronous signals are generated by an event external to a running process; that process receives the signal asynchronously.
Examples of such signals include terminating a process with specific keystrokes (such as <control><C>) and having a timer expire.
Handling signals in single-threaded programs is straightforward: signals are always delivered to the process.
However, delivering signals is more complicated in multithreaded programs, where a process may have several threads. Where, then, should a signal be delivered?
In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
If the signal is synchronous, it is delivered to the specific thread that caused the signal.
If the signal is asynchronous, it is not clear to which thread of the multithreaded program it should be delivered.
If an asynchronous signal requests that the process terminate, the signal is delivered to all the threads of the process.
The issue of asynchronous signals is resolved to some extent in most multithreaded UNIX systems:
a thread is allowed to specify which signals it will accept and which it will block.
However, the Windows operating system does not support the concept of signals; instead it uses asynchronous procedure calls (APCs), which are similar to the asynchronous signals of UNIX systems.

Thread Pool
When a user requests a web page from a server, the server creates a separate thread to service the request. This approach has potential issues: if we place no bound on the number of active threads in the system and create a new thread for every new request, we will eventually exhaust system resources.
The idea is to create a finite number of threads when the process starts. This collection of threads is referred to as the thread pool.
A thread pool is a group of threads that have been pre-created and are available to do work as needed.
1. Threads may be created when the process starts.
2. A thread may be kept in a queue until it is needed.
3. After a thread finishes, it is placed back into the queue until it is needed again.
4. This avoids the extra time needed to spawn new threads when they are needed.
In applications where threads are repeatedly created and destroyed, thread pools may provide a performance benefit.
Example: a server that spawns a new thread each time a client connects to the system and discards that thread when the client disconnects.

Advantages of thread pools:

1. It is typically faster to service a request with an existing thread than to create a new thread (performance benefit).
2. Thread pools bound the number of threads in a process:
The only threads available are those in the thread pool.
If the thread pool is empty, then the process must wait for a thread to re-enter the pool before it can assign work to a thread.
Without a bound on the number of threads in a process, it is possible for a process to create so many threads that all of the system resources are exhausted.

Thread-Specific Data

Threads belonging to the same process share the data of that process. The issue here is: what if each particular thread of the process needs its own copy of some data?
The data associated with a specific thread is referred to as thread-specific data.
1. Consider a transaction-processing system, where we process each transaction in a different thread. To identify each transaction uniquely, we associate a unique identifier with it.
2. As we are servicing each transaction in a separate thread,
we can use thread-specific data to associate each thread with its specific transaction and unique id.
Thread libraries such as Win32, Pthreads, and Java provide support for thread-specific data (a Pthreads sketch follows).
These are the threading issues that occur in a multithreaded programming environment, and we have seen how they can be resolved.
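A Pthreads sketch of the transaction idea (not from the original notes; the key name txn_key and the ids 101/202 are hypothetical): both workers store and read their own value under the same key, without seeing each other's copy.

#include <pthread.h>
#include <stdio.h>

static pthread_key_t txn_key;    /* one key, one private value per thread */

void *handle_transaction(void *arg) {
    pthread_setspecific(txn_key, arg);               /* this thread's own copy */
    int *id = pthread_getspecific(txn_key);
    printf("thread servicing transaction %d\n", *id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 101, id2 = 202;    /* hypothetical transaction ids */

    pthread_key_create(&txn_key, NULL);              /* NULL: no destructor needed here */
    pthread_create(&t1, NULL, handle_transaction, &id1);
    pthread_create(&t2, NULL, handle_transaction, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(txn_key);
    return 0;
}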

Scheduling
1. Scheduling of this kind is a fundamental operating-system function.
2. Almost all computer resources are scheduled before use.
3. The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.
4. Scheduling of processes/work is done to finish the work on time.
CPU–I/O Burst Cycle
1. The success of CPU scheduling depends on an observed property of processes:
2. Process execution consists of a cycle of CPU execution and I/O wait.
3. Processes alternate between these two states (an alternating sequence of CPU and I/O bursts).
4. Process execution begins with a CPU burst.
5. That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on.
6. Eventually, the final CPU burst ends with a system request to terminate execution.

Process Scheduling
The act of determining which process is in the ready state, and should be moved to the running state, is known as process scheduling.

CPU Scheduler
1. The short-term scheduler selects from among the processes in the ready queue, and allocates the CPU to one of them.

The queue may be ordered in various ways.

CPU scheduling decisions may take place when a process:

1. Switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait() for the termination of a child process)
2. Switches from the running state to the ready state (for example, when an interrupt occurs)
3. Switches from the waiting state to the ready state (for example, at completion of I/O)
4. Terminates
For situations 1 and 4, there is no choice in terms of scheduling. A new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, for situations 2 and 3.
When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is non-preemptive or cooperative; otherwise, it is preemptive.

Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the ready state, or from the waiting state to the ready state.
The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining.
The process stays in the ready queue until it gets its next chance to execute.
Algorithms based on preemptive scheduling are:
1. Round Robin (RR)
2. Shortest Remaining Time First (SRTF)
3. Priority (preemptive version), etc.

Non-Preemptive Scheduling
1. Non-preemptive scheduling is used when a process terminates, or a process switches from the running to the waiting state.
2. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state.
3. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution.
4. Instead, it waits until the process completes its CPU burst time, and then it can allocate the CPU to another process.
Algorithms based on non-preemptive scheduling are:
1. Shortest Job First (SJF, basically non-preemptive)
2. Priority (non-preemptive version), etc.
3. First Come First Serve

Dispatcher
1. Another component involved in the CPU-scheduling function is the dispatcher.
2. The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler.
3. This function involves the following:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to
restart that program



4. The dispatcher should be as fast as possible, since it is invoked during every process switch. The
time it takes for the dispatcher to stop one process and start another running is known as the dispatch
latency.

Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are used
for comparison can make a substantial difference in which algorithm is judged to be best.
The criteria include the following:
CPU utilization:
Keep the CPU as busy as possible.
Throughput:
Number of processes that complete their execution per time unit.
Turnaround time:
Amount of time to execute a particular process.
Waiting time:
Amount of time a process has been waiting in the ready queue.
Response time:
Amount of time it takes from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).

Objectives of Process Scheduling Algorithms


1. Maximum CPU utilization
2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time
Below are the different times associated with a process:
1. Arrival Time: Time at which the process arrives in the ready queue.
2. Completion Time: Time at which process completes its execution.
3. Burst Time: Time required by a process for CPU execution.
4. Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
5. Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time

Scheduling Algorithms
The different scheduling algorithms are:
1. First Come First Serve (FCFS) Scheduling
2. Shortest Job First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
7. Shortest Remaining Time First (SRTF)
8. Longest Remaining Time First (LRTF)

Gantt Chart
A Gantt chart is a bar chart that provides a visual view of tasks scheduled over time.

First-Come, First-Served Scheduling

1. By far the simplest CPU-scheduling algorithm.
2. In this scheduling, the process that requests the CPU first is allocated the CPU first.
3. The implementation of the FCFS policy is managed with a FIFO queue.
4. When a process enters the ready queue, its PCB is linked onto the tail of the queue.
5. When the CPU is free, it is allocated to the process at the head of the queue.
6. The running process is then removed from the queue.

Example:
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds (the burst times can be recovered from the waiting times computed below):

Process: P1 (burst time 24), P2 (burst time 3), P3 (burst time 3)

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

1. Waiting time for P1 = 0 milliseconds


2. Waiting time for P2 = 24 milliseconds
3. Waiting time for P3 = 27 milliseconds
Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

1. Waiting time for P1 = 6 milliseconds


2. Waiting time for P2 = 0 milliseconds
3. Waiting time for P3 = 3 milliseconds
Thus, the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds.
This reduction is substantial. Thus, the average waiting time under an FCFS policy is generally not minimal
and may vary substantially if the processes’ CPU burst times vary greatly.
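To make the arithmetic concrete, here is a minimal C sketch (not from the original notes) that reproduces the FCFS waiting-time calculation above: with all arrivals at time 0, each process simply waits for the total burst time of the processes ahead of it.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};        /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];            /* everyone behind waits for this burst too */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}

Running it prints waits of 0, 24, and 27 ms and an average of 17.00 ms, matching the Gantt chart above.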

The FCFS Scheduling algorithm is Non-Preemptive


Example: Find the average waiting time using the FCFS scheduling algorithm.



Example 2: Calculate the average waiting time and average turnaround time, if the FCFS scheduling algorithm is followed.

Turn Around Time = Completion Time – Arrival Time

Waiting Time = Turn Around Time – Burst Time

Gantt Chart:
0 3 4 9 1 17 19
(The shaded box represents the idle time of the CPU.)

Average Turn Around Time = (5+11+3+13+8)/5
= 40/5
= 8 ms
Average Waiting Time = (0+7+0+11+4)/5
= 22/5
= 4.4 ms



Convoy Effect
1. The FCFS algorithm is non-preemptive in nature: once CPU time has been allocated to a process, other processes can get CPU time only after the current process has finished. This property of FCFS scheduling leads to a situation called the Convoy Effect.
2. The Convoy Effect is a phenomenon associated with the First Come First Serve (FCFS) algorithm, in which the whole operating system slows down due to a few slow processes.
3. Suppose there is one CPU-intensive (large burst time) process in the ready queue, and several other processes with relatively small burst times that are Input/Output (I/O) bound (they need I/O operations frequently).

The steps are as follows:

1. The I/O-bound processes are first allocated CPU time. As they are less CPU-intensive, they quickly get executed and go to the I/O queues.
2. Now, the CPU-intensive process is allocated CPU time. As its burst time is high, it takes time to complete.
3. While the CPU-intensive process is being executed, the I/O-bound processes complete their I/O operations and are moved back to the ready queue.
4. However, the I/O-bound processes are made to wait as the CPU-intensive process still hasn't finished. This leads to the I/O devices being idle.
5. When the CPU-intensive process gets over, it is sent to the I/O queue so that it can access an I/O device.
6. Meanwhile, the I/O-bound processes get their required CPU time and move back to the I/O queue.
7. However, they are made to wait because the CPU-intensive process is still accessing an I/O device. As a result, the CPU is sitting idle now.
Hence in the Convoy Effect, one slow process slows down the performance of the entire set of processes, and leads to wastage of CPU time and other devices.

Shortest-Job-First Scheduling
1. A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm.
2. This algorithm associates with each process the length of the process’s next CPU burst.
3. When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
4. This is the best approach to minimize waiting time.
5. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
6. A more appropriate term for this scheduling method would be the shortest-next- CPU-burst
algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than
its total length.

Example of SJF Scheduling (Non-Preemptive)

Consider the following set of processes, all arriving at time 0, with the length of the CPU burst given in milliseconds (the burst times can be recovered from the waiting times computed below):

Process: P1 (burst time 6), P2 (burst time 8), P3 (burst time 7), P4 (burst time 3)

Scheduling in order of shortest next CPU burst gives the Gantt chart:

| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

The waiting time for P1 = 3 ms
The waiting time for P2 = 16 ms
The waiting time for P3 = 9 ms
The waiting time for P4 = 0 ms

The average waiting time = (3 + 16 + 9 + 0)/4 = 7 ms

By comparison, if we were using the FCFS scheduling scheme, the average waiting time would be (0 + 6 + 14 + 21)/4 = 10.25 milliseconds.

Example 2:
Consider the processes available in the ready queue for execution, with arrival time 0 for all and given burst times.

Gantt chart:


Example 3:

Problem with Non-Preemptive SJF

1. If the arrival times of the processes are different, meaning all the processes are not available in the ready queue at time 0 and some jobs arrive after some time, then sometimes a process with a short burst time has to wait for the current process's execution to finish. This is because in non-preemptive SJF, on arrival of a process with a short burst, the existing job/process's execution is not halted/stopped to execute the short job first.


Shortest Job First Scheduling (Preemptive)
1. In preemptive Shortest Job First scheduling, jobs are put into the ready queue as they arrive; when a process with a shorter burst time arrives, the existing process is preempted (removed from execution), and the shorter job is executed first.
2. The average waiting time for preemptive shortest-job-first scheduling is less than both non-preemptive SJF scheduling and FCFS scheduling.
3. Preemptive SJF is also known as Shortest Remaining Time First, because at any given point of time, the job with the shortest remaining time is executed first.

Example: SJF (Preemptive)

Consider the set of 5 processes whose arrival time and burst time are given below-

If the CPU scheduling policy is SJF preemptive, calculate the average waiting time and average turnaround time. Gantt Chart:

0 1 2 5 7 13 20

Average Turn Around Time = 37/5 = 7.4 ms

Average Waiting Time = 17/5 = 3.4 ms


Example of SJF Scheduling (Preemptive)
Consider the following four processes, with the length of the CPU burst given in milliseconds, where the processes arrive at the ready queue at the times shown.

Average Turn Around Time =52/4 =13 ms

Average Waiting Time = 26/4=6.5 ms

Priority Scheduling
1. A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
2. Equal-priority processes are scheduled in FCFS order.
3. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst.
4. The larger the CPU burst, the lower the priority, and vice versa.
Note: Some systems use low numbers to represent low priority; others use low numbers for high priority. This difference can lead to confusion.
1. We assume that low numbers represent high priority.
2. Priority scheduling can be either preemptive or nonpreemptive.
3. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
4. A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.


Example:
Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given in milliseconds (burst times and priorities recoverable from the waiting times below):

Process: P1 (burst 10, priority 3), P2 (burst 1, priority 1), P3 (burst 2, priority 4), P4 (burst 1, priority 5), P5 (burst 5, priority 2)

Using priority scheduling (low number = high priority), we would schedule these processes according to the following Gantt chart:

| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

Average Waiting Time = (6+0+16+18+1)/5
= 41/5
= 8.2 ms

Example: Priority Scheduling (Non-Preemptive)

Example:
Consider the set of 5 processes whose arrival time and burst time are given below-

If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time and average
turn around time. (Higher number represents higher priority)



Example: Priority Scheduling (Preemptive)

Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (0 is the highest priority) shown below. None of the processes have I/O burst time.
Calculate the average waiting time (in milliseconds) of all the processes using the preemptive priority scheduling algorithm.

(Gantt chart)

Waiting Time for P1 = 38 ms
Waiting Time for P2 = 0 ms
Waiting Time for P3 = 37 ms
Waiting Time for P4 = 28 ms
Waiting Time for P5 = 42 ms

Average Waiting Time = (38+0+37+28+42)/5
= 145/5
= 29 ms

Example:
Consider the set of 5 processes whose arrival time and burst time are given below-

If the CPU scheduling policy is priority preemptive, calculate the average waiting time and average turn
around time. (Higher number represents higher priority)

Problem with the Priority Scheduling


1. A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
2. A process that is ready to run but waiting for the CPU can be considered blocked.
3. A priority scheduling algorithm can leave some low priority processes waiting indefinitely.
4. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-
priority process from ever getting the CPU.



Solution to Problem
1. A solution to the problem of indefinite blockage of low-priority processes is aging. Aging involves
gradually increasing the priority of processes that wait in the system for a long time.
1. For example, if priorities range from 127 (low) to 0 (high), we could increase the
priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an
initial priority of 127 would have the highest priority in the system and would be
executed. In fact, it would take no more than 32 hours for a priority-127 process to age
to a priority-0 process.

Round-Robin Scheduling
1. The round-robin (RR) scheduling algorithm is designed especially for timesharing systems.
2. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes.
3. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length.
4. The ready queue is treated as a circular queue (picture processes P1 through P10 arranged in a ring around the CPU).
5. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

Implementation of RR scheduling
1. We keep the ready queue as a FIFO queue of processes.
2. New processes are added to the tail of the ready queue.
3. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.

One of two things will then happen:

1. The process may have a CPU burst of less than 1 time quantum. In this case, the process itself will release the CPU voluntarily.
2. If the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.

Example:
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds, and a time quantum of 4 milliseconds (the burst times can be recovered from the results below):

Process: P1 (burst time 24), P2 (burst time 3), P3 (burst time 3)

Solution:

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

Average Turn Around Time = (30+7+10)/3
= 47/3
= 15.66 ms

Average Waiting Time = (6+4+7)/3
= 17/3
= 5.66 ms
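As a rough illustration (not part of the original notes), the following C sketch replays the quantum-4 example above. Since all processes arrive at time 0, cycling over the unfinished processes in index order visits them in the same order a FIFO ready queue would.

#include <stdio.h>

int main(void) {
    int burst[]  = {24, 3, 3};           /* the P1/P2/P3 example above */
    int remain[] = {24, 3, 3};
    int completion[3];
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;            /* already finished */
            int slice = remain[i] < quantum ? remain[i] : quantum;
            time += slice;                           /* run process i for one slice */
            remain[i] -= slice;
            if (remain[i] == 0) { completion[i] = time; done++; }
        }
    }
    for (int i = 0; i < n; i++)          /* TAT = completion - arrival(0); WT = TAT - burst */
        printf("P%d: turnaround %d, waiting %d\n",
               i + 1, completion[i], completion[i] - burst[i]);
    return 0;
}

It prints turnaround times 30, 7, 10 and waiting times 6, 4, 7, matching the averages computed above.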


Example: Consider the set of 4 processes whose arrival time and burst time are given below-

If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average waiting time and average turnaround time.

Ready Queue / Gantt Chart:

0 2 4 6 8 9 11 12

Consider the set of 5 processes whose arrival time and burst time are given below-

If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average waiting time and average turnaround time.

1. Longest Job First (LJF): It is similar to the SJF scheduling algorithm, but in this scheduling algorithm we give priority to the process having the longest burst time. It is non-preemptive in nature, i.e., once a process starts executing it cannot be interrupted before complete execution.
2. Longest Remaining Time First (LRTF): It is the preemptive mode of the LJF algorithm, in which we give priority to the process having the largest remaining burst time.



Multilevel Queue Scheduling
Another class of scheduling algorithms has been created for situations in which processes are easily classified
into different groups.
For example:
1. Processes that require a user to start them or to interact with them are called foreground processes.
Processes that are run independently of a user are referred to as background processes.
1. Programs and commands run as foreground processes by default. To run a process in
the background, place an ampersand (&) at the end of the command name that you
use to start the process.
2. A common division is made between foreground (interactive) processes and background (batch) processes.
3. These two types of processes have different response-time requirements, and so different scheduling needs.
4. In addition, foreground processes may have priority (externally defined) over background processes.
5. According to the priority of a process, processes are placed in different queues. Generally, high-priority processes are placed in the top-level queue. Only after completion of the processes in the top-level queue are the lower-level queues' processes scheduled. This scheme can suffer from starvation.

6. An example of a multilevel queue scheduling algorithm with five queues, listed below in order
of priority:

1. For example, separate queues might be used for foreground and background processes. The
foreground queue might be scheduled by an RR algorithm, while the background queue is
scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority
preemptive scheduling.
For example, the foreground queue may have
absolute priority over the background queue. Each
queue has absolute priority over lower-priority queues.
No process in the batch queue, for example, could run
unless the queues for system processes, interactive
processes, and interactive editing processes were all
empty. If an interactive editing process entered the
ready queue while a batch process was running, the
batch process would be preempted.
Another possibility is to time-slice among the queues.
Here, each queue gets a certain portion of the CPU time, which it can then schedule among its various processes. For instance, in the foreground–background queue example, the foreground queue can be given 80 percent of the CPU time for RR scheduling among its processes, while the background queue receives 20 percent of the CPU to give to its processes on an FCFS basis.

Multilevel Feedback Queue Scheduling


1. The multilevel feedback queue scheduling algorithm, allows a process to move between queues.
2. The idea is to separate processes according to the characteristics of their CPU bursts.
3. If a process uses too much CPU time, it will be moved to a lower-priority queue.
4. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
5. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-
priority queue.



6. This form of aging prevents starvation.

Consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.

In general, a multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a
process to a higher priority queue
• The method used to determine when to demote a process to a lower priority queue
• The method used to determine which queue a process will enter when that process needs service

Thread Scheduling
1. The process scheduler schedules only the kernel threads.
2. User-level threads are managed by a thread library, and the kernel is unaware of them.
3. To run on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread, although this mapping may be indirect and may use a lightweight process (LWP).
4. Here we explore scheduling issues involving user-level and kernel-level threads, and offer specific examples of scheduling for Pthreads.
Contention Scope
1. Contention scope refers to the scope in which threads compete for the use of physical CPUs.
2. On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP. This scheme is known as process contention scope (PCS), because competition occurs between threads that are part of the same process. (This is the management/scheduling of multiple user threads on a single kernel thread, and is managed by the thread library.)
3. To decide which kernel-level thread to schedule onto a CPU, the kernel uses system contention scope (SCS).
4. Competition for the CPU with SCS scheduling takes place among all threads in the system.
Systems using the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only SCS.
Pthread Scheduling:
1. The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems.
2. The Pthread library provides for specifying the contention scope:
1. PTHREAD_SCOPE_PROCESS schedules threads using PCS, by scheduling user threads onto available LWPs using the many-to-many model.
2. PTHREAD_SCOPE_SYSTEM schedules threads using SCS, by binding user threads to particular LWPs, effectively implementing a one-to-one model.
3. The getscope and setscope functions provide for determining and setting the contention scope, respectively:
4. pthread_attr_setscope(pthread_attr_t *attr, int scope)
5. pthread_attr_getscope(pthread_attr_t *attr, int *scope)
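As a rough sketch (not from the original notes), the following C program uses this API to read the default contention scope and request SCS before creating a thread. Note that Linux supports only PTHREAD_SCOPE_SYSTEM, so requesting PTHREAD_SCOPE_PROCESS there would return an error.

#include <pthread.h>
#include <stdio.h>

void *runner(void *arg) {
    puts("thread running");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);
    pthread_attr_getscope(&attr, &scope);            /* read the default scope */
    printf("default scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "SYSTEM (SCS)" : "PROCESS (PCS)");

    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);  /* request SCS */

    pthread_t tid;
    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}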

Multiple-Processor Scheduling
When multiple processors are available, then the scheduling gets more complicated, because now there is
more than one CPU which must be kept busy and in effective use at all times.
Load sharing revolves around balancing the load between multiple processors.
1. Multi-processor systems may be heterogeneous, ( different kinds of CPUs ), or homogenous, ( all the
same kind of CPU ). Even in the latter case there may be special scheduling constraints, such as devices
which are connected via a private bus to only one of the CPUs.
Approaches to Multiple-Processor Scheduling
1. One approach to multi-processor scheduling is asymmetric multiprocessing, in which one processor is the master, controlling all activities and running all kernel code, while the others run only user code. This approach is relatively simple, as there is no need to share critical system data.
2. Another approach is symmetric multiprocessing (SMP), where each processor schedules its own jobs, either from a common ready queue or from separate ready queues for each processor.
3. Virtually all modern OSes support SMP, including Windows XP, Windows 2000, Solaris, Linux, and Mac OS X.
Processor Affinity
Because of the high cost of invalidating and repopulating caches, most SMP systems try to avoid migrating processes from one processor to another and instead attempt to keep a process running on the same processor. This is known as processor affinity: a process has an affinity for the processor on which it is currently running.

Processor affinity takes several forms.


1. When an operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so, the situation is known as soft affinity.
2. Here, the operating system will attempt to keep a process on a single processor, but it is possible for a process to migrate between processors.
3. In contrast, some systems provide system calls that support hard affinity, thereby allowing a process to specify a subset of processors on which it may run.
4. Many systems provide both soft and hard affinity.
5. For example, Linux implements soft affinity, but it also provides the sched_setaffinity() system call, which supports hard affinity.
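As an illustration of hard affinity, the following minimal sketch pins the calling process to CPU 0 with the Linux sched_setaffinity() system call (the choice of CPU 0 is an arbitrary assumption):

#define _GNU_SOURCE            /* exposes the CPU_* macros on Linux */
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);           /* start with an empty CPU set */
    CPU_SET(0, &mask);         /* allow only CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");
    else
        printf("process %d pinned to CPU 0\n", (int)getpid());

    return 0;
}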
Load Balancing
On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the
benefits of having more than one processor.
Load balancing attempts to keep the workload evenly distributed across all processors in an SMP
system.
1. Push migration involves a separate process that runs periodically, ( e.g. every 200 milliseconds
), and moves processes from heavily loaded processors onto less loaded ones.
2. Pull migration involves idle processors taking processes from the ready queues of other processors.
3. Push and pull migration are not mutually exclusive.
Note that moving processes from processor to processor to achieve load balancing works against the principle of processor affinity; if not carefully managed, the savings gained by balancing the system can be lost in rebuilding caches.



Multicore Processors
Traditionally, SMP systems have allowed several threads to run concurrently by providing multiple physical
processors.
However, a recent practice in computer hardware has been to place multiple processor cores on the same
physical chip, resulting in a multicore processor.
By assigning multiple kernel threads to a single core, a memory stall can be avoided (or reduced) by running one thread on the core while the other thread waits for memory.

Process Synchronization

Process synchronization is the task of coordinating the execution of processes so that no two processes can access the same shared data and resources at the same time.
It is needed mainly when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.

Race Conditions
1. Several processes access and manipulate the same data concurrently.
2. When the outcome of the execution depends on the particular order in which the accesses take place, this is called a race condition.
3. Race conditions are prevented by synchronization:
1. Ensure that only one process at a time manipulates the critical data.

EXAMPLE
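A minimal sketch of a race condition (assuming a POSIX threads environment; the counter, the loop bound, and the function names are illustrative choices): two threads increment a shared counter without synchronization, so updates can be lost depending on how the load/add/store steps interleave.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                        /* shared data */

void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++)
        counter++;                      /* unsynchronized: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* expected 200000, but often less: interleaved updates are lost */
    printf("counter = %d\n", counter);
    return 0;
}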

Critical Section
1. When more than one process accesses the same code segment, that segment is known as the critical section.
2. The critical section contains shared variables or resources which need to be synchronized to maintain the consistency of data.
3. In simple terms, a critical section is a group of instructions/statements or region of code that needs to be executed atomically, such as accessing a resource (a file, an input or output port, global data, etc.).
4. Each process must request permission to enter its critical section.
5. The section of code implementing this request is the entry section.
6. The critical section may be followed by an exit section.



7. The remaining code is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress. When no process is in its critical section, any process that requests entry into the critical section must be permitted to enter without delay.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Here process A enters its critical region at time T1. A little later, at time T2, process B attempts to enter its critical region but fails, because another process is already in its critical region and we allow only one at a time. Consequently, B is temporarily suspended until time T3, when A leaves its critical region, allowing B to enter immediately. Eventually B leaves (at T4) and we are back to the original situation with no processes in their critical regions. (Figure: mutual exclusion using critical regions.)

Mutual Exclusion with Busy Waiting
We now examine various proposals for achieving mutual exclusion, so that while one process is busy updating shared memory in its critical region, no other process will enter its own critical region and cause trouble.
The proposals for achieving mutual exclusion are:
1. Disabling Interrupts
2. Lock Variables
3. Turn Variable or Strict Alternation Approach

Proposal 1: Disabling Interrupts


1. On a single-processor system, the simplest solution is to have each process disable all interrupts just
after entering its critical region and re-enable them just before leaving it.
2. With interrupts disabled, no clock interrupts can occur.
3. The CPU is only switched from process to process as a result of clock or other interrupts, after all,
and with interrupts turned off the CPU will not be switched to another process.
4. Thus, once a process has disabled interrupts, it can examine and update the shared memory without
fear that any other process will intervene.
Problem of Proposal 1: Disabling Interrupts
1. This approach is generally unattractive because it is unwise to give user processes the power to turn
off interrupts.
2. Suppose that one of them did it, and never turned them on again? That could be the end of the system.
3. Furthermore, if the system is a multiprocessor disabling interrupts affects only the CPU that executed the
disable instruction. The other ones will continue running and can access the shared memory.



4. The possibility of achieving mutual exclusion by disabling interrupts, even within the kernel, is becoming less feasible every day due to the increasing number of multicore chips even in low-end PCs. Two cores are already common, four are present in high-end machines, and eight or sixteen are not far behind.
5. In a multicore (i.e., multiprocessor system) disabling the interrupts of one CPU does not prevent
other CPUs from interfering with operations the first CPU is performing. Consequently, more
sophisticated schemes are needed.

Proposal 2: Lock Variables


1. As a second attempt, let us look for a software solution.
2. The lock variable is a synchronization mechanism that uses a single shared lock variable to provide synchronization among processes executing concurrently.
3. However, it completely fails to provide the synchronization.
4. It is a multiprocessor solution that executes in user mode.
It is implemented as follows (see the sketch after this list):
1. Initially, the lock value is set to 0.
2. Lock value = 0 means the critical section is currently vacant and no process is present inside it.
3. Lock value = 1 means the critical section is currently occupied and a process is present inside it.
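A sketch of this (flawed) protocol; the comment marks the window in which a preemption breaks mutual exclusion:

int lock = 0;         /* shared: 0 = critical section vacant, 1 = occupied */

/* entry section */
while (lock != 0);    /* busy wait until the section is vacant */
                      /* <-- a preemption here lets another process also
                             see lock == 0 and enter */
lock = 1;             /* mark the section occupied */

/* critical section */

lock = 0;             /* exit section: mark the section vacant */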

Working-

This synchronization mechanism is supposed to work as explained in the following scenes-

Scene-01:
1. Process P0 arrives.
2. It executes the lock!=0 instruction.
3. Since lock value is set to 0, so it returns value 0 to the while loop.
4. The while loop condition breaks.
5. It sets the lock value to 1 and enters the critical section.
6. Now, even if process P0 gets preempted in the middle, no other process can enter the critical section.
7. Any other process can enter only after process P0 completes and sets the lock value to 0.

Scene-02:
1. Another process P1 arrives.
2. It executes the lock!=0 instruction.
3. Since lock value is set to 1, so it returns value 1 to the while loop.
4. The returned value 1 does not break the while loop condition.
5. The process P1 is trapped inside an infinite while loop.
6. The while loop keeps the process P1 busy until the lock value becomes 0 and its condition breaks.
Scene-03:
1. Process P0 comes out of the critical section and sets the lock value to 0.
2. The while loop condition of process P1 breaks.
3. It sets the lock value to 1 and enters the critical section.
4. Now, even if process P1 gets preempted in the middle, no other process can enter the critical section.
5. Any other process can enter only after process P1 completes and sets the lock value to 0.
Failure of the Mechanism-
1. The mechanism completely fails to provide the synchronization among the processes.
2. It can not even guarantee to meet the basic criterion of mutual exclusion.



Explanation-
The occurrence of the following scenes may lead to two processes present inside the critical section at the
same time-
Scene-01:
Process P0 arrives.
1. It executes the lock!=0 instruction.
2. Since lock value is set to 0, so it returns value 0 to the while loop.
3. The while loop condition breaks.
4. Now, process P0 gets preempted before it sets the lock value to 1.

Scene-02:
Another process P1 arrives.
1. It executes the lock!=0 instruction.
2. Since lock value is still 0, so it returns value 0 to the while loop.
3. The while loop condition breaks.
4. It sets the lock value to 1 and enters the critical section.
5. Now, process P1 gets preempted in the middle of the critical section.
Scene-03:
Process P0 gets scheduled again.
1. It resumes its execution.
2. Before preemption, it had already failed the while loop condition.
3. Now, it begins execution from the next instruction.
4. It sets the lock value to 1 (which is already 1) and enters the critical section.

Thus, both processes end up inside the critical section at the same time.
Problem of Proposal 2: Lock Variables
Suppose that one process reads the lock and sees that it is 0. Before it can set the lock to 1, another
process is scheduled, runs, and sets the lock to 1. When the first process runs again, it will also set the lock to
1, and two processes will be in their critical regions at the same time.
Now you might think that we could get around this problem by first reading out the lock value, then
checking it again just before storing into it, but that really does not help.

1. The race now occurs if the second process modifies the lock just after the first process has finished its second check.
2. Hence, mutual exclusion is not guaranteed.

The characteristics of this synchronization mechanism are-


1. It can be used for any number of processes.
2. It is a software mechanism implemented in user mode.
3. There is no support required from the operating system.
4. It is a busy waiting solution which keeps the CPU busy when the process is actually waiting.
5. It does not fulfill any criteria of synchronization mechanism.

Proposal 3: Turn Variable or Strict Alternation Approach


1. Turn Variable or Strict Alternation Approach is the software mechanism implemented at user mode.
It is a busy waiting solution which can be implemented only for two processes.
2. In this approach, a turn variable is used, which is actually a lock.



3. In the strict alternation approach, processes must enter the critical section alternately, whether they want to or not.
4. This is because if one process never enters the critical section, the other process will never get another chance to execute.
It is implemented as-

1. Initially, turn value is set to 0.


2. Turn value = 0 means it is the turn of process P0 to enter the critical section.
3. Turn value = 1 means it is the turn of process P1 to enter the critical section.

Initially, two processes Pi and Pj both want to execute in the critical section. The turn variable is equal to i, hence Pi gets the chance to enter the critical section. The value of turn remains i until Pi finishes its critical section.
Pi then finishes its critical section and assigns j to the turn variable. Now Pj gets the chance to enter the critical section. The value of turn remains j until Pj finishes its critical section.
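A sketch of the two processes under strict alternation; the shared variable turn acts as the lock described above:

int turn = 0;            /* shared: whose turn it is */

/* process P0 */
while (true) {
    while (turn != 0);   /* busy wait until it is P0's turn */
    /* critical section */
    turn = 1;            /* hand the turn over to P1 */
    /* remainder section */
}

/* process P1 */
while (true) {
    while (turn != 1);   /* busy wait until it is P1's turn */
    /* critical section */
    turn = 0;            /* hand the turn over to P0 */
    /* remainder section */
}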
Working-
This synchronization mechanism works as explained in the following scenes-
Scene-01:
1. Process P0 arrives.
2. It executes the turn!=0 instruction.
3. Since turn value is set to 0, so it returns value 0 to the while loop.
4. The while loop condition breaks.
5. Process P0 enters the critical section and executes.
6. Now, even if process P0 gets preempted in the middle, process P1 can not enter the critical section.
7. Process P1 can not enter unless process P0 completes and sets the turn value to 1.
Scene-02:
1. Process P1 arrives.
1. It executes the turn!=1 instruction.
2. Since turn value is set to 0, so it returns value 1 to the while loop.
3. The returned value 1 does not break the while loop condition.
4. The process P1 is trapped inside an infinite while loop.
5. The while loop keeps the process P1 busy until the turn value becomes 1 and its condition breaks.
Scene-03:
1. Process P0 comes out of the critical section and sets the turn value to 1.
2. The while loop condition of process P1 breaks.
3. Now, the process P1 waiting for the critical section enters the critical section and execute.
4. Now, even if process P1 gets preempted in the middle, process P0 can not enter the critical section.
5. Process P0 can not enter unless process P1 completes and sets the turn value to 0.
Problem of Proposal 3: Turn Variable or Strict Alternation Approach



Analysis of Strict Alternation approach
1. Mutual Exclusion
1. The strict alternation approach provides mutual exclusion in every case. This procedure works only for two processes, and the pseudocode differs for the two processes. A process can enter the critical section only when it sees that the turn variable is equal to its process ID; hence no process can enter the critical section out of turn.
2. Progress
1. Progress is not guaranteed in this mechanism. If Pi does not want to enter the critical section on its turn, then Pj is blocked indefinitely: Pj has to wait for its turn, since the turn variable will remain i until Pi assigns it to j.
3. Portability
1. The solution provides portability. It is a pure software mechanism implemented at user mode
and doesn't need any special instruction from the Operating System.

Characteristics-
The characteristics of this synchronization mechanism are-
1. It ensures mutual exclusion.
2. It follows the strict alternation approach.
3. It does not guarantee progress, since it follows the strict alternation approach.
4. It ensures bounded waiting, since processes execute turn-wise one by one and each process is guaranteed to get a chance.
5. It ensures that processes do not starve for the CPU.
6. It is architecturally neutral, since it does not require any special support from the operating system.
7. It is deadlock free.
8. It is a busy waiting solution which keeps the CPU busy while the process is actually waiting.
Peterson's Solution
1. Peterson's Solution is a classic software-based solution to the critical section problem. It is
unfortunately not guaranteed to work on modern hardware, due to vagaries of load and
store operations.
2. Peterson's solution is designed for two processes.
3. If a system contains more than two processes, Peterson's solution is not sufficient.
4. Peterson's solution requires two shared data items:
1. int turn - Indicates whose turn it is to enter into the critical section. If turn = = i, then process
i is allowed into their critical section.
2. boolean flag[ 2 ] - Indicates when a process wants to enter into their critical section. When
process i wants to enter their critical section, it sets flag[ i ] to true.

The structure of process Pi in Peterson’s solution
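The referenced structure can be reconstructed in its standard form as follows (for process Pi, where j denotes the other process):

do {
    flag[i] = true;                /* Pi announces its intent to enter */
    turn = j;                      /* give priority to the other process */
    while (flag[j] && turn == j);  /* busy wait while Pj wants in and has the turn */

    /* critical section */

    flag[i] = false;               /* exit section */

    /* remainder section */
} while (true);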

Proof of Mutual Exclusion: Pi enters its critical section only if either flag[j] == false or turn == i. If both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true; but turn can equal only 0 or 1, so at most one of the two while-loop tests could have allowed entry, a contradiction. Hence mutual exclusion is preserved.



Bounded Waiting
Case III (Starvation):
Suppose P1 executes its critical section repeatedly.
- Upon exiting its CS, P1 sets flag[1] = false; hence the while-loop condition becomes false for P0 and P0 can proceed.
- However, P1 may attempt to re-enter its CS before P0 has a chance to run.
- To re-enter, P1 sets flag[1] to true and sets turn to 0.
- Hence the while-loop condition is now true for P1 and it waits, while the condition is false for P0, which can proceed.
Thus P0 waits at most one turn, so starvation cannot occur.
1. In the structure shown above, the entry section precedes the critical section and the exit section follows it.
2. In the entry section, process i first raises a flag indicating a desire to enter the critical section.
3. Then turn is set to j, to allow the other process to enter its critical section if process j so desires.
4. The while loop is a busy loop (notice the semicolon at the end), which makes process i wait as long as process j has the turn and wants to enter the critical section.
5. Process i lowers flag[i] in the exit section, allowing process j to continue if it has been waiting.

Peterson’s Solution satisfy all three conditions:


1. Mutual Exclusion: Mutual exclusion is assured as only one process can access the Critical Section at
any time. If both processes attempt to enter at the same time, the last process to execute “turn = j” will be
blocked.
2. Progress: Progress is also satisfied. A process outside the critical section does not block other processes from entering the critical section. The flag variable allows one process to release the other when exiting its critical section.
3. Bounded Waiting: Bounded waiting assures that each process will have to let the other process go first at most one time before it becomes its turn again. Thus, bounded waiting is also satisfied, as each process gets a fair chance.

Note that the instruction "turn = j" is atomic; that is, it is a single machine instruction which cannot be interrupted.



Synchronization Hardware
• In general, we can provide a solution to the critical section problem by using a simple tool called a lock, with which we can prevent the race condition.
• Many systems provide hardware support (hardware instructions available on several systems) for critical section code.
• In a uniprocessor environment, we can solve the critical section problem by disabling interrupts, so that the currently running code executes without preemption.
• But disabling interrupts on multiprocessor systems is time-consuming, and hence inefficient compared to a uniprocessor system.
• Modern machines therefore provide special atomic hardware instructions that allow us either to test a memory word and set its value, or to swap the contents of two memory words, atomically, i.e., as one uninterruptible unit.

Special Atomic hardware Instructions


• TestAndSet()
• Swap()
• The TestAndSet() instruction is a special atomic hardware instruction that allows us to test a memory word and set its value.
1. We can provide mutual exclusion by using the TestAndSet() instruction.
2. To implement mutual exclusion using TestAndSet(), we need to declare a shared Boolean variable called 'lock' (initialized to false).
Test and Set Lock:
1. Test and Set Lock (TSL) is a synchronization mechanism.
2. It uses a test and set instruction to provide the synchronization among the processes
executing concurrently.
Test-and-Set Instruction
1. It is an instruction that returns the old value of a memory location and sets the memory location
value to 1 as a single atomic operation.
2. If one process is currently executing a test-and-set, no other process is allowed to begin another test-and-set until the first process's test-and-set is finished.
It is implemented as-
1. Initially, lock value is set to 0.
2. Lock value = 0 means the critical section is currently vacant and no process is present inside it.
3. Lock value = 1 means the critical section is currently occupied and a process is present inside it.

boolean test_and_set(boolean *target)
{
    boolean rv = *target;   /* save the old value */
    *target = true;         /* set the lock */
    return rv;              /* return the old value */
}

do {
    while (test_and_set(&lock));   /* do nothing (busy wait) */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);



The characteristics of this synchronization mechanism are-
1. It ensures mutual exclusion.
2. It is deadlock free.
3. It does not guarantee bounded waiting and may cause starvation.
4. It is not architectural neutral since it requires the operating system to support test-and-set
instruction.
5. It is a busy waiting solution which keeps the CPU busy when the process is actually waiting
1. The above examples satisfy the mutual exclusion requirement, but unfortunately do not
guarantee bounded waiting.
2. If there are multiple processes trying to get into their critical sections, there is no guarantee of
what order they will enter, and any one process could have to wait forever until they got their turn
in the critical section.

Swap (similar to TestAndSet)


1. The Swap algorithm is a lot like the TestAndSet algorithm.
2. Instead of directly setting lock to true in the swap function, key is set to true and then swapped with lock.
3. So, again, when a process is in the critical section, no other process gets to enter it, as the value of lock is true. Mutual exclusion is ensured.
4. Again, outside the critical section, lock is changed to false, so any process finding it false may enter the critical section. Progress is ensured. However, bounded waiting is again not ensured.
void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

while (true)
{
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);   /* spins until lock becomes FALSE */
    /* critical section */
    lock = FALSE;
    /* remainder section */
}

A shared Boolean variable 'lock' must also be declared (initialized to FALSE) to implement mutual exclusion with Swap().

Mutual Exclusion with Sleep() and Wakeup()


1. When a process wants to enter its critical section, it checks to see whether entry is allowed.
2. If it is not, the process goes into a tight loop and waits (i.e., starts busy waiting) until it is allowed to enter. This approach wastes CPU time.



3. Now let us look at a pair of interprocess communication primitives: sleep and wakeup.
Sleep: a system call that causes the caller to block, that is, to be suspended until some other process wakes it up.
Wakeup: a system call that wakes up a blocked process. Both 'sleep' and 'wakeup' take one parameter: a memory address used to match up sleeps with wakeups.

The Bounded Buffer Producers and Consumers


The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size, i.e., a finite number of slots are available.
Statement:
To suspend the producers when the buffer is full, to suspend the consumers when the buffer is empty, and to
make sure that only one process at a time manipulates a buffer so there are no race conditions or lost updates.
As an example how sleep-wakeup system calls are used, consider the producer-consumer problem also known
as bounded buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information into the buffer
and the consumer takes information out.
Trouble arises when
The producer wants to put a new data in the buffer, but buffer is already full.
Solution: Producer goes to sleep and to be awakened when the consumer has removed data.
The consumer wants to remove data the buffer but buffer is already empty.
Solution: Consumer goes to sleep until the producer puts some data in buffer and wakes consumer up.
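A sketch of the producer-consumer code using sleep and wakeup, in the style of Tanenbaum's presentation; produce_item(), insert_item(), remove_item(), and consume_item() are assumed helper routines, and N is the number of buffer slots:

#define N 100                      /* number of slots in the buffer */
int count = 0;                     /* number of items currently in the buffer */

void producer(void)
{
    int item;
    while (TRUE) {
        item = produce_item();             /* generate the next item */
        if (count == N) sleep();           /* buffer full: go to sleep */
        insert_item(item);                 /* put the item in the buffer */
        count = count + 1;
        if (count == 1) wakeup(consumer);  /* buffer was empty: wake consumer */
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {
        if (count == 0) sleep();              /* buffer empty: go to sleep */
        item = remove_item();                 /* take an item out of the buffer */
        count = count - 1;
        if (count == N - 1) wakeup(producer); /* buffer was full: wake producer */
        consume_item(item);
    }
}

Note that this version still contains a fatal race on count: a wakeup sent to a process that has not yet gone to sleep is lost. This lost-wakeup problem is one motivation for semaphores, introduced below.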

Mutex Locks
1. The hardware solutions presented above are often
difficult for ordinary programmers to access,
particularly on multi-processor machines, and
particularly because they are often platform-dependent.
2. Therefore most systems offer a software API
equivalent called mutex locks or simply mutexes. ( For
mutual exclusion )
3. The terminology when using mutexes is to acquire a lock prior to entering a critical section,
and to release it when exiting

Solution to the critical-section problem using mutex locks

1. Just as with hardware locks, the acquire step will block the
process if the lock is in use by another process, and both the
acquire and release operations are atomic.
2. Acquire and release can be implemented as shown below.
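A sketch of these operations in the standard form, assuming a shared Boolean variable 'available' that is true when the lock is free; both functions must execute atomically:

acquire() {
    while (!available);    /* busy wait until the lock is free */
    available = false;     /* take the lock */
}

release() {
    available = true;      /* free the lock */
}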

1. One problem with this implementation is the busy loop used to block processes in the acquire phase. These types of locks are referred to as spinlocks, because the CPU just sits and spins while blocking the process.
2. Spinlocks are wasteful of CPU cycles, and are a really bad idea on single-CPU machines, because while one process spins, no other process can run to release the lock.



Semaphores
1. A semaphore, as proposed by Edsger Dijkstra, is a technique to manage concurrent processes by using a simple integer value, which is known as a semaphore.
2. A semaphore S is a non-negative integer variable shared between threads. This variable is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment.
3. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
1. wait() was originally termed P (from the Dutch proberen, which means "to test");
2. signal() was originally called V (from verhogen, which means "to increment").

The definition of wait() is as follows:

wait(S)
{
    while (S <= 0);   /* busy wait */
    S--;
}

The definition of signal() is as follows:

signal(S)
{
    S++;
}
All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed
indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously
modify that same semaphore value.

Semaphore Usage
Types of Semaphores:
1. The value of a binary semaphore can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion.
2. The value of a counting semaphore can range over an unrestricted domain. Counting semaphores can be used to control access to a given resource consisting of a finite number of instances.
3. A process that wishes to use a resource performs the wait() operation (the count is decremented).
4. A process that releases a resource performs the signal() operation (the count is incremented).
5. When the count for the semaphore is 0, all the resources are being used by some processes; otherwise, resources are available for processes to allocate.
6. A process that wants a resource when the count is 0 blocks until the count becomes greater than 0.
For example:
1. Let us assume that there are two processes P0 and P1, consisting of statements S0 and S1 respectively.
2. Also assume that the two processes run concurrently, with the requirement that process P1 executes statement S1 only after process P0 has executed statement S0 (see the sketch below).
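This ordering requirement can be enforced with a semaphore synch initialized to 0 (a standard construction; S0 and S1 stand for the two statements):

semaphore synch = 0;   /* shared by P0 and P1 */

/* in process P0 */
S0;
signal(synch);         /* announce that S0 has completed */

/* in process P1 */
wait(synch);           /* blocks until P0 executes signal(synch) */
S1;                    /* now guaranteed to run after S0 */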



Semaphore Implementation with no Busy waiting
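Each semaphore now carries an integer value and a list of waiting processes; a sketch of the usual definition:

typedef struct {
    int value;               /* semaphore value; may become negative */
    struct process *list;    /* queue of processes blocked on this semaphore */
} semaphore;

When value is negative, its magnitude is the number of processes waiting on the semaphore.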

Implementation of wait() (definition of wait with no busy waiting):

wait(S) {
    S.value = S.value - 1;
    if (S.value < 0) {
        /* add this process to S's waiting queue */
        block();
    }
}

Implementation of signal() (definition of signal with no busy waiting):

signal(S) {
    S.value = S.value + 1;
    if (S.value <= 0) {
        /* remove a process P from S's waiting queue */
        wakeup(P);
    }
}

Disadvantages of Semaphores
1. While a process is in its critical section, any other process that tries to enter its critical section must
loop continuously in the entry code.
2. Busy waiting wastes CPU cycles that some other process might be able to use productively.
3. This type of semaphore is also called spinlock because the process “spins“ waiting for the lock.
4. To overcome the need for busy waiting ,we can modify the definition of the wait () and signal
() semaphore operations.
5. When a process executes the wait () operation and finds that the semaphore value is not positive, it
must wait.
6. However, rather than engaging in busy waiting, the process can block itself.
7. The block operation places a process into a waiting queue associated with the semaphore, and the state of
the process is switched to the waiting state.
8. The control is transferred to the CPU scheduler, which selects another process to executes.
Deadlocks and Starvation
The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are each waiting for an event that can be caused only by one of the other waiting processes; such processes are said to be deadlocked.
• To illustrate this, let us assume two processes P0 and P1 each
accessing two semaphores S and Q which are initialized to 1 :-

1. Suppose P0 executes wait(S) and P1 executes wait(Q); then P0 attempts wait(Q) and P1 attempts wait(S). Each process now blocks, since each can proceed only after the other executes signal(Q) or signal(S) respectively, which never happens (see the sketch after this list).
2. Starvation or indefinite blocking. A process may never be removed from the semaphore queue in
which it is suspended.
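Written out, the deadlocking interleaving looks like this (a standard illustration; S and Q are both initialized to 1):

P0:                 P1:
  wait(S);            wait(Q);
  wait(Q);            wait(S);
    ...                 ...
  signal(S);          signal(Q);
  signal(Q);          signal(S);

If P0 executes wait(S) and then P1 executes wait(Q), each process blocks on its second wait(), and neither process ever reaches its signal() operations.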
Classical Problems of Synchronization
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem

The Bounded-Buffer Problem



The bounded-buffer problem (the producer-consumer problem) is one of the classic problems of synchronization.
There is a buffer of n slots, and each slot is capable of storing one unit of data.
Two processes, the Producer and the Consumer, operate on the buffer.
1. The producer tries to insert data into an empty slot of the buffer.
2. The consumer tries to remove data from a filled slot in the buffer.
The constraints are:
1. The producer must not insert data when the buffer is full.
2. The consumer must not remove data when the buffer is empty.
3. The producer and consumer should not insert and remove data simultaneously.
We will make use of three semaphores:
1. mutex, a binary semaphore used to acquire and release the lock.
2. empty, a counting semaphore whose initial value is the number of slots in the buffer, since initially all slots are empty.
3. full, a counting semaphore whose initial value is 0.

The structure of the producer process.


do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);     /* wait until empty > 0, then decrement empty */
    wait(mutex);     /* acquire lock */
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);   /* release lock */
    signal(full);    /* increment full */
} while (true);

The structure of the consumer process.


do {
    wait(full);      /* wait until full > 0, then decrement full */
    wait(mutex);     /* acquire lock */
    ...
    /* remove an item from buffer to next_consumed */
    ...
    signal(mutex);   /* release lock */
    signal(empty);   /* increment empty */
    ...
    /* consume the item in next_consumed */
    ...
} while (true);

Readers-Writers Problem
1. A database is to be shared among several concurrent processes.



2. Some of these processes may want only to read the database, whereas others may want to update (that is, to read and write) the database.
3. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers.
4. If two readers access the shared data simultaneously, no adverse effects will result.
5. If a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue.
6. To ensure that these difficulties do not arise, we require that writers have exclusive access to the shared database while writing to it.
7. This synchronization problem is referred to as the readers-writers problem.

We will make use of two semaphores and an integer variable:
1. mutex, a semaphore (initialized to 1) used to ensure mutual exclusion when readcount is updated, i.e., when any reader enters or exits the critical section.
2. wrt, a semaphore (initialized to 1) common to both reader and writer processes.
3. readcount, an integer variable (initialized to 0) that keeps track of how many processes are currently reading the object.

The structure of a writer process (note that the writer must wait on wrt, not mutex, since wrt is the semaphore that grants exclusive access to the database):

do {
    wait(wrt);       /* writer requests exclusive access */
    ...
    /* writing is performed */
    ...
    signal(wrt);     /* writer releases exclusive access */
} while (true);

The structure of a reader process:

while (true) {
    wait(mutex);
    readcount++;          /* the number of readers has increased by 1 */
    if (readcount == 1)
        wait(wrt);        /* the first reader locks out writers */
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount--;          /* a reader wants to leave */
    if (readcount == 0)   /* no reader is left in the critical section */
        signal(wrt);      /* writers can enter */
    signal(mutex);        /* reader leaves */
}

Dining-Philosophers Problem
1. Consider five philosophers who spend their lives thinking
and eating.
2. When a philosopher thinks, she does not interact with
her colleagues.



3. When a philosopher gets hungry, she tries to pick up the two chopsticks that lie between her and her left and right neighbors.
4. A philosopher may pick up only one chopstick at a time.
5. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbour.
6. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing
the chopsticks.
7. When she is finished eating, she puts down both chopsticks and starts thinking again
8. One simple solution is to represent each chopstick with a semaphore.
9. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore.
10. She releases her chopsticks by executing the signal() operation on the appropriate semaphores.
11. Thus, the shared data are
semaphore chopstick[5];

The structure of Philosopher i:


while (true) {
    wait(chopstick[i]);               /* pick up left chopstick */
    wait(chopstick[(i + 1) % 5]);     /* pick up right chopstick */
    /* eat */
    signal(chopstick[i]);             /* put down left chopstick */
    signal(chopstick[(i + 1) % 5]);   /* put down right chopstick */
    /* think */
}
1. Although this solution guarantees that no two neighbours are eating simultaneously, it
nevertheless must be rejected because it could create a deadlock.
2. Suppose that all five philosophers become hungry at the same time and each grabs her left chopstick.
3. All the elements of chopstick will now be equal to 0. When each philosopher tries to grab her
right chopstick, she will be delayed forever.
Several possible remedies to the deadlock problem:

• Allow at most four philosophers to be sitting simultaneously at the table.

• Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).

• Use an asymmetric solution; that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.

Problems with Semaphores

Incorrect use of semaphore operations:


– Case 1: signal(mutex) .... wait(mutex)
– Case 2: wait(mutex) .... wait(mutex)
– Case 3: omitting wait(mutex) or signal(mutex) (or both)
• When semaphores are used incorrectly as above, timing errors may result.
• Case 1: several processes may execute in the critical section, violating the mutual exclusion requirement.
• Case 2: deadlock will occur.



• Case 3 either mutual exclusion is violated or dead lock will occur
• To deal with such type of errors, researchers have developed high-level language constructs.
• One type of high-level language constructs that is to be used to deal with the above type of
errors is the Monitor type.

Monitors
• A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization.
• A monitor procedure can access only those variables that are declared inside the monitor, together with its formal parameters.
• Only one process may be active within the monitor at a time.
Syntax of the monitor:

monitor monitor-name
{
    /* shared variable declarations */
    procedure P1 (...) { .... }
    ...
    procedure Pn (...) { .... }
    initialization code (...) { ... }
}

[Figure: schematic view of a monitor]

Dining-Philosophers Solution Using Monitors


1. This solution to the dining philosophers uses monitors, and the restriction that a philosopher may only
pick up chopsticks when both are available. There are also two key data structures in use in this
solution:
enum { THINKING, HUNGRY, EATING } state[5];
2. A philosopher may set her state to EATING only when neither of her adjacent neighbors is eating:
(state[(i + 1) % 5] != EATING && state[(i + 4) % 5] != EATING).
3. condition self[5]; this condition variable is used to delay a hungry philosopher who is unable to acquire her chopsticks.
4. In the following solution, philosophers share a monitor, DiningPhilosophers, and eat using the following sequence of operations:
5. DiningPhilosophers.pickup(i) - acquires the chopsticks, which may block the process.
6. eat
7. DiningPhilosophers.putdown(i) - releases the chopsticks.
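A sketch of the monitor, following the standard presentation of this solution:

monitor DiningPhilosophers
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);                   /* try to acquire both chopsticks */
        if (state[i] != EATING)
            self[i].wait();        /* block until a neighbor finishes */
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);         /* let the left neighbor eat if possible */
        test((i + 1) % 5);         /* let the right neighbor eat if possible */
    }

    void test(int i) {
        if (state[(i + 4) % 5] != EATING &&
            state[i] == HUNGRY &&
            state[(i + 1) % 5] != EATING) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

This solution is deadlock free, although a philosopher could still starve.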
