OS Unit-II KKW
OPERATING SYSTEM
DESIGN
Prof. Jayshri A. Kandekar
Syllabus
Process concept, Process Control Block (PCB), Process Operations, Process
Scheduling: Types of process schedulers, Types of scheduling: Preemptive,
Non-preemptive. Scheduling algorithms: FCFS, SJF, RR, Priority, Inter Process
Communication (IPC). Threads: multithreaded model, implicit threads, threading
issues
Operating Systems
PROCESS CONCEPT
Process
Users submit jobs to the system for execution; the jobs are run on the
system as processes.
A program in execution is a process. A process is executed
sequentially, one instruction at a time.
A program is a passive entity, for example a file on the disk.
A process is an active entity.
Process States:
1. New
When a process is created, it is in the new state. The process is not yet ready
to run in this state and is waiting for the operating system to give it the
green light. The long-term scheduler moves the process from the NEW state to
the READY state.
2. Ready
After creation, the process enters the ready state, in which it waits for the
CPU to be assigned for execution. The process waits in the ready queue to be
picked up by the short-term scheduler, which selects one process from the
READY state and moves it to the RUNNING state.
The OS picks new processes from secondary memory and puts them in main memory.
There can be many processes present in the ready state.
12/09/2024 Operating system 9
Process State/Operations
3.Running
One of the processes from the ready state will be chosen by the OS
depending upon the scheduling algorithm. Hence, if we have only one
CPU in our system, the number of running processes for a particular
time will always be one. If we have n processors in the system then we
can have n processes running simultaneously.
In the running state, the process executes the instructions that were given to
it. The running state is also where the process consumes most of its CPU time.
4. Block or wait
The OS suspends the process currently executing in the running state, and it
enters the waiting state. This could be because:
It is waiting for some input from the user.
It is waiting for a resource that is not yet available.
Some high-priority process arrives that needs to be executed.
The process is then suspended for some time and put in the WAITING state;
until then, the next process is given a chance to execute.
When a timeout occurs, meaning the process has not finished in the allotted
time interval and the next process is ready to execute, the operating system
preempts the process. This happens, for instance, in priority scheduling: on
the arrival of a high-priority process, the ongoing process is preempted,
i.e., the operating system puts the process back in the READY state.
5. Completion or termination
When a process finishes its execution, it enters the terminated state. All the
context of the process (its Process Control Block) is deleted, and the process
is terminated by the operating system.
After the process has completed the execution of its last instruction, it is
terminated. The resources held by a process are released after it is
terminated.
A child process can be terminated by its parent process if its task is no
longer relevant. The child process sends its status information to the parent
process before it terminates. Also, when a parent process is terminated, its
child processes are terminated as well, since child processes cannot run if
their parent has been terminated.
Process Control Block(PCB)
A Process Control Block (PCB), also called a task control block, is a data
structure used by the operating system to store information about a process
when the process is created.
The PCB is unique for every process and holds various attributes such as the
process ID, program counter, process state, registers, CPU scheduling
information, memory management information, accounting information, I/O status
information, the list of open files, etc.
The operating system uses the process control block to keep track of all the
processes executing in the system.
The process control block stores many data items that are needed for
efficient process management. Some of these data items are
explained with the help of the given diagram:
Process State: gives the current state of the process, which could be
new, ready, running, waiting or terminated.
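The fields listed above can be pictured as a simple record. Below is a minimal sketch in Python; the exact field set is illustrative, not a copy of any real kernel's PCB layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block holding the attributes named above."""
    pid: int                              # process ID
    state: str = "new"                    # new / ready / running / waiting / terminated
    program_counter: int = 0              # address of the next instruction
    registers: dict = field(default_factory=dict)
    priority: int = 0                     # CPU-scheduling information
    open_files: list = field(default_factory=list)

pcb = PCB(pid=42)
pcb.state = "ready"                       # long-term scheduler admits the process
```

The OS keeps one such record per process and updates the state field as the process moves through the states described above.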
Process Scheduling
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
Scheduling refers to the way processes are assigned to run on the available
CPUs. This assignment is carried out by software known as the scheduler.
A scheduler is an OS module that selects the next job to be admitted into the
system and the next process to run.
The problem of determining when processors should be assigned, and to which
processes, is called processor scheduling or CPU scheduling.
Types of schedulers:
1. Long-Term Scheduler (LTS)
2. Medium-Term Scheduler (MTS)
3. Short-Term Scheduler (STS)
| # | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler |
| 3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming. |
| 4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems. |
| 5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory, and execution can be continued. |
First Come First Served (FCFS) Scheduling
The first come first served (FCFS) scheduling algorithm simply schedules jobs
according to their arrival time. The job that arrives first in the ready queue
gets the CPU first. The lower the arrival time of the job, the sooner the job
gets the CPU. FCFS scheduling may cause the problem of starvation if the burst
time of the first process is the longest among all the jobs.
The process that requests the CPU first is allocated the CPU first.
When a process enters the ready queue, its PCB is linked onto the tail of the
queue. When the CPU is free, it is allocated to the process at the head of the
queue.
The FCFS algorithm is non-preemptive.
Jobs are executed on a first come, first served basis.
Easy to understand and implement.
Its implementation is based on a FIFO queue.
Poor in performance, as the average waiting time is high.
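The behaviour above can be sketched as a short simulation. This is an illustrative sketch, not OS code; the process tuples (name, arrival, burst) are hypothetical:

```python
def fcfs(processes):
    """FCFS: run jobs in arrival order; returns (name, waiting, turnaround)."""
    done, time = [], 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)        # CPU may sit idle until the job arrives
        time = start + burst              # non-preemptive: run to completion
        done.append((name, start - arrival, time - arrival))
    return done

result = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
```

In this example P1's burst makes P2 and P3 wait behind it, which is exactly why the average waiting time of FCFS tends to be high.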
Shortest Job First (SJF) Scheduling
Disadvantages of SJF
It may suffer from the problem of starvation.
It is not implementable in practice because the exact burst time of a process
cannot be known in advance.
Example (non-preemptive SJF):

| Process | Arrival Time | Burst Time | Start Time |
| P0 | 0 | 5 | 0 |
| P1 | 1 | 3 | 5 |
| P2 | 2 | 8 | 14 |
| P3 | 3 | 6 | 8 |

Waiting time = start time - arrival time:
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5
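The waiting times above can be reproduced with a small non-preemptive SJF simulation (a sketch, using the arrival and burst values from the table):

```python
def sjf(processes):
    """Non-preemptive SJF over (name, arrival, burst); returns waiting times."""
    pending = list(processes)
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst among the ready jobs
        waits[job[0]] = time - job[1]          # waiting time = start - arrival
        time += job[2]
        pending.remove(job)
    return waits

waits = sjf([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
```

Running this gives the same waiting times as the table: P0 = 0, P1 = 4, P2 = 12, P3 = 5.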
Once all the processes are available in the ready queue, no preemption is done
and the algorithm works as SJF scheduling. The context of the process is saved
in the Process Control Block when the process is removed from execution, and
the next process is scheduled. This PCB is accessed on the next execution of
the process.
Example:
There are six jobs P1, P2, P3, P4, P5 and P6. Their arrival time and burst time are given below in the table.
The Gantt chart is prepared according to the arrival and burst time given in the table.
At time 0, the only available process is P1, with CPU burst time 8. As it is the only available process in the list,
it is scheduled.
The next process arrives at time unit 1. Since the algorithm we are using is SRTF which is a preemptive one, the
current execution is stopped and the scheduler checks for the process with the least burst time.
Till now, there are two processes available in the ready queue. The OS has executed P1 for one unit of time till now;
the remaining burst time of P1 is 7 units. The burst time of Process P2 is 4 units. Hence Process P2 is scheduled on
the CPU according to the algorithm.
Shortest Remaining Time First (SRTF) Scheduling
The next process, P3, arrives at time unit 2. At this time, the current execution is stopped and the process with
the least remaining burst time is searched for. Since process P3 has 2 units of burst time, it is given priority
over the others.
The next process, P4, arrives at time unit 3. On this arrival, the scheduler stops the current execution and checks
which process has the least remaining burst time among the available processes (P1, P2, P3 and P4). P1 and P2 have
remaining burst times of 7 units and 3 units respectively.
P3 and P4 have a remaining burst time of 1 unit each. Since both are equal, scheduling is done according to arrival
time: P3 arrived earlier than P4, and therefore it is scheduled again.
The next process, P5, arrives at time unit 4. By this time, process P3 has completed its execution and is no longer
in the list. The scheduler compares the remaining burst times of all the available processes. Since the remaining
burst time of process P4 is 1, the least among all, P4 is scheduled.
The next process, P6, arrives at time unit 5; by this time, process P4 has completed its execution. There are now
four available processes: P1 (7), P2 (3), P5 (3) and P6 (2). The remaining burst time of P6 is the least among all,
hence P6 is scheduled. Since all the processes have now arrived, the algorithm works the same as SJF:
P6 executes to completion, and then the process with the least remaining time is scheduled.
Once all the processes have arrived, no preemption is done and the algorithm works as SJF.
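The walk-through above can be checked with a unit-by-unit SRTF simulation. This is a sketch; the burst times P1=8, P2=4, P3=2, P4=1, P5=3, P6=2 are inferred from the narrative, since the original table is not reproduced here:

```python
def srtf(processes):
    """Preemptive SJF (SRTF), simulated one time unit at a time.
    processes: (name, arrival, burst); returns completion times."""
    arrival = {n: a for n, a, b in processes}
    remaining = {n: b for n, a, b in processes}
    time, finished = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        # least remaining time first; ties broken by earlier arrival
        n = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            finished[n] = time
            del remaining[n]
    return finished

done = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2),
             ("P4", 3, 1), ("P5", 4, 3), ("P6", 5, 2)])
```

The completion times match the narrative: P3 finishes at time 4, P4 at 5, P6 at 7, and P1, the longest job, finishes last.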
A preemptive priority algorithm will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process.
A non-preemptive priority algorithm will simply put the new process at the head of the
ready queue.
Priority Based Scheduling
A major problem with priority scheduling algorithm is indefinite blocking or starvation.
Aging is a technique of gradually increasing the priority of processes that wait in the system for a long period.
In its basic form, priority scheduling is a non-preemptive algorithm, and it is one of the most
common scheduling algorithms in batch systems.
Each process is assigned a priority. The process with the highest priority is executed first,
and so on.
Processes with the same priority are executed on a first come first served basis.
Priority can be decided based on memory requirements, time requirements, or any other
resource requirement.
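A minimal sketch of the non-preemptive variant, assuming a lower number means higher priority and that all processes arrive at time 0 (both assumptions, and the process data, are illustrative):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling over (name, priority, burst).
    Lower number = higher priority. sorted() is stable, so processes
    with equal priority keep first-come-first-served order."""
    time, order = 0, []
    for name, prio, burst in sorted(processes, key=lambda p: p[1]):
        order.append((name, time))        # (name, start time)
        time += burst
    return order

run = priority_schedule([("P1", 3, 4), ("P2", 1, 3), ("P3", 3, 2)])
```

Here P2 runs first despite arriving with the others, and P1 beats P3 only because it was submitted earlier at the same priority.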
The RR scheduling algorithm is designed especially for time-sharing systems. It is
the preemptive version of first come first served scheduling.
In this algorithm, every process gets executed in a cyclic way. A certain time slice
is defined in the system which is called time quantum. A time quantum is generally from
10 to 100 milliseconds.
Each process present in the ready queue is assigned the CPU for that time
quantum. If the execution of the process completes during that time, the
process terminates; otherwise the process goes back to the ready queue and
waits for its next turn to complete its execution. Thus the ready queue is
treated as a circular queue.
Round Robin is the preemptive process scheduling algorithm.
Context switching is used to save states of preempted processes.
Round Robin Scheduling
Advantages
It is actually implementable in a system because it does not depend on burst time.
It doesn't suffer from the problem of starvation or the convoy effect.
All the jobs get a fair allocation of CPU.
Disadvantages
The higher the time quantum, the higher the response time in the system.
The lower the time quantum, the higher the context-switching overhead in
the system.
Deciding a good time quantum is a genuinely difficult task.
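The circular-queue behaviour can be sketched directly with a deque. This is illustrative; all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(processes, quantum):
    """RR over (name, burst) pairs; returns completion times."""
    queue = deque(processes)              # the ready queue, treated circularly
    time, finish = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)           # run for at most one time quantum
        time += run
        if rem > run:
            queue.append((name, rem - run))   # unfinished: back to the tail
        else:
            finish[name] = time
    return finish

done = round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2)
```

With quantum 2, short job P3 finishes after one turn at time 5, while P1 needs three turns; a larger quantum would make RR degenerate toward FCFS.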
Inter Process Communication
What is Inter Process Communication (IPC)?
Inter process communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of
data from one process to another.
Inter Process Communication (IPC)
Inter process Communication allows processes to communicate and synchronize their actions.
Inter process Communication (IPC) mechanism is used by cooperating processes to exchange
data and information.
Two operations provided by the IPC facility are receive and send messages.
There are two models of IPC:
→ Shared Memory
→ Message Passing
Message Passing system allows processes to communicate with each other without sharing
the same address space.
Messages sent by a process can be of fixed or variable size. If the message
size is fixed, the system-level implementation is straightforward, but it
makes the task of programming more difficult. If the message size is variable,
the system-level implementation is more complex, but it makes the task of
programming simpler.
Synchronization in Inter Process Communication
Synchronization is a necessary part of interprocess communication. It is either provided
by the interprocess control mechanism or handled by the communicating processes.
Some of the methods to provide synchronization are as follows:
Semaphore: A semaphore is a variable that controls the access to a common resource by
multiple processes. The two types of semaphores are binary semaphores and counting
semaphores.
Mutual Exclusion: Mutual exclusion requires that only one process thread can enter the
critical section at a time. This is useful for synchronization and also prevents race
conditions.
Barrier: A barrier does not allow individual processes to proceed until all the processes
reach it. Many parallel languages and collective routines impose barriers.
Spinlock: This is a type of lock. The processes trying to acquire this lock wait in a loop
while checking if the lock is available or not. This is known as busy waiting because the
process is not doing any useful operation even though it is active.
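As a concrete illustration of the semaphore method above, a counting semaphore initialised to 2 lets at most two threads into a section at once (the worker and the counters are illustrative):

```python
import threading

sem = threading.Semaphore(2)          # counting semaphore: at most 2 holders
inside, peak = 0, 0
lock = threading.Lock()               # protects the two counters below

def worker():
    global inside, peak
    with sem:                         # acquire (wait); auto-release on exit
        with lock:
            inside += 1
            peak = max(peak, inside)  # record how many were inside at once
        with lock:
            inside -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with ten threads, `peak` never exceeds the semaphore's initial count of 2; a binary semaphore (count 1) would give mutual exclusion.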
The link between two processes P and Q used to send and receive messages is
called a communication link. If two processes P and Q want to communicate with
each other, a communication link must exist between them so that both
processes are able to send and receive messages using that link.
For direct communication, a communication link is associated with exactly two
processes. One communication link must exist between a pair of processes.
In indirect communication between processes P and Q there is a mailbox to
help communication between P and Q. A mailbox can be viewed abstractly as
an object into which messages can be placed by processes and from which
messages can be removed.
In the non blocking send, the sending process sends the message and
resumes operation. Sending process doesn’t care about reception. It is also
known as asynchronous send.
In a zero-capacity queue, the sender blocks until the receiver receives the
message. A zero-capacity queue has a maximum capacity of zero; thus the
message queue cannot hold any waiting messages.
The Zero capacity queue is referred to as a message system with no
buffering.
Bounded-capacity and unbounded-capacity queues provide automatic buffering:
the buffer capacity of a bounded-capacity queue is of finite length, while the
buffer capacity of an unbounded-capacity queue is infinite.
Thread
A thread is sometimes called a lightweight process (LWP).
Benefits of Thread
The user threads must be mapped to kernel threads, using one of the following
strategies.
Many-To-One Model
One-To-One Model
Many-To-Many Model
Multithreading Models
In the many-to-one model, many user-level threads are all mapped onto a single
kernel thread.
Thread management is handled by the thread library in user space, which is very
efficient.
However, if a blocking system call is made, then the entire process blocks, even if the
other user threads would otherwise be able to continue.
Because a single kernel thread can operate only on a single CPU, the many-to-one
model does not allow individual processes to be split across multiple CPUs.
Green threads for Solaris and GNU Portable Threads implemented the many-to-one
model in the past, but few systems continue to use it today.
Examples:
Solaris Green Threads
GNU Portable Threads
The one-to-one model creates a separate kernel thread to handle each user thread.
One-to-one model overcomes the problems listed above involving blocking system
calls and the splitting of processes across multiple CPUs.
However, managing the one-to-one model is more expensive: creating a kernel
thread for every user thread adds overhead and can slow down the system.
Most implementations of this model place a limit on how many threads can be created.
Linux and Windows from 95 to XP implement the one-to-one model for threads.
Examples:
Windows
Linux
Solaris 9 and later
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
Users have no restrictions on the number of threads created.
Blocking kernel system calls do not block the entire process.
Processes can be split across multiple processors.
Individual processes may be allocated variable numbers of kernel threads, depending
on the number of CPUs present and other factors.
Examples:
Solaris prior to version 9
Windows with the ThreadFiber package
Implicit Threads
Advantages:
Usually slightly faster to service a request with an existing thread than to
create a new thread.
Allows the number of threads in the application(s) to be bound to the size of
the pool.
Separating the task to be performed from the mechanics of creating the task
allows different strategies for running the task, e.g. tasks could be
scheduled to run periodically.
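These advantages describe thread pools, one common form of implicit threading. A sketch with Python's concurrent.futures, where the pool size bounds the number of threads (the squaring task stands in for servicing a request):

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n * n                      # stand-in for servicing one request

# A fixed-size pool: at most 4 worker threads ever exist, and each
# incoming task reuses an existing thread instead of creating a new one.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(8)))
```

The programmer submits tasks; the library, not the application, decides how threads are created and reused, which is what makes the threading "implicit".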
OpenMP
OpenMP is a set of compiler directives available for C, C++, or FORTRAN
programs that instruct the compiler to automatically generate parallel code
where appropriate.
For example, the directive:
#pragma omp parallel
{ /* some parallel code here */ }
would cause the compiler to create as many threads as the machine has cores
available (e.g. 4 on a quad-core machine), and to run the parallel block of
code (known as a parallel region) on each of the threads.
Another sample directive is "#pragma omp parallel for", which causes the for
loop immediately following it to be parallelized, dividing the iterations up
amongst the available cores.
Provides support for parallel programming in shared-memory environments.
Identifies parallel regions – blocks of code that can run in parallel.
Grand Central Dispatch, GCD
GCD is an extension to C and C++ languages, API, and run-time library
available on Apple technology for Mac OSX and iOS operating systems to
support parallelism.
Allows identification of parallel sections
Similar to OpenMP, users of GCD define blocks of code to be executed either
serially or in parallel by placing a caret just before an opening curly
brace. A block is written as ^{ }, e.g. ^{ printf( "I am a block.\n" ); }
Internally GCD manages a pool of POSIX threads which may fluctuate in size
depending on load conditions.
Manages most of the details of threading
GCD schedules blocks by placing them on one of several dispatch queues.
Two types of dispatch queues:
serial – Blocks placed on a serial queue are removed one by one. The next
block cannot be removed for scheduling until the previous block has
completed; blocks are removed in FIFO order. There is one serial queue per
process, called the main queue, and programmers can create additional serial
queues within a program.
concurrent – Blocks are also removed from these queues one by one, but
several may be removed and dispatched without waiting for others to finish
first, depending on the availability of threads; i.e. blocks are removed in
FIFO order, but several may be removed at a time.
There are three concurrent queues, corresponding roughly to low,
medium, or high priority.
Other Approaches
There are several other approaches available, including Intel's Threading
Building Blocks (TBB) and other products, and Java's java.util.concurrent
package.
Thread Issues
Signal Handling
Q: When a multi-threaded process receives a signal, to what thread should
that signal be delivered?
A: There are four major options:
Deliver the signal to the thread to which the signal applies.
Deliver the signal to every thread in the process.
Deliver the signal to certain threads in the process.
Assign a specific thread to receive all signals in a process.
The best choice may depend on which specific signal is involved.
UNIX allows individual threads to indicate which signals they are accepting
and which they are ignoring. However, the signal can only be delivered to one
thread, which is generally the first thread that is accepting that particular
signal.
Signals are used in UNIX systems to notify a process that a particular event has occurred.
Every signal has a default handler that the kernel runs when handling the signal.
A user-defined signal handler can override the default.
For a single-threaded process, the signal is delivered to the process.
UNIX provides two separate system calls, kill( pid, signal ) and pthread_kill( tid, signal ), for
delivering signals to processes or specific threads respectively.
Windows does not support signals, but they can be emulated using Asynchronous
Procedure Calls (APCs). APCs are delivered to specific threads, not processes.
Thread Cancellation
Thread cancellation means terminating a thread before it has finished.
The thread to be canceled is called the target thread.
Threads that are no longer needed may be cancelled by another thread in one of two
ways:
Asynchronous cancellation cancels the thread immediately.
Deferred cancellation sets a flag indicating that the target thread should
cancel itself when convenient; the target thread periodically checks this
flag and exits cleanly when it sees the flag set.
( Shared ) resource allocation and inter-thread data transfers can be problematic with
asynchronous cancellation.
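Deferred cancellation can be sketched with a flag that the target thread polls; here threading.Event plays the role of the cancellation flag (the sleep interval is an illustrative stand-in for real work):

```python
import threading, time

cancel = threading.Event()            # the "please cancel yourself" flag

def worker():                         # the target thread
    while not cancel.is_set():        # periodic check: deferred cancellation
        time.sleep(0.01)              # stand-in for one unit of real work
    # flag seen: the thread can clean up and exit on its own terms

target = threading.Thread(target=worker)
target.start()
cancel.set()                          # another thread requests cancellation
target.join(timeout=5)                # wait for the target to notice and exit
```

Because the target chooses when to act on the flag, it can release shared resources safely first, which is exactly the hazard asynchronous cancellation leaves unresolved.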
Scheduler Activations
Many implementations of threads provide a virtual processor as an interface between the user
thread and the kernel thread, particularly for the many-to-many or two-tier models.
This virtual processor is known as a "Lightweight Process", LWP.
There is a one-to-one correspondence between LWPs and kernel threads.
The number of kernel threads available, ( and hence the number of LWPs ) may change
dynamically.
The application ( user level thread library ) maps user threads onto available LWPs.
Kernel threads are scheduled onto the real processor(s) by the OS.
The kernel communicates to the user-level thread library when certain events occur ( such
as a thread about to block ) via an upcall, which is handled in the thread library by
an upcall handler. The upcall also provides a new LWP for the upcall handler to run on,
which it can then use to reschedule the user thread that is about to become blocked. The
OS will also issue upcalls when a thread becomes unblocked, so the thread library can
make appropriate adjustments.
THANK YOU !!
Dr. Dipak D. Bage