
DEBRE MARKOS UNIVERSITY
BURIE CAMPUS
DEPARTMENT OF COMPUTER SCIENCE

Operating System

By:
Amare W.

Chapter Two: Processes and Process Management

2.1. Process and Thread

• Program
o It is a sequence of instructions defined to perform some task
o It is a passive entity

The process concept
 A key concept in all operating systems is the process.
 A process is basically a program in execution.
 It is an instance of a program running on a computer.
 It is an active entity.
 A processor performs the actions defined by a process.

Program vs. Process example

[Figure: Program vs. Process]

 The operating system is responsible for the following activities in connection with process management:

1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization, for proper sequencing and coordination when dependencies exist.
5. Providing mechanisms for process communication.
6. Terminating processes.

 There may exist more than one process in the system which may require the same resource at the same time.
 Therefore, the operating system has to manage all the processes and the resources in a convenient and efficient way.
 There are two types of processes:

Sequential Processes
 Execution progresses in a sequential fashion, i.e. one after the other.
 At any point in time, at most one process is being executed.

Concurrent Processes
There are two types of concurrent processes:

True Concurrency (Multiprocessing)
 Two or more processes are executed simultaneously in a multiprocessor environment.

Apparent Concurrency (Multiprogramming)
 Two or more processes are executed in parallel in a uniprocessor environment by switching from one process to another.
 Supports pseudo-parallelism, i.e. the fast switching among processes gives the illusion of parallelism.

Process creation

3/2/2018
8 01/27/2025

3/2/2018
9
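On UNIX-like systems, a new process is created with the fork() system call; the child then typically calls exec to load a new program, and the parent waits for it. A minimal sketch (POSIX assumed; running "ls -l" is just an illustration):

    /* Process creation sketch: fork(), exec, wait (POSIX assumed). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();               /* create a child process */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            /* child: replace its memory image with a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");             /* reached only if exec fails */
            _exit(127);
        } else {
            int status;
            waitpid(pid, &status, 0);     /* parent waits for the child */
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }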

Process termination
 After a process has been created, it starts running and does
whatever its job is.
 However, nothing lasts forever, not even processes.
 Sooner or later the new process will terminate, usually due to
one of the following conditions:
1. Normal exit (voluntary).
2. Error exit (voluntary).
3. Fatal error (involuntary).
4. Killed by another process (involuntary).
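The four conditions map onto familiar POSIX mechanisms; a comment-level sketch (the function names here are illustrative only, not a real API):

    /* Sketch of the four termination paths (POSIX assumed). */
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>

    void normal_exit(void) { exit(EXIT_SUCCESS); }          /* 1. normal exit (voluntary) */
    void error_exit(void)  { exit(EXIT_FAILURE); }          /* 2. error exit (voluntary) */
    void fatal_error(void) { volatile int *p = 0; *p = 1; } /* 3. fatal error: raises SIGSEGV (involuntary) */
    void kill_other(pid_t victim) { kill(victim, SIGKILL); }/* 4. killed by another process (involuntary) */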


Process States
 During its lifetime, a process passes through a number of states. The most important states are: New, Ready, Running, Blocked (Waiting) and Terminated.
 1. New: A process that has just been created but has not yet been admitted to the pool of executable processes by the operating system.
 Information concerning the process is already maintained in memory, but the code is not loaded and no space has been allocated for the process.

….cont’d

2. Ready: The process is waiting to be assigned to a processor. A process that is not currently executing, but that is ready to be executed as soon as the operating system dispatches it.
3. Running: A process that is currently being executed. The process is actually using the CPU.
4. Blocked (Waiting): A process that is waiting for the completion of some event, such as an I/O operation. The process is unable to run until some external event happens.

….cont’d
 5. Exit (Terminated): A process that has been released from the pool of executable processes by the operating system, either because it halted or because it aborted for some reason.

New – process is being created
Ready – process is waiting to run (runnable), temporarily stopped to let another process run
Running – process is actually using the CPU
Blocked – unable to run until some external event happens
Exit (Terminated) – process has finished execution

In which memory does a process reside during its different states?

Process control block (PCB)

 A Process Control Block (PCB) is a data structure maintained by the operating system for every process.
 Each process is represented in the OS by a process control block. It contains many pieces of information associated with a specific process.
 The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process.
 The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
 The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. The PCB lies in kernel memory space.

Fields of Process Control Block (PCB)

 Process ID: unique identification for each process in the operating system.
 Process state: the state may be new, ready, running, waiting or halted.
 Pointer: a pointer to the parent process.
 Priority: priority of the process.
 Program counter: the counter indicates the address of the next instruction to be executed for this process.
 CPU registers: includes accumulators, index registers, stack pointers and general-purpose registers, plus any condition-code information.
 CPU scheduling information: process priority, pointers to scheduling queues.
 Memory management information: limit registers, page tables or segment tables, depending on the memory system used by the operating system.
 Accounting information: the amount of CPU and real time used, time limits, job or process number.
 I/O status information: list of I/O devices allocated to the process, list of open files.
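Taken together, these fields suggest a structure like the following simplified C sketch. Real kernels keep far more state (Linux's task_struct runs to hundreds of fields); every member name below is illustrative, not taken from any real OS:

    /* Simplified, illustrative PCB layout following the fields above. */
    typedef enum { P_NEW, P_READY, P_RUNNING, P_BLOCKED, P_TERMINATED } proc_state_t;

    struct pcb {
        int           pid;              /* unique process ID */
        proc_state_t  state;            /* current process state */
        struct pcb   *parent;           /* pointer to parent process */
        int           priority;         /* scheduling priority */
        unsigned long program_counter;  /* address of next instruction */
        unsigned long registers[16];    /* saved CPU register contents */
        void         *page_table;       /* memory-management information */
        unsigned long cpu_time_used;    /* accounting information */
        int           open_files[16];   /* I/O status: open file descriptors */
    };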

Context switching

 A context switch occurs when the CPU switches from one process to another: the state of the running process is saved into its PCB, and the saved state of the process scheduled next is loaded from its PCB.
 Context-switch time is pure overhead, because the system does no useful work while switching.


Threads

 A thread is a single path of execution within a process. A thread is a dispatchable unit of work (a lightweight process) that has independent context, state and stack.
 A process is a collection of one or more threads and associated system resources.
 Traditional operating systems are single-threaded systems; modern operating systems are multithreaded systems.
 A process can be split into many threads.
 Examples: 1. In a browser, each tab can be viewed as a thread.
 2. MS Word uses many threads: formatting text, checking spelling, processing input, saving.
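A minimal sketch of a multithreaded process using POSIX threads (compile with -lpthread); both threads run concurrently and share the process's address space:

    /* Two threads in one process (POSIX threads assumed). */
    #include <pthread.h>
    #include <stdio.h>

    void *worker(void *arg) {
        printf("thread %s running\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "A");
        pthread_create(&t2, NULL, worker, "B");
        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }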

Multithreading
 A process can contain multiple threads. A thread is known as a lightweight process. The idea is to achieve parallelism by dividing the process into multiple threads.
 Multithreading is a technique in which a process, executing an application, is divided into threads that can run concurrently. It allows:
 Handling several independent tasks of an application that do not need to be serialized (e.g. database servers, web servers).
 Having greater control over the modularity of the application and the timing of application-related events.
 Each thread has independent context, state and stack.
 All threads share the same address space, and a separate thread table is needed to manage the threads.

Because each thread has its own execution context, more work can proceed in parallel by increasing the number of threads.

Figure: Single-threaded and multithreaded processes.

 Most software applications that run on modern computers are multithreaded.
 An application typically is implemented as a separate process with several threads of control.
 For example, a word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.

Multithreading usage
 Several reasons for having multiple threads:
 In many applications, multiple activities go on at once. By decomposing such an application into multiple sequential threads that run in quasi-parallel, the programming model becomes simpler.
 Threads are lighter weight than processes, so they are easier (i.e., faster) to create and destroy than processes.
 Having multiple threads within an application can provide higher performance.
 If there is substantial computing and also substantial I/O, having threads allows these activities to overlap, thus speeding up the application.

Multithreading usage

 Improving front-end responsiveness to users
 Enhancing application performance
 Effective utilization of computer resources
 Low maintenance cost
 Faster completion of tasks due to parallel operation
 Simplifying the development process and increasing productivity

Types of Threads

 User-level threads: managed by a thread library in user space; the kernel is not aware of them.
 Kernel-level threads: managed directly by the operating system kernel.

Inter-process communication
 Since processes frequently need to communicate with other processes, there is a need for well-structured communication (not using interrupts) among processes.
 Inter-process communication is the mechanism provided by the operating system that allows processes to communicate with each other.
 This communication could involve a process letting another process know that some event has occurred, or transferring data from one process to another.

 Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
 An independent process cannot affect or be affected by the execution of another process.
 Any process that doesn’t share any data with any other process is independent.
 A cooperating process can affect or be affected by the execution of another process.
 Clearly, any process that shares data with other processes is a cooperating process.
 Cooperating processes need inter-process communication mechanisms. Can you give an example of independent processes?

….cont’d
There are several reasons for providing an environment that allows process cooperation:
Information sharing: since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment that allows concurrent access to such information.
Computation speedup: if we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others.
Notice that such speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
Modularity: if we want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

….cont’d
Convenience: even an individual user may work on many tasks at the same time. For instance, editing, printing and compiling in parallel.
There are two fundamental models of inter-process communication:
1. Shared memory: a region of memory shared by the cooperating processes is established. Processes can exchange information by reading and writing data to the shared region.
Examples: a producer and consumer sharing a common memory area; a compiler producing assembly code that is consumed by an assembler; client-server systems.
To allow the producer and consumer to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer.
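A minimal shared-memory sketch, assuming a POSIX system that supports MAP_ANONYMOUS (error checks omitted for brevity): parent and child map the same region, so a value written by the child (producer) is visible to the parent (consumer):

    /* Shared-memory IPC sketch across fork() (POSIX/Linux assumed). */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* region shared between parent and child across fork() */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared = 0;
        if (fork() == 0) {        /* child: producer */
            *shared = 42;         /* write into the shared region */
            _exit(0);
        }
        wait(NULL);               /* parent: consumer */
        printf("read %d from shared memory\n", *shared);
        return 0;
    }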

….cont’d

2. Message passing: communication takes place by means of messages exchanged between the cooperating processes.
This is another mechanism for exchanging information between cooperating processes without sharing the same memory address space.
It is useful in a distributed environment, where the communicating processes may reside on different computers connected by a network.
For instance, a chat program used on the WWW.
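A minimal message-passing sketch using a POSIX pipe: the parent sends a message that the child receives, with no memory shared between them:

    /* Message-passing IPC sketch using a pipe (POSIX assumed). */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        char buf[32];

        pipe(fd);                          /* fd[0]: read end, fd[1]: write end */
        if (fork() == 0) {                 /* child: receiver */
            close(fd[1]);
            read(fd[0], buf, sizeof buf);
            printf("child received: %s\n", buf);
            _exit(0);
        }
        close(fd[0]);                      /* parent: sender */
        write(fd[1], "hello", 6);          /* 6 bytes: "hello" plus NUL */
        close(fd[1]);
        wait(NULL);
        return 0;
    }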
Models for Inter-process communication (IPC)

[Figure: the shared-memory and message-passing IPC models]

IPC and Race condition

 In operating systems, processes that are working together may share some common storage (main memory, a file, etc.) that each process can read and write.
 When two or more processes are reading or writing some shared data and the final result depends on who runs precisely when, we have a race condition.
 A race condition is an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time.

 But, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
 Concurrently executing threads that share data need to synchronize their operations and processing in order to avoid race conditions on shared data.
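The classic demonstration: two threads increment a shared counter without synchronization, so read-modify-write updates can be lost and the final count usually falls short of the expected value (POSIX threads assumed):

    /* Race-condition demo: unsynchronized shared counter. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;              /* shared data */

    void *increment(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;                    /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }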

Critical Section
 Consider the classic print-spooler example, in which a shared variable holds the number of the next free slot in the spooler directory (say it currently holds 7).
 Process A reads it in and stores the value in a local variable called next-free-slot.
 Just then a clock interrupt occurs and the CPU decides that process A has run long enough, so it switches to process B. Process B also reads it in and also gets a 7.
 At this instant both processes think that the next available slot is 7.

Critical regions

Critical Section: the part of the program where the shared resource is accessed is called the critical section or critical region.

Mutual exclusion
 To avoid race conditions we need mutual exclusion: a way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing.
 However, sometimes processes have to access shared memory or files, or do other critical things that can lead to races. The part of the program where the shared memory is accessed is called the critical region or critical section.
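One standard way to enforce mutual exclusion is to bracket the critical region with a lock; a sketch using a POSIX mutex, which fixes the lost-update race shown earlier:

    /* Mutual exclusion with a mutex (POSIX threads assumed). */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* now reliably 2000000 */
        return 0;
    }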

Solving the Critical-Section Problem

A good solution must satisfy four conditions:
1. No two processes may be simultaneously inside their critical regions.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block other processes.
4. No process should have to wait forever to enter its critical region.

Processor Scheduling
 Scheduling refers to a set of policies and mechanisms to control the order of work to be performed by a computer system.
 Process scheduling is the activity of the process manager that handles the suspension of the running process from the CPU and the selection of another process, on the basis of a particular strategy.
 Of all the resources of a computer system that are scheduled before use, the CPU/processor is by far the most important. Process scheduling is the means by which operating systems allocate processor time to processes.

 The part of the operating system that makes scheduling decisions is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
 In the case of short-term scheduling, the module that gives control of the CPU to the process selected by the scheduler is called the dispatcher.
 The operating system makes three types of scheduling decisions with respect to the execution of processes:

Long-term Scheduling: also called the job scheduler. It selects which processes should be brought into the ready queue from the job queue.
• The decision to add to the pool of processes to be executed
• It determines when new processes are admitted to the system
• It is performed when a new process is created and admitted
- New → Ready: it increases the degree of multiprogramming

Medium-term Scheduling
• The decision to add to the number of processes that are partially or fully in main memory
• It determines when a program is brought partially or fully into main memory so that it may be executed
• It is performed when swapping is done
- Ready/Suspend → Ready
- Blocked/Suspend → Blocked
• It decreases the degree of multiprogramming.

Short-term Scheduling: also called the CPU scheduler.
• The decision of which ready process to execute next
• It determines which ready process will get processor time next

• It is performed when a ready process is allocated the processor (when it is dispatched) - Ready → Running
• It occurs when the operating system chooses one of the processes in the ready state for running.

Selection function
• It determines which process, among the ready processes, is selected for execution
• It may be based on:
- Priority
- Resource requirements
- Execution behavior: time spent in the system so far (waiting and executing), time spent in execution so far, total service time required by the process

Categories of Scheduling Algorithms

• The decision mode specifies the instants in time at which the selection function is exercised. Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.

Preemptive
 The strategy of allowing processes that are logically runnable to be temporarily suspended and moved back to the ready state.
 Events that may result in preemption are the arrival of new processes, the occurrence of an interrupt that moves a blocked process to the ready state, and clock interrupts.
 Suitable for general-purpose systems with multiple users. It guarantees acceptable response time and fairness. Context switching is an overhead.

Non-Preemptive
Run-to-completion method: once a process is in the running state, it continues to execute until it terminates or blocks itself to wait for some event.
Simple and easy to implement
Used in early batch systems
It may be reasonable for some dedicated systems
Efficiency can be attained, but response time is very high

Process Scheduling: the following times are defined with respect to a process.

o Arrival Time: time at which the process arrives in the ready queue.

o Completion Time: time at which the process completes its execution.

o Burst Time: time required by a process for CPU execution.

o Turnaround Time: the difference between completion time and arrival time.

Turnaround Time = Completion Time – Arrival Time

o Waiting Time (W.T): the difference between turnaround time and burst time.

Waiting Time = Turnaround Time – Burst Time

When does process scheduling take place?

 Process scheduling is the method of selecting a process from the ready queue to be executed by the CPU whenever the CPU becomes idle.
o Process scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates

Scheduling Criteria
o CPU Utilization:
 The percentage of time the CPU is busy out of the total time (time the CPU is busy + time it is idle). Hence, it measures the benefit obtained from the CPU.
 To maximize utilization, keep the CPU as busy as possible.
 CPU utilization ranges from 40% (for lightly loaded systems) to 90% (for heavily loaded systems). (Explain why CPU utilization cannot reach 100%: because of the context switches between active processes.)

o System Throughput: if the CPU is busy executing processes, then work is being done. Throughput is the number of processes that are completed per time unit (e.g. per hour).
o Turnaround time: for a particular process, it is the total time needed for process execution (from the time of submission to the time of completion).
o It is the sum of the process execution time and its waiting times (to get memory, perform I/O, …).
o Waiting time: the sum of all the periods a process spends waiting in the ready queue.
o Response time: the time from the submission of a process until the first response is produced (the time the process takes to start responding).

Objectives (goals) of scheduling

 Fairness: giving each process a fair share of the CPU.
 Balance: keeping all the parts of the system busy (maximize).
 Throughput: number of processes that are completed per time unit (maximize).
 Turnaround time: time to execute a process from submission to completion (minimize).
Turnaround time = Process finish time – Process arrival time
 CPU utilization: percentage of time that the CPU is busy executing a process; keep the CPU as busy as possible (maximize).
 Response time: time between issuing a command and getting the result (minimize).
 Waiting time: amount of time a process has been waiting in the ready queue (minimize).
Waiting time = Turnaround time – Actual execution time

Scheduling Algorithms
1. First-Come-First-Served Scheduling (FCFS)
The process that requested the CPU first is allocated the CPU and keeps it until it releases it, either on completion or on requesting an I/O operation.
The process that has been in the ready queue the longest is selected for running. Its selection function is waiting time, and it uses a non-preemptive decision mode.
Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on.

Advantages
It is the simplest of all non-preemptive scheduling algorithms: process selection and maintenance of the queue are simple
There is minimum overhead and no starvation
It is often combined with priority scheduling to provide efficiency

Drawbacks
Poor CPU and I/O utilization: the CPU will be idle when a process is blocked for some I/O operation
Poor and unpredictable performance: it depends on the arrival order of processes
Unfair CPU allocation: if a big process is executing, all other processes will be forced to wait for a long time until the process releases the CPU
It performs much better for long processes than short ones

First Come First Served (FCFS) algorithm (cont’d)

The average waiting time under the FCFS policy, however, is often quite long. Consider the following processes, which arrive at time 0 with the length of the CPU burst given in milliseconds: P1 = 24, P2 = 3, P3 = 3.

Suppose the processes arrive in the order P1, P2, P3 and are served in FCFS order.

Gantt chart: P1 (0–24), P2 (24–27), P3 (27–30)

The waiting times for each process are: P1 = 0, P2 = 24, P3 = 27.

Hence, average waiting time = (0+24+27)/3 = 17 milliseconds

Repeat the previous example, assuming that the processes arrive in the order P2, P3, P1.

Gantt chart: P2 (0–3), P3 (3–6), P1 (6–30)

The waiting times for each process are: P1 = 6, P2 = 0, P3 = 3.

 Hence, average waiting time = (6+0+3)/3 = 3 milliseconds
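Because FCFS waiting times are just running sums of the earlier bursts, both averages above can be reproduced with a few lines of C (burst values taken from the example; reorder the array to get the second case):

    /* FCFS waiting-time calculator for P1=24, P2=3, P3=3, all arriving at 0. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};          /* order of arrival: P1, P2, P3 */
        int n = 3, wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("process %d: waiting = %d, turnaround = %d\n",
                   i + 1, wait, wait + burst[i]);
            total_wait += wait;
            wait += burst[i];              /* the next process waits for this burst too */
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }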

2. Shortest Job First Scheduling (SJF)

The process with the shortest expected processing time (next CPU burst) is selected next. If two processes have the same next CPU burst, FCFS is used.
It is not desirable for a time-sharing or transaction-processing environment because of its lack of preemption. Its selection function is expected execution time, and it uses a non-preemptive decision mode (the preemptive variant is covered below as SRT).
Advantages: it produces optimal average turnaround time and average response time. There is minimum overhead.
Drawbacks: starvation: some processes may not get the CPU at all as long as there is a steady supply of shorter processes.

Consider the following set of processes, all arriving at time 0 in the order P1, P2, P3, P4, with the length of the CPU burst given in milliseconds: P1 = 6, P2 = 8, P3 = 7, P4 = 3.

1. Using FCFS
Gantt chart: P1 (0–6), P2 (6–14), P3 (14–21), P4 (21–24)

The waiting times for each process are: P1 = 0, P2 = 6, P3 = 14, P4 = 21.

Hence, average waiting time = (0+6+14+21)/4 = 10.25 milliseconds

2. Using SJF

Gantt chart: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)

The waiting times for each process are: P1 = 3, P2 = 16, P3 = 9, P4 = 0.

Hence, average waiting time = (3+16+9+0)/4 = 7 milliseconds
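A small non-preemptive SJF sketch for the same four processes; the order array simply lists the process indices sorted by burst length:

    /* Non-preemptive SJF for P1=6, P2=8, P3=7, P4=3, all arriving at 0. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {6, 8, 7, 3};       /* P1..P4 */
        int order[] = {3, 0, 2, 1};       /* indices sorted by burst: P4, P1, P3, P2 */
        int n = 4, t = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            int p = order[i];
            printf("P%d: waiting = %d, turnaround = %d\n",
                   p + 1, t, t + burst[p]);
            total_wait += t;
            t += burst[p];
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }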

Shortest Remaining Time Scheduling (SRT)

 The process that has the shortest expected remaining processing time is selected next. If a new process arrives with a shorter next CPU burst than what is left of the currently executing process, the new process gets the CPU.
 Its selection function is remaining execution time, and it uses a preemptive decision mode.
 The SJF algorithm can be either preemptive or non-preemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing.
 The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process.

 A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst.
 Preemptive SJF scheduling is sometimes called Shortest Remaining Time first Scheduling (SRT).
 Advantages: it gives superior turnaround-time performance to SJF, because a short job is given immediate preference over a running longer process.
 Drawbacks: there is a risk of starvation of longer processes, and high overhead due to frequent process switches.

 Consider the following four processes, with the length of the CPU burst given in milliseconds:

Process  Burst Time  Arrival Time
P1       7           0
P2       4           2
P3       1           4
P4       4           5

Gantt chart: P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16)

 The waiting times for each process are: P1 = 9, P2 = 1, P3 = 0, P4 = 2.

 Hence, average waiting time = (9+1+0+2)/4 = 3 milliseconds
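A tick-by-tick SRT simulation of this example: at every time unit, the arrived process with the least remaining burst runs, reproducing the waiting times above:

    /* Preemptive SRT simulation of the four-process example. */
    #include <stdio.h>

    int main(void) {
        int arrival[] = {0, 2, 4, 5};
        int burst[]   = {7, 4, 1, 4};
        int remain[]  = {7, 4, 1, 4};
        int finish[4] = {0};
        int n = 4, done = 0;
        double total = 0;

        for (int t = 0; done < n; t++) {
            int p = -1;
            for (int i = 0; i < n; i++)            /* pick shortest remaining burst */
                if (arrival[i] <= t && remain[i] > 0 &&
                    (p < 0 || remain[i] < remain[p]))
                    p = i;
            if (p < 0) continue;                   /* CPU idle this tick */
            if (--remain[p] == 0) { finish[p] = t + 1; done++; }
        }
        for (int i = 0; i < n; i++) {
            int wait = finish[i] - arrival[i] - burst[i];
            printf("P%d: waiting = %d\n", i + 1, wait);
            total += wait;
        }
        printf("average waiting time = %.2f\n", total / n);
        return 0;
    }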

3. Round Robin Scheduling (RR)

 The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes.
 A small amount of time called a quantum or time slice is defined. Based on the quantum, a clock interrupt is generated at periodic intervals. When the interrupt occurs, the currently running process is placed in the ready queue, and the next ready process is selected on a FCFS basis.
 The CPU is allocated to each process for a time interval of up to one quantum. When a process uses up its quantum it is added to the ready queue; when it requests I/O it is added to the waiting queue.

 The ready queue is treated as a circular queue. The selection function is based on the quantum, and it uses a preemptive decision mode.
 Setting the quantum too short causes too many process switches and lowers CPU efficiency, but setting it too long may cause poor response to short interactive requests.
 Features: the oldest, simplest, fairest and most widely used preemptive scheduling algorithm.
 Short jobs will be able to leave the system faster, since they will finish first.
 Drawbacks: it makes the implicit assumption that all processes are equally important; it does not take external factors into account.
 Maximum overhead due to frequent process switches.

The average waiting time under the RR policy is often long. Consider the same set of processes arriving at time 0, with the length of the CPU burst given in milliseconds: P1 = 24, P2 = 3, P3 = 3.

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds.
Since it requires another 20 milliseconds, it is preempted after the first time quantum and the CPU is given to the next process in the queue, process P2.
Since process P2 doesn’t need 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, process P3.

RR with Q=4

 Gantt chart: P1 (0–4), P2 (4–7), P3 (7–10), P1 (10–30)

The waiting times for each process are: P1 = 6, P2 = 4, P3 = 7.

Hence, average waiting time = (6+4+7)/3 = 5.66 milliseconds

RR with Q=2

Gantt chart: P1 (0–2), P2 (2–4), P3 (4–6), P1 (6–8), P2 (8–9), P3 (9–10), P1 (10–30)

 The waiting times for each process are: P1 = 6, P2 = 6, P3 = 7.

 Hence, average waiting time = (6+6+7)/3 = 6.33 milliseconds
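A compact RR sketch that reproduces both averages. Note the simplified loop scans the processes in a fixed circular order, which matches a FIFO ready queue here only because all processes arrive at time 0; change quantum to 2 for the second case:

    /* Round-robin for P1=24, P2=3, P3=3, all arriving at 0. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3}, remain[] = {24, 3, 3}, finish[3];
        int n = 3, quantum = 4, t = 0, done = 0;
        double total = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {          /* circular scan of ready processes */
                if (remain[i] == 0) continue;
                int slice = remain[i] < quantum ? remain[i] : quantum;
                t += slice;                        /* run for up to one quantum */
                remain[i] -= slice;
                if (remain[i] == 0) { finish[i] = t; done++; }
            }
        }
        for (int i = 0; i < n; i++) {
            int wait = finish[i] - burst[i];       /* arrival time is 0 for all */
            printf("P%d: waiting = %d\n", i + 1, wait);
            total += wait;
        }
        printf("average waiting time = %.2f\n", total / n);
        return 0;
    }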

4. Priority Scheduling (PS)

 The SJF algorithm is a special case of the general priority-scheduling algorithm.
 Each process is assigned a priority number, and the runnable process with the highest priority is allowed to run, i.e. a ready process with the highest priority is given the CPU. Equal-priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.
 It is often convenient to group processes into priority classes and use priority scheduling among the classes but round-robin scheduling within each class.

Advantages
 It considers the fact that some processes are more important than others, i.e. it takes external factors into account.

Drawbacks
 A high-priority process may run indefinitely, and it can prevent all other processes from running.
 This creates starvation of other processes. There are two possible solutions to this problem:
• Assigning a maximum quantum to each process
• Assigning priorities dynamically, i.e. avoiding static priorities

Consider the following set of processes, assumed to arrive at time 0 in the order P1, P2, …, P5, with the length of the CPU burst given in milliseconds and a smaller priority number meaning a higher priority: P1 = (10, priority 3), P2 = (1, priority 1), P3 = (2, priority 4), P4 = (1, priority 5), P5 = (5, priority 2).

 The processes run in the order P2, P5, P1, P3, P4, so the waiting times for each process are: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1.

Hence, average waiting time = (6+0+16+18+1)/5 = 8.2 milliseconds
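A non-preemptive priority sketch over the same data (the bursts and priorities above are the classic textbook values implied by the stated waiting times); order lists the process indices sorted by priority, smallest number first:

    /* Non-preemptive priority scheduling for the five-process example. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {10, 1, 2, 1, 5};
        int order[] = {1, 4, 0, 2, 3};   /* by priority: P2, P5, P1, P3, P4 */
        int n = 5, t = 0;
        double total = 0;

        for (int i = 0; i < n; i++) {
            int p = order[i];
            printf("P%d: waiting = %d\n", p + 1, t);
            total += t;
            t += burst[p];
        }
        printf("average waiting time = %.2f\n", total / n);
        return 0;
    }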

 Multi-level Queues Scheduling (MLQS)

• The ready queue is partitioned into separate queues, e.g.:
• foreground (interactive)
• background (batch)
• Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
• Scheduling must also be done between the queues:
• Fixed-priority scheduling: serve all from foreground, then from background. Possibility of starvation.
• Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to foreground in RR and 20% to background in FCFS.
 There are two types:
 Without feedback: processes cannot move between queues.
 With feedback: processes can move between queues.

 Divide the ready queue into several queues.
 Each queue has a specific priority and its own scheduling algorithm (FCFS, …).

Multi-level Queues with Feed Back Scheduling (MLFBQS)

 Divide the ready queue into several queues.
 Each queue has a specific quantum time.
 Allow processes to move between queues.

Deadlock
For each use of a kernel-managed resource by a process or thread, the operating system checks to make sure that the process has requested and has been allocated the resource.
A system table records whether each resource is free or allocated. For each resource that is allocated, the table also records the process to which it is allocated.
If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.
A deadlock is a situation where two or more processes are waiting for each other.

A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused only by another process in the set.

The set of blocked processes each hold a resource and wait to acquire a resource held by another process in the set. All deadlocks involve conflicting needs for resources by two or more processes.

Deadlock Characterization
Coffman (1971) identified four necessary conditions that must hold simultaneously for a deadlock to occur.
Deadlock can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion: only one process at a time can use a resource (it is non-sharable). No process can access a resource unit that has been allocated to another process.
2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
For example, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.

 3. No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task. A resource cannot be preempted from a process by force.
 For example, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when Process 1 relinquishes it voluntarily after its execution is complete.

4. Circular wait: a process is waiting for the resource held by a second process, which is waiting for the resource held by a third process, and so on, until the last process is waiting for a resource held by the first process. This forms a circular chain.
There exists a set {P0, P1, …, Pn} of waiting processes such that:
 P0 is waiting for a resource that is held by P1,
 P1 is waiting for a resource that is held by P2, …,
 Pn–1 is waiting for a resource that is held by Pn,
 Pn is waiting for a resource that is held by P0.
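Circular wait is easy to reproduce with two mutexes: each thread holds one resource and requests the other, satisfying all four Coffman conditions at once. This sketch (POSIX threads assumed) deliberately hangs forever:

    /* Two-thread deadlock demo: circular wait on two mutexes. */
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

    void *p1(void *arg) {
        pthread_mutex_lock(&r1);   /* hold R1 ... */
        sleep(1);
        pthread_mutex_lock(&r2);   /* ... and wait for R2: deadlock */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    void *p2(void *arg) {
        pthread_mutex_lock(&r2);   /* hold R2 ... */
        sleep(1);
        pthread_mutex_lock(&r1);   /* ... and wait for R1: deadlock */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);    /* never returns: circular wait */
        pthread_join(t2, NULL);
        return 0;
    }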

Methods for handling Deadlocks

Deadlock problems can be handled in one of the following three ways:

1. Use a protocol that prevents or avoids deadlock by ensuring that the system will never enter a deadlock state; deadlock-prevention and deadlock-avoidance schemes are used.

2. Allow the system to enter a deadlock state, detect it, and then recover (deadlock detection and recovery).

3. Ignore the problem and pretend that deadlocks never occur in the system.

o Used by most operating systems, including UNIX and Windows.

o It is then up to the application developer to write programs that handle deadlocks.

Deadlock Prevention
 By ensuring that at least one of the necessary conditions for deadlock cannot hold, deadlock can be prevented. This is mainly done by restraining how requests for resources can be made.
 Deadlock prevention methods fall into two classes:
1. An indirect method of deadlock prevention prevents the occurrence of one of the first three necessary conditions listed previously, i.e. mutual exclusion, hold and wait, or no preemption.
2. A direct method of deadlock prevention prevents the occurrence of the fourth condition, circular wait.

Deadlock Avoidance
 A deadlock-avoidance scheme requires each process to declare in advance the maximum number of resources of each type that it may need.
 Having this full information about the sequence of requests and releases of resources, we can know whether or not the system is entering an unsafe state.
 The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
 The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
 A state is safe if the system can allocate resources to each process in some order (a safe sequence) while avoiding deadlock. A deadlock state is an unsafe state.

Thank you
