Lecture 3: Processes of Operating Systems
MUGOYA SHARIFF
Email: [email protected]
Outline
• Process states
• Process control block
• Scheduling
• Co-operating processes
• Synchronization
Process
• A program in execution is called a process
• It is an instance of a program that actually runs
• Two essential process elements are program code and a set
of data associated with that code.
• Process memory is divided into four sections:
• The program (text) section stores the compiled program code, read
in from non-volatile storage when the program is launched.
• The data section stores global and static variables, allocated
and initialized before executing the main program.
• The heap section is used to manage dynamic memory
allocation inside our program. In other words, it is the portion
of memory where dynamically allocated memory resides i.e.,
memory allocation via new or malloc and memory deallocation
via delete or free, etc.
• The stack section stores local variables defined inside our
program or function. Stack space is created for local variables
when declared, and the space is freed up when they go out of
scope.
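
To make the four sections concrete, here is a minimal C sketch (the variable names are illustrative assumptions) showing where each kind of variable lives:

    #include <stdio.h>
    #include <stdlib.h>

    int global_count = 0;   /* data section: global/static variables, initialized before main runs */

    int main(void)          /* the compiled instructions sit in the program (text) section */
    {
        int local = 42;                        /* stack: created on declaration, freed on scope exit */
        int *buf = malloc(100 * sizeof *buf);  /* heap: dynamically allocated via malloc */

        if (buf != NULL) {
            buf[0] = local + global_count;
            printf("%d\n", buf[0]);
            free(buf);                         /* heap memory is released via free */
        }
        return 0;
    }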
Process …
• Process Management: The operating system is responsible for
the following activities in connection with process management:
the creation and deletion of both user and system processes;
the scheduling of processes; and the provision of mechanisms
for synchronization, communication, and deadlock handling for
processes.
• A process, in addition to the program code, includes:
• The current value of the Program Counter (PC)
• The contents of the processor's registers
• The values of the variables
• The process stack (referenced through the stack pointer, SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables
• A data section that contains global variables
Process Control Block
• Each process is represented in the operating system by a process control block (PCB)—also called a task control block. It typically contains:
• Process state
• Process number
• Program counter
• Registers
• Memory limits
• List of open files
• …
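
As a rough illustration, such a record might be modeled in C along the following lines; the field names and sizes are teaching assumptions, not any real kernel's layout:

    #define MAX_OPEN_FILES 16

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        enum proc_state state;          /* process state */
        int             pid;            /* process number */
        unsigned long   pc;             /* saved program counter */
        unsigned long   regs[16];       /* saved CPU registers */
        unsigned long   mem_base;       /* memory limits: base ... */
        unsigned long   mem_limit;      /* ... and limit of the address space */
        int             open_files[MAX_OPEN_FILES];  /* list of open file descriptors */
    };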
States of a Process in OS
• As a process executes, it changes state. The state of a
process is defined in part by the current activity of that
process. Each process may be in one of the following states:
• New State: The process is being created.
• Running State: A process is said to be running if it has the CPU, that is, the process is actually using the CPU at that instant.
• Blocked (or Waiting) State: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.
• Ready State: A process is said to be ready if it is waiting to be
assigned to a processor.
• Terminated state: The process has finished execution.
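
The legal moves between these states can be captured in code. The sketch below is an assumption-level model of the classic five-state diagram, not a real scheduler; it rejects impossible transitions such as waiting → running:

    #include <stdbool.h>

    enum state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Returns true if the transition is allowed by the five-state model. */
    bool valid_transition(enum state from, enum state to)
    {
        switch (from) {
        case NEW:     return to == READY;        /* admitted by the OS */
        case READY:   return to == RUNNING;      /* dispatched by the scheduler */
        case RUNNING: return to == READY         /* preempted (e.g., interrupt) */
                          || to == WAITING       /* blocked on I/O or an event */
                          || to == TERMINATED;   /* finished execution */
        case WAITING: return to == READY;        /* I/O or event completed */
        default:      return false;              /* terminated is final */
        }
    }

    int main(void)
    {
        /* e.g., a running process blocking on I/O is legal: */
        return valid_transition(RUNNING, WAITING) ? 0 : 1;
    }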
Process States
[Figure: process state transition diagram — new → ready → running, running → waiting → ready, running → terminated]
CPU Scheduling
• CPU scheduling is the basis of multiprogrammed operating
systems. By switching the CPU among processes, the operating
system can make the computer more productive.
• In a single-processor system, only one process can run at a time. Others must wait until the CPU is free and can be rescheduled.
• Without scheduling, the CPU sits idle while a process waits for an I/O operation to complete; only when the I/O completes does the CPU resume executing the process. A lot of CPU time is wasted this way.
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• When several processes are in main memory and one process is waiting for I/O, the operating system takes the CPU away from that process and gives the CPU to another process. Hence no CPU time is wasted.
Basic Concepts of CPU Scheduling
1. CPU–I/O Burst Cycle
2. CPU Scheduler
3. Pre-emptive Scheduling
4. Dispatcher
• The idea of multiprogramming is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU would then just sit idle.
• Scheduling is a fundamental operating-system function.
• Almost all computer resources are scheduled before use.
1-CPU - I/O Burst Cycle
• Process execution consists of a
cycle of CPU execution and I/O
wait.
• Process execution begins with a
CPU burst. That is followed by
an I/O burst. Processes alternate
back and forth between these
two states.
• The final CPU burst ends with a
system request to terminate
execution.
• Hence the first and the last bursts of an execution must be CPU bursts.
2-CPU Scheduler
• Whenever the CPU becomes idle, the operating system
must select one of the processes in the ready queue to
be executed.
• The selection process is carried out by the Short-Term
Scheduler or CPU scheduler.
3-Preemptive Scheduling
• CPU scheduling decisions may take place under the following
four circumstances:
1. When a process switches from the running state to the waiting state (for example, an I/O request, or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state
(for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state
(for example, completion of I/O).
4. When a process terminates.
• Scheduling under circumstances 1 and 4 is non-preemptive (the running process keeps the CPU until it releases it); scheduling under circumstances 2 and 3 is preemptive. Mac OS X, Windows 95, and all subsequent versions of Windows use preemptive scheduling.
4-Dispatcher
• The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. The dispatcher's function involves:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the user program to
restart that program
• The dispatcher should be as fast as possible, since it is
invoked during every process switch. The time it takes
for the dispatcher to stop one process and start another
process running is known as the Dispatch Latency.
Scheduling Criteria
• Different CPU scheduling algorithms have different properties and may favor
one class of processes over another. In choosing which algorithm to use in a
particular situation, we must consider the properties of the various
algorithms.
• Many criteria have been suggested for comparing CPU scheduling algorithms.
• Criteria that are used include the following:
• CPU utilization: CPU must be kept as busy as possible. CPU utilization can range
from 0 to 100 percent. In a real system, it should range from 40 to 90 percent.
• Throughput: The number of processes that are completed per time unit.
• Turn-Around Time: It is the interval from the time of submission of a process to
the time of completion. Turnaround time is the sum of the periods spent waiting
to get into memory, waiting in the ready queue, executing on the CPU and
doing I/O.
• Waiting time: It is the amount of time that a process spends waiting in the ready
queue.
• Response time: The time from the submission of a request until the first response is produced. Interactive systems use response time as their measure.
Scheduling Algorithms
1. First-Come, First-Served Scheduling
2. Shortest-Job-First Scheduling
3. Priority Scheduling
4. Round-Robin Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
• A Gantt chart is a bar chart used to illustrate a particular schedule, including the start and finish times of each participating process.
First-Come, First-Served Scheduling (FCFS)
• In FCFS, the process that requests the CPU first is allocated the CPU first.
• The FCFS scheduling algorithm is non-preemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it.
• FCFS can be implemented using a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process is then removed from the queue.
• Example: Consider the following set of processes that arrive at time 0, in the order P1, P2, P3, with the length of the CPU burst given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

• Gantt chart for FCFS: P1 runs from 0 to 24, P2 from 24 to 27, P3 from 27 to 30.
FCFS …ctd
• The average waiting time under the FCFS policy is often quite long.
• The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2
and 27 milliseconds for process P3.
• Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
• Convoy Effect in FCFS
• Convoy effect: when a big process is executing on the CPU, all the smaller processes must wait until the big process completes. This degrades the performance of the system.
• Disadvantage of FCFS
• The FCFS scheduling algorithm is non-preemptive, so it allows one process to keep the CPU for a long time. Hence it is not suitable for time-sharing systems.
• Try this Qn: Suppose we use the example above but with the processes arriving in the order P2, P3, P1. What is the average waiting time?
• With the order P2, P3, P1 the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds, whereas with the order P1, P2, P3 it is 17 milliseconds.
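
The arithmetic generalizes: under FCFS each process waits for the combined burst time of everything ahead of it in the queue. A short C sketch (with the example's bursts hard-coded) reproduces both answers:

    #include <stdio.h>

    /* FCFS: each process waits for the sum of the bursts ahead of it. */
    static double fcfs_avg_wait(const int burst[], int n)
    {
        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;   /* waiting time of process i */
            wait += burst[i];     /* the next process waits this much longer */
        }
        return (double)total_wait / n;
    }

    int main(void)
    {
        int order1[] = { 24, 3, 3 };  /* arrival order P1, P2, P3 */
        int order2[] = { 3, 3, 24 };  /* arrival order P2, P3, P1 */
        printf("P1,P2,P3: %.2f ms\n", fcfs_avg_wait(order1, 3));  /* prints 17.00 */
        printf("P2,P3,P1: %.2f ms\n", fcfs_avg_wait(order2, 3));  /* prints 3.00 */
        return 0;
    }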
Shortest-Job-First Scheduling (SJF)
• The SJF algorithm is defined as: "when the CPU is available, it is assigned to the process that has the smallest next CPU burst". If the next CPU bursts of two processes are the same, FCFS scheduling is used between them.
• SJF is also called the Shortest-Next-CPU-Burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than its total length.
• Example: Consider the following processes and CPU bursts in milliseconds:

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

• Gantt chart of the SJF algorithm: P4 runs from 0 to 3, P1 from 3 to 9, P3 from 9 to 16, P2 from 16 to 24.
SJF …ctd
Waiting time for each process:

Process   Burst Time (ms)   Waiting Time (ms)
P1        6                 3
P2        8                 16
P3        7                 9
P4        3                 0
Average Waiting Time: 7 ms

• From the table above, the average waiting time using the SJF algorithm is (3 + 16 + 9 + 0)/4 = 7 ms.
• SJF gives the minimum average waiting time for a given set of processes; SJF is optimal.
• The average waiting time decreases because moving a short process before a long process decreases the waiting time of the short process more than it increases the waiting time of the long process.
Difficulty with SJF
• The difficulty with the SJF algorithm is knowing the length of the next CPU request. With short-term scheduling, there is no way to know the length of the next CPU burst, so pure SJF cannot be implemented in practice.
• There is a pre-emptive version of SJF called Shortest Remaining Time First scheduling (SRTF).
SJF …Solution for the difficulty
• One approach to this problem is to try to approximate SJF scheduling.
• We may not know the length of the next CPU burst, but we may be able to predict its
value. We expect that the next CPU burst will be similar in length to the previous ones.
• By computing an approximation of the length of the next CPU burst, we can pick the
process with the shortest predicted CPU burst.
• The next CPU burst is generally predicted as an Exponential Average of the measured
lengths of previous CPU bursts.
• The following formula defines the exponential average:

    τ(n+1) = α · t(n) + (1 − α) · τ(n)

• t(n) is the length of the nth CPU burst (i.e., it carries the most recent information).
• τ(n) stores the past history.
• τ(n+1) is our predicted value for the next CPU burst.
• α controls the relative weight of recent and past history in our prediction (0 ≤ α ≤ 1):
• If α = 0, then τ(n+1) = τ(n): recent history has no effect.
• If α = 1, then τ(n+1) = t(n): only the most recent CPU burst matters.
• If α = 1/2, recent history and past history are equally weighted.
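
Translated directly into C, with the initial guess τ0 and the sample burst lengths as assumptions (τ0 is often set to a constant or an overall system average):

    #include <stdio.h>

    /* Exponential average: tau_next = alpha * t_n + (1 - alpha) * tau_n. */
    static double predict_next_burst(double alpha, double t_n, double tau_n)
    {
        return alpha * t_n + (1.0 - alpha) * tau_n;
    }

    int main(void)
    {
        double tau = 10.0;                         /* tau_0: assumed initial guess */
        double bursts[] = { 6.0, 4.0, 6.0, 4.0 };  /* measured CPU bursts (hypothetical) */

        for (int i = 0; i < 4; i++) {
            tau = predict_next_burst(0.5, bursts[i], tau);  /* alpha = 1/2 */
            printf("prediction after burst %d: %.2f ms\n", i + 1, tau);
        }
        return 0;
    }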
Shortest Remaining Time First Scheduling (SRTF)
• SRTF is the pre-emptive SJF algorithm.
• A new process may arrive at the ready queue while a previous process is still executing.
• The next CPU burst of the newly arrived process may be shorter than what remains of the currently executing process.
• SRTF will then preempt the currently executing process and execute the shortest job.
• Example: Consider the four processes with arrival times and burst times in milliseconds:

Process   Arrival Time   Burst Time (ms)
P1        0              8
P2        1              4
P3        2              9
P4        3              5

• Gantt chart for SRTF: P1 runs from 0 to 1, P2 from 1 to 5, P4 from 5 to 10, P1 from 10 to 17, P3 from 17 to 26.
• Process P1 is started at time 0, since it is the only process in the queue.
• Process P2 arrives at time 1. The remaining time for process P1 (7 ms) is larger than the time required by process P2 (4 ms), so process P1 is preempted and process P2 is scheduled.
• The average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 ms.
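
A compact millisecond-by-millisecond SRTF simulation in C, with the four example processes hard-coded; it is a sketch for checking the waiting times above, not an efficient scheduler:

    #include <stdio.h>

    #define N 4

    int main(void)
    {
        int arrival[N] = { 0, 1, 2, 3 };
        int burst[N]   = { 8, 4, 9, 5 };
        int remaining[N], finish[N] = { 0 };
        int done = 0;

        for (int i = 0; i < N; i++) remaining[i] = burst[i];

        /* Each millisecond, run the arrived process with the shortest remaining time. */
        for (int t = 0; done < N; t++) {
            int pick = -1;
            for (int i = 0; i < N; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick == -1 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick == -1) continue;      /* CPU idle: nothing has arrived yet */
            if (--remaining[pick] == 0) {
                finish[pick] = t + 1;
                done++;
            }
        }

        int total_wait = 0;
        for (int i = 0; i < N; i++) {
            int wait = finish[i] - arrival[i] - burst[i];  /* waiting = turnaround - burst */
            printf("P%d waits %d ms\n", i + 1, wait);
            total_wait += wait;
        }
        printf("average = %.2f ms\n", (double)total_wait / N);  /* prints 6.50 */
        return 0;
    }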
Priority Scheduling
• A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
• The SJF algorithm is a special kind of priority scheduling algorithm in which a smaller next CPU burst means a higher priority.
• Priorities can be defined based on time limits, memory requirements, the number of open files, etc.
• Example: Consider the following processes with CPU bursts and priorities. All the processes arrive at time t = 0 in the order given; low numbers denote high priority.

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

• Gantt chart for priority scheduling: P2 runs from 0 to 1, P5 from 1 to 6, P1 from 6 to 16, P3 from 16 to 18, P4 from 18 to 19.
Priority Scheduling …ctd

Process   Burst Time (ms)   Waiting Time (ms)
P1        10                6
P2        1                 0
P3        2                 16
P4        1                 18
P5        5                 1
Average Waiting Time: 8.2 ms

• Priority scheduling can be either preemptive or non-preemptive. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
Problem: Starvation or Indefinite Blocking
• In priority scheduling, when there is a continuous flow of higher-priority processes into the ready queue, all the lower-priority processes must wait for the CPU until all the higher-priority processes complete.
• Lower-priority processes can thus be blocked from the CPU for a long period of time. This situation is called starvation or indefinite blocking.
• In the worst case, indefinite blocking may leave a process waiting for years.
Solution: Aging
• Aging involves gradually increasing the priority of processes that wait in the system for a long time, as sketched below.
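
One hedged way to implement aging: on every scheduler tick, boost the priority of any ready process that has waited past a threshold. In this C sketch the threshold, the boost step, and the tick interface are all illustrative assumptions:

    #define AGE_LIMIT 100    /* ms a process may wait before being boosted (assumed) */

    struct proc {
        int priority;        /* lower number = higher priority, as in the example */
        int waited_ms;       /* time spent in the ready queue since last run */
    };

    /* Called periodically on a scheduler tick (a sketch, not a real kernel hook). */
    void age_ready_queue(struct proc ready[], int n, int tick_ms)
    {
        for (int i = 0; i < n; i++) {
            ready[i].waited_ms += tick_ms;
            if (ready[i].waited_ms >= AGE_LIMIT && ready[i].priority > 0) {
                ready[i].priority--;      /* raise priority one step */
                ready[i].waited_ms = 0;   /* restart the aging clock */
            }
        }
    }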
Round-Robin Scheduling (RR)
• Round-Robin (RR) scheduling algorithm is designed especially for Timesharing systems.
• RR is similar to FCFS scheduling, but preemption is added to enable the system to switch between
processes.
• A small unit of time called a Time Quantum or Time Slice is defined. A time quantum is generally
from 10 to 100 milliseconds in length.
• The ready queue is treated as a Circular queue. New processes are added to the tail of the ready
queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum, dispatching each process in turn.
• If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue.
• In RR scheduling one of two things will then happen.
1. The process may have a CPU burst of less than 1 time quantum. The process itself will release
the CPU voluntarily. The scheduler will then proceed to the next process in the ready queue.
2. If the CPU burst of the currently running process is longer than 1 time quantum, the timer will
go off and will cause an interrupt to the operating system. A context switch will be executed and
the process will be put at the tail of the ready queue. The CPU scheduler will then select the next
process in the ready queue.
• Example: Consider the following set of processes that arrive at time 0, in the order P1, P2, P3, with Time Quantum = 4:

Process   Burst Time
P1        24
P2        3
P3        3
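
A small C simulation of this example (quantum = 4 ms, all arrivals at time 0) — a sketch that reproduces the waiting times analyzed on the next slide; the simple round loop suffices only because no process arrives mid-schedule:

    #include <stdio.h>

    #define N 3
    #define QUANTUM 4

    int main(void)
    {
        int burst[N]     = { 24, 3, 3 };   /* P1, P2, P3 */
        int remaining[N] = { 24, 3, 3 };
        int finish[N]    = { 0 };
        int t = 0, done = 0;

        /* Cycle the (circular) ready queue, giving each process up to one quantum. */
        while (done < N) {
            for (int i = 0; i < N; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                t += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) { finish[i] = t; done++; }
            }
        }

        int total_wait = 0;
        for (int i = 0; i < N; i++) {
            int wait = finish[i] - burst[i];   /* arrival is 0, so waiting = finish - burst */
            printf("P%d waits %d ms\n", i + 1, wait);
            total_wait += wait;
        }
        printf("average = %.2f ms\n", (double)total_wait / N);  /* prints 5.67 (17/3 ms) */
        return 0;
    }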
Round-Robin …ctd
• The average waiting time under the RR policy is often long.
• P1 waits for 6 ms (10 - 4), P2 waits for 4 ms and P3 waits for 7
ms. Thus, the average waiting time is 17/3 = 5.66 ms.
• The performance of the RR algorithm depends on the size of the
Time Quantum.
• If the time quantum is extremely large, the RR policy is the same
as the FCFS policy.
• If the time quantum is extremely small (e.g., 1 millisecond), the RR approach can result in a large number of context switches.
• The context-switch time should be a small fraction of the time quantum; otherwise RR performance degrades.
• Note: A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum.
Round-Robin …ctd
• Gantt chart of round-robin scheduling with a time quantum of 4 ms: P1 runs from 0 to 4, P2 from 4 to 7, P3 from 7 to 10, and P1 from 10 to 30.
• If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds.
• Since it requires another 20 milliseconds, it is preempted
after the first time quantum and the CPU is given to the next
process in the queue, process P2.
• The CPU burst of process P2 is only 3 ms, so it does not need a full 4 milliseconds; it quits before its time quantum expires. The CPU is then given to the next process, P3.
• Once each process has received 1 time quantum, the CPU is
returned to process P1 for an additional time quantum.
Multilevel Queue Scheduling
• In multilevel queue scheduling, processes are permanently assigned to one queue.
• A process is assigned to a queue based on some property of the process, such as:
• Memory size
• Process priority
• Process type
• Each queue has its own scheduling algorithm.
• The algorithm chooses the process from the occupied queue that has the highest priority, and runs that process either preemptively or non-preemptively.
[Figure: multilevel queue scheduling with five queues, from highest to lowest priority — system processes, interactive processes, interactive editing processes, batch processes, student processes]
Multilevel Queue …ctd
• The figure above shows a multilevel queue scheduling algorithm with five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
• Each queue has absolute priority over lower-priority queues.
• No process in a lower-level queue will start executing unless all the higher-level queues are empty.
• Example: The interactive processes start executing only when the system-process queue is empty.
• If a lower-priority process is executing and a higher-priority process enters its queue, the lower-priority process will be preempted and the higher-priority process starts executing.
• Example: If a system process entered the ready queue while an interactive process was running, the interactive process would be preempted.
Multilevel Queue …ctd
Disadvantage: Starvation of lower-level queues
• The multilevel queue scheduling algorithm is inflexible.
• Processes are permanently assigned to a queue when they enter the system and are not allowed to move from one queue to another.
• There is a chance that lower-level queues will starve, because no lower-level queue executes unless the higher-level queues are empty.
• As long as there is a process in a higher-priority queue, a lower-level process has no chance to execute.
• Multilevel feedback queue scheduling is used to overcome this problem of multilevel queue scheduling.
Multilevel Feedback Queue
Scheduling (MLFQ)
• Multilevel feedback queue scheduling algorithm allows a process
to move between queues.
• Processes are separated according to the characteristics of their
CPU bursts.
• If a process uses too much CPU time, it will be moved to a lower-
priority queue.
• A process that waits too long in a lower-priority queue is moved to a higher-priority queue.
• This form of aging prevents starvation.
• Consider a multilevel feedback queue scheduler with three queues: queue0, queue1, queue2.
• The scheduler first executes all processes in queue0, then queue1, and then queue2.
• Only when queue0 and queue1 are empty will the scheduler execute processes in queue2.
• A process that arrives for queue1 will preempt a process in queue2. A process in queue1 will in turn be preempted by a process arriving for queue0.
MLFQ …
• A process entering the ready queue is put in queue0. A process in queue 0 is given a time
quantum of 8ms. If it does not finish within this time, it is moved to the tail of queue 1.
• If queue 0 is empty, the process at the head of queue1 is given a quantum of 16ms. If it
does not complete, it is preempted and is put into queue2.
• Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are
empty.
• This scheduling algorithm gives highest priority to any process with a CPU burst of 8ms or
less. Such a process will quickly get the CPU and finish its CPU burst and go off to its next I/O
burst.
• Processes that need more than 8ms but less than 24ms are also served quickly, although
with lower priority than shorter processes.
• Long processes automatically sink to queue2 and are served in FCFS order with any CPU
cycles left over from queues0 and queue1.
• A Multi-Level Feedback queue scheduler is defined by the following parameters:
• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher priority queue.
• The method used to determine when to demote a process to a lower priority queue.
• The method used to determine which queue a process will enter when that process needs
service
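
A skeletal C version of the three-queue scheme described above. Queues are reduced to arrays for brevity, the quanta (8 ms, 16 ms, then effectively FCFS) follow the text, and the three sample processes are hypothetical:

    #include <stdio.h>

    #define N 3

    int main(void)
    {
        /* Hypothetical processes, all arriving at time 0. */
        int burst[N]     = { 5, 20, 40 };
        int remaining[N] = { 5, 20, 40 };
        int level[N]     = { 0, 0, 0 };        /* every process enters queue 0 */
        int quantum[3]   = { 8, 16, 1 << 30 }; /* queue 2 is effectively FCFS */
        int t = 0, done = 0;

        while (done < N) {
            /* Pick a runnable process from the highest-priority non-empty queue. */
            int pick = -1;
            for (int q = 0; q < 3 && pick == -1; q++)
                for (int i = 0; i < N; i++)
                    if (remaining[i] > 0 && level[i] == q) { pick = i; break; }

            int slice = remaining[pick] < quantum[level[pick]]
                            ? remaining[pick] : quantum[level[pick]];
            t += slice;
            remaining[pick] -= slice;

            if (remaining[pick] == 0) {
                printf("P%d finishes at %d ms in queue %d\n", pick + 1, t, level[pick]);
                done++;
            } else if (level[pick] < 2) {
                level[pick]++;   /* used its full quantum: demote one level */
            }
        }
        return 0;
    }

Running the sketch, the 5 ms process finishes in queue 0, the 20 ms process (more than 8 ms but less than 24 ms) finishes in queue 1, and the long 40 ms process sinks to queue 2, matching the behavior described above.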
Process Synchronization
• A cooperating process is one that can affect or be
affected by the other processes executing in the
system.
• Cooperating processes may either directly share a
logical address space (that is, both code and data), or
be allowed to share data only through files. The former
case is achieved through the use of lightweight
processes or threads. Concurrent access to shared data
may result in data inconsistency.
• In this lecture, we discuss various mechanisms to
ensure the orderly execution of cooperating processes
that share a logical address space, so that data consistency is maintained.
Cooperating Processes
• The concurrent processes executing in the operating system may be either independent processes or cooperating processes.
• A process is independent if it cannot affect or be affected by the other processes executing in the system.
• On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system.
• There are several reasons for providing an environment that allows process cooperation:
• Information sharing
• Computation speedup
• Modularity
• Convenience
Race condition
• When several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, we have a race condition.
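
A classic C demonstration with POSIX threads: two threads increment a shared counter with no synchronization, so the final value depends on how the increments interleave and is usually less than the expected 2,000,000:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;   /* shared data, accessed without any locking */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;         /* load, add, store: not atomic, so updates can be lost */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but the race condition typically yields less. */
        printf("counter = %ld\n", counter);
        return 0;
    }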
The Critical-Section Problem
• The important feature of the system is that, when one
process is executing in its critical section, no other
process is to be allowed to execute in its critical
section.
• Thus, the execution of critical sections by the
processes is mutually exclusive in time.
• The critical-section problem is to design a protocol that
the processes can use to cooperate.
• Each process must request permission to enter its
critical section.
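
The general structure of a participating process can be sketched in C, with a pthread mutex standing in for the entry and exit sections; this is one common way to realize the protocol, not the only one:

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void process(void)
    {
        for (;;) {
            pthread_mutex_lock(&lock);     /* entry section: request permission */

            /* critical section: access the shared data exclusively */

            pthread_mutex_unlock(&lock);   /* exit section: release permission */

            /* remainder section: code that does not touch shared data */
        }
    }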
Solution to Critical-Section
Problem
• A solution to the critical-section problem must satisfy the following
three requirements:
1. Mutual Exclusion: If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and there
exist some processes that wish to enter their critical sections, then
only those processes that are not executing in their remainder
section can participate in the decision of which will enter its critical
section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
• Next is a lecture on DEADLOCKS