Lecture Three
CPU SCHEDULING
CPU scheduling is the basis of multiprogrammed operating systems. By switching
the CPU among processes, the operating system can make the computer more
productive.
The operating system uses a scheduling algorithm to decide which of the processes
in main memory runs on the CPU next. Each switch of the CPU from one process to
another is called a context switch.
3.3.1 Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected
by the scheduler. This function involves:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the user program to restart that program.
3.3.2 Scheduling Criteria
Different CPU scheduling algorithms have different properties and may favour one
class of processes over another. In choosing which algorithm to use in a particular
situation, we must consider the properties of the various algorithms. Many criteria
have been suggested for comparing CPU scheduling algorithms. Criteria that are used
include the following:
• CPU utilization: keep the CPU as busy as possible.
• Throughput: the number of processes that complete their execution per time unit.
• Turnaround time: the interval from submission of a process to its completion.
• Waiting time: the total time a process spends waiting in the ready queue.
• Response time: the time from the submission of a request until the first response is produced.
A priority scheduling algorithm chooses the process in the ready queue that has the
highest priority, and runs that process either preemptively or non-preemptively.
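As an illustration of these criteria and of priority scheduling, here is a minimal
sketch in C that runs a made-up set of processes non-preemptively in priority order
and computes the average waiting and turnaround times. The process table and the
convention that a lower number means higher priority are assumptions for this
example, not part of the lecture.

#include <stdio.h>

/* One row per process: all are assumed to arrive at time 0. */
struct proc {
    int pid;
    int burst;     /* CPU burst time (time units) */
    int priority;  /* lower value = higher priority (assumed convention) */
    int done;
};

int main(void) {
    struct proc p[] = { {1, 10, 3, 0}, {2, 1, 1, 0}, {3, 2, 4, 0}, {4, 5, 2, 0} };
    int n = 4, completed = 0, t = 0;
    int total_wait = 0, total_turnaround = 0;

    while (completed < n) {
        int best = -1;
        /* Pick the highest-priority process that has not yet run. */
        for (int i = 0; i < n; i++)
            if (!p[i].done && (best < 0 || p[i].priority < p[best].priority))
                best = i;
        /* With all arrivals at t=0: waiting time = start time,
           turnaround time = completion time. */
        total_wait += t;
        t += p[best].burst;
        total_turnaround += t;
        p[best].done = 1;
        completed++;
        printf("P%d finishes at t=%d\n", p[best].pid, t);
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}

With this data the highest-priority process (P2) finishes first, and the averages
come out to 5.75 and 10.25 time units respectively.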
IPC methods are divided into methods for message passing, synchronization, shared
memory, and remote procedure calls (RPC). The method of IPC used may vary based
on the bandwidth and latency of communication between the threads, and the type of
data being communicated. There are several reasons for providing an environment
that allows process cooperation:
• Information sharing
• Computational speedup
• Modularity
• Convenience
• Privilege separation
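As a concrete sketch of the message-passing style of IPC, the following C program
uses a POSIX pipe so that a parent process can send a short message to its child;
the message text is made up, and error handling is trimmed for brevity.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;       /* fd[0] = read end, fd[1] = write end */

    pid_t pid = fork();
    if (pid == 0) {                     /* child: read the message */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    /* parent: write the message, then wait for the child */
    close(fd[0]);
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}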
Formally, while one process is accessing a shared resource or variable, all other
processes desiring to do so at the same moment should be kept waiting; when that
process has finished with the shared resource or variable, one of the waiting
processes should be allowed to proceed. In this fashion, each process accessing the
shared data (variables) excludes all others from doing so simultaneously.
Mutual exclusion is the mechanism by which each process accesses shared data
exclusively. It also refers to the ability of multiple processes (or threads) to
share code, resources, or data in such a way that only one process has access to the
shared object at a time.
Mutual exclusion needs to be enforced only when processes access shared modifiable
data; when processes are performing operations that do not conflict with one
another, they should be allowed to proceed concurrently. When a process is accessing
shared modifiable data, the process is said to be in a Critical Section (Critical
Region). It must be ensured that when one process is in its Critical Section, all
other processes (at least those that access the same shared modifiable data) are
excluded from their own critical sections.
While a process is in its Critical Section, other processes may certainly continue
executing outside their Critical Sections. When a process leaves its Critical
Section, another process waiting to enter its own Critical Section should be allowed
to proceed (if indeed there is a waiting process).
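A minimal sketch of how this is commonly enforced in practice, using a POSIX mutex
in C; the shared counter and the loop counts are made-up details for illustration.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared modifiable data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* enter critical section */
        counter++;                             /* shared data accessed exclusively */
        pthread_mutex_unlock(&lock);           /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* always 200000 with the lock */
    return 0;
}

Without the lock, the two threads' increments could interleave and the final value
would be unpredictable; with the lock, only one thread is ever inside the critical
section at a time.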
3.9 The Critical-Section Problem
The important feature of the system is that, when one process is executing in its
critical section, no other process is to be allowed to execute in its critical section.
Thus, the execution of critical sections by the processes is mutually exclusive in time.
The critical-section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical
section.
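One classic software protocol for the two-process critical-section problem is
Peterson's algorithm, sketched below in C. It is not named in the text above; it
also assumes sequentially consistent memory, so on modern hardware it would need
memory barriers, and is shown purely to illustrate an entry section and an exit
section.

#include <stdbool.h>

volatile bool flag[2] = { false, false }; /* flag[i]: process i wants to enter */
volatile int turn = 0;                    /* whose turn it is to defer */

void enter_critical_section(int i) {      /* i is 0 or 1 */
    int other = 1 - i;
    flag[i] = true;                       /* announce intent to enter */
    turn = other;                         /* politely yield the turn */
    while (flag[other] && turn == other)
        ;                                 /* busy-wait while the other is inside */
}

void leave_critical_section(int i) {
    flag[i] = false;                      /* allow the other process to proceed */
}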
3.10 DEADLOCK
3.10.1 Definition of Deadlock
A process in a multiprogramming system is said to be in a state of deadlock (or
deadlocked) if it is waiting for a particular event that will not occur. In this
situation, two or more processes cannot continue because the resource each process
requires is held by another.
A process requests resources; if the resources are not available at that time, the
process enters a wait state. It may happen that waiting processes will never again
change state, because the resources they have requested are held by other waiting
processes. This situation is called a deadlock.
For example, one protocol to deny the hold-and-wait condition requires that a
program needing ten tape drives must request and receive all ten drives before it
begins executing. If the program needs only one tape drive to begin execution and
does not need the remaining tape drives for several hours, then substantial computer
resources (nine tape drives) will sit idle for several hours. This strategy can also
cause indefinite postponement (starvation), since not all the required resources may
become available at once.
Suppose, instead, a system does allow processes to hold resources while requesting
additional resources. Consider what happens when a request cannot be satisfied: a
process may hold resources that a second process needs in order to proceed, while
the second process holds resources needed by the first. This is a deadlock. A
strategy to deny this condition requires that, when a process holding some resources
is denied a request for additional resources, the process must release its held
resources and, if necessary, request them again together with the additional
resources. Implementation of this strategy effectively denies the "no-preemption"
condition.
High Cost
When a process releases resources, it may lose all its work to that point. One
serious consequence of this strategy is the possibility of indefinite postponement
(starvation): a process might be held off indefinitely as it repeatedly requests and
releases the same resources.
Elimination of “Circular Wait” Condition
The last condition, the circular wait, can be denied by imposing a total ordering on
all of the resource types and then forcing all processes to request resources in
that order (increasing or decreasing). In other words, this strategy imposes a total
ordering of all resource types and requires that each process request resources in
increasing (or decreasing) order of enumeration. With this rule, the
resource-allocation graph can never have a cycle.
Now the rule is this: processes can request resources whenever they want to, but all
requests must be made in numerical order. A process may request first a printer and
then a tape drive (order: 2, 4), but it may not request first a plotter and then a
printer (order: 3, 2). The problem with this strategy is that it may be impossible
to find an ordering that satisfies everyone.
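The sketch below illustrates the ordering rule in C, with two POSIX mutexes standing
in for two enumerated resource types. Because both threads always acquire the locks
in the same (assumed) global order, a circular wait cannot form; the resource names
and their numbering are invented for this example.

#include <pthread.h>
#include <stdio.h>

/* Assumed global enumeration: resource_a = 1, resource_b = 2. */
static pthread_mutex_t resource_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource_b = PTHREAD_MUTEX_INITIALIZER;

static void *task(void *name) {
    pthread_mutex_lock(&resource_a);   /* always request resource 1 first */
    pthread_mutex_lock(&resource_b);   /* then resource 2 */
    printf("%s holds both resources\n", (char *)name);
    pthread_mutex_unlock(&resource_b); /* release in reverse order */
    pthread_mutex_unlock(&resource_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "thread 1");
    pthread_create(&t2, NULL, task, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}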
3.13.2 Deadlock Avoidance
This approach to the deadlock problem anticipates deadlock before it actually
occurs. It employs an algorithm to assess the possibility that deadlock could occur
and acts accordingly. This method differs from deadlock prevention, which guarantees
that deadlock cannot occur by denying one of the necessary conditions of deadlock.
If the necessary conditions for a deadlock are in place, it is still possible to avoid
deadlock by being careful when resources are allocated. Perhaps the most famous
deadlock avoidance algorithm, due to Dijkstra [1965], is the Banker’s algorithm, so
named because the process is analogous to that used by a banker in deciding if a loan
can be safely made.
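Below is a compact C sketch of the safety check at the heart of the Banker's
algorithm: a state is safe if all processes can finish in some order using the
currently available resources plus the resources released by processes that finish
earlier. The matrices are made-up example data, not taken from the lecture.

#include <stdbool.h>
#include <stdio.h>

#define P 3  /* processes */
#define R 2  /* resource types */

bool is_safe(int available[R], int max[P][R], int alloc[P][R]) {
    int work[R];
    bool finished[P] = { false };
    for (int j = 0; j < R; j++) work[j] = available[j];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;               /* need[i] = max[i] - alloc[i] <= work? */
            for (int j = 0; j < R; j++)
                if (max[i][j] - alloc[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                     /* pretend i runs to completion */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finished[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress) return false;           /* no process can finish: unsafe */
    }
    return true;
}

int main(void) {
    int available[R] = { 1, 1 };
    int max[P][R]    = { {3, 2}, {1, 2}, {2, 2} };
    int alloc[P][R]  = { {1, 0}, {1, 1}, {1, 1} };
    printf("state is %s\n", is_safe(available, max, alloc) ? "safe" : "unsafe");
    return 0;
}

A resource request is granted only if the state that would result from granting it
still passes this safety check; otherwise the requesting process waits.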
2. Preempt resources
To eliminate deadlocks using resource pre-emption, we successively pre-empt some
resources from processes and give these resources to other processes until the
deadlock cycle is broken. If pre-emption is required to deal with deadlocks, then
three issues need to be addressed:
1. Selecting a victim: which resources and which processes are to be pre-empted? As
in process termination, we determine the order of pre-emption to minimize cost.
2. Rollback: If we pre-empt a resource from a process, we must roll back the process
to some safe state and restart it from that state; that is, back the process off to
some checkpoint, allowing pre-emption of the needed resource, and restart the
process from the checkpoint later.
3. Starvation: How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be pre-empted from the same process? The
same process must not be killed or rolled back repeatedly; for example, we can set
some limit on the number of times it is chosen.
In a system where victim selection is based primarily on cost factors, it may
happen that the same process is always picked as a victim. As a result, this process
never completes its designated task, a starvation situation that needs to be dealt
with in any practical system. Clearly, we must ensure that a process can be picked
as a victim only a (small) finite number of times. The most common solution is to
include the number of rollbacks in the cost factor.
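As a small illustration of that solution, the C sketch below folds a rollback count
into the victim-selection cost, so a process that has already been rolled back
repeatedly becomes an increasingly expensive, and therefore unlikely, victim. The
struct fields and the weight are assumptions for this example.

/* Victim selection with the number of rollbacks included in the cost. */
struct proc_cost {
    int base_cost;   /* e.g., work lost if this process is pre-empted now */
    int rollbacks;   /* times this process has already been rolled back */
};

#define ROLLBACK_WEIGHT 10  /* arbitrary weight chosen for illustration */

/* Effective cost grows with each rollback the process has suffered. */
int victim_cost(const struct proc_cost *p) {
    return p->base_cost + ROLLBACK_WEIGHT * p->rollbacks;
}

/* Return the index of the cheapest victim among n candidates. */
int select_victim(const struct proc_cost procs[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (victim_cost(&procs[i]) < victim_cost(&procs[best]))
            best = i;
    return best;
}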