Process Scheduling
A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID).
This information is stored in a data structure, typically called a process control block,
that is created and managed by the OS. The significant point
about the process control block is that it contains sufficient information so that it is possible to
interrupt a running process and later resume execution as if the interruption had not occurred.
The process control block is the key tool that enables the OS to support multiple processes and to
provide for multiprocessing. When a process is interrupted, the current values of the program
counter and the processor registers (context data) are saved in the appropriate fields of the
corresponding process control block, and the state of the process is changed to some other value,
such as blocked or ready (described subsequently). The OS is now free to put some other process
in the running state. The program counter and context data for this process are loaded into the
processor registers and this process now begins to execute.
Thus, we can say that a process consists of program code and associated data plus a process
control block. For a single-processor computer, at any given time, at most one process is
executing and that process is in the running state.
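The save/restore cycle described above can be sketched as follows. This is a minimal illustration, not real OS code: the PCB fields and the `cpu` dictionary standing in for the hardware registers are assumptions made for the example.

```python
# Minimal sketch of a process control block and a context switch.
# The "hardware" is simulated by a dict holding a program counter
# and register values; real context switching happens in kernel code.

class PCB:
    def __init__(self, pid):
        self.pid = pid                 # integer process ID
        self.state = "ready"           # running / ready / blocked
        self.program_counter = 0       # saved program counter
        self.registers = {}            # saved context data

def context_switch(cpu, old, new):
    # Save the running process's context into its PCB...
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    old.state = "ready"                # or "blocked", depending on the event
    # ...then load the next process's saved context and resume it.
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)
    new.state = "running"

cpu = {"pc": 0, "regs": {}}
p1, p2 = PCB(1), PCB(2)
p1.state = "running"
cpu["pc"], cpu["regs"] = 40, {"r0": 7}   # p1 has been executing
context_switch(cpu, p1, p2)              # interrupt p1, dispatch p2
```

Because the PCB captures the complete context, p1 can later be resumed exactly where it was interrupted.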
Long-Term Scheduling
The long-term scheduler determines which programs are admitted to the system for processing.
Thus, it controls the degree of multiprogramming (see figure: Levels of Scheduling).
Once admitted, a job or user program becomes a process and is added to the queue for
the short-term scheduler. In
some systems, a newly created process begins in a swapped-out condition, in which case it is
added to a queue for the medium-term scheduler.
In a batch system, or for the batch portion of an OS, newly submitted jobs are routed to disk and
held in a batch queue. The long-term scheduler creates processes from the queue when it can.
There are two decisions involved. The scheduler must decide when the OS can take on one or
more additional processes. And the scheduler must decide which job or jobs to accept and turn
into processes. We briefly consider these two decisions.
The decision as to when to create a new process is generally driven by the desired degree of
multiprogramming. The more processes that are created, the smaller is the percentage of time
that each process can be executed (i.e., more processes are competing for the same amount of
processor time). Thus, the long-term scheduler may limit the degree of multiprogramming to
to provide satisfactory service to the current set of processes (see figure: Queueing
Diagram for Scheduling). Each time a job terminates, the scheduler may decide to add one
or more new jobs. Additionally, if the fraction of time that the processor is idle exceeds a certain
threshold, the long-term scheduler may be invoked.
The decision as to which job to admit next can be on a simple first-come-first-served (FCFS)
basis, or it can be a tool to manage system performance. The criteria used may include priority,
expected execution time, and I/O requirements.
For example, if the information is available, the scheduler may attempt to keep a mix of
processor-bound and I/O-bound processes. Also, the decision can depend on which I/O
resources are to be requested, in an attempt to balance I/O usage.
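The two long-term decisions above (when to admit, and which job to admit) can be sketched together. This is an illustrative policy, not a real admission algorithm: the degree-of-multiprogramming limit and the simple rule for balancing processor-bound against I/O-bound jobs are assumptions for the example.

```python
# Sketch of a long-term admission policy: admit batch jobs while the
# degree of multiprogramming stays below a limit, preferring to keep
# a mix of processor-bound and I/O-bound work. The 50/50 mixing rule
# and field names are illustrative assumptions.

MAX_DEGREE = 4  # desired degree of multiprogramming

def admit_jobs(batch_queue, active):
    """Move jobs from the batch queue into the active set, FCFS tie-break."""
    admitted = []
    while batch_queue and len(active) + len(admitted) < MAX_DEGREE:
        current = active + admitted
        io_count = sum(1 for j in current if j["io_bound"])
        # Prefer whichever kind is under-represented in the current mix.
        want_io = io_count * 2 < len(current)
        pick = next((j for j in batch_queue if j["io_bound"] == want_io),
                    batch_queue[0])   # fall back to plain FCFS
        batch_queue.remove(pick)
        admitted.append(pick)
    return admitted

queue = [{"name": "j1", "io_bound": False},
         {"name": "j2", "io_bound": False},
         {"name": "j3", "io_bound": True}]
active = [{"name": "p0", "io_bound": False}]
new_procs = admit_jobs(queue, active)   # j3 admitted first to balance the mix
```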
For interactive programs in a time-sharing system, a process creation request can be generated by
the act of a user attempting to connect to the system. Time-sharing users are not simply queued
up and kept waiting until the system can accept them.
Rather, the OS will accept all authorized comers until the system is saturated, using some
predefined measure of saturation. At that point, a connection request is met with a message
indicating that the system is full and the user should try again later.
Medium-Term Scheduling
Medium-term scheduling is part of the swapping function. Typically, the swapping-in decision is
based on the need to manage the degree of multiprogramming. On a system that does not use
virtual memory, memory management is also an issue. Thus, the swapping-in decision will
consider the memory requirements of the swapped-out processes.
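On a system without virtual memory, that swap-in decision might look like the following sketch. The field names and the FCFS tie-break are assumptions for the example.

```python
# Sketch of a medium-term swap-in decision on a system without virtual
# memory: bring a swapped-out process back only if its whole memory
# image fits in currently free memory.

def choose_swap_in(swapped_out, free_memory):
    """Return the first swapped-out process whose image fits (FCFS)."""
    for proc in swapped_out:
        if proc["memory_needed"] <= free_memory:
            return proc
    return None  # nothing fits; degree of multiprogramming stays lower

queue = [{"pid": 7, "memory_needed": 512},
         {"pid": 9, "memory_needed": 128}]
chosen = choose_swap_in(queue, free_memory=256)   # pid 7 is too big, pid 9 fits
```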
Short-Term Scheduling
In terms of frequency of execution, the long-term scheduler executes relatively infrequently and
makes the coarse-grained decision of whether or not to take on a new process and which one to
take. The medium-term scheduler is executed somewhat more frequently to make a swapping
decision. The short-term scheduler, also known as the dispatcher, executes most frequently and
makes the fine-grained decision of which process to execute next.
The short-term scheduler is invoked whenever an event occurs that may lead to the blocking of
the current process or that may provide an opportunity to preempt a currently running process in
favor of another. Examples of such events include:
• Clock interrupts
• I/O interrupts
• Operating system calls
• Signals (e.g., semaphores)
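The events listed above can be tied to the dispatcher in a small sketch: each event either blocks the running process or preempts it, after which the short-term scheduler picks the next ready process. The event names and the plain FIFO ready queue are assumptions for the example.

```python
# Sketch of an event-driven short-term scheduler (dispatcher).
# Events either block the running process (I/O request, semaphore
# wait) or preempt it (clock or I/O interrupt); either way the
# dispatcher then selects the next process from the ready queue.

from collections import deque

ready = deque(["P2", "P3"])
running = "P1"
blocked = set()

def dispatch(event):
    global running
    if event in ("io_request", "wait_on_semaphore"):
        blocked.add(running)            # running process blocks
    elif event in ("clock_interrupt", "io_interrupt"):
        ready.append(running)           # preempted: back to the ready queue
    running = ready.popleft() if ready else None
    return running

dispatch("clock_interrupt")   # P1 preempted, P2 dispatched
dispatch("io_request")        # P2 blocks on I/O, P3 dispatched
```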
Process Scheduling
Definition
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy. It is an essential part of multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the
loaded processes share the CPU using time multiplexing.
The process scheduler schedules different processes to be assigned to the CPU based on a
particular scheduling algorithm.
There are six popular process scheduling algorithms, which we discuss in the
following section:
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest-Remaining-Time Scheduling
• Round Robin (RR) Scheduling
• Multilevel Queue Scheduling
The normalized turnaround time for process Y is way out of line compared to the other
processes: the total time that it is in the system is 100 times the required processing time. This
will happen whenever a short process arrives just after a long process. On the other hand, even in
this extreme example, long processes do not fare poorly. Process Z has a turnaround time that is
almost double that of Y, but its normalized turnaround time is under 2.0.
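The figures quoted above can be reproduced with a small FCFS simulation. The arrival and service times below are assumptions chosen to match the pattern described (a 1-unit process Y arriving just behind a 100-unit process, followed by another long process Z); they are illustrative, not necessarily the exact values of the original example.

```python
# FCFS simulation illustrating normalized turnaround time
# (turnaround time divided by service time). Workload is assumed:
# short process Y arrives just after long process X, and long
# process Z arrives last.

def fcfs(jobs):
    """jobs: list of (name, arrival, service), already in arrival order."""
    clock, results = 0, {}
    for name, arrival, service in jobs:
        start = max(clock, arrival)       # wait until the CPU is free
        finish = start + service
        turnaround = finish - arrival
        results[name] = (turnaround, turnaround / service)
        clock = finish
    return results

r = fcfs([("W", 0, 1), ("X", 1, 100), ("Y", 2, 1), ("Z", 3, 100)])
# Y's turnaround is 100 times its 1-unit service time, while Z's
# normalized turnaround stays just under 2.0.
```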
Another difficulty with FCFS is that it tends to favor processor-bound processes over I/O-bound
processes. Consider that there is a collection of processes, one of which mostly uses the
processor (processor bound) and a number of which favor I/O (I/O bound). When a processor-
bound process is running, all of the I/O bound processes must wait. Some of these may be in I/O
queues (blocked state) but may move back to the ready queue while the processor-bound process
is executing. At this point, most or all of the I/O devices may be idle, even though there is
potentially work for them to do. When the currently running process leaves the Running state,
the ready I/O-bound processes quickly move through the Running state and become blocked on
I/O events. If the processor-bound process is also blocked, the processor becomes idle.
Thus, FCFS may result in inefficient use of both the processor and the I/O devices. FCFS is not
an attractive alternative on its own for a uniprocessor system. However, it is often combined with
a priority scheme to provide an effective scheduler.
Thus, the scheduler may maintain a number of queues, one for each priority level, and dispatch
within each queue on a first-come-first-served basis. We see one example of such a system later,
in our discussion of feedback scheduling.
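The priority-plus-FCFS arrangement just described can be sketched as follows. Taking priority 0 as the highest level is a convention assumed for this example.

```python
# Sketch of FCFS combined with a priority scheme: one FIFO queue per
# priority level, dispatch from the highest non-empty level, and FCFS
# within each level. Priority 0 is taken to be the highest.

from collections import deque

class PriorityFCFS:
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]

    def add(self, process, priority):
        self.queues[priority].append(process)   # FIFO within the level

    def dispatch(self):
        for q in self.queues:                   # highest priority first
            if q:
                return q.popleft()              # FCFS within the level
        return None                             # no ready process

sched = PriorityFCFS(levels=3)
sched.add("A", 2)
sched.add("B", 0)
sched.add("C", 2)
order = [sched.dispatch() for _ in range(3)]    # B runs first, then A, then C
```

Feedback scheduling, mentioned above, extends this idea by moving processes between the levels over time.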
Example 2: