
OPERATING SYSTEM UNIT 2

Process and CPU scheduling process concept:

What is CPU or process scheduling?

CPU scheduling is the process of determining which process will own the CPU for
execution while other processes are on hold. The main task of CPU scheduling is to
make sure that whenever the CPU becomes idle, the OS selects one of the
processes available in the ready queue for execution.

Process Concept

 A process is an instance of a program in execution.


 Batch systems work in terms of "jobs". Many modern process concepts are still
expressed in terms of jobs, ( e.g. job scheduling ), and the two terms are often
used interchangeably.

3.1.1 The Process

 Process memory is divided into four sections as shown in Figure 3.1 below:
o The text section comprises the compiled program code, read in
from non-volatile storage when the program is launched.
o The data section stores global and static variables, allocated and
initialized prior to executing main.
o The heap is used for dynamic memory allocation, and is managed
via calls to new, delete, malloc, free, etc.
o The stack is used for local variables. Space on the stack is
reserved for local variables when they are declared ( at function
entrance or elsewhere, depending on the language ), and the space
is freed up when the variables go out of scope. Note that the stack
is also used for function return values, and the exact mechanisms
of stack management may be language specific.
o Note that the stack and the heap start at opposite ends of the
process's free space and grow towards each other. If they should
ever meet, then either a stack overflow error will occur, or else a
call to new or malloc will fail due to insufficient memory
available.
 When processes are swapped out of memory and later restored,
additional information must also be stored and restored. Key among
them are the program counter and the value of all program registers.

Figure 3.1 - A process in memory
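
As a concrete illustration of these four sections, the short C sketch below marks
where each kind of variable lives ( the variable names are made up for illustration ):

    #include <stdlib.h>

    int counter = 0;               /* data section: global, initialized before main() runs */

    int square(int x)              /* the compiled code of square() and main() lives in the text section */
    {
        int result = x * x;        /* stack: local variable, space freed when square() returns */
        return result;
    }

    int main(void)
    {
        int local = square(5);                    /* stack: local variable of main()         */
        int *buffer = malloc(100 * sizeof(int));  /* heap: allocated dynamically at run time */
        if (buffer == NULL)
            return 1;                             /* malloc fails if heap and stack have met */
        buffer[0] = local + counter;
        free(buffer);                             /* heap space returned to the free pool    */
        return 0;
    }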

3.1.2 Process State

 Processes may be in one of 5 states, as shown in Figure 3.2 below.


o New - The process is in the stage of being created.
o Ready - The process has all the resources available that it needs to
run, but the CPU is not currently working on this process's
instructions.
o Running - The CPU is working on this process's instructions.
o Waiting - The process cannot run at the moment, because it is
waiting for some resource to become available or for some event
to occur. For example the process may be waiting for keyboard
input, disk access request, inter-process messages, a timer to go
off, or a child process to finish.
o Terminated - The process has completed.
 The load average reported by the "w" command indicates the average
number of processes in the "Ready" state over the last 1, 5, and 15
minutes, i.e. processes that have everything they need to run but cannot
because the CPU is busy doing something else.
 Some systems may have other states besides the ones listed here.

Figure 3.2 - Diagram of process state
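
These five states and the transitions between them can be sketched as a small
state machine in C; the event names below are illustrative, not taken from any
particular kernel:

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    enum proc_event { ADMIT, DISPATCH, TIMEOUT, BLOCK, WAKEUP, EXIT };

    /* Returns the next state for a given ( state, event ) pair, mirroring the
     * arrows in the state diagram; impossible combinations leave the state
     * unchanged. */
    enum proc_state next_state(enum proc_state s, enum proc_event e)
    {
        switch (e) {
        case ADMIT:    return (s == NEW)     ? READY      : s;  /* process creation finished   */
        case DISPATCH: return (s == READY)   ? RUNNING    : s;  /* scheduler gives it the CPU  */
        case TIMEOUT:  return (s == RUNNING) ? READY      : s;  /* time slice expired          */
        case BLOCK:    return (s == RUNNING) ? WAITING    : s;  /* waiting for I/O or an event */
        case WAKEUP:   return (s == WAITING) ? READY      : s;  /* the awaited event occurred  */
        case EXIT:     return (s == RUNNING) ? TERMINATED : s;  /* process completed           */
        }
        return s;
    }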

3.1.3 Process Control Block

For each process there is a Process Control Block, PCB, which stores the following
( types of ) process-specific information, as illustrated in Figure 3.3. ( Specific details
may vary from system to system. )

 Process State - Running, waiting, etc., as discussed above.


 Process ID, and parent process ID.
 CPU registers and Program Counter - These need to be saved and
restored when swapping processes in and out of the CPU.
 CPU-Scheduling information - Such as priority information and
pointers to scheduling queues.
 Memory-Management information - E.g. page tables or segment
tables.
 Accounting information - user and kernel CPU time consumed, account
numbers, limits, etc.
 I/O Status information - Devices allocated, open file tables, etc.
Figure 3.3 - Process control block ( PCB )
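
A PCB can be pictured as a structure that bundles these fields together; the C sketch
below is only illustrative ( a real PCB, such as Linux's task_struct, contains far more ):

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    #define MAX_OPEN_FILES 16

    struct pcb {
        int             pid;                 /* process ID                                     */
        int             ppid;                /* parent process ID                              */
        enum proc_state state;               /* New, Ready, Running, Waiting, Terminated       */

        unsigned long   program_counter;     /* saved PC, restored when the process runs again */
        unsigned long   registers[16];       /* saved CPU registers ( count is illustrative )  */

        int             priority;            /* CPU-scheduling information                     */
        struct pcb     *next;                /* link for the ready queue or a device queue     */

        void           *page_table;          /* memory-management information                  */

        unsigned long   user_time;           /* accounting: CPU time consumed in user mode     */
        unsigned long   kernel_time;         /* accounting: CPU time consumed in kernel mode   */

        int             open_files[MAX_OPEN_FILES];  /* I/O status: open file descriptors      */
    };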

Figure 3.4 - Diagram showing CPU switch from process to process


Digging Deeper: The Linux task_struct definition in sched.h. ( See also the top of
that file. )

3.1.4 Threads

 Modern systems allow a single process to have multiple threads of
execution, which execute concurrently. Threads are covered extensively
in the next chapter.

3.2 Process Scheduling

 The two main objectives of the process scheduling system are to keep the CPU
busy at all times and to deliver "acceptable" response times for all programs,
particularly for interactive ones.
 The process scheduler must meet these objectives by implementing suitable
policies for swapping processes in and out of the CPU.
 ( Note that these objectives can be conflicting. In particular, every time the
system steps in to swap processes it takes up time on the CPU to do so, which
is thereby "lost" from doing any useful productive work. )

3.2.1 Scheduling Queues

 All processes are stored in the job queue.


 Processes in the Ready state are placed in the ready queue.
 Processes waiting for a device to become available or to deliver data are
placed in device queues. There is generally a separate device queue for
each device.
 Other queues may also be created and used as needed.
Figure 3.5 - The ready queue and various I/O device queues
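
Each of these queues is typically kept as a linked list of PCBs. The C sketch below
shows one possible ( simplified ) implementation of such a queue; the names are
illustrative:

    #include <stddef.h>

    /* Minimal PCB with just a link field; see the fuller PCB sketch above. */
    struct pcb {
        int pid;
        struct pcb *next;           /* next PCB in whatever queue this process is on */
    };

    /* A FIFO queue of PCBs: one ready queue, plus one queue per device. */
    struct queue {
        struct pcb *head;
        struct pcb *tail;
    };

    /* Append a PCB to the tail, e.g. when a process becomes Ready or starts
     * waiting on a device. */
    void enqueue(struct queue *q, struct pcb *p)
    {
        p->next = NULL;
        if (q->tail != NULL)
            q->tail->next = p;
        else
            q->head = p;
        q->tail = p;
    }

    /* Remove and return the PCB at the head, e.g. the CPU scheduler taking
     * the next process from the ready queue. */
    struct pcb *dequeue(struct queue *q)
    {
        struct pcb *p = q->head;
        if (p != NULL) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return p;
    }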

3.2.2 Schedulers

 A long-term scheduler is typical of a batch system or a very heavily
loaded system. It runs infrequently, ( such as when one process ends and
another must be selected to be loaded in from disk in its place ), and can
afford to take the time to implement intelligent and advanced scheduling
algorithms.
 The short-term scheduler, or CPU Scheduler, runs very frequently, on
the order of 100 milliseconds, and must very quickly swap one process
out of the CPU and swap in another one.
 Some systems also employ a medium-term scheduler. When system
loads get high, this scheduler will swap one or more processes out of the
ready queue for a few seconds, in order to allow smaller, faster
jobs to finish up quickly and clear the system. See the differences in
Figures 3.6 and 3.7 below.
 An efficient scheduling system will select a good process mix of CPU-
bound processes and I/O-bound processes.

Figure 3.6 - Queueing-diagram representation of process scheduling

Figure 3.7 - Addition of a medium-term scheduling to the queueing diagram


3.2.3 Context Switch

 Whenever an interrupt arrives, the CPU must do a state-save of the
currently running process, then switch into kernel mode to handle the
interrupt, and then do a state-restore of the interrupted process.
 Similarly, a context switch occurs when the time slice for one process
has expired and a new process is to be loaded from the ready queue. This
will be instigated by a timer interrupt, which will then cause the current
process's state to be saved and the new process's state to be restored.
 Saving and restoring states involves saving and restoring all of the
registers and program counter(s), as well as the process control blocks
described above.
 Context switching happens VERY VERY frequently, and the overhead
of doing the switching is just lost CPU time, so context switches ( state
saves & restores ) need to be as fast as possible. Some hardware has
special provisions for speeding this up, such as a single machine
instruction for saving or restoring all registers at once.
 Some Sun hardware actually has multiple sets of registers, so a context
switch can be sped up by merely switching which set of registers is
currently in use. Obviously there is a limit to how many processes can
be switched between in this manner, making it attractive to implement
the medium-term scheduler to swap some processes out as shown in
Figure 3.7 above.
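
In outline, a context switch just copies the CPU state of the outgoing process into its
PCB and reloads the state of the incoming one. The C sketch below simulates that
idea; in a real kernel the save and restore are done in architecture-specific assembly,
so this is illustrative only:

    /* Stand-in for the real CPU's registers and program counter. */
    struct cpu_context {
        unsigned long program_counter;
        unsigned long registers[16];
    };

    static struct cpu_context cpu;          /* the ( simulated ) hardware state        */

    struct pcb {
        int pid;
        struct cpu_context context;         /* CPU state saved here while not Running  */
    };

    /* Switch the CPU from 'current' to 'next'.  All time spent here is pure
     * overhead -- no useful work gets done -- which is why context switches
     * must be as fast as possible. */
    void context_switch(struct pcb *current, struct pcb *next)
    {
        current->context = cpu;             /* state-save of the outgoing process      */
        cpu = next->context;                /* state-restore of the incoming process   */
    }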
