OS Unit 1
Unit 1
Introduction
Operating System
Every computer must have an operating system to run other programs. The
operating system controls and coordinates the use of the hardware among the
various system programs and application programs for the various users. It
simply provides an environment within which other programs can do useful
work. The operating system is a set of special programs that run on a computer
system and allow it to work properly. It performs basic tasks such as
recognizing input from the keyboard, keeping track of files and directories on
the disk, sending output to the display screen, and controlling peripheral
devices.
Mruthula Sojan, Dept. of Computer Science, Seshadripuram Degree College, Mysuru Page 1
The command interpreter (shell) is the part of an operating system that interprets commands and carries them out.
1. Batch System
Early computer systems did only one thing at a time, working through a list of
jobs one after another. The computer system may be dedicated to a single
program until its completion, or it may be dynamically reassigned among a
collection of active programs in different stages of execution.
A batch operating system is one where programs and data are collected together
in a batch before processing starts. A job is a predefined sequence of
commands, programs and data that are combined into a single unit.
Memory management in batch system is very simple. Memory is usually
divided into two areas:
1. Operating system and
2. User program area.
When a job completes execution, its memory is released and the output for the
job is copied into an output spool for later printing.
Advantages
Batch processing moves much of the work of the operator to the computer.
Performance increases because a new job starts as soon as the previous job
finishes, without any manual intervention.
Disadvantages
Spooling
The OS handles I/O device data spooling, as devices have different data access
rates.
The OS maintains the spooling buffer, which provides a waiting station where
data can rest while the slower device catches up.
Spooling supports parallel computation, because the computer can perform I/O
in parallel with its own work: it becomes possible to have the computer read
data from a tape, write data to disk, and write output to a line printer while
it is doing its computing task.
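The waiting-station idea can be sketched with a queue standing in for the spool buffer and two threads standing in for the fast CPU and the slow device; the names `fast_producer` and `slow_printer` are illustrative, not from the text:

```python
import queue
import threading

spool = queue.Queue()  # the spooling buffer: a waiting station for output data

def fast_producer(lines):
    # The CPU writes output at full speed; each line lands in the buffer at once.
    for line in lines:
        spool.put(line)
    spool.put(None)  # sentinel: no more output

printed = []

def slow_printer():
    # The device drains the buffer at its own pace, catching up later.
    while True:
        line = spool.get()
        if line is None:
            break
        printed.append(line)  # a real printer would emit the line here

t = threading.Thread(target=slow_printer)
t.start()
fast_producer(["page 1", "page 2", "page 3"])
t.join()
```

The producer never waits for the printer; it only waits for buffer space, which is exactly the decoupling spooling provides.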
2. Multitasking
Multitasking refers to a system in which multiple jobs are executed by the CPU
simultaneously by switching between them. Switches occur so frequently that
users may interact with each program while it is running. The operating system
does the following activities related to multitasking.
3. Multiprogramming
When two or more programs are residing in memory at the same time, sharing
the processor is referred to as multiprogramming. Multiprogramming
assumes a single shared processor. Multiprogramming increases CPU utilization
by organizing jobs so that the CPU always has one to execute.
The operating system keeps several jobs in memory at a time. This set of jobs is
a subset of the jobs kept in the job pool. The operating system picks and begins
to execute one of the jobs in memory.
Multiprogrammed systems provide an environment in which the various system
resources are utilized effectively, but they do not provide for user interaction
with the computer system.
Jobs entering the system are kept in memory. The operating system picks one of
these jobs and begins to execute it. Having several jobs in memory at the same
time ensures that the CPU always has something to execute.
Advantages
Disadvantages
4. Real time system: Real time systems are usually dedicated, embedded
systems.
They typically read from and react to sensor data. The system must guarantee
response to events within fixed periods of time to ensure correct performance.
Components of an Operating System
An operating system is made up of the following components:
1. Process Management
2. Main Memory Management
3. File Management
4. Secondary Storage Management
5. I/O System Management
6. Networking
7. Protection System
8. Command Interpreter System
Process Management
The operating system manages many kinds of activities ranging from user
programs to system programs like printer spooler, name servers, file server etc.
Each of these activities is encapsulated in a process. A process includes the
complete execution context (code, data, PC, registers, OS resources in use etc.).
Main-Memory Management
Keep track of which parts of memory are currently being used and by
whom.
Decide which process is loaded into memory when memory space
becomes available.
Allocate and deallocate memory space as needed.
File Management
File systems are normally organized into directories to ease their use. These
directories may contain files and other directories.
I/O subsystem hides the peculiarities of specific hardware devices from the user.
Only the device driver knows the peculiarities of the specific device to which it
is assigned.
Secondary-Storage Management
Networking
Protection System
If a computer system has multiple users and allows the concurrent execution of
multiple processes, then the various processes must be protected from one
another's activities. Protection refers to a mechanism for controlling the
access of programs, processes, or users to the resources defined by the
computer system.
1. If we want to change the way the command interpreter looks, i.e., to change
the interface of the command interpreter, we can do so only if the command
interpreter is separate from the kernel; since the kernel code cannot be
changed by a user, the interface could not be modified otherwise.
2. If the command interpreter is part of the kernel, it is possible for a
malicious process to gain access to parts of the kernel that it should not
have. To avoid this scenario, it is advantageous to keep the command
interpreter separate from the kernel.
An operating system provides the following services:
1. Program execution
2. I/O operation
3. File system manipulation
4. Communications
5. Error detection
6. Resource Allocation
7. Accounting
8. Protection
System Calls
Application developers often do not have direct access to the system calls, but
can access them through an application programming interface (API). The
functions that are included in the API invoke the actual system calls. By using
the API, certain benefits can be gained:
Portability: as long as a system supports an API, any program using that API
can compile and run.
Ease of use: using the API can be significantly easier than using the
actual system call.
User programs request the device, and when finished they release the device.
Similar to files, we can read, write, and reposition the device.
The OS also keeps information about all its processes and provides system calls
to report this information.
Shared memory uses certain system calls to create and gain access to
regions of memory owned by other processes. The two processes exchange
information by reading and writing the shared data.
System Programs
These programs are not usually part of the OS kernel, but are part of the overall
operating system.
2. Status information: Some programs simply request the date, time, and other
simple information. Others provide detailed performance, logging, and
debugging information. The output of these programs is often sent to a
terminal window or GUI window.
Systems may provide absolute loaders, relocatable loaders, linkage editors,
and overlay loaders. Debugging systems for either high-level languages or
machine language are also needed.
Concept of Process
A process is a sequential program in execution. A process defines the
fundamental unit of computation for the computer. The components of a process are:
1. Object Program
2. Data
3. Resources
4. Status of the process execution.
The object program is the code to be executed. The data is used in executing
the program. While executing the program, it may require some resources. The
last component is used for verifying the status of the process execution. A process
can run to completion only when all requested resources have been allocated to
the process. Two or more processes could be executing the same program, each
using their own data and resources.
Processes and Programs
A process is a dynamic entity: a program in execution. A process is a
sequence of instruction executions. A process exists for a limited span of time.
Two or more processes could be executing the same program, each using their
own data and resources.
A program is a static entity made up of program statements. A program contains
the instructions. A program exists at a single place in space and continues to
exist. A program does not perform any action by itself.
Process State
When process executes, it changes state. Process state is defined as the current
activity of the process. Fig. 3.1 shows the general form of the process state
transition diagram. Process state contains five states. Each process is in one of
the states. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)
1. New: A process that has just been created.
2. Ready: Ready processes are waiting to have the processor allocated to them
by the operating system so that they can run.
3. Running: The process that is currently being executed. A running process
possesses all the resources needed for its execution, including the processor.
4. Waiting: A process that cannot execute until some event occurs such as the
completion of an I/O operation. The running process may become suspended by
invoking an I/O module.
5. Terminated: A process that has been released from the pool of executable
processes by the operating system.
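The legal moves between these five states can be encoded as a small transition table; checking a sequence of states against it mirrors the state transition diagram. The dictionary layout below is an illustrative sketch, not code from the text:

```python
# Legal transitions in the five-state process model described above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

def is_valid_path(states):
    # A path is valid if every consecutive pair of states is a legal transition.
    return all(b in TRANSITIONS[a] for a, b in zip(states, states[1:]))

# A process that blocks once on I/O before finishing:
path = ["new", "ready", "running", "waiting", "ready", "running", "terminated"]
```

Note that a process can never jump straight from new to running; it must pass through ready, exactly as the table enforces.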
Process Control Block
Each process is represented in the operating system by a process control block
(PCB), which contains the following information:
1. Pointer: The pointer points to another process control block and is used for
maintaining the scheduling list.
2. Process State: Process state may be new, ready, running, waiting and so on.
3. Program Counter: It indicates the address of the next instruction to be
executed for this process.
4. Event information: For a process in the blocked state this field contains
information concerning the event for which the process is waiting.
5. CPU registers: These include general purpose registers, stack pointers,
index registers, accumulators, etc. The number and type of registers depend
entirely on the computer architecture.
6. Memory Management Information: This information may include the
value of base and limit register. This information is useful for deallocating the
memory when the process terminates.
7. Accounting Information: This information includes the amount of CPU and
real time used, time limits, job or process numbers, account numbers etc.
Process control block also includes the information about CPU scheduling,
I/O resource management, file management information, priority and so on.
The PCB simply serves as the repository for any information that may vary
from process to process.
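The fields listed above map naturally onto a record type. This sketch uses a Python dataclass with field names chosen for illustration (they are not names from the text):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int
    state: str = "new"                   # new / ready / running / waiting / terminated
    program_counter: int = 0             # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register contents
    base: int = 0                        # memory-management info: base register
    limit: int = 0                       # memory-management info: limit register
    cpu_time_used: float = 0.0           # accounting information
    next_pcb: Optional["PCB"] = None     # pointer used for the scheduling list

p = PCB(pid=7)
p.state = "ready"
```

Linking PCBs through the pointer field is how a simple scheduler builds its ready list.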
When a process is created, hardware registers and flags are set to the values
provided by the loader or linker. Whenever that process is suspended, the
contents of the processor register are usually saved on the stack and the pointer
to the related stack frame is stored in the PCB. In this way, the hardware state
can be restored when the process is scheduled to run again.
Process Management / Process Scheduling
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Scheduling Queues
When processes enter the system, they are put into a job queue. This
queue consists of all processes in the system. The operating system also has
other queues.
A device queue is the list of processes waiting for a particular
I/O device. Each device has its own device queue.
Fig. shows the queuing diagram of process scheduling. In the figure, each queue
is represented by a rectangular box.
Schedulers
Context Switch
When the scheduler switches the CPU from executing one process to
executing another, the context switcher saves the contents of all processor
registers for the process being removed from the CPU in its process descriptor.
The context of a process is represented in the process control block of the
process. Context switch time is pure overhead. Context switching can
significantly affect performance, since modern computers have many general and
status registers to be saved.
Context switch times are highly dependent on hardware support. A context
switch requires (n + m) × b × K time units to save the state of a processor
with n general registers and m status registers, assuming b store operations
are required to save each register and each store instruction requires K time
units. Some hardware systems employ two or more sets of processor registers to
reduce the amount of context switching time.
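Plugging illustrative numbers into the (n + m) × b × K estimate makes the overhead concrete; the register counts and timings below are assumptions for the sake of the example, not figures from the text:

```python
n = 32  # general registers (assumed)
m = 8   # status registers (assumed)
b = 1   # store operations needed to save one register (assumed)
K = 2   # time units per store instruction (assumed)

save_time = (n + m) * b * K  # time units to save the processor state
# Restoring the registers on switch-in costs the same in this simple model,
# so one full context switch costs roughly twice the save time.
switch_cost = 2 * save_time
```

Doubling the register file (larger n) linearly increases this pure overhead, which is why hardware with multiple register sets can switch faster.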
When a process is switched out, the following information is stored:
1. Program Counter
2. Scheduling Information
3. Base and limit register value
4. Currently used register
5. Changed State
6. I/O State
7. Accounting
Operation on Processes
Several operations are possible on processes. Processes must be created
and deleted dynamically, and the operating system must provide the environment
for these operations. We discuss the two main operations on processes.
1. Create a process
2. Terminate a process
1. Create Process
Operating system creates a new process with the specified or default attributes
and identifier. A process may create several new sub processes.
Syntax for creating new process is:
CREATE (process_id, attributes)
Two names are used here: parent process and child process. The parent process
is the creating process; the child process is created by the parent. A child
process may itself create another sub process, so a tree of processes is
formed. When the operating system issues a CREATE system call, it obtains a
new process control block from the pool of free memory, fills the fields with
provided and default parameters, and inserts the PCB into the ready list. This
makes the specified process eligible to run.
When a process is created, it requires some parameters. These are priority, level
of privilege, requirement of memory, access right, memory protection
information etc. Process will need certain resources, such as CPU time,
memory, files and I/O devices to complete the operation. When process creates
a sub process, that sub process may obtain its resources directly from the
operating system. Otherwise it uses the resources of parent process.
When a process creates a new process, two possibilities exist in terms of
execution.
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
For address space, two possibilities occur:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
2. Terminate a Process
The DELETE system call is used for terminating a process. A process may delete
itself or be deleted by another process. A process can cause the termination of another
process via an appropriate system call. The operating system reacts by
reclaiming all resources allocated to the specified process, closing files opened
by or for the process. PCB is also removed from its place of residence in the list
and is returned to the free pool. The DELETE service is normally invoked as a
part of orderly program termination.
Following are the reasons for a parent process to terminate a child process.
1. The task given to the child is no longer required.
2. Child has exceeded its usage of some of the resources that it has been
allocated.
3. Operating system does not allow a child to continue if its parent terminates.
Co-operating Processes
A co-operating process is a process that can affect or be affected by
other processes while executing. Any process that shares data with other
processes is a co-operating process. Co-operation offers several advantages:
• Convenience. Even an individual user may work on many tasks at the same
time. For instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC)
mechanism that will allow them to exchange data and information. There are
two fundamental models of interprocess communication: (1) shared memory
and (2) message passing. In the shared-memory model, a region of memory
that is shared by cooperating processes is established. Processes can then
exchange information by reading and writing data to the shared region. In the
message passing model, communication takes place by means of messages
exchanged between the cooperating processes. The two communications models
are contrasted in Figure.
Both of the models just discussed are common in operating systems, and many
systems implement both. Message passing is useful for exchanging smaller
amounts of data, because no conflicts need be avoided. Message passing is also
easier to implement than is shared memory for intercomputer communication.
Shared memory allows maximum speed and convenience of communication, as
it can be done at memory speeds when within a computer. Shared memory is
faster than message passing, as message-passing systems are typically
implemented using system calls and thus require the more time consuming task
of kernel intervention. In contrast, in shared-memory systems, system calls are
required only to establish shared-memory regions. Once shared memory is
established, all accesses are treated as routine memory accesses, and no
Mruthula Sojan, Dept. of Computer Science, Seshadripuram Degree College, Mysuru Page 27
Operating System
THREAD
Introduction of Thread
A thread is a flow of execution through the process code, with its own
program counter, system registers and stack. Threads are a popular way to
improve application performance through parallelism. A thread is sometimes
called a lightweight process.
Threads represent a software approach to improving operating system
performance by reducing overhead; in many respects, a thread is equivalent to a
classical process.
Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control.
Fig. shows the single and multithreaded process.
Types of Thread
Threads are implemented in two ways:
1. User Level
2. Kernel Level
1 .User Level Thread
With user threads, all of the work of thread management is done by the
application; the kernel is not aware of the existence of threads. The thread
library contains code for creating and destroying threads, for passing
messages and data between threads, for scheduling thread execution, and for
saving and restoring thread contexts. The application begins with a single
thread and begins running in that thread.
User level threads are generally fast to create and manage.
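Python's threading module maps each thread onto a kernel thread rather than a pure user-level thread, but the create/run/join pattern a thread library exposes is the same for either implementation. A minimal sketch (names are illustrative):

```python
import threading

lock = threading.Lock()
results = []

def worker(i):
    # Threads share the process's address space, so updates to shared
    # data must be coordinated, here with a lock.
    with lock:
        results.append(i * i)

# Create four separate flows of control within one process.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # the creating thread waits for each flow of control to finish
```

All four workers read and write the same `results` list, which is exactly what makes threads cheaper to communicate between than separate processes.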
Advantages of Thread
1. Threads minimize context switching time.
2. Use of threads provides concurrency within a process.
3. Efficient communication.
4. Economy- It is more economical to create and context switch threads.
5. Utilization of multiprocessor architectures –
The benefits of multithreading can be greatly increased in a multiprocessor
architecture.
Multithreading Models
Some operating systems provide a combined user level thread and kernel level
thread facility. Solaris is a good example of this combined approach. In a
combined system, multiple threads within the same application can run in
parallel on multiple processors and a blocking system call need not block the
entire process.
Multithreading models are of three types:
1. Many to many relationship.
2. Many to one relationship.
3. One to one relationship.
1. Many to Many Models
In this model, many user level threads are multiplexed onto a smaller or equal
number of kernel threads. The number of kernel threads may be specific to
either a particular application or a particular machine.
Fig. shows the many to many model.
In this model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor.
2. Many to One Model
Many to one model maps many user level threads to one Kernel level thread.
Thread management is done in user space. When a thread makes a blocking
system call, the entire process is blocked. Only one thread can access the
kernel at a time, so multiple threads cannot run in parallel on
multiprocessors.
If the user level thread libraries are implemented on an operating system that
does not support kernel threads, the many to one model is used.
3. One to One Model
There is one to one relationship of user level thread to the kernel level thread.
Fig. shows one to one relationship model.
This model provides more concurrency than the many to one model.
It also allows another thread to run when a thread makes a blocking system
call, and it supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating
the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one
to one relationship model.
User Level Threads vs. Kernel Level Threads
1. User level threads are faster to create and manage; kernel level threads are
slower to create and manage.
2. User level threads are implemented by a thread library at the user level;
kernel threads are supported directly by the operating system.
3. User level threads can run on any operating system; kernel level threads are
specific to the operating system.
4. Support provided at the user level is called user level threads; support
provided by the kernel is called kernel level threads.
5. A multithreaded application using user level threads cannot take advantage
of multiprocessing; kernel routines themselves can be multithreaded.
Threading Issues
The fork and exec system calls are discussed here. In a multithreaded
program, the semantics of the fork and exec system calls change. UNIX systems
have two versions of the fork system call: one duplicates all threads, and the
other duplicates only the thread that invoked fork. Which version to use
depends entirely on the application. Duplicating all threads is unnecessary if
exec is called immediately after fork.
Thread cancellation is the task of terminating a thread before it has
completed. For example, in a multithreaded environment, several threads may
concurrently search through a database; if one thread returns the result, the
remaining threads might be cancelled. Thread cancellation is of two types:
1. Asynchronous cancellation
2. Deferred cancellation
In asynchronous cancellation, one thread immediately terminates the target
thread. In deferred cancellation, the target thread periodically checks whether
it should terminate, which allows it to terminate itself in an orderly fashion.
Difficulty arises with resources that have been allocated to a cancelled
thread, and with a thread that is cancelled while updating data it shares with
other threads. With asynchronous cancellation, system-wide resources may not be
freed when a thread is cancelled. Nevertheless, most operating systems allow a
process or thread to be cancelled asynchronously.
CPU Scheduling
CPU scheduling is a process which allows one process to use the CPU while the
execution of another process is on hold (in waiting state) due to unavailability
of any resource like I/O etc, thereby making full use of CPU. The aim of CPU
scheduling is to make the system efficient, fast and fair.
- An I/O-bound program would have many very short CPU bursts.
- A CPU-bound program would have a few very long CPU bursts.
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out
by the short-term scheduler (or CPU scheduler). The scheduler selects a
process from the processes in memory that are ready to execute and allocates
the CPU to that process.
Preemptive Scheduling
Dispatcher
Scheduling Criteria
Different CPU scheduling algorithms have different properties, and the
choice of a particular algorithm may favour one class of processes over another.
In choosing which algorithm to use in a particular situation, we must consider
the properties of the various algorithms.
Many criteria have been suggested for comparing CPU scheduling algorithms.
Which characteristics are used for comparison can make a substantial difference
in which algorithm is judged to be best. The criteria include the following:
1. CPU utilization. We want to keep the CPU as busy as possible.
Conceptually,
CPU utilization can range from 0 to 100 percent. In a real system, it should
range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily
used system).
2. Throughput. If the CPU is busy executing processes, then work is being
done. One measure of work is the number of processes that are completed per
time unit, called throughput. For long processes, this rate may be one process
per hour; for short transactions, it may be 10 processes per second.
3. Turnaround time. From the point of view of a particular process, the
important criterion is how long it takes to execute that process. The interval
from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time. The CPU scheduling algorithm does not affect the amount of
time during which a process executes or does I/O; it affects only the amount of
time that a process spends waiting in the ready queue. Waiting time is the sum
of the periods spent waiting in the ready queue.
5. Response time. In an interactive system, turnaround time may not be the best
criterion. Often, a process can produce some output fairly early and can
continue computing new results while previous results are being output to the
user. Thus, another measure is the time from the submission of a request until
the first response is produced. This measure, called response time, is the time it
takes to start responding, not the time it takes to output the response. The
turnaround time is generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput and to
minimize turnaround time, waiting time, and response time.
Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the
processes in the ready queue is to be allocated the CPU. There are many
different CPU scheduling algorithms.
1. First-Come, First-Served Scheduling
In first-come, first-served (FCFS) scheduling, the process that requests the
CPU first is allocated the CPU first. As an example, consider the following set
of processes that arrive at time 0, with the length of the CPU burst given in
milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order,
we get the result shown in the following Gantt chart:
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process
P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 +
24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1,
however, the results will be as shown in the following Gantt chart:
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.
The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been
allocated to a process, that process keeps the CPU until it releases the CPU,
either by terminating or by requesting I/O.
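The FCFS arithmetic above is easy to check in a few lines: each job waits exactly as long as the combined bursts of the jobs ahead of it. The burst values 24, 3, 3 are the ones from the example:

```python
def fcfs_waiting_times(bursts):
    # Under FCFS, a job's waiting time is the sum of all earlier bursts.
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

# Bursts of 24, 3 and 3 ms, first served in the order P1, P2, P3,
# then in the order P2, P3, P1.
w1 = fcfs_waiting_times([24, 3, 3])  # average works out to 17 ms
w2 = fcfs_waiting_times([3, 3, 24])  # average works out to 3 ms
```

The drop from 17 ms to 3 ms shows the convoy effect: one long job at the head of the queue penalizes everyone behind it.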
2. Shortest-Job-First Scheduling
A different approach to CPU scheduling is the shortest-job-first (SJF)
Scheduling algorithm. This algorithm associates with each process the length
of the process's next CPU burst. When the CPU is available, it is assigned to the
process that has the smallest next CPU burst. If the next CPU bursts of two
processes are the same, FCFS scheduling is used to break the tie. Note that a
more appropriate term for this scheduling method would be the shortest-next-
CPU-burst algorithm, because scheduling depends on the length of the next
CPU burst of a process, rather than its total length. We use the term SJF because
most people and textbooks use this term to refer to this type of scheduling.
As an example of SJF scheduling, consider the following set of processes, with
the length of the CPU burst given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process
P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the
average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if
we were using the FCFS scheduling scheme, the average waiting time would be
10.25 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum
average waiting time for a given set of processes. Moving a short process before
a long one decreases the waiting time of the short process more than it increases
the waiting time of the long process. Consequently, the average waiting time
decreases.
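Since every job here is available at time 0, nonpreemptive SJF is just FCFS applied to the bursts sorted in ascending order. A sketch that reproduces both averages quoted above (the burst values 6, 8, 7, 3 are taken from the example):

```python
def average_wait(bursts):
    # Waiting time accumulates as the earlier jobs in the order run.
    elapsed, total = 0, 0
    for b in bursts:
        total += elapsed
        elapsed += b
    return total / len(bursts)

bursts = [6, 8, 7, 3]                   # P1..P4, all arriving at time 0
sjf_avg = average_wait(sorted(bursts))  # shortest job first
fcfs_avg = average_wait(bursts)         # arrival order, for comparison
```

Sorting moves the 3 ms job ahead of the 8 ms one, which is exactly the swap the optimality argument above describes.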
The SJF algorithm can be either preemptive or nonpreemptive. The choice arises
when a new process arrives at the ready queue while a previous process is still
executing: a preemptive SJF algorithm will preempt the currently executing
process if the newly arrived process has a shorter next CPU burst, whereas a
nonpreemptive SJF algorithm will allow the currently running process to finish
its CPU burst.
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first
scheduling.
As an example, consider the following four processes, with the length of the
CPU burst given in milliseconds:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
If the processes arrive at the ready queue at the times shown and need the
indicated burst times, then the resulting preemptive SJF schedule is as
depicted in the following Gantt chart:
Process P1 is started at time 0, since it is the only process in the queue.
Process P2 arrives at time 1. The remaining time for process P1 (7
milliseconds) is larger than the time required by process P2 (4 milliseconds),
so process P1 is preempted and process P2 is scheduled. The average waiting
time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 =
6.5 milliseconds.
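Preemptive SJF (shortest-remaining-time-first) can be simulated one time unit at a time, always running the ready process with the least remaining work. The arrival/burst pairs below, (0, 8), (1, 4), (2, 9), (3, 5), are the common textbook example and are an assumption here:

```python
def srtf_average_wait(procs):
    # procs: list of (arrival, burst). Advance one time unit at a time,
    # always running the ready process with the shortest remaining time.
    remaining = {i: burst for i, (_, burst) in enumerate(procs)}
    finish, t = {}, 0
    while remaining:
        ready = [i for i in remaining if procs[i][0] <= t]
        if not ready:
            t += 1  # CPU idles until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            del remaining[i]
    # waiting time = finish - arrival - burst for each process
    waits = [finish[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(procs)

avg = srtf_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)])
```

Re-evaluating the choice at every time unit is what lets P2's arrival at time 1 preempt P1.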
3.Priority Scheduling
The SJF algorithm is a special case of the general priority scheduling
algorithm.
A priority is associated with each process, and the CPU is allocated to the
process with the highest priority. Equal-priority processes are scheduled in
FCFS order.
An SJF algorithm is simply a priority algorithm where the priority (p) is the
inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower
the priority, and vice versa.
Note that we discuss scheduling in terms of high priority and low priority.
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7
or 0 to 4,095. However, there is no general agreement on whether 0 is the
highest or lowest priority. Some systems use low numbers to represent low
priority; others use low numbers for high priority. This difference can lead to
confusion. In this text, we assume that low numbers represent high priority. As
an example, consider the following set of processes, assumed to have arrived at
time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given in
milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Using priority scheduling, we would schedule these processes according to the
following Gantt chart:
The average waiting time is 8.2 milliseconds.
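With all processes arriving at time 0, priority scheduling reduces to serving the jobs sorted by priority number (low number = high priority, as assumed in the text). The burst/priority values below are the common textbook example and are an assumption here:

```python
def priority_waits(procs):
    # procs: list of (name, burst, priority); a low number means high priority.
    # All processes arrive at time 0, so we simply serve them in priority order.
    elapsed, waits = 0, {}
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = elapsed
        elapsed += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
w = priority_waits(procs)
avg = sum(w.values()) / len(w)
```

Equal priorities would need an FCFS tie-break, which `sorted` provides for free because Python's sort is stable.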
4.Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for
timesharing systems. It is similar to FCFS scheduling, but preemption is added
to switch between processes. A small unit of time, called a time quantum or
time slice, is defined. A time quantum is generally from 10 to 100 milliseconds.
The ready queue is treated as a circular queue. The CPU scheduler goes around
the ready queue, allocating the CPU to each process for a time interval of up to
1 time quantum.
To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to interrupt
after 1 time quantum, and dispatches the process.
The average waiting time under the RR policy is often long. Consider the
following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the
first time quantum, and the CPU is given to the next process in the queue,
process P2. Since process P2 does not need 4 milliseconds, it quits before its
time quantum expires. The CPU is then given to the next process, process P3.
Once each process has received 1 time quantum, the CPU is returned to process
P1 for additional time quanta. The resulting RR schedule is P1 (0-4), P2 (4-7),
P3 (7-10), P1 (10-30), and the average waiting time is 17/3 = 5.66 milliseconds.
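A FIFO queue plus a quantum counter is all the RR example needs; this sketch replays the bursts 24, 3, 3 from the example with a quantum of 4 ms:

```python
from collections import deque

def rr_average_wait(bursts, quantum):
    # All processes arrive at time 0 and are served round-robin.
    ready = deque(enumerate(bursts))  # the ready queue as a FIFO of (id, remaining)
    t, finish = 0, {}
    while ready:
        i, rem = ready.popleft()
        run = min(quantum, rem)  # run for up to one time quantum
        t += run
        if rem > run:
            ready.append((i, rem - run))  # unfinished: back to the tail
        else:
            finish[i] = t
    # waiting time = completion time - burst time (every arrival is at 0)
    waits = [finish[i] - b for i, b in enumerate(bursts)]
    return sum(waits) / len(bursts)

avg = rr_average_wait([24, 3, 3], quantum=4)  # 17/3 ms
```

Raising the quantum above 24 would make this degenerate into FCFS, while a very small quantum would make context-switch overhead dominate.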