
Operating System

Unit 1
Introduction

An operating system acts as an intermediary between the user of a
computer and the computer hardware. The purpose of an operating system is to
provide an environment in which a user can execute programs in a convenient
and efficient manner. An operating system is software that manages the
computer hardware.


Definition of Operating System:

An Operating system is a program that controls the execution of application


programs and acts as an interface between the user of a computer and the
computer hardware.
A more common definition is that the operating system is the one program
running at all times on the computer (usually called the kernel), with all else
being application programs.

An Operating system is concerned with the allocation of resources and services,


such as memory, processors, devices and information. The Operating System
correspondingly includes programs to manage these resources, such as a traffic
controller, a scheduler, memory management module, I/O programs, and a file
system.

Goals of Operating System


1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to be used in an
efficient manner.
3. Ability to Evolve: An OS should be constructed in such a way as to permit
the effective development, testing and introduction of new system functions
without at the same time interfering with service.

Operating System as User Interface (user view)


Every general purpose computer consists of hardware, an operating system,
system programs, and application programs. The hardware consists of the memory,
CPU, ALU, I/O devices, peripheral devices and storage devices. System programs
include compilers, loaders, editors, the OS itself, etc. Application programs
include business programs and database programs.

Every computer must have an operating system to run other programs. The
operating system controls and coordinates the use of the hardware among the various
system programs and application programs for the various users. It simply provides
an environment within which other programs can do useful work. The operating
system is a set of special programs that run on a computer system and allow it
to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to
the display screen and controlling peripheral devices.

Mruthula Sojan, Dept. of Computer Science, Seshadripuram Degree College, Mysuru Page 1

Computer System Components

1. Hardware – provides basic computing resources (CPU, memory, I/O devices).


2. Operating system – controls and coordinates the use of the hardware among
the various application programs for the various users.
3. Application programs – define the ways in which the system resources are
used to solve the computing problems of the users (compilers, database systems,
video games, business programs).
4. Users (people, machines, other computers).


System View of an Operating System


The operating system is a resource allocator:
- Manages all resources.
- Decides between conflicting requests for efficient and fair resource use.
The operating system is a control program:
- Controls execution of programs to prevent errors and improper use of the computer.

Operating System Views


- Resource allocator: to allocate resources (software and hardware) of the
computer system and manage them efficiently.
- Control program: controls execution of user programs and operation
of I/O devices.
- Kernel: the program that executes forever (everything else is an
application with respect to the kernel).

Operating system goals:


- Execute user programs and make solving user problems easier.
- Make the computer system convenient to use.

Main Functions of Operating System

Operating systems perform the following important functions:


i) Processor Management: assigning the processor to the different tasks
that must be performed by the computer system.
ii) Memory Management: allocating main memory and secondary
storage areas to system programs, as well as user programs and data.
iii) Input and Output Management: coordinating and assigning
the different input and output devices while one or more programs are being
executed.
iv) File System Management: the operating system is also responsible for
maintaining a file system, in which users are allowed to create, delete
and move files.

What is a command interpreter?

The part of an Operating System that interprets commands and carries them out.

A command interpreter is the part of a computer operating system that


understands and executes commands that are entered interactively by a human
being or from a program. In some operating systems, the command interpreter is
called the shell.


A command interpreter is a program which reads the instructions given by the
user. It then translates these instructions into the context of the operating system
and executes them. The command interpreter is also known as the 'shell'.
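The read, translate and execute cycle described above can be sketched in a few lines of Python. This is an illustrative toy, not part of the notes: the table of built-in commands (echo, exit) and their actions are hypothetical stand-ins for real operating-system services.

```python
# Minimal sketch of a command interpreter (shell): read a command line,
# translate it into an operating-system action, then execute it.
# The built-in command table here is hypothetical, for illustration only.

def make_shell(builtins):
    """Return a function that interprets one command line."""
    def interpret(line):
        tokens = line.split()          # read and parse the user's input
        if not tokens:
            return None
        name, args = tokens[0], tokens[1:]
        action = builtins.get(name)    # translate the name into an action
        if action is None:
            return f"{name}: command not found"
        return action(args)            # execute it
    return interpret

# Hypothetical built-ins standing in for real OS services.
shell = make_shell({
    "echo": lambda args: " ".join(args),
    "exit": lambda args: "bye",
})
```

For example, `shell("echo hello world")` returns `"hello world"`; a real shell would instead turn most commands into system calls or launch separate programs.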

History of Operating System


Operating systems have been evolving through the years. The following table shows
the history of OS generations.

Generation | Year        | Electronic devices used   | Types of OS and devices
First      | 1945 – 55   | Vacuum tubes              | Plug boards
Second     | 1955 – 1965 | Transistors               | Batch systems
Third      | 1965 – 1980 | Integrated circuits (IC)  | Multiprogramming
Fourth     | since 1980  | Large scale integration   | PC

Assembler:-Input to an assembler is an assembly language program. Output is
an object program plus information that enables the loader to prepare the object
program for execution.
Loader:-A loader is a routine that loads an object program and prepares it for
execution. There are various loading schemes: absolute, relocating and direct-
linking. In general, the loader must load, relocate, and link the object program.
Compiler:-A compiler is a program that accepts a source program in a high-level
language and produces a corresponding object program.

Types of Operating System

1. Batch System
Early computer systems did only one thing at a time, working through a list of
jobs. The computer system may be dedicated to a single program until its completion,
or it may be dynamically reassigned among a collection of active programs
in different stages of execution.
Batch operating system is one where programs and data are collected together in
a batch before processing starts. A job is predefined sequence of commands,
programs and data that are combined in to a single unit called job.
Memory management in batch system is very simple. Memory is usually
divided into two areas:
1. Operating system and
2. User program area.


Fig shows Memory Layout for a Simple Batch System

Operating System

User program area


Scheduling is also simple in a batch system: jobs are processed in the order of
submission, i.e. first-come, first-served.
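The first-come, first-served rule can be made concrete with a short sketch (a simplification, not part of the original notes): each job runs to completion in submission order, and we compute when each one finishes.

```python
# First-come, first-served (FCFS) batch scheduling sketch.
# Jobs are (name, run_time) pairs; each runs to completion in
# submission order, as in a simple batch system.

def fcfs_completion_times(jobs):
    """Return {job_name: completion_time} under FCFS scheduling."""
    clock = 0
    finished = {}
    for name, run_time in jobs:
        clock += run_time          # the CPU is dedicated to this job
        finished[name] = clock     # next job starts only after this one ends
    return finished

# Example batch: three jobs submitted in this order.
times = fcfs_completion_times([("job1", 3), ("job2", 5), ("job3", 2)])
# times == {"job1": 3, "job2": 8, "job3": 10}
```

Note that the short job3 must wait for both earlier jobs, which is exactly why batch systems offer no interactivity.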

Batch processing is a technique in which the operating system collects
programs and data together in a batch before processing starts. The operating
system does the following activities related to batch processing.

 OS defines a job which has a predefined sequence of commands, programs
and data as a single unit.
 OS keeps a number of jobs in memory and executes them without any
manual intervention.
 Jobs are processed in the order of submission, i.e. first-come, first-served.
 When a job completes its execution, its memory is released and the output
for the job gets copied into an output spool for later printing or
processing.


An example of such a system is generating monthly bank statements.


Advantages

 Batch processing shifts much of the work of the operator to the computer.
 Increased performance, as a new job gets started as soon as the previous
job finishes, without any manual intervention.

Disadvantages

 Difficult to debug programs.
 A job could enter an infinite loop.
 Due to the lack of a protection scheme, one batch job can affect pending jobs.

Spooling

Spooling is an acronym for simultaneous peripheral operations on-line.
Spooling refers to putting data of various I/O jobs in a buffer. This buffer is a
special area in memory or on hard disk which is accessible to I/O devices.
The operating system does the following activities related to spooling.

 OS handles I/O device data spooling as devices have different data access
rates.
 OS maintains the spooling buffer which provides a waiting station where
data can rest while the slower device catches up.
 OS maintains parallel computation through spooling, as the computer can
perform I/O in a parallel fashion. It becomes possible to have the
computer read data from a tape, write data to disk and write out to a
line printer while it is doing its computing task.


Advantages

 The spooling operation uses a disk as a very large buffer.


 Spooling is capable of overlapping I/O operation for one job with
processor operations for another job.
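The idea of the spool as a large buffer between a fast producer and a slow device can be sketched as follows. This is a hedged illustration in Python, assuming a simple in-memory queue standing in for the disk buffer; it is not how a real OS spooler is implemented.

```python
# Spooling sketch: output is placed in a buffer (the spool) at CPU
# speed, and a slower device drains it later. This decouples the
# producing job from the device's data access rate.
from collections import deque

class Spool:
    def __init__(self):
        self.buffer = deque()       # the "waiting station" for data

    def submit(self, data):
        """Fast path: the job just drops data in the buffer and moves on."""
        self.buffer.append(data)

    def device_drain(self, n=1):
        """Slow path: the device takes items as it catches up."""
        taken = []
        for _ in range(n):
            if self.buffer:
                taken.append(self.buffer.popleft())
        return taken

spool = Spool()
for line in ["page 1", "page 2", "page 3"]:
    spool.submit(line)            # the job finishes quickly
first = spool.device_drain(2)     # the printer catches up later
# first == ["page 1", "page 2"]
```

Because `submit` returns immediately, the CPU can go on computing while the device works through the buffer, which is the overlap the advantages above describe.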

2. Multitasking(Time Sharing System)

Multitasking refers to the execution of multiple jobs by the CPU
simultaneously by switching between them. Switches occur so frequently that
the users may interact with each program while it is running. The operating
system does the following activities related to multitasking.

Time sharing, or multitasking, is a logical extension of multiprogramming.


Multiple jobs are executed by the CPU switching between them, but the
switches occur so frequently that the users may interact with each program
while it is running.

 The user gives instructions to the operating system or to a program


directly, and receives an immediate response.
 Operating System handles multitasking in the way that it can handle
multiple operations / executes multiple programs at a time.
 Multitasking Operating Systems are also known as Time-sharing systems.
 These Operating Systems were developed to provide interactive use of a
computer system at a reasonable cost.
 A time-shared operating system uses the concepts of CPU scheduling and
multiprogramming to provide each user with a small portion of a time-
shared CPU.
 Each user has at least one separate program in memory.


 A program that is loaded into memory and is executing is commonly


referred to as a process.
 When a process executes, it typically executes for only a very short time
before it either finishes or needs to perform I/O.
 Since interactive I/O typically runs at people speeds, it may take a long
time to complete. During this time the CPU can be utilized by another
process.
 Operating system allows the users to share the computer simultaneously.
Since each action or command in a time-shared system tends to be short,
only a little CPU time is needed for each user.
 As the system switches CPU rapidly from one user/program to the next,
each user is given the impression that he/she has his/her own CPU,
whereas actually one CPU is being shared among many users.
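The rapid switching that gives each user the impression of a private CPU can be sketched as a round-robin simulation. This is an illustrative model (fixed one-unit time slice, hypothetical job lengths), not part of the notes.

```python
# Time-sharing sketch: the CPU is switched among processes in fixed
# time slices (round robin), so every program makes steady progress
# and interactive users see frequent responses.
from collections import deque

def round_robin(processes, quantum):
    """processes: {name: remaining_time}. Returns the dispatch order."""
    queue = deque(processes.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                   # this process gets the CPU
        remaining -= quantum                 # ...for one short time slice
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the queue
    return order

order = round_robin({"A": 3, "B": 2, "C": 1}, quantum=1)
# order == ["A", "B", "C", "A", "B", "A"]
```

Every process is dispatched within a few slices of becoming ready, which is why each user perceives the machine as dedicated even though one CPU is shared.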

3. Multiprogramming

When two or more programs reside in memory at the same time, sharing the
processor is referred to as multiprogramming. Multiprogramming
assumes a single shared processor. Multiprogramming increases CPU utilization
by organizing jobs so that the CPU always has one to execute.

The operating system keeps several jobs in memory at a time. This set of jobs is
a subset of the jobs kept in the job pool. The operating system picks and begins
to execute one of the jobs in memory.
Multiprogrammed systems provide an environment in which the various system
resources are utilized effectively, but they do not provide for user interaction
with the computer system.
Jobs entering the system are kept in memory. The operating system picks a job
from memory and begins to execute it. Having several


programs in memory at the same time requires some form of memory management.
A multiprogramming operating system monitors the state of all active programs
and system resources. This ensures that the CPU is never idle unless there are
no jobs to execute.

Following figure shows the memory layout for a multiprogramming system.

Operating system does the following activities related to multiprogramming.

 The operating system keeps several jobs in memory at a time.
 This set of jobs is a subset of the jobs kept in the job pool.
 The operating system picks and begins to execute one of the jobs in
memory.
 The multiprogramming operating system monitors the state of all active
programs and system resources, using memory management programs to
ensure that the CPU is never idle unless there are no jobs.
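A toy simulation makes the "CPU never idle" claim concrete. This is a hedged sketch with simplifying assumptions not in the notes: every CPU burst lasts one tick, every I/O wait lasts a fixed number of ticks, and dispatch is first-come, first-served.

```python
# Multiprogramming sketch: several jobs are kept in memory; whenever
# the running job starts I/O, the OS dispatches another ready job, so
# the CPU idles only when no job is ready.

def simulate(jobs, io_wait, ticks):
    """jobs: {name: cpu_bursts_left}; io_wait: ticks each I/O takes.
    Returns a trace of who held the CPU each tick ('idle' if no one)."""
    ready = list(jobs)            # all jobs start ready
    blocked = {}                  # name -> ticks of I/O remaining
    trace = []
    for _ in range(ticks):
        # I/O devices make progress in parallel with the CPU.
        for name in list(blocked):
            blocked[name] -= 1
            if blocked[name] == 0:
                del blocked[name]
                ready.append(name)        # its I/O finished: ready again
        if ready:
            name = ready.pop(0)           # dispatch the first ready job
            trace.append(name)
            jobs[name] -= 1               # one CPU burst done
            if jobs[name] > 0:
                blocked[name] = io_wait   # job now waits for I/O
        else:
            trace.append("idle")          # idle only when no job is ready
    return trace

trace = simulate({"A": 2, "B": 2}, io_wait=2, ticks=6)
# trace == ["A", "B", "A", "B", "idle", "idle"]
```

With a single job the CPU would sit idle during every I/O wait; with two jobs in memory, the I/O of one overlaps the computation of the other, which is the whole point of multiprogramming.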

Advantages

 High and efficient CPU utilization.


 User feels that many programs are allotted CPU almost simultaneously.

Disadvantages

 CPU scheduling is required.


 To accommodate many jobs in memory, memory management is
required.


4. Real time system: Real time systems are usually dedicated, embedded
systems.
They typically read from and react to sensor data. The system must guarantee
response to events within fixed periods of time to ensure correct performance.

Operating system does the following activities related to real time system
activity.

 In such systems, Operating Systems typically read from and react to


sensor data.
 The Operating system must guarantee response to events within fixed
periods of time to ensure correct performance.

5. Distributed System: Distributes computation among several physical


processors. The processors do not share memory or a clock. Instead, each
processor has its own local memory. They communicate with each other
through various communication lines.

Distributed environment refers to multiple independent CPUs or processors in a


computer system. Operating system does the following activities related to
distributed environment.

 OS Distributes computation logics among several physical processors.


 The processors do not share memory or a clock.
 Instead, each processor has its own local memory.
 OS manages the communications between the processors. They
communicate with each other through various communication lines.

Operating System Components


Modern operating systems share the goal of supporting a common set of system
components. The system components are:

1. Process Management
2. Main Memory Management
3. File Management
4. Secondary Storage Management
5. I/O System Management
6. Networking
7. Protection System
8. Command Interpreter System


Process Management

The operating system manages many kinds of activities ranging from user
programs to system programs like printer spooler, name servers, file server etc.
Each of these activities is encapsulated in a process. A process includes the
complete execution context (code, data, PC, registers, OS resources in use etc.).

It is important to note that a process is not a program. A process is only ONE
instance of a program in execution. Many processes can be running the
same program. The five major activities of an operating system in regard to
process management are

 Creation and deletion of user and system processes.


 Suspension and resumption of processes.
 A mechanism for process synchronization.
 A mechanism for process communication.
 A mechanism for deadlock handling.

Main-Memory Management

Primary memory, or main memory, is a large array of words or bytes. Each
word or byte has its own address. Main memory provides storage that can be
accessed directly by the CPU. That is to say, for a program to be executed, it must
be in main memory.

The major activities of an operating system in regard to memory management are:

 Keep track of which part of memory are currently being used and by
whom.
 Decide which process is loaded into memory when memory space
becomes available.
 Allocate and deallocate memory space as needed.

File Management

A file is a collection of related information defined by its creator. Computers can
store files on disk (secondary storage), which provides long-term storage.
Some examples of storage media are magnetic tape, magnetic disk and optical
disk. Each of these media has its own properties, such as speed, capacity, data
transfer rate and access method.

Files are normally organized into directories to ease their use. These
directories may contain files and other directories.


The five major activities of an operating system in regard to file
management are

1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto secondary storage.
5. The backup of files on stable storage media.

I/O System Management

I/O subsystem hides the peculiarities of specific hardware devices from the user.
Only the device driver knows the peculiarities of the specific device to which it
is assigned.

Secondary-Storage Management

Generally speaking, systems have several levels of storage, including primary


storage, secondary storage and cache storage. Instructions and data must be
placed in primary storage or cache to be referenced by a running program.
Because main memory is too small to accommodate all data and programs, and
its data are lost when power is lost, the computer system must provide
secondary storage to back up main memory. Secondary storage consists of
tapes, disks, and other media designed to hold information that will eventually
be accessed in primary storage. Storage (primary, secondary, cache) is ordinarily
divided into bytes or words consisting of a fixed number of bytes. Each location in
storage has an address; the set of all addresses available to a program is called
an address space.

The three major activities of an operating system in regard to secondary storage


management are:

1. Managing the free space available on the secondary-storage device.


2. Allocation of storage space when new files have to be written.
3. Scheduling the requests for memory access.

Networking

A distributed system is a collection of processors that do not share memory,


peripheral devices, or a clock. The processors communicate with one another
through communication lines called network. The communication-network
design must consider routing and connection strategies, and the problems of
contention and security.


Protection System

If a computer system has multiple users and allows the concurrent execution of
multiple processes, then the various processes must be protected from one
another's activities. Protection refers to the mechanisms for controlling the access
of programs, processes, or users to the resources defined by the computer system.

Command Interpreter System

A command interpreter is an interface of the operating system with the user.
The user gives commands which are executed by the operating system (usually by
turning them into system calls). The main function of a command interpreter is
to get and execute the next user-specified command. The command interpreter is
usually not part of the kernel, since multiple command interpreters (shells, in
UNIX terminology) may be supported by an operating system, and they do not
really need to run in kernel mode. There are two main advantages to separating
the command interpreter from the kernel.

1. If we want to change the way the command interpreter looks, i.e. change
the interface of the command interpreter, we can do so if the command
interpreter is separate from the kernel, since we cannot change the code
of the kernel itself.
2. If the command interpreter is a part of the kernel, it is possible for a
malicious process to gain access to parts of the kernel that it should
not have. To avoid this scenario, it is advantageous to keep the
command interpreter separate from the kernel.

Operating System Services

An operating system provides services to programs and to the users of those
programs. It provides an environment for the execution of programs.
The services provided differ from one operating system to another.
The operating system makes the programming task easier.
The common services provided by the operating system are listed below.

1. Program execution
2. I/O operation
3. File system manipulation
4. Communications
5. Error detection
6. Resource Allocation
7. Accounting
8. Protection


1. Program execution: Operating system loads a program into memory and


executes the program. The program must be able to end its execution, either
normally or abnormally.
2. I/O Operation : I/O means any file or any specific I/O device. Program may
require any I/O device while running. So operating system must provide the
required I/O.
3. File system manipulation: Program needs to read a file or write a file. The
operating system gives the permission to the program for operation on file.
4. Communication: Data transfer between two processes is sometimes required.
The two processes may be on the same computer, or on different computers
connected through a computer network. Communication may be implemented by
two methods:
a. Shared memory
b. Message passing.
5. Error detection: Errors may occur in the CPU, in I/O devices or in the memory
hardware. The operating system constantly needs to be aware of possible errors.
It should take the appropriate action to ensure correct and consistent computing.
An operating system with multiple users provides the following services.
6. Resource Allocation
7. Accounting
8. Protection
6. Resource Allocation:
If there is more than one user or job running at the same time, then resources
must be allocated to each of them. The operating system manages different types
of resources; some, such as main memory, CPU cycles and file storage, require
special allocation code, while others require only general request and release
code. For allocating the CPU, CPU scheduling algorithms are used for better
utilization of the CPU. CPU scheduling routines consider the speed of the CPU,
the number of available registers and other required factors.
7. Accounting:
Logs of each user must be kept. It is also necessary to keep a record of which
user uses how much and what kinds of computer resources. This log is used for
accounting purposes. The accounting data may be used for statistics or for
billing. It is also used to improve system efficiency.
8. Protection:
Protection involves ensuring that all access to system resources is controlled.
Security starts with each user having to authenticate to the system, usually by
means of a password. External I/O devices must also be protected from invalid
access attempts.


In protection, all access to the resources is controlled. In a multiprocess
environment, it is possible for one process to interfere with another, or with
the operating system, so protection is required.

System Calls

The system call provides an interface to the operating system services.

Application developers often do not have direct access to the system calls, but
can access them through an application programming interface (API). The
functions that are included in the API invoke the actual system calls. By using
the API, certain benefits can be gained:

 Portability: as long as a system supports an API, any program using that API
can compile and run.
 Ease of Use: using the API can be significantly easier than using the
actual system call.
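The relationship between an API and the underlying system calls can be seen in Python's os module, which exposes thin wrappers over OS services: on POSIX systems, for instance, os.getpid() invokes the getpid system call, and os.read()/os.write() invoke read and write. The pipe example below is an illustration, not from the notes.

```python
# Sketch: applications usually reach operating-system services through
# an API rather than issuing raw system calls. Python's os module
# plays this role; on POSIX, each call below invokes the system call
# of the same name, with the trap into the kernel hidden from us.
import os

pid = os.getpid()            # API call -> getpid system call

r, w = os.pipe()             # pipe system call: two file descriptors
n = os.write(w, b"hello")    # write system call
data = os.read(r, n)         # read system call
os.close(r)
os.close(w)                  # close system calls
# pid > 0, n == 5, data == b"hello"
```

One line of portable API replaces the machine-specific work of loading registers and trapping into the kernel, which is exactly the portability and ease-of-use benefit listed above.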

System Call Parameters

Three general methods exist for passing parameters to the OS:

1. Parameters can be passed in registers.


2. When there are more parameters than registers, parameters can be stored
in a block and the block address can be passed as a parameter to a
register.
3. Parameters can also be pushed on or popped off the stack by the
operating system.


Types of System Calls

There are 5 different categories of system calls:


Process control, file manipulation, device manipulation, information
maintenance and communication.

1. Process Control:-A running program needs to be able to stop execution


either normally or abnormally. When execution is stopped abnormally, often a
dump of memory is taken and can be examined with a debugger.

2. File Management:-Some common system calls are create, delete, read,


write, reposition, or close. Also, there is a need to determine the file attributes –
get and set file attribute. Many times the OS provides an API to make these
system calls.
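The file-management calls named above (create, write, reposition, read, close, delete) can be exercised through the os module's wrappers around the corresponding system calls. This is a hedged sketch; the file path used is a hypothetical temporary name, not anything from the notes.

```python
# Sketch of the file-management system calls: create, write,
# reposition, read, close and delete, via the os module's thin
# wrappers. The demo file name is arbitrary.
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "oscall_demo.txt")

fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_TRUNC)  # create/open
os.write(fd, b"hello, syscalls")                          # write
os.lseek(fd, 0, os.SEEK_SET)                              # reposition
data = os.read(fd, 5)                                     # read
os.close(fd)                                              # close
os.unlink(path)                                           # delete
# data == b"hello"
```

A higher-level API such as Python's built-in open() wraps this same sequence, which is the "OS provides an API to make these system calls" point above.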

3. Device Management:-Process usually require several resources to execute, if


these resources are available, they will be granted and control returned to the
user process. These resources are also thought of as devices. Some are physical,
such as a video card, and others are abstract, such as a file.

User programs request the device, and when finished they release the device.
Similar to files, we can read, write, and reposition the device.

4. Information Management:-Some system calls exist purely for transferring


information between the user program and the operating system. An example of
this is time, or date.

The OS also keeps information about all its processes and provides system calls
to report this information.

5. Communication:-There are two models of interprocess communication, the


message-passing model and the shared memory model.

 Message-passing uses a common mailbox to pass messages between


processes.


 Shared memory uses certain system calls to create and gain access to
regions of memory owned by other processes. The two processes exchange
information by reading and writing in the shared data.
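The two models can be contrasted with a small in-process analogy, a hedged sketch rather than real inter-process machinery: a queue stands in for the message-passing mailbox, and a plain dictionary shared between threads stands in for a shared-memory region.

```python
# In-process analogy of the two IPC models. Real processes would use
# OS facilities (pipes, mailboxes, shared-memory segments); threads
# and in-memory objects stand in for them here.
import queue
import threading

# --- Message passing: data travels through a common mailbox.
mailbox = queue.Queue()

def sender():
    mailbox.put("ping")           # sender deposits a message

t = threading.Thread(target=sender)
t.start(); t.join()
received = mailbox.get()          # receiver takes it out

# --- Shared memory: both sides read and write one common region.
shared = {"value": 0}             # stands in for a shared region

def writer():
    shared["value"] = 42          # one side writes into the shared data

t = threading.Thread(target=writer)
t.start(); t.join()
read_back = shared["value"]       # the other side reads it directly
```

Message passing copies data through the kernel but needs no synchronization by the application; shared memory avoids the copy but the processes must coordinate their accesses themselves.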

System Programs

System programs provide a convenient environment for program development


and execution.
They can be divided into:
1. File manipulation
2. Status information
3. File modification
4. Communications
5. Application programs
6. Programming language support
7. Program loading and execution

These programs are not usually part of the OS kernel, but are part of the overall
operating system.

1. File management:-These programs create, delete, copy, rename, print,


dump, list, and generally manipulate files and directories.

2. Status information:-Some programs simply request the date and time, and
other simple information. Others provide detailed performance, logging, and
debugging information. The output of these programs is often sent to a terminal
window or GUI window.

Note:-The registry is in this category.

3. File modification:-Programs such as text editors are used to create, and


modify files.

4. Communications:-These programs provide the mechanism for creating a
virtual connection among processes, users, and other computers. Email and web
browsers are a couple of examples.

5. Programming language support:-Compilers, assemblers and interpreters for
common programming languages are often provided to the user with the OS.

6. Program loading and execution: - Once a program is assembled or


compiled, it must be loaded into memory to be executed. The system may


provide absolute loaders, relocating loaders, linkage editors and overlay loaders.
Debugging systems for either high-level languages or machine language are
needed.

Concept of Process
A process is a sequential program in execution. A process defines the
fundamental unit of computation for the computer. The components of a process are:
1. Object Program
2. Data
3. Resources
4. Status of the process execution.
The object program is the code to be executed. The data is used for executing the
program. While executing, the program may require some resources. The last
component is used for verifying the status of the process execution. A process
can run to completion only when all requested resources have been allocated to
the process. Two or more processes could be executing the same program, each
using their own data and resources.
Processes and Programs
A process is a dynamic entity, that is, a program in execution. A process is a
sequence of instruction executions. A process exists in a limited span of time.
Two or more processes could be executing the same program, each using their
own data and resources.
A program is a static entity made up of program statements. A program contains
instructions. A program exists at a single place in space and continues to exist. A
program does not perform any action by itself.
Process State
When a process executes, it changes state. The process state is defined as the
current activity of the process. Fig. 3.1 shows the general form of the process state
transition diagram. There are five process states, and each process is in exactly
one of them. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)
1. New: A process that has just been created.
2. Ready: Ready processes are waiting to have the processor allocated to them
by the operating system so that they can run.
3. Running: The process that is currently being executed. A running process
possesses all the resources needed for its execution, including the processor.
4. Waiting: A process that cannot execute until some event occurs, such as the
completion of an I/O operation. The running process may become suspended by
invoking an I/O module.


5. Terminated: A process that has been released from the pool of executable
processes by the operating system.

Fig: Diagram for Process State


Whenever a process changes state, the operating system reacts by placing the process's PCB in the list that corresponds to its new state. Only one process can be running on any processor at any instant, while many processes may be in the ready and waiting states.
Suspended Processes
Characteristics of suspend process
1. Suspended process is not immediately available for execution.
2. The process may or may not be waiting on an event.
3. A process may be suspended (to prevent its execution) by the OS, by its parent process, by the process itself, or by another agent.
4. Process may not be removed from the suspended state until the agent orders
the removal.
Swapping is used to move all of a process from main memory to disk: the OS suspends the process and transfers its image to disk.
Reasons for process suspension
1. Swapping
2. Timing
3. Interactive user request
4. Parent process request
Swapping: OS needs to release required main memory to bring in a process that
is ready to execute.
Timing: Process may be suspended while waiting for the next time interval.
Interactive user request: Process may be suspended for debugging purpose by
user.
Parent process request: To modify the suspended process or to coordinate the
activity of various descendants.


Process Control Block (PCB)


Each process is represented by a process control block (PCB). The PCB is a data structure used by the operating system to group together all the information it needs about a particular process.
Fig. shows the process control block.

1. Pointer: Pointer points to another process control block. Pointer is used for
maintaining the scheduling list.
2. Process State: Process state may be new, ready, running, waiting and so on.
3. Program Counter: It indicates the address of the next instruction to be
executed for this process.
4. Event information: For a process in the blocked state this field contains
information concerning the event for which the process is waiting.
5. CPU registers: These include general-purpose registers, stack pointers, index registers, accumulators, etc. The number and type of registers depend entirely on the computer architecture.
6. Memory Management Information: This information may include the
value of base and limit register. This information is useful for deallocating the
memory when the process terminates.
7. Accounting Information: This information includes the amount of CPU and
real time used, time limits, job or process numbers, account numbers etc.
Process control block also includes the information about CPU scheduling,
I/O resource management, file management information, priority and so on.
The PCB simply serves as the repository for any information that may vary
from process to process.
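The fields listed above can be sketched as a data structure (an illustrative Python sketch; the field names are chosen for this example and do not come from any particular operating system):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int
    state: str = "new"                  # new, ready, running, waiting, terminated
    program_counter: int = 0            # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register contents
    base_register: int = 0              # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0          # accounting information
    event_info: Optional[str] = None    # event a blocked process is waiting for
    next_pcb: Optional["PCB"] = None    # pointer used to link PCBs in scheduling lists
```

The `next_pcb` pointer is what lets the operating system string PCBs together into the ready list and the device queues described below.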


When a process is created, hardware registers and flags are set to the values
provided by the loader or linker. Whenever that process is suspended, the
contents of the processor register are usually saved on the stack and the pointer
to the related stack frame is stored in the PCB. In this way, the hardware state
can be restored when the process is scheduled to run again.
Process Management / Process Scheduling

The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.

Process scheduling is an essential part of a Multiprogramming operating


system. Such operating systems allow more than one process to be loaded into
the executable memory at a time and loaded process shares the CPU using time
multiplexing.

Scheduling Queues
When the process enters into the system, they are put into a job queue. This
queue consists of all processes in the system. The operating system also has
other queues.
A device queue is the list of processes waiting for a particular I/O device. Each device has its own device queue.
Fig. shows the queuing diagram of process scheduling. In the figure, each queue is represented by a rectangular box.


The circles represent the resources that serve the queues.


The arrows indicate the flow of processes in the system.
Queues are of two types: the ready queue and the set of device queues. A newly arrived process is put in the ready queue, where it waits until the CPU is allocated to it. Once the CPU is assigned to the process, the process executes. While the process is executing, one of several events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

Two State Process Model


Process may be in one of two states:
a) Running
b) Not Running
When a new process is created by the OS, it enters the system in the not-running state.
Processes that are not running are kept in a queue, waiting their turn to execute. Each entry in the queue is a pointer to a particular process.
The queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers

Schedulers are of three types.


1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
1. Long Term Scheduler
It is also called the job scheduler. The long-term scheduler determines which programs are admitted to the system for processing. The job scheduler selects processes from the queue and loads them into memory for execution, where they become available to the CPU scheduler. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems have no long-term scheduler. It is the long-term scheduler that moves a process from the new state to the ready state.


2. Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It moves a process from the ready state to the running state: the CPU scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them. The short-term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. The short-term scheduler is faster than the long-term scheduler.
3. Medium Term Scheduler
Medium-term scheduling is part of the swapping function. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.

Medium term scheduler is shown in the Fig.


Fig: Queueing diagram with medium term scheduler
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion, so it is removed from memory to make space for other processes. Moving a suspended process to secondary storage is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Comparison between Schedulers
1. Role: the long-term scheduler is the job scheduler; the short-term scheduler is the CPU scheduler; the medium-term scheduler performs swapping.
2. Speed: the long-term scheduler is slower than the short-term scheduler; the short-term scheduler is very fast; the medium-term scheduler's speed is in between.
3. Degree of multiprogramming: the long-term scheduler controls it; the short-term scheduler has less control over it; the medium-term scheduler reduces it.
4. In time-sharing systems: the long-term scheduler is absent or minimal; the short-term scheduler is minimal; time-sharing systems use the medium-term scheduler.
5. Selection: the long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects from among the processes that are ready to execute; under the medium-term scheduler, a process can be reintroduced into memory and its execution continued.
6. State transition: the long-term scheduler handles new to ready; the short-term scheduler handles ready to running.
7. Goal: the long-term scheduler selects a good mix of I/O-bound and CPU-bound processes; the short-term scheduler selects a new process for the CPU quite frequently.

Context Switch

When the scheduler switches the CPU from executing one process to executing another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process descriptor. The context of a process is represented in its process control block. Context-switch time is pure overhead. Context switching can significantly affect performance, since modern computers have many general and status registers to be saved.
Context-switch times are highly dependent on hardware support. A context switch requires (n + m) × b × K time units to save the state of a processor with n general registers and m status registers, assuming b store operations are required to save each register and each store instruction requires K time units. Some hardware systems employ two or more sets of processor registers to reduce the amount of context-switching time.
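As a worked example of the formula above (the register counts and store cost below are hypothetical, chosen only to show the arithmetic):

```python
def context_save_cost(n: int, m: int, b: int, k: int) -> int:
    """Time units to save n general and m status registers,
    with b store operations per register and k time units per store:
    (n + m) * b * k."""
    return (n + m) * b * k

# e.g. 16 general + 8 status registers, 1 store per register,
# 2 time units per store: (16 + 8) * 1 * 2 = 48 time units
cost = context_save_cost(16, 8, 1, 2)
```

Doubling the number of registers to save doubles the cost, which is why hardware with multiple register sets can cut context-switch time so sharply.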
When the process is switched the information stored is:
1. Program Counter
2. Scheduling Information
3. Base and limit register value
4. Currently used register
5. Changed State
6. I/O State
7. Accounting

Operation on Processes
Several operations are possible on the process. Process must be created
and deleted dynamically. Operating system must provide the environment for
the process operation. We discuss the two main operations on processes.
1. Create a process
2. Terminate a process

1. Create Process
The operating system creates a new process with the specified or default attributes and identifier. A process may create several new subprocesses.
The syntax for creating a new process is:
CREATE (process_id, attributes)
Two terms are used here: parent process and child process. The parent is the creating process; the child is the process created by the parent. A child process may in turn create further subprocesses, so process creation forms a tree of processes. When the operating system services a CREATE system call, it obtains a new process control block from the pool of free memory, fills its fields with the provided and default parameters, and inserts the PCB into the ready list, thus making the specified process eligible to run.
When a process is created, it requires some parameters. These are priority, level
of privilege, requirement of memory, access right, memory protection
information etc. Process will need certain resources, such as CPU time,
memory, files and I/O devices to complete the operation. When process creates
a sub process, that sub process may obtain its resources directly from the
operating system. Otherwise it uses the resources of parent process.
When a process creates a new process, two possibilities exist in terms of
execution.
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
For address space, two possibilities occur:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
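A minimal sketch of process creation on a POSIX system, using Python's `os.fork` (POSIX-only; the exit status 7 is an arbitrary value chosen for illustration). The parent here takes the second execution possibility above: it waits until its child has terminated, and the child is a duplicate of the parent's address space:

```python
import os

def create_child() -> int:
    """Parent creates a child (CREATE); the child runs its own code and exits."""
    pid = os.fork()               # POSIX-only: duplicate the calling process
    if pid == 0:
        # Child: its address space is a copy of the parent's.
        os._exit(7)               # terminate with a status the parent can read
    # Parent: wait until the child has terminated (DELETE reclaims its PCB).
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Calling `create_child()` returns 7, the exit status reported by the terminated child. On systems with an `exec` family, the child would instead load a new program into its duplicated address space, which is the first address-space possibility above.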
2. Terminate a Process
The DELETE system call is used for terminating a process. A process may be deleted by itself or by another process; a process can cause the termination of another process via an appropriate system call. The operating system reacts by reclaiming all resources allocated to the specified process and closing files opened by or for the process. The PCB is also removed from its place of residence in the list and returned to the free pool. The DELETE service is normally invoked as part of orderly program termination.
The following are reasons why a parent process may terminate a child process.
1. The task given to the child is no longer required.
2. Child has exceeded its usage of some of the resources that it has been
allocated.
3. Operating system does not allow a child to continue if its parent terminates.

Co-operating Processes
A co-operating process is one that can affect or be affected by other processes while executing. If a process is sharing data with

other processes, then it is called a co-operating process. The benefits of co-operating processes are:
1. Sharing of information
2. Increases computation speed
3. Modularity
4. Convenience
Co-operating processes share information such as files and memory, so the system must provide an environment that allows concurrent access to these resources. Computation speed increases if the computer has multiple processing elements connected together. Modularity means the system is constructed in a modular fashion, with system functions divided into a number of modules.
For example, suppose process 1 executes printf("abc") while process 2 executes printf("CBA"). Possible outputs include CBAabc, abCcBA, and abcCBA.
The behaviour of co-operating processes is nondeterministic: it depends on the relative execution sequence and cannot be predicted a priori. Co-operating processes are also not reproducible. Given the two writers above, different runs can produce different outputs, and one cannot always tell which character came from which process (which process output the first "C" in "abCcBA"?). Subtle state sharing occurs here via the terminal. Not just anything can happen, though: each process's own characters keep their order, so an output such as "bacCBA" cannot occur.
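The set of outputs the two printing processes could jointly produce can be enumerated mechanically (an illustrative Python sketch: each process's characters keep their relative order, and the scheduler chooses the interleaving):

```python
from itertools import combinations

def interleavings(a: str, b: str) -> set:
    """All outputs two co-operating processes could produce on a shared
    terminal: each process's own characters stay in order, but the
    scheduler may interleave the two sequences arbitrarily."""
    n = len(a) + len(b)
    results = set()
    for slots in combinations(range(n), len(a)):
        chosen = set(slots)            # output positions taken by process 1
        ai, bi = iter(a), iter(b)
        out = [next(ai) if i in chosen else next(bi) for i in range(n)]
        results.add("".join(out))
    return results

outputs = interleavings("abc", "CBA")
```

For "abc" and "CBA" there are C(6, 3) = 20 possible outputs, including "abcCBA", "CBAabc", and "abCcBA"; a string like "bacCBA" never appears, because process 1's characters would be out of order.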

Interprocess Communication (IPC)


Processes executing concurrently in the operating system may be either
independent processes or cooperating processes. A process is independent if it
cannot affect or be affected by the other processes executing in the system.
Any process that does not share data with any other process is independent. A
process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other
processes is a cooperating process.
There are several reasons for providing an environment that allows process
cooperation:
• Information sharing. Since several users may be interested in the same piece
of information (for instance, a shared file), we must provide an environment to
allow concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must
break it into subtasks, each of which will be executing in parallel with the
others. Notice that such a speedup can be achieved only if the computer has
multiple processing elements (such as CPUs or I/O channels).
• Modularity. We may want to construct the system in a modular fashion,
dividing the system functions into separate processes or threads.


• Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC)
mechanism that will allow them to exchange data and information. There are
two fundamental models of interprocess communication: (1) shared memory
and (2) message passing. In the shared-memory model, a region of memory
that is shared by cooperating processes is established. Processes can then
exchange information by reading and writing data to the shared region. In the
message passing model, communication takes place by means of messages
exchanged between the cooperating processes. The two communications models
are contrasted in Figure.

Both of the models just discussed are common in operating systems, and many
systems implement both. Message passing is useful for exchanging smaller
amounts of data, because no conflicts need be avoided. Message passing is also
easier to implement than is shared memory for intercomputer communication.
Shared memory allows maximum speed and convenience of communication, as
it can be done at memory speeds when within a computer. Shared memory is
faster than message passing, as message-passing systems are typically
implemented using system calls and thus require the more time consuming task
of kernel intervention. In contrast, in shared-memory systems, system calls are
required only to establish shared-memory regions. Once shared memory is
established, all accesses are treated as routine memory accesses, and no


assistance from the kernel is required. In the remainder of this section, we


explore each of these IPC models in more detail.
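A minimal sketch of the message-passing model using Python's multiprocessing module (the queue stands in for the kernel-provided message channel; the message text is arbitrary):

```python
from multiprocessing import Process, Queue

def child(q: Queue) -> None:
    # send(): the child posts a message on the shared channel
    q.put("hello via message passing")

def exchange() -> str:
    q = Queue()                           # kernel-backed message channel
    p = Process(target=child, args=(q,))
    p.start()
    msg = q.get()                         # receive(): blocks until a message arrives
    p.join()
    return msg
```

Note that every `put`/`get` crosses the kernel, which is why the text above describes message passing as slower than shared memory, where only establishing the region needs system calls.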

THREAD

Introduction of Thread
A thread is a flow of execution through the process code, with its own
program counter, system registers and stack. Threads are a popular way to
improve application performance through parallelism. A thread is sometimes
called a light weight process.
Threads represent a software approach to improving operating-system performance by reducing overhead; a thread with a single flow of control is equivalent to a classical process. Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control.
Fig. shows the single and multithreaded process.

Threads have been successfully used in implementing network servers. They


also provide a suitable foundation for parallel execution of applications on
shared memory multiprocessors.
Many operating system kernels are now multithreaded: several threads operate
in the kernel, and each thread performs a specific task, such as managing
services or interrupt handling. Threads also play a vital role in remote procedure
call (RPC) systems. RPCs allow interprocess communication by providing a communication mechanism similar to ordinary function or procedure calls.
Typically, RPC servers are multithreaded. When a server receives a message, it
services the message using a separate thread.
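A minimal sketch of several threads sharing data within one process (illustrative Python; the counter value and thread count are arbitrary). Each thread has its own stack and flow of control, but all of them see the same `total`:

```python
import threading

def run_counter(num_threads: int = 4, per_thread: int = 1000) -> int:
    """Several threads of one process increment a shared counter."""
    total = 0
    lock = threading.Lock()

    def worker():
        nonlocal total
        for _ in range(per_thread):
            with lock:          # serialize updates to the shared variable
                total += 1

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

With the lock, the result is always `num_threads * per_thread`; without it, concurrent updates to the shared variable could be lost, which previews the synchronization problems co-operating threads raise.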


Types of Thread
Threads are implemented in two ways:
1. User Level
2. Kernel Level
1. User Level Thread
With user-level threads, all of the work of thread management is done by the application, and the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application begins with a single thread and begins running in that thread.
User level threads are generally fast to create and manage.

Advantage of user level thread over Kernel level thread:


1. Thread switching does not require Kernel mode privileges.
2. User level thread can run on any operating system.
3. Scheduling can be application specific.
4. User level threads are fast to create and manage.
Disadvantages of user level thread:
1. In a typical operating system, most system calls are blocking; when one user-level thread makes a blocking call, the entire process blocks.
2. A multithreaded application cannot take advantage of multiprocessing.

2. Kernel Level Threads


With kernel-level threads, thread management is done by the kernel; there is no thread-management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process. The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a per-thread basis. The kernel performs thread creation, scheduling, and management in kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages of Kernel level thread:
1. The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
2. If one thread in a process is blocked, the kernel can schedule another thread of the same process.
3. Kernel routines themselves can be multithreaded.
Disadvantages:
1. Kernel threads are generally slower to create and manage than the user
threads.
2. Transfer of control from one thread to another within same process requires a
mode switch to the Kernel.


Advantages of Thread
1. Thread minimize context switching time.
2. Use of threads provides concurrency within a process.
3. Efficient communication.
4. Economy- It is more economical to create and context switch threads.
5. Utilization of multiprocessor architectures –
The benefits of multithreading can be greatly increased in a multiprocessor
architecture.
Multithreading Models
Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.
Multithreading models are three types:
1. Many to many relationship.
2. Many to one relationship.
3. One to one relationship.
1. Many to Many Model
In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
Fig. shows the many to many model.

In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
2. Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process blocks. Only one thread can access the

kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
Fig. shows the many to one model

User-level thread libraries on operating systems that do not support kernel threads use the many-to-one model.
3. One to One Model
There is one to one relationship of user level thread to the kernel level thread.
Fig. shows one to one relationship model.

This model provides more concurrency than the many-to-one model. It allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one model.


Difference between User Level & Kernel Level Thread
1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level; kernel-level threads are supported directly by the operating system.
3. User-level threads can run on any operating system; kernel-level threads are specific to the operating system.
4. Support provided at the user level is called a user-level thread; support provided by the kernel is called a kernel-level thread.
5. A multithreaded application using user-level threads cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.

Difference between Process and Thread
1. A process is called a heavy-weight process; a thread is called a light-weight process.
2. Process switching needs an interface with the operating system; thread switching does not need to call the operating system or cause an interrupt to the kernel.
3. In a multiple-process implementation, each process executes the same code but has its own memory and file resources; all threads of one process can share the same set of open files and child processes.
4. If one server process is blocked, no other server process can execute until the first process is unblocked; while one server thread is blocked and waiting, a second thread in the same task can run.
5. Multiple redundant processes use more resources than a multithreaded process; a multithreaded process uses fewer resources than multiple redundant processes.
6. In multiple processes, each process operates independently of the others; one thread can read, write, or even completely wipe out another thread's stack.

Threading Issues
The fork and exec system calls are discussed here. In a multithreaded program, the semantics of the fork and exec system calls change. Some UNIX systems have two versions of fork: one that duplicates all threads and another that

duplicates only the thread that invoked fork. Which version to use depends entirely upon the application. Duplicating all threads is unnecessary if exec is called immediately after fork.
Thread cancellation is the termination of a thread before it has completed its task. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled. Thread cancellation is of two types:
1. Asynchronous cancellation
2. Deferred cancellation
In asynchronous cancellation, one thread immediately terminates the target thread. In deferred cancellation, the target thread periodically checks whether it should terminate, which allows it to terminate itself in an orderly fashion.
The difficulty arises when resources have been allocated to a cancelled thread, or when a thread is cancelled while in the midst of updating data it shares with other threads. With asynchronous cancellation, system-wide resources may not be freed when a thread is cancelled. Nevertheless, most operating systems allow a process or thread to be cancelled asynchronously.

CPU Scheduling

CPU scheduling allows one process to use the CPU while the execution of another is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.

CPU-I/O Burst Cycle


The success of CPU scheduling depends on an observed property of
processes:
Process execution consists of a cycle of CPU execution and I/O wait. Processes
alternate between these two states. Process execution begins with a CPU burst.
That is followed by an I/O burst, which is followed by another CPU burst, then
another I/O burst, and so on.
Fig: Alternating sequence of CPU and I/O bursts


- An I/O-bound program typically has many very short CPU bursts.
- A CPU-bound program has a few very long CPU bursts.

CPU Scheduler

Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out
by the short-term scheduler (or CPU scheduler). The scheduler selects a
process from the processes in memory that are ready to execute and allocates
the CPU to that process.

Preemptive Scheduling

CPU-scheduling decisions may take place under the following four


circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait for the
termination of one of the child processes)
2. When a process switches from the running state to the ready state (example,
when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)
4. When a process terminates


For situations 1 and 4, there is no choice in terms of scheduling. A new


process (if one exists in the ready queue) must be selected for execution. There
is a choice, however, for situations 2 and 3.
When scheduling takes place only under circumstances 1 and 4, we say
that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is
preemptive. Under nonpreemptive scheduling, once the CPU has been
allocated to a process, the process keeps the CPU until it releases the CPU
either by terminating or by switching to the waiting state.

Dispatcher

Another component involved in the CPU-scheduling function is the


dispatcher. The dispatcher is the module that gives control of the CPU to the
process selected by the short-term scheduler. This function involves the
following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during
every process switch. The time it takes for the dispatcher to stop one process
and start another running is known as the dispatch latency.

Scheduling Criteria
Different CPU scheduling algorithms have different properties, and the
choice of a particular algorithm may favour one class of processes over another.
In choosing which algorithm to use in a particular situation, we must consider
the properties of the various algorithms.
Many criteria have been suggested for comparing CPU scheduling algorithms.
Which characteristics are used for comparison can make a substantial difference
in which algorithm is judged to be best. The criteria include the following:
1. CPU utilization. We want to keep the CPU as busy as possible.
Conceptually,
CPU utilization can range from 0 to 100 percent. In a real system, it should
range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily
used system).
2. Throughput. If the CPU is busy executing processes, then work is being
done. One measure of work is the number of processes that are completed per
time unit, called throughput. For long processes, this rate may be one process
per hour; for short transactions, it may be 10 processes per second.
3. Turnaround time. From the point of view of a particular process, the
important criterion is how long it takes to execute that process. The interval
from the time of submission of a process to the time of completion is the


turnaround time. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time. The CPU scheduling algorithm does not affect the amount of
time during which a process executes or does I/O; it affects only the amount of
time that a process spends waiting in the ready queue. Waiting time is the sum
of the periods spent waiting in the ready queue.
5. Response time. In an interactive system, turnaround time may not be the best
criterion. Often, a process can produce some output fairly early and can
continue computing new results while previous results are being output to the
user. Thus, another measure is the time from the submission of a request until
the first response is produced. This measure, called response time, is the time it
takes to start responding, not the time it takes to output the response. The
turnaround time is generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput and to
minimize turnaround time, waiting time, and response time.
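These criteria can be made concrete with a small sketch. The helper below is a hypothetical illustration (the function name and the example numbers are assumptions, not from the text): it derives turnaround and waiting time from arrival, burst, and completion times.

```python
def metrics(arrival, burst, completion):
    """Return (turnaround, waiting) lists, one entry per process.

    turnaround = completion - arrival
    waiting    = turnaround - burst  (time spent in the ready queue)
    """
    turnaround = [c - a for a, c in zip(arrival, completion)]
    waiting = [t - b for t, b in zip(turnaround, burst)]
    return turnaround, waiting

# Assumed example: three processes arriving at time 0 with bursts of
# 24, 3, and 3 ms that finish at 24, 27, and 30 ms under some schedule.
t, w = metrics([0, 0, 0], [24, 3, 3], [24, 27, 30])
print(t, w, sum(w) / len(w))  # [24, 27, 30] [0, 24, 27] 17.0
```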

Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the
processes in the ready queue is to be allocated the CPU. There are many
different CPU scheduling algorithms.

1. First-Come, First-Served Scheduling (FCFS)


The simplest CPU-scheduling algorithm is the first-come, first-served
(FCFS) scheduling algorithm. With this scheme, the process that requests the
CPU first is allocated the CPU first. The implementation of the FCFS policy is
easily managed with a FIFO queue. When a process enters the ready queue, its
PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to
the process at the head of the queue. The running process is then removed from
the queue. The code for FCFS scheduling is simple to write and understand.
The average waiting time under the FCFS policy, however, is often quite long.
Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in milliseconds:
Process    Burst Time
  P1           24
  P2            3
  P3            3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we
get the result shown in the following Gantt chart:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |


The waiting time is 0 milliseconds for process P1, 24 milliseconds for process
P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 +
24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1,
however, the results will be as shown in the following Gantt chart:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.

The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been
allocated to a process, that process keeps the CPU until it releases the CPU,
either by terminating or by requesting I/O.
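The FCFS policy can be sketched in a few lines. This is an assumed minimal illustration (the function name is hypothetical), with all processes arriving at time 0 as in the example above:

```python
def fcfs_waiting(bursts):
    """Each process waits until every earlier arrival has finished."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # time spent in the ready queue
        clock += burst        # CPU is held until the burst completes
    return waits

print(fcfs_waiting([24, 3, 3]))  # [0, 24, 27] -> average 17 ms
print(fcfs_waiting([3, 3, 24]))  # [0, 3, 6]  -> average 3 ms
```

The two calls reproduce the example: one long burst at the head of the queue inflates every later process's wait.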

2. Shortest-Job-First Scheduling
A different approach to CPU scheduling is the shortest-job-first (SJF)
Scheduling algorithm. This algorithm associates with each process the length
of the process's next CPU burst. When the CPU is available, it is assigned to the
process that has the smallest next CPU burst. If the next CPU bursts of two
processes are the same, FCFS scheduling is used to break the tie. Note that a
more appropriate term for this scheduling method would be the shortest-next-
CPU-burst algorithm, because scheduling depends on the length of the next
CPU burst of a process, rather than its total length. We use the term SJF because
most people and textbooks use this term to refer to this type of scheduling.
As an example of SJF scheduling, consider the following set of processes, with
the length of the CPU burst given in milliseconds:
Process    Burst Time
  P1            6
  P2            8
  P3            7
  P4            3


Using SJF scheduling, we would schedule these processes according to the
following Gantt chart:
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process
P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the
average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if
we were using the FCFS scheduling scheme, the average waiting time would be
10.25 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum
average waiting time for a given set of processes. Moving a short process before
a long one decreases the waiting time of the short process more than it increases
the waiting time of the long process. Consequently, the average waiting time
decreases.
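A sketch of nonpreemptive SJF, assuming all processes arrive at time 0. The burst values of 6, 8, 7, and 3 ms are inferred from the waiting times quoted above; the function name is hypothetical:

```python
def sjf_waiting(bursts):
    """Nonpreemptive SJF: serve the shortest remaining job to completion."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # waits until all shorter jobs finish
        clock += bursts[i]
    return waits

# Bursts of 6, 8, 7, 3 ms give the run order P4, P1, P3, P2.
print(sjf_waiting([6, 8, 7, 3]))  # [3, 16, 9, 0] -> average 7 ms
```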
The SJF algorithm can be either preemptive or nonpreemptive.
A preemptive SJF algorithm will preempt the currently executing process,
whereas a nonpreemptive SJF algorithm will allow the currently running
process to finish its CPU burst.
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first
scheduling.
As an example, consider the following four processes, with the length of the
CPU burst given in milliseconds:
Process    Arrival Time    Burst Time
  P1            0               8
  P2            1               4
  P3            2               9
  P4            3               5

If the processes arrive at the ready queue at the times shown and need the indicated
burst times, then the resulting preemptive SJF schedule is as depicted in the
following Gantt chart:
| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Process P1 is started at time 0, since it is the only process in the queue. Process


P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is
larger than the time required by process P2 (4 milliseconds), so process P1 is
preempted, and process P2 is scheduled. The average waiting time for this
example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds.
Nonpreemptive SJF scheduling would result in an average waiting time of 7.75
milliseconds.
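The preemptive variant (shortest-remaining-time-first) can be simulated one time unit at a time. This is a hedged sketch, not a production scheduler; arrivals 0, 1, 2, 3 with bursts 8, 4, 9, 5 ms reproduce the 6.5 ms average quoted above:

```python
def srtf_waiting(arrival, burst):
    """Preemptive SJF: at every tick, run the smallest remaining burst."""
    n, remaining = len(burst), list(burst)
    completion, clock, done = [0] * n, 0, 0
    while done < n:
        ready = [i for i in range(n)
                 if arrival[i] <= clock and remaining[i] > 0]
        if not ready:            # CPU idle until the next arrival
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1        # run the chosen process for one tick
        clock += 1
        if remaining[i] == 0:
            completion[i] = clock
            done += 1
    # waiting time = turnaround - burst
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

waits = srtf_waiting([0, 1, 2, 3], [8, 4, 9, 5])
print(waits, sum(waits) / 4)  # [9, 0, 15, 2] 6.5
```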

3.Priority Scheduling
The SJF algorithm is a special case of the general priority scheduling
algorithm.
A priority is associated with each process, and the CPU is allocated to the
process with the highest priority. Equal-priority processes are scheduled in
FCFS order.
An SJF algorithm is simply a priority algorithm where the priority (p) is the
inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower
the priority, and vice versa.
Note that we discuss scheduling in terms of high priority and low priority.
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7
or 0 to 4,095. However, there is no general agreement on whether 0 is the
highest or lowest priority. Some systems use low numbers to represent low
priority; others use low numbers for high priority. This difference can lead to
confusion. In this text, we assume that low numbers represent high priority. As
an example, consider the following set of processes, assumed to have arrived at
time 0, in the order P1, P2, ..., P5, with the length of the CPU burst given in
milliseconds:
Process    Burst Time    Priority
  P1           10            3
  P2            1            1
  P3            2            4
  P4            1            5
  P5            5            2

Using priority
scheduling, we would schedule these processes according to the following Gantt
chart:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |


The average waiting time is 8.2 milliseconds.
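A sketch that reproduces the 8.2 ms figure, assuming bursts of 10, 1, 2, 1, 5 ms with priorities 3, 1, 4, 5, 2 (lower number means higher priority), all arriving at time 0; the function name is an assumption:

```python
def priority_waiting(bursts, priorities):
    """Nonpreemptive priority scheduling; lower number = higher priority."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

# Run order is P2, P5, P1, P3, P4.
w = priority_waiting([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
print(w, sum(w) / 5)  # [6, 0, 16, 18, 1] 8.2
```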


Priority scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the
currently running process. A preemptive priority scheduling algorithm will
preempt the CPU if the priority of the newly arrived process is higher than the
priority of the currently running process. A nonpreemptive priority scheduling
algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be
considered blocked. A priority scheduling algorithm can leave some lowpriority
processes waiting indefinitely.
A solution to the problem of indefinite blockage of low-priority processes is
aging. Aging is a technique of gradually increasing the priority of processes that
wait in the system for a long time. For example, if priorities range from 127
(low) to 0 (high), we could increase the priority of a waiting process by 1 every
15 minutes.
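The aging rule above can be sketched as a periodic adjustment. The interval and the priority range (0 = highest, 127 = lowest) are taken from the example; the names are hypothetical:

```python
AGING_STEP = 15  # raise priority by one level per 15 time units waited

def aged_priority(priority, waited):
    """Effective priority after aging; never rises past 0 (the highest)."""
    return max(0, priority - waited // AGING_STEP)

# A process stuck at priority 127 reaches priority 0 after at most
# 127 * 15 = 1905 time units, so it cannot starve indefinitely.
print(aged_priority(127, 45))    # 124 (waited three intervals)
print(aged_priority(127, 1905))  # 0
```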

4.Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for
timesharing systems. It is similar to FCFS scheduling, but preemption is added
to switch between processes. A small unit of time, called a time quantum or
time slice, is defined. A time quantum is generally from 10 to 100 milliseconds.
The ready queue is treated as a circular queue. The CPU scheduler goes around
the ready queue, allocating the CPU to each process for a time interval of up to
1 time quantum.
To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to interrupt
after 1 time quantum, and dispatches the process.
The average waiting time under the RR policy is often long. Consider the
following set of processes that arrive at time 0, with the length of the CPU burst
given in milliseconds:
Process    Burst Time
  P1           24
  P2            3
  P3            3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the


first time quantum, and the CPU is given to the next process in the queue,
process P2. Since process P2 does not need 4 milliseconds, it quits before its
time quantum expires. The CPU is then given to the next process, process P3.
Once each process has received 1 time quantum, the CPU is returned to process
P1 for an additional time quantum. The resulting RR schedule is
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

The average waiting time is 17/3 = 5.66 milliseconds.


In the RR scheduling algorithm, no process is allocated the CPU for more than
1 time quantum in a row (unless it is the only runnable process). If a process's
CPU burst exceeds 1 time quantum, that process is preempted and is put back in
the ready queue. The RR scheduling algorithm is thus preemptive.
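RR with a FIFO ready queue can be sketched as follows, assuming all processes arrive at time 0; bursts of 24, 3, 3 ms with a 4 ms quantum reproduce the example above:

```python
from collections import deque

def rr_waiting(bursts, quantum):
    """Round-robin: run each process up to one quantum, then requeue it."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))    # ready queue of process indices
    completion, clock = [0] * len(bursts), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # shorter bursts quit early
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # preempted: back to the tail
        else:
            completion[i] = clock
    # waiting time = completion - burst (all arrivals at time 0)
    return [completion[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waiting([24, 3, 3], quantum=4)
print(waits, sum(waits) / 3)  # [6, 4, 7], average about 5.67 ms
```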
