Operating Systems Notes - Part1

The document outlines the course structure and content for the Operating Systems course code 10CS53. It is divided into 8 units that will be covered over 52 hours. The units cover topics such as process management, process synchronization, deadlocks, memory management, file systems, mass storage structures, and a case study of the Linux operating system. The document also lists three textbooks and reference books that will be used for the course.


Operating Systems
Subject Code: 10CS53        I.A. Marks: 25
Hours/Week: 04              Exam Hours: 03
Total Hours: 52             Exam Marks: 100

PART-A

UNIT-1 INTRODUCTION TO OPERATING SYSTEMS, SYSTEM STRUCTURES: What operating systems do; Computer System organization; Computer System architecture; Operating System structure; Operating System operations; Process management; Memory management; Storage management; Protection and security; Distributed system; Special-purpose systems; Computing environments. Operating System Services; User-Operating System interface; System calls; Types of system calls; System programs; Operating System design and implementation; Operating System structure; Virtual machines; Operating System generation; System boot.

6 Hours

UNIT-2 PROCESS MANAGEMENT: Process concept; Process scheduling; Operations on processes; Inter-process communication. Multi-Threaded Programming: Overview; Multithreading models; Thread Libraries; Threading issues. Process Scheduling: Basic concepts; Scheduling criteria; Scheduling algorithms; Multiple-Processor scheduling; Thread scheduling.

7 Hours

UNIT-3 PROCESS SYNCHRONIZATION: Synchronization: The Critical section problem; Peterson's solution; Synchronization hardware; Semaphores; Classical problems of synchronization; Monitors.

7 Hours

UNIT-4 DEADLOCKS: Deadlocks: System model; Deadlock characterization; Methods for handling deadlocks; Deadlock prevention; Deadlock avoidance; Deadlock detection and recovery from deadlock.

6 Hours


PART-B

UNIT-5 MEMORY MANAGEMENT: Memory Management Strategies: Background; Swapping; Contiguous memory allocation; Paging; Structure of page table; Segmentation. Virtual Memory Management: Background; Demand paging; Copy-on-write; Page replacement; Allocation of frames; Thrashing.

7 Hours
UNIT-6 FILE SYSTEM, IMPLEMENTATION OF FILE SYSTEM: File System: File concept; Access methods; Directory structure; File system mounting; File sharing; Protection. Implementing File System: File system structure; File system implementation; Directory implementation; Allocation methods; Free space management.

7 Hours

UNIT-7 SECONDARY STORAGE STRUCTURES, PROTECTION: Mass storage structures; Disk structure; Disk attachment; Disk scheduling; Disk management; Swap space management. Protection: Goals of protection; Principles of protection; Domain of protection; Access matrix; Implementation of access matrix; Access control; Revocation of access rights; Capability-based systems.

6 Hours

UNIT-8 CASE STUDY: THE LINUX OPERATING SYSTEM: Linux history; Design principles; Kernel modules; Process management; Scheduling; Memory management; File systems; Input and output; Inter-process communication.

6 Hours

TEXT BOOK:

1. Operating System Principles – Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, 8th edition, Wiley-
India, 2009

REFERENCE BOOKS:
1. Operating Systems: A Concept Based Approach – D.M. Dhamdhere, 2nd Edition, Tata McGraw-Hill, 2002.
2. Operating Systems – P.C.P. Bhatt, 2nd Edition, PHI, 2006.
3. Operating Systems – Harvey M. Deitel, 3rd Edition, Addison Wesley, 1990.


Table of Contents

Topics
UNIT 1: INTRODUCTION TO OPERATING SYSTEMS, STRUCTURES
1.1 WHAT OPERATING SYSTEMS DO.
1.2 COMPUTER SYSTEM ORGANIZATION.
1.3 COMPUTER SYSTEM ARCHITECTURE.
1.4 OPERATING SYSTEM STRUCTURE.
1.5 OPERATING SYSTEM OPERATIONS.
1.6 PROCESS MANAGEMENT.
1.7 MEMORY MANAGEMENT.
1.8 STORAGE MANAGEMENT.
1.9 PROTECTION AND SECURITY.
1.10 DISTRIBUTED SYSTEM.
1.11 SPECIAL-PURPOSE SYSTEMS.
1.12 COMPUTING ENVIRONMENTS.
1.13 OPERATING SYSTEM SERVICES.
1.14 USER-OPERATING SYSTEM INTERFACE.
1.15 SYSTEM CALLS, TYPES OF SYSTEM CALLS.
1.16 SYSTEM PROGRAMS.
1.17 OPERATING SYSTEM DESIGN AND IMPLEMENTATION.
1.18 OPERATING SYSTEM STRUCTURE.
1.19 VIRTUAL MACHINES.
1.20 OPERATING SYSTEM GENERATION.
1.21 SYSTEM BOOT.

UNIT 2: PROCESS MANAGEMENT

2.1 PROCESS CONCEPT.


2.2 PROCESS SCHEDULING.
2.3 OPERATIONS ON PROCESSES.


2.4 INTER-PROCESS COMMUNICATION.


2.5 MULTI-THREADED PROGRAMMING.
2.6 OVERVIEW; MULTITHREADING MODELS.
2.7 THREAD LIBRARIES; THREADING ISSUES.
2.8 PROCESS SCHEDULING: BASIC CONCEPTS.
2.9 SCHEDULING CRITERIA.
2.10 SCHEDULING ALGORITHMS.
2.11 THREAD SCHEDULING.
2.12 MULTIPLE-PROCESSOR SCHEDULING.

UNIT 3: PROCESS SYNCHRONIZATION

3.1 SYNCHRONIZATION
3.2 THE CRITICAL SECTION PROBLEM
3.3 PETERSON’S SOLUTION
3.4 SYNCHRONIZATION HARDWARE
3.5 SEMAPHORES
3.6 CLASSICAL PROBLEMS OF SYNCHRONIZATION
3.7 MONITORS

UNIT 4: DEADLOCK

4.1 DEADLOCKS
4.2 SYSTEM MODEL
4.3 DEADLOCK CHARACTERIZATION
4.4 METHODS FOR HANDLING DEADLOCKS
4.5 DEADLOCK PREVENTION
4.6 DEADLOCK AVOIDANCE
4.7 DEADLOCK DETECTION
4.8 RECOVERY FROM DEADLOCK


UNIT 5: STORAGE MANAGEMENT

5.1 MEMORY MANAGEMENT STRATEGIES


5.2 BACKGROUND
5.3 SWAPPING
5.4 CONTIGUOUS MEMORY ALLOCATION
5.5 PAGING, STRUCTURE OF PAGE TABLE
5.6 SEGMENTATION
5.7 VIRTUAL MEMORY MANAGEMENT
5.8 BACKGROUND, DEMAND PAGING
5.9 COPY-ON-WRITE
5.10 PAGE REPLACEMENT
5.11 ALLOCATION OF FRAMES
5.12 THRASHING.

UNIT 6: FILE SYSTEM INTERFACE

6.1 FILE SYSTEM: FILE CONCEPT
6.2 ACCESS METHODS
6.3 DIRECTORY STRUCTURE
6.4 FILE SYSTEM MOUNTING
6.5 FILE SHARING; PROTECTION.
6.6 IMPLEMENTING FILE SYSTEM
6.7 FILE SYSTEM STRUCTURE
6.8 FILE SYSTEM IMPLEMENTATION
6.9 DIRECTORY IMPLEMENTATION
6.10 ALLOCATION METHODS
6.11 FREE SPACE MANAGEMENT.

UNIT 7: MASS STORAGE STRUCTURE

7.1 MASS STORAGE STRUCTURES


7.2 DISK STRUCTURE
7.3 DISK ATTACHMENT


7.4 DISK SCHEDULING


7.5 DISK MANAGEMENT
7.6 SWAP SPACE MANAGEMENT
7.7 PROTECTION: GOALS OF PROTECTION
7.8 PRINCIPLES OF PROTECTION
7.9 DOMAIN OF PROTECTION
7.10 ACCESS MATRIX
7.11 IMPLEMENTATION OF ACCESS MATRIX
7.12 ACCESS CONTROL
7.13 REVOCATION OF ACCESS RIGHTS
7.14 CAPABILITY-BASED SYSTEM.

UNIT 8: LINUX SYSTEM

8.1 LINUX HISTORY


8.2 DESIGN PRINCIPLES
8.3 KERNEL MODULES
8.4 PROCESS MANAGEMENT
8.5 SCHEDULING
8.6 MEMORY MANAGEMENT
8.7 FILE SYSTEMS
8.8 INPUT AND OUTPUT
8.9 INTER-PROCESS COMMUNICATION


UNIT 1 INTRODUCTION TO OPERATING SYSTEMS, STRUCTURES

1.1 WHAT OPERATING SYSTEMS DO.
1.2 COMPUTER SYSTEM ORGANIZATION.
1.3 COMPUTER SYSTEM ARCHITECTURE.
1.4 OPERATING SYSTEM STRUCTURE.
1.5 OPERATING SYSTEM OPERATIONS.
1.6 PROCESS MANAGEMENT.
1.7 MEMORY MANAGEMENT.
1.8 STORAGE MANAGEMENT.
1.9 PROTECTION AND SECURITY.
1.10 DISTRIBUTED SYSTEM.
1.11 SPECIAL-PURPOSE SYSTEMS.
1.12 COMPUTING ENVIRONMENTS.
1.13 OPERATING SYSTEM SERVICES.
1.14 USER-OPERATING SYSTEM INTERFACE.
1.15 SYSTEM CALLS, TYPES OF SYSTEM CALLS.
1.16 SYSTEM PROGRAMS.
1.17 OPERATING SYSTEM DESIGN AND IMPLEMENTATION.
1.18 OPERATING SYSTEM STRUCTURE.
1.19 VIRTUAL MACHINES.
1.20 OPERATING SYSTEM GENERATION.
1.21 SYSTEM BOOT.


UNIT-1 INTRODUCTION TO OPERATING SYSTEMS, STRUCTURES


1.1 WHAT OPERATING SYSTEMS DO
An OS is an intermediary between the user of the computer & the computer hardware.
• It provides a basis for application programs & acts as an intermediary between the user of the computer & the computer hardware.
• The purpose of an OS is to provide an environment in which the user can execute programs in a convenient & efficient manner.
• The OS is an important part of almost every computer system.
• A computer system can be roughly divided into four components:
• The hardware
• The OS
• The application programs
• The users
• The hardware consists of the memory, CPU, ALU, I/O devices, peripheral devices & storage devices.
• The application programs, such as word processors, spreadsheets, compilers & web browsers, define the ways in which these resources are used to solve the users' problems.
• The OS controls & coordinates the use of the hardware among the various application programs for the various users.

1.2 COMPUTER SYSTEM ORGANIZATION


The following figure shows the conceptual view of a computer system

Views of OS
1. User Views:- The user view of the computer depends on the interface used.
i. Some users may use PCs. Here the system is designed so that only one user can utilize the resources, mostly for ease of use, where the attention is mainly on performance and not on resource utilization.
ii. Some users may use a terminal connected to a mainframe or a minicomputer.
iii. Other users may access the same computer through other terminals. These users may share resources and exchange information. In this case the OS is designed to maximize resource utilization, so that all available CPU time, memory & I/O are used efficiently.
iv. Other users may sit at workstations connected to networks of other workstations and servers. In this case the OS is designed to compromise between individual usability & resource utilization.


2. System Views:
i. We can view the system as a resource allocator, i.e. a computer system has many resources that may be used to solve a problem. The OS acts as the manager of these resources. The OS must decide how to allocate these resources to programs and users so that it can operate the computer system efficiently and fairly.
ii. A different view of an OS is that it needs to control the various I/O devices & user programs, i.e. an OS is a control program used to manage the execution of user programs and to prevent errors and improper use of the computer.
iii. Resources can be CPU time, memory space, file storage space, I/O devices and so on.

The OS must support the following tasks:

a. Provide facilities to create and modify programs & data files using editors.
b. Provide access to compilers for translating user programs from high-level language to machine language.
c. Provide a loader program to move the compiled program code into the computer's memory for execution.
d. Provide routines that handle the details of I/O programming.

1.3 COMPUTER SYSTEM ARCHITECTURE

Mainframe Systems:

a. Mainframe systems are mainly used for scientific & commercial applications.
b. An OS may process its workload serially, where the computer runs only one application at a time, or concurrently, where the computer runs many applications.

Batch Systems:

a. Early computers were physically large machines.
b. The common input devices were card readers & tape drives.
c. The common output devices were line printers, tape drives & card punches.
d. The users did not interact directly with the computer; instead they prepared a job consisting of the program, the data & some control information & submitted it to the computer operator.
e. The job was mainly in the form of punched cards.
f. At some later time the output appeared; it consisted of the result along with a dump of memory and register contents for debugging.

The OS of these computers was very simple. Its major task was to transfer control from one job to the next. The OS was always resident in memory. The processing of jobs was very slow. To improve the processing speed, operators batched together jobs with similar needs and processed them through the computer as a group. This is called a batch system.
• In batch systems the CPU may be idle for some time because the mechanical devices are slower than the electronic devices.
• Later improvements in technology and the introduction of disks resulted in faster I/O devices.
• The introduction of disks allowed the OS to store all the jobs on a disk. The OS could then perform scheduling to use the resources and perform the tasks efficiently.

Disadvantages of Batch Systems:


1. Turnaround time can be large from the user's point of view.
2. It is difficult to debug programs.
3. A job can enter an infinite loop.
4. A job could corrupt the monitor.
5. Due to the lack of a protection scheme, one job may affect the pending jobs.


Multiprogrammed Systems:

a. If there are two or more programs in memory at the same time sharing the processor, this is referred to as a multiprogrammed OS.
b. It increases CPU utilization by organizing the jobs so that the CPU always has one job to execute.
c. Jobs entering the system are kept in memory.
d. The OS picks a job from memory & executes it.
e. Having several jobs in memory at the same time requires some form of memory management.
f. A multiprogrammed system monitors the state of all active programs and system resources and ensures that the CPU is never idle as long as there are jobs to execute.
g. While executing a particular job, if the job has to wait for some task such as an I/O operation to complete, the CPU switches to some other job and starts executing it; when the first job finishes waiting, the CPU switches back to it.
h. This keeps both the CPU & the I/O devices busy. The following figure shows the memory layout of a multiprogrammed OS.

Time-Sharing Systems:

a. Time sharing, or multitasking, is a logical extension of multiprogramming. The CPU executes multiple jobs by switching between them, but the switching occurs so frequently that the user can interact with each program while it is running.
b. An interactive, hands-on system provides direct communication between the user and the system. The user gives instructions to the OS or to a program directly through the keyboard or mouse and waits for immediate results.
c. A time-shared system allows multiple users to use the computer simultaneously. Since each action or command is short, only a small slice of CPU time is needed for each user.
d. A time-shared system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared computer. When a process executes, it executes for only a short time before it finishes or needs to perform I/O. I/O is interactive, i.e. output goes to a display for the user and input comes from a keyboard, mouse etc.
e. Since the system has to maintain several jobs at a time, it needs memory management & protection.
f. Time-sharing systems are more complex than multiprogrammed systems. Since several jobs must be kept in memory, they need memory management and protection. To obtain a reasonable response time, jobs are swapped in and out of main memory to disk, so the disk serves as a backing store for main memory. This can be achieved by using a technique called virtual memory, which allows the execution of a job that is not completely in memory.
g. A time-sharing system should also provide a file system; the file system resides on a collection of disks, so disk management is needed. It also supports concurrent execution, job synchronization & communication.

II. DESKTOP SYSTEMS:

• PCs appeared in the 1970s; at that time they lacked the features needed to protect an OS from user programs, and they supported neither multiple users nor multitasking.
• The goals of those OSs changed with time, and newer systems include Microsoft Windows & the Apple Macintosh.
• The Apple Macintosh OS was ported to more advanced hardware & includes new features like virtual memory & multitasking.
• Microcomputers were developed for a single user in the 1970s & can now accommodate software of large capacity & greater speed. MS-DOS is an example of a microcomputer OS, & such systems are used by commercial, educational and government enterprises.

III. Multiprocessor Systems:

• Multiprocessor systems include more than one processor in close communication.
• The processors share the computer bus, the clock, memory & peripheral devices.
• Two or more processes can run in parallel.
• Multiprocessor systems are of two types:
• a. Symmetric multiprocessing (SMP)
• b. Asymmetric multiprocessing
• In symmetric multiprocessing, each processor runs an identical copy of the OS and the processors communicate with one another as needed. All the CPUs share a common memory.
• In asymmetric multiprocessing, each processor is assigned a specific task. It uses a master-slave relationship: a master processor controls the system, and the master processor schedules and allocates work to the slave processors. The following figure shows asymmetric multiprocessing.
• SMP means all processors are peers, i.e. no master-slave relationship exists between processors. Each processor concurrently runs a copy of the OS.
• The difference between symmetric & asymmetric multiprocessing may be the result of either hardware or software: special hardware can differentiate the multiple processors, or the software can be written to allow only one master & multiple slaves.

Advantages of Multiprocessor Systems:

1. Increased Throughput:- By increasing the number of processors we can get more work done in less time. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly.
2. Economy of Scale:- A multiprocessor system can cost less than multiple single-processor systems, since the processors share peripherals, mass storage & power supplies. If many programs operate on the same data, the data can be stored on one disk & shared by all processors instead of being maintained on several systems.
3. Increased Reliability:- If work is distributed properly among several processors, then the failure of one processor will not halt the system; it will only slow it down.


1.4 OPERATING SYSTEM STRUCTURE

PROCESS CONTROL & JOB CONTROL

• A system call can be used to terminate a program either normally or abnormally. On abnormal termination a dump of memory may be taken and an error message generated.
• A debugger is mainly used to examine the dump and determine the cause of the problem.
• Under either normal or abnormal circumstances, the OS must transfer control back to the command interpreter system.
• In a batch system the command interpreter terminates the execution of the job & continues with the next job.
• Some systems use control cards to indicate special recovery actions to be taken in case of errors.
• Normal & abnormal termination can be combined using an error level. The error level is defined beforehand & the command interpreter uses this error level to determine the next action automatically.

MS-DOS:
MS-DOS is an example of a single-tasking system; its command interpreter system is invoked when the computer is started. To run a program, MS-DOS uses a simple method: it does not create a new process. It loads the program into memory, writing over most of itself, & gives the program as much memory as possible. It lacks general multitasking capabilities.

FreeBSD: FreeBSD is an example of a multitasking system. In FreeBSD the command interpreter may continue running while another program is executing. fork is used to create a new process.
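A minimal sketch (assuming a POSIX system such as FreeBSD) of how a multitasking command interpreter runs a program: the shell forks a child, the child replaces its own image with the requested program, and the parent waits for it to finish. The command /bin/ls used here is only an illustrative example.

    /* run_prog.c - sketch of how a multitasking shell runs one command (POSIX assumed). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();               /* create an exact clone of this process */

        if (pid < 0) {                    /* fork failed */
            perror("fork");
            return 1;
        } else if (pid == 0) {            /* child: replace our image with the command */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");              /* reached only if exec fails */
            _exit(1);
        } else {                          /* parent (the shell) waits for the child */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d finished with status %d\n", (int)pid, status);
        }
        return 0;
    }

Because the shell itself keeps running in its own process, it can immediately accept the next command, which is exactly the behaviour MS-DOS could not provide.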


1.5 OPERATING SYSTEM OPERATIONS

A modern OS supports all of the following system components:

1. Process management.
2. Main memory management.
3. File management.
4. Secondary storage management.
5. I/O system management.
6. Networking.
7. Protection system.
8. Command interpreter system.

1.6 PROCESS MANAGEMENT

• A process is a program in execution.
• The process abstraction is a fundamental OS mechanism for the management of concurrent program execution.
• The OS responds to requests to run programs by creating processes.
• A process requires certain resources such as CPU time, memory and I/O devices. These resources are allocated to the process either when it is created or while it is running.
• When a process terminates, the OS reclaims all the reusable resources.
• A process refers to the execution of machine instructions.
• A program by itself is not a process; it is a passive entity.

The OS is responsible for the following activities of process management:

• Creating & deleting user & system processes.
• Allocating hardware resources among the processes.
• Controlling the progress of processes.
• Providing mechanisms for process communication.
• Providing mechanisms for deadlock handling.

1.7 MEMORY MANAGEMENT

• Main memory is central to the operation of a modern computer.
• Main memory is an array of bytes, ranging in size from hundreds of thousands to billions. Each byte has its own address.
• The central processor reads instructions from main memory during the instruction-fetch cycle & both reads & writes data during the data-fetch cycle. I/O operations also read and write data in main memory.
• Main memory is generally the only large storage device that the CPU can address & access directly.
• When a program is to be executed it must be loaded into memory & mapped to absolute addresses. While executing, it accesses data & instructions from memory by generating absolute addresses. When the program terminates, all the memory it used is returned.
• To improve CPU utilization & response time, several programs are kept in memory.
• Several memory-management schemes are available & the selection depends on the hardware design of the system. The OS is responsible for the following activities:
• Keeping track of which parts of memory are in use & by whom.
• Deciding which processes are to be loaded into memory.
• Allocating & deallocating memory space as needed.


File Management:
• File management is one of the most visible components of an OS.
• Computers store data on different types of physical media such as magnetic disks, magnetic tapes and optical disks.
• For convenient use of the computer system, the OS provides a uniform logical view of information storage.
• The OS maps files onto the physical media & accesses these files via the storage devices.
• A file is a logical collection of information.
• Files include both programs & data. Data files may be numeric, alphabetic or alphanumeric.
• Files can be organized into directories. The OS is responsible for the following activities:
• Creating & deleting files.
• Creating & deleting directories.
• Supporting primitives for manipulating files & directories.
• Mapping files onto secondary storage.
• Backing up files on stable storage media.

1.8 STORAGE MANAGEMENT

• Storage management is the mechanism by which the computer system stores information so that it can be retrieved later.
• Storage is used to hold both data & programs.
• Programs & data are kept in main memory while in use.
• Since main memory is small & volatile, secondary storage devices are used.
• The magnetic disk is of central importance to the computer system. The OS is responsible for the following activities:

• Free space management.

• Storage allocation.

• Disk scheduling.

The overall speed of the computer system depends on the speed of the disk subsystem.

I/O System Management:

• Each I/O device has a device handler that resides in a separate process associated with that device. The I/O system consists of:
• A memory-management component that includes buffering, caching & spooling.
• A general device-driver interface.
• Drivers for specific hardware devices.

Networking:
• Networking enables users to share resources & speed up computations.
• The processes communicate with one another through various communication lines such as high-speed buses or networks. The following parameters are considered while designing a network:
• Topology of the network.
• Type of network.
• Physical media.
• Communication protocols.
• Routing algorithms.


1.9 PROTECTION AND SECURITY

• Modern computer systems support many users & allow the concurrent execution of multiple processes. Organizations rely on computers to store information, so it is necessary that the information & devices be protected from unauthorized users or processes.
• Protection is a mechanism for controlling the access of programs, processes or users to the resources defined by a computer system.
• Protection mechanisms are implemented in the OS to support various security policies.
• The goal of the security system is to authenticate users and their access to any object.
• Protection can improve reliability by detecting latent errors at the interfaces between component subsystems.
• Protection domains are extensions of the hardware supervisor-mode ability.

1.10 DISTRIBUTED SYSTEMS

• A distributed system is one in which hardware or software components located at networked computers communicate & coordinate their actions only by passing messages.
• A distributed system looks to its users like an ordinary OS but runs on multiple, independent CPUs.
• Distributed systems depend on networking for their functionality; networking allows for communication, so distributed systems are able to share computational tasks and provide a rich set of features to users.
• Networks may vary by the protocols used, the distance between nodes & the transport media. Protocols -> TCP/IP, ATM etc. Networks -> LAN, MAN, WAN etc. Transport media -> copper wires, optical fibers & wireless transmissions.

Client-Server Systems:
• Since PCs have become faster, more powerful and cheaper, designers have shifted away from centralized system architectures.
• User-interface functionality that used to be handled by the centralized system is now handled by PCs.

So the centralized system today acts as a server program that satisfies the requests of clients. Server systems can be classified as follows:
a. Compute-Server Systems:- Provide an interface to which clients can send requests to perform some action; in response the server executes the action and sends back the result to the client.
b. File-Server Systems:- Provide a file-system interface where clients can create, update, read & delete files.

Peer-to-Peer Systems:
• When PCs were introduced in the 1970s they were considered standalone computers, i.e. only one user could use one at a time.
• With the widespread use of the Internet, PCs were connected to computer networks.
• With the introduction of the web in the mid-1990s, network connectivity became an essential component of a computer system.
• All modern PCs & workstations can run a web browser, and the OS also includes the system software that enables the computer to access the web.
• In distributed systems, or loosely coupled systems, the processors can communicate with one another through various communication lines such as high-speed buses or telephone lines.
• A network OS extends the concepts of networks & distributed systems: it provides features for file sharing across the network and also provides communication which allows different processes on different computers to share resources.


Advantages of Distributed Systems:

1. Resource sharing.
2. Higher reliability.
3. Better price/performance ratio.
4. Shorter response time.
5. Higher throughput.
6. Incremental growth.
1.11 SPECIAL-PURPOSE SYSTEMS

Clustered Systems
• Like parallel systems, clustered systems have multiple CPUs, but they are composed of two or more individual systems coupled together.
• Clustered systems share storage & are closely linked via a LAN.
• Clustering is usually done to provide high availability.
• Clustered systems are integrated with hardware & software. Hardware clustering means sharing of high-performance disks; software clustering takes the form of unified control of the computer systems in a cluster.
• A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others. If the monitored machine fails, the monitoring machine takes ownership of its storage and restarts the applications that were running on the failed machine.
• Clustered systems can be categorized into two groups:
• Asymmetric clustering &
• Symmetric clustering
• In asymmetric clustering, one machine is in hot-standby mode while the others are running the applications. The hot-standby machine does nothing but monitor the active server. If the server fails, the hot-standby machine becomes the active server.
• In symmetric mode, two or more hosts are running applications & they monitor each other. This mode is more efficient since it uses all the available hardware.
• Parallel clustering and clustering over a LAN are also available. Parallel clustering allows multiple hosts to access the same data on shared storage.
• Clustering provides better reliability than multiprocessor systems.
• It provides all the key advantages of a distributed system.
• Clustering technology is changing & now includes global clusters, in which the machines could be anywhere in the world.

Real-Time Systems
Real-time systems were originally used to control autonomous systems such as satellites, robots and hydroelectric dams.
• A real-time system is one that must react to inputs & respond to them quickly.
• A real-time system must not be late in its response to an event.
• A real-time system has well-defined time constraints.
• Real-time systems are of two types:
• Hard real-time systems
• Soft real-time systems
• A hard real-time system guarantees that critical tasks are completed on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the OS to finish the request.
• A soft real-time system is less restrictive: a critical real-time task gets priority over other tasks & retains that priority until it completes. Soft real time is an achievable goal that can be mixed with other types of systems, but such systems have more limited utility than hard real-time systems.
• Soft real-time systems are used in the areas of multimedia, virtual reality & advanced scientific projects. They cannot be used in robotics or industrial control due to the lack of deadline support.
• A real-time OS uses priority scheduling algorithms to meet the response requirements of a real-time application.
• Soft real time requires two conditions to implement: CPU scheduling must be priority based & dispatch latency must be small.
• The primary objective of file management in real-time systems is usually speed of access, rather than efficient utilization of secondary storage.


1.12 COMPUTING ENVIRONMENTS

The different types of computing environments are:

• Traditional computing.
• Web-based computing.
• Embedded computing.

• Traditional computing: a typical office environment uses traditional computing. A normal PC is used in a traditional computing environment. Network computers are essentially terminals that understand web-based computing. In domestic applications most users have a single computer with an Internet connection; the cost of accessing the Internet is high.
• Web-based computing has increased the emphasis on networking. Web-based computing uses PCs, handheld PDAs & cell phones. One of the features of this environment is load balancing, in which network connections are distributed among a pool of similar servers.
• Embedded computing uses real-time OSs. Applications of embedded computing include car engines, manufacturing robots and microwave ovens. This type of system provides limited features.

1.13 OPERATING SYSTEM SERVICES:

An OS provides services for the execution of programs and for the users of such programs. The services provided by one OS may differ from those of another OS. The OS makes the programming task easier. The common services provided by an OS are:

1. Program Execution:- The OS must be able to load a program into memory & run that program. The program must be able to end its execution either normally or abnormally.
2. I/O Operations:- A running program may require I/O, involving a file or a specific device. Users can't control I/O devices directly, so the OS must provide a means of controlling I/O devices.
3. File System Interface:- Programs need to read and write files. The OS should provide for the creation and deletion of files by name.
4. Communication:- In certain situations one process may need to exchange information with another process. This communication may take place in two ways:

a. Between processes executing on the same computer.

b. Between processes executing on different computers that are connected by a network. This communication can be implemented via shared memory or by message passing handled by the OS.
5. Error Detection:- Errors may occur in the CPU, in I/O devices or in memory hardware. The OS constantly needs to be aware of possible errors. For each type of error the OS should take the appropriate action to ensure correct & consistent computing.

An OS with multiple users also provides the following services:
a. Resource Allocation:- When multiple users log onto the system or when multiple jobs are running, resources must be allocated to each of them. The OS manages many different types of resources. Some resources may need special allocation code & others may have general request & release code.
b. Accounting:- We need to keep track of which users use how much & what kinds of resources. This record keeping may be used for accounting. The accounting data may be used for statistics or billing, and it can also be used to improve system efficiency.
c. Protection:- Protection ensures that all access to system resources is controlled. Security starts with each user having to authenticate to the system, usually by means of a password. External I/O devices must also be protected from invalid access. In a multiprocess environment it is possible for one process to interfere with another or with the OS, so protection is required.


1.14 USER-OPERATING SYSTEM INTERFACE

Command Interpreter System

• The command interpreter is the interface between the user & the OS. It is a system program of the OS.
• The command interpreter is a special program in UNIX & MS-DOS that is running when the user logs on.
• Many commands are given to the OS through control statements. When the user logs on, a program that reads & interprets control statements is executed automatically. This program is sometimes called the control-card interpreter or the command-line interpreter, and is also called the shell.
• The command statements themselves deal with process creation & management, I/O handling, secondary storage management, main memory management, file system access, protection & networking.

1.15 SYSTEM CALLS

• System calls provide an interface between a process & the OS.
• The calls are generally available as assembly-language instructions, & certain systems allow system calls to be made directly from a high-level language program.
• Several languages have been defined to replace assembly-language programming.
• A system call instruction generates an interrupt and allows the OS to gain control of the processor.
• System calls occur in different ways depending on the computer. Sometimes more information is needed to identify the desired system call; the exact type & amount of information needed may vary according to the particular OS & call.

TYPES OF SYSTEM CALLS

PASSING PARAMETERS TO THE OS

Three general methods are used to pass parameters to the OS:
• The simplest approach is to pass the parameters in registers. In some cases there may be more parameters than registers; the parameters are then generally stored in a block or table in memory, and the address of the block is passed as a parameter in a register. This is the approach used by Linux.
• Parameters can also be placed, or pushed, onto the stack by the program & popped off the stack by the OS.
• Some OSs prefer the block or stack methods, because those approaches do not limit the number or length of the parameters being passed.
• System calls may be grouped roughly into five categories:
1. Process control.
2. File management.
3. Device management.
4. Information maintenance.
5. Communication.
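As a concrete illustration of a sequence of system calls, here is a minimal sketch (assuming a POSIX system; the file names are placeholders) of the classic file-copy program, which uses file-management calls (open, read, write, close) and process-control calls (normal and abnormal termination):

    /* copy.c - sketch of a file copy built directly on system calls (POSIX assumed). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        int in  = open("input.txt", O_RDONLY);                        /* file management */
        int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            return 1;                                                  /* process control: abnormal end */
        }

        while ((n = read(in, buf, sizeof buf)) > 0)                    /* read until end of file */
            write(out, buf, (size_t)n);

        close(in);
        close(out);
        return 0;                                                      /* process control: normal end */
    }

Each C library call above maps onto one or more system calls; the parameters (file name, buffer address, byte count) are passed to the kernel using one of the register, block or stack methods described above.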

1.16 SYSTEM PROGRAMS

• Many system calls are used to transfer information between the user program & the OS. Example:- Most systems have system calls to return the current time & date, the number of current users, the version number of the OS, the amount of free memory or disk space & so on.
• In addition, the OS keeps information about all of its processes & there are system calls to access this information.

COMMUNICATION:- There are two models of communication:


1. Message-Passing Model:
• In this model, information is exchanged using an inter-process communication facility provided by the OS.
• Before communication, the connection should be opened.
• The name of the other communicating party must be known; it can be on the same computer or on another computer connected by a computer network.
• Each computer in a network has a host name, such as an IP name; similarly, each process has a process name, which can be translated into an equivalent identifier by the OS.
• The get hostid & get processid system calls do this translation. These identifiers are then passed to the open-connection & close-connection system calls.
• The recipient process must give its permission for communication to take place with an accept-connection call.
• Most processes that receive connections are special-purpose system programs dedicated to that purpose, called daemons. The daemon on the server side is called a server daemon & the daemon on the client side is called a client daemon.

2. Shared Memory:
• In this model, processes use map-memory system calls to gain access to memory owned by another process.
• Normally the OS tries to prevent one process from accessing another process's memory.
• In shared memory this restriction is eliminated, and the processes exchange information by reading and writing data in the shared areas. These areas are located by the processes themselves and are not under the OS's control.
• The processes must ensure that they are not writing to the same memory area at the same time.
• Both of these models are commonly used, and some OSs implement both.
• Message passing is useful when a small amount of data needs to be exchanged, since no conflicts need to be avoided, and it is easier to implement than shared memory. Shared memory allows maximum speed and convenience of communication, since it is done at memory speed within a computer.
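As an illustration of the message-passing model, here is a sketch (assuming POSIX pipes; a real OS may provide other IPC facilities such as sockets or message queues) in which a parent and a child exchange a short message through a pipe rather than through shared memory:

    /* pipe_msg.c - sketch of message passing between a parent and a child (POSIX assumed). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                        /* child: sends one message */
            close(fd[0]);                         /* child only writes */
            const char *msg = "hello from child";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            _exit(0);
        } else {                                  /* parent: receives the message */
            char buf[64];
            close(fd[1]);                         /* parent only reads */
            read(fd[0], buf, sizeof buf);
            printf("parent received: %s\n", buf);
            close(fd[0]);
            wait(NULL);
        }
        return 0;
    }

The kernel copies the data from the sender to the receiver, so the two processes never touch each other's address space; a shared-memory version would instead map a common region and exchange data at memory speed.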


1.17 OPERATING SYSTEM DESIGN AND IMPLEMENTATION

FILE MANAGEMENT

• System calls can be used to create & delete files. Such system calls may require the name of the file and its attributes.
• Other operations may involve reading a file, writing to it & repositioning within it after it is opened.
• Finally, we need to close the file.
• For directories, a similar set of operations is performed. Sometimes we need to read or reset some of the attributes of files & directories. The get-file-attribute & set-file-attribute system calls are used for this type of operation.

DEVICE MANAGEMENT:
• System calls are also used for accessing devices.
• Many of the system calls used for files are also used for devices.
• In a multi-user environment, a request must first be made to use a device. After use, the device must be released with a release system call; the device is then free to be used by another user. These functions are similar to the open & close system calls for files.
• The read, write & reposition system calls may also be used with devices.
• MS-DOS & UNIX merge I/O devices & files into a combined file-device structure, in which I/O devices are identified by file names.
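To illustrate the point that, in the UNIX file-device structure, a device is accessed through its file name with the same calls used for ordinary files, here is a small sketch (assuming a UNIX-like system that provides the /dev/urandom device):

    /* dev_as_file.c - sketch: a device is accessed through its file name (UNIX-like system assumed). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char byte;
        int fd = open("/dev/urandom", O_RDONLY);   /* same open call used for files */
        if (fd < 0) { perror("open"); return 1; }

        read(fd, &byte, 1);                        /* read one byte from the device */
        printf("random byte: %u\n", byte);

        close(fd);                                 /* release the device */
        return 0;
    }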

1.18 OPERATING SYSTEM STRUCTURE

• A modern OS is large & complex.
• An OS consists of different types of components.
• These components are interconnected & melded into a kernel.
• Different types of structures are used to design the system. They are:
• Simple structures.
• Layered structures.
• Microkernels.

Simple Structures
• Simple-structure OSs are small, simple & limited systems.
• The structure is not well defined.
• MS-DOS is an example of a simple-structure OS.
• The MS-DOS layer structure is shown below.

• UNIX initially consisted of two separable modules:

• a. The kernel

• b. The system programs

• The kernel is further separated into a series of interfaces & device drivers, which were added & expanded as UNIX evolved over the years.

• The kernel provides CPU scheduling, the file system, memory management & other OS functions through system calls.

• The system calls define the API to UNIX, and the commonly available system programs define the user interface. The programmer and user interfaces determine the context that the kernel must support.

• New versions of UNIX are designed to use more advanced hardware; the OS can then be broken down into a larger number of smaller components, which is more appropriate than the original MS-DOS structure.


Layered Approach

• In this approach the OS is divided into a number of layers, where each layer is built on top of a lower layer. The bottom layer is the hardware and the highest layer is the user interface.
• Each layer is an implementation of an abstract object, i.e. the encapsulation of data & of the operations that manipulate that data.
• The main advantage of the layered approach is modularity, i.e. each layer uses only the services & functions provided by the layers below it. This approach simplifies debugging & verification: once the first layer is debugged, correct functionality is guaranteed while debugging the second layer. If an error is identified, it must be a problem in that layer, because the layers below it are already debugged.
• Each layer is designed using only the operations provided by lower-level layers.
• Each layer hides certain data structures, operations & hardware from the higher-level layers.
• A problem with layered implementations is that they tend to be less efficient than other types.


Microkernels
• A microkernel is a small OS that provides the foundation for modular extensions.
• The main function of a microkernel is to provide communication facilities between the client program and the various services that are running in user space.
• This approach provides a high degree of flexibility and modularity.
• The benefits of this approach include the ease of extending the OS: new services are added in user space & do not require modification of the kernel.
• This approach also provides more security & reliability.
• Most services run as user processes rather than kernel processes.
• This approach was popularized by its use in the Mach OS.
• The microkernel in Windows NT provides portability and modularity. The kernel is surrounded by a number of compact subsystems, so that the task of implementing NT on a variety of platforms is easy.
• A microkernel architecture assigns only a few essential functions to the kernel, including address spaces, IPC & basic scheduling.
• QNX is an RTOS that is also based on a microkernel design.


1.19 VIRTUAL MACHINES


A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating system
kernel as though they were all hardware. A virtual machine provides an interface identical to the underlying bare
hardware. The operating system creates the illusion of multiple processes, each executing on its own processor with
its own (virtual) memory. The resources of the physical computer are shared to create the virtual machines. CPU
scheduling can create the appearance that users have their own processor. Spooling and a file system can provide
virtual card readers and virtual line printers. A normal user time-sharing terminal serves as the virtual machine
operator’s console.

(Figure: non-virtual machine vs. virtual machine)

Advantages and Disadvantages of Virtual Machines

• The virtual-machine concept provides complete protection of system resources since each virtual machine is
isolated from all other virtual machines. This isolation, however, permits no direct sharing of resources.
• A virtual-machine system is a perfect vehicle for operating-systems research and development. System
development is done on the virtual machine, instead of on a physical machine and so does not disrupt normal system
operation.
• The virtual-machine concept is difficult to implement because of the effort required to provide an exact duplicate of the underlying machine.


Java Virtual Machine


• Compiled Java programs are platform-neutral bytecodes executed by a Java Virtual Machine (JVM).
• The JVM consists of a class loader, a class verifier and a runtime interpreter.
• Just-In-Time (JIT) compilers increase performance.

(Figure: the Java Virtual Machine)

1.20 OPERATING SYSTEM GENERATION

• User goals – the operating system should be convenient to use, easy to learn, reliable, safe, and fast.
• System goals – the operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient.
• Mechanisms determine how to do something; policies decide what will be done.
• The separation of policy from mechanism is a very important principle; it allows maximum flexibility if policy decisions are to be changed later.
• Traditionally written in assembly language, operating systems can now be written in higher-level languages.
• Code written in a high-level language:
o can be written faster.
o is more compact.
o is easier to understand and debug.
• An operating system is far easier to port (move to some other hardware) if it is written in a high-level language.

1.21 SYSTEM BOOT

• Operating systems are designed to run on any of a class of machines; the system must be configured for each
specific computer site.
• SYSGEN program obtains information concerning the specific configuration of the hardware system.
• Booting – starting a computer by loading the kernel.
• Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and start its
execution.


IMPORTANT QUESTIONS

1. What are the three main purposes of an operating system?


2. What is the main advantage of multiprogramming?

3. What are the main differences between operating systems for mainframe computers and personal computers?
4. Define the essential properties of the following types of operating systems:
a. Batch
b. Interactive
c. Time sharing
d. Real time
e. Network
f. Distributed
5. What are the differences between a trap and an interrupt? What is the use of each function?
6. What are the five major activities of an operating system in regard to process management?
7. What are the three major activities of an operating system in regard to secondary-storage management?
8. List five services provided by an operating system.
9. What is the main advantage of the layered approach to system design?
10. What is the main advantage for an operating-system designer of using a virtual-machine architecture? What is the main advantage for a user?


UNIT 2 PROCESS MANAGEMENT

TOPICS
2.1 PROCESS CONCEPT.
2.2 PROCESS SCHEDULING.
2.3 OPERATIONS ON PROCESSES.
2.4 INTER-PROCESS COMMUNICATION.
2.5 MULTI-THREADED PROGRAMMING.
2.6 OVERVIEW; MULTITHREADING MODELS.
2.7 THREAD LIBRARIES; THREADING ISSUES.
2.8 PROCESS SCHEDULING: BASIC CONCEPTS.
2.9 SCHEDULING CRITERIA.
2.10 SCHEDULING ALGORITHMS.
2.11 THREAD SCHEDULING.
2.12 MULTIPLE-PROCESSOR SCHEDULING.


2.1 PROCESS CONCEPT

Processes & Programs:

• A process is a dynamic entity. A process is a sequence of instruction executions; a process exists for a limited span of time. Two or more processes may execute the same program, each using its own data & resources.
• A program is a static entity made up of program statements. A program contains instructions. A program exists in a single place and does not execute by itself.
• A process generally consists of a process stack, which contains temporary data, & a data section, which contains global variables.
• It also contains the program counter, which represents the current activity.
• A process is more than the program code, which is also called the text section.

Process State:

The process state consists of everything necessary to resume the process execution if it is somehow put aside temporarily. The process state consists of at least the following:

• Code for the program.
• The program's static data.
• The program's dynamic data.
• The program's procedure call stack.
• The contents of the general-purpose registers.
• The contents of the program counter (PC).
• The contents of the program status word (PSW).
• The operating system resources in use.

2.2 PROCESS SCHEDULING

PROCESS SCHEDULING QUEUES

The following are the different types of process scheduling queues:

1. Job queue – the set of all processes in the system.
2. Ready queue – the set of all processes residing in main memory, ready and waiting to execute.
3. Device queues – the sets of processes waiting for an I/O device.
4. Processes migrate among the various queues.

Ready Queue and Various I/O Device Queues

Ready Queue:
The processes that are placed in main memory and are ready and waiting to execute are kept in a list called the ready queue. This list takes the form of a linked list. The ready queue header contains pointers to the first & final PCBs in the list, and each PCB contains a pointer field that points to the next PCB in the ready queue.
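A minimal sketch of such a ready queue kept as a linked list of PCBs. The structure fields used here are simplified assumptions for illustration, not the layout used by any real kernel:

    /* ready_queue.c - simplified sketch of a ready queue as a linked list of PCBs. */
    #include <stdio.h>
    #include <stdlib.h>

    struct pcb {
        int pid;
        struct pcb *next;          /* pointer to the next PCB in the ready queue */
    };

    struct ready_queue {
        struct pcb *head;          /* first PCB in the list */
        struct pcb *tail;          /* final PCB in the list */
    };

    /* append a PCB at the tail of the ready queue */
    static void enqueue(struct ready_queue *q, struct pcb *p)
    {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    /* remove and return the PCB at the head (the next process to dispatch) */
    static struct pcb *dequeue(struct ready_queue *q)
    {
        struct pcb *p = q->head;
        if (p) {
            q->head = p->next;
            if (!q->head) q->tail = NULL;
        }
        return p;
    }

    int main(void)
    {
        struct ready_queue q = { NULL, NULL };
        for (int pid = 1; pid <= 3; pid++) {
            struct pcb *p = malloc(sizeof *p);
            p->pid = pid;
            enqueue(&q, p);
        }
        struct pcb *p;
        while ((p = dequeue(&q)) != NULL) {    /* dispatch in FIFO order */
            printf("dispatch pid %d\n", p->pid);
            free(p);
        }
        return 0;
    }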

Device Queue: The list of processes waiting for a particular I/O device is called a device queue. When the CPU is allocated to a process, the process may execute for some time & may quit, be interrupted, or wait for the occurrence of a particular event such as the completion of an I/O request; but the I/O device may be busy with some other process. In this case the process must wait for the I/O, and it is placed in the device queue. Each device has its own queue.
Process scheduling is commonly represented using a queueing diagram. Queues are represented by rectangular boxes & the resources they need are represented by circles. The diagram contains two kinds of queues: the ready queue & the device queues. Once a process is assigned to the CPU and is executing, the following events can occur:
1. It can issue an I/O request and be placed in an I/O queue.
2. It can create a sub-process & wait for its termination.
3. It can be removed from the CPU as a result of an interrupt and be put back into the ready queue.

Schedulers:

The following are the different types of schedulers:

1. Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
2. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.
3. Medium-term scheduler.

-> The short-term scheduler is invoked very frequently (milliseconds), so it must be fast.

-> The long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow.

-> The long-term scheduler controls the degree of multiprogramming.

-> Processes can be described as either:

• I/O-bound process – spends more time doing I/O than computation; many short CPU bursts.

• CPU-bound process – spends more time doing computation; few very long CPU bursts.

2.3 OPERATIONS ON PROCESSES

Process Creation

In general-purpose systems, some way is needed to create processes as needed during operation. There are four principal events that lead to process creation:

• System initialization.
• Execution of a process-creation system call by a running process.
• A user request to create a new process.
• Initiation of a batch job.

Foreground processes interact with users. Background processes stay in the background, sleeping, but suddenly spring to life to handle activity such as email, web pages, printing, and so on. Background processes are called daemons. A process may create a new process by means of a create-process system call such as fork; this call creates an exact clone of the calling process. When it does so, the creating process is called the parent process and the created one is called the child process. Only one parent is needed to create a child process; note that, unlike plants and animals that use sexual reproduction, a process has only one parent. This creation of processes yields a hierarchical structure of processes like the one in the figure: each child has only one parent, but each parent may have many children. After the fork, the two processes, the parent and the child, have the same memory image, the same environment strings and the same open files. After a process is created, both the parent and the child have their own distinct address spaces; if either process changes a word in its address space, the change is not visible to the other process.
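A short sketch (assuming POSIX fork) of the point just made: after fork the parent and child have distinct address spaces, so a change made by the child is not visible to the parent.

    /* fork_copy.c - sketch: after fork, parent and child have distinct address spaces (POSIX assumed). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int value = 10;
        pid_t pid = fork();             /* child gets a copy of the parent's memory image */

        if (pid == 0) {                 /* child */
            value = 99;                 /* change is made only in the child's copy */
            printf("child:  value = %d\n", value);
            _exit(0);
        } else {                        /* parent */
            wait(NULL);                 /* let the child finish first */
            printf("parent: value = %d\n", value);   /* still 10: the child's change is not visible */
        }
        return 0;
    }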

Following are some reasons for the creation of a process:

• The user logs on.
• The user starts a program.
• The operating system creates a process to provide a service, e.g. to manage a printer.
• Some program starts another process, e.g. Netscape calls xv to display a picture.

Process Termination

A process terminates when it finishes executing its last statement. Its resources are returned to the system, it is purged from any system lists or tables, and its process control block (PCB) is erased, i.e. the PCB's memory space is returned to a free memory pool. A process terminates, or is terminated by another process, usually for one of the following reasons:

• Normal exit: most processes terminate because they have done their job. This call is exit in UNIX.
• Error exit: the process discovers a fatal error. For example, a user tries to compile a program file that does not exist.
• Fatal error: an error caused by the process due to a bug in the program, for example executing an illegal instruction, referencing non-existent memory, or dividing by zero.
• Killed by another process: a process executes a system call telling the operating system to terminate some other process. In UNIX this call is kill. In some systems, when a process is killed, all the processes it created are killed as well (UNIX does not work this way).

Process States: A process goes through a series of discrete process states.

• New state: the process is being created.
• Terminated state: the process has finished execution.
• Blocked (waiting) state: when a process blocks, it does so because logically it cannot continue, typically because it is waiting for input that is not yet available. Formally, a process is said to be blocked if it is waiting for some event to happen (such as an I/O completion) before it can proceed. In this state the process is unable to run until some external event happens.
• Running state: a process is said to be running if it currently has the CPU, that is, it is actually using the CPU at that particular instant.
• Ready state: a process is said to be ready if it could use a CPU if one were available. It is runnable but temporarily stopped to let another process run.

Logically, the 'Running' and 'Ready' states are similar. In both cases the process is willing to run; only in the case of the 'Ready' state there is temporarily no CPU available for it. The 'Blocked' state is different from the 'Running' and 'Ready' states in that the process cannot run even if a CPU is available.

Process Control Block

A process in an operating system is represented by a data structure known as a process control block (PCB) or
process descriptor. The PCB contains important information about the specific process, including:

x The current state of the process, i.e., whether it is ready, running, waiting, or whatever.


x Unique identification of the process in order to track "which is which" information.


x A pointer to parent process.
x Similarly, a pointer to child process (if it exists).
x The priority of process (a part of CPU scheduling information).
x Pointers to locate memory of processes.
x A register save area.
x The processor it is running on.

The PCB is a certain store that allows the operating systems to locate key information about a process. Thus, the
PCB is the data structure that defines a process to the operating systems.

The following figure shows the process control block.

Context Switch:

1. When CPU switches to another process, the system must save the state of the old process and load the saved
state for the new process.

2. Context-switch time is overhead; the system does no useful work while switching.

3. Time dependent on hardware support

Cooperating Processes & Independent Processes

Independent process: one that is independent of the rest of the universe.

x Its state is not shared in any way by any other process.


x Deterministic: input state alone determines results.
x Reproducible.
x Can stop and restart with no bad effects (only time varies). Example: program that sums the
integers from 1 to i (input).

There are many different ways in which a collection of independent processes might be executed on a processor:


x Uniprogramming: a single process is run to completion before anything else can be run on the processor.
x Multiprogramming: share one processor among several processes. If there is no shared state, then the order of dispatching is irrelevant.
x Multiprocessing: if multiprogramming works, then it should also be ok to run processes in parallel on separate processors.
   o A given process runs on only one processor at a time.
   o A process may run on different processors at different times (move state, assume processors are identical).
   o Cannot distinguish multiprocessing from multiprogramming on a very fine grain.

Cooperating processes:

x Machine must model the social structures of the people that use it. People cooperate, so the machine must support that cooperation. Cooperation means shared state, e.g. a single file system.
x Cooperating processes are those that share state. (May or may not actually be "cooperating".)
x Behavior is nondeterministic: it depends on the relative execution sequence and cannot be predicted a priori.
x Behavior is irreproducible.
x Example: one process writes "ABC", another writes "CBA". Can get different outputs; cannot tell what comes from which. E.g. which process output the first "C" in "ABCCBA"? Note the subtle state sharing that occurs here via the terminal. Not just anything can happen, though. For example, "AABBCC" cannot occur.

1. Independent process cannot affect or be affected by the execution of another process
2. Cooperating process can affect or be affected by the execution of another process
3. Advantages of process cooperation

   • Information sharing
   • Computation speed-up
   • Modularity
   • Convenience


2.4 INTERPROCESS COMMUNICATION (IPC)


1. Mechanism for processes to communicate and to synchronize their actions.
2. Message system – processes communicate with each other without resorting to shared variables
3. IPC facility provides two operations:
� send(message) – message size fixed or variable
� receive(message)
4. If P and Q wish to communicate, they need to exchange messages via send/receive
5. Implementation of communication link

   x physical (e.g., shared memory, hardware bus)

   x logical (e.g., logical properties)

Communications Models

There are two types of communication models:

1. Message passing
2. Shared memory

Direct Communication

1. Processes must name each other explicitly:

x send (P, message) – send a message to process P x receive(Q, message) – receive a message from process Q

2. Properties of communication link

x Links are established automatically
x A link is associated with exactly one pair of communicating processes
x Between each pair there exists exactly one link
x The link may be unidirectional, but is usually bi-directional

Indirect Communication

1. Messages are directed and received from mailboxes (also referred to as ports)

x Each mailbox has a unique id


x Processes can communicate only if they share a mailbox

2. Properties of communication link


x Link established only if processes share a common mailbox

x A link may be associated with many processes

x Each pair of processes may share several communication links x Link may be unidirectional
or bi-directional

3. Operations

. o create a new mailbox


. o send and receive messages through mailbox
. o destroy a mailbox

4. Primitives are defined as:

   send (A, message) – send a message to mailbox A
   receive (A, message) – receive a message from mailbox A

5. Mailbox sharing: P1, P2, and P3 share mailbox A. P1 sends; P2 and P3 receive. Who gets the message?

6. Solutions:

   Allow a link to be associated with at most two processes
   Allow only one process at a time to execute a receive operation
   Allow the system to select arbitrarily the receiver; the sender is notified who the receiver was.

Synchronization
1. Message passing may be either blocking or non-blocking
2. Blocking is considered synchronous

->Blocking send has the sender block until the message is received. ->Blocking receive has the receiver block
until a message is available.

3. Non-blocking is considered asynchronous


->Non-blocking send has the sender send the message and continue.
->Non-blocking receive has the receiver receive a valid message or null.

Buffering

->Queue of messages attached to the link; implemented in one of three ways

1. Zero capacity – 0 messages; the sender must wait for the receiver (rendezvous)

2. Bounded capacity – finite length of n messages; the sender must wait if the link is full


3. Unbounded capacity – infinite length; the sender never waits
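As a concrete illustration of send/receive over a kernel-buffered (bounded-capacity) link, the following C sketch (added as an example, assuming a POSIX system; it is not part of the original notes) uses a pipe between a parent and a child, where write plays the role of send and read the role of receive:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                               /* fd[0] = read end, fd[1] = write end */
        char buf[64];

        if (pipe(fd) == -1) return 1;            /* create the communication link */

        if (fork() == 0) {                       /* child acts as the receiver */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* receive(message) */
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            return 0;
        }
        close(fd[0]);                            /* parent acts as the sender */
        write(fd[1], "hello", strlen("hello"));  /* send(message) */
        close(fd[1]);
        wait(NULL);
        return 0;
    }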

2.5 MULTI THREADED PROGRAMMING

Despite the fact that a thread must execute within a process, the process and its associated threads are different
concepts. Processes are used to group resources together, and threads are the entities scheduled for execution
on the CPU. A thread is a single sequential stream within a process. Because threads have some of the
properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple
streams of execution. In many respects, threads are a popular way to improve applications through parallelism.
The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in
parallel. Like a traditional process, i.e., a process with one thread, a thread can be in any of several states
(Running, Blocked, Ready or Terminated). Each thread has its own stack. Since threads will generally call
different procedures, they have different execution histories; this is why each thread needs its own stack. In an
operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread has or consists
of a program counter (PC), a register set, and a stack space. Threads are not independent of one another like
processes; as a result, threads share with other threads their code section, data section and OS resources (also
known as the task), such as open files and signals.

Processes Vs Threads

As we mentioned earlier, in many respects threads operate in the same way as processes. Some of
the similarities and differences are:

Similarities

x Like processes, threads share the CPU and only one thread is active (running) at a time.
x Like processes, threads within a process execute sequentially.
x Like processes, a thread can create children.
x And like processes, if one thread is blocked, another thread can run.


Differences

x Unlike processes, threads are not independent of one another.


x Unlike processes, all threads can access every address in the task.
x Unlike processes, threads are designed to assist one another. Note that processes might or might not
assist one another because processes may originate from different users.

Why Threads?

Following are some reasons why we use threads in designing operating systems.

1. A process with multiple threads make a great server for example printer server.
2. Because threads can share common data, they do not need to use interprocess communication.
3. Because of the very nature, threads can take advantage of multiprocessors.
4. Responsiveness
5. Resource Sharing
6. Economy
7. Utilization of MP Architectures

Threads are cheap in the sense that

1. They only need a stack and storage for registers; therefore, threads are cheap to create.
2. Threads use very few resources of the operating system in which they are working. That is, threads do not
need a new address space, global data, program code or operating system resources.
3. Context switching is fast when working with threads. The reason is that we only have to save and/or
restore the PC, SP and registers.

But this cheapness does not come free -the biggest drawback is that there is no protection between threads.

Single and Multithreaded Processes

User-Level Threads

1. Thread management done by user-level threads library


Three primary thread libraries: -> POSIX Pthreads


-> Win32 threads

-> Java threads

User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does
not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about
user-level threads and manages them as if they were single-threaded processes.

Advantages:

The most obvious advantage of this technique is that a user-level threads package can be implemented on an
Operating System that does not support threads. Some other advantages are

x User-level threads do not require modification to the operating system.
x Simple representation: each thread is represented simply by a PC, registers, a stack and a small control block, all stored in the user process address space.
x Simple management: creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.
x Fast and efficient: thread switching is not much more expensive than a procedure call.

Disadvantages:

x There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a
whole gets one time slice irrespective of whether the process has one thread or 1000 threads within it. It is up
to each thread to relinquish control to other threads.
x User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire
process will be blocked in the kernel, even if there are runnable threads left in the process. For example,
if one thread causes a page fault, the whole process blocks.

Kernel-Level Threads

1. Supported by the Kernel

Examples: ->Windows XP/2000, ->Solaris , ->Linux, ->Tru64 UNIX, ->Mac OS X

In this method, the kernel knows about and manages the threads. No runtime system is needed in this case.
Instead of thread table in each process, the kernel has a thread table that keeps track of all threads in the
system. In addition, the kernel also maintains the traditional process table to keep track of processes.
Operating Systems kernel provides system call to create and manage threads.

Advantages:

x Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
x Kernel-level threads are especially good for applications that frequently block.

Disadvantages:
x The kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times
slower than those of user-level threads.
x Since the kernel must manage and schedule threads as well as processes, it requires a full thread control
block (TCB) for each thread to maintain information about threads. As a result there is significant
overhead and increased kernel complexity.

Advantages of Threads over Multiple Processes

x Context Switching: threads are very inexpensive to create and destroy, and they are inexpensive to
represent. For example, they require space to store the PC, the SP, and the general-purpose registers,
but they do not require space for memory-management information, information about open files or I/O
devices in use, etc. With so little context, it is much faster to switch between threads. In other words,
it is relatively easier to perform a context switch using threads.
x Sharing: threads allow the sharing of many resources that cannot be shared between processes, for example,
the code section, data section, and Operating System resources like open files.

Disadvantages of Threads over Multiprocesses

x Blocking: the major disadvantage is that if the kernel is single-threaded, a system call by one thread will
block the whole process and the CPU may be idle during the blocking period.
x Security: since there is extensive sharing among threads, there is a potential security problem. It is
quite possible that one thread overwrites the stack of another thread (or damages shared data),
although it is very unlikely since threads are meant to cooperate on a single task.

Application that Benefits from Threads

A proxy server satisfying the requests for a number of computers on a LAN would be benefited by a multi-
threaded process. In general, any program that has to do more than one task at a time could benefit from
multitasking. For example, a program that reads input, process it, and outputs could have three threads, one
for each task.

Application that cannot Benefit from Threads

Any sequential process that cannot be divided into parallel task will not benefit from thread, as they would
block until the previous one completes. For example, a program that displays the time of the day would not
benefit from multiple threads.

2.6 OVERVIEW:MULTITHREADING MODELS

• Many-to-One
• One-to-One
• Many-to-Many


Many-to-One Many user-level threads mapped to single kernel thread


->Examples: ->Solaris Green Threads, ->GNU Portable Threads

One-to-One

1. Each user-level thread maps to kernel thread

• Examples Windows NT/XP/2000


• Linux
• Solaris 9 and later

Many-to-Many Model

1. Allows many user level threads to be mapped to many kernel threads.


2. Allows the operating system to create a sufficient number of kernel threads.
3. Solaris prior to version 9.
4. Windows NT/2000 with the ThreadFiber package.

2.7 THREAD LIBRARIES, THREADING ISSUES.

Resources used in Thread Creation and Process Creation


When a new thread is created it shares its code section, data section and operating system resources like open
files with other threads. But it is allocated its own stack, register set and a program counter.

The creation of a new process differs from that of a thread mainly in the fact that all the shared resources of a
thread are needed explicitly for each process. So though two processes may be running the same piece of code
they need to have their own copy of the code in the main memory to be able to run. Two processes also do not
share other resources with each other. This makes the creation of a new process very costly compared to that
of a new thread.

Thread Pools

1. Create a number of threads in a pool where they await work

• Advantages: Usually slightly faster to service a request with an existing thread than create a new
thread
• Allows the number of threads in the application(s) to be bound to the size of the pool

Context Switch

To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates
interrupts periodically. This allows the operating system to schedule all processes in main memory (using
scheduling algorithm) to run on the CPU at equal intervals. Each time a clock interrupt occurs, the interrupt
handler checks how much time the current running process has used. If it has used up its entire time slice,
then the CPU scheduling algorithm (in kernel) picks a different process to run. Each switch of the CPU from
one process to another is called a context switch.

Major Steps of Context Switching

x The values of the CPU registers are saved in the process table of the process that was running just before the clock interrupt occurred.
x The registers are loaded from the process picked by the CPU scheduler to run next.

In a multiprogrammed uniprocessor computing system, context switches occur frequently enough that all
processes appear to be running concurrently. If a process has more than one thread, the Operating System can
use the context switching technique to schedule the threads so they appear to execute in parallel. This is the
case if threads are implemented at the kernel level. Threads can also be implemented entirely at the user level
in run-time libraries. Since in this case no thread scheduling is provided by the Operating System, it is the
responsibility of the programmer to yield the CPU frequently enough in each thread so all threads in the
process can make progress.

Action of Kernel to Context Switch Among Threads


The threads share a lot of resources with other peer threads belonging to the same process. So a context
switch among threads for the same process is easy. It involves switch of register set, the program counter and
the stack. It is relatively easy for the kernel to accomplished this task.

Action of kernel to Context Switch Among Processes

Context switches among processes are expensive. Before a process can be switched its process control block
(PCB) must be saved by the operating system. The PCB consists of the following information:

x The process state.


x The program counter, PC.
x The values of the different registers.
x The CPU scheduling information for the process.
x Memory management information regarding the process.
x Possible accounting information for this process.
x I/O status information of the process.

When the PCB of the currently executing process is saved the operating system loads the PCB of the next
process that has to be run on CPU. This is a heavy task and it takes a lot of time.

2.8 PROCESS SCHEDULING: BASIC CONCEPTS

The assignment of physical processors to processes allows processors to accomplish work. The problem of
determining when processors should be assigned and to which processes is called processor scheduling or
CPU scheduling. When more than one process is runnable, the operating system must decide which one to run first.
The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is
called the scheduling algorithm.

CPU Scheduler

a. Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of
them
b. CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state

2. Switches from running to ready state

3. Switches from waiting to ready


4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive


Dispatcher

1. Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this
involves:

• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program

2. Dispatch latency – time it takes for the dispatcher to stop one process and start another running.

2.9 SCHEDULING CRITERIA

1. CPU utilization – keep the CPU as busy as possible


2. Throughput – # of processes that complete their execution per time unit
3. Turnaround time – amount of time to execute a particular process
4. Waiting time – amount of time a process has been waiting in the ready queue
5. Response time – amount of time it takes from when a request was submitted until the first response is
produced, not output (for time-sharing environment)

General Goals

Fairness
Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair
share of the CPU and no process can suffer indefinite postponement. Note that giving equivalent or equal
time is not fair. Think of safety control and payroll at a nuclear plant.

Policy Enforcement
The scheduler has to make sure that system's policy is enforced. For example, if the local policy is safety
then the safety control processes must be able to run whenever they want to, even if it means delay in payroll
processes.

Efficiency
Scheduler should keep the system (or in particular CPU) busy cent percent of the time when possible.
If the CPU and all the Input/Output devices can be kept running all the time, more work gets done per
second than if some components are idle.

Response Time
A scheduler should minimize the response time for interactive user.

Turnaround
A scheduler should minimize the time batch users must wait for an output.

Throughput
A scheduler should maximize the number of jobs processed per unit time.
A little thought will show that some of these goals are contradictory. It can be shown that any scheduling
algorithm that favors some class of jobs hurts another class of jobs. The amount of CPU time available is
finite, after all.
Preemptive Vs Nonpreemptive Scheduling

The Scheduling algorithms can be divided into two categories with respect to how they deal with clock
interrupts.

Nonpreemptive Scheduling

A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be
taken away from that process.

Following are some characteristics of nonpreemptive scheduling

1. In nonpreemptive system, short jobs are made to wait by longer jobs but the overall treatment of all
processes is fair.
2. In nonpreemptive system, response times are more predictable because incoming high priority jobs can not
displace waiting jobs.
3. In nonpreemptive scheduling, a scheduler executes jobs in the following two situations.
a. When a process switches from running state to the waiting state.
b. When a process terminates.

Preemptive Scheduling

A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it.

The strategy of allowing processes that are logically runnable to be temporarily suspended is called
Preemptive Scheduling, and it is in contrast to the "run to completion" method.

2.10 SCHEDULING ALGORITHMS

CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be
allocated the CPU. Following are some scheduling algorithms we will study.

• FCFS Scheduling.
• Round Robin Scheduling.
• SJF Scheduling.
• SRT Scheduling.
• Priority Scheduling.
• Multilevel Queue Scheduling.
• Multilevel Feedback Queue Scheduling.

First-Come-First-Served (FCFS) Scheduling


Other names of this algorithm are:
x First-In-First-Out (FIFO)
x Run-to-Completion


x Run-Until-Done

First-Come-First-Served is perhaps the simplest scheduling algorithm. Processes are dispatched according to their
arrival time on the ready queue. Being a nonpreemptive discipline, once a process has the CPU, it runs to
completion. FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that
long jobs make short jobs wait and unimportant jobs make important jobs wait.

FCFS is more predictable than most other schemes. The FCFS scheme is not useful in scheduling interactive users
because it cannot guarantee good response time. The code for FCFS scheduling is simple to write and understand.
One of the major drawbacks of this scheme is that the average waiting time is often quite long. The
First-Come-First-Served algorithm is rarely used as the master scheme in modern operating systems, but it is often
embedded within other schemes.

Example:-Process Burst Time

P1 24

P2 3

P3 3

Suppose that the processes arrive in the order: P1 , P2 , P3 ,The Gantt Chart for the schedule is:

Waiting time for P1 = 0; P2 = 24; P3 = 27, Average waiting time: (0 + 24 + 27)/3 = 17 Suppose that the

processes arrive in the order P2 , P3 , P1 , The Gantt chart for the schedule is:

Waiting time for P1 = 6; P2 =0; P3 = 3, Average waiting time: (6 + 0 + 3)/3 = 3
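The waiting times above can be checked mechanically; the short C sketch below (added as an illustration, not part of the original notes) computes FCFS waiting times for processes given in their arrival order:

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};                  /* P1, P2, P3 in arrival order */
        int n = 3, wait = 0, total = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %d\n", i + 1, wait); /* waiting time = sum of earlier bursts */
            total += wait;
            wait += burst[i];
        }
        printf("average waiting time = %.2f\n", (double)total / n);
        return 0;
    }

Swapping the burst array to {3, 3, 24} reproduces the second ordering and its average of 3.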


Round Robin Scheduling
x One of the oldest, simplest, fairest and most widely used algorithms is round robin (RR).
x In round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount of CPU time called a time-slice or a quantum.
x If a process does not complete before its CPU-time expires, the CPU is preempted and given to the next
process waiting in a queue. The preempted process is then placed at the back of the ready list.
x Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in timesharing
environments in which the system needs to guarantee reasonable response times for interactive users.
x The only interesting issue with the round robin scheme is the length of the quantum. Setting the quantum
too short causes too many context switches and lowers the CPU efficiency. On the other hand, setting
the quantum too long may cause poor response time and approximates FCFS.

In any event, the average waiting time under round robin scheduling is often quite long.

1. Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the end of the ready queue.
2. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
3. Performance: if q is very large, RR behaves like FIFO; if q is very small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example: Process Burst Time

P1 53
P2 17
P3 68
P4 24

The Gantt chart is:

P1   P2   P3   P4   P1   P3   P4   P1   P3   P3
0   20   37   57   77   97  117  121  134  154  162

->Typically, RR gives a higher average turnaround time than SJF, but better response time.
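The chart above can be reproduced by a small simulation. The following C sketch (added as an illustration, not part of the original notes) runs round robin with a quantum of 20 for the four bursts, all assumed to arrive at time 0, and prints each process's waiting time (completion time minus burst time):

    #include <stdio.h>

    int main(void)
    {
        int burst[]  = {53, 17, 68, 24};           /* P1..P4, all arriving at time 0 */
        int remain[] = {53, 17, 68, 24};
        int finish[4];
        int n = 4, q = 20, t = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {          /* cycle through the ready processes */
                if (remain[i] == 0) continue;
                int slice = remain[i] < q ? remain[i] : q;
                t += slice;
                remain[i] -= slice;
                if (remain[i] == 0) { finish[i] = t; done++; }
            }
        }
        for (int i = 0; i < n; i++)
            printf("P%d waiting time = %d\n", i + 1, finish[i] - burst[i]);
        return 0;
    }

For this workload the waiting times come out as P1 = 81, P2 = 20, P3 = 94 and P4 = 97, an average of 73.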

Shortest-Job-First (SJF) Scheduling


x Other name of this algorithm is Shortest-Process-Next (SPN).
x Shortest-Job-First (SJF) is a non-preemptive discipline in which waiting job (or process) with the
smallest estimated run-time-to-completion is run next. In other words, when CPU is available, it is
assigned to the process that has smallest next CPU burst.
x The SJF scheduling is especially appropriate for batch jobs for which the run times are known in
advance. Since the SJF scheduling algorithm gives the minimum average time for a given set of
processes, it is probably optimal.
x The SJF algorithm favors short jobs (or processes) at the expense of longer ones.
x The obvious problem with the SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.
x The best SJF can do is to rely on user estimates of run times.

In the production environment where the same jobs run regularly, it may be possible to provide
reasonable estimate of run time, based on the past performance of the process. But in the development
environment users rarely know how their program will execute.

Like FCFS, SJF is non-preemptive; therefore, it is not useful in a timesharing environment in which reasonable
response times must be guaranteed.


1. Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
2. Two schemes:

   x nonpreemptive – once the CPU is given to the process it cannot be preempted until it completes its CPU burst
   x preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as the Shortest-Remaining-Time-First (SRTF)

3. SJF is optimal – gives minimum average waiting time for a given set of processes

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4


-> SJF (preemptive)

->Average waiting time = (9 + 1 + 0 +2)/4 = 3
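The preemptive (SRTF) figure follows from this timeline, worked out here from the table above for clarity:

    t = 0–2  : P1 runs (5 ms left when P2 arrives with 4 ms, so P1 is preempted)
    t = 2–4  : P2 runs (2 ms left when P3 arrives with 1 ms, so P2 is preempted)
    t = 4–5  : P3 runs to completion
    t = 5–7  : P2 runs to completion (shortest remaining of P1 = 5, P2 = 2, P4 = 4)
    t = 7–11 : P4 runs to completion
    t = 11–16: P1 runs to completion

Waiting times: P1 = 16 - 7 - 0 = 9, P2 = 7 - 4 - 2 = 1, P3 = 5 - 1 - 4 = 0, P4 = 11 - 4 - 5 = 2, which gives the average of 3 quoted above.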

Shortest-Remaining-Time (SRT) Scheduling

x The SRT is the preemptive counterpart of SJF and is useful in a time-sharing environment.
x In SRT scheduling, the process with the smallest estimated run-time to completion is run next, including new arrivals.
x In the SJF scheme, once a job begins executing, it runs to completion.
x In the SRT scheme, a running process may be preempted by a newly arriving process with a shorter estimated run-time.
x The algorithm SRT has higher overhead than its counterpart SJF.
x The SRT must keep track of the elapsed time of the running process and must handle occasional preemptions.
x In this scheme, small processes that arrive will run almost immediately. However, longer jobs have an even longer mean waiting time.

Priority Scheduling
x A priority number (integer) is associated with each process
x The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
   o Preemptive
   o Nonpreemptive
x SJF is priority scheduling where the priority is the predicted next CPU burst time
x Problem: Starvation – low priority processes may never execute
x Solution: Aging – as time progresses, increase the priority of the process

The basic idea is straightforward: each process is assigned a priority, and the runnable process with the highest
priority is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF)
algorithm is a special case of the general priority scheduling algorithm: an SJF algorithm is simply a priority
algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst,
the lower the priority, and vice versa.

Priority can be defined either internally or externally. Internally defined priorities use some measurable
quantities or qualities to compute priority of a process.

Examples of Internal priorities are


x Time limits.
x Memory requirements.
x File requirements, for example, the number of open files.
x CPU Vs I/O requirements.

Externally defined priorities are set by criteria that are external to the operating system, such as

x The importance of the process.
x Type or amount of funds being paid for computer use.
x The department sponsoring the work.
x Politics.

Priority scheduling can be either preemptive or non preemptive

x A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
x A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.

A major problem with priority scheduling is indefinite blocking or starvation. A solution to the problem of
indefinite blockage of the low-priority process is aging. Aging is a technique of gradually increasing the
priority of processes that wait in the system for a long period of time.

Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. In multilevel
queue scheduling, processes are permanently assigned to one queue, based on some property of the process, such
as memory size, process priority, or process type. The algorithm chooses the process from the occupied queue that
has the highest priority, and runs that process either preemptively or non-preemptively. Each queue has its own
scheduling algorithm or policy.

Possibility I
If each queue has absolute priority over lower-priority queues, then no process in a lower-priority queue could run
unless the queues for the higher-priority processes were all empty. For example, in the above figure no process in
the batch queue could run unless the queues for system processes, interactive processes, and interactive editing
processes were all empty.


Possibility II

If there is a time slice between the queues then each queue gets a certain amount of CPU times, which it can
then schedule among the processes in its queue. For instance;

x 80% of the CPU time to foreground queue using RR.


x 20% of the CPU time to background queue using FCFS.

Since processes do not move between queue so, this policy has the advantage of low scheduling overhead, but
it is inflexible.

Multilevel Feedback Queue Scheduling

Multilevel feedback queue scheduling allows a process to move between queues. It uses many ready queues and
associates a different priority with each queue. The algorithm chooses the process with the highest priority from
the occupied queues and runs that process either preemptively or non-preemptively. If the process uses too much
CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority
queue may be moved to a higher-priority queue. Note that this form of aging prevents starvation.

x A process entering the ready queue is placed in queue 0.


x If it does not finish within 8 milliseconds time, it is moved to the tail of queue 1.
x If it does not complete, it is preempted and placed into queue 2.
x Processes in queue 2 run on an FCFS basis, only when queue 0 and queue 1 are empty.

Example:-Three queues:

• Q0 – RR with time quantum 8 milliseconds


• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS

1. Scheduling

• A new job enters queue Q0 which is served FCFS. When it gains CPU, job receives 8 milliseconds. If
it does not finish in 8 milliseconds, job is moved to queue Q1.
• At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still does not complete,
it is preempted and moved to queue Q2.


2.11 THREAD SCHEDULING

Pthreads

x A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization.
x The API specifies the behavior of the thread library; the implementation is up to the developers of the library.
x Common in UNIX operating systems.
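A minimal Pthreads sketch (added as an illustration, not part of the original notes) that creates one thread and waits for it to finish; on most UNIX systems it is built with cc file.c -lpthread:

    #include <pthread.h>
    #include <stdio.h>

    /* function executed by the new thread */
    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        int id = 1;

        pthread_create(&tid, NULL, worker, &id);   /* create the thread */
        pthread_join(tid, NULL);                   /* wait for it to terminate */
        return 0;
    }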

Windows 2000 Threads


x Implements the one-to-one mapping.
x Each thread contains -a thread id -register set -separate user and kernel stacks -private data storage area

Linux threads
x Linux refers to them as tasks rather than threads.
x Thread creation is done through the clone() system call.
x clone() allows a child task to share the address space of the parent task (process).

Java threads
x Java threads may be created by:
   o Extending the Thread class
   o Implementing the Runnable interface
x Java threads are managed by the JVM.


2.12 MULTIPLE-PROCESSOR SCHEDULING

x CPU scheduling is more complex when multiple CPUs are available.
x Homogeneous processors within a multiprocessor.
x Load sharing.
x Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.

x Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
x Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

Algorithm Evaluation
x Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload.
x Queueing models
x Implementation

Evaluation of CPU Schedulers by Simulation


IMPORTANT QUESTIONS:

1. Describe the differences among short-term, medium-term, and long-term scheduling.


2. Describe the actions a kernel takes to context switch between processes.
3. What are two differences between user-level threads and kernel-level threads?
4. Describe the actions taken by a kernel to context switch between kernel-level threads.
5. What resources are used when a thread is created? How do they differ from those used when a process is
created?
6. Define the difference between preemptive and nonpreemptive scheduling
7. Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a nonpreemptive
priority (a smaller priority number implies a higher priority), and RR (quantum = 1) scheduling.
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of the scheduling algorithms in part a?
d. Which of the schedules in part a results in the minimal average waiting time (over all processes)?

8. Suppose that the following processes arrive for execution at the times indicated. Each process will
run the listed amount of time. In answering the questions, use nonpreemptive scheduling and base
all decisions on the information you have at the time the decision must be made.

Process Arrival Time Burst Time
P1 0.0 8
P2 0.4 4
P3 1.0 1
a. What is the average turnaround time for these processes with the FCFS scheduling
algorithm?
b.What is the average turnaround time for these processes with the SJF scheduling
algorithm?
c. The SJF algorithm is supposed to improve performance, but notice that we chose to run process P1 at time
0 because we did not know that two shorter processes would arrive soon. Compute what the average
turnaround time will be if the CPU is left idle for the first 1 unit and then SJF scheduling is used. Remember
that processes P1 and P2 are waiting during this idle time, so their waiting time may increase. This algorithm
could be known as future-knowledge scheduling.


UNIT 3 PROCESS SYNCHRONIZATION

TOPICS
3.1 SYNCHRONIZATION
3.2 THE CRITICAL SECTION PROBLEM
3.3 PETERSON’S SOLUTION
3.4 SYNCHRONIZATION HARDWARE
3.5 SEMAPHORES
3.6 CLASSICAL PROBLEMS OF SYNCHRONIZATION
3.7 MONITORS


3.1 SYNCHRONIZATION

Since processes frequently need to communicate with other processes, there is a need for well-structured
communication, without using interrupts, among processes.

Race Conditions

In operating systems, processes that are working together share some common storage (main memory, a file,
etc.) that each process can read and write. When two or more processes are reading or writing some shared
data and the final result depends on who runs precisely when, the situation is called a race condition. Concurrently
executing threads that share data need to synchronize their operations and processing in order to avoid race
conditions on shared data. Only one 'customer' thread at a time should be allowed to examine and update the
shared variable. Race conditions are also possible inside Operating Systems. If the ready queue is implemented as
a linked list and the ready queue is being manipulated during the handling of an interrupt, then interrupts
must be disabled to prevent another interrupt before the first one completes. If interrupts are not disabled then
the linked list could become corrupt.

1. count++ could be implemented as

register1 = count

register1 = register1 + 1

count = register1

2. count--could be implemented as

register2 = count

register2 = register2 – 1

count = register2

3. Consider this execution interleaving with “count = 5” initially:

S0: producer execute register1 = count {register1 = 5} S1: producer execute register1 = register1 + 1
{register1 = 6} S2: consumer execute register2 = count {register2 = 5} S3: consumer execute register2 =
register2 -1 {register2 = 4} S4: producer execute count = register1 {count = 6 } S5: consumer execute count
= register2 {count = 4}
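The interleaving above is easy to reproduce with two threads sharing a counter. In the following C/Pthreads sketch (added as an illustration, not part of the original notes), the unprotected count++ is a read-update-write sequence, so the final value is usually less than the expected 2000000:

    #include <pthread.h>
    #include <stdio.h>

    int count = 0;                         /* shared variable */

    static void *increment(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            count++;                       /* not atomic: load, add, store */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("count = %d (expected 2000000)\n", count);
        return 0;
    }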

3.2 THE CRITICAL SECTION PROBLEM


1. Mutual Exclusion -If process Pi is executing in its critical section, then no other processes can be executing
in their critical sections

2. Progress -If no process is executing in its critical section and there exist some processes that wish to enter
their critical section, then the selection of the processes that will enter the critical section next cannot be
postponed indefinitely
3. Bounded Waiting -A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request
is granted
• Assume that each process executes at a nonzero speed
• No assumption concerning relative speed of the N processes

A. Critical Section

The key to preventing trouble involving shared storage is to find some way to prohibit more than one process
from reading and writing the shared data simultaneously. That part of the program where the shared memory
is accessed is called the Critical Section. To avoid race conditions and flawed results, one must identify the code
in the Critical Section of each thread. The characteristic properties of code that forms a Critical Section are:

x Code that references one or more variables in a "read-update-write" fashion while any of those variables is possibly being altered by another thread.
x Code that alters one or more variables that are possibly being referenced in a "read-update-write" fashion by another thread.
x Code that uses a data structure while any part of it is possibly being altered by another thread.
x Code that alters any part of a data structure while it is possibly in use by another thread.

Here, the important point is that when one process is executing shared modifiable data in its critical section, no
other process is to be allowed to execute in its critical section. Thus, the execution of critical sections by the
processes is mutually exclusive in time.

B. Mutual Exclusion
A way of making sure that if one process is using shared modifiable data, the other processes will be
excluded from doing the same thing. Formally, while one process executes the shared variable, all other
processes desiring to do so at the same moment should be kept waiting; when that process has finished
executing the shared variable, one of the processes waiting to do so should be allowed to proceed. In this fashion, each
process executing the shared data (variables) excludes all others from doing so simultaneously. This is called
Mutual Exclusion.
Note that mutual exclusion needs to be enforced only when processes access shared modifiable data; when
processes are performing operations that do not conflict with one another, they should be allowed to proceed
concurrently.

Mutual Exclusion Conditions

If we could arrange matters such that no two processes were ever in their critical sections simultaneously, we
could avoid race conditions. We need four conditions to hold to have a good solution for the critical section
problem (mutual exclusion).

x No two processes may be at the same moment inside their critical sections.
x No assumptions are made about relative speeds of processes or number of CPUs.
x No process outside its critical section should block other processes.
x No process should wait arbitrarily long to enter its critical section.

3.3 PETERSON’S SOLUTION

The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and a post-protocol (or exit
protocol) to keep two or more threads from being in their critical sections at the same time. Tanenbaum
examines proposals for the critical-section problem, or mutual exclusion problem.

Problem

When one process is updating shared modifiable data in its critical section, no other process should be allowed to
enter its critical section.

Proposal 1 -Disabling Interrupts (Hardware Solution)

Each process disables all interrupts just after entering its critical section and re-enables all interrupts just
before leaving its critical section. With interrupts turned off, the CPU cannot be switched to another process.
Hence, no other process will enter its critical section and mutual exclusion is achieved.

Conclusion
Disabling interrupts is sometimes a useful technique within the kernel of an operating system, but it is not
appropriate as a general mutual exclusion mechanism for user processes. The reason is that it is unwise to give
user processes the power to turn off interrupts.

Proposal 2 -Lock Variable (Software Solution)

In this solution, we consider a single, shared (lock) variable, initially 0. When a process wants to enter its
critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section.
If the lock is already 1, the process just waits until the (lock) variable becomes 0. Thus, a 0 means that no process
is in its critical section, and 1 means hold your horses - some process is in its critical section.

Conclusion

The flaw in this proposal can be best explained by example. Suppose process A sees that the lock is 0. Before
it can set the lock to 1 another process B is scheduled, runs, and sets the lock to 1. When the process A runs
again, it will also set the lock to 1, and two processes will be in their critical section simultaneously.

Proposal 3 -Strict Alteration

In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the critical section.
Initially, process A inspects turn, finds it to be 0, and enters its critical section. Process B also finds it to be 0
and sits in a loop continually testing 'turn' to see when it becomes 1. Continuously testing a variable, waiting
for some value to appear, is called busy-waiting.

Conclusion

Taking turns is not a good idea when one of the processes is much slower than the other. Suppose process 0
finishes its critical section quickly, so both processes are now in their noncritical section. This situation
violates above mentioned condition 3.
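The algorithm that gives this section its title combines the lock-variable and turn ideas. The notes do not list the code itself, so a standard two-process formulation is sketched here for reference (i is this process, j = 1 - i is the other):

    int turn;                  /* whose turn it is to enter */
    int flag[2] = {0, 0};      /* flag[i] = 1 means process i wants to enter */

    /* entry protocol for process i */
    flag[i] = 1;
    turn = j;
    while (flag[j] && turn == j)
        ;                      /* busy-wait while the other wants in and it is its turn */

    /* critical section */

    /* exit protocol */
    flag[i] = 0;

This satisfies mutual exclusion, progress and bounded waiting, although it still relies on busy waiting and on loads and stores not being reordered.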

Using System calls 'sleep' and 'wakeup'


Basically, what the above-mentioned solutions do is this: when a process wants to enter its critical section, it
checks to see if entry is allowed. If it is not, the process goes into a tight loop and waits (i.e., starts busy
waiting) until it is allowed to enter. This approach wastes CPU time.

Now look at an interprocess communication primitive pair: sleep and wakeup.

x Sleep
o It is a system call that causes the caller to block, that is, be suspended until some other
process wakes it up. x Wakeup
o It is a system call that wakes up the process.

Both 'sleep' and 'wakeup' system calls have one parameter that represents a memory address used to
match up 'sleeps' and 'wakeups' .

The Bounded Buffer Producers and Consumers

The bounded buffer producers and consumers assumes that there is a fixed buffer size i.e., a finite
numbers of slots are available.

Statement
To suspend the producers when the buffer is full, to suspend the consumers when the buffer is empty, and to
make sure that only one process at a time manipulates a buffer so there are no race conditions or lost updates.
As an example how sleep-wakeup system calls are used, consider the producer-consumer problem also known
as bounded buffer problem. Two processes share a common, fixed-size (bounded) buffer. The producer puts
information into the buffer and the consumer takes information out.

Trouble arises when

1. The producer wants to put a new data in the buffer, but buffer is already full. Solution: Producer goes to
sleep and to be awakened when the consumer has removed data.
2. The consumer wants to remove data the buffer but buffer is already empty. Solution: Consumer goes to
sleep until the producer puts some data in buffer and wakes consumer up.

Conclusion

This approach also leads to the same race conditions we have seen in earlier approaches. A race condition can
occur because access to 'count' is unconstrained. The essence of the problem is that a wakeup
call, sent to a process that is not sleeping, is lost.

3.4 SYNCHRONIZATION HARDWARE

1. Many systems provide hardware support for critical section code


2. Uniprocessors – could disable interrupts
   • Currently running code would execute without preemption
   • Generally too inefficient on multiprocessor systems
   • Operating systems using this are not broadly scalable
3. Modern machines provide special atomic hardware instructions

->Atomic = non-interruptable

• Either test memory word and set value


• Or swap contents of two memory words

3.5 SEMAPHORES

E.W. Dijkstra (1965) abstracted the key notion of mutual exclusion in his concepts of semaphores.

Definition
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V
and an initialization operation ('semaphore initialize').


Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also called general
semaphores) can assume any nonnegative value. The P (or wait or sleep or down) operation on semaphore
S, written as P(S) or wait (S), operates as follows:

P(S): IF S>0
THEN S:= S-1
ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal (S), operates as
follows:

V(S): IF (one or more process are waiting on S)


THEN (let one of these processes proceed)
ELSE S := S +1

Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore
operation has started, no other process can access the semaphore until the operation has completed. Mutual
exclusion on the semaphore, S, is enforced within P(S) and V(S).

If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other
processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer
indefinite postponement. Semaphores solve the lost-wakeup problem.

Semaphore as General Synchronization Tool

1. Counting semaphore – integer value can range over an unrestricted domain.
2. Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement. Also known as mutex locks.
3. Can implement a counting semaphore S as a binary semaphore.
4. Provides mutual exclusion:

   Semaphore S;   // initialized to 1
   wait (S);
      // Critical Section
   signal (S);
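As a concrete counterpart to the wait/signal pattern above, the following POSIX sketch (added as an illustration, not part of the original notes) protects a shared counter with a binary semaphore:

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t S;
    int counter = 0;

    static void *work(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&S);          /* P(S): enter the critical section */
            counter++;
            sem_post(&S);          /* V(S): leave the critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&S, 0, 1);        /* binary semaphore, initial value 1 */
        pthread_create(&t1, NULL, work, NULL);
        pthread_create(&t2, NULL, work, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* always 200000 with the semaphore */
        return 0;
    }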

Semaphore Implementation

1. Must guarantee that no two processes can execute wait () and signal () on the same semaphore at the same time.
2. Thus, the implementation becomes the critical section problem, where the wait and signal code are placed in the critical section.

   • Could now have busy waiting in the critical section implementation

• But implementation code is short

• Little busy waiting if critical section rarely occupied


3. Note that applications may spend lots of time in critical sections and therefore this is not a good solution.


Semaphore Implementation with no Busy waiting

1. With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data
items:

value (of type integer)


pointer to next record in the list

2. Two operations:

   • block – place the process invoking the operation on the appropriate waiting queue.
   • wakeup – remove one of the processes in the waiting queue and place it in the ready queue.

-> Implementation of wait:

   wait (S) {
       value--;
       if (value < 0) {
           // add this process to the waiting queue
           block();
       }
   }

-> Implementation of signal:

   signal (S) {
       value++;
       if (value <= 0) {
           // remove a process P from the waiting queue
           wakeup(P);
       }
   }


3.6 CLASSICAL PROBLEMS OF SYNCHRONIZATION

1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem

Bounded-Buffer Problem

1. N buffers, each can hold one item
2. Semaphore mutex initialized to the value 1
3. Semaphore full initialized to the value 0
4. Semaphore empty initialized to the value N
5. The structure of the producer process:

   while (true) {
       // produce an item
       wait (empty);
       wait (mutex);
       // add the item to the buffer
       signal (mutex);
       signal (full);
   }

6. The structure of the consumer process:

   while (true) {
       wait (full);
       wait (mutex);
       // remove an item from buffer
       signal (mutex);
       signal (empty);
       // consume the removed item
   }

Readers-Writers Problem

1. A data set is shared among a number of concurrent processes


o Readers – only read the data set; they do not perform any updates.
o Writers – can both read and write.

2. Problem – allow multiple readers to read at the same time; only one writer can access the shared
data at a time.
3. Shared data:
   o Data set
   o Semaphore mutex initialized to 1
   o Semaphore wrt initialized to 1
   o Integer readcount initialized to 0

4. The structure of a writer process:

   while (true) {
       wait(wrt);
       // writing is performed
       signal(wrt);
   }

5. The structure of a reader process:

   while (true) {
       wait(mutex);
       readcount++;
       if (readcount == 1)
           wait(wrt);
       signal(mutex);

       // reading is performed

       wait(mutex);
       readcount--;
       if (readcount == 0)
           signal(wrt);
       signal(mutex);
   }
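For reference, a compilable pthreads version of this first readers-writers structure is sketched below. It is an illustration only: the use of unnamed POSIX semaphores from <semaphore.h>, the shared_data counter and the thread counts are assumptions made for the example, not part of the notes.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    /* First readers-writers problem: mutex protects readcount, wrt gives writers
       (and the first/last reader) exclusive access to the shared data. */
    static sem_t mutex, wrt;
    static int readcount = 0;
    static int shared_data = 0;            /* illustrative shared data set */

    static void *reader(void *arg) {
        long id = (long)arg;

        sem_wait(&mutex);
        readcount++;
        if (readcount == 1)
            sem_wait(&wrt);                /* first reader locks out writers */
        sem_post(&mutex);

        printf("reader %ld sees %d\n", id, shared_data);   /* reading is performed */

        sem_wait(&mutex);
        readcount--;
        if (readcount == 0)
            sem_post(&wrt);                /* last reader lets writers back in */
        sem_post(&mutex);
        return NULL;
    }

    static void *writer(void *arg) {
        (void)arg;
        sem_wait(&wrt);
        shared_data++;                     /* writing is performed */
        sem_post(&wrt);
        return NULL;
    }

    int main(void) {
        pthread_t r[3], w;
        sem_init(&mutex, 0, 1);
        sem_init(&wrt, 0, 1);
        pthread_create(&w, NULL, writer, NULL);
        for (long i = 0; i < 3; i++)
            pthread_create(&r[i], NULL, reader, (void *)i);
        pthread_join(w, NULL);
        for (int i = 0; i < 3; i++)
            pthread_join(r[i], NULL);
        sem_destroy(&mutex);
        sem_destroy(&wrt);
        return 0;
    }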

Dining-Philosophers Problem

1. Shared data

o Bowl of rice (data set)
o Semaphore chopstick[5], each element initialized to 1

2. The structure of philosopher i:

   while (true) {
       wait(chopstick[i]);
       wait(chopstick[(i + 1) % 5]);
       // eat
       signal(chopstick[i]);
       signal(chopstick[(i + 1) % 5]);
       // think
   }
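A pthreads rendition of the same structure is sketched below, assuming five philosopher threads, unnamed POSIX semaphores for the chopsticks, and a single round of eating per philosopher (all choices made for illustration). Note that this simple scheme can deadlock if all five philosophers pick up their left chopstick at the same time; common remedies are to acquire chopsticks in a fixed global order or to allow at most four philosophers at the table.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5                                     /* number of philosophers */

    static sem_t chopstick[N];                      /* one binary semaphore per chopstick */

    static void *philosopher(void *arg) {
        long i = (long)arg;

        /* Caution: left-then-right acquisition can deadlock if all philosophers do it at once. */
        sem_wait(&chopstick[i]);                    /* pick up left chopstick  */
        sem_wait(&chopstick[(i + 1) % N]);          /* pick up right chopstick */

        printf("philosopher %ld is eating\n", i);   /* eat */

        sem_post(&chopstick[i]);                    /* put down left chopstick  */
        sem_post(&chopstick[(i + 1) % N]);          /* put down right chopstick */
        /* think */
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        for (int i = 0; i < N; i++)
            sem_init(&chopstick[i], 0, 1);
        for (long i = 0; i < N; i++)
            pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);
        for (int i = 0; i < N; i++)
            sem_destroy(&chopstick[i]);
        return 0;
    }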

Problems with Semaphores


1. Incorrect use of semaphore operations can cause timing errors that are hard to detect, for example:

   o signal(mutex) ... wait(mutex)   (mutual exclusion is violated)
   o wait(mutex) ... wait(mutex)     (a deadlock occurs)
   o Omitting wait(mutex) or signal(mutex) (or both)

3.7 MONITORS

1. A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization.
2. Only one process may be active within the monitor at a time.

   monitor monitor-name
   {
       // shared variable declarations

       procedure P1 (...) { .... }
       ...
       procedure Pn (...) { .... }

       initialization code (...) { ... }
   }


Solution to Dining Philosophers

   monitor DP
   {
       enum { THINKING, HUNGRY, EATING } state[5];
       condition self[5];

       void pickup(int i) {
           state[i] = HUNGRY;
           test(i);
           if (state[i] != EATING)
               self[i].wait();
       }

       void putdown(int i) {
           state[i] = THINKING;
           // test left and right neighbors
           test((i + 4) % 5);
           test((i + 1) % 5);
       }

       void test(int i) {
           if ((state[(i + 4) % 5] != EATING) &&
               (state[i] == HUNGRY) &&
               (state[(i + 1) % 5] != EATING)) {
               state[i] = EATING;
               self[i].signal();
           }
       }

       initialization_code() {
           for (int i = 0; i < 5; i++)
               state[i] = THINKING;
       }
   }

Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

   dp.pickup(i);
   // eat
   dp.putdown(i);

Monitor Implementation Using Semaphores

1. Variables:

   semaphore mutex;    // (initially = 1)
   semaphore next;     // (initially = 0)
   int next_count = 0;

2. Each procedure F within the monitor will be replaced by:

   wait(mutex);
   ...
   body of F;
   ...

   if (next_count > 0)
       signal(next);
   else
       signal(mutex);

3. Mutual exclusion within a monitor is ensured.
4. For each condition variable x, we have:

   semaphore x_sem;    // (initially = 0)
   int x_count = 0;

5. The operation x.wait can be implemented as:

   x_count++;
   if (next_count > 0)
       signal(next);
   else
       signal(mutex);
   wait(x_sem);
   x_count--;

6. The operation x.signal can be implemented as:

   if (x_count > 0) {
       next_count++;
       signal(x_sem);
       wait(next);
       next_count--;
   }

Producer-Consumer Problem Using Semaphores

The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex.

The semaphore full is used for counting the number of slots in the buffer that are full, empty for counting
the number of slots that are empty, and mutex to make sure that the producer and consumer do not access the
modifiable shared section of the buffer simultaneously.

Initialization
• Set full buffer slots to 0, i.e., semaphore full = 0.
• Set empty buffer slots to N, i.e., semaphore empty = N.
• To control access to the critical section, set mutex to 1, i.e., semaphore mutex = 1.

Producer()
   WHILE (true)
       produce-Item();
       P(empty);
       P(mutex);
       enter-Item();
       V(mutex);
       V(full);

Consumer()
   WHILE (true)
       P(full);
       P(mutex);
       remove-Item();
       V(mutex);
       V(empty);
       consume-Item(Item);
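A compilable counterpart of this pseudocode, using POSIX threads and unnamed semaphores, might look as follows. The buffer size, item count and the use of an int circular buffer are arbitrary choices for the example and are not taken from the notes.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFFER_SIZE 5
    #define NUM_ITEMS   10

    static int buffer[BUFFER_SIZE];
    static int in = 0, out = 0;                 /* next free slot / next full slot */

    static sem_t empty;                         /* counts empty slots, initially BUFFER_SIZE */
    static sem_t full;                          /* counts full slots,  initially 0           */
    static sem_t mutex;                         /* protects the buffer, initially 1          */

    static void *producer(void *arg) {
        (void)arg;
        for (int item = 0; item < NUM_ITEMS; item++) {
            sem_wait(&empty);                   /* P(empty): wait for a free slot   */
            sem_wait(&mutex);                   /* P(mutex): enter critical section */
            buffer[in] = item;                  /* enter-Item                       */
            in = (in + 1) % BUFFER_SIZE;
            sem_post(&mutex);                   /* V(mutex)                         */
            sem_post(&full);                    /* V(full): one more full slot      */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < NUM_ITEMS; i++) {
            sem_wait(&full);                    /* P(full): wait for a full slot    */
            sem_wait(&mutex);                   /* P(mutex)                         */
            int item = buffer[out];             /* remove-Item                      */
            out = (out + 1) % BUFFER_SIZE;
            sem_post(&mutex);                   /* V(mutex)                         */
            sem_post(&empty);                   /* V(empty): one more empty slot    */
            printf("consumed %d\n", item);      /* consume-Item                     */
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, BUFFER_SIZE);
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&empty);
        sem_destroy(&full);
        sem_destroy(&mutex);
        return 0;
    }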

IMPORTANT QUESTIONS:
1. What is the meaning of the term busy waiting?
2. Explain semaphores with the help of an example.
3. Explain the dining-philosophers problem and how it is solved.
4. What is a race condition? Explain how it is handled.
5. Define process synchronization.
6. How is the producer-consumer problem solved with the help of semaphores?


UNIT 4 DEADLOCK

TOPICS

4.1 DEADLOCKS

4.2 SYSTEM MODEL

4.3 DEADLOCK CHARACTERIZATION

4.4 METHODS FOR HANDLING DEADLOCKS

4.5 DEADLOCK PREVENTION

4.6 DEADLOCK AVOIDANCE

4.7 DEADLOCK DETECTION

4.8 RECOVERY FROM DEADLOCK


4.1 DEADLOCKS

• When a process requests a resource and that resource is not available at the time, the process enters
a waiting state. A waiting process may never change its state again, because the resources it has requested are held
by other processes. This situation is called a deadlock.
• In short, the situation in which a process waits indefinitely for a resource that is not available is called a deadlock.

4.2 SYSTEM MODEL


• A system consists of a finite number of resources to be distributed among a number of processes.
These resources are partitioned into several types, each consisting of some number of identical instances.
• A process must request a resource before using it and must release the resource after using it. It may
request as many resources as it needs to carry out its designated task, but the number of resources requested
may not exceed the total number of resources available in the system.

A process may utilize a resource only in the following sequence:

1. Request: If the request cannot be granted immediately, the requesting process must wait until it can acquire the
resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource after using it.

A deadlock may involve different types of resources.

For example, consider a system with one printer and one tape drive. Suppose process Pi currently holds the
printer and process Pj holds the tape drive. If Pi now requests the tape drive and Pj requests the printer,
a deadlock occurs.

Multithreaded programs are good candidates for deadlock because multiple threads compete for shared
resources.
4.3 DEADLOCK CHARACTERIZATION

Necessary Conditions: A deadlock situation can arise if the following four conditions hold simultaneously in a
system:
1. Mutual Exclusion: Only one process at a time can hold the resource. If any other process requests
the resource, the requesting process must be delayed until the resource has been released.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.
3. No Preemption: Resources cannot be preempted, i.e., a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
4. Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource
held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by process
Pn, and Pn is waiting for a resource held by P0.
All four conditions must hold for a deadlock to occur.

Resource Allocation Graph:


1. Deadlocks can be described using a directed graph called the system resource-allocation graph. The graph
consists of a set of vertices V and a set of edges E.
2. The set of vertices V is partitioned into two types of nodes: P = {P1, P2, ..., Pn}, the set
of all active processes, and R = {R1, R2, ..., Rm}, the set of all resource types in the
system.


3. A directed edge from process Pi to resource type Rj, denoted Pi -> Rj, indicates that Pi has requested
an instance of resource type Rj and is waiting for it. This edge is called a request edge.
4. A directed edge Rj -> Pi signifies that an instance of resource type Rj has been allocated to process Pi. This is called an
assignment edge.

Example: (resource-allocation graph with resource types R1, R2, R3, R4 and process nodes; figure not reproduced here)

• If the graph contains no cycle, then no process in the system is deadlocked. If the graph contains a cycle,
then a deadlock may exist.
• If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred. If
each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred.

4.4 METHODS FOR HANDLING DEADLOCKS

There are three ways to deal with the deadlock problem:
• We can use a protocol to prevent deadlocks, ensuring that the system will never enter a deadlock state.
• We can allow the system to enter a deadlock state, detect it, and recover from it.
• We can ignore the problem and pretend that deadlocks never occur in the system. This approach is used by
most operating systems, including UNIX.

• To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance
scheme.
• Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot
hold.
• Deadlock avoidance requires that the OS be given advance information about which resources a process
will request and use during its lifetime.
• If a system uses neither deadlock avoidance nor deadlock prevention, then a deadlock situation
may occur. In that case the system can provide an algorithm that examines the state of the system to determine whether
a deadlock has occurred, together with an algorithm to recover from the deadlock.

Dept of CSE, SJBIT 70


Smartworld.asia 71 Smartzworld.com

Operating Systems 10CS53

• An undetected deadlock will result in deterioration of system performance.


4.5 DEADLOCK PREVENTION

For a deadlock to occur, each of the four necessary conditions must hold. If at least one of these
conditions cannot hold, we can prevent the occurrence of deadlock.
1. Mutual Exclusion: This condition must hold for non-sharable resources. For example, a printer can be used by only one process at a
time. Mutual exclusion is not required for sharable resources, and thus they cannot be
involved in a deadlock; read-only files are a good example of a sharable resource, since a process never needs to wait to
access one. In general, however, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some
resources are intrinsically non-sharable.
2. Hold and Wait: This condition can be eliminated by guaranteeing that whenever a process requests a resource,
it does not hold any other resources.
   • One protocol is to require each process to request and be allocated all of its resources before
it begins execution. For example, consider a process that copies data from a tape drive to a disk file,
sorts the file, and then prints the results to a printer. If all the resources must be allocated at the
beginning, then the tape drive, the disk file and the printer are assigned to the process for its entire run. The main problem
with this is low resource utilization: the printer is needed only at the end, yet it is held from the beginning,
so no other process can use it.
   • Another protocol allows a process to request resources only when it holds none. Under this protocol the
process is first allocated the tape drive and the disk file, performs the
required operation, and releases both; it then requests the disk file and the
printer again. The problem with this protocol is that starvation is possible.
3. No Preemption: To ensure that this condition does not hold, resources must be preemptable. The following
protocols can be used:
   • If a process is holding some resources and requests another resource that cannot be immediately
allocated to it, then all the resources it currently holds are preempted
and added to the list of resources for which other processes may be waiting. The process will
be restarted only when it regains its old resources as well as the new resources it is requesting.
   • Alternatively, when a process requests resources, we first check whether they are available. If they are,
we allocate them; otherwise we check whether they are allocated to some other process that is itself
waiting for additional resources. If so, we preempt the resources from that waiting process and allocate them to
the requesting process; otherwise the requesting process must wait.
4. Circular Wait: The fourth and final condition for deadlock is the circular-wait condition. One
way to ensure that this condition never holds is to impose a total ordering on all resource types and to require that each process
requests resources in increasing order of enumeration.
   Let R = {R1, R2, ..., Rn} be the set of resource types. We assign each resource type
a unique integer value. This allows us to compare two resources and determine whether one
precedes the other in the ordering. For example, we can define a one-to-one function
F: R -> N as follows: F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12.

Dept of CSE, SJBIT 72


Smartworld.asia 73 Smartzworld.com

Operating Systems 10CS53

Deadlock can be prevented by using the following protocol:

• Each process requests resources in increasing order of enumeration. A process may
request any number of instances of a resource type, say Ri, and it can then request
instances of resource type Rj only if F(Rj) > F(Ri).
• Alternatively, whenever a process requests an instance of resource type Rj,
it must first have released any resources Ri such that F(Ri) >= F(Rj).

If these protocols are used, then the circular-wait condition cannot hold.
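The same idea applies directly to locks in application code: if every thread acquires its mutexes in one agreed global order, a circular wait can never form. Below is a small pthreads sketch; ordering the locks by address and the bank-account scenario are illustrative choices, not part of the notes.

    #include <pthread.h>

    /* Acquire two mutexes in a fixed global order (here: by address), so that no
       two threads can ever hold one lock each while waiting for the other. */
    static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
        if (a < b) {                      /* lower-ordered resource is requested first */
            pthread_mutex_lock(a);
            pthread_mutex_lock(b);
        } else {
            pthread_mutex_lock(b);
            pthread_mutex_lock(a);
        }
    }

    static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
        pthread_mutex_unlock(a);
        pthread_mutex_unlock(b);
    }

    /* Example: two threads transferring in opposite directions never deadlock,
       because both take the two account locks in the same resulting order. */
    static pthread_mutex_t account1_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t account2_lock = PTHREAD_MUTEX_INITIALIZER;
    static int balance1 = 100, balance2 = 100;

    void transfer_1_to_2(int amount) {
        lock_pair(&account1_lock, &account2_lock);
        balance1 -= amount;
        balance2 += amount;
        unlock_pair(&account1_lock, &account2_lock);
    }

    void transfer_2_to_1(int amount) {
        lock_pair(&account2_lock, &account1_lock);   /* same locks, same order */
        balance2 -= amount;
        balance1 += amount;
        unlock_pair(&account2_lock, &account1_lock);
    }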

4.6 DEADLOCK AVOIDANCE


• Deadlock-prevention algorithms may lead to low device utilization and reduced system throughput.
• Avoiding deadlocks requires additional information about how resources are to be requested. With
knowledge of the complete sequence of requests and releases for each process, we can decide for each request whether or not
the process should wait.
• For each request, the system must consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of each process, in order to decide whether the current request
can be satisfied or must wait to avoid a possible future deadlock.
• A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a
circular-wait condition can never exist. The resource-allocation state is defined by the number of available and
allocated resources and the maximum demands of the processes.

Safe State:
• A state is safe if there exists at least one order in which all the processes can run to
completion without resulting in a deadlock.
• A system is in a safe state only if there exists a safe sequence.
• A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for
each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the
resources held by all Pj with j < i.
• If the resources that Pi needs are not immediately available, then Pi can wait until all Pj (j < i) have
finished and released their resources; Pi can then obtain all of its needed resources and complete its designated task.
• A safe state is not a deadlocked state.
• Whenever a process requests a resource that is currently available, the system must decide whether the
resource can be allocated immediately or whether the process must wait. The request is granted only if the
allocation leaves the system in a safe state.
• Consequently, even if a process requests a resource that is currently available, it may still have to wait. Thus resource
utilization may be lower than it would be without a deadlock-avoidance algorithm.

Resource Allocation Graph Algorithm:


This algorithm is applicable only when every resource type has a single instance. In addition to the request edge
and the assignment edge, a new edge called a claim edge is used. A claim edge Pi -> Rj indicates
that process Pi may request resource Rj in the future. The claim edge
is represented by a dashed line.


• When a process Pi requests resource Rj, the claim edge Pi -> Rj is converted to a request edge.
• When resource Rj is released by process Pi, the assignment edge Rj -> Pi is replaced by the claim edge Pi -> Rj.

When a process Pi requests resource Rj, the request is granted only if converting the request edge
Pi -> Rj to an assignment edge Rj -> Pi does not result in a cycle. A cycle-detection algorithm is used to
check this. If there are no cycles, then allocating the resource to the process leaves the system
in a safe state.
Banker's Algorithm:

• This algorithm is applicable to systems with multiple instances of each resource type, but it is
less efficient than the resource-allocation graph algorithm.
• When a new process enters the system, it must declare the maximum number of instances of each resource type that it may
need. This number may not exceed the total number of resources in the system. When a process requests resources, the system must determine
whether the allocation will leave the system in a safe state. If so, the resources are
allocated; otherwise the process must wait until other processes release enough resources.
• Several data structures are used to implement the banker's algorithm. Let n be the number of
processes in the system and m be the number of resource types. We need the following data structures:

• Available: A vector of length m indicating the number of available resources of each type. If Available[j] = k, then k
instances of resource type Rj are available.
• Max: An n x m matrix defining the maximum demand of each process. If Max[i,j] = k, then Pi may request
at most k instances of resource type Rj.
• Allocation: An n x m matrix defining the number of resources of each type currently allocated to each
process. If Allocation[i,j] = k, then Pi is currently allocated k instances of resource type Rj.
• Need: An n x m matrix indicating the remaining resource need of each process. If Need[i,j] = k, then Pi
may need k more instances of resource type Rj to complete its task. Note that Need[i,j] = Max[i,j] -
Allocation[i,j].

Safety Algorithm:
This algorithm is used to find out whether or not a system is in a safe state.
Step 1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available and
Finish[i] = false for i = 1, 2, ..., n.
Step 2. Find an i such that both Finish[i] = false and Need(i) <= Work.
If no such i exists, go to step 4.
Step 3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
Step 4. If Finish[i] = true for all i, then the system is in a safe state. This algorithm may require on the order of m x n^2
operations to decide whether a state is safe.
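A direct C translation of this safety algorithm is sketched below. The fixed NPROC/NRES bounds and the array layout are assumptions made for the example; the logic itself follows the four steps above.

    #include <stdbool.h>

    #define NPROC 10        /* illustrative upper bound on the number of processes      */
    #define NRES  10        /* illustrative upper bound on the number of resource types */

    /* Safety algorithm: returns true if the state described by available[],
       allocation[][] and need[][] is safe for n processes and m resource types. */
    bool is_safe(int n, int m,
                 int available[NRES],
                 int allocation[NPROC][NRES],
                 int need[NPROC][NRES])
    {
        int  work[NRES];
        bool finish[NPROC] = { false };

        for (int j = 0; j < m; j++)           /* Step 1: Work = Available */
            work[j] = available[j];

        for (;;) {
            int i, j;
            /* Step 2: find i with Finish[i] == false and Need(i) <= Work */
            for (i = 0; i < n; i++) {
                if (finish[i])
                    continue;
                for (j = 0; j < m; j++)
                    if (need[i][j] > work[j])
                        break;
                if (j == m)
                    break;                    /* process Pi could run to completion */
            }
            if (i == n)
                break;                        /* no such i exists: go to Step 4 */

            for (j = 0; j < m; j++)           /* Step 3: Pi finishes and releases its resources */
                work[j] += allocation[i][j];
            finish[i] = true;                 /* then repeat Step 2 */
        }

        for (int i = 0; i < n; i++)           /* Step 4: safe iff every process can finish */
            if (!finish[i])
                return false;
        return true;
    }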


Resource-Request Algorithm: Let Request(i) be the request vector for process Pi. If Request(i)[j] = k,
then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi,
the following actions are taken:
1. If Request(i) <= Need(i), go to step 2; otherwise raise an error condition, since the process has exceeded its
maximum claim.
2. If Request(i) <= Available, go to step 3; otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to process Pi by modifying the state as follows:
   Available = Available - Request(i)
   Allocation(i) = Allocation(i) + Request(i)
   Need(i) = Need(i) - Request(i)
4. If the resulting resource-allocation state is safe, the transaction is completed and Pi is allocated
its resources. If the new state is unsafe, then Pi must wait for Request(i) and the old resource-allocation
state is restored.
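Continuing the sketch above, the resource-request algorithm can be coded as a pretend-allocate-then-check-safety routine. It reuses the is_safe() function, NPROC and NRES from the previous sketch; the return-value convention is an assumption made for the example.

    /* Try to grant request[] for process pi. Returns 1 if the request is granted,
       0 if Pi must wait, and -1 if the request exceeds Pi's maximum claim.
       Reuses NPROC, NRES and is_safe() from the safety-algorithm sketch above. */
    int request_resources(int pi, int n, int m,
                          int available[NRES],
                          int allocation[NPROC][NRES],
                          int need[NPROC][NRES],
                          const int request[NRES])
    {
        for (int j = 0; j < m; j++)               /* Step 1: Request(i) <= Need(i)?   */
            if (request[j] > need[pi][j])
                return -1;                        /* error: exceeded maximum claim    */

        for (int j = 0; j < m; j++)               /* Step 2: Request(i) <= Available? */
            if (request[j] > available[j])
                return 0;                         /* resources not available: wait    */

        for (int j = 0; j < m; j++) {             /* Step 3: pretend to allocate      */
            available[j]      -= request[j];
            allocation[pi][j] += request[j];
            need[pi][j]       -= request[j];
        }

        if (is_safe(n, m, available, allocation, need))
            return 1;                             /* Step 4: safe, allocation stands  */

        for (int j = 0; j < m; j++) {             /* unsafe: restore the old state    */
            available[j]      += request[j];
            allocation[pi][j] -= request[j];
            need[pi][j]       += request[j];
        }
        return 0;                                 /* Pi must wait for Request(i)      */
    }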

4.7 DEADLOCK DETECTION


If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock
situation may occur. In this environment the system may provide:
• An algorithm that examines the state of the system to determine whether a deadlock has occurred.
• An algorithm to recover from the deadlock.

Single Instance of Each Resource Type:

• If every resource type has only a single instance, then we can define a deadlock-detection algorithm that
uses a variant of the resource-allocation graph called a wait-for graph. This graph is obtained from the
resource-allocation graph by removing the resource nodes and collapsing the appropriate edges.

• An edge from Pi to Pj in the wait-for graph implies that Pi is waiting for Pj to release a resource that Pi
needs.

• An edge Pi -> Pj exists in the wait-for graph if and only if the corresponding resource-allocation
graph contains the edges Pi -> Rq and Rq -> Pj for some resource Rq.

• A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect deadlocks, the system needs
to maintain the wait-for graph and periodically invoke an algorithm that searches for a cycle in the graph.
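The cycle search itself is an ordinary depth-first traversal of the wait-for graph. The sketch below assumes an adjacency-matrix representation waits_for[i][j] (true when Pi waits for Pj) and a small fixed bound on the number of processes; both are illustrative choices, not part of the notes.

    #include <stdbool.h>

    #define MAXPROC 16                          /* illustrative bound on the number of processes */

    /* waits_for[i][j] == true means Pi is waiting for a resource held by Pj. */
    static bool waits_for[MAXPROC][MAXPROC];

    /* DFS colouring: 0 = unvisited, 1 = on the current DFS path, 2 = fully explored. */
    static int colour[MAXPROC];

    static bool dfs_has_cycle(int u, int n) {
        colour[u] = 1;                          /* u is on the current path */
        for (int v = 0; v < n; v++) {
            if (!waits_for[u][v])
                continue;
            if (colour[v] == 1)                 /* back edge: a cycle, hence a deadlock */
                return true;
            if (colour[v] == 0 && dfs_has_cycle(v, n))
                return true;
        }
        colour[u] = 2;
        return false;
    }

    /* Returns true if the wait-for graph over processes P0..P(n-1) contains a cycle. */
    bool deadlock_detected(int n) {
        for (int i = 0; i < n; i++)
            colour[i] = 0;
        for (int i = 0; i < n; i++)
            if (colour[i] == 0 && dfs_has_cycle(i, n))
                return true;
        return false;
    }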


4.8 RECOVERY FROM DEADLOCKS

Several Instances of a Resource Type:

The wait-for graph scheme is applicable only when each resource type has a single instance. The following algorithm
applies when there are several instances of a resource type. It uses these data structures:

• Available: A vector of length m indicating the number of available resources of each type.

• Allocation: An n x m matrix defining the number of resources of each type currently
allocated to each process.

• Request: An n x m matrix indicating the current request of each process. If Request[i,j] = k, then Pi is
requesting k more instances of resource type Rj.

Step 1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available.
For i = 1, 2, ..., n, if Allocation(i) != 0 then Finish[i] = false;
otherwise Finish[i] = true.
Step 2. Find an index i such that both Finish[i] = false and Request(i) <= Work.
If no such i exists, go to step 4.

Step 3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
Step 4. If Finish[i] = false for some i, 1 <= i <= n, then the system is in a deadlocked state; moreover, each
process Pi with Finish[i] = false is deadlocked. This algorithm requires on the order of m x n^2 operations to
detect whether the system is in a deadlocked state.

IMPORTANT QUESTIONS:

1. For the following snapshot of the system, find the safe sequence (using the Banker's algorithm).


b. Is the system in a safe state?

c. If process P1 requests (0, 4, 2, 0) resources, can the request be granted immediately?

3. The operating system contains three resource types. The numbers of instances of each resource type are (7, 7,
10). The current allocation state is given below.
a. Is the current allocation safe?
b. Find Need.
c. Can the request (1, 1, 0) made by process P1 be granted?

4. Explain the different methods to recover from deadlock.

5. Write the advantages and disadvantages of deadlock avoidance and deadlock prevention.
